Updates from: 03/25/2021 04:10:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
Previously updated : 10/15/2020 Last updated : 03/24/2021
HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617).
> [!IMPORTANT] > This functionality is in preview and is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Client certificate authentication is a mutual certificate-based authentication, where the client provides a client certificate to the server to prove its identity. In this case, Azure AD B2C will use the certificate that you upload as part of the API connector configuration. This happens as a part of the SSL handshake. Only services that have proper certificates can access your REST API service. The client certificate is an X.509 digital certificate. In production environments, it should be signed by a certificate authority.
+Client certificate authentication is a mutual certificate-based authentication method where the client provides a client certificate to the server to prove its identity. In this case, Azure AD B2C will use the certificate that you upload as part of the API connector configuration. This happens as a part of the SSL handshake. Your API service can then limit access to only services that have proper certificates. The client certificate is a PKCS12 (PFX) X.509 digital certificate. In production environments, it should be signed by a certificate authority.
+To create a certificate, you can use [Azure Key Vault](../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. Recommended settings include:
+- **Subject**: `CN=<yourapiname>.<tenantname>.onmicrosoft.com`
+- **Content Type**: `PKCS #12`
+- **Lifetime Action Type**: `Email all contacts at a given percentage lifetime` or `Email all contacts a given number of days before expiry`
+- **Key Type**: `RSA`
+- **Key Size**: `2048`
+- **Exportable Private Key**: `Yes` (so that the .pfx file can be exported)
-To create a certificate, you can use [Azure Key Vault](../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. You can then [export the certificate](../key-vault/certificates/how-to-export-certificate.md) and upload it for use in the API connectors configuration. Note that password is only required for certificate files protected by a password. You can also use PowerShell's [New-SelfSignedCertificate cmdlet](./secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
+You can then [export the certificate](../key-vault/certificates/how-to-export-certificate.md). You can alternatively use PowerShell's [New-SelfSignedCertificate cmdlet](../active-directory-b2c/secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
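As a sketch of that cmdlet-based route, the following generates a self-signed certificate matching the recommended settings above and exports it as a password-protected .pfx file; the subject name and password are placeholders:

```powershell
# Minimal sketch (assumes Windows PowerShell with the PKI module).
# The subject name and password below are placeholders.
$cert = New-SelfSignedCertificate `
    -Subject "CN=yourapiname.yourtenant.onmicrosoft.com" `
    -KeyAlgorithm RSA -KeyLength 2048 `
    -KeyExportPolicy Exportable `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -NotAfter (Get-Date).AddYears(1)

# Export the certificate with its private key as a PKCS #12 (.pfx) file.
$pfxPassword = ConvertTo-SecureString -String "<your-password>" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath ".\b2c-api-client.pfx" -Password $pfxPassword
```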
-For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and validate the certificate from your API endpoint.
+After you have a certificate, you can then upload it as part of the API connector configuration. Note that a password is only required for certificate files protected by a password.
-It's recommended you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **API connectors (preview)** and click on **Upload new certificate**. The most recently uploaded certificate which is not expired and is past the start date will be used automatically by Azure AD B2C.
+Your API must implement authorization based on sent client certificates in order to protect the API endpoints. For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and *validate the certificate from your API code*. You can also use Azure API Management to [check client certificate properties](../api-management/api-management-howto-mutual-certificates-for-clients.md) against desired values using policy expressions.
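For illustration, a minimal PowerShell Azure Functions sketch of such a check might look like the following, assuming App Service's forwarding of the client certificate in the base64-encoded `X-ARR-ClientCert` header; the expected thumbprint is a placeholder:

```powershell
# Hedged sketch: validate the forwarded client certificate in a PowerShell
# Azure Function. App Service forwards the cert base64-encoded in the
# X-ARR-ClientCert header. The expected thumbprint is a placeholder value.
$expectedThumbprint = "<thumbprint-of-the-uploaded-certificate>"
$isAuthorized = $false
$header = $Request.Headers["X-ARR-ClientCert"]
if ($header) {
    $bytes = [System.Convert]::FromBase64String($header)
    $cert  = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($bytes)
    $now   = Get-Date
    $isAuthorized = ($cert.Thumbprint -eq $expectedThumbprint) -and
                    ($cert.NotBefore -le $now) -and ($cert.NotAfter -ge $now)
}
# If $isAuthorized is $false, return an HTTP 401/403 response from the function.
```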
+
+It's recommended you set reminder alerts for when your certificate will expire. You will need to generate a new certificate and repeat the steps above. Your API service can temporarily continue to accept both the old and new certificates while the new certificate is deployed. To upload a new certificate to an existing API connector, select the API connector under **API connectors** and click **Upload new certificate**. The most recently uploaded certificate that is not expired and whose start date has passed will automatically be used by Azure Active Directory.
### API Key Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development. For [Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>.
Content-type: application/json
| -- | - | -- | -- | | version | String | Yes | The version of your API. | | action | String | Yes | Value must be `ValidationError`. |
-| status | Integer | Yes | Must be value `400` for a ValidationError response. |
+| status | Integer / String | Yes | Must be the value `400` or `"400"` for a ValidationError response. |
| userMessage | String | Yes | Message to display to the user. | > [!NOTE]
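As a concrete illustration of the table above, a minimal ValidationError body could be built like this PowerShell sketch (the version string and message are example values):

```powershell
# Example ValidationError body per the table above; values are illustrative.
@{
    version     = "1.0.0"
    action      = "ValidationError"
    status      = 400
    userMessage = "Please enter a valid email address."
} | ConvertTo-Json
```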
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
Last updated 12/16/2020 + zone_pivot_groups: b2c-policy-type
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
![Attributes and claims selection page with three claims selected](./media/add-sign-up-and-sign-in-policy/signup-signin-attributes.png) 1. Click **Create** to add the user flow. A prefix of *B2C_1* is automatically prepended to the name.
+2. Follow the steps to [handle the flow for "Forgot your password?"](add-password-reset-policy.md?pivots=b2c-user-flow#self-service-password-reset-recommended) within the sign-up or sign-in policy.
### Test the user flow
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`. 1. Click **Run user flow**, and then select **Sign up now**.
- ![Run user flow page in portal with Run user flow button highlighted](./media/add-sign-up-and-sign-in-policy/signup-signin-run-now.PNG)
+ ![Run user flow page in portal with Run user flow button highlighted](./media/add-sign-up-and-sign-in-policy/signup-signin-run-now.png)
1. Enter a valid email address, click **Send verification code**, enter the verification code that you receive, then select **Verify code**. 1. Enter a new password and confirm the password.
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-sendgrid.md
Next, store the SendGrid API key in an Azure AD B2C policy key for your policies
1. Select **Policy Keys** and then select **Add**. 1. For **Options**, choose **Manual**. 1. Enter a **Name** for the policy key. For example, `SendGridSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
-1. In **Secret**, enter your client secret that you previously recorded.
+1. In **Secret**, enter the SendGrid API key that you previously recorded.
1. For **Key usage**, select **Signature**. 1. Select **Create**.
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-verification-proofing.md
+
+ Title: Identity proofing and verification for Azure AD B2C
+
+description: Learn about our partners who integrate with Azure AD B2C to provide identity proofing and verification solutions
+Last updated : 03/23/2021
+# Identity verification and proofing partners
+
+With Azure AD B2C partners, customers can enable identity verification and proofing of their end users before allowing account registration or access. Identity verification and proofing can check documents, knowledge-based information, and liveness.
+
+A high-level architecture diagram explains the flow.
+
+![Diagram shows the identity proofing flow](./media/partner-gallery/third-party-identity-proofing.png)
+
+Microsoft partners with the following ISVs.
+
+| ISV partner | Description and integration walkthroughs |
+|:-|:--|
|![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
|![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
|![Screenshot of a Jumio logo.](./medi) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. |
| ![Screenshot of a LexisNexis logo.](./medi) is a profiling and identity validation provider that verifies user identification and provides comprehensive risk assessment based on the user's device. |
| ![Screenshot of an Onfido logo.](./medi) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
+
+## Additional information
+
+- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
+
+## Next steps
+
+Select a partner in the table above to learn how to integrate their solution with Azure AD B2C.
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Configure the application settings in the [App service in Azure](../app-service/
| :-- | :| :--| |FraudProtectionSettings:InstanceId | Microsoft DFP Configuration | | |FraudProtectionSettings:DeviceFingerprintingCustomerId | Your Microsoft device fingerprinting customer ID | |
-| FraudProtectionSettings:ApiBaseUrl | Your Base URL from Microsoft DFP Portal | Remove '-int' to call the production API instead
-| TokenProviderConfig: Resource | https://api.dfp.dynamics-int.com | Remove '-int' to call the production API instead |
+| FraudProtectionSettings:ApiBaseUrl | Your Base URL from Microsoft DFP Portal | Remove '-int' to call the production API instead|
+| TokenProviderConfig: Resource | | Remove '-int' to call the production API instead|
| TokenProviderConfig:ClientId |Your Fraud Protection merchant Azure AD client app ID | | | TokenProviderConfig:Authority | https://login.microsoftonline.com/<directory_ID> | Your Fraud Protection merchant Azure AD tenant authority | | TokenProviderConfig:CertificateThumbprint* | The thumbprint of the certificate to use to authenticate against your merchant Azure AD client app |
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
Our ISV partner network extends our solution capabilities to help you build seam
## Identity verification and proofing
-With Azure AD B2C partners, customers can enable identity verification and proofing of their end users before allowing account registration or access. Identity verification and proofing can check document, knowledge-based information and liveness.
-
-A high-level architecture diagram explains the flow.
-
-![Diagram shows the identity proofing flow](./media/partner-gallery/third-party-identity-proofing.png)
- Microsoft partners with the following ISVs for identity verification and proofing. | ISV partner | Description and integration walkthroughs |
active-directory-b2c User Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-overview.md
Last updated 11/05/2019 + # Overview of user accounts in Azure Active Directory B2C
A work account is created the same way for all tenants based on Azure AD. To cre
When you add a new work account, you need to consider the following configuration settings: -- **Name** and **User name** - The **Name** property contains the given and surname of the user. The **User name** is the identifier that the user enters to sign in. The user name includes the full domain. The domain name portion of the user name must either be the initial default domain name *your-domain.onmicrosoft.com*, or a verified, non-federated [custom domain](../active-directory/fundamentals/add-custom-domain.md) name such as *contoso.com*.
+- **Name** and **User name** - The **Name** property contains the given name and surname of the user. The **User name** is the identifier that the user enters to sign in. The user name includes the full domain. The domain name portion of the user name must either be the initial default domain name *your-domain.onmicrosoft.com*, or a verified, non-federated [custom domain](../active-directory/fundamentals/add-custom-domain.md) name such as *contoso.com*.
+- **Email** - The new user can also sign in using an email address. We do not support special characters or multibyte characters in the email address, for example, Japanese characters.
- **Profile** - The account is set up with a profile of user data. You have the opportunity to enter a first name, last name, job title, and department name. You can edit the profile after the account is created. - **Groups** - Use groups to perform management tasks such as assigning licenses or permissions to many users, or devices at once. You can put the new account into an existing [group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) in your tenant. - **Directory role** - You need to specify the level of access that the user account has to resources in your tenant. The following permission levels are available:
For more information about managing consumer accounts, see [Manage Azure AD B2C
### Migrate consumer user accounts
-You might have a need to migrate existing consumer user accounts from any identity provider to Azure AD B2C. For more information, see [Migrate users to Azure AD B2C](user-migration.md).
+You might have a need to migrate existing consumer user accounts from any identity provider to Azure AD B2C. For more information, see [Migrate users to Azure AD B2C](user-migration.md).
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 02/01/2021 Last updated : 03/22/2021
Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specif
|The filter [excludedAttributes=members](#get-group) when querying the group resource|section 3.4.2.5| |Accept a single bearer token for authentication and authorization of AAD to your application.|| |Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user should not be returned is when it is hard deleted from the application.|
+|Support the /Schemas endpoint|[section 7](https://tools.ietf.org/html/rfc7643#page-30). The schema discovery endpoint will be used to discover additional attributes.|
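To illustrate the soft-delete requirement in the table above, this hedged PowerShell sketch sends the kind of SCIM `PatchOp` that sets `active` to `false`; the endpoint URL, user ID, and `$token` are placeholders:

```powershell
# Hedged sketch: soft-delete a user by setting active=false via a SCIM PatchOp.
# The endpoint, user id, and $token are placeholders.
$body = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:PatchOp")
    Operations = @(@{ op = "replace"; path = "active"; value = $false })
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Patch `
    -Uri "https://scim.contoso.com/scim/Users/<user-id>" `
    -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $token" } `
    -Body $body
```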
Use the general guidelines when implementing a SCIM endpoint to ensure compatibility with AAD:
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
* Microsoft AAD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com). * The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com), see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md). * Support HTTPS on your SCIM endpoint-
+* [Schema discovery](#schema-discovery)
+ * Schema discovery is not currently supported on the custom application, but it is being used on certain gallery applications. Going forward, schema discovery will be used as the primary method to add additional attributes to a connector.
+ * If a value is not present, do not send null values.
+ * Property values should be camel cased (e.g. readWrite).
+ * Must return a list response.
+
### User provisioning and deprovisioning The following illustration shows the messages that AAD sends to a SCIM service to manage the lifecycle of a user in your application's identity store.
This section provides example SCIM requests emitted by the AAD SCIM client and e
- [Update Group [Remove Members]](#update-group-remove-members) ([Request](#request-12) / [Response](#response-12)) - [Delete Group](#delete-group) ([Request](#request-13) / [Response](#response-13))
+- [Schema discovery](#schema-discovery)
+ - [Discover schema](#discover-schema) ([Request](#request-15) / [Response](#response-15))
+ ### User Operations * Users can be queried by `userName` or `email[type eq "work"]` attributes.
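As an illustration, replaying such a query manually could look like this sketch, where the SCIM base URL, user name, and `$token` are placeholders:

```powershell
# Hedged sketch: query a SCIM endpoint for a user by userName.
# The base URL, user name, and $token are placeholders.
$filter = [uri]::EscapeDataString('userName eq "alice@contoso.com"')
Invoke-RestMethod -Method Get `
    -Uri "https://scim.contoso.com/scim/Users?filter=$filter" `
    -Headers @{ Authorization = "Bearer $token" }
```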
This section provides example SCIM requests emitted by the AAD SCIM client and e
*HTTP/1.1 204 No Content*
+### Schema discovery
+#### Discover schema
+
+##### <a name="request-15"></a>Request
+*GET /Schemas*
+##### <a name="response-15"></a>Response
+*HTTP/1.1 200 OK*
+```json
+{
+ "schemas": [
+ "urn:ietf:params:scim:api:messages:2.0:ListResponse"
+ ],
+ "itemsPerPage": 50,
+ "startIndex": 1,
+ "totalResults": 3,
+ "Resources": [
+ {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Schema"],
+ "id" : "urn:ietf:params:scim:schemas:core:2.0:User",
+ "name" : "User",
+ "description" : "User Account",
+ "attributes" : [
+ {
+ "name" : "userName",
+ "type" : "string",
+ "multiValued" : false,
+ "description" : "Unique identifier for the User, typically
+used by the user to directly authenticate to the service provider.
+Each User MUST include a non-empty userName value. This identifier
+MUST be unique across the service provider's entire set of Users.
+REQUIRED.",
+ "required" : true,
+ "caseExact" : false,
+ "mutability" : "readWrite",
+ "returned" : "default",
+ "uniqueness" : "server"
+      }
+    ],
+ "meta" : {
+ "resourceType" : "Schema",
+ "location" :
+ "/v2/Schemas/urn:ietf:params:scim:schemas:core:2.0:User"
+ }
+ },
+ {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Schema"],
+ "id" : "urn:ietf:params:scim:schemas:core:2.0:Group",
+ "name" : "Group",
+ "description" : "Group",
+ "attributes" : [
+ {
+ "name" : "displayName",
+ "type" : "string",
+ "multiValued" : false,
+ "description" : "A human-readable name for the Group.
+REQUIRED.",
+ "required" : false,
+ "caseExact" : false,
+ "mutability" : "readWrite",
+ "returned" : "default",
+ "uniqueness" : "none"
+      }
+    ],
+ "meta" : {
+ "resourceType" : "Schema",
+ "location" :
+ "/v2/Schemas/urn:ietf:params:scim:schemas:core:2.0:Group"
+ }
+ },
+ {
+ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Schema"],
+ "id" : "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
+ "name" : "EnterpriseUser",
+ "description" : "Enterprise User",
+ "attributes" : [
+ {
+ "name" : "employeeNumber",
+ "type" : "string",
+ "multiValued" : false,
+ "description" : "Numeric or alphanumeric identifier assigned
+to a person, typically based on order of hire or association with an
+organization.",
+ "required" : false,
+ "caseExact" : false,
+ "mutability" : "readWrite",
+ "returned" : "default",
+ "uniqueness" : "none"
+      }
+    ],
+ "meta" : {
+ "resourceType" : "Schema",
+ "location" :
+"/v2/Schemas/urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
+ }
+ }
+]
+}
+```
+ ### Security requirements **TLS Protocol Versions**
To help drive awareness and demand of our joint integration, we recommend you up
> [Writing expressions for attribute mappings](functions-for-customizing-application-data.md) > [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md) > [Account provisioning notifications](user-provisioning.md)
-> [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
+> [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Continuous access evaluation is implemented by enabling services, like Exchange
This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within minutes after one of these critical events.
+> [!NOTE] > Teams does not support user risk events yet.
+ ### Conditional Access policy evaluation (preview) Exchange and SharePoint are able to synchronize key Conditional Access policies so they can be evaluated within the service itself.
This process enables the scenario where users lose access to organizational file
| | Outlook Web | Outlook Win32 | Outlook iOS | Outlook Android | Outlook Mac | | : | :: | :: | :: | :: | :: |
-| **SharePoint Online** | Supported | Supported | Not Supported | Not Supported | Supported |
+| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
| **Exchange Online** | Supported | Supported | Supported | Supported | Supported | | | Office web apps | Office Win32 apps | Office for iOS | Office for Android | Office for Mac |
This process enables the scenario where users lose access to organizational file
| **SharePoint Online** | Not Supported | Supported | Supported | Supported | Supported | | **Exchange Online** | Not Supported | Supported | Supported | Supported | Supported |
+| | OneDrive web | OneDrive Win32 | OneDrive iOS | OneDrive Android | OneDrive Mac |
+| : | :: | :: | :: | :: | :: |
+| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
+
+| | Teams web apps | Teams Win32 apps | Teams for iOS | Teams for Android | Teams for Mac |
+| : | :: | :: | :: | :: | :: |
+| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
+| **Exchange Online** | Supported | Supported | Supported | Supported | Supported |
### Client-side claim challenge Before continuous access evaluation, clients would always try to replay the access token from their cache as long as it was not expired. With CAE, we are introducing a new case in which a resource provider can reject a token even when it is not expired. In order to inform clients to bypass their cache even though the cached tokens have not expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token needs to be issued by Azure AD. CAE requires a client update to understand claim challenge. The latest versions of the following applications support claim challenge: -- Outlook Windows-- Outlook iOS-- Outlook Android-- Outlook Mac-- Outlook Web App-- Teams for Windows (Only for Teams resource)-- Teams iOS (Only for Teams resource)-- Teams Android (Only for Teams resource)-- Teams Mac (Only for Teams resource)-- Word/Excel/PowerPoint for Windows-- Word/Excel/PowerPoint for iOS-- Word/Excel/PowerPoint for Android-- Word/Excel/PowerPoint for Mac
+| | Web | Win32 | iOS | Android | Mac |
+| : | :: | :: | :: | :: | :: |
+| **Outlook** | Supported | Supported | Supported | Supported | Supported |
+| **Teams** | Supported | Supported | Supported | Supported | Supported |
+| **Office** | Not Supported | Supported | Supported | Supported | Supported |
+| **OneDrive** | Supported | Supported | Supported | Supported | Supported |
### Token lifetime
For an explanation of the office update channels, see [Overview of update channe
### Policy change timing
-Due to the potential of replication delay between Azure AD and resource providers, policy changes made by administrators could take up to 2 hours to be effective for Exchange Online.
+Policy changes made by administrators could take up to one day to take effect. Some optimization has been done to reduce the delay to two hours, but it does not yet cover all scenarios.
-Example: Administrator adds a policy to block a range of IP addresses from accessing email at 11:00 AM, a user who has come from that IP range before could possibly continue to access email until 1:00 PM.
+If there is an emergency and you need your updated policies applied to certain users immediately, use this [PowerShell command](/powershell/module/azuread/revoke-azureaduserallrefreshtoken?view=azureadps-2.0) or **Revoke Session** in the user profile page to revoke the users' sessions, which ensures that the updated policies are applied immediately.
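For example, using the AzureAD PowerShell module referenced above (the UPN is a placeholder):

```powershell
# Revoke all refresh tokens (sessions) for a single user; the UPN is a placeholder.
Connect-AzureAD
Get-AzureADUser -ObjectId "user@contoso.com" | Revoke-AzureADUserAllRefreshToken
```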
### Coauthoring in Office apps
active-directory Authentication National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/authentication-national-cloud.md
The following table lists the base URLs for the Azure AD endpoints used to acqui
|-|-| | Azure AD for US Government | `https://login.microsoftonline.us` | | Azure AD Germany| `https://login.microsoftonline.de` |
-| Azure AD China operated by 21Vianet | `https://login.partner.microsoftonline.cn/common` |
+| Azure AD China operated by 21Vianet | `https://login.partner.microsoftonline.cn` |
| Azure AD (global service)| `https://login.microsoftonline.com` | You can form requests to the Azure AD authorization or token endpoints by using the appropriate region-specific base URL. For example, for Azure Germany:
active-directory Quickstart Restore App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-restore-app.md
+
+ Title: "How to: Restore or remove a recently deleted application with the Microsoft identity platform | Azure"
+
+description: In this how-to, you learn how to restore or permanently delete a recently deleted application registered with the Microsoft identity platform.
+Last updated : 3/22/2021
+#Customer intent: As an application developer, I want to know how to restore or permanently delete my recently deleted application from the Microsoft identity platform.
++
+# Restore or remove a recently deleted application with the Microsoft identity platform
+After you delete an app registration, the app remains in a suspended state for 30 days. During that 30-day window, the app registration can be restored, along with all its properties. After that 30-day window passes, app registrations cannot be restored and the permanent deletion process may be automatically started. This functionality only applies to applications associated with a directory. It is not available for applications from a personal Microsoft account, which cannot be restored.
+
+You can view your deleted applications, restore a deleted application, or permanently delete an application using the App registrations experience under Azure Active Directory (Azure AD) in the Azure portal.
+
+Note that neither you nor Microsoft customer support can restore a permanently deleted application or an application deleted more than 30 days ago.
+
+## Required permissions
+You must have one of the following roles to permanently delete applications.
+
+- Global administrator
+
+- Application administrator
+
+- Cloud application administrator
+
+- Hybrid identity administrator
+
+- Application owner
+
+You must have one of the following roles to restore applications.
+
+- Global administrator
+
+- Application owner
+
+## Deleted applications UI (Preview)
+
+> [!IMPORTANT]
+> The deleted applications portal UI feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
+
+### View your deleted applications
+You can see all the applications in a soft-deleted state. Only applications deleted less than 30 days ago can be restored.
+
+#### To view your restorable applications
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Search and select **Azure Active Directory**, select **App registrations**, and then select the **Deleted applications (Preview)** tab.
+
+ Review the list of applications. Only applications that have been deleted in the past 30 days are available to restore. If using the App registrations search preview, you can filter by the 'Deleted date' column to see only these applications.
+
+### Restore a recently deleted application
+
+When an app registration is deleted from the organization, the app is in a suspended state and its configurations are preserved. When you restore an app registration, its configurations are also restored. However, if there were any organization-specific settings in **Enterprise applications** for the application's home tenant, those will not be restored.
+
+This is because organization-specific settings are stored on a separate object, called the service principal. Settings held on the service principal include permission consents and user and group assignments for a certain organization; these configurations will not be restored when the app is restored. For more information, see [Application and service principal objects](app-objects-and-service-principals.md).
++
+#### To restore an application
+1. On the **Deleted applications (Preview)** tab, search for and select one of the applications deleted less than 30 days ago.
+
+2. Select **Restore app registration**.
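If you prefer scripting, the restore can also be done through Microsoft Graph's directory deleted-items API; in this sketch the object ID and `$token` (a token with suitable permissions) are placeholders:

```powershell
# Hedged sketch: restore a soft-deleted app registration via Microsoft Graph.
# <object-id> and $token are placeholders.
Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/<object-id>/restore" `
    -Headers @{ Authorization = "Bearer $token" }
```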
+
+### Permanently delete an application
+You can manually permanently delete an application from your organization. A permanently deleted application can't be restored by you, another administrator, or by Microsoft customer support.
+
+#### To permanently delete an application
+
+1. On the **Deleted applications (Preview)** tab, search for and select one of the available applications.
+
+2. Select **Delete permanently**.
+
+3. Read the warning text and select **Yes**.
+
+## Next steps
+After you've restored or permanently deleted your app, you can:
+
+- [Add an application](quickstart-register-app.md)
+
+- Learn more about [application and service principal objects](app-objects-and-service-principals.md) in the Microsoft identity platform.
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
In this quickstart, you download and run a code sample that demonstrates how a J
See [How the sample works](#how-the-sample-works) for an illustration.
-This quickstart uses MSAL.js 2.0 with the authorization code flow. For a similar quickstart that uses MSAL.js 1.0 with the implicit flow, see [Quickstart: Sign in users in JavaScript single-page apps](./quickstart-v2-javascript.md).
+This quickstart uses MSAL.js v2 with the authorization code flow. For a similar quickstart that uses MSAL.js v1 with the implicit flow, see [Quickstart: Sign in users in JavaScript single-page apps](./quickstart-v2-javascript.md).
## Prerequisites
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript.md
See [How the sample works](#how-the-sample-works) for an illustration.
> - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered. > > To find the value of **Application (client) ID**, go to the app's **Overview** page in the Azure portal.
-> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, simply enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](./authentication-national-cloud.md).
+> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, simply enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](./authentication-national-cloud.md).
> - `Enter_the_Tenant_info_here` is set to one of the following options: > - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name** (for example, `contoso.microsoft.com`). >
See [How the sample works](#how-the-sample-works) for an illustration.
> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`. > > To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
->
+> - `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
> > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run
See [How the sample works](#how-the-sample-works) for an illustration.
> [!div renderon="docs"] > > Where:
-> - *\<Enter_the_Graph_Endpoint_Here>* is the endpoint that API calls will be made against. For the main or global Microsoft Graph API service, simply enter `https://graph.microsoft.com`. For more information, see [National cloud deployment](/graph/deployments)
+> - *\<Enter_the_Graph_Endpoint_Here>* is the endpoint that API calls will be made against. For the main or global Microsoft Graph API service, simply enter `https://graph.microsoft.com/`. For more information, see [National cloud deployment](/graph/deployments)
> > #### Step 4: Run the project
The MSAL library signs in users and requests the tokens that are used to access
```html <script type="text/javascript" src="https://alcdn.msftauth.net/lib/1.2.1/js/msal.js" integrity="sha384-9TV1245fz+BaI+VvCjMYL0YDMElLBwNS84v3mY57pXNOt6xcUYch2QLImaTahcOP" crossorigin="anonymous"></script> ```
-> [!TIP]
-> You can replace the preceding version with the latest released version under [MSAL.js releases](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases).
+
+You can replace the preceding version with the latest released version under [MSAL.js releases](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases).
Alternatively, if you have Node.js installed, you can download the latest version through Node.js Package Manager (npm):
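For example (assuming the `msal` package, which is MSAL.js v1):

```powershell
# Run from your project folder; installs MSAL.js v1 from npm.
npm install msal
```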
The quickstart code also shows how to initialize the MSAL library:
const myMSALObj = new Msal.UserAgentApplication(msalConfig); ```
-> |Where | Description |
-> |||
-> |`clientId` | The application ID of the application that's registered in the Azure portal.|
-> |`authority` | (Optional) The authority URL that supports account types, as described previously in the configuration section. The default authority is `https://login.microsoftonline.com/common`. |
-> |`redirectUri` | The application registration's configured reply/redirectUri. In this case, `http://localhost:3000/`. |
-> |`cacheLocation` | (Optional) Sets the browser storage for the auth state. The default is sessionStorage. |
-> |`storeAuthStateInCookie` | (Optional) The library that stores the authentication request state that's required for validation of the authentication flows in the browser cookies. This cookie is set for IE and Edge browsers to mitigate certain [known issues](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues). |
+|Where | Description |
+|||
+|`clientId` | The application ID of the application that's registered in the Azure portal.|
+|`authority` | (Optional) The authority URL that supports account types, as described previously in the configuration section. The default authority is `https://login.microsoftonline.com/common`. |
+|`redirectUri` | The application registration's configured reply/redirectUri. In this case, `http://localhost:3000/`. |
+|`cacheLocation` | (Optional) Sets the browser storage for the auth state. The default is sessionStorage. |
+|`storeAuthStateInCookie` | (Optional) The library that stores the authentication request state that's required for validation of the authentication flows in the browser cookies. This cookie is set for IE and Edge browsers to mitigate certain [known issues](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues). |
For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
myMSALObj.loginPopup(loginRequest)
}); ```
-> |Where | Description |
-> |||
-> | `scopes` | (Optional) Contains scopes that are being requested for user consent at sign-in time. For example, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (that is, `api://<Application ID>/access_as_user`). |
+|Where | Description |
+|||
+| `scopes` | (Optional) Contains scopes that are being requested for user consent at sign-in time. For example, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (that is, `api://<Application ID>/access_as_user`). |
-> [!TIP]
-> Alternatively, you might want to use the `loginRedirect` method to redirect the current page to the sign-in page instead of a popup window.
+Alternatively, you might want to use the `loginRedirect` method to redirect the current page to the sign-in page instead of a popup window.
### Request tokens
myMSALObj.acquireTokenSilent(tokenRequest)
}); ```
-> |Where | Description |
-> |||
-> | `scopes` | Contains scopes being requested to be returned in the access token for API. For example, `[ "mail.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (that is, `api://<Application ID>/access_as_user`).|
+|Where | Description |
+|||
+| `scopes` | Contains scopes being requested to be returned in the access token for API. For example, `[ "mail.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (that is, `api://<Application ID>/access_as_user`).|
#### Get a user token interactively
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/
> ``` > In that code: > - `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
+ To find the values for the application (client) ID and the directory (tenant) ID, go to the app's **Overview** page in the Azure portal.
> - Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`). > - Replace `Enter_the_Client_Secret_Here` with the client secret that you created in step 1.-
-> [!div renderon="docs"]
-> > [!TIP]
-> > To find the values for the application (client) ID and the directory (tenant) ID, go to the app's **Overview** page in the Azure portal. To generate a new key, go to the **Certificates & secrets** page.
+ To generate a new key, go to the **Certificates & secrets** page.
> [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Admin consent
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_i
``` > [!div renderon="docs"]
->> In that URL:
->> * Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
->> * `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
+> In that URL:
+> * Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
+> * `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
-> [!NOTE]
-> You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
+You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
> [!div class="sxs-lookup" renderon="portal"] > #### Step 4: Run the application
If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the
cd {ProjectFolder}\1-Call-MSGraph\daemon-console dotnet run ```-
-> In that code:
-> * `{ProjectFolder}` is the folder where you extracted the .zip file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
+In that code:
+* `{ProjectFolder}` is the folder where you extracted the .zip file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
You should see a list of users in Azure Active Directory as result.
-> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as a confidential client. The client secret is added as a plain-text file to your project files. For security reasons, we recommend that you use a certificate instead of a client secret before considering the application as a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
+This quickstart application uses a client secret to identify itself as a confidential client. The client secret is added as a plain-text file to your project files. For security reasons, we recommend that you use a certificate instead of a client secret before considering the application as a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
## More information This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
.Build(); ```
-> | Element | Description |
-> |||
-> | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
-> | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
-> | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
+ | Element | Description |
+ |||
+ | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
+ | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+ | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication).
result = await app.AcquireTokenForClient(scopes)
.ExecuteAsync(); ```
-> |Element| Description |
-> |||
-> | `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**. |
+|Element| Description |
+|||
+| `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**. |
For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient).
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
Previously updated : 09/16/2020 Last updated : 03/23/2021
The **All devices** page enables you to:
- Configure your device identity settings. - Enable or disable Enterprise State Roaming. - Review device-related audit logs
+- Download devices (preview)
[![All devices view in the Azure portal](./media/device-management-azure-portal/all-devices-azure-portal.png)](./media/device-management-azure-portal/all-devices-azure-portal.png#lightbox)
To enable the preview filtering functionality in the **All devices** view:
You will now have the ability to **Add filters** to your **All devices** view.
+### Download devices (preview)
+
+Cloud device administrators, Intune administrators, and Global administrators can use the **Download devices (preview)** option to export a CSV file of devices based on any applied filters. If no filters are applied to the list, all devices will be exported. An export may run for up to one hour, depending on the size of the export.
+
+The exported list includes the following device identity attributes:
+
+`accountEnabled, approximateLastLogonTimeStamp, deviceOSType, deviceOSVersion, deviceTrustType, dirSyncEnabled, displayName, isCompliant, isManaged, lastDirSyncTime, objectId, profileType, registeredOwners, systemLabels, registrationTime, mdmDisplayName`
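Once downloaded, the export can be sliced with standard tooling; this hypothetical snippet (the file name is a placeholder) lists disabled devices:

```powershell
# Hypothetical example: list disabled devices from the exported CSV.
# The file name is a placeholder for whatever the portal download produces.
Import-Csv ".\devices-export.csv" |
    Where-Object { $_.accountEnabled -eq "False" } |
    Select-Object displayName, deviceOSType, deviceOSVersion, registrationTime
```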
+ ## Configure device settings To manage device identities using the Azure AD portal, those devices need to be either [registered or joined](overview.md) to Azure AD. As an administrator, you can control the process of registering and joining devices by configuring the following device settings.
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/r
> [!IMPORTANT] > This functionality is in preview and is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Client certificate authentication is a mutual certificate-based authentication, where the client provides a client certificate to the server to prove its identity. In this case, Azure Active Directory will use the certificate that you upload as part of the API connector configuration. This happens as a part of the SSL handshake. Only services that have proper certificates can access your API service. The client certificate is an X.509 digital certificate. In production environments, it should be signed by a certificate authority.
+Client certificate authentication is a mutual certificate-based authentication method where the client provides a client certificate to the server to prove its identity. In this case, Azure Active Directory will use the certificate that you upload as part of the API connector configuration. This happens as a part of the SSL handshake. Your API service can then limit access to only services that have proper certificates. The client certificate is a PKCS12 (PFX) X.509 digital certificate. In production environments, it should be signed by a certificate authority.
-To create a certificate, you can use [Azure Key Vault](../../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. You can then [export the certificate](../../key-vault/certificates/how-to-export-certificate.md) and upload it for use in the API connectors configuration. Note that password is only required for certificate files protected by a password. You can also use PowerShell's [New-SelfSignedCertificate cmdlet](../../active-directory-b2c/secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
+To create a certificate, you can use [Azure Key Vault](../../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. Recommended settings include:
+- **Subject**: `CN=<yourapiname>.<tenantname>.onmicrosoft.com`
+- **Content Type**: `PKCS #12`
+- **Lifetime Action Type**: `Email all contacts at a given percentage lifetime` or `Email all contacts a given number of days before expiry`
+- **Key Type**: `RSA`
+- **Key Size**: `2048`
+- **Exportable Private Key**: `Yes` (so that the .pfx file can be exported)
-For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and validate the certificate from your API endpoint.
+You can then [export the certificate](../../key-vault/certificates/how-to-export-certificate.md). You can alternatively use PowerShell's [New-SelfSignedCertificate cmdlet](../../active-directory-b2c/secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
-It's recommended you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **All API connectors** and click on **Upload new certificate**. The most recently uploaded certificate which is not expired and is past the start date will be used automatically by Azure Active Directory.
+After you have a certificate, you can then upload it as part of the API connector configuration. Note that a password is only required for certificate files protected by a password.
+
+Your API must implement authorization based on sent client certificates in order to protect the API endpoints. For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and *validate the certificate from your API code*. You can also use Azure API Management to protect your API and [check client certificate properties](../../api-management/api-management-howto-mutual-certificates-for-clients.md) against desired values using policy expressions.
+
+It's recommended you set reminder alerts for when your certificate will expire. You will need to generate a new certificate and repeat the steps above. Your API service can temporarily continue to accept both the old and new certificates while the new certificate is deployed. To upload a new certificate to an existing API connector, select the API connector under **All API connectors** and click **Upload new certificate**. The most recently uploaded certificate that is not expired and whose start date has passed will automatically be used by Azure Active Directory.
### API Key Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development. For [Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>.
active-directory Monitor Sign In Health For Resilience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md
For more information on how to create, view, and manage log alerts using Azure M
The query log opens.
- [![Screenshot showing the query log.](./media/monitor-sign-in-health-for-resilience/query-log.png)](/media/monitor-sign-in-health-for-resilience/query-log.png)
+ [![Screenshot showing the query log.](./media/monitor-sign-in-health-for-resilience/query-log.png)](./media/monitor-sign-in-health-for-resilience/query-log.png)
2. Copy one of the sample scripts for a new Kusto query.
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service. - If your firewall or proxy lets you add DNS entries to an allowlist, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- - If you have an outgoing HTTP proxy, make sure this URL, autologon.microsoftazuread-sso.com, is whitelisted . You should specify this URL explicitly since wildcard may not be accepted.
+ - If you have an outgoing HTTP proxy, make sure this URL, autologon.microsoftazuread-sso.com, is on the allowed list. You should specify this URL explicitly since wildcards may not be accepted.
- Your Authentication Agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well. - For certificate validation, unblock the following URLs: **crl3.digicert.com:80**, **crl4.digicert.com:80**, **ocsp.digicert.com:80**, **www\.d-trust.net:80**, **root-c3-ca2-2009.ocsp.d-trust.net:80**, **crl.microsoft.com:80**, **oneocsp.microsoft.com:80**, and **ocsp.msocsp.com:80**. Since these URLs are used for certificate validation with other Microsoft products you may already have these URLs unblocked.
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
Ensure that the following prerequisites are in place:
>Azure AD Connect versions 1.1.557.0, 1.1.558.0, 1.1.561.0, and 1.1.614.0 have a problem related to password hash synchronization. If you _don't_ intend to use password hash synchronization in conjunction with Pass-through Authentication, read the [Azure AD Connect release notes](./reference-connect-version-history.md) to learn more. >[!NOTE]
- >If you have an outgoing HTTP proxy, make sure this URL, autologon.microsoftazuread-sso.com, is whitelisted . You should specify this URL explicitly since wildcard may not be accepted.
+ >If you have an outgoing HTTP proxy, make sure this URL, autologon.microsoftazuread-sso.com, is on the allowed list. You should specify this URL explicitly since wildcards may not be accepted.
* **Use a supported Azure AD Connect topology**: Ensure that you are using one of Azure AD Connect's supported topologies described [here](plan-connect-topologies.md).
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-custom-domain-no-cert.md
-# Get all Azure AD Proxy application apps published with no certificate uploaded
+# Get all Application Proxy apps published with no certificate uploaded
This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy apps that are using custom domains but do not have a valid TLS/SSL certificate uploaded.
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-custom-domain-identical-cert.md
-# Get all Azure AD Proxy application apps that are published with the identical certificate
+# Get all Application Proxy apps that are published with the identical certificate
This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate.
active-directory Segment Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/segment-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Segment for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Segment.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 20939a92-5f48-4ef7-ab95-042e70ec1e0e
+++
+ na
+ms.devlang: na
+ Last updated : 03/24/2021
+# Tutorial: Configure Segment for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Segment and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Segment](https://www.segment.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Segment
+> * Remove users in Segment when they no longer require access
+> * Keep user attributes synchronized between Azure AD and Segment
+> * Provision groups and group memberships in Segment
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/segment-tutorial) to Segment (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A user account in Segment with Owner permissions.
+* Your workspace must have SSO enabled (requires a Business Tier subscription).
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Segment](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Segment to support provisioning with Azure AD
+
+1. The Tenant URL is `https://scim.segmentapis.com/scim/v2`. This value will be entered in the **Tenant URL** field in the Provisioning tab of your Segment application in the Azure portal.
+
+2. Log in to the [Segment](https://www.segment.com/) app.
+
+3. On the left panel, navigate to **Settings** > **Authentication** > **Advanced Settings**.
+
+ ![panel](media/segment-provisioning-tutorial/left.png)
+
+4. Scroll down to **SSO Sync** and click on **Generate SSO Token**.
+
+ ![access](media/segment-provisioning-tutorial/token.png)
+
+5. Copy and save the Bearer token. This value will be entered in the **Secret Token** field in the Provisioning tab of your Segment application in the Azure portal.
+
+ ![token](media/segment-provisioning-tutorial/access.png)
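Optionally, you can sanity-check the token before configuring Azure AD; a minimal sketch, using the standard SCIM `/Users` endpoint and the token copied in the previous step:

```bash
# Verify the SSO token works against Segment's SCIM endpoint before entering
# it in the Azure portal. <sso-token> is the bearer token copied above.
curl -H "Authorization: Bearer <sso-token>" \
  "https://scim.segmentapis.com/scim/v2/Users?count=1"
```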
+
+## Step 3. Add Segment from the Azure AD application gallery
+
+Add Segment from the Azure AD application gallery to start managing provisioning to Segment. If you have previously set up Segment for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Segment, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Segment
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Segment based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Segment in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Segment**.
+
+ ![The Segment link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Segment Tenant URL and Secret Token retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to Segment. If the connection fails, ensure your Segment account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Segment**.
+
+9. Review the user attributes that are synchronized from Azure AD to Segment in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Segment for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Segment API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |emails[type eq "work"].value|String||
+ |displayName|String||
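As an aside, matching on an attribute that supports filtering corresponds to a SCIM filter query like the following sketch (endpoint and token as configured above; the user name is a placeholder):

```bash
# Hypothetical SCIM filter call, mirroring how the provisioning service matches
# existing users on userName (the matching attribute in the table above).
curl -G -H "Authorization: Bearer <sso-token>" \
  "https://scim.segmentapis.com/scim/v2/Users" \
  --data-urlencode 'filter=userName eq "alice@contoso.com"'
```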
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Segment**.
+
+11. Review the group attributes that are synchronized from Azure AD to Segment in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Segment for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ |---|---|---|
+ |displayName|String|&check;|
+ |members|Reference||
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Segment, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Segment by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-aad.md
na Previously updated : 11/04/2019 Last updated : 03/22/2021
This article shows you how to enable access to the developer portal for users fr
Controls that enable you to enter other necessary information appear in the pane. The controls include **Client ID** and **Client secret**. (You get information about these controls later in the article.) 9. Make a note of the content of **Redirect URL**.
- ![Steps for adding an identity provider in the Azure portal](./media/api-management-howto-aad/api-management-with-aad001.png)
+
+ :::image type="content" source="media/api-management-howto-aad/api-management-with-aad001.png" alt-text="Add identity provider in Azure portal":::
+ > [!NOTE]
+ > There are two redirect URLs:<br/>
+ > **Redirect URL** - points to the latest developer portal of the API Management.<br/>
+ > **Redirect URL (deprecated portal)** - points to the deprecated developer portal of API Management.
+ >
+ > It is recommended to use the latest developer portal Redirect URL.
+
10. In your browser, open a different tab.
11. Navigate to the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) to register an app in Active Directory.
12. Under **Manage**, select **App registrations**.
api-management Api Management Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-kubernetes.md
To get a subscription key for accessing APIs, a subscription is required. A subs
### Option 3: Deploy APIM inside the cluster VNet
-In some cases, customers with regulatory constraints or strict security requirements may find Option 1 and 2 not viable solutions due to publicly exposed endpoints. In others, the AKS cluster and the applications that consume the microservices might reside within the same VNet, hence there is no reason to expose the cluster publicly as all API traffic will remain within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. [API Management Premium tier](https://aka.ms/apimpricing) supports VNet deployment.
+In some cases, customers with regulatory constraints or strict security requirements may find Options 1 and 2 unviable because of publicly exposed endpoints. In other cases, the AKS cluster and the applications that consume the microservices might reside within the same VNet, in which case there is no reason to expose the cluster publicly since all API traffic remains within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. The [API Management Developer and Premium tiers](https://aka.ms/apimpricing) support VNet deployment.
There are two modes of [deploying API Management into a VNet](./api-management-using-with-vnet.md) – External and Internal.
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
app-service App Service Authentication How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-authentication-how-to.md
There are two versions of the management API for the Authentication / Authorizat
"allowedExternalRedirectUrls": null, "defaultProvider": "AzureActiveDirectory", "clientId": "3197c8ed-2470-480a-8fae-58c25558ac9b",
- "clientSecret": null,
+ "clientSecret": "",
"clientSecretSettingName": "MICROSOFT_IDENTITY_AUTHENTICATION_SECRET", "clientSecretCertificateThumbprint": null, "issuer": "https://sts.windows.net/0b2ef922-672a-4707-9643-9a5726eec524/",
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-custom-container.md
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"DB
When your app runs, the App Service app settings are injected into the process as environment variables automatically. You can verify container environment variables with the URL `https://<app-name>.scm.azurewebsites.net/Env`.
+If your app uses images from a private registry or from Docker Hub, credentials for accessing the repository are saved in environment variables: `DOCKER_REGISTRY_SERVER_URL`, `DOCKER_REGISTRY_SERVER_USERNAME`, and `DOCKER_REGISTRY_SERVER_PASSWORD`. Because of the security risk, none of these reserved variable names is exposed to the application.
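As a quick check of the app-settings injection described above, a minimal sketch (the setting name `DB_HOST` is a placeholder):

```azurecli
# Add an app setting; App Service injects it into the container's process
# as an environment variable at the next start.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
    --settings DB_HOST=myhost
```

You can then confirm the variable appears at `https://<app-name>.scm.azurewebsites.net/Env`.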
::: zone pivot="container-windows"

For IIS or .NET Framework (4.0 or above) based containers, they're injected into `System.ConfigurationManager` as .NET app settings and connection strings automatically by App Service. For all other languages or frameworks, they're provided as environment variables for the process, with one of the following corresponding prefixes:
app-service Overview Patch Os Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-patch-os-runtime.md
When a new major or minor version is added, it is installed side by side with th
az webapp config set --net-framework-version v4.7 --resource-group <groupname> --name <appname>
az webapp config set --php-version 7.0 --resource-group <groupname> --name <appname>
az webapp config appsettings set --settings WEBSITE_NODE_DEFAULT_VERSION=8.9.3 --resource-group <groupname> --name <appname>
-az webapp config set --python-version 3.4 --resource-group <groupname> --name <appname>
+az webapp config set --python-version 3.8 --resource-group <groupname> --name <appname>
az webapp config set --java-version 1.8 --java-container Tomcat --java-container-version 9.0 --resource-group <groupname> --name <appname>
```
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
application-gateway Tutorial Url Redirect Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-url-redirect-powershell.md
description: Learn how to create an application gateway with URL path-based redi
Previously updated : 03/19/2020 Last updated : 03/24/2021 #Customer intent: As an IT administrator, I want to use Azure PowerShell to set up URL path redirection of web traffic to specific pools of servers so I can ensure my customers have access to the information they need.
Set-AzApplicationGateway -ApplicationGateway $appgw
In this example, you create three virtual machine scale sets that support the three backend pools that you created. The scale sets that you create are named *myvmss1*, *myvmss2*, and *myvmss3*. Each scale set contains two virtual machine instances on which you install IIS. You assign the scale set to the backend pool when you configure the IP settings.
+Replace \<username> and \<password> with your own values before you run the script.
```azurepowershell-interactive
$vnet = Get-AzVirtualNetwork `
  -ResourceGroupName myResourceGroupAG `
for ($i=1; $i -le 3; $i++)
  -OsDiskCreateOption FromImage
Set-AzVmssOsProfile $vmssConfig `
- -AdminUsername azureuser `
- -AdminPassword "Azure123456!" `
+ -AdminUsername <username> `
+ -AdminPassword "<password>" `
  -ComputerNamePrefix myvmss$i
Add-AzVmssNetworkInterfaceConfiguration `
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/claim-sets.md
Claims generated in the process of attesting enclaves using Microsoft Azure Atte
- **Incoming claims**: The claims generated by Microsoft Azure Attestation after parsing the attestation evidence and can be used by policy authors to define authorization rules in a custom policy
-- **Outgoing claims**: The claims generated by Azure Attestation and contains all claims that end up in the attestation token
+- **Outgoing claims**: The claims generated by Azure Attestation and included in the attestation token
- **Property claims**: The claims created as an output by Azure Attestation. They contain all the claims that represent properties of the attestation token, such as encoding of the report, validity duration of the report, and so on.
-### Common incoming claims across all attestation types
+## Incoming claims
-Below claims are generated by Azure Attestation and can be used by policy authors to define authorization rules in a custom policy for all attestation types.
+### SGX attestation
-- **x-ms-ver**: JWT schema version (expected to be "1.0")
-- **x-ms-attestation-type**: String value representing attestation type
-- **x-ms-policy-hash**: Hash of Azure Attestation evaluation policy computed as BASE64URL(SHA256(UTF8(BASE64URL(UTF8(policy text)))))
-- **x-ms-policy-signer**: JSON object with a "jwk" member representing the key a customer used to sign their policy, when customer uploads a signed policy
+Claims to be used by policy authors to define authorization rules in an SGX attestation policy:
+
+- **x-ms-sgx-is-debuggable**: A Boolean that indicates whether the enclave has debugging enabled
+- **x-ms-sgx-product-id**: Product ID value of the SGX enclave
+- **x-ms-sgx-mrsigner**: hex encoded value of the "mrsigner" field of the quote
+- **x-ms-sgx-mrenclave**: hex encoded value of the "mrenclave" field of the quote
+- **x-ms-sgx-svn**: security version number encoded in the quote
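For illustration, a minimal SGX attestation policy built from these claims might look like the following sketch (saved to a file for upload; the mrsigner value is a placeholder you would replace with your enclave signer's hash):

```bash
# Hypothetical sketch: author an SGX policy that permits only non-debuggable
# enclaves from a specific signer. <mrsigner-hex-value> is a placeholder.
cat > sgx-policy.txt <<'EOF'
version=1.0;
authorizationrules
{
    [ type=="x-ms-sgx-is-debuggable", value==false ] &&
    [ type=="x-ms-sgx-mrsigner", value=="<mrsigner-hex-value>" ] => permit();
};
issuancerules
{
};
EOF
```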
-Below claims are considered deprecated but are fully supported. It is recommended to use the non-deprecated claim names.
+The claims below are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names.
-Deprecated claim | Recommended claim
+Deprecated claim | Recommended claim
| --- | --- |
-ver | x-ms-ver
-tee | x-ms-attestation-type
-maa-policyHash | x-ms-policy-hash
-policy_hash | x-ms-policy-hash
-policy_signer | x-ms-policy-signer
+$is-debuggable | x-ms-sgx-is-debuggable
+$product-id | x-ms-sgx-product-id
+$sgx-mrsigner | x-ms-sgx-mrsigner
+$sgx-mrenclave | x-ms-sgx-mrenclave
+$svn | x-ms-sgx-svn
+
+### TPM attestation
-### Common outgoing claims across all attestation types
+Claims to be used by policy authors to define authorization rules in a TPM attestation policy:
-Below claims are included in the attestation token for all attestation types by the service.
+- **aikValidated**: Boolean value indicating whether the Attestation Identity Key (AIK) cert has been validated
+- **aikPubHash**: String containing the base64(SHA256(AIK public key in DER format))
+- **tpmVersion**: Integer value containing the Trusted Platform Module (TPM) major version
+- **secureBootEnabled**: Boolean value to indicate if secure boot is enabled
+- **iommuEnabled**: Boolean value to indicate if Input-output memory management unit (Iommu) is enabled
+- **bootDebuggingDisabled**: Boolean value to indicate if boot debugging is disabled
+- **notSafeMode**: Boolean value to indicate if Windows is not running in safe mode
+- **notWinPE**: Boolean value indicating if Windows is not running in WinPE mode
+- **vbsEnabled**: Boolean value indicating if VBS is enabled
+- **vbsReportPresent**: Boolean value indicating if VBS enclave report is available
-Source: As defined by [IETF JWT](https://tools.ietf.org/html/rfc7519)
+### VBS attestation
-- **"jti" (JWT ID) Claim**-- **"iss" (Issuer) Claim**-- **"iat" (Issued At) Claim**-- **"exp" (Expiration Time) Claim**-- **"nbf" (Not Before) Claim**
+In addition to the TPM attestation policy claims, the claims below can be used by policy authors to define authorization rules in a VBS attestation policy.
-Source: As defined by [IETF EAT](https://tools.ietf.org/html/draft-ietf-rats-eat-03#page-9)
+- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author ID. The author identifier of the primary module for the enclave
+- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave image ID. The image identifier of the primary module for the enclave
+- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave owner ID. The identifier of the owner for the enclave
+- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave Family ID. The family identifier of the primary module for the enclave
+- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave
+- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave
+- **enclaveFlags**: Integer value containing flags that describe the runtime policy for the enclave
+
+## Outgoing claims
-- **"Nonce claim" (nonce)**
+### Common for all attestation types
-Below claims are included in the attestation token by default based on the incoming claims:
+Azure Attestation includes the below claims in the attestation token for all attestation types.
- **x-ms-ver**: JWT schema version (expected to be "1.0")
- **x-ms-attestation-type**: String value representing attestation type
-- **x-ms-policy-hash**: String value containing SHA256 hash of the policy text computed by BASE64URL(SHA256(UTF8(BASE64URL(UTF8(policy text)))))
-- **x-ms-policy-signer**: Contains a JWK with the public key or the certificate chain present in the signed policy header. x-ms-policy-signer is only added if the policy is signed
+- **x-ms-policy-hash**: Hash of Azure Attestation evaluation policy computed as BASE64URL(SHA256(UTF8(BASE64URL(UTF8(policy text)))))
+- **x-ms-policy-signer**: JSON object with a "jwk" member representing the key a customer used to sign their policy. This is applicable when a customer uploads a signed policy
-## Claims specific to SGX enclaves
+The claim names below are used from the [IETF JWT specification](https://tools.ietf.org/html/rfc7519)
-### Incoming claims specific to SGX attestation
+- **"jti" (JWT ID) Claim** - Unique identifier for the JWT
+- **"iss" (Issuer) Claim** - The principal that issued the JWT
+- **"iat" (Issued At) Claim** - The time at which the JWT was issued at
+- **"exp" (Expiration Time) Claim** - Expiration time after which the JWT must not be accepted for processing
+- **"nbf" (Not Before) Claim** - Not Before time before which the JWT must not be accepted for processing
-Below claims are generated by Azure Attestation and can be used by policy authors to define authorization rules in a custom policy for SGX attestation.
+The claim names below are used from the [IETF EAT draft specification](https://tools.ietf.org/html/draft-ietf-rats-eat-03#page-9)
-- **x-ms-sgx-is-debuggable**: A Boolean, which indicates whether or not the enclave has debugging enabled or not-- **x-ms-sgx-product-id**-- **x-ms-sgx-mrsigner**: hex encoded value of the ΓÇ£mrsignerΓÇ¥ field of the quote-- **x-ms-sgx-mrenclave**: hex encoded value of the ΓÇ£mrenclaveΓÇ¥ field of the quote-- **x-ms-sgx-svn**: security version number encoded in the quote
+- **"Nonce claim" (nonce)** - An untransformed direct copy of an optional nonce value provided by a client
+
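To inspect these claims on a real token, you can decode the JWT payload; a minimal sketch, assuming the token is stored in the shell variable `TOKEN`:

```bash
# Decode the JWT payload (second dot-separated segment) of an attestation token
# to inspect claims such as x-ms-ver, x-ms-attestation-type, iss, and exp.
PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Restore base64 padding before decoding
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d | python3 -m json.tool
```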
+The claims below are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names.
+
+Deprecated claim | Recommended claim
+ |
+ver | x-ms-ver
+tee | x-ms-attestation-type
+policy_hash | x-ms-policy-hash
+maa-policyHash | x-ms-policy-hash
+policy_signer | x-ms-policy-signer
-### Outgoing claims specific to SGX attestation
+### SGX attestation
The claims below are generated and included in the attestation token by the service for SGX attestation.

- **x-ms-sgx-is-debuggable**: A Boolean that indicates whether the enclave has debugging enabled
-- **x-ms-sgx-product-id**
+- **x-ms-sgx-product-id**: Product ID value of the SGX enclave
- **x-ms-sgx-mrsigner**: hex encoded value of the "mrsigner" field of the quote
- **x-ms-sgx-mrenclave**: hex encoded value of the "mrenclave" field of the quote
- **x-ms-sgx-svn**: security version number encoded in the quote
- **x-ms-sgx-ehd**: enclave held data formatted as BASE64URL(enclave held data)
- **x-ms-sgx-collateral**: JSON object describing the collateral used to perform attestation. The value for the x-ms-sgx-collateral claim is a nested JSON object with the following key/value pairs:
- - **qeidcertshash**: SHA256 value of QE Identity issuing certs
+ - **qeidcertshash**: SHA256 value of Quoting Enclave (QE) Identity issuing certs
- **qeidcrlhash**: SHA256 value of QE Identity issuing certs CRL list - **qeidhash**: SHA256 value of the QE Identity collateral - **quotehash**: SHA256 value of the evaluated quote - **tcbinfocertshash**: SHA256 value of the TCB Info issuing certs - **tcbinfocrlhash**: SHA256 value of the TCB Info issuing certs CRL list
- - **tcbinfohash**: JSON object describing the collateral used to perform attestation
+ - **tcbinfohash**: SHA256 value of the TCB Info collateral
The claims below are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names.

Deprecated claim | Recommended claim
| --- | --- |
$is-debuggable | x-ms-sgx-is-debuggable
+$product-id | x-ms-sgx-product-id
$sgx-mrsigner | x-ms-sgx-mrsigner $sgx-mrenclave | x-ms-sgx-mrenclave
-$product-id | x-ms-sgx-product-id
$svn | x-ms-sgx-svn
-$tee | x-ms-attestation-type
-maa-ehd | x-ms-sgx-ehd
-aas-ehd | x-ms-sgx-ehd
-maa-attestationcollateral | x-ms-sgx-collateral
-
-## Claims specific to Trusted Platform Module (TPM)/ VBS attestation
+$maa-ehd | x-ms-sgx-ehd
+$aas-ehd | x-ms-sgx-ehd
+$maa-attestationcollateral | x-ms-sgx-collateral
-### Incoming claims for TPM attestation
-
-Claims issued by Azure Attestation for TPM attestation. The availability of the claims is dependent on the evidence provided for attestation.
-
-- **aikValidated**: Boolean value containing information if the Attestation Identity Key (AIK) cert has been validated or not
-- **aikPubHash**: String containing the base64(SHA256(AIK public key in DER format))
-- **tpmVersion**: Integer value containing the Trusted Platform Module (TPM) major version
-- **secureBootEnabled**: Boolean value to indicate if secure boot is enabled
-- **iommuEnabled**: Boolean value to indicate if Input-output memory management unit (Iommu) is enabled
-- **bootDebuggingDisabled**: Boolean value to indicate if boot debugging is disabled
-- **notSafeMode**: Boolean value to indicate if the Windows is not running on safe mode
-- **notWinPE**: Boolean value indicating if Windows is not running in WinPE mode
-- **vbsEnabled**: Boolean value indicating if VBS is enabled
-- **vbsReportPresent**: Boolean value indicating if VBS enclave report is available
-
-### Incoming claims for VBS attestation
-
-Claims issued by Azure Attestation for VBS attestation is in addition to the claims made available for TPM attestation. The availability of the claims is dependent on the evidence provided for attestation.
-
-- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author id-The author identifier of the primary module for the enclave
-- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave Image id-The image identifier of the primary module for the enclave
-- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave Owner id-The identifier of the owner for the enclave
-- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave Family ID. The family identifier of the primary module for the enclave
-- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave
-- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave
-- **enclaveFlags**: The enclaveFlags claim is an Integer value containing Flags that describe the runtime policy for the enclave
-
-### Outgoing claims specific to TPM and VBS attestation
+### TPM and VBS attestation
- **cnf (Confirmation)**: The "cnf" claim is used to identify the proof-of-possession key. Confirmation claim as defined in RFC 7800, contains the public part of the attested enclave key represented as a JSON Web Key (JWK) object (RFC 7517) - **rp_data (relying party data)**: Relying party data, if any, specified in the request, used by the relying party as a nonce to guarantee freshness of the report. rp_data is only added if there is rp_data
-### Property claims
+## Property claims
+
+### TPM and VBS attestation
-- **report_validity_in_minutes**: An integer claim signifying for how long the token is valid.
+- **report_validity_in_minutes**: An integer claim that signifies how long the token is valid.
  - **Default value(time)**: One day in minutes.
  - **Maximum value(time)**: One year in minutes.
- **omit_x5c**: A Boolean claim indicating if Azure Attestation should omit the cert used to provide proof of service authenticity. If true, x5t will be added to the attestation token. If false (default), x5c will be added to the attestation token.
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-configurations.md
This at-scale enforcement ensures a common baseline configuration (containing co
## Next steps
-* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./connect-cluster.md).
-* Already have a Kubernetes cluster connected Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./use-gitops-connected-cluster.md).
+* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* Already have a Kubernetes cluster connected to Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md).
* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Conceptual Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md
Consider an application deployed to one or more Kubernetes environments.
### Application repo

The application repo contains the application code that developers work on during their inner loop. The application's deployment templates live in this repo in a generic form, like Helm or Kustomize. Environment-specific values aren't stored. Changes to this repo invoke a PR or CI pipeline that starts the deployment process.

### Container Registry
-The container registry holds all the first- and third-party images used in the Kubernetes environments. Tag first-party application images with human readable tags and the Git commit used to build the image. Cache third-party images for security, speed, and resilience. Set a plan for timely testing and integration of security updates. For more information, see the [ACR Consume and maintain public content](https://docs.microsoft.com/azure/container-registry/tasks-consume-public-content) guide for an example.
+The container registry holds all the first- and third-party images used in the Kubernetes environments. Tag first-party application images with human readable tags and the Git commit used to build the image. Cache third-party images for security, speed, and resilience. Set a plan for timely testing and integration of security updates. For more information, see the [ACR Consume and maintain public content](../../container-registry/tasks-consume-public-content.md) guide for an example.
### PR Pipeline

PRs to the application repo are gated on a successful run of the PR pipeline. This pipeline runs the basic quality gates, such as linting and unit tests on the application code. The pipeline tests the application and lints Dockerfiles and Helm templates used for deployment to a Kubernetes environment. Docker images should be built and tested, but not pushed. Keep the pipeline duration relatively short to allow for rapid iteration.

### CI Pipeline
Suppose Alice wants to make an application change that alters the Docker image u
8. Once all the environments have received successful deployments, the pipeline completes.

## Next steps
-Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc enabled Kubernetes](./conceptual-configurations.md)
+Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc enabled Kubernetes](./conceptual-configurations.md)
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/faq.md
This feature applies baseline configurations (like network policies, role bindin
## Next steps
-* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./connect-cluster.md).
-* Already have a Kubernetes cluster connected Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./use-gitops-connected-cluster.md).
+* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* Already have a Kubernetes cluster connected to Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md).
* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021 #
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/troubleshooting.md
REVISION: 5
TEST SUITE: None
```
-If the Helm release isn't found or missing, try [connecting the cluster to Azure Arc](./connect-cluster.md) again.
+If the Helm release isn't found or missing, try [connecting the cluster to Azure Arc](./quickstart-connect-cluster.md) again.
If the Helm release is present with `STATUS: deployed`, check the status of the agents using `kubectl`:
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
If you don't have an Azure subscription, create a [free account](https://azure
This tutorial assumes familiarity with Azure DevOps, Azure Repos and Pipelines, and Azure CLI. * Sign into [Azure DevOps Services](https://dev.azure.com/).
-* Complete the [previous tutorial](https://docs.microsoft.com/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster) to learn how to deploy GitOps for your CI/CD environment.
-* Understand the [benefits and architecture](https://docs.microsoft.com/azure/azure-arc/kubernetes/conceptual-configurations) of this feature.
+* Complete the [previous tutorial](./tutorial-use-gitops-connected-cluster.md) to learn how to deploy GitOps for your CI/CD environment.
+* Understand the [benefits and architecture](./conceptual-configurations.md) of this feature.
* Verify you have:
- * A [connected Azure Arc enabled Kubernetes cluster](https://docs.microsoft.com/azure/azure-arc/kubernetes/quickstart-connect-cluster#connect-an-existing-kubernetes-cluster) named **arc-cicd-cluster**.
- * A connected Azure Container Registry (ACR) with either [AKS integration](https://docs.microsoft.com/azure/aks/cluster-container-registry-integration) or [non-AKS cluster authentication](https://docs.microsoft.com/azure/container-registry/container-registry-auth-kubernetes).
- * "Build Admin" and "Project Admin" permissions for [Azure Repos](https://docs.microsoft.com/azure/devops/repos/get-started/what-is-repos) and [Azure Pipelines](https://docs.microsoft.com/azure/devops/pipelines/get-started/pipelines-get-started).
+ * A [connected Azure Arc enabled Kubernetes cluster](./quickstart-connect-cluster.md#connect-an-existing-kubernetes-cluster) named **arc-cicd-cluster**.
+ * A connected Azure Container Registry (ACR) with either [AKS integration](../../aks/cluster-container-registry-integration.md) or [non-AKS cluster authentication](../../container-registry/container-registry-auth-kubernetes.md).
+ * "Build Admin" and "Project Admin" permissions for [Azure Repos](/azure/devops/repos/get-started/what-is-repos) and [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-get-started).
* Install the following Azure Arc enabled Kubernetes CLI extensions of versions >= 1.0.0: ```azurecli
Import an [application repo](https://docs.microsoft.com/azure/azure-arc/kubernet
* URL: https://github.com/Azure/arc-cicd-demo-gitops * Works as a base for your cluster resources that house the Azure Vote App.
-Learn more about [importing Git repos](https://docs.microsoft.com/azure/devops/repos/git/import-git-repository).
+Learn more about [importing Git repos](/azure/devops/repos/git/import-git-repository).
>[!NOTE] > Importing and using two separate repositories for application and GitOps repos can improve security and simplicity. The application and GitOps repositories' permissions and visibility can be tuned individually.
The GitOps connection that you create will automatically:
The CI/CD workflow will populate the manifest directory with extra manifests to deploy the app.
-1. [Create a new GitOps connection](https://docs.microsoft.com/azure/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster) to your newly imported **arc-cicd-demo-gitops** repo in Azure Repos.
+1. [Create a new GitOps connection](./tutorial-use-gitops-connected-cluster.md) to your newly imported **arc-cicd-demo-gitops** repo in Azure Repos.
```azurecli az k8sconfiguration create \
kubectl create secret docker-registry <secret-name> \
## Create environment variable groups

### App repo variable group
-[Create a variable group](https://docs.microsoft.com/azure/devops/pipelines/library/variable-groups) named **az-vote-app-dev**. Set the following values:
+[Create a variable group](/azure/devops/pipelines/library/variable-groups) named **az-vote-app-dev**. Set the following values:
| Variable | Value |
| -- | -- |
kubectl create secret docker-registry <secret-name> \
| ENVIRONMENT_NAME | Dev |
| MANIFESTS_BRANCH | `master` |
| MANIFESTS_REPO | The Git connection string for your GitOps repo |
-| PAT | A [created PAT token](https://docs.microsoft.com/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?#create-a-pat) with Read/Write source permissions. Save it to use later when creating the `stage` variable group. |
+| PAT | A [created PAT token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat) with Read/Write source permissions. Save it to use later when creating the `stage` variable group. |
| SRC_FOLDER | `azure-vote` |
| TARGET_CLUSTER | `arc-cicd-cluster` |
| TARGET_NAMESPACE | `dev` |

> [!IMPORTANT]
-> Mark your PAT as a secret type. In your applications, consider linking secrets from an [Azure KeyVault](https://docs.microsoft.com/azure/devops/pipelines/library/variable-groups#link-secrets-from-an-azure-key-vault).
+> Mark your PAT as a secret type. In your applications, consider linking secrets from an [Azure KeyVault](/azure/devops/pipelines/library/variable-groups#link-secrets-from-an-azure-key-vault).
>

### Stage environment variable group
If the dev environment reveals a break after deployment, keep it from going to l
1. Provide the approvers and an optional message.
1. Select **Create** again to complete the addition of the manual approval check.
-For more details, see the [Define approval and checks](https://docs.microsoft.com/azure/devops/pipelines/process/approvals) tutorial.
+For more details, see the [Define approval and checks](/azure/devops/pipelines/process/approvals) tutorial.
Next time the CD pipeline runs, the pipeline will pause after the GitOps PR creation. Verify the change has been synced properly and passes basic functionality. Approve the check from the pipeline to let the change flow to the next environment.
Errors found during pipeline execution appear in the test results section of the
Once the pipeline run has finished, you have assured the quality of the application code and the template that will deploy it. You can now approve and complete the PR. The CI will run again, regenerating the templates and manifests, before triggering the CD pipeline. > [!TIP]
-> In a real environment, don't forget to set branch policies to ensure the PR passes your quality checks. For more information, see the [Set branch policies](https://docs.microsoft.com/azure/devops/repos/git/branch-policies) article.
+> In a real environment, don't forget to set branch policies to ensure the PR passes your quality checks. For more information, see the [Set branch policies](/azure/devops/repos/git/branch-policies) article.
## CD process approvals
In this tutorial, you have set up a full CI/CD workflow that implements DevOps f
Advance to our conceptual article to learn more about GitOps and configurations with Azure Arc enabled Kubernetes. > [!div class="nextstepaction"]
-> [CI/CD Workflow using GitOps - Azure Arc enabled Kubernetes](https://docs.microsoft.com/azure/azure-arc/kubernetes/conceptual-gitops-cicd)
+> [CI/CD Workflow using GitOps - Azure Arc enabled Kubernetes](https://docs.microsoft.com/azure/azure-arc/kubernetes/conceptual-gitops-cicd)
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Just like private keys, you can provide your known_hosts content directly or in
>[!NOTE] >* Helm operator chart version 1.2.0+ supports the HTTPS Helm release private auth. >* HTTPS Helm release is not supported for AKS managed clusters.
->* If you need Flux to access the Git repository through your proxy, you will need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./connect-cluster.md#connect-using-an-outbound-proxy-server).
+>* If you need Flux to access the Git repository through your proxy, you will need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
## Additional Parameters
az k8s-configuration delete --name cluster-config --cluster-name AzureArcTest1 -
Advance to the next tutorial to learn how to implement CI/CD with GitOps. > [!div class="nextstepaction"]
-> [Implement CI/CD with GitOps](./tutorial-gitops-ci-cd.md)
+> [Implement CI/CD with GitOps](./tutorial-gitops-ci-cd.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/overview.md
When you connect your machine to Azure Arc enabled servers, it enables the abili
- Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) using the same experience as policy assignment for Azure virtual machines. Today, most Guest Configuration policies do not apply configurations, they only audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/).

-- Report on configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers using Azure Automation [Change Tracking and Inventory](../../automation/change-tracking/overview.md) and [Azure Security Center File Integrity Monitoring](https://docs.microsoft.com/azure/security-center/security-center-file-integrity-monitoring), for servers enabled with [Azure Defender for servers](https://docs.microsoft.com/azure/security-center/defender-for-servers-introduction).
+- Report on configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers using Azure Automation [Change Tracking and Inventory](../../automation/change-tracking/overview.md) and [Azure Security Center File Integrity Monitoring](../../security-center/security-center-file-integrity-monitoring.md), for servers enabled with [Azure Defender for servers](../../security-center/defender-for-servers-introduction.md).
- Monitor your connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources the application communicates using [Azure Monitor for VMs](../../azure-monitor/vm/vminsights-overview.md).
The Connected Machine agent sends a regular heartbeat message to the service eve
## Next steps
-Before evaluating or enabling Arc enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
+Before evaluating or enabling Arc enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-at-scale-deployment.md
Next, we add to the foundation laid in phase 1 by preparing for and deploying th
|Task |Detail |Duration |
|---|---|---|
-| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> <ul><li>[At-scale deployment using PowerShell remoting](https://docs.microsoft.com/azure/azure-arc/servers/onboard-powershell) (Windows only)</ul></li>| One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
+| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> <ul><li>[At-scale deployment using PowerShell remoting](./onboard-powershell.md) (Windows only)</ul></li>| One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
| [Create service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) |Create a service principal to connect machines non-interactively using Azure PowerShell or from the portal.| One hour |
| Deploy the Connected Machine agent to your target servers and machines |Use your automation tool to deploy the scripts to your servers and connect them to Azure.| One or more days depending on your release plan and if following a phased rollout. |
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-australia Gateway Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/gateway-egress-traffic.md
Examples of services that can be added using the next hop of Internet are:
|Resource|Link|
|---|---|
| Outbound connections in Azure | [https://docs.microsoft.com/azure/load-balancer/load-balancer-outbound-connections](../load-balancer/load-balancer-outbound-connections.md) |
-| Use Azure custom routes to enable KMS activation | [https://docs.microsoft.com/azure/virtual-machines/troubleshooting/custom-routes-enable-kms-activation](../virtual-machines/troubleshooting/custom-routes-enable-kms-activation.md) |
+| Use Azure custom routes to enable KMS activation | [https://docs.microsoft.com/azure/virtual-machines/troubleshooting/custom-routes-enable-kms-activation](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation) |
| Locking down an App Service Environment | [https://docs.microsoft.com/azure/app-service/environment/firewall-integration](../app-service/environment/firewall-integration.md) | |
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Active geo-replication groups two or more Enterprise Azure Cache for Redis insta
![Active geo-replication configured](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png)
-1. Repeat the above steps for each additional cache instance in the geo-replication group.
+1. Wait for the first cache to be created successfully. Repeat the above steps for each additional cache instance in the geo-replication group.
## Remove from an active geo-replication group
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-cache-for-redis Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-baseline.md
You may also specify firewall rules with a start and end IP address range. When
- [How to configure Virtual Network Support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md) -- [How to configure Azure Cache for Redis firewall rules](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-configure#firewall)
+- [How to configure Azure Cache for Redis firewall rules](./cache-configure.md#firewall)
**Responsibility**: Customer
Enable DDoS Protection Standard on the VNets associated with your Azure Cache fo
- [How to configure Virtual Network Support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md) -- [Manage Azure DDoS Protection Standard using the Azure portal](/azure/virtual-network/manage-ddos-protection)
+- [Manage Azure DDoS Protection Standard using the Azure portal](../ddos-protection/manage-ddos-protection.md)
**Responsibility**: Customer
You may also use application security groups (ASG) to help simplify complex secu
- [Virtual network service tags](../virtual-network/service-tags-overview.md) -- [Application Security Groups](/azure/virtual-network/security-overview#application-security-groups)
+- [Application Security Groups](../virtual-network/network-security-groups-overview.md#application-security-groups)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Use tags for network resources associated with your Azure Cache for Redis deployment in order to logically organize them into a taxonomy. -- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Use the Azure Activity log to monitor network resource configurations and detect changes for network resources related to your Azure Cache for Redis instances. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log-view)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Enable Azure Activity Log diagnostic settings and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive. Activity logs provide insight into the operations that were performed on your Azure Cache for Redis instances at the control plane level. Using Azure Activity Log data, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) performed at the control plane level for your Azure Cache for Redis instances. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/diagnostic-settings-legacy)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
While metrics are available by enabling Diagnostic Settings, audit logging at the data plane is not yet available for Azure Cache for Redis. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/diagnostic-settings-legacy)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
While metrics are available by enabling Diagnostic Settings, audit logging at th
Note that audit logging at the data plane is not yet available for Azure Cache for Redis. -- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
Note that audit logging at the data plane is not yet available for Azure Cache f
Note that audit logging at the data plane is not yet available for Azure Cache for Redis. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/diagnostic-settings-legacy)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
-- [How to collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](/azure/azure-monitor/platform/activity-log-collect)
+- [How to collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
Note that audit logging at the data plane is not yet available for Azure Cache f
While metrics are available by enabling Diagnostic Settings, audit logging at the data plane is not yet available for Azure Cache for Redis. -- [How to configure alerts for Azure Cache for Redis](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-how-to-monitor#alerts)
+- [How to configure alerts for Azure Cache for Redis](./cache-how-to-monitor.md#alerts)
**Responsibility**: Customer
While metrics are available by enabling Diagnostic Settings, audit logging at th
**Guidance**: Azure Active Directory (Azure AD) has built-in roles that must be explicitly assigned and are queryable. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups. -- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?preserve-view=true&view=azureadps-2.0)
**Responsibility**: Customer
Data plane access to Azure Cache for Redis is controlled through access keys. Th
It is not recommended that you build default passwords into your application. Instead, you can store your passwords in Azure Key Vault and then use Azure AD to retrieve them. -- [How to regenerate Azure Cache for Redis access keys](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-configure#settings)
+- [How to regenerate Azure Cache for Redis access keys](./cache-configure.md#settings)
**Responsibility**: Shared
In addition, use Azure AD risk detections to view alerts and reports on risky us
- [How to deploy Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-deployment-plan.md) -- [Understand Azure AD risk detections](/azure/active-directory/reports-monitoring/concept-risk-events)
+- [Understand Azure AD risk detections](../active-directory/identity-protection/overview-identity-protection.md)
**Responsibility**: Customer
Azure AD authentication cannot be used for direct access to Azure Cache for Redi
**Guidance**: Azure Active Directory (Azure AD) provides logs to help you discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right Users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
Azure AD authentication cannot be used for direct access to Azure Cache for Redi
You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired log alerts within Log Analytics. -- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- [How to on-board Azure Sentinel](../sentinel/quickstart-onboard.md)
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: For account login behavior deviation on the control plane, use Azure Active Directory (Azure AD) Identity Protection and risk detection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation. -- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use tags to assist in tracking Azure resources that store or process sensitive information. -- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Azure Cache for Redis instances should be separated by virtual network/subnet and tagged appropriately. Optionally, use the Azure Cache for Redis firewall to define rules so that only client connections from specified IP address ranges can connect to the cache. -- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
-- [How to create Management Groups](/azure/governance/management-groups/create)
+- [How to create Management Groups](../governance/management-groups/create-management-group-portal.md)
- [How to deploy Azure Cache for Redis into a Vnet](cache-how-to-premium-vnet.md) -- [How to configure Azure Cache for Redis firewall rules](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-configure#firewall)
+- [How to configure Azure Cache for Redis firewall rules](./cache-configure.md#firewall)
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
Microsoft manages the underlying infrastructure for Azure Cache for Redis and ha
- [Understand encryption in transit for Azure Cache for Redis](cache-best-practices.md) -- [Understand required ports used in Vnet cache scenarios](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-how-to-premium-vnet#outbound-port-requirements)
+- [Understand required ports used in Vnet cache scenarios](./cache-how-to-premium-vnet.md#outbound-port-requirements)
**Responsibility**: Shared
Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production instances of Azure Cache for Redis and other critical or related resources. -- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Although classic Azure resources may be discovered via Resource Graph, it is hig
**Guidance**: Apply tags to Azure resources giving metadata to logically organize them into a taxonomy. -- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
In addition, use Azure Policy to put restrictions on the type of resources that
For more information, see the following references: -- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
-- [How to create management groups](/azure/governance/management-groups/create)
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
-- [How to create and use resource tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use resource tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
For more information, see the following references:
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
For more information, see the following references:
For more information, see the following references: -- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
For more information, see the following references:
**Guidance**: If using custom Azure Policy definitions or Azure Resource Manager templates for your Azure Cache for Redis instances and related resources, use Azure Repos to securely store and manage your code. -- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
For more information, see the following references:
- [How to create a Key Vault](../key-vault/general/quick-create-portal.md) -- [How to authenticate to Key Vault](/azure/key-vault/managed-identity)
+- [How to authenticate to Key Vault](../key-vault/general/assign-access-policy-portal.md)
**Responsibility**: Customer
Periodically test data restoration of your Azure Key Vault secrets.
- [How to use Azure Cache for Redis Import](cache-how-to-import-export-data.md) -- [How to restore Key Vault Secrets](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultsecret?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore Key Vault Secrets](/powershell/module/az.keyvault/restore-azkeyvaultsecret?preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
To write to an output binding, you must apply an output binding attribute to the
### Multiple output bindings
-The data written to an output binding is always the return value of the function. If you need to write to more than one output binding, you must create a custom return type. This return type must have the output binding attribute applied to one or more properties of the class. The following example writes to both an HTTP response and a queue output binding:
+The data written to an output binding is always the return value of the function. If you need to write to more than one output binding, you must create a custom return type. This return type must have the output binding attribute applied to one or more properties of the class. The following example from an HTTP trigger writes to both the HTTP response and a queue output binding:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/MultiOutput/MultiOutput.cs" id="docsnippet_multiple_outputs":::
+The response from an HTTP trigger is always considered an output, so a return value attribute isn't required.
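For illustration, a custom return type along these lines pairs the HTTP response with a queue output. This is a sketch, not the referenced snippet itself: the `MyOutputType` name and the `myQueue` queue are assumptions, and the Storage queues worker extension is assumed to be referenced.

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class MultiOutput
{
    [Function("MultiOutput")]
    public static MyOutputType Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Success!");

        // Returning the custom type writes to every bound property at once.
        return new MyOutputType { Name = "queue message", HttpResponse = response };
    }
}

public class MyOutputType
{
    // The attribute routes this property to the queue output binding.
    [QueueOutput("myQueue")]
    public string Name { get; set; }

    // No attribute needed: the HTTP response is always treated as an output.
    public HttpResponseData HttpResponse { get; set; }
}
```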
+
### HTTP trigger

HTTP triggers translate the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request object, not the request itself.
-Likewise, the function returns an [HttpReponseData] object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
+Likewise, the function returns an [HttpResponseData] object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
The following code is an HTTP trigger
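A sketch of such a trigger under the isolated model (the function name and response text are illustrative; `CreateResponse` and `WriteString` are the worker's response-building helpers):

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class HttpFunction
{
    [Function("HttpFunction")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        // Build up the response description; the worker sends the actual response.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        response.WriteString("Welcome to Azure Functions!");
        return response;
    }
}
```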
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
Clients can enqueue *operations* for (also known as "signaling") an entity funct
[FunctionName("EventHubTriggerCSharp")] public static async Task Run( [EventHubTrigger("device-sensor-events")] EventData eventData,
- [DurableClient] IDurableOrchestrationClient entityClient)
+ [DurableClient] IDurableEntityClient entityClient)
{
    var metricType = (string)eventData.Properties["metric"];
    var delta = BitConverter.ToInt32(eventData.Body, eventData.Body.Offset);
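    // A plausible completion (the "Counter" entity name and the "add"
    // operation are assumptions): signal the entity to update its state
    // by the computed delta.
    await entityClient.SignalEntityAsync(new EntityId("Counter", metricType), "add", delta);
}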
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
zone_pivot_groups: programming-languages-set-functions-temp
This article shows you how to use Visual Studio Code to connect [Azure Cosmos DB](../cosmos-db/introduction.md) to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a JSON document stored in an Azure Cosmos DB container. ::: zone pivot="programming-language-csharp"
-Before you begin, you must complete the article, [Quickstart: Create an Azure Functions project from the command line](create-first-function-cli-csharp.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+Before you begin, you must complete the [quickstart: Create a C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
::: zone-end ::: zone pivot="programming-language-javascript"
-Before you begin, you must complete the article, [Quickstart: Create an Azure Functions project from the command line](create-first-function-cli-node.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+Before you begin, you must complete the [quickstart: Create a JavaScript function in Azure using Visual Studio Code](create-first-function-vs-code-node.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
## Configure your environment
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md
Using this configuration, the function is now addressable with the following rou
http://<APP_NAME>.azurewebsites.net/api/products/electronics/357
```
-This configuration allows the function code to support two parameters in the address, _category_ and _id_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](https://docs.microsoft.com/aspnet/core/fundamentals/routing#route-constraint-reference).
+This configuration allows the function code to support two parameters in the address, _category_ and _id_. For more information on how route parameters are tokenized in a URL, see [Routing in ASP.NET Core](/aspnet/core/fundamentals/routing#route-constraint-reference).
# [C#](#tab/csharp)
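A sketch of a matching handler in the in-process C# model, assuming the `products/{category:alpha}/{id:int?}` route template implied by the example URL (the function name, logging, and return value are illustrative):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Products
{
    [FunctionName("Products")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get",
            Route = "products/{category:alpha}/{id:int?}")] HttpRequest req,
        string category,
        int? id,
        ILogger log)
    {
        // Route values bind to the like-named method parameters.
        log.LogInformation($"Requested product {id} from the {category} category.");
        return new OkObjectResult($"category: {category}, id: {id}");
    }
}
```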
If a function that uses the HTTP trigger doesn't complete within 230 seconds, th
## Next steps -- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
+- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-output.md
Access the blob data via a parameter that matches the name designated by binding
You can declare function parameters as the following types to write out to blob storage:
-* Strings as `func.Out(str)`
-* Streams as `func.Out(func.InputStream)`
+* Strings as `func.Out[str]`
+* Streams as `func.Out[func.InputStream]`
Refer to the [output example](#example) for details.
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-vnet.md
Congratulations! You've successfully deployed your sample function app.
Now create the private endpoint to lock down your function app. This private endpoint will connect your function app privately and securely to your virtual network by using a private IP address.
-For more information, see the [private endpoint documentation](https://docs.microsoft.com/azure/private-link/private-endpoint-overview).
+For more information, see the [private endpoint documentation](../private-link/private-endpoint-overview.md).
1. In your function app, in the menu on the left, select **Networking**.
Use the following links to learn more about the available networking features:
> [!div class="nextstepaction"]
-> [Azure Functions Premium plan](./functions-premium-plan.md)
+> [Azure Functions Premium plan](./functions-premium-plan.md)
azure-functions Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-baseline.md
In addition, ensure remote debugging has been disabled for your production Azure
Consider deploying Azure Web Application Firewall (WAF) as part of the networking configuration for additional inspection of incoming traffic. Enable Diagnostic Setting for WAF and ingest logs into a Storage Account, Event Hub, or Log Analytics Workspace. -- [How to secure Azure Functions endpoints in production](https://docs.microsoft.com/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp#secure-an-http-endpoint-in-production)
+- [How to secure Azure Functions endpoints in production](./functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production)
- [How to deploy Azure WAF](../web-application-firewall/ag/create-waf-policy-ag.md)
Alternatively, there are multiple marketplace options like the Barracuda Web App
- [Using Private Endpoints for Azure Functions](../app-service/networking/private-endpoint.md) -- [Understand Barracuda WAF Cloud Service](https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-web-application-firewall#configuring-your-barracuda-waf-cloud-service)
+- [Understand Barracuda WAF Cloud Service](../app-service/environment/app-service-app-service-environment-web-application-firewall.md#configuring-your-barracuda-waf-cloud-service)
**Responsibility**: Customer
You may use Azure PowerShell or Azure CLI to look-up or perform actions on resou
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network settings and resources related to your Azure Functions deployments. Create alerts within Azure Monitor that will trigger when changes to critical network settings or resources takes place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
If you have built-in custom security/audit logging within your function app, ena
Optionally, you may enable and on-board data to Azure Sentinel or a third-party system information and event management solution. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to set up Azure Functions with Azure Application Insights](functions-monitoring.md)
Optionally, you may enable and on-board data to Azure Sentinel or a third-party
If you have built-in custom security/audit logging within your function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable Diagnostic Settings (user-generated logs) for Azure Functions](functions-monitor-log-analytics.md)
If you have built-in custom security/audit logging within your function app, ena
**Guidance**: In Azure Monitor, set log retention period for Log Analytics workspaces associated with your function apps according to your organization's compliance regulations. -- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
If you have built-in custom security/audit logging within your function app, ena
Optionally, you may enable and on-board data to Azure Sentinel or a third-party system information and event management solution. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable diagnostic settings for Azure Functions](functions-monitor-log-analytics.md)
Enable Application Insights for your function apps to collect log, performance,
Optionally, you may enable and on-board data to Azure Sentinel or a third-party system information and event management solution. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable diagnostic settings for Azure Functions](functions-monitor-log-analytics.md) -- [How to enable Application Insights for Azure Functions](https://docs.microsoft.com/azure/azure-functions/configure-monitoring#enable-application-insights-integration)
+- [How to enable Application Insights for Azure Functions](./configure-monitoring.md#enable-application-insights-integration)
**Responsibility**: Customer
Optionally, you may enable and on-board data to Azure Sentinel or a third-party
**Guidance**: Azure Active Directory (Azure AD) has built-in roles that must be explicitly assigned and are queryable. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups. -- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?preserve-view=true&view=azureadps-2.0)
**Responsibility**: Customer
Data plane access can be controlled through several means, including authorizati
Multiple deployment methods are available to function apps, some of which may leverage a set of generated credentials. Review the deployment methods that will be used for your application. -- [Secure an HTTP endpoint](https://docs.microsoft.com/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp#secure-an-http-endpoint-in-production)
+- [Secure an HTTP endpoint](./functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production)
-- [How to obtain and regenerate authorization keys](https://docs.microsoft.com/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp#obtaining-keys)
+- [How to obtain and regenerate authorization keys](./functions-bindings-http-webhook-trigger.md?tabs=csharp#obtaining-keys)
- [Deployment technologies in Azure Functions](functions-deployment-technologies.md)
Additional information is available at the referenced links.
**Guidance**: Wherever possible, use Azure Active Directory (Azure AD) SSO rather than configuring individual stand-alone credentials for data access to your function app. Use Azure Security Center Identity and Access Management recommendations. Implement single sign-on for your function apps using the App Service Authentication / Authorization feature. -- [Understand authentication and authorization in Azure Functions](https://docs.microsoft.com/azure/app-service/overview-authentication-authorization#identity-providers)
+- [Understand authentication and authorization in Azure Functions](../app-service/overview-authentication-authorization.md#identity-providers)
- [Understand SSO with Azure AD](../active-directory/manage-apps/what-is-single-sign-on.md)
In addition, use Azure AD risk detections to view alerts and reports on risky us
**Guidance**: Azure Active Directory (Azure AD) provides logs to help you discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right Users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
You can streamline this process by creating diagnostic settings for Azure AD use
- [How to configure your function app to use Azure AD login](../app-service/configure-authentication-provider-aad.md) -- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- [How to on-board Azure Sentinel](../sentinel/quickstart-onboard.md)
You may also use Private Endpoints to perform network isolation. An Azure Privat
**Guidance**: In the Azure portal for your function apps, under "Platform Features: Networking: SSL", enable the "HTTPs Only" setting and set the minimum TLS version to 1.2. -- [Require HTTPS on function apps](https://docs.microsoft.com/azure/azure-functions/security-concepts#require-https)
+- [Require HTTPS on function apps](./security-concepts.md#require-https)
**Responsibility**: Customer
Microsoft manages the underlying infrastructure for Azure Functions and has impl
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production function apps as well as other critical or related resources. -- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
Microsoft manages the underlying infrastructure for Azure Functions and has impl
In addition, follow recommendations from Azure Security Center to help secure your function apps. -- [How to add continuous security validation to your CI/CD pipeline](https://docs.microsoft.com/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops&amp;preserve-view=true)
+- [How to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline?preserve-view=true&view=azure-devops)
- [How to implement Azure Security Center vulnerability assessment recommendations](../security-center/deploy-vulnerability-assessment-vm.md)
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Additional information is available at the referenced links.
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Additional information is available at the referenced links.
Additional information is available at the referenced links. -- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
Additional information is available at the referenced links.
- [Design policy as code workflows](../governance/policy/concepts/policy-as-code.md) -- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Also make use of a source control solution such as Azure Repos and Azure DevOps
- [Back up your app in Azure](../app-service/manage-backup.md) -- [Understand data availability in Azure DevOps](https://docs.microsoft.com/azure/devops/organizations/security/data-protection?view=azure-devops#data-availability&amp;preserve-view=true)
+- [Understand data availability in Azure DevOps](/azure/devops/organizations/security/data-protection?preserve-view=true&view=azure-devops#data-availability)
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Also make use of a source control solution such as Azure Repos and Azure DevOps
- [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey) -- [Understand data availability in Azure DevOps](https://docs.microsoft.com/azure/devops/organizations/security/data-protection?view=azure-devops#data-availability&amp;preserve-view=true)
+- [Understand data availability in Azure DevOps](/azure/devops/organizations/security/data-protection?preserve-view=true&view=azure-devops#data-availability)
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Also make use of a source control solution such as Azure Repos and Azure DevOps
- [Restore an app in Azure from a snapshot](../app-service/app-service-web-restore-snapshots.md) -- [How to restore key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore key vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey?preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux.md
Starting with versions released after August 2018, we are making the following c
Starting from Agent version 1.13.27, the Linux Agent will support both Python 2 and 3. We always recommend using the latest agent.
-If you are using an older version of the agent, you must have the Virtual Machine use python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default then you must install it. The following sample commands will install Python 2 on different distros.
+If you are using an older version of the agent, you must have the Virtual Machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default then you must install it. The following sample commands will install Python 2 on different distros.
- Red Hat, CentOS, Oracle: `yum install -y python2`
- Ubuntu, Debian: `apt-get install -y python2`
The following table highlights the packages required for [supported Linux distro
|Glibc | GNU C Library | 2.5-12 |
|Openssl | OpenSSL Libraries | 1.0.x or 1.1.x |
|Curl | cURL web client | 7.15.5 |
-|Python | | 2.6+ or 3.3+
+|Python | | 2.7 or 3.6+
|Python-ctypes | | |
|PAM | Pluggable Authentication Modules | |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
Use Azure diagnostic extension if you need to:
- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md). - Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only). - Send data to third-party tools using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).-- Collect [Boot Diagnostics](../../virtual-machines/troubleshooting/boot-diagnostics.md) to investigate VM boot issues.
+- Collect [Boot Diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
Limitations of Azure diagnostics extension include:
Get more details on each of the agents at the following:
- [Overview of the Log Analytics agent](./log-analytics-agent.md) - [Azure Diagnostics extension overview](./diagnostics-extension-overview.md)-- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
+- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
The following table shows examples for filtering events using a custom XPath.
| Description | XPath |
|:---|:---|
| Collect only System events with Event ID = 4648 | `System!*[System[EventID=4648]]` |
-| Collect only System events with Event ID = 4648 and a process name of consent.exe | `System!*[System[(EventID=4648) and (EventData[@Name='ProcessName']='C:\Windows\System32\consent.exe')]]`
+| Collect only System events with Event ID = 4648 and a process name of consent.exe | `Security!*[System[(EventID=4648)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]` |
| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
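To sanity-check one of these expressions locally before putting it in a data collection rule, a small Windows-only sketch using .NET's event log reader can replay the query against the live log (this is a local test aid, not part of the agent):

```csharp
using System;
using System.Diagnostics.Eventing.Reader;

class XPathCheck
{
    static void Main()
    {
        // A rule filter like "System!*[System[EventID=4648]]" splits at "!"
        // into the channel name and the XPath query.
        var query = new EventLogQuery("System", PathType.LogName,
            "*[System[EventID=4648]]");

        using (var reader = new EventLogReader(query))
        {
            // Print every event the XPath matches in the chosen channel.
            for (var record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
            {
                Console.WriteLine($"{record.TimeCreated}: event {record.Id}");
                record.Dispose();
            }
        }
    }
}
```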
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Once matching events are generated, you should start to see the ETW events appea
### Step 4: Configure Log Analytics storage account collection
-Follow [these instructions](https://docs.microsoft.com/azure/azure-monitor/essentials/diagnostics-extension-logs#collect-logs-from-azure-storage) to collect the logs from Azure Storage. Once configured, the ETW event data should appear in Log Analytics under the **ETWEvent** table.
+Follow [these instructions](./diagnostics-extension-logs.md#collect-logs-from-azure-storage) to collect the logs from Azure Storage. Once configured, the ETW event data should appear in Log Analytics under the **ETWEvent** table.
## Next steps - Use [custom fields](../logs/custom-fields.md) to create structure in your ETW events-- Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
+- Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-action-rules.md
Last updated 03/15/2021
# Action rules (preview)
-Action rules help you define or suppress actions at any Azure Resource Manager scope (Azure subscription, resource group, or target resource). They have various filters that help you narrow down the specific subset of alert instances that you want to act on.
+Action rules let you add or suppress action groups on your fired alerts. A single rule can cover different scopes of target resources: for example, any alert on a specific resource (such as a particular virtual machine), or any alert fired on any resource in a subscription. You can optionally add filters to control which alerts a rule covers, and define a schedule for the rule, for example so that it's in effect only outside business hours or during a planned maintenance window.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4rBZ2]
Although alert rules help you define the action group that triggers when the ale
Action rules help you simplify this process. By defining actions at scale, an action group can be triggered for any alert that's generated on the configured scope. In the previous example, the team can define one action rule on **ContosoRG** that will trigger the same action group for all alerts generated within it.

> [!NOTE]
-> Action rules currently don't apply to Azure Service Health alerts.
+> Action rules do not apply to Azure Service Health alerts.
## Configuring an action rule
In the [alerts list page](./alerts-managing-alert-instances.md), you can choose
Suppression always takes precedence on the same scope.
-### What happens if I have a resource that's monitored in two separate action rules? Do I get one or two notifications? For example, **VM2** in the following scenario:
+### What happens if I have a resource that is covered by two action rules? Do I get one or two notifications? For example, **VM2** in the following scenario:
`action rule AR1 defined for VM1 and VM2 with action group AG1`
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
You can also set the sampling percentage using the environment variable `APPLICA
> [!NOTE] > For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
+## Sampling overrides (preview)
+
+This feature is in preview, starting from 3.0.3-BETA.2.
+
+Sampling overrides allow you to override the [default sampling percentage](#sampling), for example:
+* Set the sampling percentage to 0 (or some small value) for noisy health checks.
+* Set the sampling percentage to 0 (or some small value) for noisy dependency calls.
+* Set the sampling percentage to 100 for an important request type (e.g. `/login`)
+ even though you have the default sampling configured to something lower.
+
+For more information, check out the [sampling overrides](./java-standalone-sampling-overrides.md) documentation.
+
## JMX metrics

If you want to collect some additional JMX metrics:
This feature is in preview.
It allows you to configure rules that will be applied to request, dependency and trace telemetry, for example:
* Mask sensitive data
* Conditionally add custom dimensions
- * Update the telemetry name used for aggregation and display
+ * Update the span name, which is used to aggregate similar telemetry in the Azure portal.
+ * Drop specific span attributes to control ingestion costs.
For more information, check out the [telemetry processor](./java-standalone-telemetry-processors.md) documentation.
+> [!NOTE]
+> If you are looking to drop specific (whole) spans for controlling ingestion cost,
+> see [sampling overrides](./java-standalone-sampling-overrides.md).
+ ## Auto-collected logging Log4j, Logback, and java.util.logging are auto-instrumented, and logging performed via these logging frameworks
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-sampling-overrides.md
# Sampling overrides (preview) - Azure Monitor Application Insights for Java

> [!NOTE]
-> The sampling overrides feature is in preview.
+> The sampling overrides feature is in preview, starting from 3.0.3-BETA.2.
-Here are some use cases for sampling overrides:
- * Suppress collecting telemetry for health checks.
- * Suppress collecting telemetry for noisy dependency calls.
- * Reduce the noise from health checks or noisy dependency calls without suppressing them completely.
- * Collect 100% of telemetry for an important request type (e.g. `/login`) even though you have default sampling
- configured to something lower.
+Sampling overrides allow you to override the [default sampling percentage](./java-standalone-config.md#sampling),
+for example:
+ * Set the sampling percentage to 0 (or some small value) for noisy health checks.
+ * Set the sampling percentage to 0 (or some small value) for noisy dependency calls.
+ * Set the sampling percentage to 100 for an important request type (e.g. `/login`)
+ even though you have the default sampling configured to something lower.
## Terminology
Only the first sampling override that matches is used.
If no sampling overrides match:
-* If this is the first span in the trace, then the [normal sampling percentage](./java-standalone-config.md#sampling)
+* If this is the first span in the trace, then the [default sampling percentage](./java-standalone-config.md#sampling)
is used. * If this is not the first span in the trace, then the parent sampling decision is used.
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-telemetry-processors.md
The Java 3.0 agent for Application Insights can process telemetry data before the data is exported. Here are some use cases for telemetry processors:
- * Create sensitive data.
+ * Mask sensitive data.
* Conditionally add custom dimensions.
* Update the span name, which is used to aggregate similar telemetry in the Azure portal.
- * Drop span attributes to control ingestion costs.
+ * Drop specific span attribute(s) to control ingestion costs.
+
+> [!NOTE]
+> If you are looking to drop specific (whole) spans for controlling ingestion cost,
+> see [sampling overrides](./java-standalone-sampling-overrides.md).
## Terminology
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/opencensus-python.md
Azure Monitor supports distributed tracing, metric collection, and logging of Py
## Prerequisites
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The SDK only supports Python v2.7 and v3.4-v3.7.
+- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The SDK only supports Python versions 2.7 and 3.6+.
- Create an Application Insights [resource](./create-new-resource.md). You'll be assigned your own instrumentation key (ikey) for your resource.
## Instrument with OpenCensus Python SDK for Azure Monitor
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-troubleshooting.md
### Make sure you're using the appropriate Profiler Endpoint
-Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
|App Setting | US Government Cloud | China Cloud |
|---|---|---|
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler.md
You can set these values using [Azure Resource Manager Templates](./azure-web-ap
## Enable Profiler for other clouds
-Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
|App Setting | US Government Cloud | China Cloud |
|---|---|---|
Profiler's files can be deleted when using WebDeploy to deploy changes to your w
[Enablement UI]: ./media/profiler/Enablement_UI.png
[profiler-app-setting]:./media/profiler/profiler-app-setting.png
-[disable-profiler-webjob]: ./media/profiler/disable-profiler-webjob.png
-
+[disable-profiler-webjob]: ./media/profiler/disable-profiler-webjob.png
azure-monitor Snapshot Debugger Appservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-appservice.md
Once you've deployed an app, follow the steps below to enable the snapshot debug
## Enable Snapshot Debugger for other clouds
-Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide) through the Application Insights Connection String.
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide) through the Application Insights Connection String.
|Connection String Property | US Government Cloud | China Cloud |
|---|---|---|
|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
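For example, a hedged sketch of an App Service application setting applying the US Government cloud override (the instrumentation key below is a placeholder):

```json
{
  "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=00000000-0000-0000-0000-000000000000;SnapshotEndpoint=https://snapshot.monitor.azure.us"
}
```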
-For more information about other connection overrides, see [Application Insights documentation](https://docs.microsoft.com/azure/azure-monitor/app/sdk-connection-string?tabs=net#connection-string-with-explicit-endpoint-overrides).
+For more information about other connection overrides, see [Application Insights documentation](./sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
## Disable Snapshot Debugger
For an Azure App Service, you can set app settings within the Azure Resource Man
- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md?toc=/azure/azure-monitor/toc.json).
[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png
-[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
-
+[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-function-app.md
Host file
## Enable Snapshot Debugger for other clouds
-Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
```json
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
There can be many different reasons why snapshots aren't generated. You can star
## Make sure you're using the appropriate Snapshot Debugger Endpoint
-Currently the only regions that require endpoint modifications are [Azure Government](https://docs.microsoft.com/azure/azure-government/compare-azure-government-global-azure#application-insights) and [Azure China](https://docs.microsoft.com/azure/china/resources-developer-guide).
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
For App Service and applications using the Application Insights SDK, you have to
|Connection String Property | US Government Cloud | China Cloud |
|---|---|---|
|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
-For more information about other connection overrides, see [Application Insights documentation](https://docs.microsoft.com/azure/azure-monitor/app/sdk-connection-string?tabs=net#connection-string-with-explicit-endpoint-overrides).
+For more information about other connection overrides, see [Application Insights documentation](./sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
For Function App, you have to update the `host.json` using the supported overrides below:
azure-monitor Tutorial Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-users.md
To complete this tutorial:
- Download and install the [Visual Studio Snapshot Debugger](https://aka.ms/snapshotdebugger).
- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
- [Send telemetry from your application](../app/usage-overview.md#send-telemetry-from-your-app) for adding custom events/page views
-- Send [user context](../app/usage-send-user-context.md) to track what a user does over time and fully utilize the usage features.
+- Send [user context](./usage-overview.md) to track what a user does over time and fully utilize the usage features.
## Log in to Azure
Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
A **User flow** visualizes how users navigate between the pages and features of
Now that you've learned how to analyze your users, advance to the next tutorial to learn how to create custom dashboards that combine this information with other useful data about your application.
> [!div class="nextstepaction"]
-> [Create custom dashboards](./tutorial-app-dashboards.md)
-
+> [Create custom dashboards](./tutorial-app-dashboards.md)
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-funnels.md
The preceding screenshot includes five highlighted areas. These are features of
* [Users, Sessions, and Events](usage-segmentation.md)
* [Retention](usage-retention.md)
* [Workbooks](../visualize/workbooks-overview.md)
- * [Add user context](usage-send-user-context.md)
- * [Export to Power BI](./export-power-bi.md)
-
+ * [Add user context](./usage-overview.md)
+ * [Export to Power BI](./export-power-bi.md)
azure-monitor Usage Impact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-impact.md
How Impact is ultimately calculated varies based on whether we are analyzing by
- [Retention](usage-retention.md)
- [User Flows](usage-flows.md)
- [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](usage-send-user-context.md)
-
+ - [Add user context](./usage-overview.md)
azure-monitor Usage Retention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-retention.md
Or in ASP.NET server code:
- [Funnels](usage-funnels.md)
- [User Flows](usage-flows.md)
- [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](usage-send-user-context.md)
-
+ - [Add user context](./usage-overview.md)
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-segmentation.md
The **Meet your users** section shows information about five sample users matche
- [Retention](usage-retention.md)
- [User Flows](usage-flows.md)
- [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](usage-send-user-context.md)
-
+ - [Add user context](./usage-overview.md)
azure-monitor Usage Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-troubleshoot.md
All telemetry events in Application Insights have an [anonymous user ID](./data-
If you're monitoring a web app, the easiest solution is to add the [Application Insights JavaScript SDK](./javascript.md) to your app, and make sure the script snippet is loaded on each page you want to monitor. The JavaScript SDK automatically generates anonymous user and session IDs, then populates telemetry events with these IDs as they're sent from your app.
-If you're monitoring a web service (no user interface), [create a telemetry initializer that populates the anonymous user ID and session ID properties](usage-send-user-context.md) according to your service's notions of unique users and sessions.
+If you're monitoring a web service (no user interface), [create a telemetry initializer that populates the anonymous user ID and session ID properties](./usage-overview.md) according to your service's notions of unique users and sessions.
If your app is sending [authenticated user IDs](./api-custom-events-metrics.md#authenticated-users), you can count based on authenticated user IDs in the Users tool. In the "Show" dropdown, choose "Authenticated users."
If your app is sending too many custom event names, change the name in the code
* [User behavior analytics tools overview](usage-overview.md)
## Get help
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
-
+* [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Container insights automatically starts monitoring PV usage by collecting the fo
|Metric name |Metric Dimension (tags) | Metric Description |
|---|---|---|
| `pvUsedBytes`|podUID, podName, pvcName, pvcNamespace, capacityBytes, clusterId, clusterName|Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
-Learn more about configuring collected PV metrics [here](https://aka.ms/ci/pvconfig).
+Learn more about configuring collected PV metrics [here](./container-insights-agent-config.md).
## PV inventory
You can find an overview of persistent volume inventory in the **Persistent Volu
:::image type="content" source="./media/container-insights-persistent-volumes/pv-details-workbook-example.PNG" alt-text="Azure Monitor PV details workbook example":::
### Persistent Volume Usage Recommended Alert
-You can enable a recommended alert to alert you when average PV usage for a pod is above 80%. Learn more about alerting [here](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-metric-alerts) and how to override the default threshold [here](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-metric-alerts#configure-alertable-metrics-in-configmaps).
+You can enable a recommended alert to alert you when average PV usage for a pod is above 80%. Learn more about alerting [here](./container-insights-metric-alerts.md) and how to override the default threshold [here](./container-insights-metric-alerts.md#configure-alertable-metrics-in-configmaps).
## Next steps
-- Learn more about collected PV metrics [here](./container-insights-agent-config.md).
+- Learn more about collected PV metrics [here](./container-insights-agent-config.md).
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-charts.md
By clicking on the failure option, you will be led to a custom failure blade tha
### Common problems with Drill into Logs
-* Log and queries are disabled - To view recommended logs and queries, you must route your diagnostic logs to Log Analytics. Read [this document](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostic-settings) to learn how to do this.
+* Log and queries are disabled - To view recommended logs and queries, you must route your diagnostic logs to Log Analytics. Read [this document](./diagnostic-settings.md) to learn how to do this.
* Activity logs are only provided - The Drill into Logs feature is only available for select resource providers. By default, activity logs are provided.
If you don't see any data on your chart, review the following troubleshooting in
## Next steps
-To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../app/tutorial-app-dashboards.md).
-
+To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../app/tutorial-app-dashboards.md).
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
Verify the user was created.
:::image type="content" source="media/sql-insights-enable/telegraf-user-database-verify.png" alt-text="Verify telegraf user script." lightbox="media/sql-insights-enable/telegraf-user-database-verify.png":::
### Azure SQL Managed Instance
-Log into your Azure SQL Managed Instance and use [SSMS](../../azure-sql/database/connect-query-ssms.md) or similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a password.
+Log into your Azure SQL Managed Instance and use [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md) or similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a password.
```
The Azure virtual machines have the following requirements.
> [!NOTE]
> The Standard_B2s (2 CPUs, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't allocate more than 100 connections to a single virtual machine.
-The virtual machines need to be placed in the same VNET as your SQL systems so they can make network connections to collect monitoring data. If use the monitoring virtual machine to monitor SQL running on Azure virtual machines or as an Azure Managed Instance, consider placing the monitoring virtual machine in an application security group or the same virtual network as those resources so that you don't need to provide a public network endpoint for monitoring the SQL server.
+Depending upon the network settings of your SQL resources, the virtual machines may need to be placed in the same virtual network as your SQL resources so they can make network connections to collect monitoring data.
## Configure network settings
Each type of SQL offers methods for your monitoring virtual machine to securely access SQL. The sections below cover the options based upon the type of SQL.
For access via the public endpoint, you would add a rule under the **Firewall se
:::image type="content" source="media/sql-insights-enable/firewall-settings.png" alt-text="Firewall settings." lightbox="media/sql-insights-enable/firewall-settings.png":::
-> [!NOTE]
-> SQL insights does not currently support Azure Private Endpoint for Azure SQL Database. We recommend using [Service Tags](https://docs.microsoft.com/azure/virtual-network/service-tags-overview) on your network security group or virtual network firewall settings that the [Azure Monitor agent supports](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview#networking).
### Azure SQL Managed Instances
To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOn
-## Profile created
-Select **Add monitoring virtual machine** to configure the virtual machine to collect data from your SQL deployments. Do not return to the **Overview** tab. In a few minutes, the Status column should change to say "Collecting", you should see data for the systems you have chosen to monitor.
+## Monitoring profile created
+
+Select **Add monitoring virtual machine** to configure the virtual machine to collect data from your SQL resources. Do not return to the **Overview** tab. In a few minutes, the Status column should change to read "Collecting", and you should see data for the SQL resources you have chosen to monitor.
If you do not see data, see [Troubleshooting SQL insights](sql-insights-troubleshoot.md) to identify the issue.
:::image type="content" source="media/sql-insights-enable/profile-created.png" alt-text="Profile created" lightbox="media/sql-insights-enable/profile-created.png":::
+> [!NOTE]
+> If you need to update your monitoring profile or the connection strings on your monitoring VMs, you may do so via the SQL insights **Manage profile** tab. Once your updates have been saved, the changes will be applied in approximately 5 minutes.
+
## Next steps
- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-ingestion-time.md
Ingestion time may vary for different resources under different circumstances. Y
| Step | Property or Function | Comments |
|:---|:---|:---|
| Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated-and-timestamp) <br>If the data source doesn't set this value, then it will be set to the same time as _TimeReceived. | |
-| Record received by Azure Monitor ingestion endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | |
-| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | |
+| Record received by Azure Monitor ingestion endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field is not optimized for mass processing and should not be used to filter large datasets. |
+| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | It is recommended to use ingestion_time() if there is a need to filter only records that were ingested in a certain time window. In such a case, it is recommended to also add a TimeGenerated filter with a larger range. |
### Ingestion latency delays
You can measure the latency of a specific record by comparing the result of the [ingestion_time()](/azure/kusto/query/ingestiontimefunction) function to the _TimeGenerated_ property. This data can be used with various aggregations to find how ingestion latency behaves. Examine some percentile of the ingestion time to get insights for large amounts of data.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
To change the Log Analytics pricing tier of your workspace,
3. After reviewing the estimated costs based on the last 31 days of usage, if you decide to change the pricing tier, click **Select**.
-You can also [set the pricing tier via Azure Resource Manager](../samples/resource-manager-workspace.md) using the `sku` parameter (`pricingTier` in the Azure Resource Manager template).
+You can also [set the pricing tier via Azure Resource Manager](./resource-manager-workspace.md) using the `sku` parameter (`pricingTier` in the Azure Resource Manager template).
## Legacy pricing tiers
To set the default retention for your workspace,
When the retention is lowered, there is a several day grace period before the data older than the new retention setting is removed.
-The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 180, 270, 365, 550 and 730 days. If another setting is required, that can be configured using [Azure Resource Manager](../samples/resource-manager-workspace.md) using the `retentionInDays` parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter (eliminating the several day grace period). This may be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.
+The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 180, 270, 365, 550 and 730 days. If another setting is required, it can be configured through [Azure Resource Manager](./resource-manager-workspace.md) using the `retentionInDays` parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter (eliminating the several day grace period). This may be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.
Workspaces with 30 days retention may actually retain data for 31 days. If it is imperative that data be kept for only 30 days, use Azure Resource Manager to set the retention to 30 days along with the `immediatePurgeDataOn30Days` parameter.
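As an illustration, a trimmed workspace resource in an Azure Resource Manager template (the name, location, and API version are assumptions) combining the `sku`, `retentionInDays`, and `immediatePurgeDataOn30Days` settings discussed above:

```json
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "apiVersion": "2020-08-01",
  "name": "myWorkspace",
  "location": "eastus",
  "properties": {
    "sku": { "name": "PerGB2018" },
    "retentionInDays": 30,
    "features": { "immediatePurgeDataOn30Days": true }
  }
}
```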
If the workspace has the Update Management solution installed, add the Update an
> [!TIP]
-> Use these `find` queries sparingly as scans across data types are [resource intensive](../log-query/query-optimization.md#query-performance-pane) to execute. If you do not need results **per computer** then query on the Usage data type (see below).
+> Use these `find` queries sparingly as scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you do not need results **per computer** then query on the Usage data type (see below).
## Understanding ingested data volume
find where TimeGenerated > ago(24h) project _IsBillable, Computer
```
> [!TIP]
-> Use these `find` queries sparingly as scans across data types are [resource intensive](../log-query/query-optimization.md#query-performance-pane) to execute. If you do not need results **per computer** then query on the Usage data type.
+> Use these `find` queries sparingly as scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you do not need results **per computer** then query on the Usage data type.
### Data volume by Azure resource, resource group, or subscription
You can also parse the `_ResourceId` more fully if needed as well using
```
> [!TIP]
-> Use these `find` queries sparingly as scans across data types are [resource intensive](../log-query/query-optimization.md#query-performance-pane) to execute. If you do not need results per subscription, resouce group or resource name, then query on the Usage data type.
+> Use these `find` queries sparingly as scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you do not need results per subscription, resource group, or resource name, then query on the Usage data type.
> [!WARNING]
> Some of the fields of the Usage data type, while still in the schema, have been deprecated and their values are no longer populated.
Some suggestions for reducing the volume of logs collected include:
| Source of high data volume | How to reduce data volume |
| -- | - |
-| Container Insights | [Configure Container Insights](../insights/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you required. |
+| Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you require. |
| Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier) <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)) <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10))<br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10))<br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10))<br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10))<br> - audit removable storage |
| Performance counters | Change [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection <br> - Reduce number of performance counters |
| Event logs | Change [event log configuration](../agents/data-sources-windows-events.md) to: <br> - Reduce the number of event logs collected <br> - Collect only required event levels. For example, do not collect *Information* level events |
Some suggestions for reducing the volume of logs collected include:
| AzureDiagnostics | Change [resource log collection](../essentials/diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources that send logs to Log Analytics <br> - Collect only required logs |
| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
| Application Insights | Review options for [managing Application Insights data volume](https://docs.microsoft.com/azure/azure-monitor/app/pricing#managing-your-data-volume) |
-| [SQL Analytics](https://docs.microsoft.com/azure/azure-monitor/insights/azure-sql) | Use [Set-AzSqlServerAudit](https://docs.microsoft.com/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
-| Azure Sentinel | Review any [Sentinel data sources](https://docs.microsoft.com/azure/sentinel/connect-data-sources) which you recently enabled as sources of additional data volume. |
+| [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
+| Azure Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) which you recently enabled as sources of additional data volume. |
### Getting nodes as billed in the Per Node pricing tier
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-storage.md
For the storage account to successfully connect to your private link, it must:
* Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, you should select the exception: "Allow trusted Microsoft services to access this storage account".
![Storage account trust MS services image](./media/private-storage/storage-trust.png)
* If your workspace handles traffic from other networks as well, you should configure the storage account to allow incoming traffic coming from the relevant networks/internet.
-* Coordinate TLS version between the agents and the storage account - It's recommended that you send data to Log Analytics using TLS 1.2 or higher. Review [platform-specific guidance](https://docs.microsoft.com/azure/azure-monitor/logs/data-security#sending-data-securely-using-tls-12), and if required [configure your agents to use TLS 1.2](https://docs.microsoft.com/azure/azure-monitor/agents/agent-windows#configure-agent-to-use-tls-12). If for some reason that's not possible, configure the storage account to accept TLS 1.0.
+* Coordinate TLS version between the agents and the storage account - It's recommended that you send data to Log Analytics using TLS 1.2 or higher. Review [platform-specific guidance](./data-security.md#sending-data-securely-using-tls-12), and if required [configure your agents to use TLS 1.2](../agents/agent-windows.md#configure-agent-to-use-tls-12). If for some reason that's not possible, configure the storage account to accept TLS 1.0.
### Using a customer-managed storage account for CMK data encryption
Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMK) to encrypt the data; however, Azure Storage also allows you to use CMK from Azure Key Vault to encrypt your storage data. You can either import your own keys into Azure Key Vault, or you can use the Azure Key Vault APIs to generate keys.
Storage accounts are charged by the volume of stored data, the type of the stora
## Next steps
- Learn about [using Azure Private Link to securely connect networks to Azure Monitor](private-link-security.md)
-- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
+- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/overview.md
Multiple APIs are available to read and write metrics and logs to and from Azure
## Next steps
Learn more about:
-* [Metrics and logs](https://docs.microsoft.com/azure/azure-monitor/data-platform#metrics) for the data collected by Azure Monitor.
+* [Metrics and logs](./data-platform.md#metrics) for the data collected by Azure Monitor.
* [Data sources](agents/data-sources.md) for how the different components of your application send telemetry. * [Log queries](logs/log-query-overview.md) for analyzing collected data.
-* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services.
+* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services.
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-monitor Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-baseline.md
Virtual network rules enable Azure Monitor to only accept communications that ar
Use the Log Analytics gateway to send data to a Log Analytics workspace in Azure Monitor on behalf of computers that cannot directly connect to the internet, removing the need for those computers to have internet connectivity.
-- [How to set up Private Link for Azure Monitor](/azure/azure-monitor/platform/private-link-security)
+- [How to set up Private Link for Azure Monitor](./logs/private-link-security.md)
-- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](/azure/azure-monitor/platform/gateway)
+- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](./agents/gateway.md)
**Responsibility**: Customer
Use Log Analytics gateway to send data to a Log Analytics workspace in Azure Mon
When using Azure Monitor with Private Link, you get access to network logging such as 'Data processed by the Private Endpoint (IN/OUT)'.
-- [Network requirements for Azure Monitor agents](/azure/azure-monitor/platform/log-analytics-agent#network-requirements)
+- [Network requirements for Azure Monitor agents](./agents/log-analytics-agent.md#network-requirements)
-- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](/azure/azure-monitor/platform/gateway)
+- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](./agents/gateway.md)
- [How to enable network security group flow logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md)
When using Azure Monitor with Private Link, you get access to network logging su
**Guidance**: Azure Monitor is part of the Azure core services and cannot be deployed as a service separately. Azure Monitor components, including the Azure Monitor Agent, and Application Insights SDK may be deployed with your resources, and this may impact the security posture of those resources.
-- [Network requirements for Azure Monitor agents](/azure/azure-monitor/platform/log-analytics-agent#network-requirements)
+- [Network requirements for Azure Monitor agents](./agents/log-analytics-agent.md#network-requirements)
-- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](/azure/azure-monitor/platform/gateway)
+- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](./agents/gateway.md)
-- [See getting started with Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview#get-started)
+- [See getting started with Application Insights](./app/app-insights-overview.md#get-started)
- [How to set up availability web tests](app/monitor-web-app-availability.md)
When using Azure Monitor with Private Link, you get access to network logging su
**Guidance**: Use the Azure Activity Log to monitor resource configurations and detect changes to your network resources related to Azure Monitor. Create alerts within Azure Monitor that will trigger when changes to those critical network resources take place.
-- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](./essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](./alerts/alerts-activity-log.md)
**Responsibility**: Customer
When using Azure Monitor with Private Link, you get access to network logging su
Alternatively, you may enable and on-board data to Azure Sentinel or a third-party SIEM.
-- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+- [How to collect platform logs and metrics with Azure Monitor](./essentials/diagnostic-settings.md)
-- [How to collect Azure Virtual Machine internal host logs with Azure Monitor](/azure/azure-monitor/learn/quick-collect-azurevm)
+- [How to collect Azure Virtual Machine internal host logs with Azure Monitor](./vm/quick-collect-azurevm.md)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
**Guidance**: Azure Monitor uses Activity logs; the Activity Log is automatically enabled and logs operations taken on Azure Monitor resources, such as who started the operation, when the operation occurred, the status of the operation, and other useful audit information.
-- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+- [How to collect platform logs and metrics with Azure Monitor](./essentials/diagnostic-settings.md)
-- [Understand logging and different log types in Azure](/azure/azure-monitor/platform/platform-logs-overview)
+- [Understand logging and different log types in Azure](./essentials/platform-logs-overview.md)
**Responsibility**: Customer
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
**Guidance**: In Azure Monitor, set your Log Analytics workspace retention period according to your organization's compliance regulations. Use Azure Storage Accounts for any long-term/archival storage of your logs.
-- [Change the data retention period in Log Analytics](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [Change the data retention period in Log Analytics](./logs/manage-cost-storage.md#change-the-data-retention-period)
-- [How to configure retention policy for Azure Storage account logs](/azure/storage/common/storage-monitor-storage-account#configure-logging)
+- [How to configure retention policy for Azure Storage account logs](../storage/common/manage-storage-analytics-logs.md#configure-logging)
**Responsibility**: Customer
Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-- [Getting started with Log Analytics queries](/azure/azure-monitor/log-query/log-analytics-tutorial)
+- [Getting started with Log Analytics queries](./logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](./logs/get-started-queries.md)
**Responsibility**: Customer
Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
- [How to manage alerts in Azure Security Center](../security-center/security-center-managing-and-responding-alerts.md)
-- [How to alert on log analytics log data](/azure/azure-monitor/learn/tutorial-response)
+- [How to alert on log analytics log data](./alerts/tutorial-response.md)
**Responsibility**: Customer
Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
**Guidance**: Azure role-based access control (Azure RBAC) allows you to manage access to Azure resources through role assignments. You can assign these roles to users, groups, service principals, and managed identities. There are pre-defined built-in roles for certain resources, and these roles can be inventoried or queried through tools such as Azure CLI, Azure PowerShell or the Azure portal.
-- [How to get a directory role in Azure Active Directory (Azure AD) with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure Active Directory (Azure AD) with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?amp;preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?amp;preserve-view=true&view=azureadps-2.0)
**Responsibility**: Customer
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
-- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
**Guidance**: You have access to Azure Active Directory (Azure AD) Sign-in Activity, Audit and Risk Event log sources, which allow you to integrate with any SIEM/Monitoring tool. You can streamline this process by creating Diagnostic Settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics Workspace. You can configure desired Alerts within Log Analytics Workspace.
-- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
**Responsibility**: Customer
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-- [Manage access to log data and workspaces in Azure Monitor](/azure/azure-monitor/platform/manage-access)
+- [Manage access to log data and workspaces in Azure Monitor](./logs/manage-access.md)
**Responsibility**: Customer
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
Application Insights and Log Analytics both continue to allow TLS 1.1 and TLS 1.0 data to be ingested. Data may be restricted to TLS 1.2 by configuring on the client side.
-- [How to send data securely using TLS 1.2](/azure/azure-monitor/platform/data-security#sending-data-securely-using-tls-12)
+- [How to send data securely using TLS 1.2](./logs/data-security.md#sending-data-securely-using-tls-12)
**Responsibility**: Shared
For the underlying platform which is managed by Microsoft, Microsoft treats all
**Guidance**: Use Azure role-based access control (RBAC) to manage access to Azure Monitor.
-- [Roles, permissions, and security in Azure Monitor](/azure/azure-monitor/platform/roles-permissions-security)
+- [Roles, permissions, and security in Azure Monitor](./roles-permissions-security.md)
- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
For the underlying platform which is managed by Microsoft, Microsoft treats all
**Guidance**: Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). Azure Monitor also provides an option for encryption using your own key that is stored in your Azure Key Vault and accessed by storage using system-assigned managed identity authentication. This customer-managed key (CMK) can be either software or hardware-HSM protected.
-- [Azure Monitor customer-managed keys](/azure/azure-monitor/platform/customer-managed-keys)
+- [Azure Monitor customer-managed keys](./logs/customer-managed-keys.md)
-- [Log Analytics data security](/azure/azure-monitor/platform/data-security)
+- [Log Analytics data security](./logs/data-security.md)
- [Data collection, retention, and storage in Application Insights](app/data-retention-privacy.md)
For the underlying platform which is managed by Microsoft, Microsoft treats all
**Guidance**: Use Azure Monitor with the Azure Activity Log to create alerts for when changes take place in Azure Monitor and related resources.
-- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](./alerts/alerts-activity-log.md)
**Responsibility**: Customer
For the underlying platform which is managed by Microsoft, Microsoft treats all
**Guidance**: Use Azure CLI to query and discover Azure Monitor resources within your subscriptions. Ensure appropriate (read) permissions in your tenant and enumerate all Azure subscriptions as well as resources within your subscriptions.
-- [Azure Monitor CLI](https://docs.microsoft.com/cli/azure/monitor)
+- [Azure Monitor CLI](/cli/azure/monitor)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
-- [Roles, permissions, and security in Azure Monitor](/azure/azure-monitor/platform/roles-permissions-security)
+- [Roles, permissions, and security in Azure Monitor](./roles-permissions-security.md)
**Responsibility**: Customer
Use Azure Resource Graph to query for and discover resources within their subscr
**Guidance**: Reconcile inventory on a regular basis and ensure unauthorized Azure Monitor related resources are deleted from the subscription in a timely manner.
-- [Delete Azure Log Analytics workspace](/azure/azure-monitor/platform/delete-workspace)
+- [Delete Azure Log Analytics workspace](./logs/delete-workspace.md)
**Responsibility**: Customer
Use Azure Resource Graph to query for and discover resources within their subscr
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
You may also use recommendations from Azure Security Center as a secure configur
If using live streaming APM capabilities, make the channel secure with a secret API key in addition to the instrumentation key.
-- [Secure APM Live Metrics Stream](https://docs.microsoft.com/azure/azure-monitor/app/live-stream#secure-the-control-channel)
+- [Secure APM Live Metrics Stream](./app/live-stream.md#secure-the-control-channel)
-- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?amp;preserve-view=true&view=azps-4.8.0)
- [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)
If using live streaming APM capabilities, make the channel secure with a secret
**Guidance**: Use Azure DevOps to securely store and manage your code like custom Azure policies and Azure Resource Manager templates. To access the resources you manage in Azure DevOps, you can grant or deny permissions to specific users, built-in security groups, or groups defined in Azure Active Directory (Azure AD) if integrated with Azure DevOps, or Active Directory if integrated with TFS.
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?amp;preserve-view=true&view=azure-devops)
- [About permissions and groups in Azure DevOps](/azure/devops/organizations/security/about-permissions)
If using live streaming APM capabilities, make the channel secure with a secret
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [Azure Policy aliases](https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure#aliases)
+- [Azure Policy aliases](../governance/policy/concepts/definition-structure.md#aliases)
**Responsibility**: Customer
Use Azure Security Center's Threat detection for data services to detect malware
**Guidance**: Use Azure Resource Manager to export the Azure Monitor and related resources in a JavaScript Object Notation (JSON) template which can be used as backup for Azure Monitor and related configurations. Use Azure Automation to run the backup scripts automatically.
-- [Manage Log Analytics workspace using Azure Resource Manager templates](/azure/azure-monitor/samples/resource-manager-workspace)
+- [Manage Log Analytics workspace using Azure Resource Manager templates](./logs/resource-manager-workspace.md)
- [Single and multi-resource export to a template in Azure portal](../azure-resource-manager/templates/export-template-portal.md)
Use Azure Security Center's Threat detection for data services to detect malware
**Guidance**: Use Azure Resource Manager to export the Azure Monitor and related resources in a JavaScript Object Notation (JSON) template which can be used as backup for Azure Monitor and related configurations. Backup customer-managed keys within Azure Key Vault if Azure Monitor related resources are using customer-managed keys.
-- [Manage Log Analytics workspace using Azure Resource Manager templates](/azure/azure-monitor/platform/template-workspace-configuration)
+- [Manage Log Analytics workspace using Azure Resource Manager templates](./logs/resource-manager-workspace.md)
- [Single and multi-resource export to a template in Azure portal](../azure-resource-manager/templates/export-template-portal.md)
-- [How to backup key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/backup-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey?amp;preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
Use Azure Security Center's Threat detection for data services to detect malware
**Guidance**: Ensure the ability to periodically perform restoration using Azure Resource Manager backed template files. Test restoration of backed up customer-managed keys.
-- [Manage Log Analytics workspace using Azure Resource Manager templates](/azure/azure-monitor/samples/resource-manager-workspace)
+- [Manage Log Analytics workspace using Azure Resource Manager templates](./logs/resource-manager-workspace.md)
-- [How to restore key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore key vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey?amp;preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
Use Azure Security Center's Threat detection for data services to detect malware
Additionally, enable soft-delete and purge protection in Key Vault to protect keys against accidental or malicious deletion. If Azure Storage is used to store Azure Resource Manager template backups, enable soft delete to save and recover your data when blobs or blob snapshots are deleted.
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?amp;preserve-view=true&view=azure-devops)
- [About permissions and groups in Azure DevOps](/azure/devops/organizations/security/about-permissions)
Additionally, clearly mark subscriptions (for ex. production, non-prod) using ta
## Next steps
-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-monitor Vminsights Health Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-alerts.md
An [Azure alert](../alerts/alerts-overview.md) will be created for each virtual
If an alert is already in **Fired** state when the virtual machine state changes, then a second alert won't be created, but the severity of the same alert will be changed to match the state of the virtual machine. For example, if the virtual machine changes to **Critical** state when a **Warning** alert was already in **Fired** state, that alert's severity will be changed to **Sev1**. If the virtual machine changes to a **Warning** state when a **Sev1** alert was already in **Fired** state, that alert's severity will be changed to **Sev2**. If the virtual machine moves back to a **Healthy** state, then the alert will be resolved with severity changed to **Sev4**.
## Viewing alerts
-View alerts created by VM insights guest health with other [alerts in the Azure portal](../platform/alerts-overview.md#alerts-experience). You can select **Alerts** from the **Azure Monitor** menu to view alerts for all monitored resources, or select **Alerts** from a virtual machine's menu to view alerts for just that virtual machine.
+View alerts created by VM insights guest health with other [alerts in the Azure portal](../alerts/alerts-overview.md#alerts-experience). You can select **Alerts** from the **Azure Monitor** menu to view alerts for all monitored resources, or select **Alerts** from a virtual machine's menu to view alerts for just that virtual machine.
## Alert properties
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na ms.devlang: na Previously updated : 03/19/2021 Last updated : 03/24/2021
# Create and manage Active Directory connections for Azure NetApp Files
A subnet must be delegated to Azure NetApp Files.
[LDAP channel binding](https://support.microsoft.com/help/4034879/how-to-add-the-ldapenforcechannelbinding-registry-entry) configuration alone has no effect on the Azure NetApp Files service. However, if you use both LDAP channel binding and secure LDAP (for example, LDAPS or `start_tls`), then the SMB volume creation will fail.
+* For non-AD integrated DNS, you should add a DNS A/PTR record to enable Azure NetApp Files to function by using a "friendly name".
+
## Decide which Domain Services to use
Azure NetApp Files supports both [Active Directory Domain Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) (ADDS) and Azure Active Directory Domain Services (AADDS) for AD connections. Before you create an AD connection, you need to decide whether to use ADDS or AADDS.
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-dk-datasheet.md
Last updated 02/16/2021
|Included in Box |1x Azure Percept DK Carrier Board <br> 1x [Azure Percept Vision](./azure-percept-vision-datasheet.md) <br> 1x RGB Sensor (Camera) <br> 1x USB 3.0 Type C Cable <br> 1x DC Power Cable <br> 1x AC/DC Converter <br> 2x Wi-Fi Antennas |
|OS  |[CBL-Mariner](https://github.com/microsoft/CBL-Mariner) |
|Management Control Plane |Azure Device Update (ADU) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) |
-|Supported Software and Services |Azure Device Update <br> [Azure IoT](https://azure.microsoft.com/overview/iot/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) and [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1) <br> [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) <br> [Azure Mariner OS with Connectivity](https://github.com/microsoft/CBL-Mariner) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [TensorFlow](https://www.tensorflow.org/) <br> [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) <br> IoT Plug and Play <br> [Azure Device Provisioning Service (DPS)](https://docs.microsoft.com/azure/iot-dps/) <br> [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) <br> [Power BI](https://powerbi.microsoft.com/) |
+|Supported Software and Services |Azure Device Update <br> [Azure IoT](https://azure.microsoft.com/overview/iot/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) and [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1) <br> [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) <br> [Azure Mariner OS with Connectivity](https://github.com/microsoft/CBL-Mariner) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [TensorFlow](https://www.tensorflow.org/) <br> [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) <br> IoT Plug and Play <br> [Azure Device Provisioning Service (DPS)](../iot-dps/index.yml) <br> [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) <br> [Power BI](https://powerbi.microsoft.com/) |
|General Processor |NXP iMX8m (Azure Percept DK Carrier Board) | |AI Acceleration |1x Intel Movidius Myriad X Integrated ISP (Azure Percept Vision) | |Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50cm - infinity<br>FoV: 120 degrees diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
Last updated 02/16/2021
|Non-Operating Temperature |-40 to 85 degrees C | |Relative Humidity |10% to 95% | |Certification  |FCC <br> IC <br> RoHS <br> REACH <br> UL |
-|Power Supply |19VDC at 3.42A (65W) |
+|Power Supply |19VDC at 3.42A (65W) |
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/dev-tools-installer.md
The Dev Tools Pack Installer is a one-stop solution that installs and configures
## Mandatory Tools Installed * [Visual Studio Code](https://code.visualstudio.com/)
-* [Python 3.6 (Windows) or 3.5 (Linux)](https://www.python.org/)
+* [Python 3.6 or later](https://www.python.org/)
* [Docker 19.03](https://www.docker.com/) * [PIP3](https://pip.pypa.io/en/stable/user_guide/) * [TensorFlow 1.13](https://www.tensorflow.org/)
-* [Azure Machine Learning SDK 1.1](https://docs.microsoft.com/python/api/overview/azure/ml/)
+* [Azure Machine Learning SDK 1.1](/python/api/overview/azure/ml/)
## Optional Tools Available for Installation
If the installer notifies you to verify Docker Desktop is in a good running stat
## Next steps
-Check out the [Azure Percept advanced development repository](https://github.com/microsoft/azure-percept-advanced-development) to get started with advanced development for Azure Percept DK.
+Check out the [Azure Percept advanced development repository](https://github.com/microsoft/azure-percept-advanced-development) to get started with advanced development for Azure Percept DK.
azure-percept How To Capture Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-capture-images.md
All images will be accessible in [Custom Vision](https://www.customvision.ai/).
## Next steps
-[Test and retrain your no-code vision model](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model).
+[Test and retrain your no-code vision model](../cognitive-services/custom-vision-service/test-your-model.md).
azure-percept How To Manage Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-manage-voice-assistant.md
A keyword is a word or short phrase used to activate a voice assistant. For exam
With [Speech Studio](https://speech.microsoft.com/), you can create a custom keyword for your voice assistant. It takes up to 30 minutes to train a basic custom keyword model.
-Follow the [Speech Studio documentation](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-devices-sdk-create-kws) for guidance on creating a custom keyword. Once configured, your new keyword will be available in the Project Santa Cruz portal for use with your voice assistant application.
+Follow the [Speech Studio documentation](../cognitive-services/speech-service/custom-keyword-basics.md) for guidance on creating a custom keyword. Once configured, your new keyword will be available in the Project Santa Cruz portal for use with your voice assistant application.
## Commands configuration
Custom commands make it easy to build rich voice commanding apps optimized for v
With [Speech Studio](https://speech.microsoft.com/), you can create custom commands for your voice assistant to execute.
-Follow the [Speech Studio documentation](https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstart-custom-commands-application) for guidance on creating custom commands. Once configured, your new commands will be available in Azure Percept Studio for use with your voice assistant application.
+Follow the [Speech Studio documentation](../cognitive-services/speech-service/quickstart-custom-commands-application.md) for guidance on creating custom commands. Once configured, your new commands will be available in Azure Percept Studio for use with your voice assistant application.
## Next steps
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-over-the-air.md
Group Tag Requirements:
1. Add a Tag to your device(s). 1. From **IoT Edge** on the left navigation pane, find your Azure Percept DK and navigate to its **Device Twin**.
- 1. Add a new **Device Update for IoT Hub** tag value as shown below (Change ```<CustomTagValue>``` to your value, i.e. AzurePerceptGroup1). Learn more about device twin [JSON document tags](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-device-twins#device-twins).
+ 1. Add a new **Device Update for IoT Hub** tag value as shown below (Change ```<CustomTagValue>``` to your value, i.e. AzurePerceptGroup1). Learn more about device twin [JSON document tags](../iot-hub/iot-hub-devguide-device-twins.md#device-twins).
``` "tags": {
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/known-issues.md
If you encounter any of these issues, it is not necessary to open a bug. If you
| Device update | Users may get a message that the update failed, even if it succeeded. | Confirm the device updated by navigating to the Device Twin for the device in IoT Hub. This is fixed after the first update. | | Device update | Users may lose their Wi-Fi connection settings after their first update. | Run through on-boarding experience after updating to set up the Wi-Fi connection. This is fixed after the first update. | | Device update | After performing an OTA update, users can no longer log on via SSH using previously created user accounts, and new SSH users cannot be created through the on-boarding experience. This issue affects systems performing OTA updates from the following pre-installed image versions: 2020.110.114.105 and 2020.109.101.105. | To recover your user profiles, perform these steps after the OTA update: <br> [SSH into your devkit](./how-to-ssh-into-percept-dk.md) using "root" as the username. If you disabled the SSH "root" user login via on-boarding experience, you must re-enable it. Run this command after successfully connecting: <br> ```mkdir -p /var/custom-configs/home; chmod 755 /var/custom-configs/home``` <br> To recover previous user home data, run the following command: <br> ```mkdir -p /tmp/prev-rootfs && mount /dev/mmcblk0p3 /tmp/prev-rootfs && [ ! -L /tmp/prev-rootfs/home ] && cp -a /tmp/prev-rootfs/home/* /var/custom-configs/home/. && echo "User home migrated!"; umount /tmp/prev-rootfs``` |
-| Device update | After taking an OTA update, update groups are lost. | Update the device's tag by following [these instructions](https://docs.microsoft.com/azure/azure-percept/how-to-update-over-the-air#create-a-device-update-group). |
+| Device update | After taking an OTA update, update groups are lost. | Update the device's tag by following [these instructions](./how-to-update-over-the-air.md#create-a-device-update-group). |
| Dev Tools Pack Installer | Optional Caffe install may fail if Docker is not running properly on system. | Make sure Docker is installed and running, then retry Caffe installation. | | Dev Tools Pack Installer | Optional CUDA install fails on incompatible systems. | Verify system compatibility with CUDA prior to running installer. | | Docker, Network, IoT Edge | If your internal network uses 172.x.x.x, docker containers will fail to connect to edge. | Add a special bip section to the /etc/docker/daemon.json file like this: `{ "bip": "192.168.168.1/24"}` |
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-dk.md
Azure Percept DK is an edge AI development kit designed for developing vision an
## Next steps > [!div class="nextstepaction"]
-> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
+> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-percept-security.md
Azure Percept DK devices are designed with a hardware root of trust: additional
### Azure Percept DK
-Azure Percept DK includes a Trusted Platform Module (TPM) version 2.0 which can be utilized to connect the device to Azure Device Provisioning Services with additional security. TPM is an industry-wide, ISO standard from the Trusted Computing Group, and you can read more about TPM at the [complete TPM 2.0 spec](https://trustedcomputinggroup.org/resource/tpm-library-specification/) or the ISO/IEC 11889 spec. For more information on how DPS can provision devices in a secure manner see [Azure IoT Hub Device Provisioning Service - TPM Attestation](https://docs.microsoft.com/azure/iot-dps/concepts-tpm-attestation).
+Azure Percept DK includes a Trusted Platform Module (TPM) version 2.0, which can be utilized to connect the device to Azure Device Provisioning Services with additional security. TPM is an industry-wide, ISO standard from the Trusted Computing Group, and you can read more about TPM at the [complete TPM 2.0 spec](https://trustedcomputinggroup.org/resource/tpm-library-specification/) or the ISO/IEC 11889 spec. For more information on how DPS can provision devices in a secure manner, see [Azure IoT Hub Device Provisioning Service - TPM Attestation](../iot-dps/concepts-tpm-attestation.md).
### Azure Percept system on module (SOM)
Azure Percept devices use the hardware root of trust to secure firmware. The boot R
### IoT Edge
-Azure Percept DK connects to Azure Percept Studio with additional security and other Azure services utilizing Transport Layer Security (TLS) protocol. Azure Percept DK is an Azure IoT Edge enabled device. IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK utilizes Docker containers for isolating IoT Edge workloads from the host operating system and edge enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](https://docs.microsoft.com/azure/iot-edge/iot-edge-security-manager).
+Azure Percept DK connects to Azure Percept Studio and other Azure services with additional security, utilizing the Transport Layer Security (TLS) protocol. Azure Percept DK is an Azure IoT Edge enabled device. The IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK utilizes Docker containers for isolating IoT Edge workloads from the host operating system and edge-enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](../iot-edge/iot-edge-security-manager.md).
### Device Update for IoT Hub
This checklist is a starting point for firewall rules:
|*.auth.azureperceptdk.azure.net| 443| Azure DK SOM Authentication and Authorization| |*.auth.projectsantacruz.azure.net| 443| Azure DK SOM Authentication and Authorization|
-Additionally, review the list of [connections used by Azure IoT Edge](https://docs.microsoft.com/azure/iot-edge/production-checklist#allow-connections-from-iot-edge-devices).
+Additionally, review the list of [connections used by Azure IoT Edge](../iot-edge/production-checklist.md#allow-connections-from-iot-edge-devices).
<! ## Additional Recommendations for Deployment to Production
Azure Percept DK offers a great variety of security capabilities out of the box.
## Next steps
-Learn about the available [Azure Percept AI models](./overview-ai-models.md).
+Learn about the available [Azure Percept AI models](./overview-ai-models.md).
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
If you experience any issues during this process, refer to the [setup troublesho
- An Azure Percept DK (dev kit). - A Windows, Linux, or OS X based host computer with Wi-Fi capability and a web browser. - An Azure account with an active subscription. [Create an account for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- The Azure account must have the **owner** or **contributor** role within the subscription. Follow the steps below to check your Azure account role. For more information on Azure role definitions, check out the [Azure role-based access control documentation](https://docs.microsoft.com/azure/role-based-access-control/rbac-and-directory-admin-roles#azure-roles).
+- The Azure account must have the **owner** or **contributor** role within the subscription. Follow the steps below to check your Azure account role. For more information on Azure role definitions, check out the [Azure role-based access control documentation](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
> [!CAUTION] > If you have multiple Azure accounts, your browser may cache credentials from another account. To avoid confusion, it is recommended that you close all unused browser windows and log into the [Azure portal](https://portal.azure.com/) before starting the setup experience. See the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for additional information on how to ensure you are signed in with the correct account.
To verify if your Azure account is an "owner" or "contributor" within th
1. Click on the **Subscriptions** icon (it looks like a yellow key).
-1. Select your subscription from the list. If you do not see your subscription, make sure you are signed in with the correct Azure account. If you wish to create a new subscription, follow [these steps](https://docs.microsoft.com/azure/cost-management-billing/manage/create-subscription).
+1. Select your subscription from the list. If you do not see your subscription, make sure you are signed in with the correct Azure account. If you wish to create a new subscription, follow [these steps](../cost-management-billing/manage/create-subscription.md).
1. From the Subscription menu, select **Access control (IAM)**. 1. Click **View my access**.
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
scp [remote username]@[IP address]:[remote file path]/[file name].txt [local hos
```[local host file path]``` refers to the location on your host PC that you would like to copy the .txt file to. ```[remote username]``` is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md). If you did not set up an SSH login during the OOBE, your remote username is ```root```.
-For additional information on the Azure IoT Edge commands, see the [Azure IoT Edge device troubleshooting documentation](https://docs.microsoft.com/azure/iot-edge/troubleshoot).
+For additional information on the Azure IoT Edge commands, see the [Azure IoT Edge device troubleshooting documentation](../iot-edge/troubleshoot.md).
|Category: |Command: |Function: | ||-||
There are three small LEDs on top of the carrier board housing. A cloud icon is
|LED 2 (Wi-Fi) |Slow blink |Device is ready to be configured by Wi-Fi Easy Connect and is announcing its presence to a configurator. | |LED 2 (Wi-Fi) |Fast blink |Authentication was successful, device association in progress. | |LED 2 (Wi-Fi) |On (solid) |Authentication and association were successful; device is connected to a Wi-Fi network. |
-|LED 3 |NA |LED not in use. |
--
+|LED 3 |NA |LED not in use. |
azure-percept Tutorial No Code Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/tutorial-no-code-speech.md
In this tutorial, you will create a voice assistant from a template to use with your Azure Percept DK and Azure Percept Audio. The voice assistant demo runs within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and contains a selection of voice-controlled virtual objects. To control an object, say your keyword, which is a word or short phrase that wakes your device, followed by a command. Each template responds to a set of specific commands.
-This guide will walk you through the process of setting up your devices, creating a voice assistant and the necessary [Speech Services](https://docs.microsoft.com/azure/cognitive-services/speech-service/overview) resources, testing your voice assistant, configuring your keyword, and creating custom keywords.
+This guide will walk you through the process of setting up your devices, creating a voice assistant and the necessary [Speech Services](../cognitive-services/speech-service/overview.md) resources, testing your voice assistant, configuring your keyword, and creating custom keywords.
## Prerequisites
Once you create a custom command, you must go to [Speech Studio](https://speech.
:::image type="content" source="./media/tutorial-no-code-speech/speech-studio.png" alt-text="Screenshot of speech studio home screen.":::
-For more information on developing custom commands, please see the [Speech Service documentation](https://docs.microsoft.com/azure/cognitive-services/speech-service/custom-commands).
+For more information on developing custom commands, please see the [Speech Service documentation](../cognitive-services/speech-service/custom-commands.md).
## Troubleshooting
Once you are done working with your voice assistant application, follow these st
## Next Steps
-Now that you have created a no-code speech solution, try creating a [no-code vision solution](./tutorial-nocode-vision.md) for your Azure Percept DK.
+Now that you have created a no-code speech solution, try creating a [no-code vision solution](./tutorial-nocode-vision.md) for your Azure Percept DK.
azure-percept Tutorial Nocode Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/tutorial-nocode-vision.md
Before training your model, add labels to your images.
1. On the left-hand side of the **Custom Vision** page, click **Untagged** under **Tags** to view the images you just collected in the previous step. Select one or more of your untagged images.
-1. In the **Image Detail** window, click on the image to begin tagging. If you selected object detection as your project type, you must also draw a [bounding box](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector#upload-and-tag-images) around specific objects you would like to tag. Adjust the bounding box as needed. Type your object tag and click **+** to apply the tag. For example, if you were creating a vision solution that would notify you when a store shelf needs restocking, add the tag "Empty Shelf" to images of empty shelves, and add the tag "Full Shelf" to images of fully-stocked shelves. Repeat for all untagged images.
+1. In the **Image Detail** window, click on the image to begin tagging. If you selected object detection as your project type, you must also draw a [bounding box](../cognitive-services/custom-vision-service/get-started-build-detector.md#upload-and-tag-images) around specific objects you would like to tag. Adjust the bounding box as needed. Type your object tag and click **+** to apply the tag. For example, if you were creating a vision solution that would notify you when a store shelf needs restocking, add the tag "Empty Shelf" to images of empty shelves, and add the tag "Full Shelf" to images of fully-stocked shelves. Repeat for all untagged images.
:::image type="content" source="./media/tutorial-nocode-vision/image-tagging.png" alt-text="Image tagging screen in Custom Vision.":::
Before training your model, add labels to your images.
:::image type="content" source="./media/tutorial-nocode-vision/train-model.png" alt-text="Training image selection with train button highlighted.":::
-1. When the training has completed, your screen will show your model performance. For more information about evaluating these results, please see the [model evaluation documentation](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector#evaluate-the-detector). After training, you may also wish to [test your model](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model) on additional images and retrain as necessary. Each time you train your model, it will be saved as a new iteration. Reference the [Custom Vision documentation](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-improving-your-classifier) for additional information on how to improve model performance.
+1. When the training has completed, your screen will show your model performance. For more information about evaluating these results, please see the [model evaluation documentation](../cognitive-services/custom-vision-service/get-started-build-detector.md#evaluate-the-detector). After training, you may also wish to [test your model](../cognitive-services/custom-vision-service/test-your-model.md) on additional images and retrain as necessary. Each time you train your model, it will be saved as a new iteration. Reference the [Custom Vision documentation](../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md) for additional information on how to improve model performance.
:::image type="content" source="./media/tutorial-nocode-vision/iteration.png" alt-text="Model training results.":::
After closing this window, you may go back and edit your vision project anytime
## Improve your model by setting up retraining
-After you have trained your model and deployed it to the device, you can improve model performance by setting up retraining parameters to capture more training data. This feature is used to improve a trained model's performance by giving you the ability to capture images based on a probability range. For example, you can set your device to only capture training images when the probability is low. Here is some [additional guidance](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-improving-your-classifier) on adding more images and balancing training data.
+After you have trained your model and deployed it to the device, you can improve model performance by setting up retraining parameters to capture more training data. This feature is used to improve a trained model's performance by giving you the ability to capture images based on a probability range. For example, you can set your device to only capture training images when the probability is low. Here is some [additional guidance](../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md) on adding more images and balancing training data.
1. To set up retraining, go back to your **Project**, then to **Project Summary** 1. In the **Image capture** tab, select **Automatic image capture** and **Set up retraining**.
Next, check out the vision how-to articles for information on additional vision
<!-- Add links to how-to articles and oobe article.>
+-->
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Search | [Azure Cognitive Search](../../search/index.yml) | | Microsoft.Security | [Security Center](../../security-center/index.yml) | | Microsoft.SecurityInsights | [Azure Sentinel](../../sentinel/index.yml) |
-| Microsoft.SerialConsole - [registered](#registration) | [Azure Serial Console for Windows](../../virtual-machines/troubleshooting/serial-console-windows.md) |
+| Microsoft.SerialConsole - [registered](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) |
| Microsoft.ServiceBus | [Service Bus](/azure/service-bus/) | | Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) | | Microsoft.ServiceFabricMesh | [Service Fabric Mesh](../../service-fabric-mesh/index.yml) |
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/request-limits-and-throttling.md
The Microsoft.Network resource provider applies the following throttle limits:
### Compute throttling
-For information about throttling limits for compute operations, see [Troubleshooting API throttling errors - Compute](../../virtual-machines/troubleshooting/troubleshooting-throttling-errors.md).
+For information about throttling limits for compute operations, see [Troubleshooting API throttling errors - Compute](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors).
For checking virtual machine instances within a virtual machine scale set, use the [Virtual Machine Scale Sets operations](/rest/api/compute/virtualmachinescalesetvms). For example, use the [Virtual Machine Scale Set VMs - List](/rest/api/compute/virtualmachinescalesetvms/list) with parameters to check the power state of virtual machine instances. This API reduces the number of requests.
You can determine the number of remaining requests by examining response headers
| x-ms-ratelimit-remaining-tenant-resource-requests |Tenant scoped resource type requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service has overridden the default limit. Resource Manager adds this value instead of the tenant reads or writes. | | x-ms-ratelimit-remaining-tenant-resource-entities-read |Tenant scoped resource type collection requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service has overridden the default limit. |
-The resource provider can also return response headers with information about remaining requests. For information about response headers returned by the Compute resource provider, see [Call rate informational response headers](../../virtual-machines/troubleshooting/troubleshooting-throttling-errors.md#call-rate-informational-response-headers).
+The resource provider can also return response headers with information about remaining requests. For information about response headers returned by the Compute resource provider, see [Call rate informational response headers](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors#call-rate-informational-response-headers).
## Retrieving the header values
msrest.http_logger : 'x-ms-ratelimit-remaining-subscription-writes': '1199'
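To see these headers yourself, a small Python sketch (assuming the azure-identity and requests packages, and a placeholder subscription ID) can issue a read request against Azure Resource Manager and print the remaining-reads header:

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a bearer token for Azure Resource Manager.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

# Placeholder subscription ID; any subscription-level read request works.
subscription_id = "00000000-0000-0000-0000-000000000000"
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/resourcegroups?api-version=2021-04-01"
)

response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
# Remaining subscription-scoped read requests before throttling kicks in.
print(response.headers.get("x-ms-ratelimit-remaining-subscription-reads"))
```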
* For a complete PowerShell example, see [Check Resource Manager Limits for a Subscription](https://github.com/Microsoft/csa-misc-utils/tree/master/psh-GetArmLimitsViaAPI). * For more information about limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
-* To learn about handling asynchronous REST requests, see [Track asynchronous Azure operations](async-operations.md).
+* To learn about handling asynchronous REST requests, see [Track asynchronous Azure operations](async-operations.md).
azure-resource-manager Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md) -- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+- [How to collect platform logs and metrics with Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md)
-- [How to collect Azure Virtual Machine internal host logs with Azure Monitor](/azure/azure-monitor/learn/quick-collect-azurevm)
+- [How to collect Azure Virtual Machine internal host logs with Azure Monitor](../../azure-monitor/vm/quick-collect-azurevm.md)
- [How to get started with Azure Monitor and third-party SIEM integration](https://azure.microsoft.com/blog/use-azure-monitor-to-integrate-with-siem-tools/)
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Guidance**: Azure Resource Manager uses activity logs, which are automatically enabled, to include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. -- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+- [How to collect platform logs and metrics with Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md)
-- [Understand logging and different log types in Azure](/azure/azure-monitor/platform/platform-logs-overview)
+- [Understand logging and different log types in Azure](../../azure-monitor/essentials/platform-logs-overview.md)
**Responsibility**: Customer
Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md) -- [Getting started with Log Analytics queries](/azure/azure-monitor/log-query/log-analytics-tutorial)
+- [Getting started with Log Analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md)
**Responsibility**: Shared
Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
- [How to manage alerts in Azure Security Center](../../security-center/security-center-managing-and-responding-alerts.md) -- [How to alert on Log Analytics log data](/azure/azure-monitor/learn/tutorial-response)
+- [How to alert on Log Analytics log data](../../azure-monitor/alerts/tutorial-response.md)
**Responsibility**: Customer
Additionally, to help you keep track of dedicated administrative accounts, you c
You can also enable a Just-In-Time access by using Azure Active Directory (Azure AD) Privileged Identity Management and Azure Resource Manager. -- [Learn more about Privileged Identity Management](/azure/active-directory/privileged-identity-management/)
+- [Learn more about Privileged Identity Management](../../active-directory/privileged-identity-management/index.yml)
- [How to use Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
You can also enable a Just-In-Time access by using Azure Active Directory (Azure
**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure AD identity and access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../../active-directory/reports-monitoring/index.yml)
- [How to use Azure AD identity and access reviews](../../active-directory/governance/access-reviews-overview.md)
You can also enable a Just-In-Time access by using Azure Active Directory (Azure
You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired alerts within Log Analytics workspace. -- [How to integrate Azure activity logs with Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure activity logs with Azure Monitor](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
**Responsibility**: Customer
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: For server-side encryption at rest, Azure Resource Manager supports Microsoft-managed keys. -- [Understand data protection in Azure Resource Manager](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-resource-manager-security-controls#data-protection)
+- [Understand data protection in Azure Resource Manager](#data-protection)
**Responsibility**: Customer
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts when changes take place to critical Azure resources. -- [How to create alerts for Azure Activity log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity log events](../../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
Although classic Azure resources may be discovered via Azure Resource Graph Expl
**Guidance**: Use Policy Name, Description, and Category to logically organize assets according to a taxonomy. -- [For more information about tagging assets, see Resource naming and tagging decision guide](https://docs.microsoft.com/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json)
+- [For more information about tagging assets, see Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=%2fazure%2fazure-resource-manager%2fmanagement%2ftoc.json)
**Responsibility**: Customer
More related details are provided below,
- [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Additionally, as an administrator, you may need to lock a subscription, resource
- [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md) -- [How to use aliases](https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure#aliases)
+- [How to use aliases](../../governance/policy/concepts/definition-structure.md#aliases)
**Responsibility**: Customer
Implement Credential Scanner to identify credentials within code. Credential Sca
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../../security/benchmarks/security-baselines-overview.md)
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-management-group.md
For Azure Blueprints, use:
* [blueprintAssignments](/azure/templates/microsoft.blueprint/blueprintassignments) * [versions](/azure/templates/microsoft.blueprint/blueprints/versions)
-For Azure Policies, use:
+For Azure Policy, use:
* [policyAssignments](/azure/templates/microsoft.authorization/policyassignments) * [policyDefinitions](/azure/templates/microsoft.authorization/policydefinitions)
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-signalr Signalr Howto Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-howto-troubleshoot-guide.md
For ASP.NET SignalR, a known issue was fixed in SDK 1.6.0. Upgrade your SDK to n
## Thread pool starvation
-If your server is starving, that means no threads are working on message processing. All threads are hanging in a certain method.
+If your server is starving, that means no threads are working on message processing. All threads are not responding in a certain method.
Normally, this scenario is caused by async over sync or by `Task.Result`/`Task.Wait()` in async methods.
azure-signalr Signalr Quickstart Azure Functions Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-functions-python.md
Make sure you have a code editor such as [Visual Studio Code](https://code.visua
Install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (version 2.7.1505 or higher) to run Python Azure Function apps locally.
-Azure Functions requires [Python 3.6 or 3.7](https://www.python.org/downloads/).
+Azure Functions requires [Python 3.6+](https://www.python.org/downloads/) (see [Supported Python versions](/azure/azure-functions/functions-reference-python#python-version)).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
azure-sql Maintenance Window Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window-configure.md
Previously updated : 03/04/2021 Last updated : 03/23/2021 # Configure maintenance window (Preview) [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
The *System default* maintenance window is 5PM to 8AM daily (local time of the A
The ability to change to a different maintenance window is not available for every service level or in every region. For details on availability, see [Maintenance window availability](maintenance-window.md#availability). > [!Important]
-> Configuring maintenance window is a long running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except a short failover that happens at the end of the operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of failover you should perform the operation outside of the peak hours.
+> Configuring maintenance window is a long running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration you should perform the operation outside of the peak hours.
## Configure maintenance window during database creation
The following example creates a new managed instance and sets the maintenance wi
## Configure maintenance window for existing databases
-When applying a maintenance window selection to a database, a brief failover (several seconds) may be experienced in some cases as Azure applies the required changes.
+When applying a maintenance window selection to a database, a brief reconfiguration (several seconds) may be experienced in some cases as Azure applies the required changes.
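As a rough programmatic alternative to the portal and command-line steps below, the following Python sketch assigns a maintenance window to an existing database. It assumes the track 2 azure-mgmt-sql and azure-identity packages; the subscription ID, resource names, and the public maintenance configuration name (`SQL_<Region>_DB_1`) are placeholders and assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

# Placeholder identifiers for illustration only.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
SERVER = "my-sql-server"
DATABASE = "my-database"

client = SqlManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Reference a public maintenance configuration; the name format below
# (SQL_<Region>_DB_1) is an assumption based on common examples.
maintenance_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/providers/Microsoft.Maintenance"
    "/publicMaintenanceConfigurations/SQL_WestEurope_DB_1"
)

database = client.databases.get(RESOURCE_GROUP, SERVER, DATABASE)
database.maintenance_configuration_id = maintenance_id

# Long-running operation; a brief reconfiguration occurs at the end.
poller = client.databases.begin_create_or_update(RESOURCE_GROUP, SERVER, DATABASE, database)
poller.result()
```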
# [Portal](#tab/azure-portal)
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Previously updated : 03/11/2021 Last updated : 03/23/2021 # Maintenance window (Preview)
The maintenance window feature allows you to configure maintenance schedule for
## Overview
-Azure periodically performs [planned maintenance](planned-maintenance.md) of SQL Database and SQL managed instance resources. During Azure SQL maintenance event, databases are fully available but can be subject to short failovers within respective availability SLAs for [SQL Database](https://azure.microsoft.com/support/legal/sla/sql-database) and [SQL managed instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance), as resource reconfiguration is required in some cases.
+Azure periodically performs [planned maintenance](planned-maintenance.md) of SQL Database and SQL managed instance resources. During an Azure SQL maintenance event, databases are fully available but can be subject to short reconfigurations within the respective availability SLAs for [SQL Database](https://azure.microsoft.com/support/legal/sla/sql-database) and [SQL managed instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance).
-Maintenance window is intended for production workloads that are not resilient to database or instance failovers and cannot absorb short connection interruptions caused by planned maintenance events. By choosing a maintenance window you prefer, you can minimize the impact of planned maintenance as it will be occurring outside of your peak business hours. Resilient workloads and non-production workloads may rely on Azure SQL's default maintenance policy.
+Maintenance window is intended for production workloads that are not resilient to database or instance reconfigurations and cannot absorb short connection interruptions caused by planned maintenance events. By choosing a maintenance window you prefer, you can minimize the impact of planned maintenance as it will be occurring outside of your peak business hours. Resilient workloads and non-production workloads may rely on Azure SQL's default maintenance policy.
The maintenance window can be configured on creation or for existing Azure SQL resources. It can be configured using the Azure portal, PowerShell, CLI, or Azure API. > [!Important]
-> Configuring maintenance window is a long running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except a short failover that happens at the end of the operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of failover you should perform the operation outside of the peak hours.
+> Configuring maintenance window is a long running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration you should perform the operation outside of the peak hours.
### Gain more predictability with maintenance window
Choosing a maintenance window other than the default is currently available in t
To get the maximum benefit from maintenance windows, make sure your client applications are using the redirect connection policy. Redirect is the recommended connection policy, where clients establish connections directly to the node hosting the database, leading to reduced latency and improved throughput.
-* In Azure SQL Database, any connections using the proxy connection policy could be affected by both the chosen maintenance window and a gateway node maintenance window. However, client connections using the recommended redirect connection policy are unaffected by a gateway node maintenance failover.
+* In Azure SQL Database, any connections using the proxy connection policy could be affected by both the chosen maintenance window and a gateway node maintenance window. However, client connections using the recommended redirect connection policy are unaffected by a gateway node maintenance reconfiguration.
* In Azure SQL managed instance, the gateway nodes are hosted [within the virtual cluster](../../azure-sql/managed-instance/connectivity-architecture-overview.md#virtual-cluster-connectivity-architecture) and have the same maintenance window as the managed instance, but using the redirect connection policy is still recommended to minimize the number of disruptions during the maintenance event.
All instances hosted in a virtual cluster share the maintenance window. By defau
The expected duration of configuring a maintenance window on a managed instance can be calculated using the [estimated duration of instance management operations](/azure/azure-sql/managed-instance/management-operations-overview#duration). > [!Important]
-> A short failover happens at the end of the maintenance operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of failover you should schedule the operation outside of the peak hours.
+> A short reconfiguration happens at the end of the maintenance operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration you should schedule the operation outside of the peak hours.
### IP address space requirements Each new virtual cluster in subnet requires additional IP addresses according to the [virtual cluster IP address allocation](/azure/azure-sql/managed-instance/vnet-subnet-determine-size#determine-subnet-size). Changing maintenance window for existing managed instance also requires [temporary additional IP capacity](/azure/azure-sql/managed-instance/vnet-subnet-determine-size#address-requirements-for-update-scenarios) as in scaling vCores scenario for corresponding service tier.
azure-sql Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/planned-maintenance.md
Previously updated : 1/21/2021 Last updated : 3/23/2021 # Plan for Azure maintenance events in Azure SQL Database and Azure SQL Managed Instance
Learn how to prepare for planned maintenance events on your database in Azure SQ
To keep Azure SQL Database and Azure SQL Managed Instance services secure, compliant, stable, and performant, updates are performed on the service components almost continuously. Thanks to the modern and robust service architecture and innovative technologies like [hot patching](https://aka.ms/azuresqlhotpatching), the majority of updates are fully transparent and have no impact on service availability. Still, a few types of updates cause short service interruptions and require special treatment.
-For each database, Azure SQL Database and Azure SQL Managed Instance maintain a quorum of database replicas where one replica is the primary. At all times, a primary replica must be online servicing, and at least one secondary replica must be healthy. During planned maintenance, members of the database quorum will go offline one at a time, with the intent that there is one responding primary replica and at least one secondary replica online to ensure no client downtime. When the primary replica needs to be brought offline, a reconfiguration/failover process will occur in which one secondary replica will become the new primary.
+For each database, Azure SQL Database and Azure SQL Managed Instance maintain a quorum of database replicas where one replica is the primary. At all times, a primary replica must be online servicing requests, and at least one secondary replica must be healthy. During planned maintenance, members of the database quorum will go offline one at a time, with the intent that there is one responding primary replica and at least one secondary replica online to ensure no client downtime. When the primary replica needs to be brought offline, a reconfiguration process will occur in which one secondary replica will become the new primary.
## What to expect during a planned maintenance event
-Maintenance event can produce single or multiple failovers, depending on the constellation of the primary and secondary replicas at the beginning of the maintenance event. On average, 1.7 failovers occur per planned maintenance event. Reconfigurations/failovers generally finish within 30 seconds. The average is eight seconds. If already connected, your application must reconnect to the new primary replica of your database. If a new connection is attempted while the database is undergoing a reconfiguration before the new primary replica is online, you get error 40613 (Database Unavailable): *"Database '{databasename}' on server '{servername}' is not currently available. Please retry the connection later."* If your database has a long-running query, this query will be interrupted during a reconfiguration and will need to be restarted.
+A maintenance event can produce single or multiple reconfigurations, depending on the constellation of the primary and secondary replicas at the beginning of the maintenance event. On average, 1.7 reconfigurations occur per planned maintenance event. Reconfigurations generally finish within 30 seconds. The average is eight seconds. If already connected, your application must reconnect to the new primary replica of your database. If a new connection is attempted while the database is undergoing a reconfiguration before the new primary replica is online, you get error 40613 (Database Unavailable): *"Database '{databasename}' on server '{servername}' is not currently available. Please retry the connection later."* If your database has a long-running query, this query will be interrupted during a reconfiguration and will need to be restarted.
## How to simulate a planned maintenance event
Ensuring that your client application is resilient to maintenance events prior t
## Retry logic
-Any client production application that connects to a cloud database service should implement a robust connection [retry logic](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors). This will help make failovers transparent to the end users, or at least minimize negative effects.
+Any client production application that connects to a cloud database service should implement a robust connection [retry logic](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors). This will help make reconfigurations transparent to the end users, or at least minimize negative effects.
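For instance, a minimal retry sketch in Python with pyodbc might look like the following; the connection string, retry budget, and the set of error codes treated as transient are assumptions for illustration:

```python
import time
import pyodbc

# Placeholder connection string -- substitute your server, database, and credentials.
CONN_STR = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net;Database=<your-db>;"
    "Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

# Error codes commonly treated as transient, including 40613 (database unavailable).
TRANSIENT_ERROR_CODES = ("40613", "40197", "40501")

def connect_with_retry(retries=5, base_delay=2.0):
    """Open a connection, retrying with exponential backoff on transient errors."""
    for attempt in range(retries):
        try:
            return pyodbc.connect(CONN_STR)
        except pyodbc.Error as exc:
            transient = any(code in str(exc) for code in TRANSIENT_ERROR_CODES)
            if not transient or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off before reconnecting
```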
## Resource health
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
Previously updated : 01/22/2021 Last updated : 03/23/2021 # Resource limits for elastic pools using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
You can set the service tier, compute size (service objective), and storage amou
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|1024|1024|1024|1024|1536| |Max log size (GB)|336|336|336|336|512|
-|TempDB max data size (GB)|333|333|333|333|333|
+|TempDB max data size (GB)|37|46|56|65|74|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS per pool <sup>2</sup>|2560|3200|3840|4480|5120|
You can set the service tier, compute size (service objective), and storage amou
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A|
|Max data size (GB)|1536|1536|1536|3072|3072|4096|
|Max log size (GB)|512|512|512|1024|1024|1024|
-|TempDB max data size (GB)|83.25|92.5|111|148|166.5|333|
+|TempDB max data size (GB)|83|93|111|148|167|333|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup>|5760|6400|7680|10240|11520|12800|
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
Previously updated : 01/22/2021 Last updated : 03/23/2021 # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|
|Max data size (GB)|1024|1024|1024|1024|1536|
|Max log size (GB)|336|336|336|336|512|
-|TempDB max data size (GB)|333|333|333|333|333|
+|TempDB max data size (GB)|37|46|56|65|74|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|2560|3200|3840|4480|5120|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A|
|Max data size (GB)|1536|1536|1536|3072|3072|4096|
|Max log size (GB)|512|512|512|1024|1024|1024|
-|TempDB max data size (GB)|83.25|92.5|111|148|166.5|333|
+|TempDB max data size (GB)|83|93|111|148|167|333|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|5760|6400|7680|10240|11520|12800|
azure-sql Scale Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scale-resources.md
Azure SQL Managed Instance allows you to scale as well:
Initiating a scale-up or scale-down action in any of the flavors restarts the database engine process and moves it to a different virtual machine if needed. Moving the database engine process to a new virtual machine is an **online process** during which you can continue using your existing Azure SQL Database service. Once the target database engine is fully initialized and ready to process queries, the connections will be [switched from source to target database engine](single-database-scale.md#impact).
+> [!NOTE]
+> It is not recommended to scale your managed instance if a long-running transaction, such as data import, data processing jobs, index rebuild, etc., is running, or if you have any active connection on the instance. To prevent the scaling from taking longer to complete than usual, scale the instance only after all long-running operations have completed.
+
> [!NOTE]
> You can expect a short connection break when the scale up/scale down process is finished. If you have implemented [Retry logic for standard transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors), you will not notice the failover.
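The note above recommends scaling only after long-running operations finish. One hedged way to check for them first, assuming a pyodbc connection to the instance and an arbitrary 60-second threshold:

```python
# Hypothetical pre-scale check: list user requests running longer than a
# threshold, via the sys.dm_exec_requests DMV. Threshold and names are examples.
import pyodbc

def long_running_requests(conn: pyodbc.Connection, min_seconds: int = 60):
    sql = """
        SELECT session_id, status, command,
               total_elapsed_time / 1000 AS elapsed_seconds
        FROM sys.dm_exec_requests
        WHERE session_id > 50                -- skip system sessions
          AND session_id <> @@SPID           -- skip this check itself
          AND total_elapsed_time > ? * 1000  -- DMV reports milliseconds
    """
    return conn.cursor().execute(sql, min_seconds).fetchall()

# Usage sketch: postpone the scale operation while anything is still running.
# conn = pyodbc.connect("...managed instance connection string...")
# if long_running_requests(conn):
#     print("Long-running requests detected; postpone scaling.")
```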
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
azure-sql Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/frequently-asked-questions-faq.md
Yes, you can purchase add-on storage, independently from compute, to some extent
**How can I optimize my storage performance in General Purpose service tier?**
-To optimize storage performance, see [Storage best practices in General Purpose](https://techcommunity.microsoft.com).
+To optimize storage performance, see [Storage best practices in General Purpose](https://techcommunity.microsoft.com/t5/datacat/storage-performance-best-practices-and-considerations-for-azure/ba-p/305525).
## Backup and restore
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
Last updated 03/19/2021
This migration guide teaches you to migrate your Microsoft Access databases to Azure SQL Database using the SQL Server Migration Assistant for Access.
-For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
To migrate your Access database to Azure SQL Database, you need:
-- to verify your source environment is supported.
+- To verify your source environment is supported.
- [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+- Connectivity and sufficient permissions to access both source and target.
+ ## Pre-migration
After you have met the prerequisites, you are ready to discover the topology of
### Assess
-Create an assessment using [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+Use SQL Server Migration Assistant (SSMA) for Access to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
-1. Open SQL Server Migration Assistant for Access.
-1. Select **File** and then choose **New Project**. Provide a name for your migration project.
+1. Open [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+1. Select **File** and then choose **New Project**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**:
![Choose New Project](./media/access-to-sql-database-guide/new-project.png)
-1. Select **Add Databases** and choose databases to be added to your new project.
+1. Select **Add Databases** and choose databases to be added to your new project:
![Choose Add databases](./media/access-to-sql-database-guide/add-databases.png)
-1. In **Access Metadata Explorer**, right-click the database and then choose **Create Report**.
+1. In **Access Metadata Explorer**, right-click the database and then choose **Create Report**. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
![Right-click the database and choose Create Report](./media/access-to-sql-database-guide/create-report.png)
-1. Review the sample assessment. For example:
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Access objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+
+ For example: `drive:\<username>\Documents\SSMAProjects\MyAccessMigration\report\report_<date>`
![Review the sample report assessment](./media/access-to-sql-database-guide/sample-assessment.png)
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
![Type Mappings](./media/access-to-sql-database-guide/type-mappings.png)
Validate the default data type mappings and change them based on requirements if
To convert database objects, follow these steps:
-1. Select **Connect to Azure SQL Database** and provide connection details.
+1. Select **Connect to Azure SQL Database**.
+ 1. Enter connection details to connect your database in Azure SQL Database.
+ 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
![Connect to Azure SQL Database](./media/access-to-sql-database-guide/connect-to-sqldb.png)
-1. Right-click the database in **Access Metadata Explorer** and choose **Convert schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your database.
+1. Right-click the database in **Access Metadata Explorer** and choose **Convert schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your database:
![Right-click the database and choose convert schema](./media/access-to-sql-database-guide/convert-schema.png)
+
- Compare converted queries to original queries:
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
- ![Converted queries can be compared with source code](./media/access-to-sql-database-guide/query-comparison.png)
+ ![Converted objects can be compared with source](./media/access-to-sql-database-guide/table-comparison.png)
- Compare converted objects to original objects:
+ Compare the converted Transact-SQL text to the original code and review the recommendations:
- ![Converted objects can be compared with source](./media/access-to-sql-database-guide/table-comparison.png)
+ ![Converted queries can be compared with source code](./media/access-to-sql-database-guide/query-comparison.png)
1. (Optional) To convert an individual object, right-click the object and choose **Convert schema**. Converted objects appear bold in the **Access Metadata Explorer**:
![Bold objects in metadata explorer have been converted](./media/access-to-sql-database-guide/converted-items.png)
1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
## Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migrating data is a bulk-load operation that moves rows of data into Azure SQL Database in transactions. The number of rows to be loaded into Azure SQL Database in each transaction is configured in the project settings.
-To migrate data by using SSMA for Access, follow these steps:
+To publish your schema and migrate the data by using SSMA for Access, follow these steps:
1. If you haven't already, select **Connect to Azure SQL Database** and provide connection details.
-1. Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database.
+1. Publish the schema: Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the converted schema to Azure SQL Database:
![Synchronize with Database](./media/access-to-sql-database-guide/synchronize-with-database.png)
To migrate data by using SSMA for Access, follow these steps:
![Review the synchronization with the database](./media/access-to-sql-database-guide/synchronize-with-database-review.png)
-1. Use **Access Metadata Explorer** to check boxes next to the items you want to migrate. If you want to migrate the entire database, check the box next to the database.
-1. Right-click the database or object you want to migrate, and choose **Migrate data**.
- To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box.
+1. Migrate the data: Right-click the database or object you want to migrate in **Access Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
![Migrate Data](./media/access-to-sql-database-guide/migrate-data.png)
- Review the migrated data:
+1. After migration completes, view the **Data Migration Report**:
![Migrate Data Review](./media/access-to-sql-database-guide/migrate-data-review.png)
-1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
![Validate in SSMA](./media/access-to-sql-database-guide/validate-data.png)
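Beyond eyeballing the data in SSMS, a scripted spot check can compare per-table row counts between the Access source and the SQL Database target. This is a hedged sketch rather than an SSMA feature: the file path, table names, and connection strings are assumptions, and the Access ODBC driver is Windows-only.

```python
# Hypothetical post-migration spot check: compare row counts between the
# Access source (.accdb) and the Azure SQL Database target.
import pyodbc

ACCESS_CONN = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\MyAccessDb.accdb;"  # assumed path
)
SQL_CONN = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:myserver.database.windows.net,1433;"
    "DATABASE=mydb;UID=myuser;PWD=mypassword;Encrypt=yes;"
)
TABLES = ["Customers", "Orders"]  # assumed table names

def row_count(conn_str: str, table: str) -> int:
    conn = pyodbc.connect(conn_str)
    try:
        return conn.cursor().execute(f"SELECT COUNT(*) FROM [{table}]").fetchone()[0]
    finally:
        conn.close()

for table in TABLES:
    source, target = row_count(ACCESS_CONN, table), row_count(SQL_CONN, table)
    print(f"{table}: source={source} target={target} "
          f"{'OK' if source == target else 'MISMATCH'}")
```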
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Title: "DB2 to SQL Database: Migration guide"
-description: This guide teaches you to migrate your DB2 databases to Azure SQL Database using SQL Server Migration Assistant for DB2 (SSMA for DB2).
+ Title: "Db2 to Azure SQL Database: Migration guide"
+description: This guide teaches you to migrate your Db2 databases to Azure SQL Database using SQL Server Migration Assistant for Db2 (SSMA for Db2).
Last updated 11/06/2020
-# Migration guide: DB2 to SQL Database
+# Migration guide: Db2 to Azure SQL Database
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your DB2 databases to Azure SQL Database using SQL Server Migration Assistant for DB2.
+This guide teaches you to migrate your Db2 databases to Azure SQL Database using SQL Server Migration Assistant for Db2.
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your DB2 database to SQL Database, you need:
+To migrate your Db2 database to SQL Database, you need:
+
+- To verify your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
+- To download [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
+- A target [Azure SQL Database](../../database/single-database-create-quickstart.md).
+- Connectivity and sufficient permissions to access both source and target.
-- to verify your source environment is supported.
-- to download [SQL Server Migration Assistant (SSMA) for DB2](https://www.microsoft.com/download/details.aspx?id=54254).
-- a target [Azure SQL Database](../../database/single-database-create-quickstart.md).
## Pre-migration
After you have met the prerequisites, you are ready to discover the topology of
### Assess and convert
-Create an assessment using SQL Server Migration Assistant (SSMA).
+Use SQL Server Migration Assistant (SSMA) for Db2 to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
-1. Open SQL Server Migration Assistant (SSMA) for DB2.
+1. Open [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**:
:::image type="content" source="media/db2-to-sql-database-guide/new-project.png" alt-text="Provide project details and select OK to save.":::
-1. Enter in values for the DB2 connection details on the **Connect to DB2** dialog box.
+1. Enter values for the Db2 connection details on the **Connect to Db2** dialog box:
- :::image type="content" source="media/db2-to-sql-database-guide/connect-to-db2.png" alt-text="Connect to your DB2 instance":::
+ :::image type="content" source="media/db2-to-sql-database-guide/connect-to-db2.png" alt-text="Connect to your Db2 instance":::
-1. Right-click the DB2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema.
+1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
:::image type="content" source="media/db2-to-sql-database-guide/create-report.png" alt-text="Right-click the schema and choose create report":::
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of DB2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- For example: `drive:\<username>\Documents\SSMAProjects\MyDB2Migration\report\report_<date>`.
+ For example: `drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date>`.
:::image type="content" source="media/db2-to-sql-database-guide/report.png" alt-text="Review the report to identify any errors or warnings":::
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
:::image type="content" source="media/db2-to-sql-database-guide/type-mapping.png" alt-text="Select the schema and then type-mapping":::
-1. You can change the type mapping for each table by selecting the table in the **DB2 Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata explorer**.
-### Schema conversion
+### Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
1. Select **Connect to Azure SQL Database**.
- 1. Enter connection details to connect your database in Azure SQL Database.
- 1. Choose your target SQL Database from the drop-down.
- 1. Select **Connect**.
+ 1. Enter connection details to connect your database in Azure SQL Database.
+ 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
:::image type="content" source="media/db2-to-sql-database-guide/connect-to-sql-database.png" alt-text="Fill in details to connect to the logical server in Azure":::
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
:::image type="content" source="media/db2-to-sql-database-guide/convert-schema.png" alt-text="Right-click the schema and choose convert schema":::
-1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations.
+1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations:
:::image type="content" source="media/db2-to-sql-database-guide/compare-review-schema-structure.png" alt-text="Compare and review the structure of the schema to identify potential problems and address them based on recommendations.":::
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
## Migrate
To publish your schema and migrate your data, follow these steps:
:::image type="content" source="media/db2-to-sql-database-guide/synchronize-with-database.png" alt-text="Right-click the database and choose synchronize with database":::
-1. Migrate the data: Right-click the schema from the **DB2 Metadata Explorer** and choose **Migrate Data**.
+1. Migrate the data: Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
:::image type="content" source="media/db2-to-sql-database-guide/migrate-data.png" alt-text="Right-click the schema and choose migrate data":::
-1. Provide connection details for both the DB2 and Azure SQL Database.
-1. View the **Data Migration report**.
+1. Provide connection details for both the Db2 and Azure SQL Database.
+1. After migration completes, view the **Data Migration Report**:
:::image type="content" source="media/db2-to-sql-database-guide/data-migration-report.png" alt-text="Review the data migration report":::
-1. Connect to your Azure SQL Database by using SQL Server Management Studio and validate the migration by reviewing the data and schema.
+1. Connect to your database in Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
:::image type="content" source="media/db2-to-sql-database-guide/compare-schema-in-ssms.png" alt-text="Compare the schema in SSMS":::
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|---|---|
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
-|[DB2 zOS data assets discovery and assessment package](https://github.com/Microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM DB2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM DB2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[DB2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a DB2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
+|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
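The Db2 LUW inventory asset listed above works by querying the Db2 system catalog. As a rough illustration of the same idea, this sketch counts objects per schema from `SYSCAT.TABLES` using the `ibm_db` driver; the connection values are placeholders, and the real asset's SQL is more thorough.

```python
# Hypothetical Db2 LUW inventory sketch: count tables/views per schema from
# the SYSCAT.TABLES catalog view. All connection values are placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=SAMPLE;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2user;PWD=db2password;",
    "", "",
)
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT TABSCHEMA, TYPE, COUNT(*) AS OBJECT_COUNT "
    "FROM SYSCAT.TABLES GROUP BY TABSCHEMA, TYPE ORDER BY TABSCHEMA",
)
row = ibm_db.fetch_assoc(stmt)
while row:
    # TYPE is 'T' for tables and 'V' for views, among other codes.
    print(row["TABSCHEMA"].strip(), row["TYPE"], row["OBJECT_COUNT"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```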
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
Last updated 03/19/2021
This guide teaches you to migrate your MySQL database to Azure SQL Database using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
-For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
To migrate your MySQL database to Azure SQL Database, you need:
-- to verify your source environment is supported. Currently, MySQL 5.6 and 5.7 is supported.
-- [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/confirmation.aspx?id=54257)
+- To verify your source environment is supported. Currently, MySQL 5.6 and 5.7 are supported.
+- [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/details.aspx?id=54257)
+- Connectivity and sufficient permissions to access both source and target.
## Pre-migration
After you have met the prerequisites, you are ready to discover the topology of
### Assess
-By using [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/confirmation.aspx?id=54257), you can review database objects and data, and assess databases for migration.
+Use SQL Server Migration Assistant (SSMA) for MySQL to review database objects and data, and assess databases for migration.
-To create an assessment, perform the following steps.
+To create an assessment, perform the following steps:
-1. Open SQL Server Migration Assistant for MySQL.
-1. Select **File** from the menu and then choose **New Project**. Provide the project name, a location to save your project. Choose **Azure SQL Database** as the migration target.
+1. Open [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/details.aspx?id=54257).
+1. Select **File** from the menu and then choose **New Project**.
+1. Provide the project name and a location to save your project. Choose **Azure SQL Database** as the migration target. Select **OK**:
![New Project](./media/mysql-to-sql-database-guide/new-project.png)
-1. Choose **Connect to MySQL** and provide connection details to connect your MySQL server.
+1. Choose **Connect to MySQL** and provide connection details to connect your MySQL server:
![Connect to MySQL](./media/mysql-to-sql-database-guide/connect-to-mysql.png)
-1. Right-click the MySQL schema in **MySQL Metadata Explorer** and choose **Create report**. Alternatively, you can select **Create report** from the top-line navigation bar.
+1. Right-click the MySQL schema in **MySQL Metadata Explorer** and choose **Create report**. Alternatively, you can select **Create report** from the top-line navigation bar:
![Create Report](./media/mysql-to-sql-database-guide/create-report.png)
-1. Review the HTML report for conversion statistics, as well as errors and warnings. Analyze it to understand conversion issues and resolutions.
-
- This report can also be accessed from the SSMA projects folder as selected in the first screen. From the example above locate the report.xml file from:
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of MySQL objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- `drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\`
+ For example: `drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\`
- and open it in Excel to get an inventory of MySQL objects and the effort required to perform schema conversions.
-
- ![Conversion Report](./media/mysql-to-sql-database-guide/conversion-report.png)
+ ![Conversion Report](./media/mysql-to-sql-database-guide/conversion-report.png)
### Validate data types
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
![Type Mappings](./media/mysql-to-sql-database-guide/type-mappings.png)
Validate the default data type mappings and change them based on requirements if
To convert the schema, follow these steps:
1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**.
-1. Choose **Connect to Azure SQL Database** from the top-line navigation bar and provide connection details. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
+1. Select **Connect to Azure SQL Database**.
+ 1. Enter connection details to connect your database in Azure SQL Database.
+ 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
![Connect to SQL](./media/mysql-to-sql-database-guide/connect-to-sqldb.png)
-1. Right-click the schema and choose **Convert schema**.
+1. Right-click the schema and choose **Convert schema**. Alternatively, you can choose **Convert schema** from the top-line navigation bar after choosing your database:
![Convert Schema](./media/mysql-to-sql-database-guide/convert-schema.png)
-1. After the schema is finished converting, compare the converted code to the original code to identify potential problems.
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
+
+ ![Converted objects can be compared with source](./media/mysql-to-sql-database-guide/table-comparison.png)
- Compare converted objects to original objects:
+ Compare the converted Transact-SQL text to the original code and review the recommendations:
- ![ Compare And Review object ](./media/mysql-to-sql-database-guide/table-comparison.png)
+ ![Converted queries can be compared with source code](./media/mysql-to-sql-database-guide/procedure-comparison.png)
- Compare converted procedures to original procedures:
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
- ![Compare And Review object code](./media/mysql-to-sql-database-guide/procedure-comparison.png)
## Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps: publishing the schema and migrating the data.
-To publish the schema and migrate the data, follow these steps:
+To publish your schema and migrate the data, follow these steps:
-1. Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database.
+1. Publish the schema: Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database:
![Synchronize with Database](./media/mysql-to-sql-database-guide/synchronize-database.png)
To publish the schema and migrate the data, follow these steps:
![Synchronize with Database Review](./media/mysql-to-sql-database-guide/synchronize-database-review.png)
-1. Right-click the MySQL schema from the **MySQL Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation.
+1. Migrate the data: Right-click the database or object you want to migrate in **MySQL Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
![Migrate data](./media/mysql-to-sql-database-guide/migrate-data.png)
To publish the schema and migrate the data, follow these steps:
![Data Migration Report](./media/mysql-to-sql-database-guide/data-migration-report.png)
-1. Validate the migration by reviewing the data and schema on Azure SQL Database by using SQL Server Management Studio (SSMS).
+1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
![Validate in SSMA](./media/mysql-to-sql-database-guide/validate-in-ssms.png)
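The migrate steps above move rows in batched transactions, with the rows-per-transaction count taken from the SSMA project settings. As a rough illustration of that batching pattern outside SSMA (all table and column names are hypothetical, and this is not how SSMA itself moves data):

```python
# Illustrative batched insert into Azure SQL Database: commit every
# BATCH_SIZE rows, mirroring SSMA's rows-per-transaction setting.
import pyodbc

BATCH_SIZE = 1000  # analogous to the batch size configured in project settings
INSERT_SQL = "INSERT INTO dbo.Orders (id, amount) VALUES (?, ?)"  # assumed table

def batched_load(conn: pyodbc.Connection, rows) -> None:
    conn.autocommit = False
    cursor = conn.cursor()
    cursor.fast_executemany = True  # use the ODBC array-binding fast path
    batch = []
    for row in rows:  # rows is any iterable of (id, amount) tuples
        batch.append(row)
        if len(batch) == BATCH_SIZE:
            cursor.executemany(INSERT_SQL, batch)
            conn.commit()  # one transaction per batch
            batch.clear()
    if batch:  # flush the final partial batch
        cursor.executemany(INSERT_SQL, batch)
        conn.commit()
```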
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
Title: "Oracle to SQL Database: Migration guide"
+ Title: "Oracle to Azure SQL Database: Migration guide"
description: This guide teaches you to migrate your Oracle schema to Azure SQL Database using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
Last updated 08/25/2020
This guide teaches you to migrate your Oracle schemas to Azure SQL Database using SQL Server Migration Assistant for Oracle.
-For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
After you have met the prerequisites, you are ready to discover the topology of
Use the SQL Server Migration Assistant (SSMA) for Oracle to review database objects and data, assess databases for migration, migrate database objects to Azure SQL Database, and then finally migrate data to the database.
To create an assessment, follow these steps:
1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**:
![New Project](./media/oracle-to-sql-database-guide/new-project.png)
-1. Select **Connect to Oracle**. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box.
+1. Select **Connect to Oracle**. Enter values for Oracle connection details on the **Connect to Oracle** dialog box:
![Connect to Oracle](./media/oracle-to-sql-database-guide/connect-to-oracle.png)
To create an assessment, follow these steps:
![Select Oracle schema](./media/oracle-to-sql-database-guide/select-schema.png)
-1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database:
![Create Report](./media/oracle-to-sql-database-guide/create-report.png)
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
![Type Mappings](./media/oracle-to-sql-database-guide/type-mappings.png)
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
1. Select **Connect to Azure SQL Database**.
    1. Enter connection details to connect your database in Azure SQL Database.
- 1. Choose your target SQL Database from the drop-down.
- 1. Select **Connect**.
- 1. Select **Connect**.
+ 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
![Connect to SQL Database](./media/oracle-to-sql-database-guide/connect-to-sql-database.png)
-1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
![Convert Schema](./media/oracle-to-sql-database-guide/convert-schema.png)
-1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations.
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
![Review recommendations schema](./media/oracle-to-sql-database-guide/table-mapping.png)
- Compare the converted Transact-SQL text to the original stored procedures and review the recommendations.
+ Compare the converted Transact-SQL text to the original stored procedures and review the recommendations:
![Review recommendations](./media/oracle-to-sql-database-guide/procedure-comparison.png)
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**.
+1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**:
![Synchronize with Database](./media/oracle-to-sql-database-guide/synchronize-with-database.png)
To publish your schema and migrate your data, follow these steps:
![Synchronize with Database review](./media/oracle-to-sql-database-guide/synchronize-with-database-review.png)
-1. Migrate the data: Right-click the schema from the **Oracle Metadata Explorer** and choose **Migrate Data**. Alternatively, you can choose **Migrate Data** from the top line navigation bar after selecting the schema.
+1. Migrate the data: Right-click the database or object you want to migrate in **Oracle Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
![Migrate Data](./media/oracle-to-sql-database-guide/migrate-data.png)
1. Provide connection details for both Oracle and Azure SQL Database.
-1. View the **Data Migration report**.
+1. After migration completes, view the **Data Migration Report**:
![Data Migration Report](./media/oracle-to-sql-database-guide/data-migration-report.png)
-1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
![Validate in SSMA](./media/oracle-to-sql-database-guide/validate-data.png)
Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
- [SQL Server Migration Assistant: How to assess and migrate data from non-Microsoft data platforms to SQL Server](https://blogs.msdn.microsoft.com/datamigration/2016/11/16/sql-server-migration-assistant-how-to-assess-and-migrate-databases-from-non-microsoft-data-platforms-to-sql-server/)
- [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services)
- [SQL Server Integration
azure-sql Sap Ase To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sap-ase-to-sql-database.md
Last updated 03/19/2021
This guide teaches you to migrate your SAP ASE databases to Azure SQL Database using SQL Server Migration Assistant for SAP Adaptive Server Enterprise.
-For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
To migrate your SAP ASE database to Azure SQL Database, you need:
- to verify your source environment is supported.
- [SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256).
+- Connectivity and sufficient permissions to access both source and target.
+ ## Pre-migration
To create an assessment, follow these steps:
1. Select **File** and then choose **New Project**.
1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
1. Enter values for SAP connection details on the **Connect to Sybase** dialog box.
-1. Right-click the SAP database you want to migrate, and then choose **Create report**. This generates an HTML report.
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of DB2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Right-click the SAP database you want to migrate, and then choose **Create report**. This generates an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of SAP ASE objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- For example: `drive:\<username>\Documents\SSMAProjects\MyDB2Migration\report\report_<date>`.
+ For example: `drive:\<username>\Documents\SSMAProjects\MySAPMigration\report\report_<date>`.
### Validate type mappings
To convert the schema, follow these steps:
After schema conversion you can save this project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to Azure SQL Database.
-To learn more, see [Convert schema](/sql/ssma/sybase/converting-sybase-ase-database-objects-sybasetosql)
-
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
## Migrate
After you have the necessary prerequisites in place and have completed the tasks associated with the **Pre-migration** stage, you are ready to perform the schema and data migration.
-To publish the schema and migrate the data, follow these steps:
+To publish your schema and migrate the data, follow these steps:
-1. Right-click the database in **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the SAP ASE schema to the Azure SQL Database instance.
-1. Right-click the SAP ASE schema in **SAP ASE Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar.
+1. Publish the schema: Right-click the database in **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the SAP ASE schema to the Azure SQL Database instance.
+1. Migrate the data: Right-click the database or object you want to migrate in **SAP ASE Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
1. After migration completes, view the **Data Migration Report**:
-1. Validate the migration by reviewing the data and schema on the Azure SQL Database instance by using Azure SQL Database Management Studio (SSMS).
+1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
## Post-migration
azure-sql Sql Server To Sql Database Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules.md
Title: "Assessment rules for SQL Server to SQL Database migration"
+ Title: "Assessment rules for SQL Server to Azure SQL Database migration"
description: Assessment rules to identify issues with the source SQL Server instance that must be addressed before migrating to Azure SQL Database.
Last updated 12/15/2020
-# Assessment rules for SQL Server to SQL Database migration
+# Assessment rules for SQL Server to Azure SQL Database migration
[!INCLUDE[appliesto--sqldb](../../includes/appliesto-sqldb.md)]
Migration tools validate your source SQL Server instance by running a number of assessment rules to identify issues that must be addressed before migrating your SQL Server database to Azure SQL Database.
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
Title: "SQL Server to SQL Database: Migration guide"
+ Title: "SQL Server to Azure SQL Database: Migration guide"
description: Follow this guide to migrate your SQL Server databases to Azure SQL Database.
Last updated 03/19/2021
-# Migration guide: SQL Server to SQL Database
+# Migration guide: SQL Server to Azure SQL Database
[!INCLUDE[appliesto--sqldb](../../includes/appliesto-sqldb.md)]
This guide helps you migrate your SQL Server instance to Azure SQL Database.
You can migrate SQL Server running on-premises or on:
- Compute Engine (Google Cloud Platform - GCP)
- Cloud SQL for SQL Server (Google Cloud Platform - GCP)
-For more migration information, see the [migration overview](sql-server-to-sql-database-overview.md). For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For more migration information, see the [migration overview](sql-server-to-sql-database-overview.md). For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
:::image type="content" source="media/sql-server-to-database-overview/migration-process-flow-small.png" alt-text="Migration process flow":::
For more migration information, see the [migration overview](sql-server-to-sql-d
To migrate your SQL Server to Azure SQL Database, make sure you have the following prerequisites:
-- A chosen [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and corresponding tools
-- [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) installed on a machine that can connect to your source SQL Server
-- A target [Azure SQL Database](../../database/single-database-create-quickstart.md)
+- A chosen [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and corresponding tools.
+- [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) installed on a machine that can connect to your source SQL Server.
+- A target [Azure SQL Database](../../database/single-database-create-quickstart.md).
+- Connectivity and proper permissions to access both source and target.
+ ## Pre-migration
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
Title: "SQL Server to SQL Database: Migration overview"
+ Title: "SQL Server to Azure SQL Database: Migration overview"
description: Learn about the different tools and options available to migrate your SQL Server databases to Azure SQL Database.
Last updated 11/06/2020
-# Migration overview: SQL Server to SQL Database
+# Migration overview: SQL Server to Azure SQL Database
[!INCLUDE[appliesto--sqldb](../../includes/appliesto-sqldb.md)]
Learn about different migration options and considerations to migrate your SQL Server to Azure SQL Database.
You can migrate SQL Server running on-premises or on:
- Compute Engine (Google Cloud Platform - GCP)
- Cloud SQL for SQL Server (Google Cloud Platform - GCP)
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Overview
Different tools are available for different workloads and user preferences. Some
## Choose appropriate target
-Consider general guidelines to help you choose the right deployment model and service tier of Azure SQL Database. You can choose compute and storage resources during deployment and then change them afterwards using the [Azure portal](../../database/scale-resources.md) without incurring downtime for your application.
+Consider general guidelines to help you choose the right deployment model and service tier of Azure SQL Database. You can choose compute and storage resources during deployment and then [change them afterwards using the Azure portal](../../database/scale-resources.md) without incurring downtime for your application.
**Deployment models**: Understand your application workload and the usage pattern to decide between a single database or elastic pool.
These resources were developed as part of the Data SQL Ninja Program, which is s
## Next steps
-To start migrating your SQL Server to Azure SQL Database, see the [SQL Server to SQL Database migration guide](sql-server-to-sql-database-guide.md).
+To start migrating your SQL Server to SQL Database, see the [SQL Server to Azure SQL Database migration guide](sql-server-to-sql-database-guide.md).
- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
Title: "DB2 to SQL Managed Instance: Migration guide"
-description: This guide teaches you to migrate your DB2 databases to Azure SQL Managed Instance using SQL Server Migration Assistant for DB2.
+ Title: "Db2 to Azure SQL Managed Instance: Migration guide"
+description: This guide teaches you to migrate your Db2 databases to Azure SQL Managed Instance using SQL Server Migration Assistant for Db2.
Last updated 11/06/2020
-# Migration guide: DB2 to SQL Managed Instance
+# Migration guide: Db2 to Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
-This guide teaches you to migrate your DB2 databases to Azure SQL Managed Instance using the SQL Server Migration Assistant for DB2.
+This guide teaches you to migrate your Db2 databases to Azure SQL Managed Instance using the SQL Server Migration Assistant for Db2.
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your DB2 database to SQL Managed Instance, you need:
+To migrate your Db2 database to SQL Managed Instance, you need:
+
+- to verify your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
+- to download [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
+- a target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md).
+- connectivity and sufficient permissions to access both source and target.
-- to verify your source environment is supported.
-- to download [SQL Server Migration Assistant (SSMA) for DB2](https://www.microsoft.com/download/details.aspx?id=54254).
-- a target [Azure SQL Managed Instance](../../database/single-database-create-quickstart.md).

## Pre-migration
After you have met the prerequisites, you are ready to discover the topology of
### Assess and convert
+
+Create an assessment using SQL Server Migration Assistant (SSMA).
To create an assessment, follow these steps:
-1. Open SQL Server Migration Assistant (SSMA) for DB2.
+1. Open SQL Server Migration Assistant (SSMA) for Db2.
1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**:
:::image type="content" source="media/db2-to-managed-instance-guide/new-project.png" alt-text="Provide project details and select OK to save.":::
-1. Enter in values for the DB2 connection details on the **Connect to DB2** dialog box.
+1. Enter values for the Db2 connection details in the **Connect to Db2** dialog box:
- :::image type="content" source="media/db2-to-managed-instance-guide/connect-to-db2.png" alt-text="Connect to your DB2 instance":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/connect-to-db2.png" alt-text="Connect to your Db2 instance":::
-1. Right-click the DB2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema.
+1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
:::image type="content" source="media/db2-to-managed-instance-guide/create-report.png" alt-text="Right-click the schema and choose create report":::
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of DB2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- For example: `drive:\<username>\Documents\SSMAProjects\MyDB2Migration\report\report_<date>`.
+ For example: `drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date>`.
:::image type="content" source="media/db2-to-managed-instance-guide/report.png" alt-text="Review the report to identify any errors or warnings":::
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
:::image type="content" source="media/db2-to-managed-instance-guide/type-mapping.png" alt-text="Select the schema and then type-mapping":::
-1. You can change the type mapping for each table by selecting the table in the **DB2 Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata explorer**.
### Schema conversion

To convert the schema, follow these steps:

1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
-1. Select **Connect to Azure SQL Database**.
- 1. Enter connection details to connect to your Azure SQL Managed Instance.
- 1. Choose your target database from the drop-down.
- 1. Select **Connect**.
+1. Select **Connect to Azure SQL Managed Instance**.
+ 1. Enter connection details to connect to your Azure SQL Managed Instance.
+ 1. Choose your target database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
:::image type="content" source="media/db2-to-managed-instance-guide/connect-to-sql-managed-instance.png" alt-text="Fill in details to connect to SQL Server":::
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
:::image type="content" source="media/db2-to-managed-instance-guide/convert-schema.png" alt-text="Right-click the schema and choose convert schema":::
-1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations.
+1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations:
:::image type="content" source="media/db2-to-managed-instance-guide/compare-review-schema-structure.png" alt-text="Compare and review the structure of the schema to identify potential problems and address them based on recommendations.":::
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Managed Instance.
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Managed Instance Metadata Explorer** and choose **Synchronize with Database**.
+1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Managed Instance Metadata Explorer** and choose **Synchronize with Database**:
:::image type="content" source="media/db2-to-managed-instance-guide/synchronize-with-database.png" alt-text="Right-click the database and choose synchronize with database":::
-1. Migrate the data: Right-click the schema from the **DB2 Metadata Explorer** and choose **Migrate Data**.
+1. Migrate the data: Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
:::image type="content" source="media/db2-to-managed-instance-guide/migrate-data.png" alt-text="Right-click the schema and choose migrate data":::
-1. Provide connection details for both DB2 and SQL Managed Instance.
-1. View the **Data Migration report**.
+1. Provide connection details for both Db2 and SQL Managed Instance.
+1. After migration completes, view the **Data Migration Report**:
:::image type="content" source="media/db2-to-managed-instance-guide/data-migration-report.png" alt-text="Review the data migration report":::
-1. Connect to SQL Managed Instance by using SQL Server Management Studio and validate the migration by reviewing the data and schema.
+1. Connect to your Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
:::image type="content" source="media/db2-to-managed-instance-guide/compare-schema-in-ssms.png" alt-text="Compare the schema in SSMS":::
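Beyond a visual check in SSMS, one way to spot-check the migrated data is to compare row counts between source and target. A sketch for the target side, assuming the `SqlServer` PowerShell module and placeholder connection details (the managed instance public endpoint listens on port 3342):

```powershell
# Row counts per table on the target managed instance, for comparison
# against the same counts taken on the Db2 source.
$query = @"
SELECT s.name AS SchemaName, t.name AS TableName, SUM(p.rows) AS TotalRows
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
JOIN sys.partitions AS p ON t.object_id = p.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY s.name, t.name;
"@

Invoke-Sqlcmd -ServerInstance "my-mi.public.abc123.database.windows.net,3342" `
    -Database "MyDb" -Username "sqladmin" -Password "<password>" -Query $query
```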
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|---|---|
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
-|[DB2 zOS data assets discovery and assessment package](https://github.com/Microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM DB2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM DB2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[DB2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a DB2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
+|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
Title: "Oracle to SQL Managed Instance: Migration guide"
+ Title: "Oracle to Azure SQL Managed Instance: Migration guide"
description: This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance using SQL Server Migration Assistant for Oracle.
Last updated 11/06/2020
This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance using SQL Server Migration Assistant for Oracle.
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
To create an assessment, follow these steps:
1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**.
+1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**:
![New Project](./media/oracle-to-managed-instance-guide/new-project.png)
-1. Select **Connect to Oracle**. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box.
+1. Select **Connect to Oracle**. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box:
![Connect to Oracle](./media/oracle-to-managed-instance-guide/connect-to-oracle.png)
To create an assessment, follow these steps:
![Choose Oracle schema](./media/oracle-to-managed-instance-guide/select-schema.png)
-1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database:
![Create Report](./media/oracle-to-managed-instance-guide/create-report.png)
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
![Type Mappings](./media/oracle-to-managed-instance-guide/type-mappings.png)
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
1. Select **Connect to Azure SQL Managed Instance**.
    1. Enter connection details to connect to your database in Azure SQL Managed Instance.
- 1. Choose your target database from the drop-down.
- 1. Select **Connect**.
+ 1. Choose your target database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
![Connect to SQL Managed Instance](./media/oracle-to-managed-instance-guide/connect-to-sql-managed-instance.png)
-1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
![Convert Schema](./media/oracle-to-managed-instance-guide/convert-schema.png)
-1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations.
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
![Compare table recommendations](./media/oracle-to-managed-instance-guide/table-comparison.png)
- Compare the converted Transact-SQL text to the original stored procedures and review the recommendations:
+ Compare the converted Transact-SQL text to the original code and review the recommendations:
![Compare procedure recommendations](./media/oracle-to-managed-instance-guide/procedure-comparison.png)
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Managed Instance.
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Managed Instance Metadata Explorer** and choose **Synchronize with Database**.
+1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Managed Instance Metadata Explorer** and choose **Synchronize with Database**:
![Synchronize with Database](./media/oracle-to-managed-instance-guide/synchronize-with-database.png)
To publish your schema and migrate your data, follow these steps:
+1. Migrate the data: Right-click the schema or object you want to migrate in **Oracle Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:

   ![Migrate Data](./media/oracle-to-managed-instance-guide/migrate-data.png)

1. Provide connection details for both Oracle and Azure SQL Managed Instance.
-1. View the **Data Migration report**.
+1. After migration completes, view the **Data Migration Report**:
![Data Migration Report](./media/oracle-to-managed-instance-guide/data-migration-report.png)
-1. Connect to your Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+1. Connect to your Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
![Validate in SSMA](./media/oracle-to-managed-instance-guide/validate-data.png)

Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:

-- [SQL Server Migration Assistant: How to assess and migrate data from non-Microsoft data platforms to SQL Server](https://blogs.msdn.microsoft.com/datamigration/2016/11/16/sql-server-migration-assistant-how-to-assess-and-migrate-databases-from-non-microsoft-data-platforms-to-sql-server/)
- [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services)
- [SQL Server Integration
For additional assistance with completing this migration scenario, please see th
| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
-| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Serverbase. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
These resources were developed as part of the Data SQL Ninja Program, which is s
- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
- To learn more about Azure SQL Managed Instance, see:
- - [An overview of Azure SQL Managed Instance](../../database/sql-database-paas-overview.md)
+ - [An overview of Azure SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md)
- [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/)
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
Title: "SQL Server to SQL Managed Instance: Migration guide"
+ Title: "SQL Server to Azure SQL Managed Instance: Migration guide"
description: This guide teaches you to migrate your SQL Server databases to Azure SQL Managed Instance.
Last updated 11/06/2020
-# Migration guide: SQL Server to SQL Managed Instance
+# Migration guide: SQL Server to Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]

This guide helps you migrate your SQL Server instance to Azure SQL Managed Instance.
You can migrate SQL Server running on-premises or on:
- Compute Engine (Google Cloud Platform - GCP)
- Cloud SQL for SQL Server (Google Cloud Platform - GCP)
-For more migration information, see the [migration overview](sql-server-to-managed-instance-overview.md). For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For more migration information, see the [migration overview](sql-server-to-managed-instance-overview.md). For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
:::image type="content" source="media/sql-server-to-managed-instance-overview/migration-process-flow-small.png" alt-text="Migration process flow":::
To migrate your SQL Server to Azure SQL Managed Instance, make sure to go throug
- Choose a [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options) and the corresponding tools that are required for the chosen method
- Install [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server
+- Ensure connectivity and proper permissions to access both source and target
+ ## Pre-migration
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
Title: "SQL Server to SQL Managed Instance: Migration overview"
+ Title: "SQL Server to Azure SQL Managed Instance: Migration overview"
description: Learn about the different tools and options available to migrate your SQL Server databases to Azure SQL Managed Instance.
Last updated 02/18/2020
-# Migration overview: SQL Server to SQL Managed Instance
+# Migration overview: SQL Server to Azure SQL Managed Instance
[!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlmi.md)]

Learn about different migration options and considerations to migrate your SQL Server to Azure SQL Managed Instance.
You can migrate SQL Server running on-premises or on:
- Compute Engine (Google Cloud Platform - GCP)
- Cloud SQL for SQL Server (Google Cloud Platform - GCP)
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Overview
Some general guidelines to help you choose the right service tier and characteri
- Use the baseline IO latency of the file subsystem to choose between General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers (see the sketch after this list).
- Use the baseline throughput to preallocate the size of the data and log files to achieve expected IO performance.
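One rough way to capture that baseline IO latency on the source is the `sys.dm_io_virtual_file_stats` dynamic management view; a sketch using the `SqlServer` PowerShell module with a placeholder server name and Windows authentication:

```powershell
# Average read/write latency per database file on the source SQL Server.
$query = @"
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgReadLatencyMs,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteLatencyMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
"@

Invoke-Sqlcmd -ServerInstance "MySourceSqlServer" -Query $query
```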
-You can choose compute and storage resources during deployment and then change them after using the [Azure portal](../../database/scale-resources.md) without incurring downtime for your application.
+You can choose compute and storage resources during deployment and then [change them afterwards using the Azure portal](../../database/scale-resources.md) without incurring downtime for your application.
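For instance, a minimal Az PowerShell sketch of scaling an existing managed instance (resource names are placeholders; assumes the `Az.Sql` module and an authenticated session):

```powershell
# Scale a managed instance to 16 vCores and 512 GB of storage.
# -Force skips the confirmation prompt; names are placeholders.
Set-AzSqlInstance -ResourceGroupName "my-rg" `
    -Name "my-managed-instance" `
    -VCore 16 `
    -StorageSizeInGB 512 `
    -Force
```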
> [!IMPORTANT] > Any discrepancy in the [managed instance virtual network requirements](../../managed-instance/connectivity-architecture-overview.md#network-requirements) can prevent you from creating new instances or using existing ones. Learn more about [creating new](../../managed-instance/virtual-network-subnet-create-arm-template.md) and [configuring existing](../../managed-instance/vnet-existing-add-subnet.md) networks.
The following table lists the recommended migration tools:
The following table lists alternative migration tools:
-|Technology |Description |
+|**Technology** |**Description** |
|||
-|[Transactional replication](../../managed-instance/replication-transactional-overview.md) | Replicate data from source SQL Server database table(s) to SQL Managed Instance by providing a publisher-subscriber type migration option while maintaining transactional consistency. | |
+|[Transactional replication](../../managed-instance/replication-transactional-overview.md) | Replicate data from source SQL Server database table(s) to SQL Managed Instance by providing a publisher-subscriber type migration option while maintaining transactional consistency. |
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| The [bulk copy program (bcp) utility](/sql/tools/bcp-utility) copies data from an instance of SQL Server into a data file. Use the bcp utility to export the data from your source and import the data file into the target SQL Managed Instance (see the sketch after this table).</br></br> For high-speed bulk copy operations to move data to Azure SQL Database, the [Smart Bulk Copy tool](/samples/azure-samples/smartbulkcopy/smart-bulk-copy/) can be used to maximize transfer speeds by leveraging parallel copy tasks. |
|[Import Export Wizard / BACPAC](../../database/database-import.md?tabs=azure-powershell)| [BACPAC](/sql/relational-databases/data-tier-applications/data-tier-applications#bacpac) is a Windows file with a `.bacpac` extension that encapsulates a database's schema and data. BACPAC can be used to both export data from a source SQL Server and to import the file back into Azure SQL Managed Instance. |
|[Azure Data Factory (ADF)](../../../data-factory/connector-azure-sql-managed-instance.md)| The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server database(s) to SQL Managed Instance using built-in connectors and an [Integration Runtime](../../../data-factory/concepts-integration-runtime.md).</br> </br> ADF supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to SQL Managed Instance. |
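To make the bulk copy option concrete, a rough sketch of exporting one table from the source and loading it into the managed instance with `bcp` (the table, database, server names, and credentials are placeholders; the `-d` flag assumes a recent version of the SQL Server command-line tools):

```powershell
# Export from the source SQL Server using Windows authentication (-T);
# -c writes the file in character format.
bcp dbo.Orders out C:\temp\orders.dat -S MySourceSqlServer -d SalesDb -T -c

# Import into the target managed instance (public endpoint on port 3342).
bcp dbo.Orders in C:\temp\orders.dat `
    -S "my-mi.public.abc123.database.windows.net,3342" `
    -d SalesDb -U sqladmin -P "<password>" -c
```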
These resources were developed as part of the Data SQL Ninja Program, which is s
## Next steps
-To start migrating your SQL Server to Azure SQL Managed Instance, see the [SQL Server to SQL Managed Instance migration guide](sql-server-to-managed-instance-guide.md).
+To start migrating your SQL Server to Azure SQL Managed Instance, see the [SQL Server to Azure SQL Managed Instance migration guide](sql-server-to-managed-instance-guide.md).
- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
azure-sql Sql Server To Managed Instance Performance Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-performance-baseline.md
Title: "SQL Server to SQL Managed Instance: Performance analysis"
+ Title: "SQL Server to Azure SQL Managed Instance: Performance analysis"
description: Learn to create and compare a performance baseline when migrating your SQL Server databases to Azure SQL Managed Instance.
Last updated 11/06/2020
-# Migration performance: SQL Server to SQL Managed Instance performance analysis
+# Migration performance: SQL Server to Azure SQL Managed Instance performance analysis
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]

Create a performance baseline to compare the performance of your workload on a SQL Managed Instance with your original workload running on SQL Server.
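One ingredient of such a baseline is a wait-statistics snapshot taken on both systems under comparable load. A sketch, assuming the `SqlServer` PowerShell module and a placeholder server name:

```powershell
# Snapshot the top waits; run once against the source SQL Server and again
# against the managed instance, then compare the two CSV files.
$query = @"
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
"@

Invoke-Sqlcmd -ServerInstance "MySourceSqlServer" -Query $query |
    Export-Csv -Path "C:\temp\baseline-waits.csv" -NoTypeInformation
```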
azure-sql Sql Server To Sql Managed Instance Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-sql-managed-instance-assessment-rules.md
Title: "Assessment rules for SQL Server to SQL Managed Instance migration"
+ Title: "Assessment rules for SQL Server to Azure SQL Managed Instance migration"
description: Assessment rules to identify issues with the source SQL Server instance that must be addressed before migrating to Azure SQL Managed Instance.
Last updated 12/15/2020
-# Assessment rules for SQL Server to SQL Managed Instance migration
+# Assessment rules for SQL Server to Azure SQL Managed Instance migration
[!INCLUDE[appliesto--sqldb](../../includes/appliesto-sqldb.md)]

Migration tools validate your source SQL Server instance by running a number of assessment rules to identify issues that must be addressed before migrating your SQL Server database to Azure SQL Managed Instance.
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
Title: "DB2 to SQL Server on Azure VMs: Migration guide"
-description: This guide teaches you to migrate your DB2 database to SQL Server on Azure VMs using SQL Server Migration Assistant for DB2.
+ Title: "Db2 to SQL Server on Azure VMs: Migration guide"
+description: This guide teaches you to migrate your Db2 database to SQL Server on Azure VMs using SQL Server Migration Assistant for Db2.
Last updated 11/06/2020
-# Migration guide: DB2 to SQL Server on Azure VMs
+# Migration guide: Db2 to SQL Server on Azure VMs
[!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlvm.md)]
-This migration guide teaches you to migrate your user databases from DB2 to SQL Server on Azure VMs using the SQL Server Migration Assistant for DB2.
+This migration guide teaches you to migrate your user databases from Db2 to SQL Server on Azure VMs using the SQL Server Migration Assistant for Db2.
-For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your DB2 database to SQL Server, you need:
+To migrate your Db2 database to SQL Server, you need:
-- to verify your source environment is supported.
-- [SQL Server Migration Assistant (SSMA) for DB2](https://www.microsoft.com/download/details.aspx?id=54254).
+- to verify your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
+- [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
- [Connectivity](../../virtual-machines/windows/ways-to-connect-to-sql.md) between your source environment and your SQL Server VM in Azure.
+- A target [SQL Server on Azure VM](../../virtual-machines/windows/create-sql-vm-portal.md).
To migrate your DB2 database to SQL Server, you need:
After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+
### Assess
-Create an assessment using SQL Server Migration Assistant (SSMA).
+Use SQL Server Migration Assistant (SSMA) for Db2 to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
-1. Open SQL Server Migration Assistant (SSMA) for DB2.
+1. Open [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**.
+1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/new-project.png" alt-text="Provide project details and select OK to save.":::
-1. Enter in values for the DB2 connection details on the **Connect to DB2** dialog box.
+1. Enter values for the Db2 connection details in the **Connect to Db2** dialog box:
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/connect-to-db2.png" alt-text="Connect to your DB2 instance":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/connect-to-db2.png" alt-text="Connect to your Db2 instance":::
-1. Right-click the DB2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema.
+1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/create-report.png" alt-text="Right-click the schema and choose create report":::
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of DB2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- For example: `drive:\<username>\Documents\SSMAProjects\MyDB2Migration\report\report_<date>`.
+ For example: `drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date>`.
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/report.png" alt-text="Review the report to identify any errors or warnings":::
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/type-mapping.png" alt-text="Select the schema and then type-mapping":::
-1. You can change the type mapping for each table by selecting the table in the **DB2 Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata explorer**.
### Convert schema
To convert the schema, follow these steps:
1. Select **Connect to SQL Server**.
    1. Enter connection details to connect to your SQL Server instance on your Azure VM.
    1. Choose to connect to an existing database on the target server, or provide a new name to create a new database on the target server.
- 1. Select **Connect**.
+ 1. Provide authentication details.
+ 1. Select **Connect**:
   :::image type="content" source="../../../../includes/media/virtual-machines-sql-server-connection-steps/rm-ssms-connect.png" alt-text="Connect to your SQL Server on Azure VM":::
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/convert-schema.png" alt-text="Right-click the schema and choose convert schema":::
-1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations.
+1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/compare-review-schema-structure.png" alt-text="Compare and review the structure of the schema to identify potential problems and address them based on recommendations.":::
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Server on Azure VM.
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **SQL Server Metadata Explorer** and choose **Synchronize with Database**.
+1. Publish the schema: Right-click the database from the **Databases** node in the **SQL Server Metadata Explorer** and choose **Synchronize with Database**:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/synchronize-with-database.png" alt-text="Right-click the database and choose synchronize with database":::
-1. Migrate the data: Right-click the schema from the **DB2 Metadata Explorer** and choose **Migrate Data**.
+1. Migrate the data: Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/migrate-data.png" alt-text="Right-click the schema and choose migrate data":::
-1. Provide connection details for both the DB2 and SQL Server instances.
-1. View the **Data Migration report**.
+1. Provide connection details for both the Db2 and SQL Server instances.
+1. After migration completes, view the **Data Migration Report**:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/data-migration-report.png" alt-text="Review the data migration report":::
-1. Connect to your SQL Server instance by using SQL Server Management Studio and validate the migration by reviewing the data and schema.
+1. Connect to your SQL Server on Azure VM instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
:::image type="content" source="media/db2-to-sql-on-azure-vm-guide/compare-schema-in-ssms.png" alt-text="Compare the schema in SSMS":::
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|---|---|
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
-|[DB2 zOS data assets discovery and assessment package](https://github.com/Microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM DB2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM DB2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[DB2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a DB2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
+|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
Last updated 11/06/2020
This guide teaches you to migrate your Oracle schemas to SQL Server on Azure VM using SQL Server Migration Assistant for Oracle.
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Prerequisites
To migrate your Oracle schema to SQL Server on Azure VM, you need:
- To download [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
- A target [SQL Server VM](../../virtual-machines/windows/sql-vm-create-portal-quickstart.md).
- The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
+- Connectivity and sufficient permissions to access both source and target.
+ ## Pre-migration
Use the [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883) to identif
To use the MAP Toolkit to perform an inventory scan, follow these steps:

1. Open the [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883).
-1. Select **Create/Select database**.
+1. Select **Create/Select database**:
![Select database](./media/oracle-to-sql-on-azure-vm-guide/select-database.png)
-1. Select **Create an inventory database**, enter a name for the new inventory database you're creating, provide a brief description, and then select **OK**.
+1. Select **Create an inventory database**, enter a name for the new inventory database you're creating, provide a brief description, and then select **OK**:
:::image type="content" source="media/oracle-to-sql-on-azure-vm-guide/create-inventory-database.png" alt-text="Create an inventory database":::
-1. Select **Collect inventory data** to open the **Inventory and Assessment Wizard**.
+1. Select **Collect inventory data** to open the **Inventory and Assessment Wizard**:
:::image type="content" source="media/oracle-to-sql-on-azure-vm-guide/collect-inventory-data.png" alt-text="Collect inventory data":::
-1. In the **Inventory and Assessment Wizard**, choose **Oracle** and then select **Next**.
+1. In the **Inventory and Assessment Wizard**, choose **Oracle** and then select **Next**:
![Choose oracle](./media/oracle-to-sql-on-azure-vm-guide/choose-oracle.png)
To use the MAP Toolkit to perform an inventory scan, follow these steps:
![Choose the computer search option that best suits your business needs](./media/oracle-to-sql-on-azure-vm-guide/choose-search-option.png)
-1. Either enter credentials or create new credentials for the systems that you want to explore, and then select **Next**.
+1. Either enter credentials or create new credentials for the systems that you want to explore, and then select **Next**:
![Enter credentials](./media/oracle-to-sql-on-azure-vm-guide/choose-credentials.png)
-1. Set the order of the credentials, and then select **Next**.
+1. Set the order of the credentials, and then select **Next**:
![Set credential order](./media/oracle-to-sql-on-azure-vm-guide/set-credential-order.png)
-1. Specify the credentials for each computer you want to discover. You can use unique credentials for every computer/machine, or you can choose to use the **All Computer Credentials** list.
+1. Specify the credentials for each computer you want to discover. You can use unique credentials for every computer/machine, or you can choose to use the **All Computer Credentials** list:
![Specify the credentials for each computer you want to discover](./media/oracle-to-sql-on-azure-vm-guide/specify-credentials-for-each-computer.png)
-1. Verify your selection summary, and then select **Finish**.
+1. Verify your selection summary, and then select **Finish**:
![Review summary](./media/oracle-to-sql-on-azure-vm-guide/review-summary.png)
-1. After the scan completes, view the **Data Collection** summary report. The scan can take a few minutes, and depends on the number of databases. Select **Close** when finished.
+1. After the scan completes, view the **Data Collection** summary report. The scan can take a few minutes, depending on the number of databases. Select **Close** when finished:
![Collection summary report](./media/oracle-to-sql-on-azure-vm-guide/collection-summary-report.png)
To create an assessment, follow these steps:
1. Open the [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**.
+1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**:
![New project](./media/oracle-to-sql-on-azure-vm-guide/new-project.png)
-1. Select **Connect to Oracle**. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box.
+1. Select **Connect to Oracle**. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box:
![Connect to Oracle](./media/oracle-to-sql-on-azure-vm-guide/connect-to-oracle.png)
To create an assessment, follow these steps:
![Select Oracle schema](./media/oracle-to-sql-on-azure-vm-guide/select-schema.png)
-1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database:
![Create Report](./media/oracle-to-sql-on-azure-vm-guide/create-report.png)
-1. In **Oracle Metadata Explorer**, select the Oracle schema, and then select **Create Report** to generate an HTML report with conversion statistics and error/warnings, if any..
-1. Review the HTML report for conversion statistics, as well as errors and warnings. Analyze it to understand conversion issues and resolutions.
-
- This report can also be accessed from the SSMA projects folder as selected in the first screen. From the example above locate the report.xml file from:
-
- `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2016_11_12T02_47_55\`
-
- and then open it in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions.
+1. In **Oracle Metadata Explorer**, select the Oracle schema, and then select **Create Report** to generate an HTML report with conversion statistics and error/warnings, if any.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+ For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2016_11_12T02_47_55\`
+
![Conversion Report](./media/oracle-to-sql-on-azure-vm-guide/conversion-report.png)

### Validate data types

Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:

1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab.
+1. Select the **Type mappings** tab:
![Type Mappings](./media/oracle-to-sql-on-azure-vm-guide/type-mappings.png)

1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata explorer**.

### Convert schema

To convert the schema, follow these steps:

1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**.
-1. Choose **Connect to SQL Server** from the top-line navigation bar and provide connection details for your SQL Server on Azure VM. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
+1. Select **Connect to SQL Server** from the top-line navigation bar.
+ 1. Enter connection details for your SQL Server on Azure VM.
+ 1. Choose your target database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**.
![Connect to SQL](./media/oracle-to-sql-on-azure-vm-guide/connect-to-sql-vm.png)
-1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and choose **Convert Schema**.
+1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and choose **Convert Schema**. Alternatively, you can select **Convert schema** from the top-line navigation bar:
![Convert Schema](./media/oracle-to-sql-on-azure-vm-guide/convert-schema.png)
-1. After the schema is finished converting, compare and review the structure of the schema to identify potential problems.
+1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
![Review recommendations](./media/oracle-to-sql-on-azure-vm-guide/table-mapping.png)
To convert the schema, follow these steps:
You can save the project locally for an offline schema remediation exercise. You can do so by selecting **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Server.
+1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Server on Azure VM.
+ ## Migrate After you have the necessary prerequisites in place and have completed the tasks associated with the **Pre-migration** stage, you are ready to perform the schema and data migration. Migration involves two steps: publishing the schema and migrating the data.
-To publish the schema and migrate the data, follow these steps:
+To publish your schema and migrate the data, follow these steps:
-1. Right-click the database from the **SQL Server Metadata Explorer** and choose **Synchronize with Database**. This action publishes the Oracle schema to SQL Server on Azure VM.
+1. Publish the schema: Right-click the database from the **SQL Server Metadata Explorer** and choose **Synchronize with Database**. This action publishes the Oracle schema to SQL Server on Azure VM:
- ![Synchronize with Database](./media/oracle-to-sql-on-azure-vm-guide/synchronize-database.png)
+ ![Synchronize with database](./media/oracle-to-sql-on-azure-vm-guide/synchronize-database.png)
- Review the synchronization status:
+ Review the mapping between your source project and your target:
![Review synchronization status](./media/oracle-to-sql-on-azure-vm-guide/synchronize-database-review.png)
-1. Right-click the Oracle schema from the **Oracle Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select Migrate Data from the top-line navigation.
+1. Migrate the data: Right-click the database or object you want to migrate in **Oracle Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
![Migrate Data](./media/oracle-to-sql-on-azure-vm-guide/migrate-data.png) 1. Provide connection details for Oracle and SQL Server on Azure VM in the dialog box.
-1. After migration completes, view the Data Migration report:
+1. After migration completes, view the **Data Migration Report**:
![Data Migration Report](./media/oracle-to-sql-on-azure-vm-guide/data-migration-report.png)
-1. Connect to your SQL Server on Azure VM using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) to review data and schema on your SQL Server instance.
+1. Connect to your SQL Server on Azure VM instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
![Validate in SSMA](./media/oracle-to-sql-on-azure-vm-guide/validate-in-ssms.png) -- In addition to using SSMA, you can also use SQL Server Integration Services (SSIS) to migrate the data. To learn more, see: - The article [Getting Started with SQL Server Integration Services](https://docs.microsoft.com//sql/integration-services/sql-server-integration-services). - The white paper [SQL Server Integration
azure-sql Sql Server To Sql On Azure Vm Individual Databases Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide.md
You can migrate SQL Server running on-premises or on:
- Amazon Relational Database Service (AWS RDS) - Compute Engine (Google Cloud Platform - GCP)
-For information about additional migration strategies, see the [SQL Server VM migration overview](sql-server-to-sql-on-azure-vm-migration-overview.md).
+For information about additional migration strategies, see the [SQL Server VM migration overview](sql-server-to-sql-on-azure-vm-migration-overview.md). For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
:::image type="content" source="media/sql-server-to-sql-on-azure-vm-migration-overview/migration-process-flow-small.png" alt-text="Migration process flow":::
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
You can migrate SQL Server running on-premises or on:
- Amazon Relational Database Service (AWS RDS) - Compute Engine (Google Cloud Platform - GCP)
-For other scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
## Overview
azure-sql Availability Group Distributed Network Name Dnn Listener Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-distributed-network-name-dnn-listener-configure.md
# Configure a DNN listener for an availability group [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]
-With SQL Server on Azure VMs, the distributed network name (DNN) routes traffic to the appropriate clustered resource. It provides an easier way to connect to an Always On availability group (AG) than the virtual network name (VNN) listener, without the need for an Azure Load Balancer.
+With SQL Server on Azure VMs, the distributed network name (DNN) routes traffic to the appropriate clustered resource. It provides an easier way to connect to an Always On availability group (AG) than the virtual network name (VNN) listener, without the need for an Azure Load Balancer.
-This article teaches you to configure a DNN listener to replace the VNN listener and route traffic to your availability group with SQL Server on Azure VMs for high availability and disaster recovery (HADR).
+This article teaches you to configure a DNN listener to replace the VNN listener and route traffic to your availability group with SQL Server on Azure VMs for high availability and disaster recovery (HADR).
-The DNN listener feature is currently only available starting with SQL Server 2019 CU8 on Windows Server 2016 and later.
+The DNN listener feature is currently only available starting with SQL Server 2019 CU8 on Windows Server 2016 and later.
-For an alternative connectivity option, consider a [VNN listener and Azure Load Balancer](availability-group-vnn-azure-load-balancer-configure.md) instead.
+For an alternative connectivity option, consider a [VNN listener and Azure Load Balancer](availability-group-vnn-azure-load-balancer-configure.md) instead.
## Overview
-A distributed network name (DNN) listener replaces the traditional virtual network name (VNN) availability group listener when used with [Always On availability groups on SQL Server VMs](availability-group-overview.md). This negates the need for an Azure Load Balancer to route traffic, simplifying deployment, maintenance, and improving failover.
+A distributed network name (DNN) listener replaces the traditional virtual network name (VNN) availability group listener when used with [Always On availability groups on SQL Server VMs](availability-group-overview.md). This negates the need for an Azure Load Balancer to route traffic, simplifying deployment, maintenance, and improving failover.
-Use the DNN listener to replace an existing VNN listener, or alternatively, use it in conjunction with an existing VNN listener so that your availability group has two distinct connection points - one using the VNN listener name (and port if non-default), and one using the DNN listener name and port.
+Use the DNN listener to replace an existing VNN listener, or alternatively, use it in conjunction with an existing VNN listener so that your availability group has two distinct connection points - one using the VNN listener name (and port if non-default), and one using the DNN listener name and port.
+
+> [!CAUTION]
> The routing behavior when using a DNN differs from that when using a VNN. Do not use port 1433. To learn more, see the [Port considerations](#port-considerations) section later in this article.
## Prerequisites
Before you complete the steps in this article, you should already have:
- Decided that the distributed network name is the appropriate [connectivity option for your HADR solution](hadr-cluster-best-practices.md#connectivity). - Configured your [Always On availability group](availability-group-overview.md). - Installed the latest version of [PowerShell](/powershell/azure/install-az-ps).
+- Identified the unique port that you will use for the DNN listener. The port used for a DNN listener must be unique across all replicas of the availability group or failover cluster instance. No other connection can share the same port.
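+To confirm the port is free, you can run a quick check on each replica before you create the listener. This is a minimal sketch assuming the example port `6789` used later in this article; run it in an elevated PowerShell session:

```powershell
# Returns nothing if port 6789 is not already in use on this replica
Get-NetTCPConnection -State Listen -LocalPort 6789 -ErrorAction SilentlyContinue
```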
+ ## Create script
-Use PowerShell to create the distributed network name (DNN) resource and associate it with your availability group.
+Use PowerShell to create the distributed network name (DNN) resource and associate it with your availability group.
-To do so, follow these steps:
+To do so, follow these steps:
-1. Open a text editor, such as Notepad.
-1. Copy and paste the following script:
+1. Open a text editor, such as Notepad.
+1. Copy and paste the following script:
```powershell param (
To do so, follow these steps:
Start-ClusterResource -Name $Ag ```
-1. Save the script as a `.ps1` file, such as `add_dnn_listener.ps1`.
-
+1. Save the script as a `.ps1` file, such as `add_dnn_listener.ps1`.
## Execute script
-To create the DNN listener, execute the script passing in parameters for the name of the availability group, listener name, and port.
+To create the DNN listener, execute the script passing in parameters for the name of the availability group, listener name, and port.
-For example, assuming an availability group name of `ag1`, listener name of `dnnlsnr`, and listener port as `6789`, follow these steps:
+For example, assuming an availability group name of `ag1`, listener name of `dnnlsnr`, and listener port as `6789`, follow these steps:
-1. Open a command-line interface tool, such as command prompt or PowerShell.
-1. Navigate to where you saved the `.ps1` script, such as c:\Documents.
-1. Execute the script: ```add_dnn_listener.ps1 <ag name> <listener-name> <listener port>```. For example:
+1. Open a command-line interface tool, such as command prompt or PowerShell.
+1. Navigate to where you saved the `.ps1` script, such as c:\Documents.
+1. Execute the script: ```add_dnn_listener.ps1 <ag-name> <listener-name> <listener-port>```. For example:
```console c:\Documents> add_dnn_listener.ps1 ag1 dnnlsnr 6789
For example, assuming an availability group name of `ag1`, listener name of `dnn
## Verify listener
-Use either SQL Server Management Studio or Transact-SQL to confirm your DNN listener is created successfully.
+Use either SQL Server Management Studio or Transact-SQL to confirm your DNN listener is created successfully.
### SQL Server Management Studio
-Expand **Availability Group Listeners** in [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) to view your DNN listener:
+Expand **Availability Group Listeners** in [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) to view your DNN listener:
:::image type="content" source="media/availability-group-distributed-network-name-dnn-listener-configure/dnn-listener-in-ssms.png" alt-text="View the DNN listener under availability group listeners in SQL Server Management Studio (SSMS)"::: ### Transact-SQL
-Use Transact-SQL to view the status of the DNN listener:
+Use Transact-SQL to view the status of the DNN listener:
```sql SELECT * FROM SYS.AVAILABILITY_GROUP_LISTENERS ```
-A value of `1` for `is_distributed_network_name` indicates the listener is a distributed network name (DNN) listener:
+A value of `1` for `is_distributed_network_name` indicates the listener is a distributed network name (DNN) listener:
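+To list only DNN listeners, you can filter on that column. A minimal sketch, using the view and column names shown above:

```sql
-- Return only distributed network name (DNN) listeners
SELECT dns_name, port, is_distributed_network_name
FROM sys.availability_group_listeners
WHERE is_distributed_network_name = 1;
```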
:::image type="content" source="media/availability-group-distributed-network-name-dnn-listener-configure/dnn-listener-tsql.png" alt-text="Use sys.availability_group_listeners to identify DNN listeners that have a value of 1 in is_distributed_network_name"::: - ## Update connection string
-Update connection strings for applications so that they connect to the DNN listener. To ensure rapid connectivity upon failover, add `MultiSubnetFailover=True` to the connection string if the SQL client supports it.
+Update connection strings for applications so that they connect to the DNN listener. Connection strings for DNN listeners must provide the DNN port number. To ensure rapid connectivity upon failover, add `MultiSubnetFailover=True` to the connection string if the SQL client supports it.
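+For example, a connection string for the `dnnlsnr` listener on port `6789` from the earlier example might look like the following; the database name is a placeholder:

```
Server=dnnlsnr,6789;Database=<database-name>;MultiSubnetFailover=True;Integrated Security=True;
```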
## Test failover
-Test failover of the availability group to ensure functionality.
+Test failover of the availability group to ensure functionality.
-To test failover, follow these steps:
+To test failover, follow these steps:
-1. Connect to the DNN listener or one of the replicas by using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms).
-1. Expand **Always On Availability Group** in **Object Explorer**.
-1. Right-click the availability group and choose **Failover** to open the **Failover Wizard**.
-1. Follow the prompts to choose a failover target and fail the availability group over to a secondary replica.
-1. Confirm the database is in a synchronized state on the new primary replica.
-1. (Optional) Fail back to the original primary, or another secondary replica.
+1. Connect to the DNN listener or one of the replicas by using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms).
+1. Expand **Always On Availability Group** in **Object Explorer**.
+1. Right-click the availability group and choose **Failover** to open the **Failover Wizard**.
+1. Follow the prompts to choose a failover target and fail the availability group over to a secondary replica.
+1. Confirm the database is in a synchronized state on the new primary replica.
+1. (Optional) Fail back to the original primary, or another secondary replica.
## Test connectivity Test the connectivity to your DNN listener with these steps: 1. Open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
-1. Connect to your DNN listener.
-1. Open a new query window and check which replica you're connected to by running `SELECT @@SERVERNAME`.
+1. Connect to your DNN listener.
+1. Open a new query window and check which replica you're connected to by running `SELECT @@SERVERNAME`.
1. Fail the availability group over to another replica.
-1. After a reasonable amount of time, run `SELECT @@SERVERNAME` to confirm your availability group is now hosted on another replica.
-
+1. After a reasonable amount of time, run `SELECT @@SERVERNAME` to confirm your availability group is now hosted on another replica.
## Limitations - Currently, a DNN listener for an availability group is only supported for SQL Server 2019 CU8 and later on Windows Server 2016 and later.
+- DNN Listeners **MUST** be configured with a unique port. The port cannot be shared with any other connection on any replica.
- There might be additional considerations when you're working with other SQL Server features and an availability group with a DNN. For more information, see [AG with DNN interoperability](availability-group-dnn-interoperability.md).
-## Next steps
+## Port considerations
-To learn more about SQL Server HADR features in Azure, see [Availability groups](availability-group-overview.md) and [Failover cluster instance](failover-cluster-instance-overview.md). You can also learn [best practices](hadr-cluster-best-practices.md) for configuring your environment for high availability and disaster recovery.
+DNN listeners are designed to listen on all IP addresses, but on a specific, unique port. The DNS entry for the listener name should resolve to the addresses of all replicas in the availability group. This is done automatically with the PowerShell script provided in the [Create script](#create-script) section. Since DNN listeners accept connections on all IP addresses, it is critical that the listener port be unique, and not in use by any other replica in the availability group. Since SQL Server always listens on port 1433, either directly or via the SQL Browser service, port 1433 cannot be used for any DNN listener.
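+As a quick sanity check, you can confirm that the listener name resolves to the addresses of all replicas. This sketch assumes the `dnnlsnr` listener from the earlier example:

```powershell
# Each replica's IP address should appear in the results
Resolve-DnsName -Name dnnlsnr
```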
+## Next steps
+To learn more about SQL Server HADR features in Azure, see [Availability groups](availability-group-overview.md) and [Failover cluster instance](failover-cluster-instance-overview.md). You can also learn [best practices](hadr-cluster-best-practices.md) for configuring your environment for high availability and disaster recovery.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution description: Learn about the platform updates to Azure VMware Solution. Previously updated : 03/16/2021 Last updated : 03/24/2021 # Platform updates for Azure VMware Solution
-Important updates to Azure VMware Solution will be applied starting in March 2021. You'll receive notification through Azure Service Health that includes the timeline of the maintenance. For more details about the key upgrade processes and features in Azure VMware Solution, see [Azure VMware Solution private cloud updates and upgrades](concepts-upgrades.md).
+Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Azure VMware Solution private cloud updates and upgrades](concepts-upgrades.md).
+
+## March 24, 2021
+All new Azure VMware Solution private clouds are deployed with VMware vCenter version 6.7U3l and NSX-T version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the above-mentioned releases.
+
+You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. An hour before the upgrade starts, you'll receive a notification, and you'll receive another when it finishes.
## March 15, 2021 -- Azure VMware Solution service will perform maintenance work through March 19, 2021, to update vCenter server in your private cloud to vCenter Server 6.7 Update 3l version.
+- Azure VMware Solution service will do maintenance work **through March 19, 2021** to update the vCenter server in your private cloud to vCenter Server 6.7 Update 3l.
-- During this time, VMware vCenter will be unavailable, and you won't be able to manage VMs (stop, start, create, delete). Private cloud scaling (adding/removing servers and clusters) will also be unavailable. VMware High Availability (HA) will continue to operate to provide protection for existing VMs.
+- VMware vCenter will be unavailable during this time. So, you won't be able to manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html). ## March 4, 2021 -- Azure VMware Solutions will apply patches through March 15, 2021, to ESXi in existing private clouds to [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html).
+- Azure VMware Solution will apply the [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) to existing private clouds **through March 15, 2021**.
-- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied through March 15, 2021.
+- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**.
>[!NOTE] >This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter and clear automatically as the maintenance progresses. ## Post update
-Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
-----
+Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Use the *administrator* account to access NSX-T Manager. It has full privileges
Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about: -- [Private cloud upgrade concepts](concepts-upgrades.md).-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).-- [Details of each privilege](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).-- [How Azure VMware Solution monitors and repairs private clouds](concepts-monitor-repair-private-cloud.md).-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Private cloud upgrade concepts](concepts-upgrades.md)
+- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
+- [Details of each privilege](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html)
+- [How Azure VMware Solution monitors and repairs private clouds](concepts-monitor-repair-private-cloud.md)
<!-- LINKS - external-->
azure-vmware Create Ipsec Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/create-ipsec-tunnel.md
Title: Create an IPSec tunnel into Azure VMware Solution
-description: Learn how to create a Virtual WAN hub to establish an IPSec tunnel into Azure VMware Solutions.
+description: Learn how to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel into Azure VMware Solution.
Previously updated : 10/02/2020 Last updated : 03/23/2021 # Create an IPSec tunnel into Azure VMware Solution
-In this article, we'll go through the steps to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. We'll create an Azure Virtual WAN hub and a VPN gateway with a public IP address attached to it. Then we'll create an Azure ExpressRoute gateway and establish an Azure VMware Solution endpoint. We'll also go over the details of enabling a policy-based VPN on-premises setup.
+In this article, we'll go through the steps to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. The hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premises VPN device with an Azure VMware Solution endpoint.
-## Topology
-![Diagram showing VPN site-to-site tunnel architecture.](media/create-ipsec-tunnel/vpn-s2s-tunnel-architecture.png)
+In this how-to, you'll:
+- Create an Azure Virtual WAN hub and a VPN gateway with a public IP address attached to it.
+- Create an Azure ExpressRoute gateway and establish an Azure VMware Solution endpoint.
+- Enable a policy-based VPN on-premises setup.
-The Azure Virtual hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premise VPN device with an Azure VMware Solution endpoint.
+## Prerequisites
+You must have a public-facing IP address terminating on an on-premises VPN device.
-## Before you begin
+## Step 1. Create an Azure Virtual WAN
-To create the site-to-site VPN tunnel, you'll need to create a public-facing IP address terminating on an on-premises VPN device.
-## Create a Virtual WAN hub
+## Step 2. Create a Virtual WAN hub and gateway
-1. In the Azure portal, search on **Virtual WANS**. Select **+Add**. The Create WAN page opens.
+>[!TIP]
+>You can also [create a gateway in an existing hub](../virtual-wan/virtual-wan-expressroute-portal.md#existinghub).
-2. Enter the required fields on the **Create WAN** page and then select **Review + Create**.
-
- | Field | Value |
- | | |
- | **Subscription** | Value is pre-populated with the subscription belonging to the resource group. |
- | **Resource group** | The Virtual WAN is a global resource and isn't confined to a specific region. |
- | **Resource group location** | To create the Virtual WAN hub, you need to set a location for the resource group. |
- | **Name** | |
- | **Type** | Select **Standard**, which will allow more than just the VPN gateway traffic. |
-
- :::image type="content" source="media/create-ipsec-tunnel/create-wan.png" alt-text="Screenshot showing the Create WAN page in the Azure portal.":::
+1. Select the Virtual WAN you created in the previous step.
-3. In the Azure portal, select the Virtual WAN you created in the previous step, select **Create virtual hub**, enter the required fields, and then select **Next: Site to site**.
+1. Select **Create virtual hub**, enter the required fields, and then select **Next: Site to site**.
- | Field | Value |
- | | |
- | **Region** | Selecting a region is required from a management perspective. |
- | **Name** | |
- | **Hub private address space** | Enter the subnet using a `/24` (minimum). |
+ Enter the subnet using a `/24` (minimum).
:::image type="content" source="media/create-ipsec-tunnel/create-virtual-hub.png" alt-text="Screenshot showing the Create virtual hub page.":::
-4. On the **Site-to-site** tab, define the site-to-site gateway by setting the aggregate throughput from the **Gateway scale units** drop-down.
+4. Select the **Site-to-site** tab and define the site-to-site gateway by setting the aggregate throughput from the **Gateway scale units** drop-down.
>[!TIP]
- >One scale unit = 500 Mbps. The scale units are in pairs for redundancy, each supporting 500 Mbps.
+ >The scale units are in pairs for redundancy, each supporting 500 Mbps (one scale unit = 500 Mbps).
-5. On the **ExpressRoute** tab, create an ExpressRoute gateway.
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-hub-include/site-to-site.png" alt-text="Screenshot showing the Site-to-site details.":::
+
+5. Select the **ExpressRoute** tab and create an ExpressRoute gateway.
+
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-er-hub-include/hub2.png" alt-text="Screenshot of the ExpressRoute settings.":::
>[!TIP] >A scale unit value is 2 Gbps. It takes approximately 30 minutes to create each hub.
-## Create a VPN site
+## Step 3. Create a site-to-site VPN
-1. In **Recent resources** in the Azure portal, select the virtual WAN you created in the previous section.
+1. In the Azure portal, select the virtual WAN you created earlier.
-2. In the **Overview** of the virtual hub, select **Connectivity** > **VPN (Site-to-site)**, and then select **Create new VPN site**.
+2. In the **Overview** of the virtual hub, select **Connectivity** > **VPN (Site-to-site)** > **Create new VPN site**.
:::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics.png" alt-text="Screenshot of the Overview page for the virtual hub, with VPN (site-to-site) and Create new VPN site selected.":::
-3. On the **Basics** tab, enter the required fields and then select **Next : Links**.
-
- | Field | Value |
- | | |
- | **Region** | The same region you specified in the previous section. |
- | **Name** | |
- | **Device vendor** | |
- | **Border Gateway Protocol** | Set to **Enable** to ensure both Azure VMware Solution and the on-premises servers advertise their routes across the tunnel. If disabled, the subnets that need to be advertised must be manually maintained. If subnets are missed, HCX will fail to form the service mesh. For more information, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md). |
- | **Private address space** | Enter the on-premises CIDR block. It's used to route all traffic bound for on-premises across the tunnel. The CIDR block is only required if you don't enable BGP. |
- | **Connect to** | |
-
-4. On the **Links** tab, fill in the required fields and select **Review + create**. Specifying link and provider names allow you to distinguish between any number of gateways that may eventually be created as part of the hub. BGP and autonomous system number (ASN) must be unique inside your organization.
+3. On the **Basics** tab, enter the required fields.
+
+ :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics2.png" alt-text="Screenshot of the Basics tab for the new VPN site.":::
+
+ 1. Set the **Border Gateway Protocol** to **Enable**. When enabled, it ensures that both Azure VMware Solution and the on-premises servers advertise their routes across the tunnel. If disabled, the subnets that need to be advertised must be manually maintained. If subnets are missed, HCX will fail to form the service mesh. For more information, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md).
+
+ 1. For the **Private address space**, enter the on-premises CIDR block. It's used to route all traffic bound for on-premises across the tunnel. The CIDR block is only required if you don't enable BGP.
+
+1. Select **Next : Links** and complete the required fields. Specifying link and provider names allows you to distinguish between any number of gateways that may eventually be created as part of the hub. BGP and autonomous system number (ASN) must be unique inside your organization.
+
+ :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-links.png" alt-text="Screenshot that shows link details.":::
+
+1. Select **Review + create**.
+
+1. Navigate to the virtual hub that you want, and deselect **Hub association** to connect your VPN site to the hub.
-## (Optional) Defining a VPN site for policy-based VPN site-to-site tunnels
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/connect.png" alt-text="Screenshot that shows the Connected Sites pane for Virtual HUB ready for Pre-shared key and associated settings.":::
+
+## Step 4. (Optional) Create policy-based VPN site-to-site tunnels
-This section applies only to policy-based VPNs. Policy-based (or static, route-based) VPN setups are driven by on-premise VPN device capabilities in most cases. They require on-premise and Azure VMware Solution networks to be specified. For Azure VMware Solution with an Azure Virtual WAN hub, you can't select *any* network. Instead, you have to specify all relevant on-premise and Azure VMware Solution Virtual WAN hub ranges. These hub ranges are used to specify the encryption domain of the policy base VPN tunnel on-premise endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
+>[!IMPORTANT]
+>This is an optional step and applies only to policy-based VPNs.
+
+Policy-based VPN setups require on-premises and Azure VMware Solution networks to be specified, including the hub ranges. These hub ranges specify the encryption domain of the policy-based VPN tunnel's on-premises endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
1. In the Azure portal, go to your Virtual WAN hub site. Under **Connectivity**, select **VPN (Site to site)**.
This section applies only to policy-based VPNs. Policy-based (or static, route-b
Your traffic selectors or subnets that are part of the policy-based encryption domain should be:
- - The virtual WAN hub /24
- - The Azure VMware Solution private cloud /22
- - The connected Azure virtual network (if present)
+ - Virtual WAN hub `/24`
+ - Azure VMware Solution private cloud `/22`
+ - Connected Azure virtual network (if present)
-## Connect your VPN site to the hub
+## Step 5. Connect your VPN site to the hub
1. Select your VPN site name and then select **Connect VPN sites**. + 1. In the **Pre-shared key** field, enter the key previously defined for the on-premises endpoint. >[!TIP] >If you don't have a previously defined key, you can leave this field blank. A key is generated for you automatically.
-
- >[!IMPORTANT]
- >Only enable **Propagate Default Route** if you're deploying a firewall in the hub and it is the next hop for connections through that tunnel.
-1. Select **Connect**. A connection status screen shows the status of the tunnel creation.
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/connect.png" alt-text="Screenshot that shows the Connected Sites pane for Virtual HUB ready for a Pre-shared key and associated settings. ":::
+
+1. If you're deploying a firewall in the hub and it's the next hop, set the **Propagate Default Route** option to **Enable**.
+
+ When enabled, the Virtual WAN hub propagates to a connection only if the hub already learned the default route when deploying a firewall in the hub or if another connected site has forced tunneling enabled. The default route does not originate in the Virtual WAN hub.
-2. Go to the Virtual WAN overview and open the VPN site page to download the VPN configuration file for the on-premises endpoint.
+1. Select **Connect**. After a few minutes, the site shows the connection and connectivity status.
-3. Patch the Azure VMware Solution ExpressRoute in the Virtual WAN hub. This step requires first creating your private cloud.
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png" alt-text="Screenshot that shows a site-to-site connection and connectivity status." lightbox="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png":::
+
+1. [Download the VPN configuration file](../virtual-wan/virtual-wan-site-to-site-portal.md#device) for the on-premises endpoint.
+
+3. Patch the Azure VMware Solution ExpressRoute in the Virtual WAN hub.
+
+ >[!IMPORTANT]
+ >You must first have a private cloud created before you can patch the platform.
[!INCLUDE [request-authorization-key](includes/request-authorization-key.md)]
-4. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub.
- 1. In the Azure portal, open the Virtual WAN you created earlier.
- 1. Select the created Virtual WAN hub and then select **ExpressRoute** in the left pane.
- 1. Select **+ Redeem authorization key**.
+4. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub. You'll use the authorization key and ExpressRoute ID (peer circuit URI) from the previous step.
+
+ 1. Select your ExpressRoute gateway and then select **Redeem authorization key**.
:::image type="content" source="media/create-ipsec-tunnel/redeem-authorization-key.png" alt-text="Screenshot of the ExpressRoute page for the private cloud, with Redeem authorization key selected.":::
- 1. Paste the authorization key into the Authorization key field.
- 1. Past the ExpressRoute ID into the **Peer circuit URI** field.
- 1. Select **Automatically associate this ExpressRoute circuit with the hub.**
+ 1. Paste the authorization key in the **Authorization Key** field.
+ 1. Paste the ExpressRoute ID into the **Peer circuit URI** field.
 1. Select the **Automatically associate this ExpressRoute circuit with the hub** check box.
 1. Select **Add** to establish the link. 5. Test your connection by [creating an NSX-T segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premises and Azure VMware Solution endpoints.
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/introduction.md
Title: Introduction description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Previously updated : 11/11/2020 Last updated : 03/24/2021 # What is Azure VMware Solution?
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
![Image of Azure VMware Solution private cloud adjacency to Azure and on-premises](./media/adjacency-overview-drawing-final.png)
+## Customer communication
+Notifications for service issues, planned maintenance, health advisories, and security advisories are published through **Service Health** in the Azure portal. You can take timely action when you set up activity log alerts for these notifications. For more information, see [Create service health alerts using the Azure portal](../service-health/alerts-activity-log-service-notifications-portal.md#create-service-health-alert-using-azure-portal).
++ ## Hosts, clusters, and private clouds Azure VMware Solution private clouds and clusters are built from a bare-metal, hyper-converged Azure infrastructure host. The high-end hosts have 576-GB RAM and dual Intel 18 core, 2.3-GHz processors. The HE hosts have two vSAN diskgroups with 15.36 TB (SSD) of raw vSAN capacity tier and a 3.2 TB (NVMe) vSAN cache tier.
azure-vmware Public Ip Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/public-ip-usage.md
The web server receives the request and replies with the requested information o
## Test case In this scenario, you'll publish the IIS webserver to the internet. Use the public IP feature in Azure VMware Solution to publish the website on a public IP address. You'll also configure NAT rules on the firewall and access an Azure VMware Solution resource (VMs with a web server) with a public IP.
+>[!TIP]
>To enable egress traffic, you must set **Security configuration** > **Internet traffic** to **Azure Firewall**.
+ ## Deploy Virtual WAN 1. Sign in to the Azure portal and then search for and select **Azure VMware Solution**.
Once all components are deployed, you can see them in the added Resource group.
## Limitations
-You can have 100 public IPs per SDDCs.
+You can have 100 public IPs per private cloud.
## Next steps
azure-vmware Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/windows-server-failover-cluster.md
The following diagram illustrates the architecture of WSFC virtual nodes on an A
Currently, the following configurations are supported: -- Microsoft Windows Server 2012 or later.-- Up to five failover clustering nodes per cluster.-- Up to four PVSCSI adapters per VM.-- Up to 64 disks per PVSCSI adapter.
+- Microsoft Windows Server 2012 or later
+- Up to five failover clustering nodes per cluster
+- Up to four PVSCSI adapters per VM
+- Up to 64 disks per PVSCSI adapter
## Virtual Machine configuration requirements
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
batch Batch Application Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-application-packages.md
Title: Deploy application packages to compute nodes description: Use the application packages feature of Azure Batch to easily manage multiple applications and versions for installation on Batch compute nodes. Previously updated : 09/24/2020 Last updated : 03/24/2021 - H1Hack27Feb2017 - devx-track-csharp
With application packages, your pool's start task doesn't have to specify a long
You can use the [Azure portal](https://portal.azure.com) or the Batch Management APIs to manage the application packages in your Batch account. The following sections explain how to link a storage account, and how to add and manage applications and application packages in the Azure portal.
+> [!NOTE]
+> While you can define application values in the [Microsoft.Batch/batchAccounts](/templates/microsoft.batch/batchaccounts) resource of an [ARM template](quick-create-template.md), it's not currently possible to use an ARM template to upload application packages to use in your Batch account. You must upload them to your linked storage account as described [below](#add-a-new-application).
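+For illustration, here's a minimal sketch of how an application definition might look in an ARM template. The API version, account name, and application name are assumptions for this example, and package versions still need to be uploaded separately as the note describes:

```json
{
  "type": "Microsoft.Batch/batchAccounts/applications",
  "apiVersion": "2021-01-01",
  "name": "mybatchaccount/ffmpeg",
  "properties": {
    "displayName": "ffmpeg",
    "allowUpdates": true
  }
}
```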
+ ### Link a storage account To use application packages, you must link an [Azure Storage account](accounts.md#azure-storage-accounts) to your Batch account. The Batch service will use the associated storage account to store your application packages. We recommend that you create a storage account specifically for use with your Batch account.
batch Batch Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-custom-images.md
To scale Batch pools reliably with a managed image, we recommend creating the ma
If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. To get a full list of Azure Marketplace image references supported by Azure Batch, see the [List node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus) operation. > [!NOTE]
-> You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#deploy-an-image-with-marketplace-terms) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#deploy-an-image-with-marketplace-terms) VMs.
+> You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#check-the-purchase-plan-information) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#view-purchase-plan-properties) VMs.
- Ensure the VM is created with a managed disk. This is the default storage setting when you create a VM. - Do not install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a pre-installed extension, Azure may encounter problems when deploying the Batch pool.
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 11/24/2020 Last updated : 03/18/2021
When you select a node size for an Azure Batch pool, you can choose from among a
## Supported VM series and sizes
-There are a few exceptions and limitations to choosing a VM size for your Batch pool:
--- Some VM series or VM sizes are not supported in Batch.-- Some VM sizes are restricted and need to be specifically enabled before they can be allocated.- ### Pools in Virtual Machine configuration Batch pools in the Virtual Machine configuration support almost all [VM sizes](../virtual-machines/sizes.md). See the following table to learn more about supported sizes and restrictions.
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| A | All sizes *except* Standard_A0, Standard_A8, Standard_A9, Standard_A10, Standard_A11 | | Av2 | All sizes | | B | Not supported |
-| DC | Not supported |
+| DCsv2 | All sizes |
| Dv2, DSv2 | All sizes | | Dv3, Dsv3 | All sizes |
-| Dav4 | All sizes |
-| Dasv4 | All sizes |
+| Dav4, Dasv4 | All sizes |
| Ddv4, Ddsv4 | All sizes | | Dv4, Dsv4 | Not supported | | Ev3, Esv3 | All sizes, except for E64is_v3 |
-| Eav4 | All sizes |
-| Easv4 | All sizes |
+| Eav4, Easv4 | All sizes |
| Edv4, Edsv4 | All sizes | | Ev4, Esv4 | Not supported | | F, Fs | All sizes |
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| H | All sizes | | HB | All sizes | | HBv2 | All sizes |
+| HBv3 | Standard_HB120rs_v3 (other sizes not yet available) |
| HC | All sizes | | Ls | All sizes | | Lsv2 | All sizes |
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| NC | All sizes | | NCv2 | All sizes | | NCv3 | All sizes |
-| NCasT4_v3 | None - not yet available |
+| NCasT4_v3 | All sizes |
| ND | All sizes | | NDv2 | None - not yet available | | NV | All sizes |
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-sig-images.md
The following steps show how to prepare a VM, take a snapshot, and create an ima
If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. To get a full list of Azure Marketplace image references supported by Azure Batch, see the [List node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus) operation. > [!NOTE]
-> You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#deploy-an-image-with-marketplace-terms) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#deploy-an-image-with-marketplace-terms) VMs.
+> You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#check-the-purchase-plan-information) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#view-purchase-plan-properties) VMs.
Follow these guidelines when creating VMs:
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
batch Quick Run Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/quick-run-python.md
After completing this quickstart, you'll understand key concepts of the Batch se
- A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md). -- [Python](https://python.org/downloads) version 2.7 or 3.3 or later, including the [pip](https://pip.pypa.io/en/stable/installing/) package manager
+- [Python](https://python.org/downloads) version 2.7 or 3.6, including the [pip](https://pip.pypa.io/en/stable/installing/) package manager
## Sign in to Azure
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-parallel-python.md
In this tutorial, you convert MP4 media files in parallel to MP3 format using th
## Prerequisites
-* [Python version 2.7 or 3.3 or later](https://www.python.org/downloads/)
+* [Python version 2.7 or 3.6+](https://www.python.org/downloads/)
* [pip](https://pip.pypa.io/en/stable/installing/) package manager
cdn Cdn Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-traffic-manager.md
After you configure your CDN and Traffic Manager profiles, follow these steps to
> [!NOTE] > If your domain is currently live and cannot be interrupted, do this step last. Verify that the CDN endpoints and traffic manager domains are live before you update your custom domain DNS to Traffic Manager. >-
+
+ > [!NOTE]
+ > To implement this failover scenario, both endpoints must be in different profiles, and the profiles should be from different CDN providers to avoid domain name conflicts.
+ >
2. From your Azure CDN profile, select the first CDN endpoint (Akamai). Select **Add custom domain** and input **cdndemo101.dustydogpetcare.online**. Verify that the checkmark to validate the custom domain is green.
To test the functionality, disable the primary CDN endpoint and verify that the
## Next steps You can configure other routing methods, such as geographic, to balance the load among different CDN endpoints.
-For more information, see [Configure the geographic traffic routing method using Traffic Manager](../traffic-manager/traffic-manager-configure-geographic-routing-method.md).
+For more information, see [Configure the geographic traffic routing method using Traffic Manager](../traffic-manager/traffic-manager-configure-geographic-routing-method.md).
cloud-services-extended-support Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/faq.md
Template and parameter files are only used for deployment automation. Like Cloud
### How does my application code change on Cloud Services (extended support) There are no changes required for your application code packaged in cspkg. Your existing applications will continue to work as before.
+### Does Cloud Services (extended support) allow the CTP package format?
+The CTP package format is not supported in Cloud Services (extended support). However, Cloud Services (extended support) allows an enhanced package size limit of 800 MB.
## Migration
Cloud Services (extended support) has adopted the same process as other compute
No. Key Vault is a regional resource and customers need one Key Vault in each region. However, one Key Vault can be used for all deployments within a given region. ## Next steps
-To start using Cloud Services (extended support), see [Deploy a Cloud Service (extended support) using PowerShell](deploy-powershell.md)
+To start using Cloud Services (extended support), see [Deploy a Cloud Service (extended support) using PowerShell](deploy-powershell.md)
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-python-ptvs.md
This article provides an overview of using Python web and worker roles using [Py
* [Azure SDK Tools for VS 2013][Azure SDK Tools for VS 2013] or [Azure SDK Tools for VS 2015][Azure SDK Tools for VS 2015] or [Azure SDK Tools for VS 2017][Azure SDK Tools for VS 2017]
-* [Python 2.7 32-bit][Python 2.7 32-bit] or [Python 3.5 32-bit][Python 3.5 32-bit]
+* [Python 2.7 32-bit][Python 2.7 32-bit] or [Python 3.8 32-bit][Python 3.8 32-bit]
[!INCLUDE [create-account-and-websites-note](../../includes/create-account-and-websites-note.md)]
Your cloud service can contain roles implemented in different languages. For ex
The main problem with the setup scripts is that they do not install python. First, define two [startup tasks](cloud-services-startup-tasks.md) in the [ServiceDefinition.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file. The first task (**PrepPython.ps1**) downloads and installs the Python runtime. The second task (**PipInstaller.ps1**) runs pip to install any dependencies you may have.
-The following scripts were written targeting Python 3.5. If you want to use the version 2.x of python, set the **PYTHON2** variable file to **on** for the two startup tasks and the runtime task: `<Variable name="PYTHON2" value="<mark>on</mark>" />`.
+The following scripts were written targeting Python 3.8. If you want to use version 2.x of Python, set the **PYTHON2** variable to **on** for the two startup tasks and the runtime task: `<Variable name="PYTHON2" value="<mark>on</mark>" />`.
```xml <Startup>
The **PYTHON2** and **PYPATH** variables must be added to the worker startup tas
Next, create the **PrepPython.ps1** and **PipInstaller.ps1** files in the **./bin** folder of your role. #### PrepPython.ps1
-This script installs python. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is installed, otherwise Python 3.5 is installed.
+This script installs Python. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is installed; otherwise, Python 3.8 is installed.
```powershell [Net.ServicePointManager]::SecurityProtocol = "tls12, tls11, tls"
if (-not $is_emulated){
if (-not $?) {
- $url = "https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe"
- $outFile = "${env:TEMP}\python-3.5.2-amd64.exe"
+ $url = "https://www.python.org/ftp/python/3.8.8/python-3.8.8-amd64.exe"
+ $outFile = "${env:TEMP}\python-3.8.8-amd64.exe"
if ($is_python2) {
- $url = "https://www.python.org/ftp/python/2.7.12/python-2.7.12.amd64.msi"
- $outFile = "${env:TEMP}\python-2.7.12.amd64.msi"
+ $url = "https://www.python.org/ftp/python/2.7.18/python-2.7.18.amd64.msi"
+ $outFile = "${env:TEMP}\python-2.7.18.amd64.msi"
} Write-Output "Not found, downloading $url to $outFile$nl"
if (-not $is_emulated){
``` #### PipInstaller.ps1
-This script calls up pip and installs all of the dependencies in the **requirements.txt** file. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used, otherwise Python 3.5 is used.
+This script calls up pip and installs all of the dependencies in the **requirements.txt** file. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used; otherwise, Python 3.8 is used.
```powershell $is_emulated = $env:EMULATED -eq "true"
if (-not $is_emulated){
The **bin\LaunchWorker.ps1** was originally created to do a lot of prep work but it doesn't really work. Replace the contents in that file with the following script.
-This script calls the **worker.py** file from your python project. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used, otherwise Python 3.5 is used.
+This script calls the **worker.py** file from your Python project. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used; otherwise, Python 3.8 is used.
```powershell $is_emulated = $env:EMULATED -eq "true"
For more details about using Azure services from your web and worker roles, such
[Azure SDK Tools for VS 2015]: https://go.microsoft.com/fwlink/?LinkId=746481 [Azure SDK Tools for VS 2017]: https://go.microsoft.com/fwlink/?LinkId=746483 [Python 2.7 32-bit]: https://www.python.org/downloads/
-[Python 3.5 32-bit]: https://www.python.org/downloads/
+[Python 3.8 32-bit]: https://www.python.org/downloads/
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/quickstart-powershell.md
MyResourceGroup MyVM1 eastus Standard_DS1 Windows S
MyResourceGroup MyVM2 eastus Standard_DS2_v2_Promo Windows Succeeded deallocated ```
-## Navigate Azure resources
-
- 1. List all your subscriptions from `Azure` drive
-
- ```azurepowershell-interactive
- PS Azure:\> dir
- ```
-
- 2. `cd` to your preferred subscription
-
- ```azurepowershell-interactive
- PS Azure:\> cd MySubscriptionName
- PS Azure:\MySubscriptionName>
- ```
-
- 3. View all your Azure resources under the current subscription
-
- Type `dir` to list multiple views of your Azure resources.
-
- ```azurepowershell-interactive
- PS Azure:\MySubscriptionName> dir
-
- Directory: azure:\MySubscriptionName
-
- Mode Name
- ----  ----
- + AllResources
- + ResourceGroups
- + StorageAccounts
- + VirtualMachines
- + WebApps
- ```
-
-### AllResources view
-
-Type `dir` under `AllResources` directory to view your Azure resources.
-
-```azurepowershell-interactive
-PS Azure:\MySubscriptionName> dir AllResources
-```
-
-### Explore resource groups
-
- You can go to the `ResourceGroups` directory and inside a specific resource group you can find virtual machines.
-
-```azurepowershell-interactive
-PS Azure:\MySubscriptionName> cd ResourceGroups\MyResourceGroup1\Microsoft.Compute\virtualMachines
-
-PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup1\Microsoft.Compute\virtualMachines> dir
-
- Directory: Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup1\Microsoft.Compute\virtualMachines
-
-VMName Location ProvisioningState VMSize OS SKU OSVersion AdminUserName NetworkInterfaceName
------- -------- ----------------- ------ -- --- --------- ------------- --------------------
-TestVm1 westus Succeeded Standard_DS2_v2 WindowsServer 2016-Datacenter Latest AdminUser demo371
-TestVm2 westus Succeeded Standard_DS1_v2 WindowsServer 2016-Datacenter Latest AdminUser demo271
-```
-
-> [!NOTE]
-> You may notice that the second time when you type `dir`, the Cloud Shell is able to display the items much faster.
-> This is because the child items are cached in memory for a better user experience.
-> However, you can always use `dir -Force` to get fresh data.
-
-### Navigate storage resources
-
-By entering into the `StorageAccounts` directory, you can easily navigate all your storage resources
-
-```azurepowershell-interactive
-PS Azure:\MySubscriptionName\StorageAccounts\MyStorageAccountName\Files> dir
-
- Directory: Azure:\MySubscriptionName\StorageAccounts\MyStorageAccountName\Files
-
-Name ConnectionString
-- -
-MyFileShare1 \\MyStorageAccountName.file.core.windows.net\MyFileShare1;AccountName=MyStorageAccountName;AccountKey=<key>
-MyFileShare2 \\MyStorageAccountName.file.core.windows.net\MyFileShare2;AccountName=MyStorageAccountName;AccountKey=<key>
-MyFileShare3 \\MyStorageAccountName.file.core.windows.net\MyFileShare3;AccountName=MyStorageAccountName;AccountKey=<key>
-```
-
-With the connection string, you can use the following command to mount the Azure Files share.
-
-```azurepowershell-interactive
-net use <DesiredDriveLetter>: \\<MyStorageAccountName>.file.core.windows.net\<MyFileShareName> <AccountKey> /user:Azure\<MyStorageAccountName>
-```
-
-For details, see [Mount an Azure Files share and access the share in Windows][azmount].
-
-You can also navigate the directories under the Azure Files share as follows:
-
-```azurepowershell-interactive
-PS Azure:\MySubscriptionName\StorageAccounts\MyStorageAccountName\Files> cd .\MyFileShare1\
-PS Azure:\MySubscriptionName\StorageAccounts\MyStorageAccountName\Files\MyFileShare1> dir
-
-Mode Name
-- -
-+ TestFolder
-. hello.ps1
-```
### Interact with virtual machines

You can find all your virtual machines under the current subscription via the `VirtualMachines` directory.
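For example, reusing the subscription name from earlier in this quickstart:

```azurepowershell-interactive
PS Azure:\MySubscriptionName> dir VirtualMachines
```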
Type `exit` to terminate the session.
[customex]:https://docs.microsoft.com/azure/virtual-machines/windows/extensions-customscript [profile]: /powershell/module/microsoft.powershell.core/about/about_profiles [azmount]: ../storage/files/storage-how-to-use-files-windows.md
-[githubtoken]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+[githubtoken]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
If your app or business depends on the use of a Custom Vision project, we recomm
- Two Azure Custom Vision resources. If you don't have them, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true).
- The training keys and endpoint URLs of your Custom Vision resources. You can find these values on the resource's **Overview** tab on the Azure portal.
- A created Custom Vision project. See [Build a classifier](./getting-started-build-a-classifier.md) for instructions on how to do this.
-* [PowerShell version 6.0+](https://docs.microsoft.com/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line utility.
+* [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line utility.
## Process overview
cognitive-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/export-model-python.md
After you have [exported your TensorFlow model](./export-your-model.md) from the
To use the tutorial, you need to do the following:

-- Install either Python 2.7+ or Python 3.5+.
+- Install either Python 2.7+ or Python 3.6+.
- Install pip.

Next, you'll need to install the following packages:
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
This guide shows you how to use these REST APIs with cURL. You can also use an H
- A Custom Vision resource in Azure. If you don't have one, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true). This feature doesn't currently support the Cognitive Service resource (all in one key).
- An Azure Storage account with a blob container. Follow [Exercise 1 of the Azure Storage Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise1) if you need help with this step.
-* [PowerShell version 6.0+](https://docs.microsoft.com/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application.
+* [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application.
## Set up Azure storage integration
The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`
In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.

* [REST API reference documentation (training)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
-* [REST API reference documentation (prediction)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
+* [REST API reference documentation (prediction)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/ReleaseNotes.md
The Azure Face service is updated on an ongoing basis. Use this article to stay
## February 2021
-* New Face API detection model: The new detection 03 model is the most accurate detection model currently available. If you're a new a customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining detection 03 with the new recognition 04 will provide improved recognition accuracy as well. See [Specify a face detection model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model) for more details.
-* Face mask attribute: The face mask attribute is available with the latest detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model) for more details.
-* New Face API Recognition Model: The new recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of recognition 03, including improved recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). Now customers can build safe and seamless user experiences that detect whether an enrolled user is wearing a face cover with the latest detection 03 model, and recognize who they are with the latest recognition 04 model. See [Specify a face recognition model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-recognition-model) for more details.
+* New Face API detection model: The new detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining detection 03 with the new recognition 04 will provide improved recognition accuracy as well. See [Specify a face detection model](./face-api-how-to-topics/specify-detection-model.md) for more details.
+* Face mask attribute: The face mask attribute is available with the latest detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./face-api-how-to-topics/specify-detection-model.md) for more details.
+* New Face API Recognition Model: The new recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of recognition 03, including improved recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). Now customers can build safe and seamless user experiences that detect whether an enrolled user is wearing a face cover with the latest detection 03 model, and recognize who they are with the latest recognition 04 model. See [Specify a face recognition model](./face-api-how-to-topics/specify-recognition-model.md) for more details.
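Taken together, a detection request that opts into both new models looks roughly like this sketch. The resource name, key, and image URL are placeholders; the query parameters follow the Face REST API reference:

```powershell
# Sketch of a Face detect call using detection_03, recognition_04, and the mask attribute.
$uri = "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect" +
       "?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceAttributes=mask"
$headers = @{
    "Ocp-Apim-Subscription-Key" = "<your-key>"
    "Content-Type"              = "application/json"
}
$body = '{ "url": "https://example.com/photo.jpg" }'
Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body
```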
## January 2021
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
Cognitive Search instance can be isolated via a Private Endpoint after the QnA M
If the QnA Maker App Service is restricted using an App Service Environment, use the same VNet to create a Private Endpoint connection to the Cognitive Search instance. Create a new DNS entry in the VNet to map the Cognitive Search endpoint to the Cognitive Search Private Endpoint IP address.
-If an App Service Environment is not used for the QnAMaker App Service, create a new VNet resource first and then create the Private Endpoint connection to the Cognitive Search instance. In this case, the QnA Maker App Service needs [to be integrated with the VNet](https://docs.microsoft.com/azure/app-service/web-sites-integrate-with-vnet) to connect to the Cognitive Search instance.
+If an App Service Environment is not used for the QnAMaker App Service, create a new VNet resource first and then create the Private Endpoint connection to the Cognitive Search instance. In this case, the QnA Maker App Service needs [to be integrated with the VNet](../../../app-service/web-sites-integrate-with-vnet.md) to connect to the Cognitive Search instance.
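If you script that setup, the private endpoint piece looks roughly like this sketch using the Az.Network module, where `$searchService`, `$subnet`, and the resource names are placeholders:

```powershell
# Hypothetical private endpoint to a Cognitive Search instance; names are placeholders.
$conn = New-AzPrivateLinkServiceConnection -Name "search-pe-conn" `
    -PrivateLinkServiceId $searchService.Id -GroupId "searchService"

New-AzPrivateEndpoint -Name "search-private-endpoint" -ResourceGroupName "myRG" `
    -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $conn
```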
# [QnA Maker managed (preview release)](#tab/v2)

[Create Private endpoints](../reference-private-endpoint.md) to the Azure Search resource.
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
- Upgraded to new Microsoft Audio Stack (MAS) with improved beamforming and noise reduction for speech.
- Reduced the binary size by as much as 70% depending on target.
-- Support for [Azure Percept Audio](https://docs.microsoft.com/azure/azure-percept/overview-azure-percept-audio) with [binary release](https://aka.ms/sdsdk-download-APAudio).
+- Support for [Azure Percept Audio](../../azure-percept/overview-azure-percept-audio.md) with [binary release](https://aka.ms/sdsdk-download-APAudio).
- Updated the [Speech SDK](./speech-sdk.md) component to version 1.15.0. For more information, see its [release notes](./releasenotes.md).

## Speech Devices SDK 1.11.0:
cognitive-services How To Automatic Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-automatic-language-detection.md
In this article, you'll learn how to use `AutoDetectSourceLanguageConfig` to con
## Automatic language detection with the Speech SDK
-Automatic language detection currently has a services-side limit of four languages per detection. Keep this limitation in mind when construction your `AudoDetectSourceLanguageConfig` object. In the samples below, you'll create an `AutoDetectSourceLanguageConfig`, then use it to construct a `SpeechRecognizer`.
+Automatic language detection currently has a service-side limit of four languages per detection. Keep this limitation in mind when constructing your `AutoDetectSourceLanguageConfig` object. In the samples below, you'll create an `AutoDetectSourceLanguageConfig`, then use it to construct a `SpeechRecognizer`.
> [!TIP] > You can also specify a custom model to use when performing speech to text. For more information, see [Use a custom model for automatic language detection](#use-a-custom-model-for-automatic-language-detection).
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial pose events

> [!NOTE]
-> Viseme only works for `en-US-AriaNeural` voice in West US (`westus`) region for now, and will be available for all `en-US` voices by the end of April, 2021.
+> Viseme only works for `en-US-AriaNeural` voice in West US 2 (`westus2`) region for now.
A viseme is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when speaking a word.
There is no one-to-one correspondence between visemes and phonemes.
Often several phonemes correspond to a single viseme, as several phonemes look the same on the face when produced, such as `s` and `z`. See the [mapping table between visemes and phonemes](#map-phonemes-to-visemes).
-Using visemes, you can create more natural and intelligent news broadcast assistant, more interactive gaming and cartoon characters, and more intuitive language teaching videos. The hearing-impaired can also pick up sounds visually and "lip-read" speech content that shows visemes on an animated face.
+Using visemes, you can create more natural and intelligent news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. People with hearing impairment can also pick up sounds visually and "lip-read" speech content that shows visemes on an animated face.
## Get viseme events with the Speech SDK
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
- **C++/C#/Java/Objective-C/Python**: Added support to decode compressed TTS/synthesized audio with the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. This can lower the bandwidth needed for your use case. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.propertyid?view=azure-java-stable), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxpropertyid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid?view=azure-python).
- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_), allowing customers to send the path on disk to a wav file to the SDK which the SDK will then recognize. This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices programmatically. This allows you to list available voices in your application, or programmatically choose from different voices. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-dotnet#methods), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-java-stable#methods), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoices), and [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer?view=azure-python#methods).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. Visemes enable you to create more natural news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. The hearing-impaired can also pick up sounds visually and "lip-read" any speech content. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. Visemes enable you to create more natural news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. People with hearing impairment can also pick up sounds visually and "lip-read" any speech content. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. You might use this in your application to take an action when certain words are spoken by text-to-speech. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
- **Java**: Added support for speaker recognition APIs, allowing you to use speaker recognition from Java. Details [here](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).
- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-java-stable), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat?view=azure-node-latest), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-python).
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK can be used for transcribing call center scenarios, where telepho
### Codec compressed audio input
-Several of the Speech SDK programming languages support codec compressed audio input streams. For more information, see <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams" target="_blank">use compressed audio input formats </a>.
+Several of the Speech SDK programming languages support codec compressed audio input streams. For more information, see <a href="/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams" target="_blank">use compressed audio input formats </a>.
**Codec compressed audio input** is available on the following platforms:
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Depending on the Speech SDK language, you'll set the `"SpeechServiceResponse_Syn
# [C#](#tab/csharp)
-For more information, see <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.setproperty" target="_blank"> `SetProperty` </a>.
+For more information, see <a href="/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.setproperty" target="_blank"> `SetProperty` </a>.
```csharp speechConfig.SetProperty(
speechConfig.SetProperty(
# [C++](#tab/cpp)
-For more information, see <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#setproperty" target="_blank"> `SetProperty` </a>.
+For more information, see <a href="/cpp/cognitive-services/speech/speechconfig#setproperty" target="_blank"> `SetProperty` </a>.
```cpp speechConfig->SetProperty(
speechConfig->SetProperty(
# [Java](#tab/java)
-For more information, see <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setproperty#com_microsoft_cognitiveservices_speech_SpeechConfig_setProperty_String_String_" target="_blank"> `setProperty` </a>.
+For more information, see <a href="/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setproperty#com_microsoft_cognitiveservices_speech_SpeechConfig_setProperty_String_String_" target="_blank"> `setProperty` </a>.
```java speechConfig.setProperty(
speechConfig.setProperty(
# [Python](#tab/python)
-For more information, see <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#set-property-by-name-property-name--str--value--str-" target="_blank"> `set_property_by_name` </a>.
+For more information, see <a href="/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#set-property-by-name-property-name--str--value--str-" target="_blank"> `set_property_by_name` </a>.
```python speech_config.set_property_by_name(
speech_config.set_property_by_name(
# [JavaScript](#tab/javascript)
-For more information, see <a href="https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#setproperty-string--string-" target="_blank"> `setProperty`</a>.
+For more information, see <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#setproperty-string--string-" target="_blank"> `setProperty`</a>.
```javascript speechConfig.setProperty(
speechConfig.setProperty(
# [Objective-C](#tab/objectivec)
-For more information, see <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#setpropertytobyname" target="_blank"> `setPropertyTo` </a>.
+For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechconfiguration#setpropertytobyname" target="_blank"> `setPropertyTo` </a>.
```objectivec [speechConfig setPropertyTo:@"false" byName:@"SpeechServiceResponse_Synthesis_WordBoundaryEnabled"];
For more information, see <a href="https://docs.microsoft.com/objectivec/cogniti
# [Swift](#tab/swift)
-For more information, see <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#setpropertytobyname" target="_blank"> `setPropertyTo` </a>.
+For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechconfiguration#setpropertytobyname" target="_blank"> `setPropertyTo` </a>.
```swift speechConfig!.setPropertyTo(
We will not read out the bookmark elements.
The bookmark element can be used to reference a specific location in the text or tag sequence.

> [!NOTE]
-> `bookmark` element only works for `en-US-AriaNeural` voice in West US (`westus`) region for now.
+> `bookmark` element only works for `en-US-AriaNeural` voice in West US 2 (`westus2`) region for now.
**Syntax**
For more information, see <a href="https://docs.microsoft.com/swift/cognitive-se
## Next steps
-* [Language support: voices, locales, languages](language-support.md)
+* [Language support: voices, locales, languages](language-support.md)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
In this overview, you learn about the benefits and capabilities of the text-to-s
* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service.

> [!NOTE]
-> Viseme only works for `en-US-AriaNeural` voice in West US (`westus`) region for now, and will be available for all `en-US` voices by the end of April, 2021.
+> Viseme only works for `en-US-AriaNeural` voice in West US 2 (`westus2`) region for now.
## Get started
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
The following headers are included with each Document Translator API request:
> [!IMPORTANT]
>
-> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp) for ways to securely store and access your credentials.
+> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](../../cognitive-services-security.md?tabs=command-line%2ccsharp) for ways to securely store and access your credentials.
>
> You may need to update the following fields, depending upon the operation:
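For orientation, a batch request carrying those headers might be shaped like the following sketch. The route version and JSON field names are assumptions based on the batch Document Translation API, and the SAS URLs are placeholders:

```powershell
# Hypothetical batch translation submission; remove the key from code when done.
$headers = @{
    "Ocp-Apim-Subscription-Key" = "<your-key>"
    "Content-Type"              = "application/json"
}
$body = @{
    inputs = @(@{
        source  = @{ sourceUrl = "<sas-url-of-source-container>" }
        targets = @(@{ targetUrl = "<sas-url-of-target-container>"; language = "es" })
    })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri "https://<your-resource>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches"
```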
The table below lists the limits for data that you send to Document Translation.
> [!div class="nextstepaction"]
> [Create a customized language system using Custom Translator](../custom-translator/overview.md)
>
->
+>
cognitive-services Tutorial Build Flask App Translation Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/tutorial-build-flask-app-translation-synthesis.md
# Tutorial: Build a Flask app with Azure Cognitive Services
-In this tutorial, you'll build a Flask web app that uses Azure Cognitive Services to translate text, analyze sentiment, and synthesize translated text into speech. Our focus is on the Python code and Flask routes that enable our application, however, we will help you out with the HTML and Javascript that pulls the app together. If you run into any issues let us know using the feedback button below.
+In this tutorial, you'll build a Flask web app that uses Azure Cognitive Services to translate text, analyze sentiment, and synthesize translated text into speech. Our focus is on the Python code and Flask routes that enable our application; however, we will help you out with the HTML and JavaScript that pulls the app together. If you run into any issues, let us know using the feedback button below.
Here's what this tutorial covers:
For those of you who want to deep dive after this tutorial here are a few helpfu
Let's review the software and subscription keys that you'll need for this tutorial.
-* [Python 3.5.2 or later](https://www.python.org/downloads/)
+* [Python 3.6 or later](https://www.python.org/downloads/)
* [Git tools](https://git-scm.com/downloads)
* An IDE or text editor, such as [Visual Studio Code](https://code.visualstudio.com/) or [Atom](https://atom.io/)
* [Chrome](https://www.google.com/chrome/browser/) or [Firefox](https://www.mozilla.org/firefox)
Now that you have an idea of how a simple Flask app works, let's:
* Write some Python to call the Translator and return a response
* Create a Flask route to call your Python code
* Update the HTML with an area for text input and translation, a language selector, and translate button
-* Write Javascript that allows users to interact with your Flask app from the HTML
+* Write JavaScript that allows users to interact with your Flask app from the HTML
### Call the Translator
Let's update `index.html`.
</div>
```
-The next step is to write some Javascript. This is the bridge between your HTML and Flask route.
+The next step is to write some JavaScript. This is the bridge between your HTML and Flask route.
### Create `main.js`
In this section, you're going to do a few things:
* Write some Python to call the Text Analytics API to perform sentiment analysis and return a response
* Create a Flask route to call your Python code
* Update the HTML with an area for sentiment scores, and a button to perform analysis
-* Write Javascript that allows users to interact with your Flask app from the HTML
+* Write JavaScript that allows users to interact with your Flask app from the HTML
### Call the Text Analytics API
-Let's write a function to call the Text Analytics API. This function will take four arguments: `input_text`, `input_language`, `output_text`, and `output_language`. This function is called whenever a user presses the run sentiment analysis button in your app. Data provided by the user from the text area and language selector, as well as the detected language and translation output are provided with each request. The response object includes sentiment scores for the source and translation. In the following sections, you're going to write some Javascript to parse the response and use it in your app. For now, let's focus on call the Text Analytics API.
+Let's write a function to call the Text Analytics API. This function will take four arguments: `input_text`, `input_language`, `output_text`, and `output_language`. This function is called whenever a user presses the run sentiment analysis button in your app. Data provided by the user from the text area and language selector, as well as the detected language and translation output are provided with each request. The response object includes sentiment scores for the source and translation. In the following sections, you're going to write some JavaScript to parse the response and use it in your app. For now, let's focus on calling the Text Analytics API.
1. Let's create a file called `sentiment.py` in the root of your working directory.
2. Next, add this code to `sentiment.py`.
In this section, you're going to do a few things:
* Write some Python to convert text-to-speech with the Text-to-speech API
* Create a Flask route to call your Python code
* Update the HTML with a button to convert text-to-speech, and an element for audio playback
-* Write Javascript that allows users to interact with your Flask app
+* Write JavaScript that allows users to interact with your Flask app
### Call the Text-to-Speech API
The source code for this project is available on [GitHub](https://github.com/Mic
* [Translator reference](./reference/v3-0-reference.md)
* [Text Analytics API reference](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics.V2.0/operations/56f30ceeeda5650db055a3c7)
-* [Text-to-speech API reference](../speech-service/rest-text-to-speech.md)
+* [Text-to-speech API reference](../speech-service/rest-text-to-speech.md)
cognitive-services Cognitive Services Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-security.md
All of the Cognitive Services endpoints exposed over HTTP enforce TLS 1.2. With
* The language (and platform) used to make the HTTP call needs to specify TLS 1.2 as part of the request
* Depending on the language and platform, specifying TLS is done either implicitly or explicitly
-For .NET users, consider the <a href="https://docs.microsoft.com/dotnet/framework/network-programming/tls" target="_blank">Transport Layer Security best practices </a>.
+For .NET users, consider the <a href="/dotnet/framework/network-programming/tls" target="_blank">Transport Layer Security best practices </a>.
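PowerShell callers can pin the protocol explicitly, much like the startup scripts earlier in this digest do:

```powershell
# Force TLS 1.2 for subsequent web requests in this session.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
```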
## Authentication

When discussing authentication, there are several common misconceptions. Authentication and authorization are often confused for one another. Identity is also a major component in security. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal </a>. Identity providers (IdP) provide identities to authentication services. Authentication is the act of verifying a user's identity. Authorization is the specification of access rights and privileges to resources for a given identity. Several of the Cognitive Services offerings include Azure role-based access control (Azure RBAC). Azure RBAC could be used to simplify some of the ceremony involved with manually managing principals. For more details, see [Azure role-based access control for Azure resources](../role-based-access-control/overview.md).
-For more information on authentication with subscription keys, access tokens and Azure Active Directory (AAD), see <a href="https://docs.microsoft.com/azure/cognitive-services/authentication" target="_blank">authenticate requests to Azure Cognitive Services</a>.
+For more information on authentication with subscription keys, access tokens and Azure Active Directory (AAD), see <a href="/azure/cognitive-services/authentication" target="_blank">authenticate requests to Azure Cognitive Services</a>.
## Environment variables and application configuration
To get an environment variable, it must be read into memory. Depending on the la
# [C#](#tab/csharp)
-For more information, see <a href="https://docs.microsoft.com/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>.
+For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>.
```csharp using static System.Environment;
class Program
# [C++](#tab/cpp)
-For more information, see <a href="https://docs.microsoft.com/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>.
+For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>.
```cpp #include <stdlib.h>
cognitive-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/tutorial-azure-function.md
In this tutorial, you learn how to:
* A local PDF document to analyze. You can download this [sample document](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample-layout.pdf) to use.
* [Python 3.8.x](https://www.python.org/downloads/) installed.
* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) installed.
-* [Azure Functions Core Tools](https://docs.microsoft.com/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash#install-the-azure-functions-core-tools) installed.
+* [Azure Functions Core Tools](../../azure-functions/functions-run-local.md?tabs=windows%2ccsharp%2cbash#install-the-azure-functions-core-tools) installed.
* Visual Studio Code with the following extensions installed:
- * [Azure Functions extension](https://docs.microsoft.com/azure/developer/python/tutorial-vs-code-serverless-python-01#visual-studio-code-python-and-the-azure-functions-extension)
+ * [Azure Functions extension](/azure/developer/python/tutorial-vs-code-serverless-python-01#visual-studio-code-python-and-the-azure-functions-extension)
* [Python extension](https://code.visualstudio.com/docs/python/python-tutorial#_install-visual-studio-code-and-the-python-extension)

## Create an Azure Storage account
In this tutorial, you learned how to use an Azure Function written in Python to
> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/)

* [What is Form Recognizer?](overview.md)
-* Learn more about the [Layout API](concept-layout.md)
+* Learn more about the [Layout API](concept-layout.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/whats-new.md
The Form Recognizer service is updated on an ongoing basis. Use this article to
* **Currency support** - Detection and extraction of global currency symbols.
* **Azure Gov** - Form Recognizer is now also available in Azure Gov.
* **Enhanced security features**:
- * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./form-recognizer-encryption-of-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+ * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./encrypt-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
* **Private endpoints** – Enables you on a virtual network (VNet) to [securely access data over a Private Link](../../private-link/private-link-overview.md).

## June 2020
Complete a [quickstart](quickstarts/client-library.md) to get started writing a
## See also
-* [What is Form Recognizer?](./overview.md)
+* [What is Form Recognizer?](./overview.md)
cognitive-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-create-immersive-reader.md
The script is designed to be flexible. It will first look for existing Immersive
## Next steps

* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](./tutorial-android.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](./tutorial-ios.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](./tutorial-python.md) to see what else you can do with the Immersive Reader SDK using Python
+* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
cognitive-services Set Cookie Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to/set-cookie-policy.md
ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
## Next steps

* View the [Node.js quickstart](../quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](../tutorial-android.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](../tutorial-ios.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](../tutorial-python.md) to see what else you can do with the Immersive Reader SDK using Python
+* View the [Android tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
cognitive-services Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
Virtual network and service endpoint support for Cognitive Services is limited to a specific set of regions.

-- [How to configure Azure Cognitive Services virtual networks](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-virtual-networks?tabs=portal)
+- [How to configure Azure Cognitive Services virtual networks](./cognitive-services-virtual-networks.md?tabs=portal)
- [Overview of Azure Virtual Networks](../virtual-network/virtual-networks-overview.md)
Bear in mind that Cognitive Services containers are required to submit metering
Also note that you must disable deep packet inspection for your firewall solution on the secure channels that the Cognitive Services containers create to Microsoft servers. Failure to do so will prevent the container from functioning correctly.

-- [Understand Azure Cognitive Services container security](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support#azure-cognitive-services-container-security)
+- [Understand Azure Cognitive Services container security](./cognitive-services-container-support.md#azure-cognitive-services-container-security)
**Responsibility**: Customer
If you are using Cognitive Services within a container, you may augment your con
- [How to create an Azure Blueprint](../governance/blueprints/create-blueprint-portal.md)

-- [Understand Azure Cognitive Services container security](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support#azure-cognitive-services-container-security)
+- [Understand Azure Cognitive Services container security](./cognitive-services-container-support.md#azure-cognitive-services-container-security)
**Responsibility**: Customer
Bear in mind that Cognitive Services containers are required to submit metering
Also note that you must disable deep packet inspection for your firewall solution on the secure channels that the Cognitive Services containers create to Microsoft servers. Failure to do so will prevent the container from functioning correctly.

-- [Understand Azure Cognitive Services container security](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support#azure-cognitive-services-container-security)
+- [Understand Azure Cognitive Services container security](./cognitive-services-container-support.md#azure-cognitive-services-container-security)
- [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/?term=Firewall)
Bear in mind that Cognitive Services containers are required to submit metering
Also note that you must disable deep packet inspection for your firewall solution on the secure channels that the Cognitive Services containers create to Microsoft servers. Failure to do so will prevent the container from functioning correctly.

-- [Understand Azure Cognitive Services container security](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support#azure-cognitive-services-container-security)
+- [Understand Azure Cognitive Services container security](./cognitive-services-container-support.md#azure-cognitive-services-container-security)
**Responsibility**: Customer
You may also use application security groups to help simplify complex security c
- [Virtual network service tags](../virtual-network/service-tags-overview.md)

-- [Application Security Groups](https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview#application-security-groups)
+- [Application Security Groups](../virtual-network/network-security-groups-overview.md#application-security-groups)
**Responsibility**: Customer
You can also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Use the Azure Activity log to monitor network resource configurations and detect changes for network resources related to your Cognitive Services container. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place.

-- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
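A minimal query for the underlying events, using the Az.Monitor module (the one-day window is illustrative):

```powershell
# Pull the last day of control-plane events for Cognitive Services resources.
Get-AzActivityLog -ResourceProvider "Microsoft.CognitiveServices" -StartTime (Get-Date).AddDays(-1)
```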
**Responsibility**: Customer
You can also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Enable Azure Activity Log diagnostic settings and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive. Activity logs provide insight into the operations that were performed on your Cognitive Services container at the control plane level. Using Azure Activity Log data, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) performed at the control plane level for your Cognitive Services resources.

-- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
You can also use Azure Blueprints to simplify large-scale Azure deployments by p
Additionally, Cognitive Services sends diagnostics events that can be collected and used for the purposes of analysis, alerting and reporting. You can configure diagnostics settings for a Cognitive Services container via the Azure portal. You can send one or more diagnostics events to a Storage Account, Event Hub, or a Log Analytics workspace.

-- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/diagnostic-settings-legacy)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [Using diagnostic settings for Azure Cognitive Services](diagnostic-logging.md)
Additionally, Cognitive Services sends diagnostics events that can be collected
**Guidance**: Within Azure Monitor, set your Log Analytics Workspace retention period according to your organization's compliance regulations. Use Azure Storage accounts for long-term/archival storage.

-- [How to set log retention parameters for Log Analytics Workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
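For example, a sketch with the Az.OperationalInsights module, where the workspace names and the 180-day figure are placeholders:

```powershell
# Set the workspace retention period to 180 days.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "myRG" -Name "myWorkspace" -RetentionInDays 180
```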
**Responsibility**: Customer
Additionally, Cognitive Services sends diagnostics events that can be collected
**Guidance**: Enable Azure Activity Log diagnostic settings and send the logs to a Log Analytics workspace. These logs provide rich, frequent data about the operation of a resource that are used for issue identification and debugging. Perform queries in Log Analytics to search terms, identify trends, analyze patterns, and provide many other insights based on the Activity Log Data that may have been collected for Azure Cognitive Services.

-- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
-- [How to collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](/azure/azure-monitor/platform/activity-log)
+- [How to collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
Configure diagnostic settings for your Cognitive Services container and send log
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)

-- [Create, view, and manage log alerts using Azure Monitor](/azure/azure-monitor/platform/alerts-log)
+- [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md)
**Responsibility**: Customer
Configure diagnostic settings for your Cognitive Services container and send log
**Guidance**: Azure Active Directory (Azure AD) has built-in roles that must be explicitly assigned and are queryable. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups.

-- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?amp;preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?amp;preserve-view=true&view=azureadps-2.0)
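A minimal sketch of such an ad hoc query with the AzureAD module (note that some module versions report the Global Administrator role under the display name "Company Administrator"):

```powershell
# List members of a privileged directory role.
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "Global Administrator" }
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId
```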
**Responsibility**: Customer
Data plane access to Cognitive Services is controlled through access keys. These
It is not recommended that you build default passwords into your application. Instead, you can store your passwords in Azure Key Vault and then use Azure AD to retrieve them.

-- [How to regenerate Azure Cache for Redis access keys](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-configure#settings)
+- [How to regenerate Azure Cache for Redis access keys](../azure-cache-for-redis/cache-configure.md#settings)
**Responsibility**: Customer
In addition, use Azure AD risk detections to view alerts and reports on risky us
Currently, only the Computer Vision API, Face API, Text Analytics API, Immersive Reader, Form Recognizer, Anomaly Detector, and all Bing services except Bing Custom Search support authentication using Azure AD.

-- [How to authenticate requests to Cognitive Services](https://docs.microsoft.com/azure/cognitive-services/authentication#authenticate-with-azure-active-directory)
+- [How to authenticate requests to Cognitive Services](./authentication.md#authenticate-with-azure-active-directory)
**Responsibility**: Customer
Customer to maintain inventory of API Management user accounts, reconcile access
- [How to manage user accounts in Azure API Management](../api-management/api-management-howto-create-or-invite-developers.md)

-- [How to get list of API Management users](https://docs.microsoft.com/powershell/module/az.apimanagement/get-azapimanagementuser?view=azps-4.8.0&amp;preserve-view=true)
+- [How to get list of API Management users](/powershell/module/az.apimanagement/get-azapimanagementuser?amp;preserve-view=true&view=azps-4.8.0)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Not available for Cognitive Services. Customer Lockbox is not yet supported for Cognitive Services.

-- [List of Customer Lockbox-supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox-supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
**Responsibility**: Customer
Microsoft manages the underlying platform and treats all customer content as sen
You may also use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys.

-- [List of services that encrypt information at rest](/azure/cognitive-services/encryption/cognitive-services-encryption-keys-portal)
+- [List of services that encrypt information at rest](./encryption/cognitive-services-encryption-keys-portal.md)
**Responsibility**: Customer
You may also use Azure Key Vault to store your customer-managed keys. You can ei
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production instances of Cognitive Services and other critical or related resources. -- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
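A minimal sketch of such an alert with the Azure CLI, assuming placeholder names and an existing action group in the same resource group:

```azurecli
# Alert on any write operation against Cognitive Services accounts (placeholder names)
az monitor activity-log alert create \
  --name cogsvc-change-alert \
  --resource-group myResourceGroup \
  --condition "category=Administrative and operationName=Microsoft.CognitiveServices/accounts/write" \
  --action-group myActionGroup
```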
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?amp;preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
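For instance, subscriptions and resources can be enumerated from the Azure CLI; the Resource Graph query requires the `resource-graph` extension:

```azurecli
# List the subscriptions your account can access
az account list --output table

# Query resources at scale with Azure Resource Graph (extension required)
az extension add --name resource-graph
az graph query -q "Resources | project name, type | limit 5"
```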
In addition, use Azure Resource Graph to query or discover resources within the
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
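As a sketch, the built-in policy that denies resource types can be located and assigned from the Azure CLI. The display-name lookup, scope, and parameter name below are assumptions to adapt to your environment:

```azurecli
# Find the built-in "Not allowed resource types" policy definition
az policy definition list --query "[?displayName=='Not allowed resource types'].name" --output tsv

# Assign it at a resource group scope (parameter name is an assumption; verify against the definition)
az policy assignment create --name deny-resource-types --resource-group myResourceGroup \
  --policy <definition-name-from-above> \
  --params '{"listOfResourceTypesNotAllowed": {"value": ["Microsoft.ClassicCompute/virtualMachines"]}}'
```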
**Responsibility**: Customer
In addition, use Azure Resource Graph to query or discover resources within the
**Guidance**: Define and implement standard security configurations for your Cognitive Services container with Azure Policy. Use Azure Policy aliases in the "Microsoft.CognitiveServices" namespace to create custom policies to audit or enforce the configuration of your Azure Cache for Redis instances. -- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?amp;preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
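For example, the available aliases in that namespace can be listed with the Azure CLI:

```azurecli
# List Azure Policy aliases for the Microsoft.CognitiveServices namespace
az provider show --namespace Microsoft.CognitiveServices \
  --expand "resourceTypes/aliases" \
  --query "resourceTypes[].aliases[].name" --output tsv
```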
In addition, use Azure Resource Graph to query or discover resources within the
**Guidance**: If you are using custom Azure Policy definitions or Azure Resource Manager templates for your Cognitive Services containers and related resources, use Azure Repos to securely store and manage your code. -- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?amp;preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?amp;preserve-view=true&view=azure-devops)
**Responsibility**: Customer
In addition, use Azure Resource Graph to query or discover resources within the
- [How to integrate with Azure Managed Identities](../azure-app-configuration/howto-integrate-azure-managed-service-identity.md) -- [How to create a Key Vault](/azure/key-vault/quick-create-portal)
+- [How to create a Key Vault](../key-vault/secrets/quick-create-portal.md)
- [How to authenticate to Key Vault](../key-vault/general/authentication.md)
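A minimal sketch of creating a vault and storing a secret with the Azure CLI, using placeholder names:

```azurecli
# Create a Key Vault and store a secret in it
az keyvault create --name myUniqueVaultName --resource-group myResourceGroup --location eastus
az keyvault secret set --vault-name myUniqueVaultName --name MyAppSecret --value "<secret-value>"
```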
You can also use lifecycle management feature to back up data to the Archive tie
- [Overview of Azure Resource Manager](../azure-resource-manager/management/overview.md) -- [How to create a Cognitive Services resource using an Azure Resource Manager template](https://docs.microsoft.com/azure/cognitive-services/resource-manager-template?tabs=portal)
+- [How to create a Cognitive Services resource using an Azure Resource Manager template](./create-account-resource-manager-template.md?tabs=portal)
- [Single and multi-resource export to a template in Azure portal](../azure-resource-manager/templates/export-template-portal.md)
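For instance, a template like the one linked above can be deployed with the Azure CLI; the template file name and resource group are placeholders:

```azurecli
# Deploy an ARM template that defines a Cognitive Services resource
az deployment group create --resource-group myResourceGroup --template-file cognitive-services.json
```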
You can also use lifecycle management feature to back up data to the Archive tie
- [Introduction to Azure Automation](../automation/automation-intro.md) -- [How to backup key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/backup-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey?amp;preserve-view=true&view=azps-4.8.0)
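Alongside the PowerShell cmdlet above, the Azure CLI can back up a key to a protected file; the names below are placeholders:

```azurecli
# Download an encrypted backup of a Key Vault key
az keyvault key backup --vault-name myVault --name myKey --file mykey.backup
```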
**Responsibility**: Customer
You can also use lifecycle management feature to back up data to the Archive tie
- [Deploy resources with ARM templates and Azure portal](../azure-resource-manager/templates/deploy-portal.md) -- [How to restore key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore key vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey?amp;preserve-view=true&view=azps-4.8.0)
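The matching restore operation, as a sketch with the same placeholder names:

```azurecli
# Restore a Key Vault key from an encrypted backup file
az keyvault key restore --vault-name myVault --file mykey.backup
```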
**Responsibility**: Customer
You can also use lifecycle management feature to back up data to the Archive tie
Use Azure role-based access control to protect customer-managed keys. Enable Soft-Delete and purge protection in Key Vault to protect keys against accidental or malicious deletion. -- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?amp;preserve-view=true&view=azure-devops)
- [About permissions and groups in Azure DevOps](/azure/devops/organizations/security/about-permissions)
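Purge protection can be enabled on an existing vault from the Azure CLI (soft-delete is enabled by default on new vaults); the vault and resource group names are placeholders:

```azurecli
# Enable purge protection so deleted keys can't be permanently removed during the retention period
az keyvault update --name myVault --resource-group myResourceGroup --enable-purge-protection true
```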
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
cognitive-services Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/whats-new-docs.md
Welcome to what's new in the Cognitive Services docs from February 1, 2021 throu
### New articles -- [Azure Cognitive Services containers frequently asked questions (FAQ)](/azure/cognitive-services/containers/container-faq)
+- [Azure Cognitive Services containers frequently asked questions (FAQ)](./containers/container-faq.yml)
### Updated articles -- [Azure Cognitive Services container image tags and release notes](/azure/cognitive-services/containers/container-image-tags)
+- [Azure Cognitive Services container image tags and release notes](./containers/container-image-tags.md)
## Form Recognizer ### Updated articles -- [Deploy the sample labeling tool](/azure/cognitive-services/form-recognizer/deploy-label-tool)-- [What is Form Recognizer?](/azure/cognitive-services/form-recognizer/overview)-- [Train a Form Recognizer model with labels using the sample labeling tool](/azure/cognitive-services/form-recognizer/quickstarts/label-tool)
+- [Deploy the sample labeling tool](./form-recognizer/deploy-label-tool.md)
+- [What is Form Recognizer?](./form-recognizer/overview.md)
+- [Train a Form Recognizer model with labels using the sample labeling tool](./form-recognizer/quickstarts/label-tool.md)
## Text Analytics ### Updated articles -- [Text Analytics API v3 language support](/azure/cognitive-services/text-analytics/language-support)-- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api)
+- [Text Analytics API v3 language support](./text-analytics/language-support.md)
+- [How to call the Text Analytics REST API](./text-analytics/how-tos/text-analytics-how-to-call-api.md)
[!INCLUDE [Service specific updates](./includes/service-specific-updates.md)]
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
+
+ Title: Prepare your organization's network for Azure Communication Services
+
+description: Learn about the network requirements for Azure Communication Services voice and video calling
+ Last updated : 3/23/2021
+# Ensure high-quality media in Azure Communication Services
+
+This document provides an overview of the factors and best practices that should be considered when building high-quality multimedia communication experiences with Azure Communication Services.
+
+## Factors that affect media quality and reliability
+
+There are many different factors that contribute to Azure Communication Services real-time media (audio, video, and application sharing) quality. These include network quality, bandwidth, firewall, host, and device configurations.
+
+### Network quality
+
+The quality of real-time media over IP is significantly impacted by the quality of the underlying network connectivity, but especially by the amount of:
+* **Latency**. This is the time it takes to get an IP packet from point A to point B on the network. This network propagation delay is determined by the physical distance between the two points and any additional overhead incurred by the devices that your traffic flows through. Latency is measured as one-way or round-trip time (RTT).
+* **Packet loss**. The percentage of packets that are lost in a given window of time. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact to back-to-back burst losses that cause complete audio cut-out.
+* **Inter-packet arrival jitter, or simply jitter**. This is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering that a participant will notice its effects.
+
+### Network bandwidth
+
+Ensure that your network is configured to support the bandwidth required by concurrent Azure Communication Services media sessions and other business applications. Testing the end-to-end network path for bandwidth bottlenecks is critical to the successful deployment of your multimedia Communication Services solution.
+
+Below are the bandwidth requirements for the JavaScript client libraries:
+
+|Bandwidth|Scenarios|
+|:--|:--|
+|40 kbps|Peer-to-peer audio calling|
+|500 kbps|Peer-to-peer audio calling and screen sharing|
+|500 kbps|Peer-to-peer quality video calling 360p at 30fps|
+|1.2 Mbps|Peer-to-peer HD quality video calling with resolution of HD 720p at 30fps|
+|500 kbps|Group video calling 360p at 30fps|
+|1.2 Mbps|HD Group video calling with resolution of HD 720p at 30fps|
+
+Below are the bandwidth requirements for the native Android and iOS client libraries:
+
+|Bandwidth|Scenarios|
+|:--|:--|
+|30 kbps|Peer-to-peer audio calling |
+|130 kbps|Peer-to-peer audio calling and screen sharing|
+|500 kbps|Peer-to-peer quality video calling 360p at 30fps|
+|1.2 Mbps|Peer-to-peer HD quality video calling with resolution of HD 720p at 30fps|
+|1.5 Mbps|Peer-to-peer HD quality video calling with resolution of HD 1080p at 30fps |
+|500 kbps/1 Mbps|Group video calling|
+|1 Mbps/2 Mbps|HD Group video calling (540p videos on 1080p screen)|
+
+### Firewall(s) configuration
+
+Azure Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Azure Communication Services can still work without access to all of these ports and IP addresses, but opening the recommended ports and IP ranges provides the optimal experience.
+
+| Category | IP ranges or FQDN | Ports |
+| :-- | :-- | :-- |
+| Media traffic | [Range of Azure Public Cloud IP Addresses](https://www.microsoft.com/download/confirmation.aspx?id=56519) | UDP 3478 through 3481, TCP port 443 |
+| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, *.trouter.io | TCP 443, 80 |
+
+### Network optimization
+
+The following tasks are optional and aren't required for rolling out Azure Communication Services. Use this guidance to optimize your network and Azure Communication Services performance, or if you know you have network limitations.
+You might want to optimize further if:
+* Azure Communication Services runs slowly (maybe you have insufficient bandwidth)
+* Calls keep dropping (might be due to firewall or proxy blockers)
+* Calls have static and cut out, or voices sound like robots (could be jitter or packet loss)
+
+| Network optimization task | Details |
+| :-- | :-- |
+| Plan your network | This documentation describes the minimum network requirements for calls. Refer to the [Teams example for planning your network](https://docs.microsoft.com/microsoftteams/tutorial-network-planner-example) |
+| External name resolution | Be sure that all computers running the Azure Communication Services client libraries can resolve external DNS queries to discover the services provided by Azure Communication Services and that your firewalls are not preventing access. Ensure that the client libraries can resolve addresses *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, *.trouter.io |
+| Maintain session persistence | Make sure your firewall doesn't change the mapped Network Address Translation (NAT) addresses or ports for UDP |
+| Validate NAT pool size | Validate the network address translation (NAT) pool size required for user connectivity. When multiple users and devices access Azure Communication Services using [Network Address Translation (NAT) or Port Address Translation (PAT)](https://docs.microsoft.com/office365/enterprise/nat-support-with-office-365), ensure that the devices hidden behind each publicly routable IP address do not exceed the supported number. Ensure that adequate public IP addresses are assigned to the NAT pools to prevent port exhaustion. Port exhaustion will contribute to internal users and devices being unable to connect to Azure Communication Services |
+| Intrusion Detection and Prevention Guidance | If your environment has an [Intrusion Detection](https://docs.microsoft.com/azure/network-watcher/network-watcher-intrusion-detection-open-source-tools) or Prevention System (IDS/IPS) deployed for an extra layer of security for outbound connections, allow all Azure Communication Services URLs |
+| Configure split-tunnel VPN | We recommend that you provide an alternate path for Teams traffic that bypasses the virtual private network (VPN), commonly known as [split-tunnel VPN](https://docs.microsoft.com/windows/security/identity-protection/vpn/vpn-routing). Split tunneling means that traffic for Azure Communications Services doesn't go through the VPN but instead goes directly to Azure. Bypassing your VPN will have a positive impact on media quality, and it reduces load from the VPN devices and the organization's network. To implement a split-tunnel VPN, work with your VPN vendor. Other reasons why we recommend bypassing the VPN: <ul><li> VPNs are typically not designed or configured to support real-time media.</li><li> VPNs might also not support UDP (which is required for Azure Communication Services)</li><li>VPNs also introduce an extra layer of encryption on top of media traffic that's already encrypted.</li><li>Connectivity to Azure Communication Services might not be efficient due to hair-pinning traffic through a VPN device.</li></ul>|
+| Implement QoS | [Use Quality of Service (QoS)](https://docs.microsoft.com/microsoftteams/qos-in-teams) to configure packet prioritization. This will improve call quality and help you monitor and troubleshoot call quality. QoS should be implemented on all segments of a managed network. Even when a network has been adequately provisioned for bandwidth, QoS provides risk mitigation in the event of unanticipated network events. With QoS, voice traffic is prioritized so that these unanticipated events don't negatively affect quality. |
+| Optimize WiFi | Similar to VPN, WiFi networks aren't necessarily designed or configured to support real-time media. Planning for, or optimizing, a WiFi network to support Azure Communication Services is an important consideration for a high-quality deployment. Consider these factors: <ul><li>Implement QoS or WiFi Multimedia (WMM) to ensure that media traffic is getting prioritized appropriately over your WiFi networks.</li><li>Plan and optimize the WiFi bands and access point placement. The 2.4 GHz range might provide an adequate experience depending on access point placement, but access points are often affected by other consumer devices that operate in that range. The 5 GHz range is better suited to real-time media due to its dense range, but it requires more access points to get sufficient coverage. Endpoints also need to support that range and be configured to leverage those bands accordingly.</li><li>If you're using dual-band WiFi networks, consider implementing band steering. Band steering is a technique implemented by WiFi vendors to influence dual-band clients to use the 5 GHz range.</li><li>When access points of the same channel are too close together, they can cause signal overlap and unintentionally compete, resulting in a degraded user experience. Ensure that access points that are next to each other are on channels that don't overlap.</li></ul> Each wireless vendor has its own recommendations for deploying its wireless solution. Consult your WiFi vendor for specific guidance.|
+
+### Operating systems and browsers (for JavaScript client libraries)
+
+Azure Communication Services voice/video client libraries support certain operating systems and browsers.
+Learn about the operating systems and browsers that the calling client libraries support in the [calling conceptual documentation](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+
+## Next steps
+
+The following documents may be interesting to you:
+
+- Learn more about [calling libraries](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features)
+- Learn about [Client-server architecture](https://docs.microsoft.com/azure/communication-services/concepts/client-and-server-architecture)
+- Learn about [Call flow topologies](https://docs.microsoft.com/azure/communication-services/concepts/call-flows)
communication-services Managed Identity From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity-from-cli.md
An advantage of the Azure Identity client library is that it enables you to use
## Prerequisites
+ - Azure CLI. [Installation guide](/cli/azure/install-azure-cli)
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free) ## Setting Up
The Azure Identity client library reads values from three environment variables
You may also want to: -- [Learn more about Azure Identity library](/dotnet/api/overview/azure/identity-readme)
+- [Learn more about Azure Identity library](/dotnet/api/overview/azure/identity-readme)
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
Find the finalized code for this quickstart on [GitHub](https://github.com/Azure
## Prerequisites - Obtain an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Node.js](https://nodejs.org/en/) Active LTS and Maintenance LTS versions (8.11.1 and 10.14.1)-- Create an active Communication Services resource. [Create a Communication Services resource](https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp).-- Create a User Access Token to instantiate the call client. [Learn how to create and manage user access tokens](https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp).
+- Create an active Communication Services resource. [Create a Communication Services resource](../create-communication-resource.md?pivots=platform-azp&tabs=windows).
+- Create a User Access Token to instantiate the call client. [Learn how to create and manage user access tokens](../access-tokens.md?pivots=programming-language-csharp).
## Setting up ### Create a new Node.js application
You can make an 1:1 outgoing video call by providing a user ID in the text field
You can download the sample app from [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-video-calling). ## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp#clean-up-resources).
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md?pivots=platform-azp&tabs=windows#clean-up-resources).
## Next steps For more information, see the following articles:-- Check out our [web calling sample](https://docs.microsoft.com/azure/communication-services/samples/web-calling-sample)-- Learn about [calling client library capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/calling-client-samples?pivots=platform-web)-- Learn more about [how calling works](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/about-call-types)
+- Check out our [web calling sample](../../samples/web-calling-sample.md)
+- Learn about [calling client library capabilities](./calling-client-samples.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/support.md
With Azure, there are many [support options and plans](https://azure.microsoft.c
## Post a question to Microsoft Q&A
-For quick and reliable answers to product or technical questions you might have about Azure Communication Services from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our community, engage with us on [Microsoft Q&A](https://docs.microsoft.com/answers/products/azure).
+For quick and reliable answers to product or technical questions you might have about Azure Communication Services from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our community, engage with us on [Microsoft Q&A](/answers/products/azure).
-If you can't find an answer to your problem by searching you can, submit a new question to Microsoft Q&A. When creating a question make sure to use the [Azure Communication Services Tag](https://docs.microsoft.com/answers/topics/azure-communication-services.html).
+If you can't find an answer to your problem by searching, you can submit a new question to Microsoft Q&A. When creating a question, make sure to use the [Azure Communication Services Tag](/answers/topics/azure-communication-services.html).
## Post a question on Stack Overflow
-You can also try asking your question on Stack Overflow, which has a large community developer community and ecosystem. Azure Communication Services has a [dedicated tag](https://stackoverflow.com/questions/tagged/azure-communication-services) there too.
+You can also try asking your question on Stack Overflow, which has a large developer community and ecosystem. Azure Communication Services has a [dedicated tag](https://stackoverflow.com/questions/tagged/azure-communication-services) there too.
communication-services Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/postman-tutorial.md
You can learn more about variables by reading [Postman's documentation on them](
### Creating a pre-request script
-The next step is to create a pre-request Script within Postman. A pre-request script, is a script that runs before each request in Postman and can modify or alter request parameters on your behalf. We'll be using this to sign our HTTP requests so that they can be authorized by ACS' Services. For more information about the Signing requirements, you can [read our guide on authentication](https://docs.microsoft.com/rest/api/communication/authentication).
+The next step is to create a pre-request script within Postman. A pre-request script is a script that runs before each request in Postman and can modify or alter request parameters on your behalf. We'll be using this to sign our HTTP requests so that they can be authorized by ACS. For more information about the signing requirements, you can [read our guide on authentication](/rest/api/communication/authentication).
We'll be creating this script within the collection so that it runs on any request within the collection. To do this, within the collection tab, click the "Pre-request Script" sub-tab.
Now that everything is set up, we're ready to create an ACS request within Postm
:::image type="content" source="media/postman/create-request.png" alt-text="Postman's plus button.":::
-This will create a new tab for our request within Postman. With it created we need to configure it. We'll be making a request against the SMS Send API so be sure to refer to the [documentation for this API for assistance](https://docs.microsoft.com/rest/api/communication/sms/send). Let's configure Postman's request.
+This will create a new tab for our request within Postman. Once it's created, we need to configure it. We'll be making a request against the SMS Send API, so be sure to refer to the [documentation for this API for assistance](/rest/api/communication/sms/send). Let's configure Postman's request.
Start by setting the request type to `POST` and entering `{{endpoint}}/sms?api-version=2021-03-07` into the request URL field. This URL uses our previously created `endpoint` variable to automatically send it to your ACS resource.
The Mobile phone, which owns the number you provided in the "to" value, should a
## Next steps > [!div class="nextstepaction"]
-> [Explore ACS APIs](https://docs.microsoft.com/rest/api/communication/)
-> [Read more about Authentication](https://docs.microsoft.com/rest/api/communication/authentication)
+> [Explore ACS APIs](/rest/api/communication/)
+> [Read more about Authentication](/rest/api/communication/authentication)
> [Learn more about Postman](https://learning.postman.com/) You might also want to:
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| Brazil South | 4 | 16 | 4 | 16 | 20 | | Canada Central | 2 | 8 | 2 | 3.5 | 20 | | Central India | 2 | 3.5 | 2 | 3.5 | 20 |
-| Central US | 2 | 3.5 | 2 | 3.5 | 20 |
+| Central US | 2 | 8 | 2 | 3.5 | 20 |
| East Asia | 2 | 3.5 | 2 | 3.5 | 20 |
-| East US | 4 | 16 | 2 | 8 | 20 |
-| East US 2 | 2 | 3.5 | 4 | 16 | 20 |
+| East US | 2 | 8 | 2 | 8 | 20 |
+| East US 2 | 2 | 8 | 4 | 16 | 20 |
| Japan East | 4 | 16 | 4 | 16 | 20 | | Korea Central | 4 | 16 | 4 | 16 | 20 |
-| North Central US | 4 | 16 | 4 | 16 | 20 |
+| North Central US | 2 | 8 | 4 | 16 | 20 |
| North Europe | 2 | 8 | 2 | 8 | 20 |
-| South Central US | 2 | 3.5 | 2 | 8 | 20 |
+| South Central US | 2 | 8 | 2 | 8 | 20 |
| Southeast Asia | N/A | N/A | 2 | 3.5 | 20 | | South India | 2 | 3.5 | 2 | 3.5 | 20 | | UK South | 2 | 8 | 2 | 3.5 | 20 |
-| West Central US | 4 | 16 | 2 | 8 | 20 |
+| West Central US | 2 | 8 | 2 | 8 | 20 |
| West Europe | 4 | 16 | 4 | 16 | 20 |
-| West US | 4 | 16 | 2 | 8 | 20 |
+| West US | 2 | 8 | 2 | 8 | 20 |
| West US 2 | 2 | 8 | 2 | 3.5 | 20 |
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
container-instances Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/security-baseline.md
Control outbound network access from a subnet delegated to Azure Container Insta
You may use Azure Security Center Just-In-Time network access to configure NSGs to limit exposure of endpoints to approved IP addresses for a limited period. Also, use Azure Security Center Adaptive Network Hardening to recommend NSG configurations that limit ports and source IPs based on actual traffic and threat intelligence. -- [How to configure DDoS protection](/azure/virtual-network/manage-ddos-protection)
+- [How to configure DDoS protection](../ddos-protection/manage-ddos-protection.md)
- [How to deploy Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md)
Deploy the firewall solution of your choice at each of your organization's netwo
**Guidance**: If using a cloud-based private registry like Azure container registry with Azure Container Instances, for resources that need access to your container registry, use virtual network service tags for the Azure Container Registry service to define network access controls on Network Security Groups or Azure Firewall. You can use service tags in place of specific IP addresses when creating security rules. By specifying the service tag name "AzureContainerRegistry" in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change. -- [Allow access by service tag](https://docs.microsoft.com/azure/container-registry/container-registry-firewall-access-rules#allow-access-by-service-tag)
+- [Allow access by service tag](../container-registry/container-registry-firewall-access-rules.md#allow-access-by-service-tag)
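A hedged sketch of an NSG rule that uses this service tag, with placeholder names and priority:

```azurecli
# Allow outbound HTTPS traffic to Azure Container Registry using its service tag
az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg \
  --name AllowAcrOutbound --priority 200 --direction Outbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 \
  --destination-address-prefixes AzureContainerRegistry
```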
**Responsibility**: Customer
You can use Azure Blueprints to simplify large-scale Azure deployments by packag
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to your container registries. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
You can use Azure Blueprints to simplify large-scale Azure deployments by packag
**Guidance**: Within Azure Monitor, set your Log Analytics workspace retention period according to your organization's compliance regulations. Use Azure Storage Accounts for long-term/archival storage. -- [How to set log retention parameters for Log Analytics workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
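For example, the retention period can be adjusted with the Azure CLI; the workspace and resource group names are placeholders:

```azurecli
# Set Log Analytics workspace retention to 90 days
az monitor log-analytics workspace update --resource-group myResourceGroup \
  --workspace-name myWorkspace --retention-time 90
```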
**Responsibility**: Customer
You can use Azure Blueprints to simplify large-scale Azure deployments by packag
**Guidance**: Analyze and monitor Azure Container Instances logs for anomalous behavior and regularly review results. Use Azure Monitor's Log Analytics workspace to review logs and perform queries on log data. -- [Understand Log Analytics workspace](/azure/azure-monitor/log-query/log-analytics-tutorial)
+- [Understand Log Analytics workspace](../azure-monitor/logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
- [How to create a log-enabled container group and query logs](container-instances-log-analytics.md)
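As a sketch, logs can also be queried from the Azure CLI. The workspace GUID is a placeholder, and `ContainerInstanceLog_CL` assumes the custom table created by the container-group logging setup described in the linked article:

```azurecli
# Query container instance logs in a Log Analytics workspace
az monitor log-analytics query --workspace <workspace-guid> \
  --analytics-query "ContainerInstanceLog_CL | take 10"
```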
You can use Azure Blueprints to simplify large-scale Azure deployments by packag
- [Azure Container Registry logs for diagnostic evaluation and auditing](../container-registry/container-registry-diagnostics-audit-logs.md) -- [How to alert on log analytics log data](/azure/azure-monitor/learn/tutorial-response)
+- [How to alert on log analytics log data](../azure-monitor/alerts/tutorial-response.md)
**Responsibility**: Customer
You can use Azure Blueprints to simplify large-scale Azure deployments by packag
If using a cloud-based private registry like Azure container registry with Azure Container Instances, for each Azure container registry, track whether the built-in admin account is enabled or disabled. Disable the account when not in use. -- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?amp;preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?amp;preserve-view=true&view=azureadps-2.0)
-- [Azure Container Registry admin account](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Azure Container Registry admin account](../container-registry/container-registry-authentication.md#admin-account)
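For example, you can check whether the admin account is enabled and disable it with the Azure CLI; the registry name is a placeholder:

```azurecli
# Check whether the admin account is enabled, then disable it
az acr show --name myregistry --query adminUserEnabled
az acr update --name myregistry --admin-enabled false
```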
**Responsibility**: Customer
If using a cloud-based private registry like Azure container registry with Azure
If using a cloud-based private registry like Azure container registry with Azure Container Instances, if the default admin account of an Azure container registry is enabled, complex passwords are automatically created and should be rotated. Disable the account when not in use. -- [Azure Container Registry admin account](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Azure Container Registry admin account](../container-registry/container-registry-authentication.md#admin-account)
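While the admin account is in use, its passwords can be rotated from the Azure CLI; the registry name is again a placeholder:

```azurecli
# Regenerate one of the two admin passwords for a container registry
az acr credential renew --name myregistry --password-name password
```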
**Responsibility**: Customer
If using a cloud-based private registry like Azure container registry with Azure
- [Understand Azure Security Center Identity and Access](../security-center/security-center-identity-access.md) -- [Azure Container Registry admin account](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Azure Container Registry admin account](../container-registry/container-registry-authentication.md#admin-account)
**Responsibility**: Customer
If using a cloud-based private registry like Azure container registry with Azure
- [Understand SSO with Azure AD](../active-directory/manage-apps/what-is-single-sign-on.md) -- [Individual sign in to a container registry](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#individual-login-with-azure-ad)
+- [Individual sign in to a container registry](../container-registry/container-registry-authentication.md#individual-login-with-azure-ad)
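For example, an individual Azure AD identity signs in to a registry with the Azure CLI (placeholder registry name):

```azurecli
# Authenticate to a container registry with your Azure AD identity
az acr login --name myregistry
```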
**Responsibility**: Customer
If using a cloud-based private registry like Azure container registry with Azure
**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right Users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure identity access reviews](../active-directory/governance/access-reviews-overview.md)
If using a cloud-based private registry like Azure container registry with Azure
You can streamline this process by creating Diagnostic Settings for Azure AD user accounts and sending the audit logs and sign in logs to a Log Analytics Workspace. You can configure desired Alerts within Log Analytics Workspace. -- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
**Responsibility**: Customer
You can streamline this process by creating Diagnostic Settings for Azure AD use
**Guidance**: Not available; Customer Lockbox not currently supported for Azure Container Instances. -- [List of Customer Lockbox supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
**Responsibility**: Customer
For the underlying platform which is managed by Microsoft, Microsoft treats all
Follow Azure Security Center recommendations for encryption at rest and encryption in transit, where applicable. -- [Understand encryption in transit with Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit)
+- [Understand encryption in transit with Azure](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit)
**Responsibility**: Shared
For the underlying platform which is managed by Microsoft, Microsoft treats all
- [Understand encryption at rest in Azure](../security/fundamentals/encryption-atrest.md) -- [Customer-managed keys in Azure Container Registry](https://aka.ms/acr/cmk)
+- [Customer-managed keys in Azure Container Registry](../container-registry/container-registry-customer-managed-keys.md)
**Responsibility**: Customer
For the underlying platform which is managed by Microsoft, Microsoft treats all
- [Container monitoring and scanning security recommendations for Azure Container Instances](container-instances-image-security.md) -- [Azure Container Registry integration with Security Center](/azure/security-center/azure-container-registry-integration)
+- [Azure Container Registry integration with Security Center](../security-center/defender-for-container-registries-introduction.md)
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?amp;preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Use Azure Resource Graph to query/discover resources within their subscription(s
- [Azure Container Registry logs for diagnostic evaluation and auditing](../container-registry/container-registry-diagnostics-audit-logs.md) -- [Understand Log Analytics Workspace](/azure/azure-monitor/log-query/log-analytics-tutorial)
+- [Understand Log Analytics Workspace](../azure-monitor/logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
**Responsibility**: Customer
Use Azure Resource Graph to query/discover resources within their subscription(s
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Use Azure Resource Graph to query/discover resources within their subscription(s
**Guidance**: Use operating system specific configurations or third-party resources to limit users' ability to execute scripts within Azure compute resources. -- [For example, how to control PowerShell script execution in Windows Environments](https://docs.microsoft.com/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-7&amp;preserve-view=true)
+- [For example, how to control PowerShell script execution in Windows Environments](/powershell/module/microsoft.powershell.security/set-executionpolicy?amp;preserve-view=true&view=powershell-7)
**Responsibility**: Customer
If using a cloud-based private registry like Azure Container Registry (ACR) with
**Guidance**: If using custom Azure policy definitions, use Azure Repos to securely store and manage your code. -- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&amp;preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?amp;preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?amp;preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Back up customer-managed keys in Azure Key Vault using Azure command-line tools
- [Import container images to a container registry](../container-registry/container-registry-import-images.md) -- [How to backup key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/backup-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey?amp;preserve-view=true&view=azps-4.8.0)
- [Encrypting deployment data with Container Instances](container-instances-encrypt-data.md)
Back up customer-managed keys in Azure Key Vault using Azure command-line tools
**Guidance**: Test restoration of backed up customer-managed keys in Azure Key Vault using Azure command-line tools or SDKs. -- [How to restore Azure Key Vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore Azure Key Vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey?amp;preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
Additionally, mark subscriptions using tags and create a naming system to identi
- [Security alerts in Azure Security Center](../security-center/security-center-alerts-overview.md) -- [Use tags to organize your Azure resources](/azure/azure-resource-manager/resource-group-using-tags)
+- [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md)
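A minimal sketch of tagging a resource group from the Azure CLI, with placeholder values:

```azurecli
# Tag a resource group to mark its environment
az group update --name myResourceGroup --set tags.Environment=Production
```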
**Responsibility**: Customer
Additionally, mark subscriptions using tags and create a naming system to identi
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
container-registry Container Registry Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-faq.md
Title: Frequently asked questions
description: Answers for frequently asked questions related to the Azure Container Registry service Previously updated : 09/18/2020 Last updated : 03/15/2021
Image quarantine is currently a preview feature of ACR. You can enable the quara
### How do I enable anonymous pull access?
-Setting up an Azure container registry for anonymous (public) pull access is currently a preview feature. If you have any [scope map (user) or token resources](./container-registry-repository-scoped-permissions.md) in your registry, please delete them before raising a support ticket (system scope maps can be ignored). To enable public access, please open a support ticket at https://aka.ms/acr/support/create-ticket. For details, see the [Azure Feedback Forum](https://feedback.azure.com/forums/903958-azure-container-registry/suggestions/32517127-enable-anonymous-access-to-registries).
+Setting up an Azure container registry for anonymous (unauthenticated) pull access is currently a preview feature, available in the Standard and Premium [service tiers](container-registry-skus.md).
+
+To enable anonymous pull access, update a registry using the Azure CLI (version 2.21.0 or later) and pass the `--anonymous-pull-enabled` parameter to the [az acr update](/cli/azure/acr#az_acr_update) command:
+
+```azurecli
+az acr update --name myregistry --anonymous-pull-enabled
+```
+
+You may disable anonymous pull access at any time by setting `--anonymous-pull-enabled` to `false`.
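For example, using the same placeholder registry name as above:

```azurecli
az acr update --name myregistry --anonymous-pull-enabled false
```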
> [!NOTE]
-> * Only the APIs required to pull a known image can be accessed anonymously. No other APIs for operations like tag list or repository list are accessible anonymously.
> * Before attempting an anonymous pull operation, run `docker logout` to ensure that you clear any existing Docker credentials.
+> * Only data-plane operations are available to unauthenticated clients.
+> * The registry may throttle a high rate of unauthenticated requests.
+
+> [!WARNING]
+> Anonymous pull access currently applies to all repositories in the registry. If you manage repository access using [repository-scoped tokens](container-registry-repository-scoped-permissions.md), be aware that all users may pull from those repositories in a registry enabled for anonymous pull. We recommend deleting tokens when anonymous pull access is enabled.
### How do I push non-distributable layers to a registry?
container-registry Container Registry Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-storage.md
Title: Container image storage description: Details on how your container images and other artifacts are stored in Azure Container Registry, including security, redundancy, and capacity.- Previously updated : 03/03/2021+ Last updated : 03/24/2021
All container images and other artifacts in your registry are encrypted at rest.
## Regional storage
-Azure Container Registry stores data in the region where the registry is created, to help customers meet data residency and compliance requirements.
+Azure Container Registry stores data in the region where the registry is created, to help customers meet data residency and compliance requirements. In all regions except Brazil South and Southeast Asia, Azure may also store registry data in a paired region in the same geography. In the Brazil South and Southeast Asia regions, registry data is always confined to the region, to accommodate data residency requirements for those regions.
-To help guard against datacenter outages, some regions offer [zone redundancy](zone-redundancy.md), where data is replicated across multiple datacenters in a particular region.
-
-Customers who wish to have their data stored in multiple regions for better performance across different geographies or who wish to have resiliency in the event of a regional outage should enable [geo-replication](container-registry-geo-replication.md).
+If a regional outage occurs, the registry data may become unavailable and is not automatically recovered. Customers who wish to have their registry data stored in multiple regions for better performance across different geographies or who wish to have resiliency in the event of a regional outage should enable [geo-replication](container-registry-geo-replication.md).
## Geo-replication
For scenarios requiring high-availability assurance, consider using the [geo-rep
## Zone redundancy
-To create a resilient and high-availability Azure container registry, optionally enable [zone redundancy](zone-redundancy.md) in select Azure regions. A feature of the Premium service tier, zone redundancy uses Azure [availability zones](../availability-zones/az-overview.md) to replicate your registry to a minimum of three separate zones in each enabled region. Combine geo-replication and zone redundancy to enhance both the reliability and performance of a registry.
+To help create a resilient and high-availability Azure container registry, optionally enable [zone redundancy](zone-redundancy.md) in select Azure regions. A feature of the Premium service tier, zone redundancy uses Azure [availability zones](../availability-zones/az-overview.md) to replicate your registry to a minimum of three separate zones in each enabled region. Combine geo-replication and zone redundancy to enhance both the reliability and performance of a registry.
## Scalable storage
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
The following constraints are applicable on the operational data in Azure Cosmos
* Expect different behavior in regard to explicit `null` values: * Spark pools in Azure Synapse will read these values as `0` (zero).
- * SQL serverless pools in Azure Synapse will read these values as `NULL` if the first document of the collection has, for the same property, a value with a datatype different of `integer`.
- * SQL serverless pools in Azure Synapse will read these values as `0` (zero) if the first document of the collection has, for the same property, a value that is an integer.
+ * SQL serverless pools in Azure Synapse will read these values as `NULL` if the first document of the collection has, for the same property, a value with a `non-numeric` datatype.
+ * SQL serverless pools in Azure Synapse will read these values as `0` (zero) if the first document of the collection has, for the same property, a value with a `numeric` datatype.
* Expect different behavior in regard to missing columns: * Spark pools in Azure Synapse will represent these columns as `undefined`.
cosmos-db Create Cassandra Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-python.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.-- [Python 2.7.14+ or 3.4+](https://www.python.org/downloads/).
+- [Python 2.7 or 3.6+](https://www.python.org/downloads/).
- [Git](https://git-scm.com/downloads). - [Python Driver for Apache Cassandra](https://github.com/datastax/python-driver).
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 03/17/2021 Last updated : 03/24/2021
resourceGroupName='<myResourceGroup>'
accountName='<myCosmosAccount>' readOnlyRoleDefinitionId = '<roleDefinitionId>' // as fetched above principalId = '<aadPrincipalId>'
-az cosmosdb sql role assignment create --account-name $accountName --resource-group --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId
+az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId
``` ## Initialize the SDK with Azure AD
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-templates.md
Previously updated : 10/14/2020 Last updated : 03/24/2021 # Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates+ [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
This template creates an Azure Cosmos account, database and container with
:::code language="json" source="~/quickstart-templates/101-cosmosdb-sql-container-sprocs/azuredeploy.json":::
+<a id="create-rbac"></a>
+
+## Azure Cosmos DB account with Azure AD and RBAC
+
+This template creates a SQL Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure AD identity. The template is also available for one-click deployment from the Azure Quickstart Templates gallery.
+
+[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-cosmosdb-sql-rbac%2Fazuredeploy.json)
++ <a id="free-tier"></a> ## Free tier Azure Cosmos DB account
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
cosmos-db Table Storage How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-python.md
ms.devlang: python Previously updated : 07/23/2020 Last updated : 03/23/2021
for task in tasks:
print(task.description) ```
+## Query for an entity without partition and row keys
+
+You can also query for entities within a table without using the partition and row keys. Use the `table_service.query_entities` method without the "filter" and "select" parameters as shown in the following example:
+
+```python
+print("Get the first item from the table")
+tasks = table_service.query_entities(
+ 'tasktable')
+lst = list(tasks)
+print(lst[0])
+```
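For contrast, passing the `filter` parameter restricts the results to matching entities. The filter string below uses the standard OData-style syntax the Table service accepts; the partition key value is a hypothetical example.

```python
# The same method with a filter string returns only matching entities.
tasks = table_service.query_entities(
    'tasktable', filter="PartitionKey eq 'tasksSeattle'")
for task in tasks:
    print(task.description)
```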
+ ## Delete an entity Delete an entity by passing its **PartitionKey** and **RowKey** to the [delete_entity][py_delete_entity] method.
cosmos-db Templates Samples Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/templates-samples-sql.md
Previously updated : 10/14/2020 Last updated : 03/24/2021
This article only shows Azure Resource Manager template examples for Core (SQL)
|[Create an Azure Cosmos account, database, container with analytical store](manage-with-templates.md#create-analytical-store) | This template creates a Core (SQL) API account in one region with a container configured with Analytical TTL enabled and option to use manual or autoscale throughput. | |[Create an Azure Cosmos account, database, container with standard (manual) throughput](manage-with-templates.md#create-manual) | This template creates a Core (SQL) API account in two regions, a database and container with standard throughput. | |[Create an Azure Cosmos account, database and container with a stored procedure, trigger and UDF](manage-with-templates.md#create-sproc) | This template creates a Core (SQL) API account in two regions with a stored procedure, trigger and UDF for a container. |
+|[Create an Azure Cosmos account with Azure AD identity, Role Definitions and Role Assignment](manage-with-templates.md#create-rbac) | This template creates a Core (SQL) API account with Azure AD identity, Role Definitions and Role Assignment on a Service Principal. |
|[Create a private endpoint for an existing Azure Cosmos account](how-to-configure-private-endpoints.md#create-a-private-endpoint-by-using-a-resource-manager-template) | This template creates a private endpoint for an existing Azure Cosmos Core (SQL) API account in an existing virtual network. | |[Create a free-tier Azure Cosmos account](manage-with-templates.md#free-tier) | This template creates an Azure Cosmos DB Core (SQL) API account on free-tier. |
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/allocate-costs.md
Title: Allocate Azure costs
description: This article explains how create cost allocation rules to distribute costs of subscriptions, resource groups, or tags to others. Previously updated : 08/11/2020 Last updated : 03/23/2021
In the Azure portal, navigate to **Cost Management + Billing** > **Cost Manageme
:::image type="content" source="./media/allocate-costs/tagged-costs.png" alt-text="Example showing costs for tagged items" lightbox="./media/allocate-costs/tagged-costs.png" :::
+Here's a video that demonstrates how to create a cost allocation rule.
+
+>[!VIDEO https://www.youtube.com/embed/nYzIIs2mx9Q]
++ ## Edit an existing cost allocation rule You can edit a cost allocation rule to change the source or the target, or to update the prefilled percentage for the compute, storage, or network options. Edit the rules in the same way you create them. Modifying existing rules can take up to two hours to reprocess.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Title: Tutorial - Create and manage exported data from Azure Cost Management
description: This article shows you how you can create and manage exported Azure Cost Management data so that you can use it in external systems. Previously updated : 12/7/2020 Last updated : 03/24/2021
Initially, it can take 12-24 hours before the export runs. However, it can take
### [Azure CLI](#tab/azure-cli)
+When you create an export programmatically, you must manually register the `Microsoft.CostManagementExports` resource provider with the subscription where the storage account resides. Registration happens automatically when you create the export using the Azure portal. For more information about how to register resource providers, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
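Registration is a one-time operation per subscription. If you'd rather script it than use the portal, a rough sketch with the `azure-mgmt-resource` and `azure-identity` Python packages might look like the following; the credential type and subscription ID placeholder are assumptions, and any Azure AD token credential would work.

```python
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

# Assumes you're already signed in with `az login`; substitute your own
# subscription ID for the placeholder below.
credential = AzureCliCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Register the provider required for programmatically created exports.
provider = client.providers.register("Microsoft.CostManagementExports")
print(provider.registration_state)
```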
+ Start by preparing your environment for the Azure CLI: [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../../includes/azure-cli-prepare-your-environment-no-header.md)]
az costmanagement export delete --name DemoExport --scope "subscriptions/0000000
### [Azure PowerShell](#tab/azure-powershell)
+When you create an export programmatically, you must manually register the `Microsoft.CostManagementExports` resource provider with the subscription where the storage account resides. Registration happens automatically when you create the export using the Azure portal. For more information about how to register resource providers, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+ Start by preparing your environment for Azure PowerShell: [!INCLUDE [azure-powershell-requirements-no-header.md](../../../includes/azure-powershell-requirements-no-header.md)]
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Previously updated : 02/11/2021 Last updated : 03/24/2021
This article provides high-level steps used to transfer Azure subscriptions to a
Before you start a transfer request, you should download or export any cost and billing information that you want to keep. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md).
-If you have any existing reservations, they stop applying after you transfer a subscription. Be sure to [cancel any reservations and refund them](../reservations/exchange-and-refund-azure-reservations.md) before you transfer a subscription.
+If you have any existing reservations, they stop applying 90 days after you transfer a subscription. Be sure to [cancel any reservations and refund them](../reservations/exchange-and-refund-azure-reservations.md) before you transfer a subscription to avoid charges after the 90-day grace period.
## Transfer EA subscriptions to a CSP partner
cost-management-billing Understand Storage Charges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/understand-storage-charges.md
Title: Understand how the reservation discount is applied to Azure Storage | Microsoft Docs
-description: Learn about how the Azure Storage reserved capacity discount is applied to block blob and Azure Data Lake Storage Gen2 resources.
+ Title: Understand how reservation discounts are applied to Azure storage services | Microsoft Docs
+description: Learn about how reserved capacity discounts are applied to Azure Blob storage, Azure Files, and Azure Data Lake Storage Gen2 resources.
Previously updated : 02/13/2020 Last updated : 03/08/2021
-# Understand how the reservation discount is applied to Azure Storage
+# Understand how reservation discounts are applied to Azure storage services
+Azure storage services enable you to save money on storage costs by reserving capacity. Azure Blob storage, Azure Files, and Azure Data Lake Storage Gen2 support reserved capacity. After you purchase reserved capacity, the reservation discount is automatically applied to the storage resources that match the terms of the reservation. The reservation discount applies to storage capacity only. Bandwidth and request rate are charged at pay-as-you-go rates.
-After you purchase Azure Storage reserved capacity, the reservation discount is automatically applied to block blob and Azure Data Lake Storage Gen2 resources that match the terms of the reservation. The reservation discount applies to storage capacity only. Bandwidth and request rate are charged at pay-as-you-go rates.
+For more information about Azure Blob storage and Azure Data Lake storage Gen 2 reserved capacity, see [Optimize costs for Blob storage with reserved capacity](../../storage/blobs/storage-blob-reserved-capacity.md). For more information about Azure Files reserved capacity, see [Optimize costs for Azure Files with reserved capacity](../../storage/files/files-reserve-capacity.md).
-For more information about Azure Storage reserved capacity, see [Optimize costs for Blob storage with reserved capacity](../../storage/blobs/storage-blob-reserved-capacity.md).
-
-For information about Azure Storage reservation pricing, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) and [Azure Data Lake Storage Gen 2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
+For information about Azure Blob storage reservation pricing, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) and [Azure Data Lake Storage Gen 2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). For information about Azure Files storage reservation pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files).
## How the reservation discount is applied
+The reserved capacity discount is applied to supported storage resources on an hourly basis.
-The Azure Storage reserved capacity discount is applied to block blob and Azure Data Lake Storage Gen2 resources on an hourly basis.
-
-The Azure Storage reserved capacity discount is a "use-it-or-lose-it" discount. If you don't have any block blob or Azure Data Lake Storage Gen2 resources that meet the terms of the reservation for a given hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+The reserved capacity discount is a "use-it-or-lose-it" discount. If you don't have any block blobs, Azure file shares, or Azure Data Lake Storage Gen2 resources that meet the terms of the reservation for a given hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you delete a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost. ## Discount examples
+The following examples show how the reserved capacity discount applies, depending on the deployments.
-The following examples show how the Azure Storage reserved capacity discount applies, depending on the deployments.
-
-Suppose that you have purchased 100 TB of reserved capacity in the in US West 2 region for a 1-year term. Your reservation is for locally redundant storage (LRS) in the hot access tier.
+Suppose that you have purchased 100 TiB of reserved capacity in the US West 2 region for a 1-year term. Your reservation is for locally redundant storage (LRS) blob storage in the hot access tier.
Assume that the cost of this sample reservation is $18,540. You can either choose to pay the full amount up front or to pay fixed monthly installments of $1,545 per month for the next twelve months. For these examples, assume that you have signed up for a monthly reservation payment plan. The following scenarios describe what happens if you under-use or overuse your reserved capacity. ### Underusing your capacity-
-Suppose that in a given hour within the reservation period, you used only 80 TB of your 100 TB reserved capacity. The remaining 20 TB is not applied for that hour and does not carry over.
+Suppose that in a given hour within the reservation period, you used only 80 TiB of your 100 TiB reserved capacity. The remaining 20 TiB is not applied for that hour and does not carry over.
### Overusing your capacity-
-Suppose that in a given hour within the reservation period, you used 101 TB of storage capacity. The reservation discount applies to 100 TB of your data, and the remaining 1 TB is charged at pay-as-you-go rates for that hour. If in the next hour your usage changes to 100 TB, then all usage is covered by the reservation.
+Suppose that in a given hour within the reservation period, you used 101 TiB of storage capacity. The reservation discount applies to 100 TiB of your data, and the remaining 1 TiB is charged at pay-as-you-go rates for that hour. If in the next hour your usage changes to 100 TiB, then all usage is covered by the reservation.
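The hour-by-hour accounting in these examples reduces to a small calculation. Here's a sketch of the mechanics; the pay-as-you-go rate below is invented purely for illustration, not a real price.

```python
RESERVED_TIB = 100
PAYG_RATE_PER_TIB_HOUR = 0.03  # assumed illustrative rate, not a real price

def hourly_overage_charge(used_tib: float) -> float:
    """Charge for usage that falls outside the reservation in a given hour."""
    overage = max(0.0, used_tib - RESERVED_TIB)
    return overage * PAYG_RATE_PER_TIB_HOUR

print(hourly_overage_charge(80))   # 0.0  -- the unused 20 TiB simply lapses
print(hourly_overage_charge(101))  # 0.03 -- only the 1 TiB overage is billed
```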
## Need help? Contact us- If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). ## Next steps- - [Optimize costs for Blob storage with reserved capacity](../../storage/blobs/storage-blob-reserved-capacity.md)
+- [Optimize costs for Azure Files with reserved capacity](../../storage/files/files-reserve-capacity.md)
- [What are Azure Reservations?](save-compute-costs-reservations.md)
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
Previously updated : 02/18/2020 Last updated : 03/24/2021 # Copy data to or from Azure Data Explorer by using Azure Data Factory
The following sections provide details about properties that are used to define
## Linked service properties
-The Azure Data Explorer connector uses service principal authentication. Follow these steps to get a service principal and to grant permissions:
+The Azure Data Explorer connector supports the following authentication types. See the corresponding sections for details:
+
+- [Service principal authentication](#service-principal-authentication)
+- [Managed identities for Azure resources authentication](#managed-identity)
+
+### Service principal authentication
+
+To use service principal authentication, follow these steps to get a service principal and to grant permissions:
1. Register an application entity in Azure Active Directory by following the steps in [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of the following values, which you use to define the linked service:
The Azure Data Explorer connector uses service principal authentication. Follow
- **As sink**, grant at least the **Database ingestor** role to your database >[!NOTE]
->When you use the Data Factory UI to author, your login user account is used to list Azure Data Explorer clusters, databases, and tables. Manually enter the name if you don't have permission for these operations.
+>When you use the Data Factory UI to author, by default your login user account is used to list Azure Data Explorer clusters, databases, and tables. You can choose to list the objects using the service principal by selecting the dropdown next to the refresh button, or you can manually enter the name if you don't have permission for these operations.
The following properties are supported for the Azure Data Explorer linked service:
The following properties are supported for the Azure Data Explorer linked servic
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. This is known as "Authority ID" in [Kusto connection string](/azure/kusto/api/connection-strings/kusto#application-authentication-properties). Retrieve it by hovering the mouse pointer in the upper-right corner of the Azure portal. | Yes | | servicePrincipalId | Specify the application's client ID. This is known as "AAD application client ID" in [Kusto connection string](/azure/kusto/api/connection-strings/kusto#application-authentication-properties). | Yes | | servicePrincipalKey | Specify the application's key. This is known as "AAD application key" in [Kusto connection string](/azure/kusto/api/connection-strings/kusto#application-authentication-properties). Mark this field as a **SecureString** to store it securely in Data Factory, or [reference secure data stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
-**Linked service properties example:**
+**Example: using service principal key authentication**
```json {
The following properties are supported for the Azure Data Explorer linked servic
} ```
+### <a name="managed-identity"></a> Managed identities for Azure resources authentication
+
+To use managed identities for Azure resources authentication, follow these steps to grant permissions:
+
+1. [Retrieve the Data Factory managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your factory.
+
+2. Grant the managed identity the correct permissions in Azure Data Explorer. See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) for detailed information about roles and permissions and about managing permissions. In general, you must:
+
+ - **As source**, grant at least the **Database viewer** role to your database
+ - **As sink**, grant at least the **Database ingestor** role to your database
+
+>[!NOTE]
+>When you use the Data Factory UI to author, your login user account is used to list Azure Data Explorer clusters, databases, and tables. Manually enter the name if you don't have permission for these operations.
+
+The following properties are supported for the Azure Data Explorer linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property must be set to **AzureDataExplorer**. | Yes |
+| endpoint | Endpoint URL of the Azure Data Explorer cluster, with the format as `https://<clusterName>.<regionName>.kusto.windows.net`. | Yes |
+| database | Name of database. | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example: using managed identity authentication**
+
+```json
+{
+ "name": "AzureDataExplorerLinkedService",
+ "properties": {
+ "type": "AzureDataExplorer",
+ "typeProperties": {
+ "endpoint": "https://<clusterName>.<regionName>.kusto.windows.net ",
+ "database": "<database name>",
+ }
+ }
+}
+```
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see [Datasets in Azure Data Factory](concepts-datasets-linked-services.md). This section lists properties that the Azure Data Explorer dataset supports.
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delimited-text.md
description: 'This topic describes how to deal with delimited text format in Azu
Previously updated : 12/07/2020 Last updated : 03/23/2021
The below table lists the properties supported by a delimited text sink. You can
| Name | Description | Required | Allowed values | Data flow script property | | - | -- | -- | -- | - | | Clear the folder | If the destination folder is cleared prior to write | no | `true` or `false` | truncate |
-| File name option | The naming format of the data written. By default, one file per partition in format `part-#####-tid-<guid>` | no | Pattern: String <br> Per partition: String[] <br> As data in column: String <br> Output to single file: `['<fileName>']` | filePattern <br> partitionFileNames <br> rowUrlColumn <br> partitionFileNames |
+| File name option | The naming format of the data written. By default, one file per partition in format `part-#####-tid-<guid>` | no | Pattern: String <br> Per partition: String[] <br> Name file as column data: String <br> Output to single file: `['<fileName>']` <br> Name folder as column data: String | filePattern <br> partitionFileNames <br> rowUrlColumn <br> partitionFileNames <br> rowFolderUrlColumn |
| Quote all | Enclose all values in quotes | no | `true` or `false` | quoteAll |
+`rowFolderUrlColumn`: the data flow script property that is set when you choose the **Name folder as column data** file name option.
+ ### Sink example The below image is an example of a delimited text sink configuration in mapping data flows.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 03/17/2021 Last updated : 03/24/2021 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Data Lake Analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/data-lake-analytics-whats-new.md
The runtime version will be updated aperiodically. And the previous runtime will
The following version is the current default runtime version. -- release-20200124live_adl_16283022_2
+- **release_20200707_scope_2b8d563_usql**
To learn how to troubleshoot U-SQL runtime failures, see [Troubleshoot U-SQL runtime failures](runtime-troubleshoot.md).
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
databox-online Azure Stack Edge Gpu Manage Virtual Machine Network Interfaces Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md
+
+ Title: How to manage VMs network interfaces on your Azure Stack Edge Pro via the Azure portal
+description: Learn how to manage network interfaces on VMs that are deployed on your Azure Stack Edge Pro GPU via the Azure portal.
++++++ Last updated : 03/23/2021+
+Customer intent: As an IT admin, I need to understand how to manage network interfaces on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.
++
+# Use the Azure portal to manage network interfaces on the VMs on your Azure Stack Edge Pro GPU
++
+You can create and manage virtual machines (VMs) on an Azure Stack Edge device using the Azure portal, templates, Azure PowerShell cmdlets, and Azure CLI/Python scripts. This article describes how to manage the network interfaces on a VM running on your Azure Stack Edge device using the Azure portal.
+
+When you create a VM, you specify one virtual network interface to be created. You may want to add one or more network interfaces to the virtual machine after it is created. You may also want to change the default network interface settings for an existing network interface.
+
+This article explains how to add a network interface to an existing VM, change existing settings such as IP type (static vs. dynamic), and finally remove or detach an existing interface.
+
+
+## About network interfaces on VMs
+
+A network interface enables a virtual machine (VM) running on your Azure Stack Edge Pro device to communicate with Azure and on-premises resources. When you enable a port for compute network on your device, a virtual switch is created on that network interface. This virtual switch is then used to deploy compute workloads such as VMs or containerized applications on your device.
+
+Your device supports only one virtual switch but multiple virtual network interfaces. Each network interface on your VM has a static or a dynamic IP address assigned to it. With IP addresses assigned to multiple network interfaces on your VM, certain capabilities are enabled on your VM. For example, your VM can host multiple websites or services with different IP addresses and SSL certificates on a single server. A VM on your device can serve as a network virtual appliance, such as a firewall or a load balancer.
+
++
+## Prerequisites
+
+Before you begin to manage VMs on your device via the Azure portal, make sure that:
+
+1. You have enabled a network interface for compute on your device. This action creates a virtual switch on that network interface on your VM.
+ 1. In the local UI of your device, go to **Compute**. Select the network interface that you will use to create a virtual switch.
+
+ > [!IMPORTANT]
+ > You can only configure one port for compute.
+
+ 1. Enable compute on the network interface. Azure Stack Edge Pro GPU creates and manages a virtual switch corresponding to that network interface.
+
+1. You have at least one VM deployed on your device. To create this VM, see the instructions in [Deploy VM on your Azure Stack Edge Pro via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+
+1. Your VM should be in **Stopped** state. To stop your VM, go to **Virtual machines > Overview** and select the VM you want to stop. In the VM properties page, select **Stop** and then select **Yes** when prompted for confirmation. Before you add, edit, or delete network interfaces, you must stop the VM.
+
+ ![Stop VM from VM properties page](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/stop-vm-2.png)
++
+## Add a network interface
+
+Follow these steps to add a network interface to a virtual machine deployed on your device.
+
+1. Go to the virtual machine that you have stopped and then go to the **VM Properties** page. Select **Networking**.
+
+ ![Select Networking on VM properties page](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-1.png)
+
+2. In the **Networking** blade, from the command bar, select **+ Add network interface**.
+
+ ![Select add network interface](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-2.png)
+
+3. In the **Add network interface** blade, enter the following parameters:
+
+
+ |Parameter |Description |
+ |---|---|
+ |Name | A unique name within the resource group. The name cannot be changed after the network interface is created. To manage multiple network interfaces easily, use the suggestions provided in the [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). |
+ |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. There is only one virtual network associated with your device. |
+ |Subnet | A subnet within the selected virtual network. This field is automatically populated with the subnet associated with the network interface on which you enabled compute. |
+ |IP assignment | A static or a dynamic IP for your network interface. The static IP should be an available, free IP from the specified subnet range. Choose dynamic if a DHCP server exists in the environment. |
+
+ ![Add a network interface blade](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-3.png)
+
+4. You'll see a notification that the network interface creation is in progress.
+
+ ![Notification when network interface is getting created](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-4.png)
+
+5. After the network interface is successfully created, the list of network interfaces refreshes to display the newly created interface.
+
+ ![Updated list of network interfaces](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-5.png)
++
+## Edit a network interface
+
+Follow these steps to edit a network interface associated with a virtual machine deployed on your device.
+
+1. Go to the virtual machine that you have stopped and go to the **VM Properties** page. Select **Networking**.
+
+1. In the list of network interfaces, select the interface that you wish to edit. At the far right of the selected network interface, select the edit icon (pencil).
+
+ ![Select a network interface to edit](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/edit-nic-1.png)
+
+1. In the **Edit network interface** blade, you can only change the IP assignment of the network interface. The name, virtual network, and subnet associated with the network interface can't be changed once it is created. Change the **IP assignment** to static and save the changes.
+
+ ![Change IP assignment for the network interface](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/edit-nic-2.png)
+
+1. The list of network interfaces refreshes to display the updated network interface.
++
+## Detach a network interface
+
+Follow these steps to detach or remove a network interface associated with a virtual machine deployed on your device.
+
+1. Go to the virtual machine that you have stopped and go to the **VM Properties** page. Select **Networking**.
+
+1. In the list of network interfaces, select the interface that you wish to detach. At the far right of the selected network interface, select the detach icon (unplug).
+
+ ![Select a network interface to detach](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/detach-nic-1.png)
+
+1. After the interface is completely detached, the list of network interfaces is refreshed to display the remaining interfaces.
+
+## Next steps
+
+To learn how to deploy virtual machines on your Azure Stack Edge Pro device, see [Deploy virtual machines via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
databox-online Azure Stack Edge Mini R Manage Wifi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-manage-wifi.md
Previously updated : 10/28/2020 Last updated : 03/24/2021
Do the following steps in the local UI of your device to add and connect to a Wi
A wireless network profile contains the SSID (network name), password key, and security information to be able to connect to a wireless network. You can get the Wi-Fi profile for your environment from your network administrator.
+ For information about preparing your Wi-Fi profiles, see [Use Wi-Fi profiles with Azure Stack Edge Mini R devices](azure-stack-edge-mini-r-use-wifi-profiles.md).
+ ![Local web UI "Port WiFi Network settings" 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-2.png) After the profile is added, the list of Wi-Fi profiles updates to reflect the new profile. The profile should show the **Connection status** as **Disconnected**.
databox-online Azure Stack Edge Mini R Use Wifi Profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-use-wifi-profiles.md
+
+ Title: Use Wi-Fi profiles with Azure Stack Edge Mini R devices
+description: Describes how to create Wi-Fi profiles for Azure Stack Edge Mini R devices on high-security enterprise networks and personal networks.
++++++ Last updated : 03/24/2021+
+#Customer intent: As an IT pro or network administrator, I need to give users secure wireless access to their Azure Stack Edge Mini R devices.
++
+# Use Wi-Fi profiles with Azure Stack Edge Mini R devices
+
+This article describes how to use wireless network (Wi-Fi) profiles with your Azure Stack Edge Mini R devices.
+
+How you prepare the Wi-Fi profile depends on the type of wireless network:
+
+- On a Wi-Fi Protected Access 2 (WPA2) - Personal network, such as a home network or Wi-Fi open hotspot, you may be able to download and use an existing wireless profile with the same password you use with other devices.
+
+- In a high-security enterprise environment, you'll access your device over a WPA2 - Enterprise network. On this type of network, each client computer will have a distinct Wi-Fi profile and will be authenticated via certificates. You'll need to work with your network administrator to determine the required configuration.
+
+We'll discuss profile requirements for both types of network later in this article.
+
+In either case, it's very important to make sure the profile meets the security requirements of your organization before you test or use the profile with your device.
+
+## About Wi-Fi profiles
+
+A Wi-Fi profile contains the SSID (service set identifier, or **network name**), password key, and security information needed to connect your Azure Stack Edge Mini R device to a wireless network.
+
+The following code example shows basic settings for a profile to use with a typical wireless network:
+
+* `SSID` is the network name.
+
+* `name` is the user-friendly name for the Wi-Fi connection. That is the name users will see when they browse the available connections on their device.
+
+* The profile is configured to automatically connect the computer to the wireless network when it's within range of the network (`connectionMode` = `auto`).
+
+```xml
+<?xml version="1.0"?>
+<WLANProfile xmlns="http://www.contoso.com/networking/WLAN/profile/v1">
+ <name>ContosoWIFICORP</name>
+ <SSIDConfig>
+ <SSID>
+ <hex>1A234561234B5012</hex>
+ </SSID>
+ <nonBroadcast>false</nonBroadcast>
+ </SSIDConfig>
+ <connectionType>ESS</connectionType>
+ <connectionMode>auto</connectionMode>
+ <autoSwitch>false</autoSwitch>
+</WLANProfile>
+```
+
+For more information about Wi-Fi profile settings, see **Enterprise profile** in [Add Wi-Fi settings for Windows 10 and newer devices](/mem/intune/configuration/wi-fi-settings-windows#enterprise-profile), and see [Configure Cisco Wi-Fi profile](azure-stack-edge-mini-r-manage-wifi.md#configure-cisco-wi-fi-profile).
+
+To enable wireless connections on an Azure Stack Edge Mini R device, you configure the Wi-Fi port on your device, and then add the Wi-Fi profile(s) to the device. On an enterprise network, you'll also upload certificates to the device. You can then connect to a wireless network from the local web UI for the device. For more information, see [Manage wireless connectivity on your Azure Stack Edge Mini R](./azure-stack-edge-mini-r-manage-wifi.md).
+
+## Profile for WPA2 - Personal network
+
+On a Wi-Fi Protected Access 2 (WPA2) - Personal network, such as a home network or Wi-Fi open hotspot, multiple devices may use the same profile and the same password. On your home network, your mobile phone and laptop use the same wireless profile and password to connect to the network.
+
+For example, a Windows 10 client can generate a runtime profile for you. When you sign in to the wireless network, you're prompted for the Wi-Fi password and, once you provide that password, you're connected. No certificate is needed in this environment.
+
+On this type of network, you may be able to export a Wi-Fi profile from your laptop, and then add it to your Azure Stack Edge Mini R device. For instructions, see [Export a Wi-Fi profile](#export-a-wi-fi-profile), below.
+
+> [!IMPORTANT]
+> Before you create a Wi-Fi profile for your Azure Stack Edge Mini R device, contact your network administrator to find out the organization's security requirements for wireless networking. You shouldn't test or use any Wi-Fi profile on your device until you know the wireless network meets requirements.
+
+## Profiles for WPA2 - Enterprise network
+
+On a Wi-Fi Protected Access 2 (WPA2) - Enterprise network, you'll need to work with your network administrator to get the needed Wi-Fi profile and certificate to connect your Azure Stack Edge Mini R device to the network.
+
+For highly secure networks, the Azure device can use Protected Extensible Authentication Protocol (PEAP) with Extensible Authentication Protocol-Transport Layer Security (EAP-TLS). PEAP with EAP-TLS uses machine authentication: the client and server use certificates to verify their identities to each other.
+
+> [!NOTE]
+> * User authentication using PEAP Microsoft Challenge Handshake Authentication Protocol version 2 (PEAP MSCHAPv2) is not supported on Azure Stack Edge Mini R devices.
+> * EAP-TLS authentication is required in order to access Azure Stack Edge Mini R functionality. A wireless connection that you set up using Active Directory will not work.
+
+The network administrator will generate a unique Wi-Fi profile and a client certificate for each computer. The network administrator decides whether to use a separate certificate for each device or a shared certificate.
+
+If you work in more than one physical location at the workplace, the network administrator may need to provide more than one site-specific Wi-Fi profile and certificate for your wireless connections.
+
+On an enterprise network, we recommend that you do not change settings in the Wi-Fi profiles that your network administrator provides. The only adjustment you may want to make is to the automatic connection settings. For more information, see [Basic profile](/mem/intune/configuration/wi-fi-settings-windows#basic-profile) in Wi-Fi settings for Windows 10 and newer devices.
+
+In a high-security enterprise environment, you may be able to use an existing wireless network profile as a template:
+
+* You can download the corporate wireless network profile from your work computer. For instructions, see [Export a Wi-Fi profile](#export-a-wi-fi-profile), below.
+
+* If others in your organization are already connecting to their Azure Stack Edge Mini R devices over a wireless network, they can download the Wi-Fi profile from their device. For instructions, see [Download Wi-Fi profile](azure-stack-edge-mini-r-manage-wifi.md#download-wi-fi-profile).
+
+## Export a Wi-Fi profile
+
+To export a profile for the Wi-Fi interface on your computer, do these steps:
+
+1. To see the wireless profiles on your computer, on the **Start** menu, open **Command prompt** (cmd.exe), and enter this command:
+
+ `netsh wlan show profiles`
+
+ The output will look something like this:
+
+ ```dos
+ Profiles on interface Wi-Fi:
+
+ Group policy profiles (read only)
+
+ <None>
+
+ User profiles
+ -
+ All User Profile : ContosoCORP
+ All User Profile : ContosoFTINET
+ All User Profile : GusIS2809
+ All User Profile : GusGuests
+ All User Profile : SeaTacGUEST
+ All User Profile : Boat
+ ```
+
+2. To export a profile, enter the following command:
+
+ `netsh wlan export profile name="<profileName>" folder="<path>\<profileName>"`
+
+ For example, the following command saves the ContosoFTINET profile in XML format to the Downloads folder for the user named `gusp`.
+
+ ```dos
+ C:\Users\gusp>netsh wlan export profile name="ContosoFTINET" folder=c:\Downloads
+
+ Interface profile "ContosoFTINET" is saved in file "c:\Downloads\ContosoFTINET.xml" successfully.
+ ```
+
+## Add certificate, Wi-Fi profile to device
+
+When you have the Wi-Fi profiles and certificates that you need, do these steps to configure your Azure Stack Edge Mini R device for wireless connections:
+
+1. For a WPA2 - Enterprise network, upload the needed certificates to the device following the guidance in [Upload certificates](./azure-stack-edge-gpu-manage-certificates.md#upload-certificates).
+
+1. Upload the Wi-Fi profile(s) to the Mini R device and then connect to it by following the guidance in [Add, connect to Wi-Fi profile](./azure-stack-edge-mini-r-manage-wifi.md#add-connect-to-wi-fi-profile).
+
+## Next steps
+
+- Learn how to [Configure network for Azure Stack Edge Mini R](azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy.md).
+- Learn how to [Manage Wi-Fi on your Azure Stack Edge Mini R](azure-stack-edge-mini-r-manage-wifi.md).
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
You can keep your resources for the next tutorial. If no longer needed, delete t
To disable DDoS protection for a virtual network: 1. Enter the name of the virtual network you want to disable DDoS protection standard for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
-2. Select **Under DDoS Protection Standard**, select **Disable**.
+2. Under **DDoS Protection Standard**, select **Disable**.
If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-alerts-on-premises-management-console.md
Rules that you create by using the API appear in the **Exclusion Rule** window a
:::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/edit-exclusion-rule-screen.png" alt-text="Screenshot of the Edit Exclusion Rule view.":::
-## See also
+## Next steps
-[Work with alerts on your sensor](how-to-work-with-alerts-on-your-sensor.md)
+- [Work with alerts on your sensor](how-to-work-with-alerts-on-your-sensor.md)
+- Review the [Defender for IoT Engine alerts](alert-engine-messages.md)
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-overview.md
description: Overview of DNS hosting service on Microsoft Azure. Host your domai
Previously updated : 3/15/2021 Last updated : 3/25/2021 #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
Title: 'Azure ExpressRoute: Designing for disaster recovery'
description: This page provides architectural recommendations for disaster recovery while using Azure ExpressRoute. - Previously updated : 05/25/2019 Last updated : 03/22/2021 - # Designing for disaster recovery with ExpressRoute private peering
-ExpressRoute is designed for high availability to provide carrier grade private network connectivity to Microsoft resources. In other words, there is no single point of failure in the ExpressRoute path within Microsoft network. For design considerations to maximize the availability of an ExpressRoute circuit, see [Designing for high availability with ExpressRoute][HA].
+ExpressRoute is designed for high availability to provide carrier grade private network connectivity to Microsoft resources. In other words, there's no single point of failure in the ExpressRoute path within Microsoft network. For design considerations to maximize the availability of an ExpressRoute circuit, see [Designing for high availability with ExpressRoute][HA].
-However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. In other words, in this article let us look into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits.
+However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. We'll be looking into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits.
>[!NOTE] >The concepts described in this article equally apply when an ExpressRoute circuit is created under Virtual WAN or outside of it.
However, taking Murphy's popular adage--*if anything can go wrong, it will*--int
## Need for redundant connectivity solution
-There are possibilities and instances where an entire regional service (be it that of Microsoft, network service providers, customer, or other cloud service providers) gets degraded. The root cause for such regional wide service impact include natural calamity. Therefore, for business continuity and mission critical applications it is important to plan for disaster recovery.
+There are possibilities and instances where an entire regional service (be it that of Microsoft, network service providers, customer, or other cloud service providers) gets degraded. The root causes of such region-wide service impact include natural calamities. That's why, for business continuity and mission-critical applications, it's important to plan for disaster recovery.
-Irrespective of whether you run your mission critical applications in an Azure region or on-premises or anywhere else, you can use another Azure region as your failover site. The following articles addresses disaster recovery from applications and frontend access perspectives:
+Whether you run your mission-critical applications in an Azure region, on-premises, or anywhere else, you can use another Azure region as your failover site. The following articles address disaster recovery from the application and frontend access perspectives:
- [Enterprise-scale disaster recovery][Enterprise DR] - [SMB disaster recovery with Azure Site Recovery][SMB DR]
If you rely on ExpressRoute connectivity between your on-premises network and Mi
## Challenges of using multiple ExpressRoute circuits
-When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities (for example, NAT, firewall) in the path, asymmetrical routing could block traffic flow. Typically, over the ExpressRoute private peering path you won't come across stateful entities such as NAT or Firewalls. Therefore, asymmetrical routing over ExpressRoute private peering does not necessarily block traffic flow.
+When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities (for example, NAT, firewall) in the path, asymmetrical routing could block traffic flow. Typically, over the ExpressRoute private peering path you won't come across stateful entities such as NAT or Firewalls. That's why, asymmetrical routing over ExpressRoute private peering doesn't necessarily block traffic flow.
-However, if you load balance traffic across geo-redundant parallel paths, irrespective of whether you have stateful entities or not, you would experience inconsistent network performance. In this article, let's discuss how to address these challenges.
+However, if you load balance traffic across geo-redundant parallel paths, regardless of whether you have stateful entities or not, you would experience inconsistent network performance. These geo-redundant parallel paths can be through the same metro or different metros, as listed on the [providers by location](expressroute-locations-providers.md#partners) page.
+
+### Same metro
+
+When using the same metro, you should use the secondary location for the second path for this configuration to work. An example of the same metro would be *Amsterdam* and *Amsterdam2*. The advantage of selecting the same metro is that when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays the same. However, if there is a natural disaster, connectivity for both paths may no longer be available.
+
+### Different metros
+
+When using different metros for Standard SKU circuits, the secondary location should be in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is that the chances of a natural disaster causing an outage to both links are much lower, but at the cost of increased end-to-end latency.
+
+In this article, let's discuss how to address challenges you may face when configuring geo-redundant paths.
## Small to medium on-premises network considerations
Using any of the techniques, if you influence Azure to prefer one of your Expres
## Large distributed enterprise network
-When you have a large distributed enterprise network, you're likely to have multiple ExpressRoute circuits. In this section, let's see how to design disaster recovery using the active-active ExpressRoute circuits, without needing additional stand-by circuits.
+When you have a large distributed enterprise network, you're likely to have multiple ExpressRoute circuits. In this section, let's see how to design disaster recovery using the active-active ExpressRoute circuits, without needing additional standby circuits.
Let's consider the example illustrated in the following diagram. In the example, Contoso has two on-premises locations connected to two Contoso IaaS deployment in two different Azure regions via ExpressRoute circuits in two different peering locations. [![6]][6]
-How we architect the disaster recovery has an impact on how cross regional to cross location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster architectures that routes cross region-location traffic differently.
+How we architect the disaster recovery has an impact on how cross-region to cross-location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster recovery architectures that route cross region-location traffic differently.
### Scenario 1
expressroute Expressroute Erdirect About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-erdirect-about.md
# About ExpressRoute Direct
-ExpressRoute Direct gives you the ability to connect directly into Microsoft's global network at peering locations strategically distributed around the world. ExpressRoute Direct provides dual 100 Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.
+ExpressRoute Direct gives you the ability to connect directly into Microsoft's global network at peering locations strategically distributed around the world. ExpressRoute Direct provides dual 100 Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale. You can work with any service provider for ER Direct.
Key features that ExpressRoute Direct provides include, but aren't limited to:
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table provides a map of Azure regions to ExpressRoute locations wi
| **Geopolitical region** | **Azure regions** | **ExpressRoute locations** | | | | | | **Australia Government** | Australia Central, Australia Central 2 |Canberra, Canberra2 |
-| **Europe** | France Central, France South, Germany North, Germany West Central, North Europe, Norway East, Norway West, Switzerland North, Switzerland West, UK West, UK South, West Europe |Amsterdam, Amsterdam2, Berlin, Copenhagen, Dublin, Frankfurt, Geneva, London, London2, Marseille, Milan, Munich, Newport(Wales), Oslo, Paris, Stavanger, Stockholm, Zurich |
-| **North America** | East US, West US, East US 2, West US 2, Central US, South Central US, North Central US, West Central US, Canada Central, Canada East |Atlanta, Chicago, Dallas, Denver, Las Vegas, Los Angeles, Los Angeles2, Miami, Minneapolis, Montreal, New York, Phoenix, Quebec City, Queretaro(Mexico), Quincy, San Antonio, Seattle, Silicon Valley, Silicon Valley2, Toronto, Vancouver, Washington DC, Washington DC2 |
+| **Europe** | France Central, France South, Germany North, Germany West Central, North Europe, Norway East, Norway West, Switzerland North, Switzerland West, UK West, UK South, West Europe |Amsterdam, Amsterdam2, Berlin, Copenhagen, Dublin, Frankfurt, Frankfurt2, Geneva, London, London2, Madrid, Marseille, Milan, Munich, Newport(Wales), Oslo, Paris, Stavanger, Stockholm, Zurich |
+| **North America** | East US, West US, East US 2, West US 2, Central US, South Central US, North Central US, West Central US, Canada Central, Canada East |Atlanta, Chicago, Dallas, Denver, Las Vegas, Los Angeles, Los Angeles2, Miami, Minneapolis, Montreal, New York, Phoenix, Quebec City, Queretaro(Mexico), Quincy, San Antonio, Seattle, Silicon Valley, Silicon Valley2, Toronto, Toronto2, Vancouver, Washington DC, Washington DC2 |
| **Asia** | East Asia, Southeast Asia | Bangkok, Hong Kong, Hong Kong2, Jakarta, Kuala Lumpur, Singapore, Singapore2, Taipei |
| **India** | India West, India Central, India South |Chennai, Chennai2, Mumbai, Mumbai2 |
| **Japan** | Japan West, Japan East |Osaka, Tokyo, Tokyo2 |
The following table provides a map of Azure regions to ExpressRoute locations wi
| **South Korea** | Korea Central, Korea South |Busan, Seoul|
| **UAE** | UAE Central, UAE North | Dubai, Dubai2 |
| **South Africa** | South Africa West, South Africa North |Cape Town, Johannesburg |
-| **South America** | Brazil South |Bogota, Sao Paulo |
+| **South America** | Brazil South |Bogota, Rio de Janeiro, Sao Paulo |
## Azure regions and geopolitical boundaries for national clouds

The table below provides information on regions and geopolitical boundaries for national clouds.
Azure national clouds are isolated from each other and from global commercial Az
| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | 10G, 100G | Equinix, Megaport, Verizon |
| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | 10G, 100G | Equinix, CenturyLink Cloud Connect, Verizon |
| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
-| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | n/a | CenturyLink Cloud Connect, Megaport |
+| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | 10G, 100G | CenturyLink Cloud Connect, Megaport |
| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | 10G, 100G | AT&T, Equinix, Level 3 Communications, Verizon |
| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | 10G, 100G | Equinix, Megaport |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East, US Gov Virginia | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Verizon |
expressroute Expressroute Network Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-network-insights.md
+
+ Title: 'Azure ExpressRoute Insights using Network Insights'
+description: Learn about Azure ExpressRoute Insights using Network Insights.
+Last updated : 03/23/2021
+# Azure ExpressRoute Insights using Network Insights
+
+This article explains how Network Insights can help you view your ExpressRoute metrics and configurations all in one place. Through Network Insights, you can view topological maps and health dashboards containing important ExpressRoute information without needing to complete any extra setup.
+
+## Visualize functional dependencies
+
+To view this solution, navigate to the *Azure Monitor* page, select *Networks*, and then select the *ExpressRoute Circuits* card. Then, select the topology button for the circuit you would like to view.
+
+The functional dependency view provides a clear picture of your ExpressRoute setup, outlining the relationship between different ExpressRoute components (peerings, connections, gateways).
+
+Hover over any component in the topology map to view configuration information. For example, hover over an ExpressRoute peering component to view details such as circuit bandwidth and Global Reach enablement.
++
+## View a detailed and pre-loaded metrics dashboard
+
+Once you review the topology of your ExpressRoute setup using the functional dependency view, select **View detailed metrics** to navigate to the detailed metrics view and understand the performance of your circuit. This view offers an organized list of linked resources and a rich dashboard of important ExpressRoute metrics.
+
+The **Linked Resources** section lists the connected ExpressRoute gateways and configured peerings, which you can select to navigate to the corresponding resource page.
+
+The **ExpressRoute Metrics** section includes charts of important circuit metrics across the categories of **Availability**, **Throughput**, **Packet Drops**, and **Gateway Metrics**.
+
+### Availability
+
+The *Availability* tab tracks ARP and BGP availability, plotting the data for both the circuit as a whole and each individual connection (primary and secondary).
+
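+If you prefer to pull these numbers programmatically, the same availability data is exposed through Azure Monitor. A minimal sketch with the Azure CLI, assuming a hypothetical resource group `my-rg` and circuit `my-circuit`:
+
+```bash
+# Resolve the circuit's resource ID (names are placeholders).
+circuitId=$(az network express-route show \
+  --resource-group my-rg --name my-circuit \
+  --query id --output tsv)
+
+# BgpAvailability is one of the circuit availability metrics; ArpAvailability works the same way.
+az monitor metrics list \
+  --resource "$circuitId" \
+  --metric BgpAvailability \
+  --interval PT5M \
+  --output table
+```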
+### Throughput
+
+Similarly, the *Throughput* tab plots the total throughput of ingress and egress traffic for the circuit in bits/second. You can also view throughput for individual connections and each type of configured peering.
+
+### Packet Drops
+
+The *Packet Drops* tab plots the dropped bits/second for ingress and egress traffic through the circuit. This tab provides an easy way to monitor performance issues that may occur if you regularly reach or exceed your circuit bandwidth.
+
+### Gateway Metrics
+
+Lastly, the *Gateway Metrics* tab populates with key metrics charts for a selected ExpressRoute gateway (from the **Linked Resources** section). Use this tab when you need to monitor your connectivity to specific virtual networks.
+
+## Next steps
+
+Configure your ExpressRoute connection.
+
+* Learn more about [Azure ExpressRoute](expressroute-introduction.md), [Network Insights](../azure-monitor/insights/network-insights-overview.md), and [Network Watcher](../network-watcher/network-watcher-monitoring-overview.md)
+* [Create and modify a circuit](expressroute-howto-circuit-arm.md)
+* [Create and modify peering configuration](expressroute-howto-routing-arm.md)
+* [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+* [Customize your metrics](expressroute-monitoring-metrics-alerts.md) and create a [Connection Monitor](../network-watcher/connection-monitor-overview.md)
governance Create Management Group Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-portal.md
directory. You receive a notification when the process is complete. For more inf
1. Select **+ Add management group**.
- :::image type="content" source="./media/main.png" alt-text="Screenshot of the Management groups page showing child management groups and subscriptions." border="false":::
+ :::image type="content" source="./media/main.png" alt-text="Screenshot of the Management groups page showing child management groups and subscriptions.":::
1. Leave **Create new** selected and fill in the management group ID field.
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
initiative definition.
|[Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d092e0a-7acd-40d2-a975-dca21cae48c4) |Azure Virtual Network deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access. When an Azure Cache for Redis instance is configured with a virtual network, it is not publicly addressable and can only be accessed from virtual machines and applications within the virtual network. |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_CacheInVnet_Audit.json) |
|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
-|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
-|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
|[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
|[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) |
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[SQL servers should retain audit data for at least 90 days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL servers' audit data to at least 90 days. Confirm that you're meeting the necessary retention rules for the regions in which you're operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
### Enable alerts for anomalous activity
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 03/17/2021 Last updated : 03/24/2021
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 03/17/2021 Last updated : 03/24/2021
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[SQL servers should retain audit data for at least 90 days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL servers' audit data to at least 90 days. Confirm that you're meeting the necessary retention rules for the regions in which you're operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
### Ensure that 'Advanced Data Security' on a SQL server is set to 'On'
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[SQL servers should retain audit data for at least 90 days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL servers' audit data to at least 90 days. Confirm that you're meeting the necessary retention rules for the regions in which you're operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
### Ensure that Advanced Threat Protection (ATP) on a SQL server is set to 'Enabled'
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **New Zealand ISM Restricted** Regulatory Compliance built-in initiative definition.
+This built-in initiative is deployed as part of the
+[New Zealand ISM Restricted blueprint sample](../../blueprints/samples/new-zealand-ism.md).
+
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
-> control; however, there often is not a 1:1 or complete match between a control and one or more
-> policies. As such, **Compliant** in Azure Policy refers only to the policy definitions themselves;
-> this doesn't ensure you're fully compliant with all requirements of a control. In addition, the
-> compliance standard includes controls that aren't addressed by any Azure Policy definitions at
-> this time. Therefore, compliance in Azure Policy is only a partial view of your overall compliance
-> status. The associations between compliance domains, controls, and Azure Policy definitions for
-> this compliance standard may change over time. To view the change history, see the
+> control; however, there often is not a one-to-one or complete match between a control and one or
+> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
+> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
+> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
+> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
+> overall compliance status. The associations between compliance domains, controls, and Azure Policy
+> definitions for this compliance standard may change over time. To view the change history, see the
> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/nz_ism.json).

## Information security monitoring
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 R4 description: Details of the NIST SP 800-53 R4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/17/2021 Last updated : 03/24/2021
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
healthcare-apis Access Fhir Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/access-fhir-postman-tutorial.md
Title: Postman FHIR server in Azure - Azure API for FHIR
-description: In this tutorial, we will walk through the steps needed to use Postman to access an FHIR server. Postman is helpful for debugging applications that access APIs.
+description: In this tutorial, we'll walk through the steps needed to use Postman to access an FHIR server. Postman is helpful for debugging applications that access APIs.
Previously updated : 02/01/2021 Last updated : 03/16/2021

# Access Azure API for FHIR with Postman
-A client application would access an FHIR API through a [REST API](https://www.hl7.org/fhir/http.html). You may also want to interact directly with the FHIR server as you build applications, for example, for debugging purposes. In this tutorial, we will walk through the steps needed to use [Postman](https://www.getpostman.com/) to access a FHIR server. Postman is a tool often used for debugging when building applications that access APIs.
+A client application can access the Azure API for FHIR through a [REST API](https://www.hl7.org/fhir/http.html). To send requests, view responses, and debug your application as it is being built, use an API testing tool of your choice. In this tutorial, we'll walk you through the steps of accessing the FHIR server using [Postman](https://www.getpostman.com/).
## Prerequisites

-- A FHIR endpoint in Azure. You can set that up using the managed Azure API for FHIR or the Open Source FHIR server for Azure. Set up the managed Azure API for FHIR using [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md).
-- A [client application](register-confidential-azure-ad-client-app.md) you will be using to access the FHIR service.
-- You have granted permissions, for example, "FHIR Data Contributor", to the client application to access the FHIR service. More info at [Configure Azure RBAC for FHIR](configure-azure-rbac.md)
-- Postman installed. You can get it from [https://www.getpostman.com](https://www.getpostman.com)
+- A FHIR endpoint in Azure.
+
+ To deploy the Azure API for FHIR (a managed service), you can use the [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md).
+- A registered [confidential client application](register-confidential-azure-ad-client-app.md) to access the FHIR service.
+- You have granted permissions, for example, "FHIR Data Contributor", to the confidential client application to access the FHIR service. For more information, see [Configure Azure RBAC for FHIR](./configure-azure-rbac.md).
+- Postman installed.
+
+ For more information about Postman, see [Get Started with Postman](https://www.getpostman.com).
## FHIR server and authentication details
-In order to use Postman, the following details are needed:
+To use Postman, the following authentication parameters are required:
+
+- Your FHIR server URL, for example, `https://MYACCOUNT.azurehealthcareapis.com`
-- Your FHIR server URL, for example `https://MYACCOUNT.azurehealthcareapis.com`
- The identity provider `Authority` for your FHIR server, for example, `https://login.microsoftonline.com/{TENANT-ID}`
-- The configured `audience`. This is usually the URL of the FHIR server, e.g. `https://<FHIR-SERVER-NAME>.azurehealthcareapis.com` or just `https://azurehealthcareapis.com`.
-- The `client_id` (or application ID) of the [client application](register-confidential-azure-ad-client-app.md) you will be using to access the FHIR service.
-- The `client_secret` (or application secret) of the client application.
+
+- The configured `audience` that is usually the URL of the FHIR server, for example, `https://<FHIR-SERVER-NAME>.azurehealthcareapis.com` or `https://azurehealthcareapis.com`.
+
+- The `client_id` or application ID of the [confidential client application](register-confidential-azure-ad-client-app.md) used for accessing the FHIR service.
+
+- The `client_secret` or application secret of the confidential client application.
Finally, you should check that `https://www.getpostman.com/oauth2/callback` is a registered reply URL for your client application.

## Connect to FHIR server
-Using Postman, do a `GET` request to `https://fhir-server-url/metadata`:
+Open Postman, and then select **GET** to make a request to `https://fhir-server-url/metadata`.
![Postman Metadata Capability Statement](media/tutorial-postman/postman-metadata.png)
-The metadata URL for Azure API for FHIR is `https://MYACCOUNT.azurehealthcareapis.com/metadata`. In this example, the FHIR server URL is `https://fhirdocsmsft.azurewebsites.net` and the capability statement of the server is available at `https://fhirdocsmsft.azurewebsites.net/metadata`. That endpoint should be accessible without authentication.
+The metadata URL for Azure API for FHIR is `https://MYACCOUNT.azurehealthcareapis.com/metadata`.
+
+In this example, the FHIR server URL is `https://fhirdocsmsft.azurewebsites.net`, and the capability statement of the server is available at `https://fhirdocsmsft.azurewebsites.net/metadata`. This endpoint is accessible without authentication.
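+If you want a quick check outside of Postman, you can request the capability statement from the command line as well; a minimal sketch (the account name is a placeholder):
+
+```bash
+# The /metadata endpoint returns the capability statement and requires no token.
+curl --silent "https://MYACCOUNT.azurehealthcareapis.com/metadata"
+```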
-If you attempt to access restricted resources, you should get an "Authentication failed" response:
+If you attempt to access restricted resources, you'll receive an "Authentication failed" response.
![Authentication Failed](media/tutorial-postman/postman-authentication-failed.png)

## Obtaining an access token
-To obtain a valid access token, select "Authorization" and pick TYPE "OAuth 2.0":
+To obtain a valid access token, select **Authorization**, and then select **OAuth 2.0** from the **TYPE** drop-down menu.
![Set OAuth 2.0](media/tutorial-postman/postman-select-oauth2.png)
-Hit "Get New Access Token" and a dialog appears:
+Select **Get New Access Token**.
![Request New Access Token](media/tutorial-postman/postman-request-token.png)
-You will need to some details:
+In the **Get New Access Token** dialog box, enter the following details:
| Field | Example Value | Comment |
|--|--|-|
You will need to some details:
| Client ID | `XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX` | Application ID |
| Client Secret | `XXXXXXXX` | Secret client key |
| Scope | `<Leave Blank>` | |
-| State | `1234` | |
+| State | `1234` | |
| Client Authentication | Send client credentials in body | |
-Hit "Request Token" and you will be guided through the Azure Active Directory Authentication flow and a token will be returned to Postman. If you run into problems open the Postman Console (from the "View->Show Postman Console" menu item).
+Select **Request Token** to be guided through the Azure Active Directory authentication flow, and a token will be returned to Postman. If an authentication failure occurs, refer to the Postman Console for more details. **Note**: On the ribbon, select **View**, and then select **Show Postman Console**. The keyboard shortcut to open the Postman Console is **Alt+Ctrl+C**.
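+Postman drives this flow for you, but the same token can be requested directly from Azure AD with a client credentials grant. A sketch, assuming your tenant ID, client ID, and client secret are exported as environment variables:
+
+```bash
+# Azure AD v1.0 token endpoint; the FHIR server URL goes in the "resource" parameter.
+curl --silent --request POST \
+  "https://login.microsoftonline.com/${TENANT_ID}/oauth2/token" \
+  --data "grant_type=client_credentials" \
+  --data "client_id=${CLIENT_ID}" \
+  --data "client_secret=${CLIENT_SECRET}" \
+  --data "resource=https://MYACCOUNT.azurehealthcareapis.com"
+```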
-Scroll down on the returned token screen and hit "Use Token":
+Scroll down to view the returned token screen, and then select **Use Token**.
![Use Token](media/tutorial-postman/postman-use-token.png)
-The token should now be populated in the "Access Token" field and you can select tokens from "Available Tokens". If you "Send" again to repeat the `Patient` resource search, you should get a Status `200 OK`:
+Refer to the **Access Token** field to view the newly populated token. If you select **Send** to repeat the `Patient` resource search, a **Status** of `200 OK` is returned, which indicates a successful HTTP request.
![200 OK](media/tutorial-postman/postman-200-OK.png)
-In this case, there are no patients in the database and the search is empty.
+In the *Patient search* example, there are no patients in the database, so the search result is empty.
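+The equivalent request outside of Postman is a plain `GET` with the token in the `Authorization` header; a sketch, assuming the token from the previous step is in `ACCESS_TOKEN`:
+
+```bash
+# Searches the Patient resource; returns a Bundle (empty here, since no patients exist yet).
+curl --silent \
+  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
+  "https://MYACCOUNT.azurehealthcareapis.com/Patient"
+```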
-If you inspect the access token with a tool like [https://jwt.ms](https://jwt.ms), you should see content like:
+You can inspect the access token using a tool like [jwt.ms](https://jwt.ms). An example of the content is shown below.
-```jsonc
+```json
{ "aud": "https://MYACCOUNT.azurehealthcareapis.com", "iss": "https://sts.windows.net/{TENANT-ID}/",
If you inspect the access token with a tool like [https://jwt.ms](https://jwt.ms
}
```
-In troubleshooting situations, validating that you have the correct audience (`aud` claim) is a good place to start. If your token is from the correct issuer (`iss` claim) and has the correct audience (`aud` claim), but you are still unable to access the FHIR API, it is likely that the user or service principal (`oid` claim) does not have access to the FHIR data plane. We recommend you [use Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign data plane roles to users. If you are using an external, secondary Azure Active directory tenant for your data plane, you will need to [configure local RBAC assignments](configure-local-rbac.md).
+In troubleshooting situations, validating that you have the correct audience (`aud` claim) is a good place to start. If your token is from the correct issuer (`iss` claim) and has the correct audience (`aud` claim), but you are still unable to access the FHIR API, it is likely that the user or service principal (`oid` claim) doesn't have access to the FHIR data plane. We recommend you use [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign data plane roles to users. If you're using an external, secondary Azure Active directory tenant for your data plane, you'll need to [Configure local RBAC for FHIR](configure-local-rbac.md) assignments.
-It is also possible to [get a token for the Azure API for FHIR using the Azure CLI](get-healthcare-apis-access-token-cli.md). If you are using a token obtained with the Azure CLI, you should use Authorization type "Bearer Token" and paste the token in directly.
+It's also possible to get a token for the [Azure API for FHIR using the Azure CLI](get-healthcare-apis-access-token-cli.md). If you're using a token obtained with the Azure CLI, you should use Authorization type *Bearer Token*. Paste the token in directly.
## Inserting a patient
-Now that you have a valid access token. You can insert a new patient. Switch to method "POST" and add the following JSON document in the body of the request:
+With a valid access token, you can now insert a new patient. In Postman, change the method by selecting **POST**, and then add the following JSON document in the body of the request.
[!code-json[](../samples/sample-patient.json)]
-Hit "Send" and you should see that the patient is successfully created:
+Select **Send**, and verify that the patient is successfully created.
![Screenshot that shows that the patient is successfully created.](media/tutorial-postman/postman-patient-created.png)
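+The same insert can be made from the command line. A sketch with a minimal, hypothetical Patient body (the tutorial's full sample lives in `sample-patient.json`):
+
+```bash
+# POST a new Patient resource; the server assigns an ID and returns 201 Created.
+curl --silent --request POST \
+  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
+  --header "Content-Type: application/fhir+json" \
+  --data '{"resourceType": "Patient", "name": [{"family": "Chalmers", "given": ["Peter"]}]}' \
+  "https://MYACCOUNT.azurehealthcareapis.com/Patient"
+```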
If you repeat the patient search, you should now see the patient record:
## Next steps
-In this tutorial, you've accessed an FHIR API using postman. Read about the supported API features in our supported features section.
+In this tutorial, you've accessed the Azure API for FHIR using Postman. For more information about the Azure API for FHIR features, see
>[!div class="nextstepaction"]
->[Supported features](fhir-features-supported.md)
+>[Supported features](fhir-features-supported.md)
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 3/5/2020 Last updated : 3/18/2021
Azure API for FHIR supports $export command that allows you to export the data o
There are three steps involved in configuring export in Azure API for FHIR:
-1. Enable Managed Identity on Azure API for FHIR Service
-2. Creating a Azure storage account (if not done before) and assigning permission to Azure API for FHIR to the storage account
-3. Selecting the storage account in Azure API for FHIR as export storage account
+1. Enable Managed Identity on the Azure API for FHIR service.
+2. Create an Azure storage account (if you don't have one already) and assign permission for Azure API for FHIR to the storage account.
+3. Select the storage account in Azure API for FHIR as the export storage account.
## Enabling Managed Identity on Azure API for FHIR
-The first step in configuring Azure API for FHIR for export is to enable system wide managed identity on the service. You can read all about Managed Identities in Azure [here](../../active-directory/managed-identities-azure-resources/overview.md).
+The first step in configuring Azure API for FHIR for export is to enable system-wide managed identity on the service. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-To do so, navigate to Azure API for FHIR service and select Identity blade. Changing the status to On will enable managed identity in Azure API for FHIR Service.
+To do so, go to the Azure API for FHIR service and select **Identity**. Changing the status to **On** will enable managed identity in the Azure API for FHIR service.
![Enable Managed Identity](media/export-data/fhir-mi-enabled.png)
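+If you script your deployments, the same setting can be applied with a generic ARM update in the Azure CLI. A sketch, assuming a hypothetical resource group `my-rg` and service name `my-fhir-service`:
+
+```bash
+# Resolve the FHIR service's resource ID, then enable the system-assigned identity.
+fhirId=$(az resource show \
+  --resource-group my-rg --name my-fhir-service \
+  --resource-type "Microsoft.HealthcareApis/services" \
+  --query id --output tsv)
+
+az resource update --ids "$fhirId" --set identity.type=SystemAssigned
+```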
-Now we can move to next step and create a storage account and assign permission to our service.
+Now you can move to the next step: create a storage account and assign permission to the service.
## Adding permission to storage account
-Next step in export is to assign permission for Azure API for FHIR service to write to the storage account.
+The next step in export is to assign permission for the Azure API for FHIR service to write to the storage account.
-After we have created a storage account, navigate to Access Control (IAM) blade in Storage Account and select Add Role Assignments
+After you've created a storage account, go to **Access Control (IAM)** in the storage account and select **Add role assignment**.
![Export Role Assignment](media/export-data/fhir-export-role-assignment.png)
-Here we then add role Storage Blob Data Contributor to our service name.
+Here, you'll add the **Storage Blob Data Contributor** role to the service name, and then select **Save**.
![Add Role](media/export-data/fhir-export-role-add.png)
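+The same assignment can be scripted; a sketch with the Azure CLI, reusing the `fhirId` variable from the earlier step and a hypothetical storage account name:
+
+```bash
+# Look up the managed identity's principal ID and the storage account's scope.
+principalId=$(az resource show --ids "$fhirId" --query identity.principalId --output tsv)
+storageId=$(az storage account show \
+  --name mystorageacct --resource-group my-rg \
+  --query id --output tsv)
+
+# Grant the FHIR service write access to blobs in the account.
+az role assignment create \
+  --assignee-object-id "$principalId" \
+  --assignee-principal-type ServicePrincipal \
+  --role "Storage Blob Data Contributor" \
+  --scope "$storageId"
+```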
-Now we are ready for next step where we can select the storage account in Azure API for FHIR as a default storage account for $export.
+Now you are ready to select the storage account in Azure API for FHIR as a default storage account for $export.
## Selecting the storage account for $export
-Final step is to assign the Azure storage account that Azure API for FHIR will use to export the data to. To do this, navigate to Integration blade in Azure API for FHIR service in Azure portal and select the storage account
+The final step is to assign the Azure storage account that Azure API for FHIR will use to export the data to. To do this, go to **Integration** in the Azure API for FHIR service and select the storage account.
![FHIR Export Storage](media/export-data/fhir-export-storage.png)
-After that we are ready to export the data using $export command.
+After you've completed this final step, you're ready to export the data using the $export command.
+
+> [!Note]
+> Only storage accounts in the same subscription as that for Azure API for FHIR are allowed to be registered as the destination for $export operations.
+
+For more information about configuring database settings, access control, enabling diagnostic logging, and using custom headers to add data to audit logs, see:
>[!div class="nextstepaction"] >[Additional Settings](azure-api-for-fhir-additional-settings.md)
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/export-data.md
Previously updated : 2/19/2021 Last updated : 3/18/2021 # How to export FHIR data
The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/index.html).
-Before using $export, you will want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md).
+Before using $export, you'll want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md).
## Using $export command
-After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
+After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html).
+
+**Jobs stuck in a bad state**
+
+In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate whether your export is successful is to check your storage account to see if the corresponding container (that is, `ndjson`) files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try re-queuing the job. By default, an export job in a bad state runs for 10 minutes before it stops and moves to a new job or retries the export.
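+Per the bulk data specification, the cancellation request is a `DELETE` against the polling URL that the original kickoff call returned in its `Content-Location` header; a sketch, with `STATUS_URL` standing in for that value:
+
+```bash
+# Cancels the queued or running export job identified by the polling URL.
+curl --silent --request DELETE \
+  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
+  "$STATUS_URL"
+```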
The Azure API for FHIR supports $export at the following levels:

* [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export`
* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export`
-* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients) - Azure API for FHIR exports all related resources but does not export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
+* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointgroup-of-patients) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export`
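+As a sketch, a system-level export kicked off from the command line looks like the following; the bulk data specification requires the `Accept` and `Prefer` headers shown (the account name is a placeholder):
+
+```bash
+# Queue an export job; the response's Content-Location header is the polling URL.
+curl --silent --include \
+  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
+  --header "Accept: application/fhir+json" \
+  --header "Prefer: respond-async" \
+  "https://MYACCOUNT.azurehealthcareapis.com/\$export"
+```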
-When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large, we create a new file after the size of a single exported file becomes larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (i.e. Patient-1.ndjson, Patient-2.ndjson).
+When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large, we create a new file after the size of a single exported file becomes larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (that is, Patient-1.ndjson, Patient-2.ndjson).
> [!Note]
In addition, checking the export status through the URL returned by the location
Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitation:

-- User cannot take advantage of [hierarchical namespaces](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-namespace) yet; there isn't a way to target export to a specific sub-directory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
+- User cannot yet take advantage of [hierarchical namespaces](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-namespace); there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
The Azure API for FHIR supports the following query parameters. All of these par
| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported to that container in a new folder with the name. If the container is not specified, it will be exported to a new container using timestamp and job ID. |
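+Combining these parameters, a request that exports only selected resource types into a named container might look like the following sketch (values are illustrative):
+
+```bash
+# _type narrows the export to Patient and Observation; _container names the target container.
+curl --silent --include \
+  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
+  --header "Accept: application/fhir+json" \
+  --header "Prefer: respond-async" \
+  "https://MYACCOUNT.azurehealthcareapis.com/\$export?_type=Patient,Observation&_container=exports"
+```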
+> [!Note]
+> Only storage accounts in the same subscription as that for Azure API for FHIR are allowed to be registered as the destination for $export operations.
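As a rough illustration of how these parameters combine, here's a hedged sketch in Python; the filter values and container name are hypothetical, and the base URL and token are placeholders.

```python
# Sketch of combining $export query parameters; values are hypothetical.
import requests

fhir_url = "https://<your-fhir-server>.azurehealthcareapis.com"  # placeholder
token = "<access-token>"  # placeholder

params = {
    "_type": "Patient,Observation",             # restrict to these resource types
    "_typeFilter": "Patient?address-state=WA",  # further restrict with a FHIR query
    "_container": "myexportcontainer",          # container in the configured account
}
headers = {
    "Authorization": f"Bearer {token}",
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",
}

response = requests.get(f"{fhir_url}/$export", params=params, headers=headers)
print(response.status_code, response.headers.get("Content-Location"))
```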
+ ## Secure Export to Azure Storage

Azure API for FHIR supports a secure export operation. One option to run
Azure API for FHIR, the configurations are different.
### When the Azure storage account is in a different region
-Select the networking blade of the Azure storage account from the
+Select **Networking** for the Azure storage account from the
portal. :::image type="content" source="media/export-data/storage-networking.png" alt-text="Azure Storage Networking Settings." lightbox="media/export-data/storage-networking.png":::
-Select "Selected networks" and specify the IP address in the
-**Address range** box under the section of Firewall \| Add IP ranges to
+Select **Selected networks**. Under the **Firewall** section, specify the IP address in the **Address range** box. Add IP ranges to
allow access from the internet or your on-premises networks. You can
-find the IP address from the table below for the Azure region where the
+find the IP address in the table below for the Azure region where the
Azure API for FHIR service is provisioned.

|**Azure Region** |**Public IP Address** |
The configuration process is the same as above except a specific IP
address range in CIDR format is used instead, 100.64.0.0/10. This range, which includes 100.64.0.0–100.127.255.255, must be specified because the actual IP address used by the service varies for each $export request, but will always be within that range.

> [!Note]
-> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case the $export operation will not succeed. You can retry the $export request but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
+> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
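If you need to confirm whether an observed source address falls inside these ranges, Python's standard `ipaddress` module can check it; this is just a convenience sketch.

```python
# Check whether an observed source IP falls in the ranges discussed above.
import ipaddress

snat_range = ipaddress.ip_network("100.64.0.0/10")   # 100.64.0.0 - 100.127.255.255
private_range = ipaddress.ip_network("10.0.2.0/24")  # range that causes $export to fail

for addr in ("100.64.0.1", "100.127.255.255", "10.0.2.5"):
    ip = ipaddress.ip_address(addr)
    print(f"{addr}: in allowed SNAT range={ip in snat_range}, "
          f"in private range={ip in private_range}")
```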
## Next steps
-In this article, you learned how to export FHIR resources using $export command. Next, learn how to export de-identified data:
+In this article, you learned how to export FHIR resources using the $export command. Next, to learn how to export de-identified data, see:
>[!div class="nextstepaction"] >[Export de-identified data](de-identified-export.md)
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-features-supported.md
Previously released versions that are still supported include: `3.0.2`
| update with optimistic locking | Yes | Yes | Yes | |
| update (conditional) | Yes | Yes | Yes | |
| patch | No | No | No | |
-| delete | Yes | Yes | Yes | See Note Below |
+| delete | Yes | Yes | Yes | See Note below. |
| delete (conditional) | No | No | No | |
| history | Yes | Yes | Yes | |
| create | Yes | Yes | Yes | Supports both POST/PUT |
| create (conditional) | Yes | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
-| search | Partial | Partial | Partial | See below |
-| chained search | No | Yes | No | |
-| reverse chained search | No | Yes | No | |
+| search | Partial | Partial | Partial | See Search section below. |
+| chained search | Yes | Yes | Partial | See Note 2 below. |
+| reverse chained search | Yes | Yes | Partial | See Note 2 below. |
| capabilities | Yes | Yes | Yes | |
| batch | Yes | Yes | Yes | |
| transaction | No | Yes | No | |
Previously released versions that are still supported include: `3.0.2`
> [!Note]
> Delete defined by the FHIR spec requires that after deleting, subsequent non-version-specific reads of a resource return a 410 HTTP status code and the resource is no longer found through searching. The Azure API for FHIR also enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass the parameter `hardDelete` set to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+
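For illustration, a hard delete issued from Python might look like the following sketch; the resource type, id, base URL, and token are placeholders.

```python
# Sketch of a hard delete that also removes the resource's history.
import requests

fhir_url = "https://<your-fhir-server>.azurehealthcareapis.com"  # placeholder
token = "<access-token>"  # placeholder

response = requests.delete(
    f"{fhir_url}/Patient/example-id",  # hypothetical resource
    params={"hardDelete": "true"},     # omit or set to false to keep history
    headers={"Authorization": f"Bearer {token}"},
)
print(response.status_code)  # later reads of this resource should return 410 Gone
```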
+ **Note 2**
+* MVP support for chained and reverse chained FHIR search in Cosmos DB.
+
+ In the Azure API for FHIR and the open-source FHIR server backed by Cosmos DB, chained search and reverse chained search are MVP implementations. To accomplish chained search on Cosmos DB, the implementation walks down the search expression and issues subqueries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 100 results, an error is thrown. By default, chained search is behind a feature flag. To use chained search on Cosmos DB, include the header `x-ms-enable-chained-search: true`. For more details, see [PR 1695](https://github.com/microsoft/fhir-server/pull/1695).
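For illustration, a chained search request that opts in via the header might look like this sketch; the search expression is a hypothetical example, and the base URL and token are placeholders.

```python
# Sketch: opt in to chained search on Cosmos DB with the feature-flag header.
import requests

fhir_url = "https://<your-fhir-server>.azurehealthcareapis.com"  # placeholder
token = "<access-token>"  # placeholder

headers = {
    "Authorization": f"Bearer {token}",
    "x-ms-enable-chained-search": "true",  # feature flag described above
}

# Chained search: Patients whose general practitioner is named Sarah.
response = requests.get(
    f"{fhir_url}/Patient",
    params={"general-practitioner:Practitioner.name": "Sarah"},
    headers=headers,
)
bundle = response.json()
print(response.status_code, len(bundle.get("entry", [])))
```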
+ ## Search All search parameter types are supported.
healthcare-apis Fhir Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-github-projects.md
+
+ Title: Related GitHub Projects for Azure API for FHIR
+description: List all Open Source (GitHub) repositories for Azure API for FHIR.
+++++ Last updated : 02/01/2021++
+# Related GitHub Projects
+
+We have many open-source projects on GitHub that provide you with the source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products.
+
+## FHIR Server
+* [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): open-source FHIR Server, which is the basis for Azure API for FHIR
+* To see the latest releases, please refer to [Release Notes](https://github.com/microsoft/fhir-server/releases)
+* [microsoft/fhir-server-samples](https://github.com/microsoft/fhir-server-samples): a sample environment
+
+## Data Conversion & Anonymization
+
+#### FHIR Converter
+* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter): a conversion utility to translate legacy data formats into FHIR
+* Integrated with the Azure API for FHIR as well as FHIR server for Azure in the form of the $convert-data operation
+* Ongoing improvements in OSS, and continual integration to the FHIR servers
+
+#### FHIR Tools for Anonymization
+* [microsoft/FHIR-Tools-for-Anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization): a set of tools for helping with data (in FHIR format) anonymization
+* Integrated with the Azure API for FHIR as well as FHIR server for Azure in the form of 'de-identified export'
+
+#### FHIR Converter - VS Code Extension
+* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools): a VS Code extension that contains a collection of tools to work with Azure Healthcare APIs
+* Released to Visual Studio Marketplace
+* Used for authoring Liquid templates to be used in the FHIR Converter
+
+## IoT Connector
+
+#### Integration with IoT Hub and IoT Central
+* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): integration from IoT Hub or IoT Central to FHIR, with data normalization and FHIR conversion of the normalized data
+* Normalization: device data information is extracted into a common format for further processing
+* FHIR Conversion: normalized and grouped data is mapped to FHIR. Observations are created or updated according to configured templates and linked to the device and patient.
+* [Tools to help build the conversion map](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): visualize the mapping configuration for normalizing the device input data and transforming it to FHIR resources. Developers can use this tool to edit and test the mappings, device mapping and FHIR mapping, and export them for uploading to the IoT Connector in the Azure portal.
+
+#### HealthKit and FHIR Integration
+* [microsoft/healthkit-on-fhir](https://github.com/microsoft/healthkit-on-fhir): a Swift library that automates the export of Apple HealthKit Data to a FHIR Server
+
+
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/iot-fhir-portal-quickstart.md
Deploy the [Continuous patient monitoring application template](../../iot-centra
> Whenever your real devices are ready, you can use the same IoT Central application to [onboard your devices](../../iot-central/core/howto-set-up-template.md) and replace device simulators. Your device data will automatically start flowing to FHIR as well.

## Connect your IoT data with the Azure IoT Connector for FHIR (preview)
-> [!WARNING]
-> The Device mapping template provided in this guide is designed to work with Data Export (legacy) within IoT Central.
-Once you've deployed your IoT Central application, your two out-of-the-box simulated devices will start generating telemetry. For this tutorial, we'll ingest the telemetry from *Smart Vitals Patch* simulator into FHIR via the Azure IoT Connector for FHIR. To export your IoT data to the Azure IoT Connector for FHIR, we'll want to [set up a continuous data export within IoT Central](../../iot-central/core/howto-export-data-legacy.md). On the continuous data export page:
-- Pick *Azure Event Hubs* as the export destination.-- Select *Use a connection string* value for **Event Hubs namespace** field.-- Provide Azure IoT Connector for FHIR's connection string obtained in a previous step for the **Connection String** field.-- Keep **Telemetry** option *On* for **Data to Export** field.
+Once you've deployed your IoT Central application, your two out-of-the-box simulated devices will start generating telemetry. For this tutorial, we'll ingest the telemetry from *Smart Vitals Patch* simulator into FHIR via the Azure IoT Connector for FHIR. To export your IoT data to the Azure IoT Connector for FHIR, we'll want to [set up a continuous data export within IoT Central](../../iot-central/core/howto-export-data.md). We'll first need to create a connection to the destination, and then we'll create a data export job to continuously run:
+
+Create a new destination:
+- Go to the **Destinations** tab and create a new destination.
+- Start by giving your destination a unique name.
+- Pick *Azure Event Hubs* as the destination type.
+- Provide Azure IoT Connector for FHIR's connection string obtained in a previous step for the **Connection string** field.
+
+Create a new data export:
+- Once you've created your destination, go over to the **Exports** tab and create a new data export.
+- Start by giving the data export a unique name.
+- Under **Data** select *Telemetry* as the *Type of data to export*.
+- Under **Destination** select the destination name you created in the previous step.
## View device data in Azure API for FHIR
Learn how to configure IoT Connector using device and FHIR mapping templates.
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Your transparent gateway is now configured and ready to start forwarding telemet
## Provision a downstream device
-Currently, IoT Edge can't automatically provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device. To complete these steps, you need an environment with Python 3.5 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.5 pre-installed:
+Currently, IoT Edge can't automatically provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
1. Run the following command to install the `azure.iot.device` module:
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-analytics.md
You can configure an IoT Central application to continuously export telemetry to
1. In the Azure portal, navigate to your Event Hubs namespace and select **+ Event Hub**.
1. Name your event hub **centralexport**.
1. In the list of event hubs in your namespace, select **centralexport**. Then choose **Shared access policies**.
-1. Select **+ Add**. Create a policy named **Listen** with the **Listen** claim.
+1. Select **+ Add**. Create a policy named **SendListen** with the **Send** and **Listen** claims.
1. When the policy is ready, select it in the list, and then copy the **Connection string-primary key** value.
1. Make a note of this connection string; you use it later when you configure your Databricks notebook to read from the event hub.
Your Event Hubs namespace looks like the following screenshot:
:::image type="content" source="media/howto-create-custom-analytics/event-hubs-namespace.png" alt-text="image of Event Hubs namespace.":::
-## Configure export in IoT Central and create a new destination
+## Configure export in IoT Central
-On the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, navigate to the IoT Central application you created from the Contoso template. In this section, you configure the application to stream the telemetry from its simulated devices to your event hub. To configure the export:
+In this section, you configure the application to stream telemetry from its simulated devices to your event hub.
-1. Navigate to the **Data Export** page, select **+ New Export**.
-1. Before finishing the first window, Select **Create a destination**.
+On the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, navigate to the IoT Central application you created previously. To configure the export, first create a destination:
-The window will look like below.
+1. Navigate to the **Data export** page, then select **Destinations**.
+1. Select **+ New destination**.
+1. Use the values in the following table to create a destination:
+ | Setting | Value |
+ | -- | -- |
+ | Destination name | Telemetry event hub |
+ | Destination type | Azure Event Hubs |
+ | Connection string | The event hub connection string you made a note of previously |
-3. Enter the following values:
+ The **Event Hub** shows as **centralexport**.
-| Setting | Value |
-| - | -- |
-| Destination Name | Your Destination Name |
-| Destination Type | Azure Event Hubs |
-| Connection String| The event hub connection string you made a note of previously. |
-| Event Hub| Your Event Hub Name|
+ :::image type="content" source="media/howto-create-custom-analytics/data-export-1.png" alt-text="Screenshot showing data export destination.":::
+
+1. Select **Save**.
+
+To create the export definition:
-4. Click **Create** to finish.
+1. Navigate to the **Data export** page and select **+ New Export**.
-5. Use the following settings to configure the export:
+1. Use the values in the following table to configure the export:
| Setting | Value |
| - | -- |
- | Enter an export name | eventhubexport |
+ | Export name | Event Hub Export |
| Enabled | On |
- | Data| Select telemetry |
- | Destinations| Create a destination, as shown below, for your export and then select it in the destination dropdown menu. |
+ | Type of data to export | Telemetry |
+ | Destinations | Select **+ Destination**, then select **Telemetry event hub** |
+1. Select **Save**.
-6. When finished, select **Save**.
+ :::image type="content" source="media/howto-create-custom-analytics/data-export-2.png" alt-text="Screenshot showing data export definition.":::
-Wait until the export status is **Running** before you continue.
+Wait until the export status is **Healthy** on the **Data export** page before you continue.
## Configure Databricks workspace
In this how-to guide, you learned how to:
* Stream telemetry from an IoT Central application using *continuous data export*.
* Create an Azure Databricks environment to analyze and plot telemetry data.
-Now that you know how to create custom analytics, the suggested next step is to learn how to [Visualize and analyze your Azure IoT Central data in a Power BI dashboard](howto-connect-powerbi.md).
+Now that you know how to create custom analytics, the suggested next step is to learn how to [Visualize and analyze your Azure IoT Central data in a Power BI dashboard](howto-connect-powerbi.md).
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
+
+ Title: Azure IoT Central administrator guide
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of the administrator role in IoT Central.
++ Last updated : 03/22/2021++++++
+# IoT Central administrator guide
+
+This article provides an overview of the administrator role in IoT Central.
+
+To access and use the Administration section, you must be in the Administrator role for an Azure IoT Central application. If you create an Azure IoT Central application, you're automatically assigned to the Administrator role for that application.
+
+As an _administrator_, you are responsible for administrative tasks such as:
+
+* Managing roles
+* Curating permissions
+* Managing the application, including changing the application name and URL
+* Uploading images
+* Deleting an application
+
+## Manage application settings
+You can [manage application settings](howto-administer.md).
+
+## Manage billing
+You can [manage your Azure IoT Central billing](howto-view-bill.md). You can move your application from the free pricing plan to a standard pricing plan, and also upgrade or downgrade your pricing plan.
+
+## Export applications
+You can [export your Azure IoT application](howto-use-app-templates.md) so that you can reuse it.
+
+## Manage migration between versions
+When you create a new IoT Central application, it's a V3 application. If you previously created an application, then depending on when you created it, it may be V2. You can [migrate a V2 to a V3 application](howto-migrate.md).
+
+## Monitor application health
+You can use the set of metrics provided by IoT Central to [assess the health of devices](howto-monitor-application-health.md) connected to your IoT Central application and the health of your running data exports.
+
+## Manage security (X.509, SAS keys, API tokens)
+As an _administrator_, you can do the following:
+* Manage [X.509 certificates](how-to-roll-x509-certificates.md)
+* Manage [SAS keys](concepts-get-connected.md)
+* Review [API tokens](https://docs.microsoft.com/rest/api/iotcentral/)
+
+## Configure file uploads
+You can configure [file uploads](howto-configure-file-uploads.md).
+
+## Tools - Azure CLI, Azure PowerShell, Azure portal
+
+Here are some tools you have access to as an _administrator_.
+* [Azure CLI](howto-manage-iot-central-from-cli.md)
+* [Azure PowerShell](howto-manage-iot-central-from-powershell.md)
+* [Azure portal](howto-manage-iot-central-from-portal.md)
+
+## Next steps
+
+Now that you've learned about how to administer your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central.
iot-develop Quickstart Send Telemetry Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-cli-python.md
In this quickstart, you learned a basic Azure IoT application workflow for secur
As a next step, explore the Azure IoT Python SDK through application samples. - [Asynchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-hub-scenarios): This directory contains asynchronous Python samples for additional IoT Hub scenarios.-- [Synchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples): This directory contains Python samples for use with Python 2.7 or synchronous compatibility scenarios for Python 3.5+-- [IoT Edge samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-edge-scenarios): This directory contains Python samples for working with Edge modules and downstream devices.
+- [Synchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples): This directory contains Python samples for use with Python 2.7 or synchronous compatibility scenarios for Python 3.6+
+- [IoT Edge samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-edge-scenarios): This directory contains Python samples for working with Edge modules and downstream devices.
iot-develop Quickstart Send Telemetry Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-python.md
In this quickstart, you learned a basic Azure IoT application workflow for secur
As a next step, explore the Azure IoT Python SDK through application samples. - [Asynchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-hub-scenarios): This directory contains asynchronous Python samples for additional IoT Hub scenarios.-- [Synchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples): This directory contains Python samples for use with Python 2.7 or synchronous compatibility scenarios for Python 3.5+
+- [Synchronous Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples): This directory contains Python samples for use with Python 2.7 or synchronous compatibility scenarios for Python 3.6+
- [IoT Edge samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-edge-scenarios): This directory contains Python samples for working with Edge modules and downstream devices.
iot-dps Quick Create Simulated Device X509 Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-x509-python.md
In this quickstart, you provision a development machine as a Python X.509 device
- Familiar with [provisioning](about-iot-dps.md#provisioning-process) concepts. - Completion of [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md). - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- [Python 3.5.3 or later](https://www.python.org/downloads/)
+- [Python 3.6 or later](https://www.python.org/downloads/)
- [Git](https://git-scm.com/download/).
iot-edge About Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/about-iot-edge.md
Azure IoT Edge integrates seamlessly with Azure IoT solution accelerators to pro
## Next steps
-Try out these concepts by [deploying IoT Edge on a simulated device](quickstart.md).
+Try out these concepts by deploying your first IoT Edge module to a device:
+
+<!-- 1.1 -->
+
+* [Deploy modules to a Linux IoT Edge device](quickstart-linux.md)
+* [Deploy modules to a Windows IoT Edge device](quickstart.md)
++
+<!-- 1.2 -->
+
+[Deploy modules to an IoT Edge device](quickstart-linux.md)
+
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-configure-proxy-support.md
Whether your IoT Edge device runs on Windows or Linux, you need to access the in
If you're installing the IoT Edge runtime on a Linux device, configure the package manager to go through your proxy server to access the installation package. For example, [Set up apt-get to use a http-proxy](https://help.ubuntu.com/community/AptGet/Howto/#Setting_up_apt-get_to_use_a_http-proxy). Once your package manager is configured, follow the instructions in [Install Azure IoT Edge runtime](how-to-install-iot-edge.md) as usual.
-### Windows devices
+### Windows devices using IoT Edge for Linux on Windows
+
+If you're installing the IoT Edge runtime using IoT Edge for Linux on Windows, IoT Edge is installed by default on your Linux virtual machine. No additional installation or update steps are required.
+
+### Windows devices using Windows containers
If you're installing the IoT Edge runtime on a Windows device, you need to go through the proxy server twice. The first connection downloads the installer script file, and the second connection is during the installation to download the necessary components. You can configure proxy information in Windows settings, or include your proxy information directly in the PowerShell commands.
systemctl show --property=Environment aziot-identityd
:::moniker-end <!--end 1.2-->
-#### Windows
+#### Windows using IoT Edge for Linux on Windows
+
+Log in to your IoT Edge for Linux on Windows virtual machine:
+
+```azurepowershell-interactive
+Ssh-EflowVm
+```
+
+Follow the same steps as the Linux section above to configure the IoT Edge daemon.
+
+#### Windows using Windows containers
Open a PowerShell window as an administrator and run the following command to edit the registry with the new environment variable. Replace **\<proxy url>** with your proxy server address and port.
This step takes place once on the IoT Edge device during initial device setup.
5. Save the changes to config.yaml and close the editor. Restart IoT Edge for the changes to take effect.
- * Linux:
+ * Linux and IoT Edge for Linux on Windows:
```bash
sudo systemctl restart iotedge
```
- * Windows:
+ * Windows using Windows containers:
```powershell Restart-Service iotedge
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
Verify that IoT Edge for Linux on Windows was successfully installed and configu
## Next steps
-Continue to [deploy IoT Edge modules](how-to-deploy-modules-portal.md) to learn how to deploy modules onto your device.
+* Continue to [deploy IoT Edge modules](how-to-deploy-modules-portal.md) to learn how to deploy modules onto your device.
+* Learn how to [manage certificates on your IoT Edge for Linux on Windows virtual machine](how-to-manage-device-certificates.md) and transfer files from the host OS to your Linux virtual machine.
+* Learn how to [configure your IoT Edge devices to communicate through a proxy server](how-to-configure-proxy-support.md).
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-manage-device-certificates.md
To see an example of these certificates, review the scripts that create demo cer
Install your certificate chain on the IoT Edge device and configure the IoT Edge runtime to reference the new certificates.
-Copy the three certificate and key files onto your IoT Edge device. You can use a service like [Azure Key Vault](../key-vault/index.yml) or a function like [Secure copy protocol](https://www.ssh.com/ssh/scp/) to move the certificate files. If you generated the certificates on the IoT Edge device itself, you can skip this step and use the path to the working directory.
+Copy the three certificate and key files onto your IoT Edge device. You can use a service like [Azure Key Vault](../key-vault/index.yml) or a function like [Secure copy protocol](https://www.ssh.com/ssh/scp/) to move the certificate files. If you generated the certificates on the IoT Edge device itself, you can skip this step and use the path to the working directory.
-For example, if you used the sample scripts to [Create demo certificates](how-to-create-test-certificates.md), copy the following files onto your IoT-Edge device:
+If you are using IoT Edge for Linux on Windows, you need to use the SSH key located in the Azure IoT Edge `id_rsa` file to authenticate file transfers between the host OS and the Linux virtual machine. You can do an authenticated SCP using the following command:
+
+ ```powershell-interactive
+ C:\WINDOWS\System32\OpenSSH\scp.exe -i 'C:\Program Files\Azure IoT Edge\id_rsa' <PATH_TO_SOURCE_FILE> iotedge-user@<VM_IP>:<PATH_TO_FILE_DESTINATION>
+ ```
+
+ >[!NOTE]
+ >The Linux virtual machine's IP address can be queried via the `Get-EflowVmAddr` command.
+
+If you used the sample scripts to [Create demo certificates](how-to-create-test-certificates.md), copy the following files onto your IoT-Edge device:
* Device CA certificate: `<WRKDIR>\certs\iot-edge-device-MyEdgeDeviceCA-full-chain.cert.pem` * Device CA private key: `<WRKDIR>\private\iot-edge-device-MyEdgeDeviceCA.key.pem`
For example, if you used the sample scripts to [Create demo certificates](how-to
1. Open the IoT Edge security daemon config file.
- * Windows: `C:\ProgramData\iotedge\config.yaml`
- * Linux: `/etc/iotedge/config.yaml`
+ * Linux and IoT Edge for Linux on Windows: `/etc/iotedge/config.yaml`
+
+ * Windows using Windows containers: `C:\ProgramData\iotedge\config.yaml`
1. Set the **certificate** properties in config.yaml to the file URI path to the certificate and key files on the IoT Edge device. Remove the `#` character before the certificate properties to uncomment the four lines. Make sure the **certificates:** line has no preceding whitespace and that nested items are indented by two spaces. For example:
- * Windows:
+ * Linux and IoT Edge for Linux on Windows:
```yaml certificates:
- device_ca_cert: "file:///C:/<path>/<device CA cert>"
- device_ca_pk: "file:///C:/<path>/<device CA key>"
- trusted_ca_certs: "file:///C:/<path>/<root CA cert>"
+ device_ca_cert: "file:///<path>/<device CA cert>"
+ device_ca_pk: "file:///<path>/<device CA key>"
+ trusted_ca_certs: "file:///<path>/<root CA cert>"
```
- * Linux:
+ * Windows using Windows containers:
```yaml certificates:
- device_ca_cert: "file:///<path>/<device CA cert>"
- device_ca_pk: "file:///<path>/<device CA key>"
- trusted_ca_certs: "file:///<path>/<root CA cert>"
+ device_ca_cert: "file:///C:/<path>/<device CA cert>"
+ device_ca_pk: "file:///C:/<path>/<device CA key>"
+ trusted_ca_certs: "file:///C:/<path>/<root CA cert>"
``` 1. On Linux devices, make sure that the user **iotedge** has read permissions for the directory holding the certificates. 1. If you've used any other certificates for IoT Edge on the device before, delete the files in the following two directories before starting or restarting IoT Edge:
- * Windows: `C:\ProgramData\iotedge\hsm\certs` and `C:\ProgramData\iotedge\hsm\cert_keys`
+ * Linux and IoT Edge for Linux on Windows: `/var/lib/iotedge/hsm/certs` and `/var/lib/iotedge/hsm/cert_keys`
+
+ * Windows using Windows containers: `C:\ProgramData\iotedge\hsm\certs` and `C:\ProgramData\iotedge\hsm\cert_keys`
- * Linux: `/var/lib/iotedge/hsm/certs` and `/var/lib/iotedge/hsm/cert_keys`
:::moniker-end <!-- end 1.1 -->
Upon expiry after the specified number of days, IoT Edge has to be restarted to
1. Delete the contents of the `hsm` folder to remove any previously generated certificates.
- Windows: `C:\ProgramData\iotedge\hsm\certs` and `C:\ProgramData\iotedge\hsm\cert_keys`
- Linux: `/var/lib/iotedge/hsm/certs` and `/var/lib/iotedge/hsm/cert_keys`
+ * Linux and IoT Edge for Linux on Windows: `/var/lib/iotedge/hsm/certs` and `/var/lib/iotedge/hsm/cert_keys`
-1. Restart the IoT Edge service.
-
- Windows:
+ * Windows using Windows containers: `C:\ProgramData\iotedge\hsm\certs` and `C:\ProgramData\iotedge\hsm\cert_keys`
- ```powershell
- Restart-Service iotedge
- ```
+1. Restart the IoT Edge service.
- Linux:
+ * Linux and IoT Edge for Linux on Windows:
```bash
sudo systemctl restart iotedge
```
-1. Confirm the lifetime setting.
-
- Windows:
+ * Windows using Windows containers:
```powershell
- iotedge check --verbose
+ Restart-Service iotedge
```
- Linux:
+1. Confirm the lifetime setting.
+
+ * Linux and IoT Edge for Linux on Windows:
```bash
sudo iotedge check --verbose
```
+ * Windows using Windows containers:
+
+ ```powershell
+ iotedge check --verbose
+ ```
+ Check the output of the **production readiness: certificates** check, which lists the number of days until the automatically generated device CA certificates expire.

:::moniker-end
iot-edge Iot Edge As Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-as-gateway.md
description: Use Azure IoT Edge to create a transparent, opaque, or proxy gatewa
Previously updated : 11/10/2020 Last updated : 03/23/2021
IoT Edge devices can operate as gateways, providing a connection between other devices on the network and IoT Hub.
-The IoT Edge hub module acts like IoT Hub, so can handle connections from any devices that have an identity with IoT Hub, including other IoT Edge devices. This type of gateway pattern is called *transparent* because messages can pass from downstream devices to IoT Hub as though there were not a gateway between them.
-
-<!-- 1.2.0 -->
-Beginning with version 1.2 of IoT Edge, transparent gateways can handle downstream connections from other IoT Edge devices.
+The IoT Edge hub module acts like IoT Hub, so it can handle connections from other devices that have an identity with the same IoT hub. This type of gateway pattern is called *transparent* because messages can pass from downstream devices to IoT Hub as though there were no gateway between them.
For devices that don't or can't connect to IoT Hub on their own, IoT Edge gateways can provide that connection. This type of gateway pattern is called *translation* because the IoT Edge device has to perform processing on incoming downstream device messages before they can be forwarded to IoT Hub. These scenarios require additional modules on the IoT Edge gateway to handle the processing steps.
For more information about how the IoT Edge hub manages communication between do
<!-- 1.1 --> ::: moniker range="iotedge-2018-06"-
-IoT Edge devices cannot be downstream of an IoT Edge gateway.
- ![Diagram - Transparent gateway pattern](./media/iot-edge-as-gateway/edge-as-gateway-transparent.png)
+>[!NOTE]
+>In IoT Edge version 1.1 and older, IoT Edge devices cannot be downstream of an IoT Edge gateway.
+>
+>Beginning with version 1.2 of IoT Edge, transparent gateways can handle connections from downstream IoT Edge devices. For more information, switch to the [IoT Edge 1.2](?view=iotedge-2020-11&preserve-view=true) version of this article.
+ ::: moniker-end
-<!-- 1.2.0 -->
+<!-- 1.2 -->
::: moniker range=">=iotedge-2020-11"
-Starting in version 1.2.0, IoT Edge devices can connect through transparent gateways.
+Beginning with version 1.2 of IoT Edge, transparent gateways can handle connections from downstream IoT Edge devices.
<!-- TODO add a downstream IoT Edge device to graphic -->
iot-hub Iot Hub Device Sdk Platform Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-sdk-platform-support.md
The [Azure IoT Hub Python device SDK](https://github.com/Azure/azure-iot-sdk-pyt
| OS | Compiler |
|--|--|
-| Linux | Python 2.7.*, 3.5 or later |
-| macOS High Sierra | Python 2.7.*, 3.5 or later |
-| Windows 10 family | Python 2.7.*, 3.5 or later |
+| Linux | Python 2.7.*, 3.6 or later |
+| macOS High Sierra | Python 2.7.*, 3.6 or later |
+| Windows 10 family | Python 2.7.*, 3.6 or later |
Only Python versions 3.5.3 or later support the asynchronous APIs; we recommend using version 3.7 or later.
If you experience problems while using the Azure IoT device SDKs, there are seve
## Next steps * [Device and service SDKs](iot-hub-devguide-sdks.md)
-* [Porting Guidance](https://github.com/Azure/azure-c-shared-utility/blob/master/devdoc/porting_guide.md)
+* [Porting Guidance](https://github.com/Azure/azure-c-shared-utility/blob/master/devdoc/porting_guide.md)
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ip-filtering.md
Previously updated : 03/12/2021 Last updated : 03/22/2021
By default, the **IP Filter** grid in the portal for an IoT hub is empty. This d
## Add or edit an IP filter rule
-To add an IP filter rule, select **+ Add IP Filter Rule**.
+To add an IP filter rule, select **+ Add IP Filter Rule**. To quickly add your computer's IP address, select **Add your client IP address**.
:::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-add-rule.png" alt-text="Add an IP filter rule to an IoT hub":::
-After selecting **Add IP Filter Rule**, fill in the fields.
+After selecting **Add IP Filter Rule**, fill in the fields. These fields are pre-filled for you if you chose to add your client IP address.
:::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-after-selecting-add.png" alt-text="After selecting Add an IP Filter rule":::
IP filter rules are *allow* rules and applied without ordering. Only IP addresse
For example, if you want to accept addresses in the range `192.168.100.0/22` and reject everything else, you only need to add one rule in the grid with address range `192.168.100.0/22`.
+### Azure portal
+
+IP filter rules are also applied when using IoT Hub through the Azure portal. This is because API calls to the IoT Hub service are made directly from your browser with your credentials, which is consistent with other Azure services. To access IoT Hub using the Azure portal when IP filtering is enabled, add your computer's IP address to the allowlist.
## Retrieve and update IP filters using Azure CLI

Your IoT Hub's IP filters can be retrieved and updated through [Azure CLI](/cli/azure/).
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
Previously updated : 03/12/2021 Last updated : 03/22/2021 # Managing public network access for your IoT hub
To turn on public network access, selected **All networks**, then **Save**.
## Accessing the IoT Hub after disabling public network access
-After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure private link](virtual-network-support.md).
+After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure Private Link](virtual-network-support.md). This restriction includes accessing through the Azure portal, because API calls to the IoT Hub service are made directly from your browser with your credentials.
## IoT Hub endpoint, IP address, and ports after disabling public network access
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
kinect-dk Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/troubleshooting.md
ONNX Runtime includes environment variables to control TensorRT model caching. T
The folder must be created prior to starting body tracking.
-> [!NOTE]
+> [!IMPORTANT]
> TensorRT pre-processes the model prior to inference, resulting in extended start-up times compared to other execution environments. Engine caching limits this to the first execution; however, it's experimental and is specific to the model, ONNX Runtime version, TensorRT version, and GPU model. The TensorRT execution environment supports both FP32 (default) and FP16. FP16 trades a ~2x performance increase for a minimal accuracy decrease. To specify FP16:
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
Title: SNAT for outbound connections
-description: Describes how Azure Load Balancer is used to perform SNAT for outbound internet connectivity
+ Title: Source Network Address Translation (SNAT) for outbound connections
+
+description: Learn how Azure Load Balancer is used for outbound internet connectivity (SNAT).
Last updated 10/13/2020
-# Using SNAT for outbound connections
+# Using Source Network Address Translation (SNAT) for outbound connections
The frontend IPs of an Azure public load balancer can be used to provide outbound connectivity to the internet for backend instances. This configuration uses **source network address translation (SNAT)**. SNAT rewrites the IP address of the backend to the public IP address of your load balancer.
-SNAT enables **IP masquerading** of the backend instance. This masquerading prevents outside sources from having a direct address to the backend instances. Sharing an IP address between backend instances reduces the cost of static public IPs and supports scenarios such as simplifying IP allow lists with traffic from known public IPs.
+SNAT enables **IP masquerading** of the backend instance. This masquerading prevents outside sources from having a direct address to the backend instances. An IP address shared between backend instances reduces the cost of static public IPs. A known IP address supports scenarios such as simplifying IP allowlists with traffic from known public IPs.
>[!Note] > For applications that require large numbers of outbound connections or enterprise customers who require a single set of IPs to be used from a given virtual network,
-> [Virtual Network NAT](../virtual-network/nat-overview.md) is the recommended solution. It's dynamic allocation allows for simple configuration and > the most efficient use of SNAT ports from each IP address. It also allows all resources in the virtual network to share a set of IP addresses without a need for them to share > a load balancer.
+> [Virtual Network NAT](../virtual-network/nat-overview.md) is the recommended solution. Its dynamic allocation allows for simple configuration and the most efficient use of SNAT ports from each IP address. It allows all resources in the virtual network to share a set of IP addresses without a need for them to share a load balancer.
>[!Important] > Even without outbound SNAT configured, Azure storage accounts within the same region will still be accessible and backend resources will still have access to Microsoft services such as Windows Updates.
The five-tuple consists of:
* Source IP
* Source port and protocol to provide this distinction.
-If a port is used for inbound connections, it will have a **listener** for inbound connection requests on that port and cannot be used for outbound connections. To establish an outbound connection, an **ephemeral port** must be used to provide the destination with a port on which to communicate and maintain a distinct traffic flow. When these ephemeral ports are used to perform SNAT they are called **SNAT ports**
+If a port is used for inbound connections, it has a **listener** for inbound connection requests on that port. That port can't be used for outbound connections. To establish an outbound connection, an **ephemeral port** is used to provide the destination with a port on which to communicate and maintain a distinct traffic flow. When these ephemeral ports are used for SNAT, they're called **SNAT ports**.
-By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP(Transmission Control Protocol) and UDP(User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, Azure gives 64,000 eligible for use as SNAT ports.
+By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
+
+When a public IP address is added as a frontend IP to a load balancer, Azure gives 64,000 ports that are eligible for SNAT.
>[!NOTE]
-> Each port used for a load-balancing or inbound NAT rule will consume a range of eight ports from these 64,000 ports, reducing the number of ports eligible for SNAT. If a load-> balancing or nat rule is in the same range of eight as another it will consume no additional ports.
+> Each port used for a load-balancing or inbound NAT rule will consume a range of eight ports from these 64,000 ports, reducing the number of ports eligible for SNAT. If a load-balancing or NAT rule is in the same range of eight as another, it will consume no additional ports.
Through [outbound rules](./outbound-rules.md) and load-balancing rules, these SNAT ports can be distributed to backend instances to enable them to share the public IPs of the load balancer for outbound connections.
-When [scenario 2](#scenario2) below is configured, the host for each backend instance will perform SNAT on packets that are part of an outbound connection. When performing SNAT on an outbound connection from a backend instance, the host rewrites the source IP to one of the frontend IPs. To maintain unique flows, the host rewrites the source port of each outbound packet to one of the SNAT ports allocated for the backend instance.
+When [scenario 2](#scenario2) below is configured, the host for each backend instance will SNAT packets that are part of an outbound connection.
+
+When doing SNAT on an outbound connection from a backend instance, the host rewrites the source IP to one of the frontend IPs.
+
+To maintain unique flows, the host rewrites the source port of each outbound packet to one of the SNAT ports allocated to the backend instance.
## Outbound connection behavior for different scenarios

* Virtual machine with public IP.
* Virtual machine without public IP.
* Virtual machine without public IP and without standard load balancer.

### <a name="scenario1"></a> Scenario 1: Virtual machine with public IP

| Associations | Method | IP protocols |
| - | - | - |
| Public load balancer or stand-alone | [SNAT (Source Network Address Translation)](#snat) </br> not used. | TCP (Transmission Control Protocol) </br> UDP (User Datagram Protocol) </br> ICMP (Internet Control Message Protocol) </br> ESP (Encapsulating Security Payload) |

#### Description

Azure uses the public IP assigned to the IP configuration of the instance's NIC for all outbound flows. The instance has all ephemeral ports available. It doesn't matter whether the VM is load balanced or not. This scenario takes precedence over the others.

A public IP assigned to a VM is a 1:1 relationship (rather than 1:many) and implemented as a stateless 1:1 NAT.

### <a name="scenario2"></a>Scenario 2: Virtual machine without public IP and behind Standard public Load Balancer

| Associations | Method | IP protocols |
| - | - | - |
| Standard public load balancer | Use of load balancer frontend IPs for [SNAT](#snat). | TCP </br> UDP |

#### Description
- The load balancer resource is configured with an outbound rule or a load-balancing rule that enables default SNAT. This rule is used to create a link between the public IP frontend with the backend pool.
-
+ The load balancer resource is configured with an outbound rule or a load-balancing rule that enables SNAT. This rule is used to create a link between the public IP frontend with the backend pool.
If you don't complete this rule configuration, the behavior is as described in scenario 3.

A rule with a listener isn't required for the health probe to succeed.

When a VM creates an outbound flow, Azure translates the source IP address to the public IP address of the public load balancer frontend. This translation is done via [SNAT](#snat).

Ephemeral ports of the load balancer frontend public IP address are used to distinguish individual flows originated by the VM. SNAT dynamically uses [preallocated ephemeral ports](#preallocatedports) when outbound flows are created.
+ In this context, the ephemeral ports used for SNAT are called SNAT ports. It's highly recommended that an [outbound rule](./outbound-rules.md) is explicitly configured. If using default SNAT through a load-balancing rule, SNAT ports are pre-allocated as described in the [Default SNAT ports allocation table](#snatporttable).
- In this context, the ephemeral ports used for SNAT are called SNAT ports. It is highly recommended that an [outbound rule](./outbound-rules.md) is explicitly configured. If using default SNAT through a load-balancing rule, SNAT ports are pre-allocated as described in the [Default SNAT ports allocation table](#snatporttable).
+> [!NOTE]
+> **Azure Virtual Network NAT** can provide outbound connectivity for virtual machines without the need for a load balancer. See [What is Azure Virtual Network NAT?](../virtual-network/nat-overview.md) for more information.
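To make the preallocation behavior concrete, here's a small sketch that encodes the tier sizes from the default SNAT ports allocation table; treat the numbers as illustrative of the documented defaults rather than authoritative.

```python
# Sketch of the default SNAT port preallocation tiers (per backend instance),
# based on the default SNAT ports allocation table; values are illustrative.
def default_snat_ports(pool_size: int) -> int:
    tiers = [
        (50, 1024), (100, 512), (200, 256),
        (400, 128), (800, 64), (1000, 32),
    ]
    for max_size, ports in tiers:
        if pool_size <= max_size:
            return ports
    raise ValueError("pools larger than 1,000 instances aren't covered here")

for size in (10, 75, 300, 900):
    print(f"{size} backend instances -> {default_snat_ports(size)} SNAT ports each")
```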
### <a name="scenario3"></a>Scenario 3: Virtual machine without public IP and behind Standard internal Load Balancer - | Associations | Method | IP protocols | | | | | | Standard internal load balancer | No internet connectivity.| None | #### Description
-When using a Standard internal load balancer there is no use of ephemeral IP addresses for SNAT. This is to support security by default and ensure that all IP addresses used by resource are configurable and can be reserved. In order to achieve outbound connectivity to the internet when using a Standard internal load balancer, configure an instance level public IP address to follow the behavior in (scenario 1)[#scenario1] or add the backend instances to a Standard public load balancer with an outbound rule configured in additon to the internal load balancer to follow the behavior in (scenario 2)[#scenario2].
+When using a Standard internal load balancer, ephemeral IP addresses aren't used for SNAT. This behavior supports security by default and ensures that all IP addresses used by resources are configurable and can be reserved.
- ### <a name="scenario4"></a>Scenario 4: Virtual machine without public IP and behind Basic Load Balancer
+To achieve outbound connectivity to the internet when using a Standard internal load balancer, configure an instance level public IP address to follow the behavior in [scenario 1](#scenario1).
+
+Another option is to add the backend instances to a Standard public load balancer with an outbound rule configured. The backend instances are added to an internal load balancer for internal load balancing. This deployment follows the behavior in [scenario 2](#scenario2).
+> [!NOTE]
+> **Azure Virtual Network NAT** can provide outbound connectivity for virtual machines without the need for a load balancer. See [What is Azure Virtual Network NAT?](../virtual-network/nat-overview.md) for more information.
+
+ ### <a name="scenario4"></a>Scenario 4: Virtual machine without public IP and behind Basic Load Balancer
| Associations | Method | IP protocols |
| - | - | - |
When using a Standard internal load balancer there is no use of ephemeral IP add
#### Description
+ When the VM creates an outbound flow, Azure translates the source IP address to a dynamically assigned public source IP address. This public IP address **isn't configurable** and can't be reserved. This address doesn't count against the subscription's public IP resource limit.
- When the VM creates an outbound flow, Azure translates the source IP address to a dynamically allocated public source IP address. This public IP address **isn't configurable** and can't be reserved. This address doesn't count against the subscription's public IP resource limit.
--
- The public IP address will be released and a new public IP requested if you redeploy the:
-
+The public IP address will be released and a new public IP requested if you redeploy the:
* Virtual machine
* Availability set
* Virtual machine scale set
- Don't use this scenario for adding IPs to an allow list. Use scenario 1 or 2 where you explicitly declare outbound behavior. [SNAT](#snat) ports are preallocated as described in the [Default SNAT ports allocation table](#snatporttable).
+ Don't use this scenario for adding IPs to an allowlist. Use scenario 1 or 2 where you explicitly declare outbound behavior. [SNAT](#snat) ports are preallocated as described in the [Default SNAT ports allocation table](#snatporttable).
## <a name="scenarios"></a> Exhausting ports
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 03/08/2021 Last updated : 03/24/2021 tags: connectors
The managed SAP connector integrates with SAP systems through your [on-premises
These prerequisites apply if you're running your logic app in a Premium-level ISE. However, they don't apply to logic apps running in a Developer-level ISE. An ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let logic apps directly access on-premises resources without using on-premises data gateway.
-> [!NOTE]
-> While the SAP ISE connector is visible inside of a Developer-level ISE, attempts to install the connector won't succeed.
1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md).
1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly files:
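If you prefer the command line for the storage prerequisite, a minimal Azure CLI sketch might look like the following; the account and container names are hypothetical, and storage account names must be globally unique:

```azurecli
# Create a general-purpose storage account (the name here is a placeholder).
az storage account create \
  --resource-group myResourceGroup \
  --name mysapstorage123 \
  --sku Standard_LRS

# Create a blob container in that account to hold the SAP client library assemblies.
# --auth-mode login uses your Azure AD credentials, which need a data-plane RBAC role.
az storage container create \
  --account-name mysapstorage123 \
  --name sap-client-library \
  --auth-mode login
```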
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021 ms.suite: integration
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
A compute instance:
You can use a compute instance as a local inferencing deployment target for test/debug scenarios.
+> [!TIP]
+> The compute instance has a 120-GB OS disk. If you run out of disk space, clear sufficient space before you stop or restart the compute instance.
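What "clearing space" involves depends on the workload. As an illustrative sketch for the instance's Linux terminal (assuming pip and conda are present, as they typically are on a compute instance), you might start with:

```bash
# Check how full the OS disk is.
df -h /

# Free space by purging common package caches (safe to delete; re-downloaded on demand).
pip cache purge
conda clean --all --yes

# Find the largest directories under home to decide what else to remove.
du -h --max-depth=1 ~ | sort -rh | head -n 10
```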
+ ## <a name="notebookvm"></a>What happened to Notebook VM?
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/17/2021 Last updated : 03/24/2021
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-new-saas-offer-plans.md
The actions that are available in the **Action** column of the **Plan overview**
- If the plan status is **Draft**, the link in the **Action** column will say **Delete draft**.
- If the plan status is **Live**, the link in the **Action** column will be either **Stop sell plan** or **Sync private audience**. The **Sync private audience** link will publish only the changes to your private audiences, without publishing any other updates you might have made to the offer.
+## Before you publish your offer
+
+If you haven't already done so, create a development and test (DEV) offer to test your offer before publishing your production offer live. To learn more, see [Create a development and test offer](create-saas-dev-test-offer.md).
+ ## Next steps

- Learn [How to sell your SaaS offer](create-new-saas-offer-marketing.md) through the **Co-sell with Microsoft** and **Resell through CSPs** programs.
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-new-saas-offer.md
Previously updated : 09/02/2020 Last updated : 03/19/2021
# How to create a SaaS offer in the commercial marketplace
As a commercial marketplace publisher, you can create a software as a service (S
If you haven't already done so, read [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md). It will explain the technical requirements for your SaaS app, and the information and assets you