Updates from: 02/25/2022 02:09:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
Previously updated : 02/09/2022 Last updated : 02/23/2022 # Monitor Azure AD B2C with Azure Monitor
After you've deployed the template and waited a few minutes for the resource pro
1. Sign in to the [Azure portal](https://portal.azure.com) with your **Azure AD B2C** administrative account. This account must be a member of the security group you specified in the [Delegate resource management](#3-delegate-resource-management) step. 1. Select the **Directories + subscriptions** icon in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, find your Azure AD directory that contains the Azure subscription and the _azure-ad-b2c-monitor_ resource group you created, and then select **Switch**.
-1. Verify that you've selected the correct directory and subscription.
+1. Verify that you've selected the correct directory and your Azure subscription is listed and selected in the **Default subscription filter**.
+
+ ![Screenshot of the default subscription filter](./media/azure-monitor/default-subscription-filter.png)
## 5. Configure diagnostic settings
To configure monitoring settings for Azure AD B2C activity logs:
1. Check the box for each destination to send the logs. Select **Configure** to specify their settings **as described in the following table**. 1. Select **Send to Log Analytics**, and then select the **Name of workspace** you created earlier (`AzureAdB2C`). 1. Select **AuditLogs** and **SignInLogs**.+
+ > [!NOTE]
+ > Only the **AuditLogs** and **SignInLogs** diagnostic settings are currently supported for Azure AD B2C tenants.
+ 1. Select **Save**. > [!NOTE]
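Once the diagnostic setting has had a few minutes to deliver data, you can sanity-check the workspace from PowerShell. The following is a minimal sketch, assuming the Az.OperationalInsights module is installed, you're signed in with `Connect-AzAccount`, and `$workspaceId` holds the workspace ID of the `AzureAdB2C` Log Analytics workspace created earlier.

```powershell
# Sketch: confirm that Azure AD B2C logs are arriving in the Log Analytics workspace.
$workspaceId = '<workspace GUID>'   # Workspace ID of the AzureAdB2C workspace

# Return the ten most recent audit events; swap in SignInLogs to check sign-in data.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId `
    -Query 'AuditLogs | top 10 by TimeGenerated desc'

$result.Results | Format-Table TimeGenerated, OperationName
```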
active-directory Cloudknox Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-faqs.md
CloudKnox is a cloud infrastructure entitlement management (CIEM) solution that
## What are the prerequisites to use CloudKnox?
-CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox, however, an Azure subscription or Azure AD P1 or P2 license aren't required to use CloudKnox for AWS or GCP.
+CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox.
## Can a customer use CloudKnox if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 01/10/2022 Last updated : 02/23/2022
Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
-A workload identity is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as:
+A [workload identity](../develop/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they:
-- They usually have no formal lifecycle process.
+- Can't perform multi-factor authentication.
+- Often have no formal lifecycle process.
- Need to store their credentials or secrets somewhere.-- Applications may use multiple identities.
-
-These differences make workload identities difficult to manage, puts them at higher risk for leaks, and reduces the potential for securing access.
+
+These differences make workload identities harder to manage and put them at higher risk for compromise.
> [!IMPORTANT] > In public preview, you can scope Conditional Access policies to service principals in Azure AD with an Azure Active Directory Premium P2 edition active in your tenant. After general availability, additional licenses might be required.
These differences make workload identities difficult to manage, puts them at hig
> [!NOTE] > Policy can be applied to single tenant service principals that have been registered in your tenant. Third party SaaS and multi-tenanted apps are out of scope. Managed identities are not covered by policy.
-This preview enables blocking service principals from outside of trusted IP ranges, such as a corporate network public IP ranges.
+This preview enables blocking service principals from outside of trusted public IP ranges, or based on risk detected by Azure AD Identity Protection.
## Implementation
-### Step 1: Set up a sample application
-
-If you already have a test application that makes use of a service principal, you can skip this step.
-
-Set up a sample application that, demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. Follow the instructions in the article [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](../develop/quickstart-v2-netcore-daemon.md) to create this application.
-
-### Step 2: Create a Conditional Access policy
+### Create a location-based Conditional Access policy
Create a location-based Conditional Access policy that applies to service principals.
Create a location based Conditional Access policy that applies to service princi
1. You can save your policy in **Report-only** mode, allowing administrators to estimate its effects, or enforce it by turning the policy **On**. 1. Select **Create** to complete your policy.
+### Create a risk-based Conditional Access policy
+
+Use this sample JSON for a risk-based policy using the [Microsoft Graph beta endpoint](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-1.0&preserve-view=true).
+
+> [!NOTE]
+> Report-only mode doesn't report account risk on a risky workload identity.
+
+```json
+{
+    "displayName": "Name",
+    "state": "enabled OR disabled",
+    "conditions": {
+        "applications": {
+            "includeApplications": [
+                "All"
+            ],
+            "excludeApplications": [],
+            "includeUserActions": [],
+            "includeAuthenticationContextClassReferences": [],
+            "applicationFilter": null
+        },
+        "userRiskLevels": [],
+        "signInRiskLevels": [],
+        "clientApplications": {
+            "includeServicePrincipals": [
+                "ServicePrincipalsInMyTenant"
+            ],
+            "excludeServicePrincipals": []
+        },
+        "servicePrincipalRiskLevels": [
+            "low",
+            "medium",
+            "high"
+        ]
+    },
+    "grantControls": {
+        "operator": "and",
+        "builtInControls": [
+            "block"
+        ],
+        "customAuthenticationFactors": [],
+        "termsOfUse": []
+    }
+}
+```
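If you prefer to create the policy from PowerShell rather than a raw HTTP client, the sketch below posts the JSON above to the beta endpoint with the Microsoft Graph PowerShell SDK. It assumes the SDK is installed, that your account can consent to the `Policy.ReadWrite.ConditionalAccess` scope, and that `risk-based-policy.json` is a placeholder path holding the sample JSON with the placeholders (such as `state`) filled in.

```powershell
# Sketch: create the risk-based policy by posting the sample JSON to the beta endpoint.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

# Load the policy definition (the JSON shown above, with placeholders replaced).
$policyJson = Get-Content -Raw -Path '.\risk-based-policy.json'

Invoke-MgGraphRequest -Method POST `
    -Uri 'https://graph.microsoft.com/beta/identity/conditionalAccess/policies' `
    -Body $policyJson -ContentType 'application/json'
```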
+ ## Roll back If you wish to roll back this feature, you can delete or disable any created policies.
Failure reason when Service Principal is blocked by Conditional Access: “Acces
### Finding the objectID
-You can get the objectID of the service principal from Azure AD Enterprise Applications. The Object ID in Azure AD App registrations cannot be used. This identifier is the Object ID of the app registration, not of the service principal.
+You can get the objectID of the service principal from Azure AD Enterprise Applications. The Object ID in Azure AD App registrations can't be used. This identifier is the Object ID of the app registration, not of the service principal.
1. Browse to the **Azure portal** > **Azure Active Directory** > **Enterprise Applications**, find the application you registered. 1. From the **Overview** tab, copy the **Object ID** of the application. This identifier is unique to the service principal, used by Conditional Access policy to find the calling app. ### Microsoft Graph
-Sample JSON for configuration using the Microsoft Graph beta endpoint.
+Sample JSON for location-based configuration using the Microsoft Graph beta endpoint.
```json {
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
To compute the assertion, you can use one of the many JWT libraries in the langu
| | |
| `alg` | Should be **RS256** |
| `typ` | Should be **JWT** |
-| `x5t` | The X.509 certificate hash's (also known as the cert's SHA-1 *thumbprint*) Hex representation encoded as a Base64url string value. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64url). |
+| `x5t` | Base64url-encoded SHA-1 thumbprint of the X.509 certificate. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64url). |
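If you're computing the `x5t` value yourself, the conversion is just hex-to-bytes followed by Base64 encoding with URL-safe characters. A minimal PowerShell sketch using the hash from the example above (depending on the consuming library, you may keep or strip the trailing `=` padding):

```powershell
# Sketch: convert a hex SHA-1 thumbprint to the Base64url value used for the x5t claim.
$hex = '84E05C1D98BCE3A5421D225B140B36E86A3D5534'

# Convert each pair of hex digits to a byte.
[byte[]]$bytes = for ($i = 0; $i -lt $hex.Length; $i += 2) {
    [Convert]::ToByte($hex.Substring($i, 2), 16)
}

# Base64-encode, then make the result URL-safe.
$x5t = [Convert]::ToBase64String($bytes).Replace('+', '-').Replace('/', '_')
$x5t   # hOBcHZi846VCHSJbFAs26Go9VTQ=
```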
### Claims (payload)
active-directory Howto Add App Roles In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
Another approach is to use Azure AD Groups and Group Claims as shown in the [act
## Declare roles for an application
-You define app roles by using the [Azure portal](https://portal.azure.com). App roles are usually defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted individually to the user and from their group membership. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
+You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app, or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted, both individually and through group membership. These claims can be used to implement claims-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
> [!IMPORTANT] > Currently if you add a service principal to a group, and then assign an app role to that group, Azure AD does not add the `roles` claim to tokens it issues.
-There are two ways to declare app roles by using the Azure portal:
--- [App roles UI](#app-roles-ui)-- [App manifest editor](#app-manifest-editor)
+App roles are declared by using the [App roles UI](#app-roles-ui) in the Azure portal.
The number of roles you add counts toward application manifest limits enforced by Azure Active Directory. For information about these limits, see the [Manifest limits](./reference-app-manifest.md#manifest-limits) section of [Azure Active Directory app manifest reference](reference-app-manifest.md).
To create an app role by using the Azure portal's user interface:
1. Select **Apply** to save your changes.
-### App manifest editor
-
-To add roles by editing the manifest directly:
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. Select the **Directory + subscription** filter in top menu, and then choose the Azure Active Directory tenant that contains the app registration to which you want to add an app role.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, and then select the application you want to define app roles in.
-1. Again under **Manage**, select **Manifest**.
-1. Edit the app manifest by locating the `appRoles` setting and adding your application roles. You can define app roles that target `users`, `applications`, or both. The following JSON snippets show examples of both.
-1. Save the manifest.
-
-Each app role definition in the manifest must have a unique GUID for its `id` value.
-
-The `value` property of each app role definition should exactly match the strings that are used in the code in the application. The `value` property can't contain spaces. If it does, you'll receive an error when you save the manifest.
-
-#### Example: User app role
-
-This example defines an app role named `Writer` that you can assign to a `User`:
-
-```json
-"appId": "8763f1c4-0000-0000-0000-158e9ef97d6a",
-"appRoles": [
- {
- "allowedMemberTypes": [
- "User"
- ],
- "displayName": "Writer",
- "id": "d1c2ade8-0000-0000-0000-6d06b947c66f",
- "isEnabled": true,
- "description": "Writers Have the ability to create tasks.",
- "value": "Writer"
- }
- ],
-"availableToOtherTenants": false,
-```
-
-#### Example: Application app role
-
-When available to `applications`, app roles appear as application permissions in an app registration's **Manage** section > **API permissions > Add a permission > My APIs > Choose an API > Application permissions**.
-
-This example shows an app role targeted to an `Application`:
-
-```json
-"appId": "8763f1c4-0000-0000-0000-158e9ef97d6a",
-"appRoles": [
- {
- "allowedMemberTypes": [
- "Application"
- ],
- "displayName": "ConsumerApps",
- "id": "47fbb575-0000-0000-0000-0f7a6c30beac",
- "isEnabled": true,
- "description": "Consumer apps have access to the consumer data.",
- "value": "Consumer"
- }
- ],
-"availableToOtherTenants": false,
-```
- ## Assign users and groups to roles Once you've added app roles in your application, you can assign users and groups to the roles. Assignment of users and groups to roles can be done through the portal's UI, or programmatically using [Microsoft Graph](/graph/api/user-post-approleassignments). When the users assigned to the various app roles sign in to the application, their tokens will have their assigned roles in the `roles` claim.
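For the programmatic path mentioned above, a hedged sketch using the Microsoft Graph PowerShell SDK follows; the object IDs and the app role ID are placeholders you'd replace with values from your own tenant.

```powershell
# Sketch: assign an app role to a user with the Microsoft Graph PowerShell SDK.
Connect-MgGraph -Scopes 'AppRoleAssignment.ReadWrite.All'

$userId             = '<user object ID>'
$servicePrincipalId = '<object ID of the enterprise application (service principal)>'
$appRoleId          = '<app role ID (GUID) defined on the app registration>'

New-MgUserAppRoleAssignment -UserId $userId `
    -PrincipalId $userId `
    -ResourceId $servicePrincipalId `
    -AppRoleId $appRoleId
```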
To assign app roles to an application by using the Azure portal:
The newly added roles should appear in your app registration's **API permissions** pane.
-#### Grant admin consent
+### Grant admin consent
Because these are _application permissions_, not delegated permissions, an admin must grant consent to use the app roles assigned to the application.
The **Status** column should reflect that consent has been **Granted for \<tenan
<a name="use-app-roles-in-your-web-api"></a> ## Usage scenario of app roles
-If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registration**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user.
+If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registration**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user.
If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the access token. Your next step is to add code to your web API to check for those roles when the API is called.
Though you can use app roles or groups for authorization, key differences betwee
Developers can use app roles to control whether a user can sign in to an app or an app can obtain an access token for a web API. To extend this security control to groups, developers and admins can also assign security groups to app roles.
-App roles are preferred by developers when they want to describe and control the parameters of authorization in their app themselves. For example, an app using groups for authorization will break in the next tenant as both the group ID and name could be different. An app using app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the same reasons.
+App roles are preferred by developers when they want to describe and control the parameters of authorization in their app themselves. For example, an app using groups for authorization will break in the next tenant as both the group ID and name could be different. An app using app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the same reason: it allows the SaaS app to be provisioned in multiple tenants.
## Next steps Learn more about app roles with the following resources. - Code samples on GitHub
+ - [Add authorization using app roles & roles claims to an ASP\.NET Core web app](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md)
- [Add authorization using groups and group claims to an ASP.NET Core web app](https://aka.ms/groupssample) - [Angular single-page application (SPA) calling a .NET Core web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl) - [React single-page application (SPA) calling a Node.js web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl)
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
To complete the scenario in this tutorial, you need:
![Screenshot showing the More information required message](media/tutorial-mfa/mfa-required.png)
+ > [!NOTE]
+ > You also can configure [cross-tenant access settings](cross-tenant-access-overview.md) to trust the MFA from the Azure AD home tenant. This allows external Azure AD users to use the MFA registered in their own tenant rather than register in the resource tenant.
+ 1. Sign out. ## Clean up resources
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Previously updated : 10/01/2021 Last updated : 02/24/2022
Follow [Google's guidance](https://developers.googleblog.com/2016/08/modernizi
First, create a new project in the Google Developers Console to obtain a client ID and a client secret that you can later add to Azure Active Directory (Azure AD). 1. Go to the Google APIs at https://console.developers.google.com, and sign in with your Google account. We recommend that you use a shared team Google account.
-2. Accept the terms of service if you're prompted to do so.
-3. Create a new project: In the upper-left corner of the page, select the project list, and then on the **Select a project** page, select **New Project**.
-4. On the **New Project** page, give the project a name (for example, **Azure AD B2B**), and then select **Create**:
+
+1. Accept the terms of service if you're prompted to do so.
+
+1. Create a new project: At the top of the page, select the project menu to open the **Select a project** page. Choose **New Project**.
+
+1. On the **New Project** page, give the project a name (for example, `MyB2BApp`), and then select **Create**:
![Screenshot that shows a New Project page.](media/google-federation/google-new-project.png)
-4. On the **APIs & Services** page, select **View** under your new project.
+1. Open the new project by selecting the link in the **Notifications** message box or by using the project menu at the top of the page.
-5. Select **Go to APIs overview** on the APIs card. Select **OAuth consent screen**.
+1. In the left menu, select **APIs & Services**, and then select **OAuth consent screen**.
-6. Select **External**, and then select **Create**.
+1. Under **User Type**, select **External**, and then select **Create**.
-7. On the **OAuth consent screen**, enter an **Application name**:
+1. On the **OAuth consent screen**, under **App information**, enter an **App name**.
- ![Screenshot that shows the Google OAuth consent screen.](media/google-federation/google-oauth-consent-screen.png)
+1. Under **User support email**, select an email address.
-8. Scroll to the **Authorized domains** section and enter **microsoftonline.com**:
+1. Under **Authorized domains**, select **Add domain**, and then add the `microsoftonline.com` domain.
- ![Screenshot that shows the Authorized domains section.](media/google-federation/google-oauth-authorized-domains.PNG)
+1. Under **Developer contact information**, enter an email address.
-9. Select **Save**.
+1. Select **Save and continue**.
-10. Select **Credentials**. On the **Create credentials** menu, select **OAuth client ID**:
+1. In the left menu, select **Credentials**.
- ![Screenshot that shows the Google APIs Create credentials menu.](media/google-federation/google-api-credentials.png)
+1. Select **Create credentials**, and then select **OAuth client ID**.
+
+1. In the Application type menu, select **Web application**. Give the application a suitable name, like `Azure AD B2B`. Under **Authorized redirect URIs**, add the following URIs:
-11. Under **Application type**, select **Web application**. Give the application a suitable name, like **Azure AD B2B**. Under **Authorized redirect URIs**, enter the following URIs:
   - `https://login.microsoftonline.com`
   - `https://login.microsoftonline.com/te/<tenant ID>/oauth2/authresp` <br>(where `<tenant ID>` is your tenant ID)
   - `https://login.microsoftonline.com/te/<tenant name>.onmicrosoft.com/oauth2/authresp` <br>(where `<tenant name>` is your tenant name)
First, create a new project in the Google Developers Console to obtain a client
> [!NOTE] > To find your tenant ID, go to the [Azure portal](https://portal.azure.com). Under **Azure Active Directory**, select **Properties** and copy the **Tenant ID**.
- ![Screenshot that shows the Authorized redirect URIs section.](media/google-federation/google-create-oauth-client-id.png)
-
-12. Select **Create**. Copy the client ID and client secret. You'll use them when you add the identity provider in the Azure portal.
+1. Select **Create**. Copy your client ID and client secret. You'll use them when you add the identity provider in the Azure portal.
![Screenshot that shows the OAuth client ID and client secret.](media/google-federation/google-auth-client-id-secret.png)
-13. You can leave your project at a publishing status of **Testing** and add test users to the OAuth consent screen. Or you can select the **Publish app** button on the OAuth consent screen to make the app available to any user with a Google Account.
+1. You can leave your project at a publishing status of **Testing** and add test users to the OAuth consent screen. Or you can select the **Publish app** button on the OAuth consent screen to make the app available to any user with a Google Account.
## Step 2: Configure Google federation in Azure AD
You'll now set the Google client ID and client secret. You can use the Azure por
1. Go to the [Azure portal](https://portal.azure.com). On the left pane, select **Azure Active Directory**. 2. Select **External Identities**. 3. Select **All identity providers**, and then select the **Google** button.
-4. Enter the client ID and client secret you obtained earlier. Select **Save**:
+4. Enter the client ID and client secret you obtained earlier. Select **Save**:
![Screenshot that shows the Add Google identity provider page.](media/google-federation/google-identity-provider.png)
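If you'd rather script this step than use the portal, one possible approach is to create the Google identity provider through the Microsoft Graph identity providers API. The sketch below is an assumption-laden illustration: it relies on the Microsoft Graph PowerShell SDK, and the client ID and secret values are placeholders from the Google Developers Console steps above.

```powershell
# Sketch: add Google as an identity provider by calling the Microsoft Graph identityProviders API.
Connect-MgGraph -Scopes 'IdentityProvider.ReadWrite.All'

$body = @{
    '@odata.type'        = '#microsoft.graph.socialIdentityProvider'
    displayName          = 'Google'
    identityProviderType = 'Google'
    clientId             = '<client ID from the Google Developers Console>'
    clientSecret         = '<client secret from the Google Developers Console>'
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri 'https://graph.microsoft.com/v1.0/identity/identityProviders' `
    -Body $body -ContentType 'application/json'
```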
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
na Previously updated : 01/21/2022 Last updated : 02/23/2022
We recommend that you harden your Azure AD Connect server to decrease the securi
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:
- * Azure AD Connect supports all versions of SQL Server from 2012 (with the latest service pack) to SQL Server 2019. Azure SQL Database *isn't supported* as a database.
+ * Azure AD Connect supports all versions of SQL Server from 2012 (with the latest service pack) to SQL Server 2019. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
* You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*. * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*.
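Before pointing the installation wizard at an existing SQL Server instance, you can confirm that the server-level collation is case-insensitive. A minimal sketch, assuming the SqlServer PowerShell module is installed and `<server\instance>` is a placeholder for your instance name:

```powershell
# Sketch: check that the SQL Server collation is case-insensitive (contains _CI_) before installing Azure AD Connect.
$collation = (Invoke-Sqlcmd -ServerInstance '<server\instance>' `
    -Query "SELECT SERVERPROPERTY('Collation') AS Collation").Collation

if ($collation -like '*_CI_*') {
    "Collation '$collation' is case-insensitive and supported."
} else {
    "Collation '$collation' is case-sensitive and isn't supported by Azure AD Connect."
}
```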
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
Caveat: If there are synchronized accounts that need to have non-expiring passwo
`Set-AzureADUser -ObjectID <User Object ID> -PasswordPolicies "DisablePasswordExpiration"` > [!NOTE]
-> For hybrid users that have a PasswordPolicies value set to `DisablePassordExpiration`, this value switches to `None` after a password change is executed on-premises.
+> For hybrid users that have a PasswordPolicies value set to `DisablePasswordExpiration`, this value switches to `None` after a password change is executed on-premises.
> [!NOTE] > The Set-MsolPasswordPolicy PowerShell command will not work on federated domains.
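To see which synchronized accounts currently have the non-expiring password policy, and to spot accounts whose value reverted to `None` after an on-premises password change, a quick check with the AzureAD PowerShell module might look like this sketch:

```powershell
# Sketch: list users whose PasswordPolicies value includes DisablePasswordExpiration.
Connect-AzureAD

Get-AzureADUser -All $true |
    Where-Object { $_.PasswordPolicies -match 'DisablePasswordExpiration' } |
    Select-Object UserPrincipalName, PasswordPolicies
```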
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
To make use of workload identity risk, including the new **Risky workload identi
- Security operator - Security reader
+Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
+ ## Workload identity risk detections We detect risk on workload identities across sign-in behavior and offline indicators of compromise.
You can also query risky workload identities [using the Microsoft Graph API](/gr
Organizations can export data by configuring [diagnostic settings in Azure AD](howto-export-risk-data.md) to send risk data to a Log Analytics workspace, archive it to a storage account, stream it to an event hub, or send it to a SIEM solution.
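For the Microsoft Graph query mentioned above, a hedged sketch against the beta `riskyServicePrincipals` endpoint follows; the property names shown are assumptions to verify against the current API reference.

```powershell
# Sketch: list risky workload identities from the Identity Protection beta API.
Connect-MgGraph -Scopes 'IdentityRiskyServicePrincipal.Read.All'

$response = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/beta/identityProtection/riskyServicePrincipals'

# Show the display name and risk details for each flagged service principal.
$response.value | ForEach-Object {
    '{0}  riskLevel={1}  riskState={2}' -f $_.displayName, $_.riskLevel, $_.riskState
}
```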
+## Enforce access controls with risk-based Conditional Access
+
+Using [Conditional Access for workload identities](../conditional-access/workload-identity.md), you can block access for specific accounts you choose when Identity Protection marks them "at risk." Policy can be applied to single-tenant service principals that have been registered in your tenant. Third-party SaaS, multi-tenanted apps, and managed identities are out of scope.
+ ## Investigate risky workload identities Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal.
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md
Previously updated : 07/30/2021 Last updated : 02/18/2022 -+
Azure AD stores reports and security signals for a defined period of time. When
| Azure AD MFA usage | 30 days | 30 days | 30 days |
| Risky sign-ins | 7 days | 30 days | 30 days |
-Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers** and **UserRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an Event Hub, or send data to a partner solution. Find these options in the **Azure portal** > **Azure Active Directory**, **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
+Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the **Azure portal** > **Azure Active Directory**, **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
[ ![Diagnostic settings screen in Azure AD showing existing configuration](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png) ](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png#lightbox)
Organizations can choose to store data for longer periods by changing diagnostic
Log Analytics allows organizations to query data by using built-in queries or custom-created Kusto queries. For more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
-Once enabled you will find access to Log Analytics in the **Azure portal** > **Azure AD** > **Log Analytics**. The tables of most interest to Identity Protection administrators are **AADRiskyUsers** and **AADUserRiskEvents**.
+Once enabled, you'll find Log Analytics in the **Azure portal** > **Azure AD** > **Log Analytics**. The following tables are of most interest to Identity Protection administrators:
- AADRiskyUsers - Provides data like the **Risky users** report in Identity Protection. - AADUserRiskEvents - Provides data like the **Risk detections** report in Identity Protection.
+- RiskyServicePrincipals - Provides data like the **Risky workload identities** report in Identity Protection.
+- ServicePrincipalRiskEvents - Provides data like the **Workload identity detections** report in Identity Protection.
[ ![Log Analytics view showing a query against the AADUserRiskEvents table showing the top 5 events](./media/howto-export-risk-data/log-analytics-view-query-user-risk-events.png) ](./media/howto-export-risk-data/log-analytics-view-query-user-risk-events.png#lightbox)
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Title: 'Quickstart: Configure enterprise application properties'
+ Title: 'Configure enterprise application properties'
description: Configure the properties of an enterprise application in Azure Active Directory.
-+ Last updated 09/22/2021 - #Customer intent: As an administrator of an Azure AD tenant, I want to configure the properties of an enterprise application.
-# Quickstart: Configure enterprise application properties
+# Configure enterprise application properties
-In this quickstart, you use the Azure Active Directory Admin Center to configure the properties of an enterprise application that you previously added to your Azure Active Directory (Azure AD) tenant.
-
-You can configure the following common attributes of an enterprise application:
--- **Enabled for users to sign in?** - Determines whether users assigned to the application can sign in.-- **User assignment required?** - Determines whether users who aren't assigned to the application can sign in.-- **Visible to users?** - Determines whether users assigned to an application can see it in My Apps and Microsoft 365 app launcher. (See the waffle menu in the upper-left corner of a Microsoft 365 website.)-- **Logo** - Determines the logo that represents the application.-- **Notes** - Provides a place to add notes that apply to the application.-
-It is recommended that you use a non-production environment to test the steps in this quickstart.
+This article shows you where you can configure the properties of an enterprise application in your Azure Active Directory (Azure AD) tenant. For more information about the properties that you can configure, see [Properties of an enterprise application](application-properties.md).
## Prerequisites
To configure the properties of an enterprise application, you need:
- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.-- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md). ## Configure application properties Application properties control how the application is represented and how the application is accessed.
-To edit the application properties:
+To configure the application properties:
1. Go to the [Azure Active Directory Admin Center](https://aad.portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use. For example, **Azure AD SAML Toolkit 1**.
+1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use.
1. In the **Manage** section, select **Properties** to open the **Properties** pane for editing.
-1. Select **Yes** or **No** to decide whether the application is enabled for users to sign in.
-1. Select **Yes** or **No** to decide whether only user accounts that have been assigned to the application can sign in.
-1. Select **Yes** or **No** to decide whether users assigned to an application can see it in My Apps and Microsoft 365 portals.
-
- :::image type="content" source="media/add-application-portal-configure/configure-properties.png" alt-text="Configure the properties of an enterprise application.":::
-
-## Use a custom logo
-
-The application logo is seen on the My Apps and Microsoft 365 portals, and when administrators view this application in the enterprise application gallery. Custom logos must be exactly 215x215 pixels in size and be in the PNG format. It is recommended that you use a solid color background with no transparency in your application logo so that it appears best to users.
-
-To use a custom logo:
-
-1. Create a logo that's 215 by 215 pixels, and save it in .png format.
-1. Select the icon in **Select a file** to upload the logo.
-1. When you're finished, select **Save**.
-
-The thumbnail for the logo doesn't update right away. You can close and reopen the **Properties** pane to see the updated thumbnail.
-
-## Add notes
-
-You can use the **Notes** property to add any information that is relevant for the management of the application in Azure AD. The **Notes** property is a free text field with a maximum size of 1024 characters.
-
-To enter notes for the application:
-
-1. Enter the notes that you want to keep with the application.
-1. Select **Save**.
-
-## Clean up resources
-
-If you are planning to complete the next quickstart, keep the enterprise application that you created. Otherwise, you can consider deleting it to clean up your tenant. For more information, see [Delete an application](delete-application-portal.md).
+1. Configure the properties based on the needs of your application.
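The same properties can also be set programmatically on the application's service principal. Below is a minimal sketch with the Microsoft Graph PowerShell SDK, assuming the SDK is installed and `$servicePrincipalId` is a placeholder for the Object ID shown on the application's overview page.

```powershell
# Sketch: configure common enterprise application properties on the service principal.
# -AccountEnabled            -> "Enabled for users to sign in?"
# -AppRoleAssignmentRequired -> "User assignment required?"
# -Notes                     -> free-text notes
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

$servicePrincipalId = '<service principal object ID>'

Update-MgServicePrincipal -ServicePrincipalId $servicePrincipalId `
    -AccountEnabled:$true `
    -AppRoleAssignmentRequired:$true `
    -Notes 'Owned by the identity team.'
```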
## Next steps
-Learn how to search for and view the applications in your Azure AD tenant.
+Learn more about how to manage enterprise applications.
> [!div class="nextstepaction"]
-> [View applications](view-applications-portal.md)
+> [What is application management in Azure Active Directory?](what-is-application-management.md)
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
+
+ Title: 'Properties of an enterprise application'
+
+description: Learn about the properties of an enterprise application in Azure Active Directory.
+++++++ Last updated : 09/22/2021++
+#Customer intent: As an administrator of an Azure AD tenant, I want to learn more about the properties of an enterprise application that I can configure.
++
+# Properties of an enterprise application
+
+This article describes the properties that you can configure for an enterprise application in your Azure Active Directory (Azure AD) tenant. To configure the properties, see [Configure enterprise application properties](add-application-portal-configure.md).
+
+## Enabled for users to sign in?
+
+If this option is set to **Yes**, then assigned users are able to sign in to the application from the My Apps portal, the User access URL, or by navigating to the application URL directly. If assignment is required, then only users who are assigned to the application are able to sign in, and applications must be assigned to it before they can be granted a token.
+
+If this option is set to **No**, then no users are able to sign in to the application, even if they're assigned to it. Tokens aren't issued for the application.
+
+## Name
+
+This property is the name of the application that users see on the My Apps portal. Administrators see the name when they manage access to the application. Other tenants see the name when integrating the application into their directory.
+
+It's recommended that you choose a name that users can understand. This is important because this name is visible in the various portals, such as My Apps and O365 Launcher.
+
+## Homepage URL
+
+If the application is custom-developed, the homepage URL is the URL that a user can use to sign in to the application. For example, it's the URL that is launched when the application is selected in the My Apps portal. If this application is from the Azure AD Gallery, this URL is where you can go to learn more about the application or its vendor.
+
+The homepage URL can't be edited within enterprise applications. The homepage URL must be edited on the application object.
+
+## Logo
+
+This is the application logo that users see on the My Apps portal and the Office 365 application launcher. Administrators also see the logo in the Azure AD gallery.
+
+Custom logos must be exactly 215x215 pixels in size and be in the PNG format. You should use a solid color background with no transparency in your application logo. The central image dimensions should be 94x94 pixels and the logo file size can't be over 100 KB.
+
+## Application ID
+
+This property is the unique identifier for the application in your directory. You can use this application ID if you ever need help from Microsoft Support. You can also use the identifier to perform operations using the Microsoft Graph APIs or the Microsoft Graph PowerShell SDK.
+
+## Object ID
+
+This is the unique identifier of the service principal object associated with the application. This identifier can be useful when performing management operations against this application using PowerShell or other programmatic interfaces. This identifier is different than the identifier for the application object.
+
+The identifier is used to update information for the local instance of the application, such as assigning users and groups to the application. The identifier can also be used to update the properties of the enterprise application or to configure single sign-on.
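As a quick illustration of using this identifier programmatically, the following sketch (Microsoft Graph PowerShell SDK assumed, object ID as a placeholder) retrieves the service principal by its Object ID:

```powershell
# Sketch: look up the enterprise application (service principal) by its Object ID.
Connect-MgGraph -Scopes 'Application.Read.All'

$objectId = '<service principal object ID>'

Get-MgServicePrincipal -ServicePrincipalId $objectId |
    Select-Object DisplayName, AppId, Id
```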
+
+## Assignment required
+
+This option doesn't affect whether or not an application appears on the My Apps portal. To show the application there, assign an appropriate user or group to the application. This option has no effect on users' access to the application when it's configured for any of the other single sign-on modes.
+
+If this option is set to **Yes**, then users and other applications or services must first be assigned this application before being able to access it.
+
+If this option is set to **No**, then all users are able to sign in, and other applications and services are able to obtain an access token to the application.
+
+This option only applies to the following types of applications and services:
+- Applications using SAML
+- OpenID Connect
+- OAuth 2.0
+- WS-Federation for user sign-in
+- Application Proxy applications with Azure AD pre-authentication enabled
+- Applications or services for which other applications or services are requesting access tokens
+
+## Visible to users
+
+Makes the application visible in My Apps and the O365 Launcher.
+
+If this option is set to **Yes**, then assigned users see the application on the My Apps portal and O365 app launcher.
+
+If this option is set to **No**, then no users see this application on their My Apps portal and O365 launcher.
+
+Make sure that a homepage URL is included or else the application can't be launched from the My Apps portal.
+
+Regardless of whether assignment is required or not, only assigned users are able to see this application in the My Apps portal. If you want certain users to see the application in the My Apps portal, but everyone to be able to access it, assign the users in the **Users and Groups** tab, and set assignment required to **No**.
+
+## Notes
+
+You can use this field to add any information that is relevant for the management of the application. The field is a free text field with a maximum size of 1024 characters.
+
+## Next steps
+
+Learn where to go to configure the properties of an enterprise application.
+
+- [Configure enterprise application properties](add-application-portal-configure.md)
active-directory Tutorial Govern Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md
+
+ Title: "Tutorial: Govern and monitor applications"
+
+description: In this tutorial, you learn how to govern and monitor an application in Azure Active Directory.
++++++ Last updated : 02/24/2022
+# Customer intent: As an administrator of an Azure AD tenant, I want to govern and monitor my applications.
++
+# Tutorial: Govern and monitor applications
+
+The IT administrator at Fabrikam has added and configured an application from the [Azure Active Directory (Azure AD) application gallery](overview-application-gallery.md). They also made sure that access can be managed and that the application is secure by using the information in [Tutorial: Manage application access and security](tutorial-manage-access-security.md). They now need to understand the resources that are available to govern and monitor the application.
+
+Using the information in this tutorial, an administrator of the application learns how to:
+
+> [!div class="checklist"]
+> * Create an access review
+> * Access the audit logs report
+> * Access the sign-ins report
+> * Send logs to Azure Monitor
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Privileged Role Administrator, Cloud Application Administrator, or Application Administrator.
+- An enterprise application that has been configured in your Azure AD tenant.
+
+## Create an access review
+
+The administrator wants to make sure that users or guests have appropriate access. They decide to ask users of the application to participate in an access review and recertify or attest to their need for access. When the access review is finished, they can then make changes and remove access from users who no longer need it. For more information, see
+[Manage user and guest user access with access reviews](../governance/manage-access-review.md).
+
+To create an access review:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with one of the roles listed in the prerequisites.
+1. Go to **Azure Active Directory**, and then select **Identity Governance**.
+1. On the left menu, select **Access reviews**.
+1. Select **New access review** to create a new access review.
+1. In **Select what to review**, select **Applications**.
+1. Select **+ Select application(s)**, select the application, and then choose **Select**.
+1. Now you can select a scope for the review. Your options are:
+ - **Guest users only** - This option limits the access review to only the Azure AD B2B guest users in your directory.
+ - **All users** - This option scopes the access review to all user objects associated with the resource.
+ Select **All users**.
+1. Select **Next: Reviews**.
+1. In the **Specify reviewers** section, in the Select reviewers box, select **Selected user(s) or group(s)**, select **+ Select reviewers**, and then select the user account that is assigned to the application.
+1. In the **Specify recurrence of review** section, specify the following selections:
+ - **Duration (in days)** - Accept the default value of **3**.
+ - **Review recurrence** - select **One time**.
+ - **Start date** - Accept today's date as the start date.
+1. Select **Next: Settings**.
+1. In the **Upon completion settings** section, you can specify what happens after the review finishes. Select **Auto apply results to resource**.
+1. Select **Next: Review + Create**.
+1. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
+1. Review the information and select **Create**.
+
+### Start the access review
+
+After you've specified the settings for an access review, select **Start**. The access review appears in your list with an indicator of its status.
+
+By default, Azure AD sends an email to reviewers shortly after the review starts. If you choose not to have Azure AD send the email, be sure to inform the reviewers that an access review is waiting for them to complete. You can show them the instructions for how to review access to groups or applications. If your review is for guests to review their own access, show them the instructions for how to review access for themselves to groups or applications.
+
+If you've assigned guests as reviewers and they haven't accepted their invitation to the tenant, they won't receive an email from access reviews. They must first accept the invitation before they can begin reviewing.
+
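To confirm the review exists, or to check its status later without the portal, a hedged sketch that lists access review definitions through Microsoft Graph:

```powershell
# Sketch: list access review definitions and their status.
Connect-MgGraph -Scopes 'AccessReview.Read.All'

$reviews = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions'

$reviews.value | ForEach-Object { '{0}  status={1}' -f $_.displayName, $_.status }
```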
+## Access the audit logs report
+
+The audit logs report combines several reports around application activities into a single view for context-based reporting. For more information, see [Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md).
+
+To access the audit logs report, select **Audit logs** from the **Activity** section of the Azure Active Directory page.
+
+The audit logs report consolidates the following reports:
+
+- Password reset activity
+- Password reset registration activity
+- Self-service groups activity
+- Office365 Group Name Changes
+- Account provisioning activity
+- Password rollover status
+- Account provisioning errors
+
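The same audit events can be pulled programmatically. A minimal sketch with the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports module assumed):

```powershell
# Sketch: retrieve the ten most recent directory audit events.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

Get-MgAuditLogDirectoryAudit -Top 10 |
    Select-Object ActivityDateTime, ActivityDisplayName, Result
```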
+## Access the sign-ins report
+
+The Sign-ins view includes all user sign-ins and the Application Usage report. You can also view application usage information in the **Manage** section of the Enterprise applications overview. For more information, see [Sign-in logs in Azure Active Directory](../reports-monitoring/concept-sign-ins.md).
+
+To access the sign-in logs report, select **Sign-ins** from the **Monitoring** section of the Azure Active Directory page.
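To pull the sign-in events for a single application rather than browsing the portal, a hedged sketch follows; the `appId` filter value is a placeholder for the application (client) ID.

```powershell
# Sketch: retrieve recent sign-ins for a specific application.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

Get-MgAuditLogSignIn -Filter "appId eq '<application (client) ID>'" -Top 10 |
    Select-Object CreatedDateTime, UserPrincipalName, AppDisplayName
```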
+
+## Send logs to Azure Monitor
+
+The Azure AD activity logs only store information for a maximum of 30 days. Depending on your needs, you may require extra storage to back up the activity log data. Using Azure Monitor, you can archive the audit and sign-in logs to an Azure storage account to retain the data for a longer time.
+Azure Monitor is also useful for rich visualization, monitoring, and alerting on data. To learn more about Azure Monitor and the cost considerations for extra storage, see [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md).
+
+To send logs to your Log Analytics workspace:
+
+1. Select **Diagnostic settings**, and then select **Add diagnostic setting**. You can also select Export Settings from the Audit Logs or Sign-ins page to get to the diagnostic settings configuration page.
+1. In the Diagnostic settings menu, select **Send to Log Analytics workspace**, and then select Configure.
+1. Select the Log Analytics workspace you want to send the logs to, or create a new workspace in the provided dialog box.
+1. Select the logs that you would like to send to the workspace.
+1. Select **Save** to save the setting.
+
+After about 15 minutes, verify that events are streamed to your Log Analytics workspace.
+
+## Next steps
+
+Advance to the next article to learn how to...
+> [!div class="nextstepaction"]
+> [Manage consent to applications and evaluate consent requests](manage-consent-requests.md)
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
+
+ Title: "Tutorial: Manage application access and security"
+
+description: In this tutorial, you learn how to manage access to an application in Azure Active Directory and make sure it's secure.
++++++ Last updated : 02/24/2022+
+# Customer intent: As an administrator of an Azure AD tenant, I want to manage access to my applications and make sure they are secure.
++
+# Tutorial: Manage application access and security
+
+The IT administrator at Fabrikam has added and configured an application from the Azure Active Directory (Azure AD) application gallery. They now need to understand the features that are available to manage access to the application and make sure the application is secure.
+Using the information in this tutorial, an administrator learns how to:
+
+> [!div class="checklist"]
+> * Grant consent for the application on behalf of all users
+> * Enable multi-factor authentication to make sign-in more secure
+> * Communicate terms of use to users of the application
+> * Create a collection in the My Apps portal
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Privileged Role Administrator, Cloud Application Administrator, or Application Administrator.
+* An enterprise application that has been configured in your Azure AD tenant.
+* At least one user account added and assigned to the application. For more information, see [Quickstart: Create and assign a user account](add-application-portal-assign-users.md).
+
+## Grant tenant wide admin consent
+
+For the application that the administrator added to their tenant, they want to set it up so that all users in the organization can use it and not have to individually request consent to use it. To avoid the need for user consent, they can grant consent for the application on behalf of all users in the organization. For more information, see [Consent and permissions overview](consent-and-permissions-overview.md).
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with one of the roles listed in the prerequisites.
+2. Search for and select **Azure Active Directory**.
+3. Select **Enterprise applications**.
+4. Select the application to which you want to grant tenant-wide admin consent.
+5. Under **Security**, select **Permissions**.
+6. Carefully review the permissions that the application requires. If you agree with the permissions the application requires, select **Grant admin consent**.
+
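Tenant-wide consent can also be granted programmatically by creating a delegated permission grant for all users. The following is a hedged sketch only: the service principal IDs and the scope value are placeholders, and application (app-only) permissions would instead need app role assignments rather than an OAuth 2.0 permission grant.

```powershell
# Sketch: grant a delegated permission to the app on behalf of all users in the tenant.
Connect-MgGraph -Scopes 'DelegatedPermissionGrant.ReadWrite.All'

New-MgOauth2PermissionGrant `
    -ClientId '<object ID of the app service principal>' `
    -ConsentType 'AllPrincipals' `
    -ResourceId '<object ID of the resource API service principal, for example Microsoft Graph>' `
    -Scope 'User.Read'
```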
+## Create a Conditional Access policy
+
+The administrator wants to make sure that only the people they assign to the application can securely sign in. To do this, they can configure a conditional access policy for a group of users that enforces multi-factor authentication (MFA). For more information, see [What is Conditional Access?](../conditional-access/overview.md).
+
+### Create a group
+
+It's easier for an administrator to manage access to the application by assigning all users of the application to a group. The administrator can then manage access at a group level.
+
+1. In the left menu of the tenant overview, select **Groups**.
+1. Select **New group** at the top of the pane.
+1. Enter *MFA-Test-Group* for the name of the group.
+1. Select No members selected, and then choose the user account that you assigned to the application.
+1. Select **Create**.
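The same group can be created from PowerShell. A minimal sketch with the Microsoft Graph PowerShell SDK, where the user object ID is a placeholder:

```powershell
# Sketch: create the MFA-Test-Group security group and add the test user to it.
Connect-MgGraph -Scopes 'Group.ReadWrite.All'

$group = New-MgGroup -DisplayName 'MFA-Test-Group' `
    -MailEnabled:$false -MailNickname 'MFATestGroup' -SecurityEnabled

New-MgGroupMember -GroupId $group.Id -DirectoryObjectId '<user object ID>'
```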
+
+### Create a Conditional Access policy for the group
+
+1. In the left menu of the tenant overview, select **Security**.
+1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**.
+1. Enter a name for the policy, such as *MFA Pilot*.
+1. Under **Assignments**, select **Users and groups**
+1. On the **Include** tab, choose **Select users and groups**, and then select **Users and groups**.
+1. Browse for and select the *MFA-Test-Group* that you previously created, and then choose **Select**.
+1. Don't select **Create** yet, you add MFA to the policy in the next section.
+
+### Configure multi-factor authentication
+
+In this tutorial, the administrator can find the basic steps to configure the application, but they should consider creating a plan for MFA before starting. For more information, see [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
+
+1. Under **Cloud apps or actions**, select **No cloud apps, actions, or authentication contexts selected**. For this tutorial, on the **Include** tab, choose **Select apps**.
+1. Search for and select your application, and then select **Select**.
+1. Under **Access controls** and **Grant**, select **0 controls selected**.
+1. Check the box for **Require multi-factor authentication**, and then choose **Select**.
+1. Set **Enable policy** to **On**.
+1. To apply the Conditional Access policy, select **Create**.
+
+## Test multi-factor authentication
+
+1. Open a new browser window in InPrivate or incognito mode and browse to the URL of the application.
+1. Sign in with the user account that you assigned to the application. You're required to register for and use Azure AD Multi-Factor Authentication. Follow the prompts to complete the process and verify you successfully sign into the Azure portal.
+1. Close the browser window.
+
+## Create a terms of use statement
+
+The administrator wants to make sure that certain terms and conditions are known to users before they start using the application. For more information, see [Azure Active Directory terms of use](../conditional-access/terms-of-use.md).
+
+1. In Microsoft Word, create a new document.
+1. Type *My terms of use*, and then save the document on your computer as *mytou.pdf*.
+1. Under **Manage**, in the **Conditional Access** menu, select **Terms of use**.
+1. In the top menu, select **+ New terms**.
+1. In the **Name** textbox, type *My TOU*.
+1. In the **Display name** textbox, type *My TOU*.
+1. Upload your terms of use PDF file.
+1. For **Language**, select **English**.
+1. For **Require users to expand the terms of use**, select **On**.
+1. For **Enforce with conditional access policy templates**, select **Custom policy**.
+1. Select **Create**.
+
+## Add the terms of use to the policy
+
+1. In the left menu of the tenant overview, select **Security**.
+1. Select **Conditional Access**, and then select the *MFA Pilot* policy.
+1. Under **Access controls** and **Grant**, select the controls selected link.
+1. Select *My TOU*.
+1. Select **Require all the selected controls**, and then choose **Select**.
+1. Select **Save**.
+
+## Create a collection in the My Apps portal
+
+The My Apps portal enables administrators and users to manage the applications used in the organization. For more information, see [End-user experiences for applications](end-user-experiences.md).
+
+> [!NOTE]
+> Applications only appear in a user's My Apps portal after the user is assigned to the application and the application is configured to be visible to users. See [Configure application properties](add-application-portal-configure.md) to learn how to make the application visible to users.
+
+1. Open the Azure portal.
+1. Go to **Azure Active Directory**, and then select **Enterprise Applications**.
+1. Under **Manage**, select **Collections**.
+1. Select **New collection**. In the New collection page, enter a **Name** for the collection (it's recommended not to use "collection" in the name). Then enter a **Description**.
+1. Select the **Applications** tab. Select **+ Add application**, and then in the Add applications page, select all the applications you want to add to the collection, or use the Search box to find applications.
+1. When you're finished adding applications, select **Add**. The list of selected applications appears. You can use the arrows to change the order of applications in the list.
+1. Select the **Owners** tab. Select **+ Add users and groups**, and then in the Add users and groups page, select the users or groups you want to assign ownership to. When you're finished selecting users and groups, choose **Select**.
+1. Select the **Users and groups** tab. Select **+ Add users and groups**, and then in the **Add users and groups** page, select the users or groups you want to assign the collection to. Or use the Search box to find users or groups. When you're finished selecting users and groups, choose **Select**.
+1. Select **Review + Create**, and then select **Create**. The properties for the new collection appear.
+
+## Clean up resources
+
+You can keep the resources for future use, or if you're not going to continue to use the resources created in this tutorial, delete them with the following steps.
+
+### Delete the application
+
+1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to delete.
+1. In the **Manage** section of the left menu, select **Properties**.
+1. At the top of the **Properties** pane, select **Delete**, and then select **Yes** to confirm you want to delete the application from your Azure AD tenant.
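+
+If you scripted the earlier steps, you can also remove the enterprise application from the command line. Deleting an enterprise application deletes its service principal; the object ID placeholder below is an assumption for illustration.
+
+```azurecli-interactive
+# Delete the service principal (enterprise application) from the tenant
+az ad sp delete --id <service-principal-object-id>
+```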
+
+### Delete the Conditional Access policy
+
+1. Select **Enterprise applications**.
+1. Under **Security**, select **Conditional Access**.
+1. Search for and select **MFA Pilot**.
+1. Select **Delete** at the top of the pane.
+
+### Delete the group
+
+1. Select **Azure Active Directory**, and then select **Groups**.
+1. From the **Groups - All groups** page, search for and select the **MFA-Test-Group** group.
+1. On the overview page, select **Delete**.
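+
+The group can also be removed with a single Azure CLI command, shown here as an optional alternative to the portal steps above.
+
+```azurecli-interactive
+# Delete the pilot group by display name or object ID
+az ad group delete --group MFA-Test-Group
+```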
+
+## Next steps
+
+For information about how you can make sure that your application is healthy and being used correctly, see:
+> [!div class="nextstepaction"]
+> [Govern and monitor your application](tutorial-govern-monitor.md)
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
Application management in Azure Active Directory (Azure AD) is the process of cr
In this article, you learn these important aspects of managing the lifecycle of an application: -- **Develop, add, or connect** ΓÇô You take different paths depending on whether you are developing your own application, using a pre-integrated application, or connecting to an on-premises application.
+- **Develop, add, or connect** ΓÇô You take different paths depending on whether you're developing your own application, using a pre-integrated application, or connecting to an on-premises application.
- **Manage access** ΓÇô Access can be managed by using single sign-on (SSO), assigning resources, defining the way access is granted and consented to, and using automated provisioning. - **Configure properties** ΓÇô Configure the requirements for signing into the application and how the application is represented in user portals. - **Secure the application** ΓÇô Manage configuration of permissions, multifactor authentication (MFA), conditional access, tokens, and certificates.
Azure AD provides customizable ways to deploy applications to users in your orga
## Configure properties
-When you add an application to your Azure AD tenant, you have the opportunity to [configure properties](add-application-portal-configure.md) that affect the way users can sign in. You can enable or disable the ability to sign in and user assignment can be required. You can also determine the visibility of the application, what logo represents the application, and any notes about the application.
+When you add an application to your Azure AD tenant, you have the opportunity to configure properties that affect the way users can interact with the application. You can enable or disable the ability to sign in and user assignment can be required. You can also determine the visibility of the application, what logo represents the application, and any notes about the application. For more information about the properties that can be configured, see [Properties of an enterprise application](application-properties.md).
## Secure the application
Your Azure AD reporting and monitoring solution depends on your legal, security,
## Clean up
-You can clean up access to applications. For example, [removing a userΓÇÖs access](methods-for-removing-user-access.md). You can also [disable how a user signs in](disable-user-sign-in-portal.md). And finally, you can delete the application if it is no longer needed for the organization. For a simple example of how to delete an enterprise application from your Azure AD tenant, see [Quickstart: Delete an enterprise application](delete-application-portal.md).
+You can clean up access to applications. For example, [removing a userΓÇÖs access](methods-for-removing-user-access.md). You can also [disable how a user signs in](disable-user-sign-in-portal.md). And finally, you can delete the application if it's no longer needed for the organization. For a simple example of how to delete an enterprise application from your Azure AD tenant, see [Quickstart: Delete an enterprise application](delete-application-portal.md).
## Next steps
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
To assign a role to a user-assigned managed identity, your account needs the [Us
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**. 1. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity that you want to assign a role.
-1. Select **Access control (IAM)**, and then select **Add role assignment**.
-
- :::image type="content" source="media/how-manage-user-assigned-managed-identities/assign-role.png" alt-text="Screenshot that shows the user-assigned managed identity start.":::
--
+1. Select **Azure role assignments**, and then select **Add role assignment**.
1. In the **Add role assignment** pane, configure the following values, and then select **Save**: - **Role**: The role to assign. - **Assign access to**: The resource to assign the user-assigned managed identity.
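+
+The same assignment can be made with the Azure CLI. The sketch below is illustrative: the identity name, resource group, role, and scope are placeholders, and it assumes you have permission to create role assignments at that scope.
+
+```azurecli-interactive
+# Look up the service principal (object) ID of the user-assigned managed identity
+principalId=$(az identity show --resource-group <resource-group> --name <identity-name> --query principalId --output tsv)
+
+# Assign a role to the managed identity at the chosen scope
+az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --role "Reader" --scope <resource-id>
+```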
active-directory How To View Managed Identity Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-portal.md
na Previously updated : 09/30/2020 Last updated : 02/23/2022
In this article, you learn how to view the service principal of a managed identi
This procedure demonstrates how to view the service principal of a VM with system assigned identity enabled (the same steps apply for an application).
-1. Click **Azure Active Directory** and then click **Enterprise applications**.
-2. Under **Application Type**, choose **All Applications** and then click **Apply**.
-3. In the search filter box, type the name of the Azure resource that has managed identity enabled or choose it from the list presented.
+1. Select **Azure Active Directory** and then select **Enterprise applications**.
+2. Under **Application Type**, choose **All Applications** and then select **Apply**.
+3. In the search filter box, type the name of the Azure resource that has managed identities enabled or choose it from the list.
![View managed identity service principal in portal](./media/how-to-view-managed-identity-service-principal-portal/view-managed-identity-service-principal-portal.png)
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/known-issues.md
-# Known issues with Managed Identities
+# Known issues with managed identities for Azure resources
This article discusses a couple of issues around managed identities and how to address them. Common questions about managed identities are documented in our [frequently asked questions](managed-identities-faq.md) article. ## VM fails to start after being moved
-If you move a VM in a running state from a resource group or subscription, it continues to run during the move. However, after the move, if the VM is stopped and restarted, it will fail to start. This issue happens because the VM is not updating the reference to the managed identities for Azure resources identity and continues to point to it in the old resource group.
+If you move a VM in a running state from a resource group or subscription, it continues to run during the move. However, after the move, if the VM is stopped and restarted, it fails to start. This issue happens because the VM doesn't update the managed identity reference and it continues to use an outdated URI.
**Workaround**
az vm update -n <VM Name> -g <Resource Group> --remove tags.fixVM
## Transferring a subscription between Azure AD directories
-Managed identities do not get updated when a subscription is moved/transferred to another directory. As a result, any existent system-assigned or user-assigned managed identities will be broken.
+Managed identities don't get updated when a subscription is moved/transferred to another directory. As a result, any existing system-assigned or user-assigned managed identities will be broken.
Workaround for managed identities in a subscription that has been moved to another directory:
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
ms.devlang:
Previously updated : 01/11/2022 Last updated : 02/23/2022
az resource list --query "[?identity.type=='SystemAssigned'].{Name:name, princi
### Which Azure RBAC permissions are required to use a managed identity on a resource? - System-assigned managed identity: You need write permissions over the resource. For example, for virtual machines you need `Microsoft.Compute/virtualMachines/write`. This action is included in resource specific built-in roles like [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).-- Assigning user-assigned managed identities to resources: You need write permissions over the resource. For example, for virtual machines you need `Microsoft.Compute/virtualMachines/write`. You will also need `Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action` action over the user-assigned identity. This action is included in the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) built-in role.
+- Assigning user-assigned managed identities to resources: You need write permissions over the resource. For example, for virtual machines you need `Microsoft.Compute/virtualMachines/write`. You'll also need `Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action` action over the user-assigned identity. This action is included in the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) built-in role.
- Managing user-assigned identities: To create or delete user-assigned managed identities, you need the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.-- Managing role assignments for managed identities: You need the [Owner](../../role-based-access-control/built-in-roles.md#all) or [User Access Administrator](../../role-based-access-control/built-in-roles.md#all) role assignment over the resource to which you're granting access. You will need the [Reader](../../role-based-access-control/built-in-roles.md#all) role assignment to the resource with a system-assigned identity, or to the user-assigned identity that is being given the role assignment. If you do not have read access, you can search by "User, group, or service principal" to find the identity's backing service principal, instead of searching by managed identity while adding the role assignment. [Read more about assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
+- Managing role assignments for managed identities: You need the [Owner](../../role-based-access-control/built-in-roles.md#all) or [User Access Administrator](../../role-based-access-control/built-in-roles.md#all) role assignment over the resource to which you're granting access. You'll need the [Reader](../../role-based-access-control/built-in-roles.md#all) role assignment to the resource with a system-assigned identity, or to the user-assigned identity that is being given the role assignment. If you don't have read access, you can search by "User, group, or service principal" to find the identity's backing service principal, instead of searching by managed identity while adding the role assignment. [Read more about assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
### How do I prevent the creation of user-assigned managed identities?
At this point, any attempt to create a user-assigned managed identity in the res
### Do managed identities have a backing app object?
-No. Managed identities and Azure AD App Registrations are not the same thing in the directory.
+No. Managed identities and Azure AD App Registrations aren't the same thing in the directory.
App registrations have two components: An Application Object + A Service Principal Object. Managed Identities for Azure resources have only one of those components: A Service Principal Object.
Managed identities don't have an application object in the directory, which is w
> [!NOTE] > How managed identities authenticate is an internal implementation detail that is subject to change without notice.
-Managed identities use certificate-based authentication. Each managed identityΓÇÖs credential has an expiration of 90 days and it is rolled after 45 days.
+Managed identities use certificate-based authentication. Each managed identityΓÇÖs credential has an expiration of 90 days and it's rolled after 45 days.
### What identity will IMDS default to if don't specify the identity in the request? - If system assigned managed identity is enabled and no identity is specified in the request, Azure Instance Metadata Service (IMDS) defaults to the system assigned managed identity.-- If system assigned managed identity is not enabled, and only one user assigned managed identity exists, IMDS defaults to that single user assigned managed identity.-- If system assigned managed identity is not enabled, and multiple user assigned managed identities exist, then you are required to specify a managed identity in the request.
+- If system assigned managed identity isn't enabled, and only one user assigned managed identity exists, IMDS defaults to that single user assigned managed identity.
+- If system assigned managed identity isn't enabled, and multiple user assigned managed identities exist, then you're required to specify a managed identity in the request (see the example below).
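+
+For example, a request to IMDS from a VM can name a user-assigned managed identity explicitly by passing its client ID. This sketch uses a placeholder client ID and the Azure Resource Manager resource URI:
+
+```bash
+# Request a token for Azure Resource Manager, explicitly selecting a user-assigned identity
+curl -H "Metadata:true" \
+  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/&client_id=<client-id>"
+```
+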
## Limitations ### Can the same managed identity be used across multiple regions?
-In short, yes you can use user assigned managed identities in more than one Azure region. The longer answer is that while user assigned managed identities are created as regional resources the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) (SP) created in Azure AD is available globally. The service principal can be used from any Azure region and its availability is dependent on the availability of Azure AD. For example, if you created a user assigned managed identity in the South-Central region and that region becomes unavailable this issue only impacts [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. The activities performed by any resources already configured to use the managed identities would not be impacted.
+In short, yes, you can use user-assigned managed identities in more than one Azure region. The longer answer is that while user-assigned managed identities are created as regional resources, the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) (SP) created in Azure AD is available globally. The service principal can be used from any Azure region, and its availability depends on the availability of Azure AD. For example, if you created a user-assigned managed identity in the South-Central region and that region becomes unavailable, the issue only impacts [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. The activities performed by any resources already configured to use the managed identities wouldn't be impacted.
### Does managed identities for Azure resources work with Azure Cloud Services?
No, there are no plans to support managed identities for Azure resources in Azur
### What is the security boundary of managed identities for Azure resources?
-The security boundary of the identity is the resource to which it is attached to. For example, the security boundary for a Virtual Machine with managed identities for Azure resources enabled, is the Virtual Machine. Any code running on that VM, is able to call the managed identities for Azure resources endpoint and request tokens. It is the similar experience with other resources that support managed identities for Azure resources.
+The security boundary of the identity is the resource to which it's attached. For example, the security boundary for a virtual machine with managed identities for Azure resources enabled is the virtual machine. Any code running on that VM is able to call the managed identities endpoint and request tokens. The experience is similar with other resources that support managed identities.
### Will managed identities be recreated automatically if I move a subscription to another directory? No. If you move a subscription to another directory, you have to manually re-create them and grant Azure role assignments again.+ - For system assigned managed identities: disable and re-enable. - For user assigned managed identities: delete, re-create, and attach them again to the necessary resources (for example, virtual machines) ### Can I use a managed identity to access a resource in a different directory/tenant?
-No. Managed identities do not currently support cross-directory scenarios.
+No. Managed identities don't currently support cross-directory scenarios.
### Are there any rate limits that apply to managed identities?
Managed identities limits have dependencies on Azure service limits, Azure Insta
### Is it possible to move a user-assigned managed identity to a different resource group/subscription?
-Moving a user-assigned managed identity to a different resource group is not supported.
+Moving a user-assigned managed identity to a different resource group isn't supported.
-### Are tokens cached after they are issued for a managed identity?
+### Are managed identity tokens cached?
-Managed identity tokens are cached by the underlying Azure infrastructure for performance and resiliency purposes: the back-end services for managed identities maintain a cache per resource URI for around 24 hours. This means that it can take several hours for changes to a managed identity's permissions to take effect, for example. Today, it is not possible to force a managed identity's token to be refreshed before its expiry. For more information, see [Limitation of using managed identities for authorization](managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization).
+Managed identity tokens are cached by the underlying Azure infrastructure for performance and resiliency purposes: the back-end services for managed identities maintain a cache per resource URI for around 24 hours. For example, it can take several hours for changes to a managed identity's permissions to take effect. Today, it isn't possible to force a managed identity's token to be refreshed before its expiry. For more information, see [Limitation of using managed identities for authorization](managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization).
## Next steps
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
az identity create -g <Resource Group> -n <Managed Identity name>
Assign Managed Identity ```azurecli-interactive
-az role assignment create --assignee <id> --role "Storage Account Key Operator Service Role" --scope <Resource id>
+az role assignment create --assignee <id> --role "Contributor" --scope <Resource id>
``` ## Create an AKS cluster using the Host Group
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
If you don't have a managed identity yet, you should go ahead and create one for
```azurecli-interactive az identity create --name myIdentity --resource-group myResourceGroup ```+
+Assign "Managed Identity Operator" role to the identity.
+
+```azurecli-interactive
+az role assignment create --assignee <id> --role "Managed Identity Operator" --scope <id>
+```
++ The result should look like: ```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "westus2",
- "name": "myIdentity",
+{
+ "canDelegate": null,
+ "condition": null,
+ "conditionVersion": null,
+ "description": null,
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "name": "myIdentity,
"principalId": "<principalId>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ "principalType": "ServicePrincipal",
+ "resourceGroup": "myResourceGroup",
+ "roleDefinitionId": "/subscriptions/<subscriptionid>/providers/Microsoft.Authorization/roleDefinitions/<definitionid>",
+ "scope": "<resourceid>",
+ "type": "Microsoft.Authorization/roleAssignments"
} ```
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/add-api-manually.md
# Add an API manually
-This article shows steps to add an API manually to the API Management (APIM) instance. When you want to mock the APIAc, you can create a blank API or define it manually. For details about mocking an API, see [Mock API responses](mock-api-responses.md).
+This article shows steps to add an API manually to the API Management (APIM) instance. When you want to mock the API, you can create a blank API or define it manually. For details about mocking an API, see [Mock API responses](mock-api-responses.md).
If you want to import an existing API, see [related topics](#related-topics) section.
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
Previously updated : 10/25/2021 Last updated : 02/07/2022 # API Management policy expressions
The `context` variable is implicitly available in every policy [expression](api-
|<a id="ref-context-lasterror"></a>context.LastError|Source: string<br /><br /> Reason: string<br /><br /> Message: string<br /><br /> Scope: string<br /><br /> Section: string<br /><br /> Path: string<br /><br /> PolicyId: string<br /><br /> For more information about context.LastError, see [Error handling](api-management-error-handling-policies.md).| |<a id="ref-context-operation"></a>context.Operation|Id: string<br /><br /> Method: string<br /><br /> Name: string<br /><br /> UrlTemplate: string| |<a id="ref-context-product"></a>context.Product|Apis: IEnumerable<[IApi](#ref-iapi)\><br /><br /> ApprovalRequired: bool<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Name: string<br /><br /> State: enum ProductState {NotPublished, Published}<br /><br /> SubscriptionLimit: int?<br /><br /> SubscriptionRequired: bool|
-|<a id="ref-context-request"></a>context.Request|Body: [IMessageBody](#ref-imessagebody) or `null` if request does not have a body.<br /><br /> Certificate: System.Security.Cryptography.X509Certificates.X509Certificate2<br /><br /> [Headers](#ref-context-request-headers): IReadOnlyDictionary<string, string[]><br /><br /> IpAddress: string<br /><br /> MatchedParameters: IReadOnlyDictionary<string, string><br /><br /> Method: string<br /><br /> OriginalUrl: [IUrl](#ref-iurl)<br /><br /> Url: [IUrl](#ref-iurl)|
+|<a id="ref-context-request"></a>context.Request|Body: [IMessageBody](#ref-imessagebody) or `null` if request does not have a body.<br /><br /> Certificate: System.Security.Cryptography.X509Certificates.X509Certificate2<br /><br /> [Headers](#ref-context-request-headers): IReadOnlyDictionary<string, string[]><br /><br /> IpAddress: string<br /><br /> MatchedParameters: IReadOnlyDictionary<string, string><br /><br /> Method: string<br /><br /> OriginalUrl: [IUrl](#ref-iurl)<br /><br /> Url: [IUrl](#ref-iurl)<br /><br /> PrivateEndpointConnection: [IPrivateEndpointConnection](#ref-iprivateendpointconnection) or `null` if request does not come from a private endpoint connection.|
|<a id="ref-context-request-headers"></a>string context.Request.Headers.GetValueOrDefault(headerName: string, defaultValue: string)|headerName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated request header values or `defaultValue` if the header is not found.| |<a id="ref-context-response"></a>context.Response|Body: [IMessageBody](#ref-imessagebody)<br /><br /> [Headers](#ref-context-response-headers): IReadOnlyDictionary<string, string[]><br /><br /> StatusCode: int<br /><br /> StatusReason: string| |<a id="ref-context-response-headers"></a>string context.Response.Headers.GetValueOrDefault(headerName: string, defaultValue: string)|headerName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated response header values or `defaultValue` if the header is not found.|
The `context` variable is implicitly available in every policy [expression](api-
|<a id="ref-iapi"></a>IApi|Id: string<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Protocols: IEnumerable<string\><br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> SubscriptionKeyParameterNames: [ISubscriptionKeyParameterNames](#ref-isubscriptionkeyparameternames)| |<a id="ref-igroup"></a>IGroup|Id: string<br /><br /> Name: string| |<a id="ref-imessagebody"></a>IMessageBody|As<T\>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read either a request and response message body in specified type `T`. By default, the method:<br /><ul><li>Uses the original message body stream.</li><li>Renders it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
+|<a id="ref-iprivateendpointconnection"></a>IPrivateEndpointConnection|Name: string<br /><br /> GroupId: string<br /><br /> MemberName: string<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).|
|<a id="ref-iurl"></a>IUrl|Host: string<br /><br /> Path: string<br /><br /> Port: int<br /><br /> [Query](#ref-iurl-query): IReadOnlyDictionary<string, string[]><br /><br /> QueryString: string<br /><br /> Scheme: string| |<a id="ref-iuseridentity"></a>IUserIdentity|Id: string<br /><br /> Provider: string| |<a id="ref-isubscriptionkeyparameternames"></a>ISubscriptionKeyParameterNames|Header: string<br /><br /> Query: string|
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
For configurations specific to the *external* mode, where the service endpoints
1. Go to the [Azure portal](https://portal.azure.com) to find your API management instance. Search for and select **API Management services**. 1. Choose your API Management instance.
-1. Select **Virtual network**.
+1. Select **Network** > **Virtual network**.
1. Select the **Internal** access type. 1. In the list of locations (regions) where your API Management service is provisioned: 1. Choose a **Location**.
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
For configurations specific to the *internal* mode, where the endpoints are acce
1. Go to the [Azure portal](https://portal.azure.com) to find your API management instance. Search for and select **API Management services**. 1. Choose your API Management instance.
-1. Select **Virtual network**.
+1. Select **Network**.
1. Select the **External** access type. :::image type="content" source="media/api-management-using-with-vnet/api-management-menu-vnet.png" alt-text="Select VNet in Azure portal.":::
For configurations specific to the *internal* mode, where the endpoints are acce
:::image type="content" source="media/api-management-using-with-vnet/api-management-using-vnet-select.png" alt-text="VNet settings in the portal.":::
-1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new VNet and subnet choices.
+1. Select **Apply**. The **Network** page of your API Management instance is updated with your new VNet and subnet choices.
1. Continue configuring VNet settings for the remaining locations of your API Management instance.
api-management Howto Use Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-use-analytics.md
Previously updated : 11/24/2020 Last updated : 02/23/2022 # Get API analytics in Azure API Management
Azure API Management provides built-in analytics for your APIs. Analyze the usag
* Requests > [!NOTE]
-> Geography values are approximate based on IP address mapping.
+> * API analytics provides data on requests that are matched with an API and operation. Other calls aren't reported.
+> * Geography values are approximate based on IP address mapping.
:::image type="content" source="media/howto-use-analytics/analytics-report-portal.png" alt-text="Timeline analytics in portal":::
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
+
+ Title: Set up private endpoint for Azure API Management
+description: Learn how to restrict access to an Azure API Management instance by using an Azure private endpoint and Azure Private Link.
++++ Last updated : 02/23/2022+++
+# Connect privately to API Management using a private endpoint
+
+You can configure a [private endpoint](../private-link/private-endpoint-overview.md) for your API Management instance to allow clients in your private network to securely access the instance over [Azure Private Link](../private-link/private-link-overview.md).
+
+* The private endpoint uses an IP address from your Azure VNet address space.
+
+* Network traffic between a client on your private network and API Management traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet.
+
+* Configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.
+
+With a private endpoint and Private Link, you can:
+
+- Create multiple Private Link connections to an API Management instance.
+
+- Use the private endpoint to send inbound traffic on a secure connection.
+
+- Use policy to distinguish traffic that comes from the private endpoint.
+
+- Limit incoming traffic only to private endpoints, preventing data exfiltration.
+
+> [!IMPORTANT]
+> * API Management support for private endpoints is currently in preview.
+> * To enable private endpoints, the API Management instance can't already be configured with an external or internal [virtual network](virtual-network-concepts.md).
+> * A private endpoint connection supports only incoming traffic to the API Management instance.
++
+## Limitations
+
+* Only the API Management instance's Gateway endpoint currently supports Private Link connections.
+* Each API Management instance currently supports at most 100 Private Link connections.
+* Connections are not supported on the [self-hosted gateway](self-hosted-gateway-overview.md).
+
+## Prerequisites
+
+- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
+ - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md). For example, create a new instance or, if you already have an instance in the Premium service tier, enable [zone redundancy](zone-redundancy.md).
+ - Do not deploy (inject) the instance into an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) virtual network.
+- A virtual network and subnet to host the private endpoint. The subnet may contain other Azure resources.
+- (Recommended) A virtual machine in the same or a different subnet in the virtual network, to test the private endpoint.
+
+## Approval method for private endpoint
+
+Typically, a network administrator creates a private endpoint. Depending on your Azure role-based access control (RBAC) permissions, a private endpoint that you create is either *automatically approved* to send traffic to the API Management instance, or requires the resource owner to *manually approve* the connection.
++
+|Approval method |Minimum RBAC permissions |
+|||
+|Automatic | `Microsoft.Network/virtualNetworks/**`<br/>`Microsoft.Network/virtualNetworks/subnets/**`<br/>`Microsoft.Network/privateEndpoints/**`<br/>`Microsoft.Network/networkinterfaces/**`<br/>`Microsoft.Network/locations/availablePrivateEndpointTypes/read`<br/>`Microsoft.ApiManagement/service/**`<br/>`Microsoft.ApiManagement/service/privateEndpointConnections/**` |
+|Manual | `Microsoft.Network/virtualNetworks/**`<br/>`Microsoft.Network/virtualNetworks/subnets/**`<br/>`Microsoft.Network/privateEndpoints/**`<br/>`Microsoft.Network/networkinterfaces/**`<br/>`Microsoft.Network/locations/availablePrivateEndpointTypes/read` |
+
+## Steps to configure private endpoint
+
+1. [Get available private endpoint types in subscription](#get-available-private-endpoint-types-in-subscription)
+1. [Disable network policies in subnet](#disable-network-policies-in-subnet)
+1. [Create private endpoint - portal](#create-private-endpointportal)
+1. [List private endpoint connections to the instance](#list-private-endpoint-connections-to-the-instance)
+1. [Approve pending private endpoint connections](#approve-pending-private-endpoint-connections)
+1. [Optionally disable public network access](#optionally-disable-public-network-access)
+
+### Get available private endpoint types in subscription
+
+Verify that the API Management private endpoint type is available in your subscription and location. In the portal, find this information by going to the **Private Link Center**. Select **Supported resources**.
+
+You can also find this information by using the [Available Private Endpoint Types - List](/rest/api/virtualnetwork/available-private-endpoint-types) REST API.
+
+```rest
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/locations/{region}/availablePrivateEndpointTypes?api-version=2021-03-01
+```
+
+Output should include the `Microsoft.ApiManagement.service` endpoint type:
+
+```JSON
+[...]
+
+ "name": "Microsoft.ApiManagement.service",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Network/AvailablePrivateEndpointTypes/Microsoft.ApiManagement.service",
+ "type": "Microsoft.Network/AvailablePrivateEndpointTypes",
+ "resourceName": "Microsoft.ApiManagement/service",
+ "displayName": "Microsoft.ApiManagement/service",
+ "apiVersion": "2021-04-01-preview"
+ }
+[...]
+```
+
+### Disable network policies in subnet
+
+Network policies such as network security groups must be disabled in the subnet used for the private endpoint.
+
+If you use tools such as Azure PowerShell, the Azure CLI, or REST API to configure private endpoints, update the subnet configuration manually. For examples, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md).
+
+When you use the Azure portal to create a private endpoint, as shown in the next section, network policies are disabled automatically as part of the creation process.
+
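+For example, a minimal Azure CLI sketch to disable private endpoint network policies on a subnet (the resource group, VNet, and subnet names are placeholders) might look like the following:
+
+```azurecli-interactive
+az network vnet subnet update \
+  --resource-group <resource-group> \
+  --vnet-name <vnet-name> \
+  --name <subnet-name> \
+  --disable-private-endpoint-network-policies true
+```
+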
+### Create private endpoint - portal
+
+1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
+
+1. In the left-hand menu, select **Network**.
+
+1. Select **Private endpoint connections** > **+ Add endpoint**.
+
+ :::image type="content" source="media/private-endpoint/add-endpoint-from-instance.png" alt-text="Add a private endpoint using Azure portal":::
+
+1. In the **Basics** tab of **Create a private endpoint**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select an existing resource group, or create a new one. It must be in the same region as your virtual network.|
+ | **Instance details** | |
+ | Name | Enter a name for the endpoint such as **myPrivateEndpoint**. |
+ | Region | Select a location for the private endpoint. It must be in the same region as your virtual network. It may differ from the region where your API Management instance is hosted. |
+
+1. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page. The following information about your API Management instance is already populated:
+ * Subscription
+ * Resource group
+ * Resource name
+
+1. In **Resource**, in **Target sub-resource**, select **Gateway**.
+
+ :::image type="content" source="media/private-endpoint/create-private-endpoint.png" alt-text="Create a private endpoint in Azure portal":::
+
+1. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen.
+
+1. In **Configuration**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Networking** | |
+ | Virtual network | Select your virtual network. |
+ | Subnet | Select your subnet. |
+ | **Private DNS integration** | |
+ | Integrate with private DNS zone | Leave the default of **Yes**. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select your resource group. |
+ | Private DNS zones | Leave the default of **(new) privatelink.azure-api.net**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
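+If you prefer the Azure CLI to the portal flow above, the following sketch creates a comparable private endpoint. Resource names are placeholders, and because this feature is in preview, check that your CLI version supports the `Gateway` sub-resource before relying on it.
+
+```azurecli-interactive
+# Get the resource ID of the API Management instance
+apimId=$(az apim show --name <apim-name> --resource-group <apim-resource-group> --query id --output tsv)
+
+# Create the private endpoint targeting the Gateway sub-resource
+az network private-endpoint create \
+  --resource-group <resource-group> \
+  --name myPrivateEndpoint \
+  --vnet-name <vnet-name> \
+  --subnet <subnet-name> \
+  --private-connection-resource-id $apimId \
+  --group-id Gateway \
+  --connection-name myConnection
+```
+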
+### List private endpoint connections to the instance
+
+After the private endpoint is created, it appears in the list on the API Management instance's **Private endpoint connections** page in the portal.
+
+You can also use the [Private Endpoint Connection - List By Service](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-by-service) REST API to list private endpoint connections to the service instance.
++
+Note the endpoint's **Connection status**:
+
+* **Approved** indicates that the API Management resource automatically approved the connection.
+* **Pending** indicates that the connection must be manually approved by the resource owner.
+
+### Approve pending private endpoint connections
+
+If a private endpoint connection is in pending status, an owner of the API Management instance must manually approve it before it can be used.
+
+If you have sufficient permissions, approve a private endpoint connection on the API Management instance's **Private endpoint connections** page in the portal.
+
+You can also use the API Management [Private Endpoint Connection - Create Or Update](/rest/api/apimanagement/current-ga/private-endpoint-connection/create-or-update) REST API.
+
+```rest
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{apimServiceName}/privateEndpointConnections/{privateEndpointConnectionName}?api-version=2021-08-01
+```
+
+### Optionally disable public network access
+
+To optionally limit incoming traffic to the API Management instance only to private endpoints, disable public network access. Use the [API Management Service - Create Or Update](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) REST API.
+
+```rest
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{apimServiceName}?api-version=2021-08-01
+Authorization: Bearer {{authToken.response.body.access_token}}
+Content-Type: application/json
+
+```
+Use the following JSON body:
+
+```json
+{
+ [...]
+ "properties": {
+ "publicNetworkAccess": "Disabled"
+ }
+}
+```
+
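+As an illustration, the same call can be made with `az rest`, which obtains the Azure Resource Manager token for you. The subscription, resource group, and service names are placeholders:
+
+```azurecli-interactive
+az rest --method PATCH \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>?api-version=2021-08-01" \
+  --headers "Content-Type=application/json" \
+  --body '{ "properties": { "publicNetworkAccess": "Disabled" } }'
+```
+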
+## Validate private endpoint connection
+
+After the private endpoint is created, confirm its DNS settings in the portal:
+
+1. In the portal, navigate to the **Private Link Center**.
+1. Select **Private endpoints** and select the private endpoint you created.
+1. In the left-hand navigation, select **DNS configuration**.
+1. Review the DNS records and IP address of the private endpoint. The IP address is a private address in the address space of the subnet where the private endpoint is configured.
+
+### Test in virtual network
+
+Connect to a virtual machine you set up in the virtual network.
+
+Run a utility such as `nslookup` or `dig` to look up the IP address of your default Gateway endpoint over Private Link. For example:
+
+```
+nslookup my-apim-service.azure-api.net
+```
+
+Output should include the private IP address associated with the private endpoint.
+
+API calls initiated within the virtual network to the default Gateway endpoint should succeed.
+
+### Test from internet
+
+From outside the private endpoint path, attempt to call the API Management instance's default Gateway endpoint. If public access is disabled, output will include an error with status code `403` and a message similar to:
+
+```
+Request originated from client public IP address xxx.xxx.xxx.xxx, public network access on this 'Microsoft.ApiManagement/service/my-apim-service' is disabled.
+
+To connect to 'Microsoft.ApiManagement/service/my-apim-service', please use the Private Endpoint from inside your virtual network.
+```
+
+## Next steps
+
+* Use [policy expressions](api-management-policy-expressions.md#ref-context-request) with the `context.request` variable to identify traffic from the private endpoint.
+* Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md).
+* Learn more about [managing private endpoint connections](../private-link/manage-private-endpoint.md).
+* Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration.
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
Title: Azure API Management with an Azure virtual network
-description: Learn about scenarios and requirements to connect you API Management instance to an Azure virtual network.
+description: Learn about scenarios and requirements to connect your API Management instance to an Azure virtual network.
Previously updated : 08/19/2021 Last updated : 01/14/2022 # Use a virtual network with Azure API Management
-With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. You can then connect VNets to your on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
+With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. In a virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
+
+> [!TIP]
+> API Management also supports [private endpoints](../private-link/private-endpoint-overview.md). A private endpoint enables secure client connectivity to your API Management instance using a private IP address from your virtual network and Azure Private Link. [Learn more](private-endpoint.md) about using private endpoints with API Management.
This article explains VNet connectivity options, requirements, and considerations for your API Management instance. You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the configuration. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups][NetworkSecurityGroups].
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
Title: 'Quickstart: Create a WordPress site' description: Create your first WordPress site on Azure App Service in minutes.
+keywords: app service, azure app service, wordpress, preview, app service on linux, plugins, mysql flexible server, wordpress on linux, php
Last updated 01/14/2021 ms.devlang: wordpress
In this quickstart, you'll learn how to create and deploy your first [WordPress]
This quickstart configures WordPress in App Service on Linux. It uses the **Basic** tier and [**incurs a cost**](https://azure.microsoft.com/pricing/details/app-service/linux/) for your Azure subscription. > [!IMPORTANT]
-> WordPress in App Service on Linux is in preview.
+> WordPress in App Service on Linux is in preview. [View the announcement](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/the-new-and-better-wordpress-on-app-service/ba-p/3202594).
> > [After November 28, 2022, PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74)
Sign in to the Azure portal at https://portal.azure.com.
:::image type="content" source="./media/quickstart-wordpress/04-wordpress-basics-project-details.png?text=Azure portal WordPress Project Details" alt-text="Screenshot of WordPress project details":::
-1. Under **Instance details**, type a globally unique name for your web app and choose **Linux (preview)** for **Operating System**. Select **Basic** for **Hosting plan**. See the table below for app and database SKUs for given hosting plans. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
-
- > [!div class="mx-tdCol2BreakAll mx-tdCol3BreakAll"]
- > |Hosting Plan | App Service SKU | Database SKU |
- > |-|-|-|
- > | Basic | B1, 1.75 GB Memory, 10GB Storage A-Series Compute Equivalent | Flexi Server - Burstable (1-2 vCores) ΓÇô Standard B1s(1v Core, 1GB Memory, 32 GB Storage & 400 IOPs)|
- > |Development | S1, 1.75 GB Memory, 50 GB Storage A-Series Compute Equivalent | Flexi Server - Burstable (1-2 vCores) ΓÇô Standard_B1ms(1v Core, 2 GB Memory, 64 GB Storage and 500 IOPs)|
- > |Standard | P1V2, 3.5 GB Memory, 250GB Storage Dv2 Series Compute Equivalent | Flexi Server - Burstable (1-2 vCores) - Standard_B2s( 2v Core, 4 GB Memory, 128 GB Storage and 700 IOPs) |
- > |Premium |P1V3, 8 GB Memory, 250GB Storage 2 v CPU | Flexi Server - General Purpose (2-64 vCores) Standard_D2ds_v4(2v Core, 8 GB Memory, 128 GB Storage and 700 IOPs) |
+1. Under **Instance details**, type a globally unique name for your web app and choose **Linux (preview)** for **Operating System**. Select **Basic** for **Hosting plan**. See the table below for app and database SKUs for given hosting plans. You can view [hosting plans details in the announcement](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/the-new-and-better-wordpress-on-app-service/ba-p/3202594). For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
:::image type="content" source="./media/quickstart-wordpress/05-wordpress-basics-instance-details.png?text=WordPress basics instance details" alt-text="Screenshot of WordPress instance details":::
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Next, create an App Service plan using the [az appservice plan create](/cli/azur
```azurecli-interactive
- # Change 123 to any three characters to form a unique name across Azure
-az appservice plan create
- --name msdocs-core-sql-plan-123
- --resource-group msdocs-core-sql
+ # Change 123 to any three characters to form a unique name
+az appservice plan create \
+ --name msdocs-core-sql-plan-123 \
+ --resource-group msdocs-core-sql \
--sku F1 ```
Finally, create the App Service web app using the [az webapp create](/cli/azure/
```azurecli-interactive
-az webapp create
- --name <your-app-service-name>
- --runtime "DOTNET|6.0"
- --plan <your-app-service-plan-name>
+az webapp create \
+ --name <your-app-service-name> \
+ --runtime "DOTNET|6.0" \
+ --plan <your-app-service-plan-name> \
--resource-group msdocs-core-sql ```
To create an Azure SQL database, we first must create a SQL Server to host it. A
Replace the *server-name* placeholder with a unique SQL Database name. This name is used as the part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-username* with a username and password of your choice. ```azurecli-interactive
-az sql server create
- --location eastus
- --resource-group msdocs-core-sql
- --name <server-name>
- --admin-user <db-username>
+az sql server create \
+ --location eastus \
+ --resource-group msdocs-core-sql \
+ --name <server-name> \
+ --admin-user <db-username> \
--admin-password <db-password> ``` Provisioning a SQL Server may take a few minutes. Once the resource is available, we can create a database with the [az sql db create](/cli/azure/sql/db#az_sql_db_create) command. ```azurecli-interactive
-az sql db create
- --resource-group msdocs-core-sql
- --server <server-name>
+az sql db create \
+ --resource-group msdocs-core-sql \
+ --server <server-name> \
--name coreDb ``` We also need to add the following firewall rule to our database server to allow other Azure resources to access it. ```azurecli-interactive
-az sql server firewall-rule create
- --resource-group msdocs-core-sql
- --server <server-name>
- --name AzureAccess
- --start-ip-address 0.0.0.0
+az sql server firewall-rule create \
+ --resource-group msdocs-core-sql \
+ --server <server-name> \
+ --name AzureAccess \
+ --start-ip-address 0.0.0.0 \
--end-ip-address 0.0.0.0 ```
Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com
We can retrieve the Connection String for our database using the [az sql db show-connection-string](/cli/azure/sql/db#az_sql_db_show_connection_string) command. This command allows us to add the Connection String to our App Service configuration settings. Copy this Connection String value for later use. ```azurecli-interactive
-az sql db show-connection-string
- --client ado.net
- --name coreDb
+az sql db show-connection-string \
+ --client ado.net \
+ --name coreDb \
--server <your-server-name> ```
Next, let's assign the Connection String to our App Service using the command be
Make sure to replace the username and password in the connection string with your own before running the command. ```azurecli-interactive
-az webapp config connection-string set
- -g msdocs-core-sql
- -n <your-app-name>
- -t SQLServer
+az webapp config connection-string set \
+ -g msdocs-core-sql \
+ -n <your-app-name> \
+ -t SQLServer \
--settings MyDbConnection=<your-connection-string> ```
Next we need to update the appsettings.json file in our local app code with the
Finally, run the commands below to install the necessary CLI tools for Entity Framework Core, create an initial database migration file, and apply those changes to update the database. ```dotnetcli
-dotnet tool install -g dotnet-ef
-dotnet ef migrations add InitialCreate
+dotnet tool install -g dotnet-ef
+dotnet ef migrations add InitialCreate
dotnet ef database update ```
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookW
## Next steps
-To learn about Azure VM extensions, see:
+- To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environments, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
+- To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/extension-based-hybrid-runbook-worker.md).
-To learn about VM extensions for Arc-enabled servers, see:
-- [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions).
+- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](/azure/virtual-machines/extensions/features-windows) and [Azure VM extensions and features for Linux](/azure/virtual-machines/extensions/features-linux).
+
+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions).
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Container Instances](../container-instances/container-instances-region-availability.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Data Factory](../data-factory/index.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Data Factory](../data-factory/concepts-data-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Database for MySQL – [Flexible Server](../mysql/flexible-server/concepts-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Database for PostgreSQL – [Flexible Server](../postgresql/flexible-server/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DDoS Protection](../ddos-protection/ddos-faq.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
azure-arc Concepts Distributed Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/concepts-distributed-postgres-hyperscale.md
The recommended distribution varies by the type of application and its query pat
The first step in data modeling is to identify which of them more closely resembles your application.
-See details at [Determining application type](../../postgresql/hyperscale/concepts-app-type.md).
+See details at [Determining application type](../../postgresql/hyperscale/howto-app-type.md).
## Choose a distribution column
Why choose a distribution column?
This is one of the most important modeling decisions you'll make. Azure Arc-enabled PostgreSQL Hyperscale stores rows in shards based on the value of the rows' distribution column. The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features. An incorrect choice makes the system run slowly and won't support all SQL features across nodes. This article gives distribution column tips for the two most common hyperscale scenarios.
-See details at [Choose distribution columns](../../postgresql/hyperscale/concepts-choose-distribution-column.md).
+See details at [Choose distribution columns](../../postgresql/hyperscale/howto-choose-distribution-column.md).
## Table colocation
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
In a few minutes, your creation should successfully complete.
- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers). --
-|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
-|||||
-|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
-| | | | |
-
-While indicating 1 worker works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get 2 instances of Postgres: 1 coordinator and 1 worker. With this setup you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
-
+ |You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
+ |||||
+ |A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+ | | | | |
+
+ This table is demonstrated in the following figure:
+
+ :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+
+ While indicating 1 worker works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get 2 instances of Postgres: 1 coordinator and 1 worker. With this setup you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
+
- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class. - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
While indicating 1 worker works, we do not recommend you use it. This deployment
- [Monitor your server group](monitor-grafana-kibana.md) - Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale. : * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
* [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
Be aware of the following considerations when you're deploying:
|A simple instance of Azure Arc-enabled PostgreSQL Hyperscale that is ready to scale out when you need it. |One instance of Azure Arc-enabled PostgreSQL Hyperscale. It isn't yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes, and distribute the data. |*0*. |The Citus extension that provides the Hyperscale capability is present on your deployment, but isn't yet loaded. | | | | | |
- Although you can indicate *1* worker, it's not a good idea to do so. This deployment doesn't provide you with much value. With it, you get two instances of Azure Arc-enabled PostgreSQL Hyperscale: one coordinator and one worker. You don't scale out the data because you deploy a single worker. As such, you don't see an increased level of performance and scalability.
+ This table is demonstrated in the following figure:
+
+ :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+
+ Although you can indicate *1* worker, it's not a good idea to do so. This deployment doesn't provide you with much value. With it, you get two instances of Azure Arc-enabled PostgreSQL Hyperscale: one coordinator and one worker. You don't scale out the data because you deploy a single worker. As such, you don't see an increased level of performance and scalability.
- **The storage classes you want your server group to use.** It's important to set the storage class right at the time you deploy a server group. You can't change this setting after you deploy. If you don't indicate storage classes, you get the storage classes of the data controller by default. - To set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd`, followed by the name of the storage class.
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
The main parameters you should consider are:
-|You need |Shape of the server group you will deploy |`-w` parameter to use |Note |
-|||||
-|A scaled out form of Postgres to satisfy the scalability needs of your applications. |Three or more Postgres instances, one is coordinator, n are workers with n >=2. |Use `-w n`, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |One Postgres instance that is both coordinator and worker. |Use `-w 0` and load the Citus extension. Use the following parameters if deploying from command line: `-w 0` --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A simple instance of Postgres that is ready to scale out when you need it. |One Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |Use `-w 0` or do not specify `-w`. |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
-| | | | |
+ |You need |Shape of the server group you will deploy |`-w` parameter to use |Note |
+ |||||
+ |A scaled out form of Postgres to satisfy the scalability needs of your applications. |Three or more Postgres instances, one is coordinator, n are workers with n >=2. |Use `-w n`, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |One Postgres instance that is both coordinator and worker. |Use `-w 0` and load the Citus extension. Use the following parameters if deploying from command line: `-w 0` --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A simple instance of Postgres that is ready to scale out when you need it. |One Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |Use `-w 0` or do not specify `-w`. |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+ | | | | |
-While using `-w 1` works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get two instances of Postgres: One coordinator and one worker. With this setup, you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
+ This table is demonstrated in the following figure:
+
+ :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+
+ While using `-w 1` works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get two instances of Postgres: One coordinator and one worker. With this setup, you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
- **The storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this setting cannot be changed after you deploy. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - To set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
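As a rough illustration of how these parameters fit together, the sketch below assumes the `az postgres arc-server create` command from the `arcdata` CLI extension; the namespace and storage class names are placeholders, and only `-w`/`--workers`, `--storage-class-data`, and `--storage-class-logs` come from the text above:

```azurecli-interactive
az postgres arc-server create \
    --name postgres01 \
    --workers 2 \
    --storage-class-data <data-storage-class> \
    --storage-class-logs <logs-storage-class> \
    --k8s-namespace <namespace> \
    --use-k8s
```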
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md) - Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially: * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
* [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale: * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
* [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
## Suggested next steps - Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale. : * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
* [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
You scale in when you remove Postgres instances (Postgres Hyperscale worker node
## Get started If you are already familiar with the scaling model of Azure Arc-enabled PostgreSQL Hyperscale or Azure Database for PostgreSQL Hyperscale (Citus), you may skip this paragraph. If you are not, it is recommended you start by reading about this scaling model in the documentation page of Azure Database for PostgreSQL Hyperscale (Citus). Azure Database for PostgreSQL Hyperscale (Citus) is the same technology that is hosted as a service in Azure (Platform As A Service also known as PAAS) instead of being offered as part of Azure Arc-enabled Data - [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)-- [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)-- [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+- [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+- [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
- [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) - [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) - [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
The scale-in operation is an online operation. Your applications continue to acc
- Read about how to set server parameters in your Azure Arc-enabled PostgreSQL Hyperscale server group - Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale. : * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
* [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
azure-arc Uninstall Azure Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/uninstall-azure-arc-data-controller.md
kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.co
kubectl delete crd dags.sql.arcdata.microsoft.com kubectl delete crd exporttasks.tasks.arcdata.microsoft.com kubectl delete crd monitors.arcdata.microsoft.com
+kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com
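# Optional check (not part of the original steps): list any Arc data CRDs that remain after the deletions above
kubectl get crd | grep arcdata.microsoft.com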
+ ## Delete Cluster roles and Cluster role bindings kubectl delete clusterrole arcdataservices-extension
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
With the Direct connectivity mode offered by Azure Arc-enabled data services you
- **Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to potentially benefit from better performances**: * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
* [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
By using this annotation, the HelmRelease that is deployed will be patched with
## Multi-tenancy
-Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy). This capability will be integrated into Azure GitOps with Flux v2 prior to general availability.
+Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability will be integrated into Azure GitOps with Flux v2 prior to general availability.
>[!NOTE]
->This will be a breaking change if you have any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects. To prepare for the release of this multi-tenancy feature, take one of these actions:
+>This will be a breaking change if you have any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects. It [may also be a breaking change](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default) if you use a Kubernetes version less than 1.20.6. To prepare for the release of this multi-tenancy feature, take these actions:
>
->* (Recommended) Assure that all sourceRef are to objects within the same namespace as the GitOps configuration.
->* If you need time to migrate, you can opt-out of multi-tenancy.
+>* Upgrade to Kubernetes version 1.20.6 or greater.
+>* In your Kubernetes manifests, ensure that all `sourceRef` references point to objects within the same namespace as the GitOps configuration.
+>* If you need time to update your manifests, you can opt out of multi-tenancy. However, you still need to upgrade your Kubernetes version.
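As a quick pre-check before the multi-tenancy change rolls out (a sketch only; the AKS command is just one way to read a managed cluster's version, and the resource names are placeholders), you can confirm the cluster's Kubernetes version:

```console
# Version reported by the cluster itself
kubectl version --short

# For an AKS cluster, the control plane version can also be queried
az aks show --resource-group <resource-group> --name <cluster-name> --query kubernetesVersion --output tsv
```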
### Update manifests for multi-tenancy
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
There are four types of availability tests:
You can create up to 100 availability tests per Application Insights resource.
+> [!NOTE]
+> Availability tests are stored encrypted, according to [Microsoft Azure Data Encryption at rest](../../security/fundamentals/encryption-atrest.md#encryption-at-rest-in-microsoft-cloud-services) policies.
+ ## Troubleshooting See the dedicated [troubleshooting article](troubleshoot-availability.md).
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.6.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.6/applicationinsights-agent-3.2.6.jar) file.
+Download the [applicationinsights-agent-3.2.7.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.7/applicationinsights-agent-3.2.7.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.2.6.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to your application
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.6.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.7.jar` with the following content:
```json {
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.6.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.6.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.7.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.6.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.7.jar", "-jar", "<myapp.jar>"]
```
-If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.6.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.7.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.6.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.6.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.6.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.6.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.6.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.7.jar
```
-Quotes are not necessary, but if you want to include them, the proper placement is:
+Quotes aren't necessary, but if you want to include them, the proper placement is:
```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.6.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.7.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.6.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.7.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.6.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.2.7.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.2.6.jar
+-javaagent:path/to/applicationinsights-agent-3.2.7.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.6.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.7.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.6.jar>
+ -javaagent:path/to/applicationinsights-agent-3.2.7.jar
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following: ```--javaagent:path/to/applicationinsights-agent-3.2.6.jar
+-javaagent:path/to/applicationinsights-agent-3.2.7.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.2.6.jar
+-javaagent:path/to/applicationinsights-agent-3.2.7.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.6.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.7.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.6.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.7.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.6.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.7.jar` is located.
```json {
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.6, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.2.7, you can capture request and response headers on your server (request) telemetry:
```json {
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.6, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.2.7, you can change this behavior to capture them as success if you prefer:
```json {
Starting from version 3.2.0, the following preview instrumentations can be enabl
``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.2.6
+> Vertx HTTP Library instrumentation is available starting from version 3.2.7
## Metric interval
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.6.jar` is located.
+`applicationinsights-agent-3.2.7.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
public class Program
``` > [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Metrics API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
##### [Node.js](#tab/nodejs)
azure-monitor Container Insights Optout Openshift V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v4.md
After you enable monitoring of your Azure Red Hat OpenShift and Red Hat OpenShif
1. To first identify the Container insights helm chart release installed on your cluster, run the following helm command. ```
- helm list
+ helm list --all-namespaces
``` The output will resemble the following:
The configuration change can take a few minutes to complete. Because Helm tracks
## Next steps
-If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
+If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Describes common errors for Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 01/13/2022 Last updated : 02/23/2022
If your error code isn't listed, submit a GitHub issue. On the right side of the
| StorageAccountAlreadyExists <br> StorageAccountAlreadyTaken | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) | | StorageAccountNotFound | Check the subscription, resource group, and name of the storage account that you're trying to use. | | | SubnetsNotInSameVnet | A virtual machine can only have one virtual network. When deploying several NICs, make sure they belong to the same virtual network. | [Windows VM multiple NICs](../../virtual-machines/windows/multiple-nics.md) <br><br> [Linux VM multiple NICs](../../virtual-machines/linux/multiple-nics.md) |
+| SubnetIsFull | There aren't enough available addresses in the subnet to deploy resources. You can release addresses from the subnet, use a different subnet, or create a new subnet. | [Manage subnets](../../virtual-network/virtual-network-manage-subnet.md) and [Virtual network FAQ](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) <br><br> [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) |
| SubscriptionNotFound | A specified subscription for deployment can't be accessed. It could be the subscription ID is wrong, the user deploying the template doesn't have adequate permissions to deploy to the subscription, or the subscription ID is in the wrong format. When using ARM template nested deployments to deploy across scopes, provide the subscription's GUID. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) | | SubscriptionNotRegistered | When deploying a resource, the resource provider must be registered for your subscription. When you use an Azure Resource Manager template for deployment, the resource provider is automatically registered in the subscription. Sometimes, the automatic registration doesn't complete in time. To avoid this intermittent error, register the resource provider before deployment. | [Resolve registration](error-register-resource-provider.md) | | TemplateResourceCircularDependency | Remove unnecessary dependencies. | [Resolve circular dependencies](error-invalid-template.md#circular-dependency) |
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
For more information, see the [*SignalR trigger* binding reference](../azure-fun
You also need to configure your function endpoint as an upstream so that the service will trigger the function when there is a message from a client. For more information about how to configure the upstream setting, please refer to this [doc](concept-upstream.md).
+> [!NOTE]
+> StreamInvocation from client is not supported in Serverless Mode.
+ ### Sending messages and managing group membership Use the *SignalR* output binding to send messages to clients connected to Azure SignalR Service. You can broadcast messages to all clients, or you can send them to a subset of clients that are authenticated with a specific user ID or have been added to a specific group.
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
For SQL Database, the backup storage redundancy can be configured at the time of
> [!IMPORTANT]
-> Backup storage redundancy for Hyperscale and SQL Managed Instance can only be set during database creation. This setting cannot be modified once the resource is provisioned. [Database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database.
+> Backup storage redundancy for Hyperscale can only be set during database creation. This setting cannot be modified once the resource is provisioned. [Database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database.
> [!IMPORTANT] > Zone-redundant storage is currently only available in [certain regions](../../storage/common/storage-redundancy.md#zone-redundant-storage).
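As an illustration of the database copy approach mentioned above (a sketch only; the `--backup-storage-redundancy` parameter of `az sql db copy` is an assumption, and all names are placeholders):

```azurecli-interactive
az sql db copy \
    --resource-group <resource-group> \
    --server <source-server> \
    --name <source-database> \
    --dest-name <target-database> \
    --backup-storage-redundancy Zone
```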
azure-sql Job Automation Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/job-automation-managed-instance.md
Previously updated : 06/03/2021 Last updated : 02/23/2022 # Automate management tasks using SQL Agent jobs in Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
A schedule can define the following conditions for the time when a job runs:
- One time, at a specific date and time, which is useful for delayed execution of some job. - On a recurring schedule.
+For more information on scheduling a SQL Agent job, see [Schedule a Job](/sql/ssms/agent/schedule-a-job).
+ > [!Note]
-> SQL Managed Instance currently does not enable you to start a job when the CPU is idle.
+> Azure SQL Managed Instance currently does not enable you to start a job when the CPU is idle.
### SQL Agent job notifications
-SQL Agent Jobs enable you to get notifications when the job finishes successfully or fails. You can receive notifications via email.
+SQL Agent jobs enable you to get notifications when the job finishes successfully or fails. You can receive notifications via email.
If it isn't already enabled, first you would need to configure [the Database Mail feature](/sql/relational-databases/database-mail/database-mail) on SQL Managed Instance:
EXEC msdb.dbo.sp_update_job @job_name=N'Load data using SSIS',
SQL Managed Instance currently doesn't allow you to change any SQL Agent properties because they are stored in the underlying registry values. This means options for adjusting the Agent retention policy for job history records are fixed at the default of 1000 total records and max 100 history records per job.
+For more information, see [View SQL Agent job history](/sql/ssms/agent/view-the-job-history).
+ ### SQL Agent fixed database role membership If users linked to non-sysadmin logins are added to any of the three SQL Agent fixed database roles in the msdb system database, there exists an issue in which explicit EXECUTE permissions need to be granted to three system stored procedures in the master database. If this issue is encountered, the error message "The EXECUTE permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)" will be shown.
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases backup errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 02/01/2022 Last updated : 02/23/2022
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Possible causes** | System databases restore failed as the **&lt;sid&gt;adm** user environment couldn't find the **HDBsettings.sh** file to trigger restore. **Recommended action** | Work with the SAP HANA team to fix this issue.<br><br>If HXE is the SID, ensure that environment variable HOME is set to _/usr/sap/HXE/home_ as **sid-adm** user.
+### CloudDosAbsoluteLimitReached
+
+**Error message** | `Operation is blocked as you have reached the limit on number of operations permitted in 24 hours.` |
+ | --
+**Possible causes** | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. <br><br> For example: If you've hit the limit for the number of configure backup jobs that can be triggered per day, and you try to configure backup on a new item, you'll see this error.
+**Recommended action** | Typically, retrying the operation after 24 hours resolves this issue. However, if the issue persists, you can contact Microsoft support for help.
+
+### CloudDosAbsoluteLimitReachedWithRetry
+
+**Error message** | `Operation is blocked as the vault has reached its maximum limit for such operations permitted in a span of 24 hours.`
+ | --
+**Possible causes** | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. This error usually appears when there are at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state; the Azure Backup service retries the operations internally for all the items in question.<br><br> For example, if you have a large number of datasources protected with a policy and you try to modify that policy, it triggers configure protection jobs for each of the protected items and sometimes may hit the maximum limit permissible for such operations per day.
+**Recommended action** | Azure Backup service will automatically retry this operation after 24 hours.
+ ## Restore checks ### Single Container Database (SDC) restore
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported. **Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central, India Central, South Africa North, West US 2, Japan East and West US 3.
+**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central, India Central, South Africa North, West US 2, Japan East, East US, US Gov Virginia and West US 3.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault. ## On-premises backup support
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
This article helps you configure Bastion, and then connect to a VM in the VNet u
> [!NOTE] > * This configuration requires the Standard SKU tier for Azure Bastion.
+> * You can now upload and download files using the native client. To learn more, refer to [Upload and download files using the native client](vm-upload-download-native.md).
> * The user's capabilities on the VM using a native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via the Bastion is not supported. Currently, this feature has the following limitation:
This section helps you connect to your virtual machine using the *az network bas
* Use native clients on *non*-Windows local workstations (ex: a Linux PC). * Use a native client of your choice. * Set up concurrent VM sessions with Bastion.
-* Access file transfer for SSH sessions.
+* Upload files to your target VM from your local workstation.
1. Sign in to your Azure account, and select your subscription containing your Bastion resource.
This section helps you connect to your virtual machine using the *az network bas
```azurecli-interactive az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" ```
-3. Connect and sign in to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2.
+3. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2. For example, you can use the following command if you have the OpenSSH client installed on your local computer:
+
+ ```azurecli-interactive
+ ssh <username>@127.0.0.1 -p <LocalMachinePort>
+ ```
+ ## Next steps
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
Azure Bastion offers support for file transfer between your target VM and local
> * This feature requires the Standard SKU. The Basic SKU doesn't support using the native client. >
-## Upload and download files - RDP
+## Upload and download files using the *az network bastion rdp* command
-This section helps you transfer files between your local Windows computer and your target VM over RDP. The *az network bastion rdp command* uses the native client MSTSC to connect to the target VM. Once connected to the target VM, you can transfer files using right-click, then **Copy** and **Paste**.
+This section helps you transfer files between your local Windows computer and your target VM over RDP. The *az network bastion rdp* command uses the native client MSTSC to connect to the target VM. Once connected to the target VM, you can transfer files using right-click, then **Copy** and **Paste**.
1. Sign in to your Azure account and select the subscription containing your Bastion resource.
This section helps you transfer files between your local Windows computer and yo
1. Once you sign in to your target VM, the native client on your computer will open up with your VM session. You can now transfer files between your VM and local machine using right-click, then **Copy** and **Paste**.
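For reference, the RDP session in the step above is typically started with a command along these lines (a sketch using the same placeholder names as the tunnel example elsewhere in this article):

```azurecli-interactive
az network bastion rdp \
    --name "<BastionName>" \
    --resource-group "<ResourceGroupName>" \
    --target-resource-id "<VMResourceId>"
```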
-## Upload files - SSH
+## Upload files using the *az network bastion tunnel* command
-This section helps you upload files from your local computer to your target VM over SSH using the *az network bastion tunnel* command. The *az network tunnel command* allows you to use a native client of your choice. To learn more about the tunnel command, refer to [Connect to a VM using the *az network bastion tunnel* command](connect-native-client-windows.md#connect-tunnel).
+This section helps you upload files from your local computer to your target VM over SSH or RDP using the *az network bastion tunnel* command. This command allows you to use a native client of your choice on *non*-Windows local workstations. To learn more about the tunnel command, refer to [Connect to a VM using the *az network bastion tunnel* command](connect-native-client-windows.md#connect-tunnel).
> [!NOTE]
> File download over SSH is not currently supported.
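As an illustration only, once the tunnel opened by *az network bastion tunnel* is listening on your chosen local port, you can push a file to the VM with an OpenSSH-based client. The file name, user name, and port below are placeholders:

```bash
# Upload a local file to the target VM through the open Bastion tunnel (placeholders only).
scp -P <LocalMachinePort> ./sample-file.txt <username>@127.0.0.1:~/
```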
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
You use Speech-to-text recognition when you need to identify the language in an
> [!NOTE] > Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, and Python.
+>
+> Currently for speech-to-text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
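As a rough Python sketch of that requirement (the region, subscription key, and candidate languages below are placeholder assumptions, not values from the article):

```python
import azure.cognitiveservices.speech as speechsdk

# Build the SpeechConfig from the v2 endpoint required for continuous language identification.
# Assumptions: region "westus" and a placeholder subscription key.
endpoint = "wss://westus.stt.speech.microsoft.com/speech/universal/v2"
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", endpoint=endpoint)

# Candidate languages for language identification (placeholders).
auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config)
```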
::: zone pivot="programming-language-csharp"
You use Speech translation when you need to identify the language in an audio so
> [!NOTE] > Speech translation with language identification is only supported with Speech SDKs in C#, C++, and Python. -
+>
+> Currently for speech translation with language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
::: zone pivot="programming-language-csharp"
communication-services Program Brief Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/program-brief-guidelines.md
Previously updated : 11/30/2021 Last updated : 2/15/2022
This field captures the time duration in days until full production volume is re
Short Code programs that solicit political donations are subject to [additional best practices](https://www.ctia.org/the-wireless-industry/industry-commitments/guidelines-for-federal-political-campaign-contributions-via-wireless-carriers-bill).

### Privacy Policy and Terms and Conditions URL
-Message Senders are required to maintain a privacy policy and terms and conditions for all short code programs and make it accessible to customers from the initial call-to-action.
+Message Senders are required to maintain a privacy policy and terms and conditions that are specific to all short code programs and make them accessible to customers from the initial call-to-action.
-In this field, you can provide a URL of the privacy policy and terms and conditions where customers can access it. If you donΓÇÖt have the privacy policy URL yet, you can provide the URL of screenshots of what the policy will look like (a design mockup of the website that will go live once the campaign is launched).
+In this field, you can provide a URL of the privacy policy and terms and conditions where customers can access them. If you don't have the short code program-specific privacy policy or terms of service URL yet, you can provide the URL of screenshots of what the short code program policies will look like (a design mockup of the website that will go live once the campaign is launched).
+
+Your terms of service must include terms specific to the short code program brief and must contain ALL of the following:
+- Program Name and Description
+- Message frequency, which can be listed either as "Message Frequency Varies" or as the accurate frequency; it must also match what is listed in the call-to-action
+- The disclaimer: "Message and data rates may apply" written verbatim
+- Customer care information, for example: "For help call [phone number] or send an email to [email]"
+- Opt-Out message: "Text STOP to cancel"
+- A link to the privacy policy, or the full privacy policy itself
+
+##### Example:
+**Terms of Service**
+ > [!Note]
-> If you donΓÇÖt have a URL of the website, mockups, or design, please send the screenshots to phone@microsoft.com.
+> If you don't have a URL of the website, mockups, or design, please send an email with the screenshots to phone@microsoft.com with the subject "[CompanyName - ProgramName] Short Code Request".
+ ### Program Sign up type and URL
-This field captures the call to action (CTA), an instruction for the customers to take action for ensuring that the customer consents to receive text messages, and understands the nature of the program. Call to action can be over SMS, Interactive Voice Response (IVR), website, or point of sale. Carriers require that all short code program brief applications are submitted with mock ups for the call to action.
+This field captures the call-to-action, an instruction for customers to take an action that confirms they consent to receive text messages and understand the nature of the program. The call-to-action can be over SMS, Interactive Voice Response (IVR), website, or point of sale. Carriers require that all short code program brief applications are submitted with mock-ups for the call-to-action.
-In these fields, you must provide a URL of the website where customers will discover the program, URL for screenshots of the website, URL of mockup of the website, or URL with the design.
+In these fields, you must provide a URL of the website where customers will discover the program, a URL for screenshots of the website, a URL of a mockup of the website, or a URL with the design. If the program sign-up type is SMS, then you must provide the keywords the customer will send to the short code to opt in.
> [!Note]
-> If you donΓÇÖt have a URL of the website, mockups, or design, please send the screenshots to phone@microsoft.com.
+> If you don't have a URL of the website, mockups, or design, please send the screenshots to phone@microsoft.com with the subject "[CompanyName - ProgramName] Short Code Request".
-#### Guidelines for designing the call to action (CTA):
-1. The CTA needs to be clear as to what program the customer is joining or agreeing to.
+#### Guidelines for designing the call-to-action:
+1. The call-to-action needs to be clear as to what program the customer is joining or agreeing to.
    - Call-to-action must be clear and accurate; consent must not be obtained through deceptive means.
    - Enrolling a user in multiple programs based on a single opt-in is prohibited, even when all programs operate on the same short code. Please refer to the [CTIA monitoring handbook](https://www.wmcglobal.com/hubfs/CTIA%20Short%20Code%20Monitoring%20Handbook%20-%20v1.8.pdf) for best practices.
-2. The CTA needs to include the abbreviated terms and conditions, which include:
+2. The call-to-action needs to include the abbreviated terms and conditions, which include:
    - Program Name – as described above
    - Message frequency (recurring message/subscriptions)
    - Message and Data rates may apply
Contoso.com: Announcing our Holiday Sale. Reply YES to save 5% on your next Cont
**Web opt-in**
-**Point of sale (hardcopy leaflet)**
+**Point of sale (hardcopy leaflet) with SMS keyword call-to-action**
:::image type="content" source= "../media/print-opt-in-mock.png" alt-text="Screenshot showing print opt-in mock up.":::
In this field, you are required to provide a sample message for each content typ
The following documents may be interesting to you: -- Familiarize yourself with the [SMS SDK](../sms/sdk-features.md)
+- Familiarize yourself with the [SMS SDK](../sms/sdk-features.md)
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
Title: Connect to Common Data Service (Microsoft Dataverse)
-description: Create and manage Common Data Service (Microsoft Dataverse) records by using Azure Logic Apps
+description: Create and manage Common Data Service (Microsoft Dataverse) records by using Azure Logic Apps.
ms.suite: integration--++ Last updated 02/11/2021 tags: connectors
This article shows how you can build a logic app that creates a task record when
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A [Common Data Service environment](/power-platform/admin/environments-overview), which is a space where your organization stores, manages, and shares business data and a Common Data Service database. For more information, see these resources:<p>
connectors Connectors Create Api Azure Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azure-event-hubs.md
For all the operations and other technical information, such as properties, limi
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An [Event Hubs namespace and event hub](../event-hubs/event-hubs-create.md)
connectors Connectors Create Api Bingsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-bingsearch.md
Title: Connect to Bing Search
-description: Automate tasks and workflows that find results in Bing Search by using Azure Logic Apps
+description: Automate tasks and workflows that find results in Bing Search by using Azure Logic Apps.
ms.suite: integration--++ Last updated 05/21/2018 tags: connectors
connectors Connectors Create Api Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-container-instances.md
Title: Deploy & manage Azure Container Instances
-description: Automate tasks and workflows that create and manage container deployments in Azure Container Instances by using Azure Logic Apps
+description: Automate tasks and workflows that create and manage container deployments in Azure Container Instances by using Azure Logic Apps.
ms.suite: integration
ms. -+ tags: connectors Last updated 01/14/2020 # Deploy and manage Azure Container Instances by using Azure Logic Apps
-With Azure Logic Apps and the Azure Container Instance connector,
-you can set up automated tasks and workflows that deploy and manage [container groups](../container-instances/container-instances-container-groups.md). The Container Instance connector supports the following actions:
+With Azure Logic Apps and the Azure Container Instance connector, you can set up automated tasks and workflows that deploy and manage [container groups](../container-instances/container-instances-container-groups.md). The Container Instance connector supports the following actions:
* Create or delete a container group * Get the properties of a container group
If you're new to logic apps, review
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md) and [how to create and manage container instances](../container-instances/container-instances-quickstart.md)
connectors Connectors Create Api Crmonline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-crmonline.md
Title: Connect to Dynamics 365
-description: Create and manage Dynamics 365 records by using Azure Logic Apps
+description: Create and manage Dynamics 365 records by using Azure Logic Apps.
ms.suite: integration--++ Last updated 05/09/2020 tags: connectors
connectors Connectors Create Api Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-db2.md
Title: Access and manage IBM DB2 resources
-description: Read, edit, update, and manage IBM DB2 resources by building automated workflows using Azure Logic Apps
+description: Read, edit, update, and manage IBM DB2 resources by building automated workflows using Azure Logic Apps.
ms.suite: integration--++ Last updated 11/19/2020 tags: connectors
which map to the corresponding actions in the connector:
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An IBM DB2 database, either cloud-based or on-premises
connectors Connectors Create Api Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md
Title: Connect to FTP server
-description: Automate tasks and workflows that create, monitor, and manage files on an FTP server by using Azure Logic Apps
+description: Automate tasks and workflows that create, monitor, and manage files on an FTP server by using Azure Logic Apps.
ms.suite: integration--++ Last updated 12/15/2019 tags: connectors
When a trigger finds a new file, the trigger checks that the new file is complet
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Your FTP host server address and account credentials
connectors Connectors Create Api Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-github.md
Title: Access, monitor, and manage your GitHub repo
-description: Monitor GitHub events and manage your GitHub repo by creating automated workflows with Azure Logic Apps
+description: Monitor GitHub events and manage your GitHub repo by creating automated workflows with Azure Logic Apps.
ms.suite: integration--++ Last updated 03/02/2018 tags: connectors
connectors Connectors Create Api Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-informix.md
Title: Connect to IBM Informix database
description: Automate tasks and workflows that manage resources stored in IBM Informix by using Azure Logic Apps ms.suite: integration-+ --++ Last updated 01/07/2020 tags: connectors
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
ms.suite: integration --++ Last updated 05/25/2021 tags: connectors
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md
Title: Connect to Office 365 Outlook
-description: Automate tasks and workflows that manage email, contacts, and calendars in Office 365 Outlook by using Azure Logic Apps
+description: Automate tasks and workflows that manage email, contacts, and calendars in Office 365 Outlook by using Azure Logic Apps.
ms.suite: integration -+ Last updated 08/11/2021 tags: connectors
connectors Connectors Create Api Onedrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-onedrive.md
Title: Access and manage files in Microsoft OneDrive
-description: Upload and manage files in OneDrive by creating automated workflows in Azure Logic Apps
+description: Upload and manage files in OneDrive by creating automated workflows in Azure Logic Apps.
ms.suite: integration--++ Last updated 10/18/2016 tags: connectors
connectors Connectors Create Api Onedriveforbusiness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-onedriveforbusiness.md
Title: Connect to OneDrive for Business
-description: Upload and manage files with OneDrive for Business REST APIs and Azure Logic Apps
+description: Upload and manage files in OneDrive for Business using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/18/2016 tags: connectors
-# Get started with the OneDrive for Business connector
+# Connect to OneDrive for Business using Azure Logic Apps
+ Connect to OneDrive for Business to manage your files. You can perform various actions such as upload, update, get, and delete on files. You can get started by creating a logic app now, see [Create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
connectors Connectors Create Api Oracledatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-oracledatabase.md
Title: Connect to Oracle Database
-description: Insert and manage records with Oracle Database REST APIs and Azure Logic Apps
+description: Insert and manage records in Oracle Database using Azure Logic Apps.
ms.suite: integration--++ Last updated 05/20/2020 tags: connectors
-# Get started with the Oracle Database connector
+# Connect to Oracle Database from Azure Logic Apps
Using the Oracle Database connector, you create organizational workflows that use data in your existing database. This connector can connect to an on-premises Oracle Database, or an Azure virtual machine with Oracle Database installed. With this connector, you can:
connectors Connectors Create Api Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-outlook.md
Title: Connect to Outlook.com
-description: Automate tasks and workflows that manage email, calendars, and contacts in Outlook.com by using Azure Logic Apps
+description: Automate tasks and workflows that manage email, calendars, and contacts in Outlook.com using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/18/2016 tags: connectors
-# Manage email, calendars, and contacts in Outlook.com by using Azure Logic Apps
+# Connect to Outlook.com from Azure Logic Apps
With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Outlook.com connector](/connectors/outlook/), you can create automated tasks and workflows that manage your @outlook.com or @hotmail.com account by building logic apps. For example, you automate these tasks:
You can use any trigger to start your workflow, for example, when a new email ar
* An [Outlook.com account](https://outlook.live.com/owa/)
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The logic app where you want to access your Outlook.com account. To start your workflow with an Outlook.com trigger, you need to have a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To add an Outlook.com action to your workflow, your logic app needs to already have a trigger.
connectors Connectors Create Api Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-salesforce.md
Title: Connect to Salesforce from Azure Logic Apps
-description: Automate tasks and workflows that monitor, create, and manage Salesforce records and jobs by using Azure Logic Apps
+description: Automate tasks and workflows that monitor, create, and manage Salesforce records and jobs using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/24/2018 tags: connectors
-# Monitor, create, and manage Salesforce resources by using Azure Logic Apps
+# Connect to Salesforce from Azure Logic Apps
With Azure Logic Apps and the Salesforce connector, you can create automated tasks and workflows for your
If you're new to logic apps, review
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A [Salesforce account](https://salesforce.com/)
connectors Connectors Create Api Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sendgrid.md
Title: Connect to SendGrid from Azure Logic Apps
-description: Automate tasks and workflows that send emails and manage mailing lists in SendGrid by using Azure Logic Apps
+description: Automate tasks and workflows that send emails and manage mailing lists in SendGrid using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/24/2018 tags: connectors
-# Send emails and manage mailing lists in SendGrid by using Azure Logic Apps
+# Connect to SendGrid from Azure Logic Apps
With Azure Logic Apps and the SendGrid connector, you can create automated tasks and workflows that
If you're new to logic apps, review
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A [SendGrid account](https://www.sendgrid.com/) and a [SendGrid API key](https://sendgrid.com/docs/ui/account-and-settings/api-keys/)
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
Title: Exchange messages with Azure Service Bus
-description: Create automated tasks and workflows that send and receive messages by using Azure Service Bus in Azure Logic Apps
+description: Create automated tasks and workflows that send and receive messages by using Azure Service Bus in Azure Logic Apps.
ms.suite: integration -+ Last updated 08/18/2021 tags: connectors
-# Exchange messages in the cloud by using Azure Logic Apps and Azure Service Bus
+# Connect to Azure Service Bus from Azure Logic Apps
With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) connector, you can create automated tasks and workflows that transfer data, such as sales and purchase orders, journals, and inventory movements across applications for your organization. The connector not only monitors, sends, and manages messages, but also performs actions with queues, sessions, topics, subscriptions, and so on, for example:
connectors Connectors Create Api Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sftp.md
Title: Connect to SFTP account (Deprecated)
-description: Automate tasks and processes that monitor, create, manage, send, and receive files for an SFTP server by using Azure Logic Apps
+description: Automate tasks and processes that monitor, create, manage, send, and receive files for an SFTP server using Azure Logic Apps.
ms.suite: integration --++ Last updated 11/01/2019 tags: connectors
> [!IMPORTANT] > Please use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md) as the SFTP connector is deprecated. You can no longer select SFTP
-> triggers and actions in the Logic App Designer.
+> triggers and actions in the workflow designer.
To automate tasks that monitor, create, send, and receive files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server, you can build and automate integration workflows by using Azure Logic Apps and the SFTP connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream. Here are some example tasks you can automate:
The SFTP connector handles only files that are *50 MB or smaller* and doesn't su
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Your SFTP server address and account credentials, which let your logic app access your SFTP account. To use the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you also need access to an SSH private key and the SSH private key password.
connectors Connectors Create Api Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sharepoint.md
Title: Connect to SharePoint
-description: Monitor and manage resources in SharePoint Online or SharePoint Server on premises by using Azure Logic Apps
+description: Monitor and manage resources in SharePoint Online or SharePoint Server on premises using Azure Logic Apps.
ms.suite: integration -+ Last updated 08/11/2021 tags: connectors
-# Connect to SharePoint resources using Azure Logic Apps
+# Connect to SharePoint from Azure Logic Apps
To automate tasks that monitor and manage resources, such as files, folders, lists, and items, in SharePoint Online or in on-premises SharePoint Server, you can create automated integration workflows by using Azure Logic Apps and the SharePoint connector.
connectors Connectors Create Api Slack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-slack.md
Title: Connect to Slack from Azure Logic Apps
-description: Automate tasks and workflows that monitor files and manage channels, groups, and messages in your Slack account by using Azure Logic Apps
+description: Automate tasks and workflows that monitor files and manage channels, groups, and messages in your Slack account using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/25/2018 tags: connectors
-# Monitor and manage Slack with Azure Logic Apps
+# Connect to Slack from Azure Logic Apps
With Azure Logic Apps and the Slack connector, you can create automated tasks and workflows that monitor
connectors Connectors Create Api Smtp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-smtp.md
Title: Connect to SMTP from Azure Logic Apps
-description: Automate tasks and workflows that send email through your SMTP (Simple Mail Transfer Protocol) account by using Azure Logic Apps
+description: Automate tasks and workflows that send email through your SMTP (Simple Mail Transfer Protocol) account using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/25/2018 tags: connectors
-# Send email from your SMTP account with Azure Logic Apps
+# Connect to your SMTP account from Azure Logic Apps
With Azure Logic Apps and the Simple Mail Transfer Protocol (SMTP) connector, you can create automated tasks and workflows that send email from your SMTP account.
If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/lo
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Your SMTP account and user credentials
connectors Connectors Native Http Swagger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http-swagger.md
Title: Connect to REST endpoints from Azure Logic Apps
-description: Connect to REST endpoints from automated workflows in Azure Logic Apps.
+ Title: Call or connect to REST endpoints from workflows
+description: Learn how to call or connect to REST endpoints from workflows in Azure Logic Apps.
ms.suite: integration
Last updated 11/01/2019
tags: connectors
-# Call REST endpoints by using Azure Logic Apps
+# Call REST endpoints from workflows in Azure Logic Apps
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the built-in **HTTP + Swagger** operation, you can create automated integration workflows that regularly call any REST endpoint through a [Swagger file](https://swagger.io). The **HTTP + Swagger** trigger and action work the same as the [HTTP trigger and action](connectors-native-http.md) but provide a better experience in the workflow designer by exposing the API structure and outputs described by the Swagger file. To implement a polling trigger, follow the polling pattern that's described in [Create custom APIs to call other APIs, services, and systems from logic app workflows](../logic-apps/logic-apps-create-api-app.md#polling-triggers).
+With the built-in **HTTP + Swagger** operation and [Azure Logic Apps](../logic-apps/logic-apps-overview.md), you can create automated integration workflows that regularly call any REST endpoint through a [Swagger file](https://swagger.io). The **HTTP + Swagger** trigger and action work the same as the [HTTP trigger and action](connectors-native-http.md) but provide a better experience in the workflow designer by exposing the API structure and outputs described by the Swagger file. To implement a polling trigger, follow the polling pattern that's described in [Create custom APIs to call other APIs, services, and systems from logic app workflows](../logic-apps/logic-apps-create-api-app.md#polling-triggers).
## Prerequisites * An account and Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The URL for the Swagger (not OpenAPI) file that describes the target REST endpoint that you want to call
+* The URL for the Swagger file (OpenAPI 2.0, not OpenAPI 3.0) that describes the target REST endpoint that you want to call
Typically, the REST endpoint has to meet the following criteria for the trigger or action to work:
With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the built-in *
The examples in this topic use the [Cognitive Services Face API](../cognitive-services/face/overview.md), which requires a [Cognitive Services account and access key](../cognitive-services/cognitive-services-apis-create-account.md). > [!NOTE]
- > To reference a Swagger file that's unhosted or that doesn't meet the security and cross-origin requirements, you can [upload the Swagger file to a blob container in an Azure storage account](#host-swagger), and enable CORS on that storage account so that you can reference the file.
+ > To reference a Swagger file that's unhosted or that doesn't meet the security and cross-origin requirements,
+ > you can [upload the Swagger file to a blob container in an Azure storage account](#host-swagger), and enable
+ > CORS on that storage account so that you can reference the file.
-* The logic app workflow from where you want to call the target endpoint. To start with the **HTTP + Swagger** trigger, create a blank logic app workflow. To use the HTTP + Swagger action, start your workflow with any trigger that you want. This example uses the **HTTP + Swagger** trigger as the first step.
+* The logic app workflow from where you want to call the target endpoint. To start with the **HTTP + Swagger** trigger, create a blank logic app workflow. To use the **HTTP + Swagger** action, start your workflow with any trigger that you want. This example uses the **HTTP + Swagger** trigger as the first step.
If you're new to logic app workflows, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md) and [how to create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
This built-in trigger sends an HTTP request to a URL for a Swagger file that des
1. To add other available parameters, open the **Add new parameter** list, and select the parameters that you want.
- For more information about authentication types available for HTTP + Swagger, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
+ For more information about authentication types available for HTTP + Swagger, review [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
1. Continue building your workflow with actions that run when the trigger fires.
This built-in action sends an HTTP request to the URL for the Swagger file that
1. To add other available parameters, open the **Add new parameter** list, and select the parameters that you want.
- For more information about authentication types available for HTTP + Swagger, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
+ For more information about authentication types available for HTTP + Swagger, review [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
1. When you're finished, remember to save your logic app workflow. On the designer toolbar, select **Save**.
This section provides more information about the outputs from an **HTTP + Swagge
| Property name | Type | Description | |||-|
-| headers | object | The headers from the request |
-| body | object | The object with the body content from the request |
-| status code | int | The status code from the request |
+| **headers** | Object | The headers from the request |
+| **body** | Object | The object with the body content from the request |
+| **status code** | Integer | The status code from the request |
|||| | Status code | Description |
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cli-samples.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Azure CLI samples for Azure Cosmos DB Cassandra API
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB Cassandra API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-## Common Samples
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-These samples apply to all Azure Cosmos DB APIs
+## Cassandra API Samples
|Task | Description | |||
-| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Create an Azure Cosmos account, keyspace and table](../scripts/cli/cassandr)| Creates an Azure Cosmos DB account, keyspace, and table for Cassandra API. |
+| [Create a serverless Azure Cosmos account for Cassandra API, keyspace and table](../scripts/cli/cassandr)| Creates a serverless Azure Cosmos DB account, keyspace, and table for Cassandra API. |
+| [Create an Azure Cosmos account, keyspace and table with autoscale](../scripts/cli/cassandr)| Creates an Azure Cosmos DB account, keyspace, and table with autoscale for Cassandra API. |
+| [Perform throughput operations](../scripts/cli/cassandr) | Read, update and migrate between autoscale and standard throughput on a keyspace and table.|
+| [Lock resources from deletion](../scripts/cli/cassandr)| Prevent resources from being deleted with resource locks.|
|||
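For orientation, the linked samples build on a few basic commands. The following is a minimal sketch with placeholder account, resource group, and keyspace names:

```azurecli-interactive
# Create a Cassandra API account and a keyspace (names are placeholders).
az cosmosdb create --name mycassandraaccount --resource-group myResourceGroup --capabilities EnableCassandra
az cosmosdb cassandra keyspace create --account-name mycassandraaccount --resource-group myResourceGroup --name mykeyspace
```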
-## Cassandra API Samples
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. They use a SQL (Core) API account, but the operations are identical across all database APIs in Cosmos DB.
|Task | Description | |||
-| [Create an Azure Cosmos account, keyspace and table](../scripts/cli/cassandr?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, keyspace, and table for Cassandra API. |
-| [Create a serverless Azure Cosmos account for Cassandra API, keyspace and table](../scripts/cli/cassandr?toc=%2fcli%2fazure%2ftoc.json)| Creates a serverless Azure Cosmos DB account, keyspace, and table for Cassandra API. |
-| [Create an Azure Cosmos account, keyspace and table with autoscale](../scripts/cli/cassandr?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, keyspace, and table with autoscale for Cassandra API. |
-| [Throughput operations](../scripts/cli/cassandr?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a keyspace and table.|
-| [Lock resources from deletion](../scripts/cli/cassandr?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
|||+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs, see:
+
+- [CLI Samples for Gremlin](../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../sql/cli-samples.md)
+- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/powershell-samples.md
The following table includes links to commonly used Azure PowerShell scripts for
|[Create an account, keyspace and table](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, keyspace and table. | |[Create an account, keyspace and table with autoscale](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, keyspace and table with autoscale. | |[List or get keyspaces or tables](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get keyspaces or tables. |
-|[Throughput operations](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a keyspace or table including get, update and migrate between autoscale and standard throughput. |
+|[Perform throughput operations](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a keyspace or table including get, update and migrate between autoscale and standard throughput. |
|[Lock resources from deletion](../scripts/powershell/cassandr?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. | |||
cosmos-db Common Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-cli-samples.md
+
+ Title: Azure CLI Samples common to all Azure Cosmos DB APIs
+description: Azure CLI Samples common to all Azure Cosmos DB APIs
+++ Last updated : 02/22/2022+++++
+# Azure CLI samples for Azure Cosmos DB API
++
+The following table includes links to sample Azure CLI scripts that apply to all Cosmos DB APIs. For API specific samples, see [API specific samples](#api-specific-samples). Common samples are the same across all APIs.
+
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. They use a SQL (Core) API account, but the operations are identical across all database APIs in Cosmos DB.
+
+|Task | Description |
+|||
+| [Add or fail over regions](scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+|||
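As a quick illustration of the key operations covered by the common samples, the following sketch uses placeholder account and resource group names:

```azurecli-interactive
# List account keys and connection strings (names are placeholders).
az cosmosdb keys list --name myaccount --resource-group myResourceGroup --type keys
az cosmosdb keys list --name myaccount --resource-group myResourceGroup --type connection-strings
```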
+
+## API specific samples
+
+- [Cassandra API samples](cassandr)
+- [Gremlin API samples](graph/cli-samples.md)
+- [MongoDB API samples](mongodb/cli-samples.md)
+- [SQL API samples](sql/cli-samples.md)
+- [Table API samples](table/cli-samples.md)
+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/cli-samples.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Azure CLI samples for Azure Cosmos DB Gremlin API
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB Gremlin API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-## Common Samples
+These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-These samples apply to all Azure Cosmos DB APIs
+## Gremlin API Samples
|Task | Description | |||
-| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Create an Azure Cosmos account, database and graph](../scripts/cli/gremlin/create.md)| Creates an Azure Cosmos DB account, database, and graph for Gremlin API. |
+| [Create a serverless Azure Cosmos account for Gremlin API, database and graph](../scripts/cli/gremlin/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and graph for Gremlin API. |
+| [Create an Azure Cosmos account, database and graph with autoscale](../scripts/cli/gremlin/autoscale.md)| Creates an Azure Cosmos DB account, database, and graph with autoscale for Gremlin API. |
+| [Perform throughput operations](../scripts/cli/gremlin/throughput.md) | Read, update and migrate between autoscale and standard throughput on a database and graph.|
+| [Lock resources from deletion](../scripts/cli/gremlin/lock.md)| Prevent resources from being deleted with resource locks.|
|||
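For orientation, the linked Gremlin samples build on commands like the following sketch; the account, database, graph, and partition key names are placeholders:

```azurecli-interactive
# Create a Gremlin API account, database, and graph (names are placeholders).
az cosmosdb create --name mygremlinaccount --resource-group myResourceGroup --capabilities EnableGremlin
az cosmosdb gremlin database create --account-name mygremlinaccount --resource-group myResourceGroup --name mydatabase
az cosmosdb gremlin graph create --account-name mygremlinaccount --resource-group myResourceGroup --database-name mydatabase --name mygraph --partition-key-path "/pk"
```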
-## Gremlin API Samples
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. They use a SQL (Core) API account, but the operations are identical across all database APIs in Cosmos DB.
|Task | Description | |||
-| [Create an Azure Cosmos account, database and graph](../scripts/cli/gremlin/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and graph for Gremlin API. |
-| [Create a serverless Azure Cosmos account for Gremlin API, database and graph](../scripts/cli/gremlin/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates a serverless Azure Cosmos DB account, database, and graph for Gremlin API. |
-| [Create an Azure Cosmos account, database and graph with autoscale](../scripts/cli/gremlin/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and graph with autoscale for Gremlin API. |
-| [Throughput operations](../scripts/cli/gremlin/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and graph.|
-| [Lock resources from deletion](../scripts/cli/gremlin/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
|||+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs, see:
+
+- [CLI Samples for Cassandra](../cassandr)
+- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../sql/cli-samples.md)
+- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/powershell-samples.md
The following table includes links to commonly used Azure PowerShell scripts for
|[Create an account, database and graph](../scripts/powershell/gremlin/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and graph. | |[Create an account, database and graph with autoscale](../scripts/powershell/gremlin/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and graph with autoscale. | |[List or get databases or graphs](../scripts/powershell/gremlin/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or graph. |
-|[Throughput operations](../scripts/powershell/gremlin/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a database or graph including get, update and migrate between autoscale and standard throughput. |
+|[Perform throughput operations](../scripts/powershell/gremlin/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or graph including get, update and migrate between autoscale and standard throughput. |
|[Lock resources from deletion](../scripts/powershell/gremlin/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. | |||
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
description: This article describes how to build a highly available solution usi
Previously updated : 02/17/2022 Last updated : 02/24/2022
The following table summarizes the high availability capability of various accou
* Review the expected [behavior of the Azure Cosmos SDKs](troubleshoot-sdk-availability.md) during these events and which are the configurations that affect it.
-* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions and three, if using strong consistency. Remember that the best configuration to achieve high availability for a region outage is single write region with service-managed failover. To learn more, see how to [configure your Azure Cosmos account with multiple write-regions](tutorial-global-distribution-sql-api.md).
+* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions, or three if you're using strong consistency. Remember that the best configuration to achieve high availability during a region outage is a single write region with service-managed failover. To learn more, see [Tutorial: Set up Azure Cosmos DB global distribution using the SQL API](tutorial-global-distribution-sql-api.md).
* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable automatic failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user inputs.
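As a hedged example of both recommendations, the following sketch adds a second region and turns on service-managed (automatic) failover; the account, resource group, and regions are placeholders:

```azurecli-interactive
# Span the account across two regions, then enable service-managed failover (names and regions are placeholders).
az cosmosdb update --name myaccount --resource-group myResourceGroup \
    --locations regionName=eastus failoverPriority=0 isZoneRedundant=False \
    --locations regionName=westus failoverPriority=1 isZoneRedundant=False
az cosmosdb update --name myaccount --resource-group myResourceGroup --enable-automatic-failover true
```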
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplate
When migrating from periodic mode to continuous mode, you cannot run any control plane operations that performs account level updates or deletes. For example, operations such as adding or removing regions, account failover, updating backup policy etc. can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. Restore action on the migrated accounts only succeeds from the time when migration successfully completes.
-You can restore your account after the migration completes. If the migration completes at 1:00 PM PST, you can do point in time restore starting from 1.00 PM PST.
+You can restore your account after the migration completes. If the migration completes at 1:00 PM PST, you can do point in time restore starting from 1:00 PM PST.
## Frequently asked questions
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cli-samples.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Azure CLI samples for Azure Cosmos DB API for MongoDB
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB MongoDB API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-## Common Samples
+These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-These samples apply to all Azure Cosmos DB APIs
+## MongoDB API Samples
|Task | Description | |||
-| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Create an Azure Cosmos account, database and collection](../scripts/cli/mongodb/create.md)| Creates an Azure Cosmos DB account, database, and collection for MongoDB API. |
+| [Create a serverless Azure Cosmos account, database and collection](../scripts/cli/mongodb/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and collection for MongoDB API. |
+| [Create an Azure Cosmos account, database with autoscale and two collections with shared throughput](../scripts/cli/mongodb/autoscale.md)| Creates an Azure Cosmos DB account, database with autoscale and two collections with shared throughput for MongoDB API. |
+| [Perform throughput operations](../scripts/cli/mongodb/throughput.md) | Read, update and migrate between autoscale and standard throughput on a database and collection.|
+| [Lock resources from deletion](../scripts/cli/mongodb/lock.md)| Prevent resources from being deleted with resource locks.|
|||
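For orientation, the linked API for MongoDB samples build on commands like the following sketch; the account, database, collection, and shard key names are placeholders:

```azurecli-interactive
# Create an API for MongoDB account, database, and collection (names are placeholders).
az cosmosdb create --name mymongoaccount --resource-group myResourceGroup --kind MongoDB
az cosmosdb mongodb database create --account-name mymongoaccount --resource-group myResourceGroup --name mydatabase
az cosmosdb mongodb collection create --account-name mymongoaccount --resource-group myResourceGroup --database-name mydatabase --name mycollection --shard "myShardKey"
```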
-## MongoDB API Samples
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. They use a SQL (Core) API account, but the operations are identical across all database APIs in Cosmos DB.
|Task | Description | |||
-| [Create an Azure Cosmos account, database and collection](../scripts/cli/mongodb/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and collection for MongoDB API. |
-| [Create a serverless Azure Cosmos account, database and collection](../scripts/cli/mongodb/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates a serverless Azure Cosmos DB account, database, and collection for MongoDB API. |
-| [Create an Azure Cosmos account, database with autoscale and two collections with shared throughput](../scripts/cli/mongodb/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database with autoscale and two collections with shared throughput for MongoDB API. |
-| [Throughput operations](../scripts/cli/mongodb/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and collection.|
-| [Lock resources from deletion](../scripts/cli/mongodb/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
||| ## Next steps
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs, see:
+
+- [CLI Samples for Cassandra](../cassandra/cli-samples.md)
+- [CLI Samples for Gremlin](../graph/cli-samples.md)
+- [CLI Samples for SQL](../sql/cli-samples.md)
+- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/powershell-samples.md
The following table includes links to commonly used Azure PowerShell scripts for
|[Create an account, database and collection](../scripts/powershell/mongodb/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and collection. | |[Create an account, database and collection with autoscale](../scripts/powershell/mongodb/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account, database and collection with autoscale. | |[List or get databases or collections](../scripts/powershell/mongodb/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or collection. |
-|[Throughput operations](../scripts/powershell/mongodb/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a database or collection including get, update and migrate between autoscale and standard throughput. |
+|[Perform throughput operations](../scripts/powershell/mongodb/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or collection including get, update and migrate between autoscale and standard throughput. |
|[Lock resources from deletion](../scripts/powershell/mongodb/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. | |||
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
Previously updated : 7/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Cassandra API account, keyspace and table with autoscale using Azure CLI++
+The script in this article demonstrates creating an Azure Cosmos DB account, keyspace, and table with autoscale for the Cassandra API.
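As a rough sketch of what the linked sample does (the names below are placeholders, not the variables the sample script defines), the core commands look like this:

```azurecli
# Placeholder names for illustration only
resourceGroup="my-resource-group"
account="my-cassandra-account"

az group create --name $resourceGroup --location westus2

# Create a Cassandra API account, then a keyspace and a table with autoscale throughput
az cosmosdb create --name $account --resource-group $resourceGroup --capabilities EnableCassandra
az cosmosdb cassandra keyspace create --account-name $account --resource-group $resourceGroup --name mykeyspace
az cosmosdb cassandra table create --account-name $account --resource-group $resourceGroup \
    --keyspace-name mykeyspace --name mytable --max-throughput 4000 \
    --schema '{"columns": [{"name": "id", "type": "uuid"}], "partitionKeys": [{"name": "id"}]}'
```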
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/cassandra/autoscale.sh "Create an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Cassandra API account, keyspace and table using Azure CLI++
+The script in this article demonstrates creating an Azure Cosmos DB account, keyspace, and table for Cassandra API.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/cassandra/create.sh "Create an Azure Cosmos DB Cassandra API account, keyspace, and table.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create a resource lock for Azure Cosmos Cassandra API keyspace and table using Azure CLI -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+The script in this article demonstrates how to use resource locks to prevent resources from being deleted.
> [!IMPORTANT]
+>
+> To create resource locks, you must have membership in the owner role in the subscription.
+>
> Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). ++
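As a minimal sketch of the approach (names are placeholders; the linked sample may structure this differently), you can look up the keyspace's resource ID and place a delete lock on it:

```azurecli
# Placeholder names for illustration only
keyspaceId=$(az cosmosdb cassandra keyspace show \
    --account-name my-cassandra-account --resource-group my-resource-group --name mykeyspace \
    --query id --output tsv)

# CanNotDelete blocks deletion but still allows reads and updates through Azure Resource Manager
az lock create --name cassandra-keyspace-delete-lock --lock-type CanNotDelete --resource $keyspaceId
```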
+- This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/cassandra/lock.sh "Create a resource lock for an Azure Cosmos DB Cassandra API keyspace, and table.")]
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Create an Azure Cosmos Cassandra API serverless account, keyspace and table using Azure CLI +
+The script in this article demonstrates creating a serverless Azure Cosmos DB account, keyspace, and table for Cassandra API.
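The key difference from the provisioned-throughput sample is that serverless is chosen at account creation with a capability flag; the keyspace and table are then created without any `--throughput` or `--max-throughput` values. A minimal sketch with a placeholder account name:

```azurecli
# Serverless is a capability set when the account is created
az cosmosdb create --name my-cassandra-serverless --resource-group my-resource-group \
    --capabilities EnableCassandra EnableServerless
```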
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)]
+- This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/cassandra/serverless.sh "Create an Azure Cosmos DB Cassandra API serverless account, keyspace, and table.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
Title: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources Previously updated : 10/07/2020 Last updated : 02/21/2022 # Throughput (RU/s) operations with Azure CLI for a keyspace or table for Azure Cosmos DB - Cassandra API++
+The script in this article creates a Cassandra keyspace with shared throughput and a Cassandra table with dedicated throughput, then updates the throughput for both the keyspace and table. The script then migrates from standard to autoscale throughput and reads back the autoscale value after the migration.
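For orientation, the throughput operations the sample walks through map roughly to commands like these (placeholder names and arbitrary RU/s values, not the ones used in the linked script):

```azurecli
# Read and update standard (manual) throughput on the keyspace
az cosmosdb cassandra keyspace throughput show --account-name my-cassandra-account \
    --resource-group my-resource-group --name mykeyspace
az cosmosdb cassandra keyspace throughput update --account-name my-cassandra-account \
    --resource-group my-resource-group --name mykeyspace --throughput 500

# Migrate the table to autoscale, then read back the autoscale max RU/s
az cosmosdb cassandra table throughput migrate --account-name my-cassandra-account \
    --resource-group my-resource-group --keyspace-name mykeyspace --name mytable --throughput-type autoscale
az cosmosdb cassandra table throughput show --account-name my-cassandra-account \
    --resource-group my-resource-group --keyspace-name mykeyspace --name mytable \
    --query resource.autoscaleSettings.maxThroughput --output tsv
```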
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-This script creates a Cassandra keyspace with shared throughput and a Cassandra table with dedicated throughput, then updates the throughput for both the keyspace and table. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/cassandra/throughput.sh "Throughput operations for Cassandra keyspace and table.")]
+### Run the script
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+## Clean up resources
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos account with IP firewall using Azure CLI+ [!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)]
+The script in this article demonstrates creating a Cosmos DB account with default values and an IP firewall enabled. It uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB. To use this sample for other APIs, apply the `ip-range-filter` parameter in the script to the `az cosmosdb create` command for your API specific script.
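As a minimal sketch (placeholder account name, documentation IP ranges), the firewall is set as a comma-separated list of addresses or CIDR ranges at account creation; the same parameter can be applied later with `az cosmosdb update`:

```azurecli
# Only the listed ranges can reach the account once the firewall is applied
az cosmosdb create --name my-cosmos-account --resource-group my-resource-group \
    --ip-range-filter "203.0.113.0/24,198.51.100.42"
```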
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-> [!NOTE]
-> This sample demonstrates using a SQL (Core) API account. To use this sample for other APIs, apply the `ip-range-filter` parameter in the script below to `az cosmosdb account create` command for your API specific script.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/common/ipfirewall.sh "Create an Azure Cosmos account with ip firewall.")]
+### Run the script
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+## Clean up resources
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+For Azure CLI samples for specific APIs, see:
+
+- [CLI Samples for Cassandra](../../../cassandra/cli-samples.md)
+- [CLI Samples for Gremlin](../../../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../../../sql/cli-samples.md)
+- [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Work with account keys and connection strings for an Azure Cosmos account using Azure CLI+ [!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)]
+The script in this article demonstrates four operations.
+
+- List all account keys
+- List read only account keys
+- List connection strings
+- Regenerate account keys
+
+ This script uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
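For orientation, the four operations correspond to commands along these lines (placeholder names, not the values the linked script uses):

```azurecli
az cosmosdb keys list --name my-cosmos-account --resource-group my-resource-group --type keys
az cosmosdb keys list --name my-cosmos-account --resource-group my-resource-group --type read-only-keys
az cosmosdb keys list --name my-cosmos-account --resource-group my-resource-group --type connection-strings

# Regenerate the secondary key; rotate primary and secondary separately to avoid downtime
az cosmosdb keys regenerate --name my-cosmos-account --resource-group my-resource-group --key-kind secondary
```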
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-This script demonstrates four operations.
-- List all account keys-- List read only account keys-- List connection strings-- Regenerate account keys-
-> [!NOTE]
-> This sample demonstrates using a SQL (Core) API account but the account key and connection string operations are identical across all database APIs in Cosmos DB.
+### Run the script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/common/keys.sh "Keys and connection string operations for Cosmos DB.")]
-## Clean up deployment
+## Clean up resources
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+For Azure CLI samples for specific APIs, see:
+
+- [CLI Samples for Cassandra](../../../cassandra/cli-samples.md)
+- [CLI Samples for Gremlin](../../../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../../../sql/cli-samples.md)
+- [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Add regions, change failover priority, trigger failover for an Azure Cosmos account using Azure CLI---- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Sample script
-This script demonstrates three operations.
+The script in this article demonstrates three operations.
- Add a region to an existing Azure Cosmos account.
- Change regional failover priority (applies to accounts using automatic failover)
- Trigger a manual failover from primary to secondary regions (applies to accounts with manual failover)
-> [!NOTE]
+This script uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
+
+> [!IMPORTANT]
> Add and remove region operations on a Cosmos account cannot be done while changing other properties.
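For orientation, the three operations listed above map roughly to the following commands (placeholder names and example regions, not the values used in the linked script):

```azurecli
# Add a second region by passing the full desired set of locations
az cosmosdb update --name my-cosmos-account --resource-group my-resource-group \
    --locations regionName=westus2 failoverPriority=0 isZoneRedundant=false \
    --locations regionName=eastus2 failoverPriority=1 isZoneRedundant=false

# Change failover priority; on accounts configured for manual failover,
# promoting a different region to priority 0 is what triggers a manual failover
az cosmosdb failover-priority-change --name my-cosmos-account --resource-group my-resource-group \
    --failover-policies eastus2=0 westus2=1
```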
-> [!NOTE]
-> This sample demonstrates using a SQL (Core) API account but these operations are identical across all database APIs in Cosmos DB.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/common/regions.sh "Regional operations for Cosmos DB.")]
-## Clean up deployment
+- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Sample script
+
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+### Run the script
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+For Azure CLI samples for specific APIs, see:
+
+- [CLI Samples for Cassandra](../../../cassandra/cli-samples.md)
+- [CLI Samples for Gremlin](../../../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../../../sql/cli-samples.md)
+- [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Connect an existing Azure Cosmos account with virtual network service endpoints using Azure CLI+ [!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)]
+The script in this article demonstrates connecting an existing Azure Cosmos account to a new virtual network whose subnet is not yet configured for service endpoints, by using the `ignore-missing-vnet-service-endpoint` parameter. This allows the Cosmos account configuration to complete without error before the virtual network's subnet is configured. Once the subnet configuration is complete, the Cosmos account is accessible through the configured subnet.
+
+This script uses a SQL (Core) API account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API specific script.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-This sample is intended to show how to connect an existing Azure Cosmos account to an existing new virtual network where the subnet is not yet configured for service endpoints by using the `ignore-missing-vnet-service-endpoint` parameter. This allows the configuration for the Cosmos account to complete without error before the configuration to the virtual network's subnet is completed. Once the subnet configuration is complete, the Cosmos account will then be accessible through the configured subnet.
-> [!NOTE]
-> This sample demonstrates using a SQL (Core) API account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API specific script.
+### Run the script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/common/service-endpoints-ignore-missing-vnet.sh "Create an Azure Cosmos account with service endpoints.")]
-## Clean up deployment
+## Clean up resources
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+For Azure CLI samples for specific APIs, see:
+
+- [CLI Samples for Cassandra](../../../cassandra/cli-samples.md)
+- [CLI Samples for Gremlin](../../../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../../../sql/cli-samples.md)
+- [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos account with virtual network service endpoints using Azure CLI+ [!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)]
+The script in this article creates a new virtual network with a front and back end subnet and enables service endpoints for `Microsoft.AzureCosmosDB`. It then retrieves the resource ID for this subnet and applies it to the Azure Cosmos account and enables service endpoints for the account.
+
+This script uses a Core (SQL) API account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API specific script.
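As a rough sketch with placeholder names, the flow is: create the virtual network, enable the `Microsoft.AzureCosmosDB` service endpoint on the subnet, then pass the subnet's resource ID when creating the account:

```azurecli
# Create the virtual network and a subnet (default address prefixes)
az network vnet create --resource-group my-resource-group --name my-vnet --subnet-name my-backend-subnet

# Enable the Cosmos DB service endpoint on the subnet
az network vnet subnet update --resource-group my-resource-group --vnet-name my-vnet \
    --name my-backend-subnet --service-endpoints Microsoft.AzureCosmosDB

subnetId=$(az network vnet subnet show --resource-group my-resource-group \
    --vnet-name my-vnet --name my-backend-subnet --query id --output tsv)

# Create the account with virtual network filtering enabled for that subnet
az cosmosdb create --name my-cosmos-account --resource-group my-resource-group \
    --enable-virtual-network true --virtual-network-rules $subnetId
```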
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-This sample creates a new virtual network with a front and back end subnet and enables service endpoints for `Microsoft.AzureCosmosDB`. It then retrieves the resource ID for this subnet and applies it to the Azure Cosmos account and enables service endpoints for the account.
-> [!NOTE]
-> This sample demonstrates using a Core (SQL) API account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API specific script.
+### Run the script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/common/service-endpoints.sh "Create an Azure Cosmos account with service endpoints.")]
-## Clean up deployment
+## Clean up resources
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+For Azure CLI samples for specific APIs, see:
+
+- [CLI Samples for Cassandra](../../../cassandra/cli-samples.md)
+- [CLI Samples for Gremlin](../../../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../../../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../../../sql/cli-samples.md)
+- [CLI Samples for Table](../../../table/cli-samples.md)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
Previously updated : 7/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Gremlin API account, database and graph with autoscale using Azure CLI+ [!INCLUDE[appliesto-gremlin-api](../../../includes/appliesto-gremlin-api.md)]
+The script in this article demonstrates creating a Gremlin API database and graph with autoscale.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/gremlin/autoscale.sh "Create an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Gremlin API account, database and graph using Azure CLI+ [!INCLUDE[appliesto-gremlin-api](../../../includes/appliesto-gremlin-api.md)]
+The script in this article demonstrates creating a Gremlin database and graph.
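For orientation (placeholder names, not the values used in the linked script), the core commands create a Gremlin-enabled account, a database, and a graph with a partition key:

```azurecli
az cosmosdb create --name my-gremlin-account --resource-group my-resource-group --capabilities EnableGremlin
az cosmosdb gremlin database create --account-name my-gremlin-account \
    --resource-group my-resource-group --name mydatabase
az cosmosdb gremlin graph create --account-name my-gremlin-account \
    --resource-group my-resource-group --database-name mydatabase \
    --name mygraph --partition-key-path "/pk" --throughput 400
```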
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/gremlin/create.sh "Create an Azure Cosmos DB Gremlin API account, database, and graph.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create a resource lock for Azure Cosmos Gremlin API database and graph using Azure CLI -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+The script in this article demonstrates performing resource lock operations for a Gremlin database and graph.
> [!IMPORTANT]
+>
+> To create resource locks, you must have membership in the owner role in the subscription.
+>
> Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). ++
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/gremlin/lock.sh "Create a resource lock for an Azure Cosmos DB Gremlin API database and graph.")]
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Create an Azure Cosmos Gremlin API serverless account, database and graph using Azure CLI [!INCLUDE[appliesto-gremlin-api](../../../includes/appliesto-gremlin-api.md)]
+The script in this article demonstrates creating a Gremlin serverless account, database and graph.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)]
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/gremlin/serverless.sh "Create an Azure Cosmos DB Gremlin API serverless account, database, and graph.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
Title: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources Previously updated : 10/07/2020 Last updated : 02/21/2022 # Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB - Gremlin API++
+The script in this article creates a Gremlin database with shared throughput and a Gremlin graph with dedicated throughput, then updates the throughput for both the database and graph. The script then migrates from standard to autoscale throughput and reads back the autoscale value after the migration.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-This script creates a Gremlin database with shared throughput and a Gremlin graph with dedicated throughput, then updates the throughput for both the database and graph. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/gremlin/throughput.sh "Throughput operations for a Gremlin database and graph.")]
+### Run the script
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+## Clean up resources
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
Previously updated : 7/29/2020 Last updated : 02/21/2022 # Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB using Azure CLI+ [!INCLUDE[appliesto-mongodb-api](../../../includes/appliesto-mongodb-api.md)]
+The script in this article demonstrates creating a MongoDB API database with autoscale and two collections that share throughput.
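For orientation (placeholder names, not the values used in the linked script), shared autoscale throughput is set on the database, and collections created without their own throughput values share it:

```azurecli
az cosmosdb create --name my-mongo-account --resource-group my-resource-group --kind MongoDB
az cosmosdb mongodb database create --account-name my-mongo-account \
    --resource-group my-resource-group --name mydatabase --max-throughput 4000

# No throughput parameters on the collections, so both share the database's autoscale RU/s
az cosmosdb mongodb collection create --account-name my-mongo-account \
    --resource-group my-resource-group --database-name mydatabase --name collection1 --shard "myShardKey"
az cosmosdb mongodb collection create --account-name my-mongo-account \
    --resource-group my-resource-group --database-name mydatabase --name collection2 --shard "myShardKey"
```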
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/mongodb/autoscale.sh "Create an Azure Cosmos DB MongoDB API account, database with autoscale, and two shared throughput collections.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create a database and collection for MongoDB API for Azure Cosmos DB using Azure CLI+ [!INCLUDE[appliesto-mongodb-api](../../../includes/appliesto-mongodb-api.md)]
+The script in this article demonstrates creating a MongoDB API database and collection.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/mongodb/create.sh "Create an Azure Cosmos DB MongoDB API account, database, and collection.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create a resource lock for Azure Cosmos DB's API for MongoDB using Azure CLI -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+The script in this article demonstrates performing resource lock operations for a MongoDB API database and collection.
> [!IMPORTANT]
+>
+> To create resource locks, you must have membership in the owner role in the subscription.
+>
> Resource locks do not work for changes made by users connecting using any MongoDB SDK, Mongoshell, any tools or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). ++
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/mongodb/lock.sh "Create a resource lock for an Azure Cosmos DB MongoDB API database and collection.")]
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Create a serverless database and collection for MongoDB API for Azure Cosmos DB using Azure CLI [!INCLUDE[appliesto-mongodb-api](../../../includes/appliesto-mongodb-api.md)]
+The script in this article demonstrates creating a MongoDB API serverless account, database, and collection.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)]
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/mongodb/serverless.sh "Create an Azure Cosmos DB MongoDB API serverless account, database, and collection.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
Title: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources Previously updated : 10/07/2020 Last updated : 02/21/2022 # Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB API for MongoDB+ [!INCLUDE[appliesto-mongodb-api](../../../includes/appliesto-mongodb-api.md)]
+The script in this article creates a MongoDB database with shared throughput and a collection with dedicated throughput, then updates the throughput for both. The script then migrates from standard to autoscale throughput and reads back the autoscale value after the migration.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-This script creates a MongoDB database with shared throughput and collection with dedicated throughput, then updates the throughput for both. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/mongodb/throughput.sh "Throughput operations for Azure Cosmos DB API for MongoDB.")]
+### Run the script
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+## Clean up resources
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Core (SQL) API account, database and container with autoscale using Azure CLI+ [!INCLUDE[appliesto-sql-api](../../../includes/appliesto-sql-api.md)]
+The script in this article demonstrates creating a SQL API database and container with autoscale.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.0.73 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/sql/autoscale.sh "Create an Azure Cosmos DB Core (SQL) API account, database, and container with autoscale.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/create.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Core (SQL) API account, database and container using Azure CLI+ [!INCLUDE[appliesto-sql-api](../../../includes/appliesto-sql-api.md)]
+The script in this article demonstrates creating a SQL API database and container.
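For orientation (placeholder names, not the values used in the linked script), the core commands create an account, a database, and a container with a partition key and dedicated throughput:

```azurecli
# The default account kind is GlobalDocumentDB, which is the Core (SQL) API
az cosmosdb create --name my-sql-account --resource-group my-resource-group
az cosmosdb sql database create --account-name my-sql-account \
    --resource-group my-resource-group --name mydatabase
az cosmosdb sql container create --account-name my-sql-account \
    --resource-group my-resource-group --database-name mydatabase \
    --name mycontainer --partition-key-path "/pk" --throughput 400
```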
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/sql/create.sh "Create an Azure Cosmos DB SQL (Core) API account, database, and container.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/lock.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create resource lock for a Azure Cosmos DB Core (SQL) API database and container using Azure CLI -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+The script in this article demonstrates performing resource lock operations for a SQL database and container.
> [!IMPORTANT]
+>
+> To create resource locks, you must have membership in the owner role in the subscription.
+>
> Resource locks do not work for changes made by users connecting using any Cosmos DB SDK, any tools that connect via account keys, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). ++
+- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/sql/lock.sh "Create a resource lock for an Azure Cosmos DB Core (SQL) API database and container.")]
+
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/serverless.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Create an Azure Cosmos Core (SQL) API serverless account, database and container using Azure CLI [!INCLUDE[appliesto-sql-api](../../../includes/appliesto-sql-api.md)]
+The script in this article demonstrates creating a SQL API serverless account, database, and container.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)]
+- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/sql/serverless.sh "Create an Azure Cosmos DB SQL (Core) API serverless account, database, and container.")]
-## Clean up deployment
+### Run the script
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/throughput.md
Title: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources Previously updated : 10/07/2020 Last updated : 02/21/2022 # Throughput (RU/s) operations with Azure CLI for a database or container for Azure Cosmos DB Core (SQL) API+ [!INCLUDE[appliesto-sql-api](../../../includes/appliesto-sql-api.md)]
+The script in this article creates a Core (SQL) API database with shared throughput and a Core (SQL) API container with dedicated throughput, then updates the throughput for both the database and container. The script then migrates from standard to autoscale throughput and reads back the autoscale value after the migration.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-This script creates a Core (SQL) API database with shared throughput and a Core (SQL) API container with dedicated throughput, then updates the throughput for both the database and container. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/sql/throughput.sh "Throughput operations for a SQL database and container.")]
+### Run the script
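The throughput operations described above map onto commands roughly like the following sketch. The resource names and RU/s values are placeholders, not values taken from the sample script.

```azurecli
# Placeholder names.
resourceGroup="myResourceGroup"
account="my-cosmos-account"
database="myDatabase"

# Update standard (manual) throughput on the shared-throughput database.
az cosmosdb sql database throughput update \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $database \
    --throughput 1000

# Migrate the database from standard to autoscale throughput.
az cosmosdb sql database throughput migrate \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $database \
    --throughput-type autoscale

# Read back the autoscale max throughput (the query path assumes the default output shape).
az cosmosdb sql database throughput show \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $database \
    --query resource.autoscaleSettings.maxThroughput \
    --output tsv
```

The container-level equivalents (`az cosmosdb sql container throughput update` and `az cosmosdb sql container throughput migrate`) follow the same pattern.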
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+## Clean up resources
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
Previously updated : 7/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Table API account and table with autoscale using Azure CLI+ [!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
+The script in this article demonstrates creating a Table API table with autoscale.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/table/autoscale.sh "Create an Azure Cosmos DB Table API account and table with autoscale.")]
-## Clean up deployment
+### Run the script
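A minimal sketch of creating a Table API table with autoscale throughput, using placeholder names; the sample script may include more steps.

```azurecli
# Placeholder names.
resourceGroup="myResourceGroup"
account="my-cosmos-account"
table="myTable"

az group create --name $resourceGroup --location eastus

# Table API accounts need the EnableTable capability.
az cosmosdb create \
    --name $account \
    --resource-group $resourceGroup \
    --capabilities EnableTable

# --max-throughput provisions autoscale throughput instead of a fixed RU/s value.
az cosmosdb table create \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $table \
    --max-throughput 4000
```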
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create an Azure Cosmos Table API account and table using Azure CLI+ [!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
+The script in this article demonstrates creating a Table API table.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/table/create.sh "Create an Azure Cosmos DB Table API account and table.")]
-## Clean up deployment
+### Run the script
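The equivalent fixed-throughput table can be sketched as follows; the names are placeholders and the 400 RU/s value is assumed for illustration only.

```azurecli
# Placeholder names; assumes the account and resource group already exist.
resourceGroup="myResourceGroup"
account="my-cosmos-account"
table="myTable"

# Create a table with standard (manual) throughput of 400 RU/s.
az cosmosdb table create \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $table \
    --throughput 400

# Confirm the table exists and inspect its properties.
az cosmosdb table exists --account-name $account --resource-group $resourceGroup --name $table
az cosmosdb table show --account-name $account --resource-group $resourceGroup --name $table
```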
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
Previously updated : 07/29/2020 Last updated : 02/21/2022 # Create resource lock for a Azure Cosmos DB Table API table using Azure CLI -- This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+The script in this article demonstrates performing resource lock operations for a Table API table.
> [!IMPORTANT]
+>
+> To create resource locks, you must have membership in the owner role in the subscription.
+>
> Resource locks do not work for changes made by users connecting using the Cosmos DB Table SDK, the Azure Storage Table SDK, any tools that connect via account keys, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/table/lock.sh "Create a resource lock for an Azure Cosmos DB Table API table.")]
+
+### Run the script
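To show the shape of the lock commands, here is a hedged sketch with placeholder names and an account-level scope; the sample script may instead lock the table resource itself.

```azurecli
# Placeholder names.
resourceGroup="myResourceGroup"
account="my-cosmos-account"

# Create a delete lock on the account. Remember to enable
# disableKeyBasedMetadataWriteAccess first (see the note above).
az lock create \
    --name table-account-cannot-delete \
    --resource-group $resourceGroup \
    --resource-name $account \
    --resource-type Microsoft.DocumentDB/databaseAccounts \
    --lock-type CanNotDelete

# List locks, then remove the lock when it's no longer needed.
az lock list --resource-group $resourceGroup --output table
az lock delete \
    --name table-account-cannot-delete \
    --resource-group $resourceGroup \
    --resource-name $account \
    --resource-type Microsoft.DocumentDB/databaseAccounts
```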
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Create an Azure Cosmos Table API serverless account and table using Azure CLI [!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
+The script in this article demonstrates creating a Table API serverless account and table.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)]
+- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ ## Sample script
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/table/serverless.sh "Create an Azure Cosmos DB Table API serverless account and table.")]
-## Clean up deployment
+### Run the script
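A hedged sketch of the serverless variant, with placeholder names; both capabilities shown are required for a serverless Table API account.

```azurecli
# Placeholder names.
resourceGroup="myResourceGroup"
account="my-serverless-account"
table="myTable"

az group create --name $resourceGroup --location eastus

# A serverless Table API account needs both capabilities.
az cosmosdb create \
    --name $account \
    --resource-group $resourceGroup \
    --capabilities EnableTable EnableServerless

# No throughput arguments are passed for a serverless account.
az cosmosdb table create \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $table
```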
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name $resourceGroupName
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
Title: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Table API resources
+ Title: Perform throughput (RU/s) operations for Azure Cosmos DB Table API resources
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Table API resources Previously updated : 10/07/2020 Last updated : 02/21/2022 # Throughput (RU/s) operations with Azure CLI for a table for Azure Cosmos DB Table API+ [!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
+The script in this article creates a Table API table and then updates the throughput of the table. The script then migrates from standard to autoscale throughput and reads the value of the autoscale throughput after the migration.
++ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-This script creates a Table API table then updates the throughput the table. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated.
-[!code-azurecli-interactive[main](../../../../../cli_scripts/cosmosdb/table/throughput.sh "Throughput operations for Table API.")]
+### Run the script
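The table-level throughput operations follow the same pattern as the Core (SQL) API ones; a sketch with placeholder names and RU/s values follows.

```azurecli
# Placeholder names.
resourceGroup="myResourceGroup"
account="my-cosmos-account"
table="myTable"

# Update the table's standard (manual) throughput.
az cosmosdb table throughput update \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $table \
    --throughput 1000

# Migrate from standard to autoscale throughput.
az cosmosdb table throughput migrate \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $table \
    --throughput-type autoscale

# Read back the autoscale maximum throughput (query path assumes the default output shape).
az cosmosdb table throughput show \
    --account-name $account \
    --resource-group $resourceGroup \
    --name $table \
    --query resource.autoscaleSettings.maxThroughput \
    --output tsv
```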
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+## Clean up resources
-```azurecli-interactive
-az group delete --name $resourceGroupName
+
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).-
-All Azure Cosmos DB CLI script samples can be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/cli-samples.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 keywords: cosmos db, azure cli samples, azure cli code samples, azure cli script samples # Azure CLI samples for Azure Cosmos DB Core (SQL) API-
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-For Azure CLI samples for other APIs see [CLI Samples for Cassandra](../cassandr)
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB SQL API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-## Common Samples
+These samples require Azure CLI version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
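For a locally installed CLI, the version check and in-place upgrade look like the following sketch (`az upgrade` is available in Azure CLI 2.11.0 and later).

```azurecli
# Check the installed Azure CLI version.
az --version

# Upgrade in place if the installed version is older than required.
az upgrade
```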
-These samples apply to all Azure Cosmos DB APIs
+## Core (SQL) API Samples
|Task | Description | |||
-| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Create an Azure Cosmos account, database, and container](../scripts/cli/sql/create.md)| Creates an Azure Cosmos DB account, database, and container for Core (SQL) API. |
+| [Create a serverless Azure Cosmos account, database, and container](../scripts/cli/sql/serverless.md)| Creates a serverless Azure Cosmos DB account, database, and container for Core (SQL) API. |
+| [Create an Azure Cosmos account, database, and container with autoscale](../scripts/cli/sql/autoscale.md)| Creates an Azure Cosmos DB account, database, and container with autoscale for Core (SQL) API. |
+| [Perform throughput operations](../scripts/cli/sql/throughput.md) | Read, update, and migrate between autoscale and standard throughput on a database and container.|
+| [Lock resources from deletion](../scripts/cli/sql/lock.md)| Prevent resources from being deleted with resource locks.|
|||
-## Core (SQL) API Samples
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. They use a SQL (Core) API account, but the operations shown are identical across all database APIs in Cosmos DB.
|Task | Description | |||
-| [Create an Azure Cosmos account, database, and container](../scripts/cli/sql/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and container for Core (SQL) API. |
-| [Create a serverless Azure Cosmos account, database, and container](../scripts/cli/sql/serverless.md?toc=%2fcli%2fazure%2ftoc.json)| Creates a serverless Azure Cosmos DB account, database, and container for Core (SQL) API. |
-| [Create an Azure Cosmos account, database, and container with autoscale](../scripts/cli/sql/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and container with autoscale for Core (SQL) API. |
-| [Throughput operations](../scripts/cli/sql/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update, and migrate between autoscale and standard throughput on a database and container.|
-| [Lock resources from deletion](../scripts/cli/sql/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
|||+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs see:
+
+- [CLI Samples for Cassandra](../cassandr)
+- [CLI Samples for Gremlin](../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for Table](../table/cli-samples.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powershell-samples.md
For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](../
|[Create a container with a large partition key](../scripts/powershell/sql/create-large-partition-key.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create a container with a large partition key. | |[Create a container with no index policy](../scripts/powershell/sql/create-index-none.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Create an Azure Cosmos container with index policy turned off.| |[List or get databases or containers](../scripts/powershell/sql/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or containers. |
-|[Throughput operations](../scripts/powershell/sql/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a database or container including get, update and migrate between autoscale and standard throughput. |
+|[Perform throughput operations](../scripts/powershell/sql/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a database or container including get, update and migrate between autoscale and standard throughput. |
|[Lock resources from deletion](../scripts/powershell/sql/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. | |||
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
Previously updated : 08/26/2021 Last updated : 02/23/2022
The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/M
| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L135) |[Scripts.ExecuteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.executestoredprocedureasync) | | [Delete a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L351) |[Scripts.DeleteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.deletestoredprocedureasync) |
+## Custom Serialization
+
+The [SystemTextJson](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/Program.cs) sample project shows how to use a custom serializer when initializing a new `CosmosClient` object. The sample also includes [a custom `CosmosSerializer` class](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/CosmosSystemTextJsonSerializer.cs) which leverages `System.Text.Json` for serialization and deserialization.
+ ## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Previously updated : 11/15/2021 Last updated : 02/21/2022 # Azure CLI samples for Azure Cosmos DB Table API
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+The following tables include links to sample Azure CLI scripts for the Azure Cosmos DB Table API and to sample Azure CLI scripts that apply to all Cosmos DB APIs. Common samples are the same across all APIs.
-## Common Samples
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
-These samples apply to all Azure Cosmos DB APIs
+## Table API Samples
|Task | Description | |||
-| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+| [Create an Azure Cosmos account and table](../scripts/cli/table/create.md)| Creates an Azure Cosmos DB account and table for Table API. |
+| [Create a serverless Azure Cosmos account and table](../scripts/cli/table/serverless.md)| Creates a serverless Azure Cosmos DB account and table for Table API. |
+| [Create an Azure Cosmos account and table with autoscale](../scripts/cli/table/autoscale.md)| Creates an Azure Cosmos DB account and table with autoscale for Table API. |
+| [Perform throughput operations](../scripts/cli/table/throughput.md) | Read, update and migrate between autoscale and standard throughput on a table.|
+| [Lock resources from deletion](../scripts/cli/table/lock.md)| Prevent resources from being deleted with resource locks.|
|||
-## Table API Samples
+## Common API Samples
+
+These samples apply to all Azure Cosmos DB APIs. They use a SQL (Core) API account, but the operations shown are identical across all database APIs in Cosmos DB.
|Task | Description | |||
-| [Create an Azure Cosmos account and table](../scripts/cli/table/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account and table for Table API. |
-| [Create a serverless Azure Cosmos account and table](../scripts/cli/table/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates a serverless Azure Cosmos DB account and table for Table API. |
-| [Create an Azure Cosmos account and table with autoscale](../scripts/cli/table/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account and table with autoscale for Table API. |
-| [Throughput operations](../scripts/cli/table/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a table.|
-| [Lock resources from deletion](../scripts/cli/table/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+| [Add or fail over regions](../scripts/cli/common/regions.md) | Add a region, change failover priority, trigger a manual failover.|
+| [Perform account key operations](../scripts/cli/common/keys.md) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
|||+
+## Next steps
+
+Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb).
+
+For Azure CLI samples for other APIs see:
+
+- [CLI Samples for Cassandra](../cassandr)
+- [CLI Samples for Gremlin](../graph/cli-samples.md)
+- [CLI Samples for MongoDB API](../mongodb/cli-samples.md)
+- [CLI Samples for SQL](../sql/cli-samples.md)
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
The following table includes links to commonly used Azure PowerShell scripts for
|[Create an account and table](../scripts/powershell/table/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table. | |[Create an account and table with autoscale](../scripts/powershell/table/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table autoscale. | |[List or get tables](../scripts/powershell/table/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get tables. |
-|[Throughput operations](../scripts/powershell/table/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a table including get, update and migrate between autoscale and standard throughput. |
+|[Perform throughput operations](../scripts/powershell/table/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Perform throughput operations for a table including get, update and migrate between autoscale and standard throughput. |
|[Lock resources from deletion](../scripts/powershell/table/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. | |||
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-transfers.md
Previously updated : 01/24/2022 Last updated : 02/24/2022
When you request to transfer an entire enterprise enrollment to an enrollment, t
### Effective transfer date
-The effective transfer day can be on or after the start date of the target enrollment.
+The effective transfer date can be on or after the start date of the target enrollment. Transfers can be backdated only to the first day of the month in which the request is made.
The source enrollment usage is charged against Azure Prepayment or as overage. Usage that occurs after the effective transfer date is transferred to the new enrollment and charged.
When you request an enrollment transfer, provide the following information:
- For the source enrollment, the enrollment number. - For the target enrollment, the enrollment number to transfer to.-- For the enrollment transfer effective date, it can be a date on or after the start date of the target enrollment. The chosen date can't affect usage for any overage invoice already issued.
+- For the enrollment transfer effective date, a date on or after the start date of the target enrollment, but no earlier than the first day of the month in which the request is made. The chosen date can't affect usage for any overage invoice already issued.
Other points to keep in mind before an enrollment transfer:
cost-management-billing Grant Access To Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
Previously updated : 06/09/2021 Last updated : 02/24/2022
To [create subscriptions under an enrollment account](programmatically-create-su
{ "properties": {
- "roleDefinitionId": "/providers/Microsoft.Billing/enrollmentAccounts/providers/Microsoft.Authorization/roleDefinitions/<ownerRoleDefinitionId>",
+ "roleDefinitionId": "/providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountObjectId>/providers/Microsoft.Authorization/roleDefinitions/<ownerRoleDefinitionId>",
"principalId": "<userObjectId>" } }
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
On 1 October 2021, automatic payments in India may block some credit card transa
## Pay by default payment method
-The default payment method of your billing profile can either be a credit or debit card, or a check or wire transfer.
+The default payment method of your billing profile can be a credit card, a debit card, or a check/wire transfer.
### Credit or debit card
data-catalog Data Catalog Dsr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-dsr.md
Title: Supported data sources in Azure Data Catalog description: This article lists specifications of the currently supported data sources for Azure Data Catalog.--++ Previously updated : 08/01/2019 Last updated : 02/24/2022 # Supported data sources in Azure Data Catalog
You can publish metadata by using a public API or a click-once registration tool
<td>✓</td> <td>✓</td> <td>Browser</td>
- <td>Native mode servers only. SharePoint mode is not supported. SQL Server 2008 and later versions only</td>
+ <td>Native mode servers only. SharePoint mode isn't supported. SQL Server 2008 and later versions only</td>
</tr> <tr> <td>SQL Server table</td>
You can publish metadata by using a public API or a click-once registration tool
<td>✓</td> <td>✓</td> <td></td>
- <td>Only legacy collections from Azure DocumentDB and SQL API collections in Azure Cosmos DB are compatible. Newer Cosmos DB APIs are not yet supported. Choose Azure DocumentDB in the Data Source list.</td>
+ <td>Only legacy collections from Azure DocumentDB and SQL API collections in Azure Cosmos DB are compatible. Newer Cosmos DB APIs aren't yet supported. Choose Azure DocumentDB in the Data Source list.</td>
</tr> <tr> <td>Generic ODBC table</td>
data-catalog Data Catalog How To Discover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-discover.md
Title: How to discover data sources in Azure Data Catalog description: This article highlights how to discover registered data assets with Azure Data Catalog, including searching and filtering and using the hit highlighting capabilities of the Azure Data Catalog portal.--++ Previously updated : 08/01/2019 Last updated : 02/24/2022 # How to discover data sources in Azure Data Catalog
Although the default free text search is simple and intuitive, you can also use
| Basic search |Basic search that uses one or more search terms. Results are any assets that match any property with one or more of the terms specified. |`sales data` | | Property scoping |Return only data sources where the search term is matched with the specified property. |`name:finance` | | Boolean operators |Broaden or narrow a search by using Boolean operations. |`finance NOT corporate` |
-| Grouping with parenthesis |Use parentheses to group parts of the query to achieve logical isolation, especially in conjunction with Boolean operators. |`name:finance AND (tags:Q1 OR tags:Q2)` |
+| Grouping with parenthesis |Use parentheses to group parts of the query to achieve logical isolation, especially with Boolean operators. |`name:finance AND (tags:Q1 OR tags:Q2)` |
| Comparison operators |Use comparisons other than equality for properties that have numeric and date data types. |`modifiedTime > "11/05/2014"` | For more information about Data Catalog search, see the [Azure Data Catalog](/rest/api/datacatalog/#search-syntax-reference) article.
When you view search results, it may not always be obvious why a data asset is i
In the default tile view, each tile displayed in the search results includes a **View search term matches** icon, so that you can quickly view the number of matches and their location, and to jump to them if you want.
- ![Hit highlighting and search matches in the Azure Data Catalog portal](./media/data-catalog-how-to-discover/search-matches.png)
+ :::image type="content" source="./media/data-catalog-how-to-business-glossary/01-portal-menu.png" alt-text="The View search term matches icon is selected in the tile, showing a drop menu of all matched locations.":::
## Summary
data-catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/overview.md
Title: Introduction to Azure Data Catalog description: This article provides an overview of Microsoft Azure Data Catalog, including its features and the problems it addresses. Data Catalog enables any user to register, discover, understand, and consume data sources.--++ Previously updated : 08/01/2019 Last updated : 02/24/2022 # What is Azure Data Catalog? [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)]
-Azure Data Catalog is a fully managed cloud service. It lets users discover the data sources they need and understand the data sources they find. At the same time, Data Catalog helps organizations get more value from their existing investments.
+Azure Data Catalog is a fully managed cloud service that lets users discover the data sources they need and understand the data sources they find. At the same time, Data Catalog helps organizations get more value from their existing investments.
-With Data Catalog, any user (analyst, data scientist, or developer) can discover, understand, and consume data sources. Data Catalog includes a crowdsourcing model of metadata and annotations. It is a single, central place for all of an organization's users to contribute their knowledge and build a community and culture of data.
+With Data Catalog, any user (analyst, data scientist, or developer) can discover, understand, and consume data sources in their data landscape. Data Catalog includes a crowdsourcing model of metadata and annotations, so everyone can contribute to making data discoverable and usable. It's a single, central place for all of an organization's users to contribute their knowledge and build a community and culture of data.
## Discovery challenges for data consumers
-Traditionally, discovering enterprise data sources has been an organic process based on tribal knowledge. For companies that want to get the most value from their information assets, this approach presents numerous challenges:
+Traditionally, discovering enterprise data sources has been an organic process based on tribal knowledge. For companies that want to get the most value from their information assets, this approach presents many challenges:
-* Users might not know that a data source exists unless they come into contact with it as part of another process. There is no central location where data sources are registered.
-* Unless users know the location of a data source, they cannot connect to the data by using a client application. Data-consumption experiences require users to know the connection string or path.
-* Unless users know the location of a data source's documentation, they cannot understand the intended uses of the data. Data sources and documentation might live in a variety of places and be consumed through a variety of experiences.
-* If users have questions about an information asset, they must locate the expert or team that's responsible for the data and engage them offline. There is no explicit connection between data and the experts that have perspectives on its use.
-* Unless users understand the process for requesting access to the data source, discovering the data source and its documentation still does not help them access the data.
+* Users might not know that a data source exists unless they come into contact with it as part of another process. There's no central location where data sources are registered.
+* Unless users know the location of a data source, they can't connect to the data by using a client application. Data-consumption experiences require users to know the connection string or path.
+* Unless users know the location of a data source's documentation, they can't understand the intended uses of the data. Data sources and documentation might live in various places and be consumed through various experiences.
+* If users have questions about an information asset, they must locate the expert or team that's responsible for the data and engage them offline. There's no explicit connection between data and the experts that have perspectives on its use.
+* Unless users understand the process for requesting access to the data source, discovering the data source and its documentation still doesn't help them access the data.
## Discovery challenges for data producers
data-catalog Register Data Assets Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/register-data-assets-tutorial.md
Title: 'Tutorial: Register data assets in Azure Data Catalog' description: This tutorial describes how to register data assets in your Azure Data Catalog. --++ Previously updated : 08/01/2019 Last updated : 02/24/2022 # Customer intent: As an Azure Active Directory owner, I want to store my data in Azure Data Catalog so that I can search my data all from one centralized place. # Tutorial: Register data assets in Azure Data Catalog
You can now register data assets from the database sample by using Azure Data Ca
1. Go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and select **Publish Data**.
- ![Azure Data Catalog--Publish Data button](media/register-data-assets-tutorial/data-catalog-publish-data.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-publish-data.png" alt-text="The data catalog is open with the Publish Data button selected.":::
-2. Select **Launch Application** to download, install, and run the registration tool on your computer.
+1. Select **Launch Application** to download, install, and run the registration tool on your computer.
- ![Azure Data Catalog--Launch button](media/register-data-assets-tutorial/data-catalog-launch-application.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-launch-application.png" alt-text="On the Publish Data page, the Launch Application button is selected.":::
-3. On the **Welcome** page, select **Sign in** and enter your credentials.
+1. On the **Welcome** page, select **Sign in** and enter your credentials.
- ![Azure Data Catalog--Welcome page](media/register-data-assets-tutorial/data-catalog-welcome-dialog.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-welcome-dialog.png" alt-text="On the Welcome page, the Sign In button is selected.":::
-4. On the **Microsoft Azure Data Catalog** page, select **SQL Server** and **Next**.
+1. On the **Microsoft Azure Data Catalog** page, select **SQL Server** and **Next**.
- ![Azure Data Catalog--data sources](media/register-data-assets-tutorial/data-catalog-data-sources.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-data-sources.png" alt-text="On the Microsoft Azure Data Catalog page, the SQL Server button is selected. Then the next button is selected.":::
-5. Enter the SQL Server connection properties for your database sample in Azure SQL Database and select **CONNECT**.
+1. Enter the SQL Server connection properties for your database sample in Azure SQL Database and select **CONNECT**.
- ![Azure Data Catalog--SQL Server connection settings](media/register-data-assets-tutorial/data-catalog-sql-server-connection.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-sql-server-connection.png" alt-text="On the S Q L Server connection properties page, the text boxes are highlighted for these attributes: Server Name, User Name, Password, and Database. Then the Connect button is selected.":::
-6. Register the metadata of your data asset. In this example, you register **Product** objects from the sample namespace:
+1. Register the metadata of your data asset. In this example, you register **Product** objects from the sample namespace:
1. In the **Server Hierarchy** tree, expand your database sample and select **SalesLT**.
- 2. Select **Product**, **ProductCategory**, **ProductDescription**, and **ProductModel** by using Ctrl+select.
+ 1. Select **Product**, **ProductCategory**, **ProductDescription**, and **ProductModel** by using Ctrl+select.
- 3. Select the **move-selected arrow** (**>**). This action moves all selected objects into the **Objects to be registered** list.
+ 1. Select the **move-selected arrow** (**>**). This action moves all selected objects into the **Objects to be registered** list.
- ![Azure Data Catalog tutorial--browse and select objects](media/register-data-assets-tutorial/data-catalog-server-hierarchy.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-server-hierarchy.png" alt-text="In the Server Hierarchy, Sales L T is selected. Then in the Available Objects list, the product, product category, product description, product model, and product model product description objects are all highlighted. Then the move-selected > is selected.":::
- 4. Select **Include a Preview** to include a snapshot preview of the data. The snapshot includes up to 20 records from each table, and it's copied into the catalog.
+ 1. Select **Include a Preview** to include a snapshot preview of the data. The snapshot includes up to 20 records from each table, and it's copied into the catalog.
- 5. Select **Include Data Profile** to include a snapshot of the object statistics for the data profile (for example: minimum, maximum, and average values for a column, number of rows).
+ 1. Select **Include Data Profile** to include a snapshot of the object statistics for the data profile (for example: minimum, maximum, and average values for a column, number of rows).
- 6. In the **Add tags** field, enter **sales, product, azure sql**. This action adds search tags for these data assets. Tags are a great way to help users find a registered data source.
+ 1. In the **Add tags** field, enter **sales, product, azure sql**. This action adds search tags for these data assets. Tags are a great way to help users find a registered data source.
- 7. Specify the name of an **expert** on this data (optional).
+ 1. Specify the name of an **expert** on this data (optional).
- ![Azure Data Catalog tutorial--objects to be registered](media/register-data-assets-tutorial/data-catalog-objects-register.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-objects-register.png" alt-text="In the objects to be registered list, these names are shown: product, product category, product description, product model, and product model product description. Then the 'Include preview' and 'Include data profile' options are selected. Then three tags are added to the tag field: sales, product, and azure S Q L.":::
- 8. Select **REGISTER**. Azure Data Catalog registers your selected objects. In this exercise, the selected objects from your database sample are registered. The registration tool extracts metadata from the data asset and copies that data into the Azure Data Catalog service. The data remains where it currently stays. Data remains under the control of the administrators and policies of the origin system.
+ 1. Select **REGISTER**. Azure Data Catalog registers your selected objects. In this exercise, the selected objects from your database sample are registered. The registration tool extracts metadata from the data asset and copies that data into the Azure Data Catalog service. The data remains where it currently stays. Data remains under the control of the administrators and policies of the origin system.
- ![Azure Data Catalog--registered objects](media/register-data-assets-tutorial/data-catalog-registered-objects.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-registered-objects.png" alt-text="In the Microsoft Azure Data Catalog window, all the newly registered objects are shown in the Objects to be registered list. At the top of the window there's a notification stating that the process to register the selected objects is finished. Then the View Portal button is selected.":::
- 9. To see your registered data source objects, select **View Portal**. In the Azure Data Catalog portal, confirm that you see all four tables and the database in the grid view (verify that the search bar is clear).
+ 1. To see your registered data source objects, select **View Portal**. In the Azure Data Catalog portal, confirm that you see all four tables and the database in the grid view (verify that the search bar is clear).
- ![Objects in the Azure Data Catalog portal](media/register-data-assets-tutorial/data-catalog-view-portal.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-view-portal.png" alt-text="In the Microsoft Azure Data Catalog window, there are new tiles in the grid view for each of the registered objects.":::
In this exercise, you registered objects from the database sample for Azure SQL Database so that they can be easily discovered by users across your organization.
Basic search helps you search a catalog by using one or more search terms. Resul
1. Select **Home** in the Azure Data Catalog portal. If you've closed the web browser, go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com).
-2. In the search box, enter `product` and press **ENTER**.
+1. In the search box, enter `product` and press **ENTER**.
- ![Azure Data Catalog--basic text search](media/register-data-assets-tutorial/data-catalog-basic-text-search.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-basic-text-search.png" alt-text="In the Azure Data Catalog Portal, the home button is selected. Then, in the search box 'product' has been entered.":::
-3. Confirm that you see all four tables and the database in the results. You can switch between **grid view** and **list view** by selecting buttons on the toolbar, as shown in the following image. Notice that the search keyword is highlighted in the search results because the **Highlight** option is **ON**. You can also specify the number of **results per page** in search results.
+1. Confirm that you see all four tables and the database in the results. You can switch between **grid view** and **list view** by selecting buttons on the toolbar, as shown in the following image. Notice that the search keyword is highlighted in the search results because the **Highlight** option is **ON**. You can also specify the number of **results per page** in search results.
- ![Azure Data Catalog--basic text search results](media/register-data-assets-tutorial/data-catalog-basic-text-search-results.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-basic-text-search-results.png" alt-text="In the search bar, 'product' is still entered, and the list and grid view options are highlighted next to the search bar. The 'Results per page' option is set to 10, and the 'Highlight' option is set to 'on', so 10 results are shown on the page, with any references to 'product' highlighted.":::
The **Searches** panel is on the left and the **Properties** panel is on the right. On the **Searches** panel, you can change search criteria and filter results. The **Properties** panel displays properties of a selected object in the grid or list.
-4. Select **Product** in the search results. select the **Preview**, **Columns**, **Data Profile**, and **Documentation** tabs, or select the arrow to expand the bottom pane.
+1. Select **Product** in the search results. Select the **Preview**, **Columns**, **Data Profile**, and **Documentation** tabs, or select the arrow to expand the bottom pane.
- ![Azure Data Catalog--bottom pane](media/register-data-assets-tutorial/data-catalog-data-asset-preview.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-data-asset-preview.png" alt-text="At the top of the search results, the 'Preview' button is selected.":::
On the **Preview** tab, you see a preview of the data in the **Product** table.
-5. Select the **Columns** tab to find details about columns (such as **name** and **data type**) in the data asset.
+1. Select the **Columns** tab to find details about columns (such as **name** and **data type**) in the data asset.
-6. Select the **Data Profile** tab to see the profiling of data (for example: number of rows, size of data, or minimum value in a column) in the data asset.
+1. Select the **Data Profile** tab to see the profiling of data (for example: number of rows, size of data, or minimum value in a column) in the data asset.
### Discover data assets with property scoping
Property scoping helps you discover data assets where the search term is matched
1. Clear the **Table** filter under **Object Type** in **Filters**.
-2. In the search box, enter `tags:product` and press **ENTER**. See [Data Catalog Search syntax reference](/rest/api/datacatalog/#search-syntax-reference) for all the properties you can use for searching the data catalog.
+1. In the search box, enter `tags:product` and press **ENTER**. See [Data Catalog Search syntax reference](/rest/api/datacatalog/#search-syntax-reference) for all the properties you can use for searching the data catalog.
-3. Confirm that you see the tables and the database in the results.
+1. Confirm that you see the tables and the database in the results.
- ![Data Catalog--property scoping search results](media/register-data-assets-tutorial/data-catalog-property-scoping-results.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-property-scoping-results.png" alt-text="'Tags : product' is entered in the search bar, and the Object Type filter shows 'Table' has been selected.":::
### Save the search 1. In the **Searches** pane in the **Current Search** section, enter a name for the search and select **Save**.
- ![Azure Data Catalog--save search](media/register-data-assets-tutorial/data-catalog-save-search.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-save-search.png" alt-text="In the searches pane, 'Product tag search' has been entered as a name for the search. Then the 'Save' button is selected.":::
2. Confirm that the saved search shows up under **Saved Searches**.
By grouping with parentheses, you can group parts of the query to achieve logica
1. In the search box, enter `name:product AND (tags:product AND objectType:table)` and press **ENTER**.
-2. Confirm that you see only the **Product** table in the search results.
+1. Confirm that you see only the **Product** table in the search results.
- ![Azure Data Catalog--grouping search](media/register-data-assets-tutorial/data-catalog-grouping-search.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-grouping-search.png" alt-text="In the search bar `name : product AND ( tags : product AND object Type : table )` has been entered. The product table is the only search result returned.":::
### Comparison operators
With comparison operators, you can use comparisons other than equality for prope
1. In the search box, enter `lastRegisteredTime:>"06/09/2016"`.
-2. Clear the **Table** filter under **Object Type**.
+1. Clear the **Table** filter under **Object Type**.
-3. Press **ENTER**.
+1. Press **ENTER**.
-4. Confirm that you see the **Product**, **ProductCategory**, and **ProductDescription** tables and the SQL database you registered in search results.
+1. Confirm that you see the **Product**, **ProductCategory**, and **ProductDescription** tables and the SQL database you registered in search results.
- ![Azure Data Catalog--comparison search results](media/register-data-assets-tutorial/data-catalog-comparison-operator-results.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-comparison-operator-results.png" alt-text="In the search bar last Registered Time : > 06/09/2016 has been entered. The tables Product, Product Category, Product Description have been returned. The S Q L database has also been returned.":::
See [How to discover data assets](data-catalog-how-to-discover.md) for detailed information about discovering data assets. For more information on search syntax, see [Data Catalog Search syntax reference](/rest/api/datacatalog/#search-syntax-reference).
In this exercise, you annotate a single data asset (ProductPhoto). You add a fri
1. Go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com) and search with `tags:product` to find the data assets you've registered.
-2. Select **ProductModel** in search results.
+1. Select **ProductModel** in search results.
-3. Enter **Product images** for **Friendly Name** and **Product photos for marketing materials** for the **Description**.
+1. Enter **Product images** for **Friendly Name** and **Product photos for marketing materials** for the **Description**.
- ![Azure Data Catalog--ProductPhoto description](media/register-data-assets-tutorial/data-catalog-productmodel-description.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-productmodel-description.png" alt-text="In the Properties pane, the name, friendly name, and description of the selected resource are shown. The information is editable.":::
- The **Description** helps others discover and understand why and how to use the selected data asset. You can also add more tags and view columns. You can search and filter data sources by using the descriptive metadata youΓÇÖve added to the catalog.
+ The **Description** helps others discover and understand why and how to use the selected data asset. You can also add more tags and view columns. You can search and filter data sources by using the descriptive metadata you've added to the catalog.
You can also do the following steps on this page:
You can also do the following steps on this page:
You can also add an annotation to multiple data assets. For example, you can select all the data assets you registered and specify an expert for them.
-![Azure Data Catalog--annotate multiple data assets](media/register-data-assets-tutorial/data-catalog-multi-select-annotate.png)
Azure Data Catalog supports a crowd-sourcing approach to annotations. Any Data Catalog user can add tags (user or glossary), descriptions, and other metadata. By doing so, users add perspective on a data asset and its use, and share that perspective with other users.
In this exercise, you open data assets in an integrated client tool (Excel) and
1. Select **Product** from search results. Select **Open In** on the toolbar and select **Excel**.
- ![Azure Data Catalog--connect to data asset](media/register-data-assets-tutorial/data-catalog-connect1.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-connect1.png" alt-text="Product is selected from the table of returned results. The Open In button is selected, and Excel is selected from the dropdown menu.":::
-2. Select **Open** in the download pop-up window. This experience may vary depending on the browser.
+1. Select **Open** in the download pop-up window. This experience may vary depending on the browser.
-3. In the **Microsoft Excel Security Notice** window, select **Enable**.
+1. In the **Microsoft Excel Security Notice** window, select **Enable**.
- ![Azure Data Catalog--Excel security popup](media/register-data-assets-tutorial/data-catalog-excel-security-popup.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-excel-security-popup.png" alt-text="In the Microsoft Excel Security Notice pop-up, the Enable button is selected.":::
-4. Keep the defaults in the **Import Data** dialog box and select **OK**.
+1. Keep the defaults in the **Import Data** dialog box and select **OK**.
- ![Azure Data Catalog--Excel import data](media/register-data-assets-tutorial/data-catalog-excel-import-data.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-excel-import-data.png" alt-text="In the Import Data dialog box, O K is selected.":::
-5. View the data source in Excel.
+1. View the data source in Excel.
- ![Azure Data Catalog--product table in Excel](media/register-data-assets-tutorial/data-catalog-connect2.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-connect2.png" alt-text="All the data is shown in the Excel table.":::
### SQL Server Management Studio
In this exercise, you connected to data assets discovered by using Azure Data Ca
1. Open **SQL Server Management Studio**.
-2. In the **Connect to Server** dialog box, enter the server name from the **Properties** pane in the Azure Data Catalog portal.
+1. In the **Connect to Server** dialog box, enter the server name from the **Properties** pane in the Azure Data Catalog portal.
-3. Use appropriate authentication and credentials to access the data asset. If you don't have access, use information in the **Request Access** field to get it.
+1. Use appropriate authentication and credentials to access the data asset. If you don't have access, use information in the **Request Access** field to get it.
- ![Azure Data Catalog--request access](media/register-data-assets-tutorial/data-catalog-request-access.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-request-access.png" alt-text="In the Connection Info dialogue box, the Request Access field is highlighted.":::
Select **View Connection Strings** to view and copy ADO.NET, ODBC, and OLEDB connection strings to the clipboard for use in your application.
You can use Data Catalog to discover data sources and to view the metadata relat
1. Go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com). In the **Search** text box, enter `tags:cycles` and press **ENTER**.
-2. Select an item in the result list and select **Take Ownership** on the toolbar.
+1. Select an item in the result list and select **Take Ownership** on the toolbar.
-3. In the **Management** section of the **Properties** panel, select **Take Ownership**.
+1. In the **Management** section of the **Properties** panel, select **Take Ownership**.
- ![Azure Data Catalog--take ownership](media/register-data-assets-tutorial/data-catalog-take-ownership.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-take-ownership.png" alt-text="The Product item is selected in the result list, and in the Properties tab, in the Management section, the Take Ownership button is highlighted.":::
-4. To restrict visibility, choose **Owners & These Users** in the **Visibility** section and select **Add**. Enter user email addresses in the text box and press **ENTER**.
+1. To restrict visibility, choose **Owners & These Users** in the **Visibility** section and select **Add**. Enter user email addresses in the text box and press **ENTER**.
- ![Azure Data Catalog--restrict access](media/register-data-assets-tutorial/data-catalog-ownership.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-ownership.png" alt-text="In the Properties tab, in the Management section, the add button under Owners is selected. Then, under Visibility, the Owners & These Users button is selected. Then the Add button under Visibility is selected.":::
## Remove data assets
In Azure Data Catalog, you can delete an individual asset or delete multiple ass
1. Go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com).
-2. In the **Search** text box, enter `tags:cycles` and select **ENTER**.
+1. In the **Search** text box, enter `tags:cycles` and select **ENTER**.
-3. Select an item in the result list and select **Delete** on the toolbar as shown in the following image:
+1. Select an item in the result list and select **Delete** on the toolbar as shown in the following image:
- ![Azure Data Catalog--delete grid item](media/register-data-assets-tutorial/data-catalog-delete-grid-item.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-delete-grid-item.png" alt-text="The Product tile is selected from a search result list, and the Delete button is selected in the upper toolbar.":::
If you're using the list view, the check box is to the left of the item as shown in the following image:
- ![Azure Data Catalog--delete list item](media/register-data-assets-tutorial/data-catalog-delete-list-item.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-delete-list-item.png" alt-text="In list view, the selection box is to the left of the search result item. The Product asset is selected and the delete button is selected in the upper toolbar.":::
You can also select multiple data assets and delete them as shown in the following image: ![Azure Data Catalog--delete multiple data assets](media/register-data-assets-tutorial/data-catalog-delete-assets.png)
+ :::image type="content" source="media/register-data-assets-tutorial/data-catalog-delete-assets.png" alt-text="In list view, multiple assets have been selected, and the delete button is selected in the upper toolbar.":::
> [!NOTE] > The default behavior of the catalog is to allow any user to register any data source, and to allow any user to delete any data asset that has been registered. The management capabilities included in the Standard Edition of Azure Data Catalog provide additional options for taking ownership of assets, restricting who can discover assets, and restricting who can delete assets.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 02/08/2022 Last updated : 02/23/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
Settings specific to Azure SQL Database are available in the **Source Options**
:::image type="content" source="media/data-flow/isolationlevel.png" alt-text="Isolation Level":::
+**Enable incremental extract (preview)**: If your table has a timestamp column, you can enable incremental extract. ADF will prompt you to choose a timestamp field that will be used to query for changed rows from the last time the pipeline ran. ADF will handle storing the watermark and querying changed rows for you. This feature is currently in public preview.
+ ### Sink transformation Settings specific to Azure SQL Database are available in the **Settings** tab of the sink transformation.
data-factory Cross Tenant Connections To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/cross-tenant-connections-to-azure-devops.md
+
+ Title: Cross-tenant connections to Azure DevOps
+description: Learn how to configure connections to Azure DevOps in another tenant in Azure Data Factory
++++++ Last updated : 02/24/2022++
+# Cross-tenant connections to Azure DevOps
+
+This document provides a step-by-step guide for configuring an Azure DevOps account that is in a different tenant than your Azure Data Factory. This is useful when your Azure DevOps account isn't in the same tenant as your Azure Data Factory.
++
+## Prerequisites
+
+- You need an Azure DevOps account in a tenant different from the one that contains your Azure Data Factory.
+- You need a project in that Azure DevOps tenant.
+
+## Step-by-step guide
+
+1. Navigate in Azure Data Factory studio to _Manage hub_ &#8594; _Git configuration_ &#8594; _Configure_.
+
+ :::image type="content" source="media/cross-tenant-connections-to-azure-devops/configure-git.png" alt-text="Shows the Azure Data Factory Studio with the Git configuration blade selected.":::
+
+1. Select the _Cross tenant sign in_ option.
+
+ :::image type="content" source="media/cross-tenant-connections-to-azure-devops/cross-tenant-sign-in.png" alt-text="Shows the repository configuration dialog with cross tenant sign in checked.":::
+
+1. Select **OK** in the _Cross tenant sign in_ dialog.
+
+ :::image type="content" source="media/cross-tenant-connections-to-azure-devops/cross-tenant-sign-in-confirm.png" alt-text="Shows the confirmation dialog for cross tenant sign in.":::
+
+1. Choose a different account to sign in to Azure DevOps in the remote tenant.
+
+ :::image type="content" source="media/cross-tenant-connections-to-azure-devops/use-another-account.png" alt-text="Shows the account selection dialog for choosing an account to connect to the remote Azure DevOps tenant.":::
+
+1. After signing in, choose the directory.
+
+ :::image type="content" source="media/cross-tenant-connections-to-azure-devops/choose-directory.png" alt-text="Shows the repository configuration dialog with the directory selection dropdown highlighted.":::
+
+1. Choose the repository and configure it accordingly.
+
+ :::image type="content" source="media/cross-tenant-connections-to-azure-devops/configure-repository.png" alt-text="Shows the repository configuration dialog.":::
+
+## Appendix
+
+When you open the Azure Data Factory in another tab or a new browser, use the first sign-in to log in with your Azure Data Factory user account.
+
+You should see a dialog with the message _You do not have access to the VSTS repo associated with this factory._ Click **OK** to sign in with the cross-tenant account to gain access to Git through the Azure Data Factory.
+
data-factory How To Configure Shir For Log Analytics Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-shir-for-log-analytics-collection.md
+
+ Title: Configure the self-hosted integration runtime for log analytics collection
+
+description: This article describes how to instrument the self-hosted integration runtime for log analytics collection.
+++++ Last updated : 02/22/2022+++
+# Configure the self-hosted integration runtime (SHIR) for log analytics collection
+
+## Prerequisites
+
+An available Log Analytics workspace is required for this approach. We recommend that you note down the workspace ID and authentication key of your Log Analytics workspace, because you might need them for certain scenarios. This solution will increase the data sent to the Log Analytics workspace and will have a small impact on overall cost. Read on for details on how to keep the amount of data to a minimum.
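+
+If you want to keep an eye on how much this solution adds to your ingestion, the following query sketch summarizes billable ingestion by table over the last 30 days. It assumes only the standard **Usage** table that exists in every Log Analytics workspace; adjust the time range to suit your review cadence.
+
+```kusto
+// Billable data volume (MB) per table over the last 30 days
+Usage
+| where TimeGenerated > ago(30d)
+| where IsBillable == true
+| summarize IngestedMB = sum(Quantity) by DataType
+| sort by IngestedMB desc
+```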
+
+## Objectives and scenarios
+
+To centralize the event and performance counter data in your Log Analytics workspace, the virtual machine hosting the SHIR must first be appropriately instrumented. Choose between the two main scenarios below.
+
+### Instrumenting on-premises virtual machines
+
+The article [Install Log Analytics agent on Windows computers](../azure-monitor/agents/agent-windows.md) describes how to install the client on a virtual machine typically hosted on-premises. This can be either a physical server or a virtual machine hosted on a customer-managed hypervisor. As mentioned in the prerequisites section, when installing the Log Analytics agent, you'll have to provide the Log Analytics workspace ID and workspace key to finalize the connection.
+
+### Instrument Azure virtual machines
+
+The recommended approach to instrument an Azure virtual machine-based SHIR is to use virtual machine insights as described in the article [Enable VM insights overview](../azure-monitor/vm/vminsights-enable-overview.md). There are multiple ways to configure the Log Analytics agent when the SHIR is hosted in an Azure virtual machine. All the options are described in the article [Log Analytics agent overview](../azure-monitor/agents/log-analytics-agent.md#installation-options).
+
+## Configure event log and performance counter capture
+
+This step will highlight how to configure both Event viewer logs and performance counters to be sent to Log Analytics. The steps described below are common regardless of how the agent was deployed.
+
+### Select event viewer journals
+
+First you must collect event viewer journals relevant to the SHIR as described in the article [Collect Windows event log data sources with Log Analytics agent in Azure Monitor](../azure-monitor/agents/data-sources-windows-events.md).
+
+When choosing the event logs in the interface, it's normal not to see every journal that can exist on a machine, so the two journals we need for SHIR monitoring won't show up in this list. If you type the journal name exactly as it appears on the local virtual machine, it will be captured and sent to your Log Analytics workspace.
+
+The event journal names we must configure are:
+
+- Connectors – Integration Runtime
+- Integration Runtime
++
+> [!IMPORTANT]
+> Leaving the **Information** level checked will increase the volume of data significantly if you have many SHIR hosts deployed and a larger number of scans. We strongly suggest you keep only Error and Warning.
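+
+Once events start arriving, you can confirm that both journals are flowing and review the mix of severities with a quick query such as the sketch below. It assumes the journal names were entered exactly as they appear on your SHIR hosts (including the dash character in **Connectors – Integration Runtime**).
+
+```kusto
+// Confirm both SHIR journals are flowing and review the severity mix
+Event
+| where TimeGenerated >= ago(1d)
+| where EventLog in ("Connectors – Integration Runtime", "Integration Runtime")
+| summarize count() by EventLog, EventLevelName
+```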
+
+### Select Performance counters
+
+In the same configuration pane, you can select **Windows Performance Counters** to choose the individual performance counters to send to Log Analytics.
+
+> [!IMPORTANT]
+> Keep in mind that performance counters are, by their nature, a continuous data stream. Therefore, it's crucial that you consider the impact of data collection on the total cost of your Azure Monitor/Log Analytics deployment. Unless an allowed data ingestion budget has been granted and a constant ingestion of data has been allowed and budgeted for, gathering performance counters should only be configured for a defined period to establish a performance baseline.
+
+When you first configure the collection in the interface, a suggested counter set is recommended. Select the counters that apply to the type of performance analysis you want to perform. **%CPU** and **Available memory** are commonly monitored counters, but others like **Network bandwidth consumption** can be useful in scenarios where the data volume is large and bandwidth or execution time are constrained.
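+
+To see which counters generate the most samples, and therefore the most ingestion, you can summarize the Perf table by counter. This is a sketch assuming the standard Perf schema; use it to trim the counter set if the volume is higher than expected.
+
+```kusto
+// Sample volume per performance counter over the last day
+Perf
+| where TimeGenerated >= ago(1d)
+| summarize Samples = count() by ObjectName, CounterName
+| sort by Samples desc
+```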
++
+## View Events and Performance counter data in Log Analytics
+
+Consult this tutorial on [How to query data in Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md).
+The two tables where the telemetry is saved are called Perf and Event respectively. The following query will check the row count to see if we have data flowing in. This would confirm if the instrumentation described above is working.
+
+### Sample KQL queries
+
+#### Check row counts
+
+```kusto
+(
+ Event
+ | extend TableName = "Event"
+ | summarize count() by TableName
+)
+| union
+(
+ Perf
+ | extend TableName = "Perf"
+ | summarize count() by TableName
+)
+```
+
+#### Query events
+
+##### Retrieve the first 10 event rows
+
+```kusto
+Event
+| take 10
+```
+
+##### Retrieve the event count by message severity
+
+```kusto
+Event
+| summarize count() by EventLevelName
+```
+
+##### Render a pie chart of count by message severity
+
+```kusto
+Event
+| summarize count() by EventLevelName
+| render piechart
+```
+
+##### Retrieve all errors with a particular text string
+
+Here we are searching for all messages that have the word _disconnected_ in them.
+
+```kusto
+Event
+| where RenderedDescription has "disconnected"
+```
+
+##### Multi-table search for a keyword without knowing the schema
+
+The search command is useful when you don't know which column contains the information. This query returns all rows from the specified tables that have at least one column that contains the search term. The word is _disconnected_ in this example.
+
+```kusto
+search in (Perf, Event) "disconnected"
+```
+
+##### Retrieve all events from one specific log journal
+
+In this example we're narrowing the query to the log journal called **Connectors – Integration Runtime**.
+
+```kusto
+Event
+| where EventLog == "Connectors – Integration Runtime"
+```
+
+##### Use timespans to restrict query results
+
+This uses the same query as above, but restricts results to events generated in the last two days.
+
+```kusto
+Event
+| where EventLog == "Connectors – Integration Runtime"
+ and TimeGenerated >= ago(2d)
+```
+
+#### Query performance counter data
+
+##### Retrieve the first 10 performance counter readings
+
+```kusto
+Perf
+| take 10
+```
+
+##### Retrieve a specific counter with time constraints
+
+```kusto
+Perf
+| where TimeGenerated >= ago(24h)
+ and ObjectName == "Network Adapter"
+ and InstanceName == "Mellanox ConnectX-4 Lx Virtual Ethernet Adapter"
+ and CounterName == "Bytes Received/sec"
+```
+
+Performance counters are hierarchical in nature, so make sure you include enough _where_ predicates in your query to select only the specific counter you need.
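+
+If you're not sure which object, counter, and instance names are available in your workspace, the following sketch enumerates the combinations that have been ingested recently. It only assumes the standard Perf schema; use the output to build the _where_ predicates for queries like the ones in this section.
+
+```kusto
+// Discover the ObjectName / CounterName / InstanceName combinations available
+Perf
+| where TimeGenerated >= ago(1h)
+| summarize Samples = count() by ObjectName, CounterName, InstanceName
+| sort by ObjectName asc, CounterName asc
+```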
+
+##### Retrieve 95th percentile for a given counter binned by 30 minute slices of the last 24 hours
+
+This example covers all the counters for a specific network adapter.
+
+```kusto
+Perf
+| where TimeGenerated >= ago(24h)
+ and ObjectName == "Network Adapter"
+ and InstanceName == "Mellanox ConnectX-4 Lx Virtual Ethernet Adapter"
+| project TimeGenerated, Computer, ObjectName, InstanceName, CounterName, CounterValue
+| summarize percentile(CounterValue, 95) by bin(TimeGenerated, 30m), Computer, ObjectName, InstanceName, CounterName
+```
+
+##### Use variables for code reusability
+
+Here we make the object name and counter name variables so we don't have to change the KQL query body when we change those selections later.
+
+```kusto
+let pObjectName = "Memory"; // Required to select the right counter
+let pCounterName = "Available MBytes"; // Required to select the right counter
+Perf
+| where Type == "Perf" and ObjectName == pObjectName and CounterName == pCounterName
+| project TimeGenerated, Computer, CounterName, CounterValue
+| order by TimeGenerated asc
+| summarize Value=max(CounterValue) by CounterName, TimeStamps=TimeGenerated
+```
+
+## Next Steps
+
+- [Review integration runtime concepts in Azure Data Factory.](concepts-integration-runtime.md)
+- Learn how to [create a self-hosted integration runtime in the Azure portal.](create-self-hosted-integration-runtime.md)
+
data-factory Monitor Shir In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-shir-in-azure.md
+
+ Title: Monitor self-hosted integration runtime (SHIR) virtual machines in Azure
+
+description: This article describes how to monitor Azure virtual machines hosting the self-hosted integration runtime.
+++++ Last updated : 02/22/2022+++
+# Monitor self-hosted integration runtime (SHIR) virtual machines in Azure
+
+By default, the Self Hosted Integration Runtime's diagnostic and performance telemetry is saved locally on the virtual or physical machine that is hosting it. Two broad categories of telemetry are of interest for monitoring the Self Hosted Integration Runtime.
+
+## Event logs
+
+When logged on locally to the Self Hosted Integration Runtime, specific events can be viewed using the [event viewer](/windows/win32/eventlog/viewing-the-event-log.md). The relevant events are captured in two event viewer journals named **Connectors – Integration Runtime** and **Integration Runtime** respectively. While it's possible to log on to the Self Hosted Integration Runtime hosts individually to view these events, it's also possible to stream these events to a Log Analytics workspace in Azure Monitor for ease of query and centralization purposes.
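+
+Once these events are streamed to a Log Analytics workspace (see the centralization section below), they can be queried centrally. The following sketch pulls the most recent entries from both journals; it assumes the journal names appear exactly as shown above on your SHIR hosts.
+
+```kusto
+// Most recent SHIR events collected in the Log Analytics workspace
+Event
+| where EventLog in ("Connectors – Integration Runtime", "Integration Runtime")
+| project TimeGenerated, Computer, EventLevelName, RenderedDescription
+| take 20
+```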
+
+## Performance counters
+
+Performance counters in Windows and Linux provide insight into the performance of hardware components, operating systems, and applications such as the Self Hosted Integration Runtime. The performance counters can be viewed and collected locally on the VM using the performance monitor tool. See the article on [using performance counters](/windows/win32/perfctrs/using-performance-counters.md) for more details.
+
+## Centralize log collection and analysis
+
+When a deployment requires a more in-depth level of analysis or has reached a certain scale, it becomes impractical to log on locally to each Self Hosted Integration Runtime host. Therefore, we recommend using Azure Monitor, and Azure Log Analytics specifically, to collect that data and enable single-pane-of-glass monitoring for your Self Hosted Integration Runtimes. See the article on [Configuring the SHIR for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md) for instructions on how to instrument your Self Hosted Integration Runtimes for Azure Monitor.
+
+## Next Steps
+
+- [How to configure SHIR for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md)
+- [Review integration runtime concepts in Azure Data Factory.](concepts-integration-runtime.md)
+- Learn how to [create a self-hosted integration runtime in the Azure portal.](create-self-hosted-integration-runtime.md)
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
the page.
## Creating Forwarding Rule to Endpoint
-1. Login and copy script [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) to your backend server VMs.
+1. Login and download the port forwarding script [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) to your backend server VMs.
2. Run the script with the following options:<br/>
+ **sudo chmod +x ip_fwd.sh**<br/>
**sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433**<br/> <FQDN/IP> is your target SQL Server IP.<br/>
+ > [!Note]
+ > The above script runs only once. In order to ensure that port forwarding is enabled every time the machine starts, it should be configured as a startup service.
+
> [!Note] > FQDN doesn't work for on-premises SQL Server unless you add a record in Azure DNS zone.
data-lake-store Data Lake Store Get Started Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-java-sdk.md
description: Use the Java SDK for Azure Data Lake Storage Gen1 to perform filesy
Previously updated : 05/29/2018 Last updated : 02/23/2022
The code sample available [on GitHub](https://azure.microsoft.com/documentation/
```xml <dependencies>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.4.1</version>
- </dependency>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-storage-file-datalake</artifactId>
- <version>12.7.2</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-nop</artifactId>
- <version>1.7.32</version>
- </dependency>
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-data-lake-store-sdk</artifactId>
+ <version>2.1.5</version>
+ </dependency>
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-nop</artifactId>
+ <version>1.7.21</version>
+ </dependency>
</dependencies> ```
- The second dependency is to use the Data Lake Storage Gen2 SDK (`azure-storage-file-datalake`) from the Maven repository. The third dependency is to specify the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen2 SDK uses [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep).
+ The first dependency is to use the Data Lake Storage Gen1 SDK (`azure-data-lake-store-sdk`) from the maven repository. The second dependency is to specify the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen1 SDK uses [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep).
3. Add the following import statements to your application. ```java
- import com.azure.identity.ClientSecretCredential;
- import com.azure.identity.ClientSecretCredentialBuilder;
- import com.azure.storage.file.datalake.DataLakeDirectoryClient;
- import com.azure.storage.file.datalake.DataLakeFileClient;
- import com.azure.storage.file.datalake.DataLakeServiceClient;
- import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
- import com.azure.storage.file.datalake.DataLakeFileSystemClient;
- import com.azure.storage.file.datalake.models.ListPathsOptions;
- import com.azure.storage.file.datalake.models.PathAccessControl;
- import com.azure.storage.file.datalake.models.PathPermissions;
+ import com.microsoft.azure.datalake.store.ADLException;
+ import com.microsoft.azure.datalake.store.ADLStoreClient;
+ import com.microsoft.azure.datalake.store.DirectoryEntry;
+ import com.microsoft.azure.datalake.store.IfExists;
+ import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
+ import com.microsoft.azure.datalake.store.oauth2.ClientCredsTokenProvider;
import java.io.*;
- import java.time.Duration;
import java.util.Arrays; import java.util.List;
- import java.util.Map;
``` ## Authentication
-* For end-user authentication for your application, see [End-user-authentication with Data Lake Storage Gen2 using Java](data-lake-store-end-user-authenticate-java-sdk.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen2 using Java](data-lake-store-service-to-service-authenticate-java.md).
+* For end-user authentication for your application, see [End-user-authentication with Data Lake Storage Gen1 using Java](data-lake-store-end-user-authenticate-java-sdk.md).
+* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md).
-## Create a Data Lake Storage Gen2 client
-Creating a [DataLakeServiceClient](https://azure.github.io/azure-sdk-for-java/datalakestorage%28gen2%29.html) object requires you to specify the Data Lake Storage Gen2 account name and the token provider you generated when you authenticated with Data Lake Storage Gen2 (see [Authentication](#authentication) section). The Data Lake Storage Gen2 account name needs to be a fully qualified domain name. For example, replace **FILL-IN-HERE** with something like **mydatalakestoragegen1.azuredatalakestore.net**.
+## Create a Data Lake Storage Gen1 client
+Creating an [ADLStoreClient](https://azure.github.io/azure-data-lake-store-java/javadoc/) object requires you to specify the Data Lake Storage Gen1 account name and the token provider you generated when you authenticated with Data Lake Storage Gen1 (see [Authentication](#authentication) section). The Data Lake Storage Gen1 account name needs to be a fully qualified domain name. For example, replace **FILL-IN-HERE** with something like **mydatalakestoragegen1.azuredatalakestore.net**.
```java
-private static String endPoint = "FILL-IN-HERE"; // Data lake storage end point
-DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClientBuilder().endpoint(endPoint).credential(credential).buildClient();
+private static String accountFQDN = "FILL-IN-HERE"; // full account FQDN, not just the account name
+ADLStoreClient client = ADLStoreClient.createClient(accountFQDN, provider);
```
-The code snippets in the following sections contain examples of some common filesystem operations. You can look at the full [Data Lake Storage Gen2 Java SDK API docs](https://azure.github.io/azure-sdk-for-java/datalakestorage%28gen2%29.html) of the **DataLakeServiceClient** object to see other operations.
+The code snippets in the following sections contain examples of some common filesystem operations. You can look at the full [Data Lake Storage Gen1 Java SDK API docs](https://azure.github.io/azure-data-lake-store-java/javadoc/) of the **ADLStoreClient** object to see other operations.
## Create a directory
-The following snippet creates a directory structure in the root of the Data Lake Storage Gen2 account you specified.
+The following snippet creates a directory structure in the root of the Data Lake Storage Gen1 account you specified.
```java // create directory
-private String fileSystemName = "FILL-IN-HERE"
-DataLakeFileSystemClient dataLakeFileSystemClient = dataLakeServiceClient.createFileSystem(fileSystemName);
-dataLakeFileSystemClient.createDirectory("a/b/w");
+client.createDirectory("/a/b/w");
System.out.println("Directory created."); ```
The following snippet creates a file (c.txt) in the directory structure and writ
```java // create file and write some content
-String filename = "c.txt";
-try (FileOutputStream stream = new FileOutputStream(filename);
- PrintWriter out = new PrintWriter(stream)) {
- for (int i = 1; i <= 10; i++) {
- out.println("This is line #" + i);
- out.format("This is the same line (%d), but using formatted output. %n", i);
- }
+String filename = "/a/b/c.txt";
+OutputStream stream = client.createFile(filename, IfExists.OVERWRITE );
+PrintStream out = new PrintStream(stream);
+for (int i = 1; i <= 10; i++) {
+ out.println("This is line #" + i);
+ out.format("This is the same line (%d), but using formatted output. %n", i);
}
-dataLakeFileSystemClient.createFile("a/b/" + filename, true);
+out.close();
System.out.println("File created."); ```
You can also create a file (d.txt) using byte arrays.
```java // create file using byte arrays
-DataLakeFileClient dataLakeFileClient = dataLakeFileSystemClient.createFile("a/b/d.txt", true);
+stream = client.createFile("/a/b/d.txt", IfExists.OVERWRITE);
byte[] buf = getSampleContent();
-try (ByteArrayInputStream stream = new ByteArrayInputStream(buf)) {
- dataLakeFileClient.upload(stream, buf.length);
-}
+stream.write(buf);
+stream.close();
System.out.println("File created using byte array."); ```
The following snippet appends content to an existing file.
```java // append to file
-byte[] buf = getSampleContent();
-try (ByteArrayInputStream stream = new ByteArrayInputStream(buf)) {
- DataLakeFileClient dataLakeFileClient = dataLakeDirectoryClient.getFileClient(filename);
- dataLakeFileClient.append(stream, 0, buf.length);
- System.out.println("File appended.");
-}
+stream = client.getAppendStream(filename);
+stream.write(getSampleContent());
+stream.close();
+System.out.println("File appended.");
``` The definition for `getSampleContent` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/).
The following snippet reads content from a file in a Data Lake Storage Gen1 acco
```java // Read File
-try (InputStream dataLakeIn = dataLakeFileSystemClient.getFileClient(filename).openInputStream().getInputStream();
- BufferedReader reader = new BufferedReader(new InputStreamReader(dataLakeIn))) {
- String line;
- while ( (line = reader.readLine()) != null) {
+InputStream in = client.getReadStream(filename);
+BufferedReader reader = new BufferedReader(new InputStreamReader(in));
+String line;
+while ( (line = reader.readLine()) != null) {
System.out.println(line); } reader.close();
System.out.println("File contents read.");
## Concatenate files
-The following snippet concatenates two files in a Data Lake Storage Gen2 account. If successful, the concatenated file replaces the two existing files.
+The following snippet concatenates two files in a Data Lake Storage Gen1 account. If successful, the concatenated file replaces the two existing files.
```java // concatenate the two files into one
-dataLakeFileClient = dataLakeDirectoryClient.createFile("/a/b/f.txt", true);
List<String> fileList = Arrays.asList("/a/b/c.txt", "/a/b/d.txt");
-fileList.stream().forEach(filename -> {
- File concatenateFile = new File(filename);
- try (InputStream fileIn = new FileInputStream(concatenateFile)) {
- dataLakeFileClient.append(fileIn, 0, concatenateFile.length());
- } catch (IOException e) {
- e.printStackTrace();
- }
-});
+client.concatenateFiles("/a/b/f.txt", fileList);
System.out.println("Two files concatenated into a new file."); ```
The following snippet renames a file in a Data Lake Storage Gen1 account.
```java //rename the file
-dataLakeFileSystemClient.getFileClient("a/b/f.txt").rename(dataLakeFileSystemClient.getFileSystemName(), "a/b/g.txt");
+client.rename("/a/b/f.txt", "/a/b/g.txt");
System.out.println("New file renamed."); ```
The following snippet retrieves the metadata for a file in a Data Lake Storage G
```java // get file metadata
-Map<String, String> metaData = dataLakeFileSystemClient.getFileClient(filename).getProperties().getMetadata();
-printDirectoryInfo(metaData);
+DirectoryEntry ent = client.getDirectoryEntry(filename);
+printDirectoryInfo(ent);
System.out.println("File metadata retrieved."); ```
The following snippet sets permissions on the file that you created in the previ
```java // set file permission
-PathAccessControl pathAccessControl = dataLakeFileSystemClient.getFileClient(filename).getAccessControl();
-dataLakeFileSystemClient.getFileClient(filename).setPermissions(PathPermissions.parseOctal("744"), pathAccessControl.getGroup(), pathAccessControl.getOwner());
+client.setPermission(filename, "744");
System.out.println("File permission set."); ```
The following snippet lists the contents of a directory, recursively.
```java // list directory contents
-dataLakeFileSystemClient.listPaths(new ListPathsOptions().setPath("a/b"), Duration.ofSeconds(2000)).forEach(path -> {
- printDirectoryInfo(dataLakeFileSystemClient.getDirectoryClient(path.getName()).getProperties().getMetadata());
-});
+List<DirectoryEntry> list = client.enumerateDirectory("/a/b", 2000);
+System.out.println("Directory listing for directory /a/b:");
+for (DirectoryEntry entry : list) {
+ printDirectoryInfo(entry);
+}
System.out.println("Directory contents listed."); ```
The following snippet deletes the specified files and folders in a Data Lake Sto
```java // delete directory along with all the subdirectories and files in it
-dataLakeFileSystemClient.deleteDirectory("a");
+client.deleteRecursive("/a");
System.out.println("All files and folders deleted recursively"); promptEnterKey(); ```
promptEnterKey();
2. To produce a standalone jar that you can run from the command line, build the jar with all dependencies included, using the [Maven assembly plugin](https://maven.apache.org/plugins/maven-assembly-plugin/usage.html). The pom.xml in the [example source code on GitHub](https://github.com/Azure-Samples/data-lake-store-java-upload-download-get-started/blob/master/pom.xml) has an example. ## Next steps
-* [Explore JavaDoc for the Java SDK](https://azure.github.io/azure-sdk-for-java/datalakestorage%28gen2%29.html)
-* [Secure data in Data Lake Storage Gen2](data-lake-store-secure-data.md)
-
+* [Explore JavaDoc for the Java SDK](https://azure.github.io/azure-data-lake-store-java/javadoc/)
+* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
databox-online Azure Stack Edge Gpu Disconnected Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-disconnected-scenario.md
+
+ Title: Disconnected scenario for Azure Stack Edge
+description: Describes assumptions, available features when using Azure Stack Edge standalone with no connection to Azure services.
++++++ Last updated : 02/24/2022+
+# Customer intent: As an IT admin, I need to understand how to use Azure Stack Edge with no internet connection to meet my organization's security restrictions.
++
+# Disconnected scenario for Azure Stack Edge
++
+This article helps you identify things to consider when you need to use Azure Stack Edge disconnected from the internet.
+
+Typically, Azure Stack Edge is deployed in a connected scenario with access to the Azure portal, services, and resources in the cloud. However, security or other restrictions sometimes require that you deploy your Azure Stack Edge device in an environment with no internet connection. As a result, Azure Stack Edge becomes a standalone deployment that is disconnected from and doesn't communicate with Azure and other Azure services.
+
+## Assumptions
+
+Before you disconnect your Azure Stack Edge device from the network that allows internet access, you'll make these preparations:
+
+- To ensure most of the Azure Stack Edge features function in this disconnected mode, you'll activate your device via the Azure portal and deploy containerized and non-containerized workloads such as Kubernetes, virtual machines (VMs), and IoT Edge use cases while you have an internet connection.
+
+ During offline use, you won't have access to the Azure portal to manage workloads; all management will be performed via local control plane operations. For a list of Azure endpoints that can't be reached during offline use, see [URL patterns for firewall rules](azure-stack-edge-gpu-system-requirements.md#url-patterns-for-firewall-rules).
+
+- For an IoT Edge and Kubernetes deployment, you'll complete these tasks before you disconnect:
+
+ 1. Enable and configure IoT Edge and/or Kubernetes components.
+ 1. Deploy compute modules and containers on the device.
+ 1. Make sure the modules and components are running.
+
+ For Kubernetes deployment guidance, see [Choose the deployment type](azure-stack-edge-gpu-kubernetes-workload-management.md#choose-the-deployment-type). For IoT Edge deployment guidance, see [Run a compute workload with IoT Edge module on Azure Stack Edge](azure-stack-edge-gpu-deploy-compute-module-simple.md).
+
+ > [!NOTE]
+ > Some workloads running in VMs, Kubernetes, and IoT Edge may require connectivity to Azure. For example, some cognitive services require connectivity for billing.
+
+## Key differences for disconnected use
+
+When an Azure Stack Edge deployment is disconnected, it can't reach Azure endpoints. This lack of access affects the features that are available.
+
+The following table describes the behavior of features and components when the device is disconnected.
+
+| Azure Stack Edge feature/component | Impact/behavior when disconnected |
+||--|
+| Local UI and [Windows PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md) | Local access via the local web UI or the Windows PowerShell interface is available by connecting a client computer directly to the device. |
+| [Kubernetes](azure-stack-edge-gpu-kubernetes-overview.md) | Kubernetes deployments on a disconnected device have these differences:<ul><li>After you create your Kubernetes cluster, you can connect to and manage the cluster locally from your device using a native tool such as `kubectl`. With `kubectl`, a `kubeconfig` file allows the Kubernetes client to talk directly to the Kubernetes cluster without connecting to PowerShell interface of your device. Once you have the config file, you can direct the cluster using `kubectl` commands, without physical access to the cluster. For more information, see [Create and Manage a Kubernetes cluster on an Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-create-kubernetes-cluster.md).</li><li>Azure Stack Edge has a local container registry - the Edge container registry - to host container images. While your device is disconnected, you'll manage the deployment of these images, pushing them to and deleting them from the Edge container registry over your local network. You won't have direct access to the Azure Container Registry in the cloud. For more information, see [Enable an Edge container registry on an Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-edge-container-registry.md).</li><li>You can't monitor the Kubernetes cluster using Azure Monitor. Instead, use the local Kubernetes dashboard, available on the compute network. For more information, see [Monitor your Azure Stack Edge Pro device via the Kubernetes dashboard](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md).</li></ul>For more information, see [Kubernetes on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-kubernetes-overview.md). |
+| [Azure Arc on Kubernetes](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md) | An Azure Arc-enabled Kubernetes deployment can't be used in a disconnected deployment. |
+| Azure Arc-enabled data services | After the container images are deployed on the device, Azure Arc-enabled data services continue to run in a disconnected deployment. You'll deploy and manage those images over your local network. You'll push images to and delete them from the Edge container registry. For more information, see [Manage container registry images](azure-stack-edge-gpu-edge-container-registry.md#manage-container-registry-images). |
+| [IoT Edge](azure-stack-edge-gpu-deploy-compute-module-simple.md) | IoT Edge modules need to be deployed and updated while connected to Azure. If disconnected from Azure, they continue to run. |
+| [Azure Storage access tiers](azure-stack-edge-gpu-deploy-add-shares.md) | During disconnected use:<ul><li>Data in your Azure Storage account won't be uploaded to and from access tiers in the Azure cloud.</li><li>Ghosted data can't be accessed directly through the device. Any access attempt fails with an error.</li><li>The Refresh option can't be used to sync data in your Azure Storage account with shares in the Azure cloud. Data syncs resume when connectivity is established.</li></ul> |
+| [VM management](azure-stack-edge-gpu-virtual-machine-overview.md) | During disconnected use, virtual machines can be created, modified, stopped, started, and deleted using the [local Azure Resource Manager (ARM)](azure-stack-edge-gpu-local-resource-manager-overview.md). However, VM images can't be downloaded to the device from the cloud. For more information, see [Deploy VMs on your Azure Stack Edge device via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md). |
+| [Local ARM](azure-stack-edge-gpu-local-resource-manager-overview.md) | Local Azure Resource Manager (ARM) can function without connectivity to Azure. However, connectivity is required during registration and configuration of Local ARM - for example, to [set the ARM Edge user password](azure-stack-edge-gpu-set-azure-resource-manager-password.md#reset-password-via-the-azure-portal) and ARM subscription ID. |
+| [VPN](azure-stack-edge-mini-r-configure-vpn-powershell.md) | A configured virtual private network (VPN) remains intact when there's no connection to Azure. When connectivity to Azure is established, data-in-motion transfers over the VPN. |
+| [Updates](azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later) | Automatic updates from Windows Server Update Services (WSUS) aren't available during disconnected use. To apply updates, download update packages manually and then apply them via the device's local web UI. |
+| Supportability /<br>[Support log collection](azure-stack-edge-gpu-proactive-log-collection.md) /<br>[Remote supportability](azure-stack-edge-gpu-proactive-log-collection.md) | Microsoft Support is available, with these differences:<ul><li>You can't automatically generate a support request and send logs to Microsoft Support via the Azure portal. Instead, [collect a support package](azure-stack-edge-gpu-troubleshoot.md#collect-support-package) via the device's local web UI. Microsoft Support will send you a shared access signature (SAS) URI to upload the support packages to.</li><li>Microsoft can't perform remote diagnostics and repairs while the device is disconnected. Running the commands on the device requires direct communication with Azure.</li></ul> |
+| Billing | Billing for your order resource or management resource continues whether or not your Azure Stack Edge device is connected to the internet. |
+
+## Next steps
+
+- Review use cases for [Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-overview.md#use-cases), [Azure Stack Edge Pro R](azure-stack-edge-pro-r-overview.md#use-cases), and [Azure Stack Edge Mini R](azure-stack-edge-mini-r-overview.md#use-cases).
+- [Get pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/).
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud Previously updated : 11/14/2021 Last updated : 02/24/2022 # Microsoft Defender for Cloud's enhanced security features
You can use any of the following ways to enable enhanced security for your subsc
### Can I enable Microsoft Defender for servers on a subset of servers in my subscription? No. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, all the machines in the subscription will be protected by Defender for servers.
-An alternative is to enable Microsoft Defender for servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include just-in-time VM access, network detections, regulatory compliance, adaptive network hardening, adaptive application control, and more.
+An alternative is to enable Microsoft Defender for servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, the VA solution (TVM/Qualys), just-in-time VM access, and more.
### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for servers? If you've already got a license for **Microsoft Defender for Endpoint for Servers**, you won't have to pay for that part of your Microsoft Defender for servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
If you've already got a license for **Microsoft Defender for Endpoint for Server
To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace. The discount will be effective starting from the approval date, and won't take place retroactively.+ ### My subscription has Microsoft Defender for servers enabled, do I pay for not-running servers? No. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in the deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table:
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 02/16/2022 Last updated : 02/24/2022 # Protect your Kubernetes workloads
You can manually configure the Kubernetes workload add-on, or extension protecti
|--||| | Container CPU and memory limits should be enforced | Protect applications against DDoS attack | **Yes** | | Container images should be deployed only from trusted registries | Remediate vulnerabilities | **Yes** |
- | Containers should listen on allowed ports only | Restrict unauthorized network access | **Yes** |
| Least privileged Linux capabilities should be enforced for containers | Manage access and permissions | **Yes** |
- | Overriding or disabling of containers AppArmor profile should be restricted | Remediate security configurations | **Yes** |
+ | Containers should only use allowed AppArmor profiles | Remediate security configurations | **Yes** |
| Services should listen on allowed ports only | Restrict unauthorized network access | **Yes** | | Usage of host networking and ports should be restricted | Restrict unauthorized network access | **Yes** | | Usage of pod HostPath volume mounts should be restricted to a known list | Manage access and permissions | **Yes** |
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-authenticate-client.md
In an Azure function, you can use the managed identity credentials like this:
The [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) method is intended for interactive applications and will bring up a web browser for authentication. You can use this method instead of `DefaultAzureCredential` in cases where you require interactive authentication.
-To use the interactive browser credentials, you'll need an **app registration** that has permissions to the Azure Digital Twins APIs. For steps on how to set up this app registration, see [Create an app registration](./how-to-create-app-registration-portal.md). Once the app registration is set up, you'll need...
+To use the interactive browser credentials, you'll need an **app registration** that has permissions to the Azure Digital Twins APIs. For steps on how to set up this app registration, see [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration-portal.md). Once the app registration is set up, you'll need...
* [the app registration's Application (client) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id) * [the app registration's Directory (tenant) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id) * [the Azure Digital Twins instance's URL](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)
digital-twins How To Create App Registration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration-cli.md
# Mandatory fields. Title: Create an app registration (CLI)
+ Title: Create an app registration with Azure Digital Twins access (CLI)
-description: Learn how to create an Azure AD app registration, as an authentication option for client apps, using the CLI.
+description: Use the CLI to create an Azure AD app registration that can access Azure Digital Twins resources.
Previously updated : 1/24/2022 Last updated : 2/24/2022 + # Optional fields. Don't forget to remove # if you need a field. #
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
-This article describes how to create an app registration to use with Azure Digital Twins using the CLI. It includes instructions for creating a manifest file containing service information, creating the app registration, verifying success, collecting important values, and other possible steps that your organization may require.
+This article describes how to use the Azure CLI to create an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) *app registration* that can access Azure Digital Twins.
-When working with an Azure Digital Twins instance, it's common to interact with that instance through client applications, such as a custom client app or a sample like [Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md). Those applications need to authenticate with Azure Digital Twins to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
+When working with Azure Digital Twins, it's common to interact with your instance through client applications. Those applications need to authenticate with Azure Digital Twins, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an app registration.
-The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure CLI](/cli/azure/what-is-azure-cli). It also covers how to [collect important values](#collect-important-values) that you'll need to use the app registration to authenticate.
-
-## Azure AD app registrations
-
-[Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) is Microsoft's cloud-based identity and access management service. Setting up an **app registration** in Azure AD is one way to grant a client app access to Azure Digital Twins.
-
-This app registration is where you configure access permissions to the [Azure Digital Twins APIs](concepts-apis-sdks.md). Later, client apps can authenticate against the app registration using the registration's **client and tenant ID values**, and as a result be granted the configured access permissions to the APIs.
+The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up and grant it permissions to the Azure Digital Twins APIs. It also covers how to collect important values that you'll need to use the app registration when authenticating.
>[!TIP] > You may prefer to set up a new app registration every time you need one, *or* to do this only once, establishing a single app registration that will be shared among all scenarios that require it.
digital-twins How To Create App Registration Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration-portal.md
# Mandatory fields. Title: Create an app registration (portal)
+ Title: Create an app registration with Azure Digital Twins access (portal)
-description: Learn how to create an Azure AD app registration, as an authentication option for client apps, using the Azure portal.
+description: Use the Azure portal to create an Azure AD app registration that can access Azure Digital Twins resources.
Previously updated : 1/24/2022 Last updated : 2/24/2022
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
-This article describes how to create an app registration to use with Azure Digital Twins using the Azure portal. It includes instructions for creating the app registration, collecting important values, providing Azure Digital Twins API permission, verifying success, and other possible steps that your organization may require.
+This article describes how to use the Azure portal to create an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) *app registration* that can access Azure Digital Twins.
-When working with an Azure Digital Twins instance, it's common to interact with that instance through client applications, such as the custom client app built in [Code a client app](tutorial-code.md). Those applications need to authenticate with Azure Digital Twins to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
+When working with Azure Digital Twins, it's common to interact with your instance through client applications. Those applications need to authenticate with Azure Digital Twins, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an app registration.
-The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure portal](https://portal.azure.com). It also covers how to [collect important values](#collect-important-values) that you'll need to use the app registration to authenticate.
-
-## Azure AD app registrations
-
-[Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) is Microsoft's cloud-based identity and access management service. Setting up an **app registration** in Azure AD is one way to grant a client app access to Azure Digital Twins.
-
-This app registration is where you configure access permissions to the [Azure Digital Twins APIs](concepts-apis-sdks.md). Later, client apps can authenticate against the app registration using the registration's **client and tenant ID values**, and as a result be granted the configured access permissions to the APIs.
+The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up and grant it permissions to the Azure Digital Twins APIs. It also covers how to collect important values that you'll need to use the app registration when authenticating.
>[!TIP] > You may prefer to set up a new app registration every time you need one, *or* to do this only once, establishing a single app registration that will be shared among all scenarios that require it.
digital-twins Troubleshoot Error 403 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-403.md
Next, select **API permissions** from the menu bar to verify that this app regis
#### Fix issues
-If any of this appears differently than described, follow the instructions on how to set up an app registration in [Create an app registration](./how-to-create-app-registration-portal.md).
+If any of this appears differently than described, follow the instructions on how to set up an app registration in [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration-portal.md).
## Next steps
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
1. Click the **Get Azure recommendation** button. 2. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and click the **Start** button. 3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard or close Azure Data Studio.
-4. After 10 minutes you will see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link to get the recommendation sooner.
+4. After 10 minutes you will see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the additional data collected.
5. In the above **Azure SQL Managed Instance** box click the **View details** button for more information about your recommendation. 6. Close the view details box and press the **Next** button.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
1. Click the **Get Azure recommendation** button. 2. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and click the **Start** button. 3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard or close Azure Data Studio.
-4. After 10 minutes you will see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link to get the recommendation sooner.
+4. After 10 minutes you will see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the additional data collected.
5. In the above **Azure SQL Managed Instance** box click the **View details** button for more information about your recommendation. 6. Close the view details box and press the **Next** button.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
To complete this tutorial, you need to:
1. Click the **Get Azure recommendation** button. 2. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and click the **Start** button. 3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard or close Azure Data Studio.
-4. After 10 minutes you will see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link to get the recommendation sooner.
+4. After 10 minutes you will see a recommended configuration for your Azure SQL VM. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the additional data collected.
5. In the above **SQL Server on Azure Virtual Machine** box click the **View details** button for more information about your recommendation. 6. Close the view details box and press the **Next** button.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
1. Click the **Get Azure recommendation** button. 2. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and click the **Start** button. 3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard or close Azure Data Studio.
-4. After 10 minutes you will see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link to get the recommendation sooner.
+4. After 10 minutes you will see a recommended configuration for your Azure SQL VM. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the additional data collected.
5. In the above **SQL Server on Azure Virtual Machine** box click the **View details** button for more information about your recommendation. 6. Close the view details box and press the **Next** button.
event-grid Custom Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-topics.md
Title: Custom topics in Azure Event Grid description: Describes custom topics in Azure Event Grid. Previously updated : 07/27/2021 Last updated : 02/23/2022 # Custom topics in Azure Event Grid
An event grid topic provides an endpoint where the source sends events. The publ
When designing your application, you have flexibility when deciding how many topics to create. For large solutions, create a **custom topic** for **each category of related events**. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want. ## Event schema
-For a detailed overview of event schema, see [Azure Event Grid event schema](event-schema.md). For custom topics, the event publisher determines the **data** object. The top-level data should have the same fields as standard resource-defined events.
+Azure Event Grid supports two types of event schemas: Event Grid event schema and Cloud event schema.
+
+### Event Grid event schema
+
+When you use Event Grid event schema, you can specify your application-specific properties in the **data** object.
```json [
For a detailed overview of event schema, see [Azure Event Grid event schema](eve
] ```
-The following sections provide links to tutorials to create custom topics using Azure portal, CLI, PowerShell, and Azure Resource Manager (ARM) templates.
+> [!NOTE]
+> For more information, see [Event Grid event schema](event-schema.md).
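For illustration, here's a minimal Python sketch of publishing an event in the Event Grid event schema, assuming the `azure-eventgrid` SDK; the topic endpoint, access key, subject, and event type are placeholders rather than values from this article:

```python
# Minimal sketch: publish an event in the Event Grid event schema to a custom topic.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

client = EventGridPublisherClient(
    "<your-topic-endpoint>", AzureKeyCredential("<your-topic-access-key>")
)

event = EventGridEvent(
    subject="orders/new",                      # path-like subject used for filtering
    event_type="Contoso.Orders.OrderCreated",  # application-defined event type
    data={"orderId": "123", "total": 42.50},   # application-specific data object
    data_version="1.0",
)

client.send(event)
```

The `data` object carries whatever application-specific properties your subscribers expect.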
+
+### Cloud event schema
+In addition to its [default event schema](event-schema.md), Azure Event Grid natively supports events in the [JSON implementation of CloudEvents v1.0](https://github.com/cloudevents/spec/blob/v1.0/json-format.md) and [HTTP protocol binding](https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md). [CloudEvents](https://cloudevents.io/) is an [open specification](https://github.com/cloudevents/spec/blob/v1.0/spec.md) for describing event data.
+CloudEvents simplifies interoperability by providing a common event schema for publishing and consuming cloud-based events. This schema allows for uniform tooling, standard ways of routing & handling events, and universal ways of deserializing the outer event schema. With a common schema, you can more easily integrate work across platforms.
+
+> [!NOTE]
+> For more information, see [Cloud event schema](cloud-event-schema.md).
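A companion sketch for the CloudEvents schema, again assuming the `azure-eventgrid` Python SDK, placeholder names, and a custom topic created with the Cloud Event Schema v1.0 input schema:

```python
# Minimal sketch: publish a CloudEvents v1.0 event to a custom topic.
from azure.core.credentials import AzureKeyCredential
from azure.core.messaging import CloudEvent
from azure.eventgrid import EventGridPublisherClient

client = EventGridPublisherClient(
    "<your-topic-endpoint>", AzureKeyCredential("<your-topic-access-key>")
)

event = CloudEvent(
    source="/contoso/orders",                  # identifies the event producer
    type="Contoso.Orders.OrderCreated",        # application-defined event type
    data={"orderId": "123", "total": 42.50},   # event payload
)

client.send(event)
```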
+
+The following sections provide links to tutorials to create custom topics using Azure portal, CLI, PowerShell, and Azure Resource Manager (ARM) templates.
## Azure portal tutorials |Title |Description |
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass
``` > [!NOTE]
-> Any connections configured for FastPath in the target subscription will be enrolled in the selected preview. We do not advise enabling these previews in production subscriptions. Additionally, the VNet peering and Private Link previews are mutually exclusive. Enabling FastPath connectivity to a Private Link over VNet peering is not supported.
+> Any connections configured for FastPath in the target subscription will be enrolled in the selected preview. We do not advise enabling these previews in production subscriptions.
> If you already have FastPath configured and want to enroll in the preview feature, you need to do the following: > 1. Enroll in one of the FastPath preview features with the Azure PowerShell commands above. > 1. Disable and then re-enable FastPath on the target connection.
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
The **roleDefinitionIds** property uses the full resource identifier and doesn't
following code: ```azurecli-interactive
-az role definition list --name 'Contributor'
+az role definition list --name "Contributor"
``` > [!IMPORTANT]
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
To learn more, see [Azure subscriptions](../../guides/developer/azure-developer-
## Location
-**Location** is where you'd like to create the application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. Currently, you can create an IoT Central application in the **Australia East**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **Southeast Asia**, **UK South**, **West Europe** and **West US** regions. Once you choose a location, you can't later move your application to a different location.
+**Location** is where you'd like to create the application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. Currently, you can create an IoT Central application in the **Australia East**, **Canada Central**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **South Central US**, **Southeast Asia**, **UK South**, **West Europe**, and **West US** regions. Once you choose a location, you can't later move your application to a different location.
## Application template
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
If you choose one of the standard plans, you need to provide billing information
- The Azure subscription you're using. - The directory that contains the subscription you're using.-- The location to host your application. IoT Central uses Azure regions as locations: Australia East, Central US, East US, East US 2, Japan East, North Europe, Southeast Asia, UK South, West Europe, West US.
+- The location to host your application. IoT Central uses Azure regions as locations: Australia East, Canada Central, Central US, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US.
## Azure IoT Central site
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
These commands first create a resource group in the east US region for the appli
| Parameter | Description | | -- | -- | | resource-group | The resource group that contains the application. This resource group must already exist in your subscription. |
-| location | By default, this command uses the location from the resource group. Currently, you can create an IoT Central application in the **Australia East**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **Southeast Asia**, **UK South**, **West Europe** and **West US** regions. |
+| location | By default, this command uses the location from the resource group. Currently, you can create an IoT Central application in the **Australia East**, **Canada Central**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **South Central US**, **Southeast Asia**, **UK South**, **West Europe**, and **West US** regions. |
| name | The name of the application in the Azure portal. Avoid special characters - instead, use lower case letters (a-z), numbers (0-9), and dashes (-).| | subdomain | The subdomain in the URL of the application. In the example, the application URL is `https://mysubdomain.azureiotcentral.com`. | | sku | Currently, you can use either **ST1** or **ST2**. See [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). |
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
To create an application, navigate to the [IoT Central Application](https://port
* **Template** is the type of IoT Central application you want to create. You can create a new application either from the list of industry-relevant templates to help you get started quickly, or start from scratch using the **Custom application** template. * **Location** is the [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) where you'd like to create your application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. Azure IoT Central is currently available in the following locations:
-
- * Australia
- * East Central US
- * East US
- * East US 2
- * Japan East
- * North Europe
- * Southeast Asia
- * UK South
- * West Europe
- * West US
+
+ * Australia East
+ * Canada Central
+ * Central US
+ * East US
+ * East US 2
+ * Japan East
+ * North Europe
+ * South Central US
+ * Southeast Asia
+ * UK South
+ * West Europe
+ * West US
Once you choose a location, you can't later move your application to a different location.
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
IoT Hub Device Twin sample
"state": 0, "workflow": { "action": 3,
- "id": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01"
+ "id": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01",
"retryTimestamp": "2022-01-26T11:33:29.9680598Z" }, "installedUpdateId": "{\"provider\":\"Contoso\",\"name\":\"Virtual-Vacuum\",\"version\":\"5.0\"}"
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
Previously updated : 07/07/2021 Last updated : 02/23/2022 #Customer intent: As a developer new to IoT Hub, learn the basic concepts.
-# IoT Concepts and Azure IoT Hub
+# IoT concepts and Azure IoT Hub
-This article discusses the Internet of Things (IoT), Azure IoT Hub, and IoT devices.
+The Internet of Things (IoT) is a network of physical devices that connect to and exchange data with other devices and services over the internet or other networks. There are currently over ten billion connected devices in the world and more are added every year. Anything that can be embedded with the necessary sensors and software can be connected over the internet.
-## IoT concepts
-
-The Internet of Things (IoT) is typically defined as a network of physical devices that connect to and exchange data with other devices and services over the Internet or other communication network. There are currently over ten billion connected devices in the world and more are added every year. Anything that can be embedded with the necessary sensors and software can be connected over the internet. The following technologies have made IoT possible:
--- Access to low cost, low power sensors.-- Various protocols that enable internet connectivity.-- Cloud computing platforms such as Azure.-- Big data.-- Machine learning.-- Artificial intelligence.-
-## Azure IoT Hub
-
-IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communication between an IoT application and its attached devices. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT Hub.
+Azure IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communication between an IoT application and its attached devices. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT hub.
Several messaging patterns are supported, including device-to-cloud telemetry, uploading files from devices, and request-reply methods to control your devices from the cloud. IoT Hub also supports monitoring to help you track device creation, device connections, and device failures.
-IoT Hub scales to millions of simultaneously connected devices and millions of events per second to support your IoT workloads. For more information about scaling your IoT Hub, see [IoT Hub Scaling](iot-hub-scaling.md). To learn more about the multiple tiers of service offered by IoT Hub and how to best fit your scalability needs, check out the [pricing page](https://azure.microsoft.com/pricing/details/iot-hub/).
+IoT Hub scales to millions of simultaneously connected devices and millions of events per second to support your IoT workloads. For more information about scaling your IoT Hub, see [IoT Hub scaling](iot-hub-scaling.md). To learn more about the tiers of service offered by IoT Hub, check out the [pricing page](https://azure.microsoft.com/pricing/details/iot-hub/).
You can integrate IoT Hub with other Azure services to build complete, end-to-end solutions. For example, use:
-* [Azure Event Grid](../event-grid/index.yml) to enable your business to react quickly to critical events in a reliable, scalable, and secure manner.
+- [Azure Event Grid](../event-grid/index.yml) to enable your business to react quickly to critical events in a reliable, scalable, and secure manner.
-* [Azure Logic Apps](../logic-apps/index.yml) to automate business processes.
+- [Azure Logic Apps](../logic-apps/index.yml) to automate business processes.
-* [Azure Machine Learning](iot-hub-weather-forecast-machine-learning.md) to add machine learning and AI models to your solution.
+- [Azure Machine Learning](iot-hub-weather-forecast-machine-learning.md) to add machine learning and AI models to your solution.
-* [Azure Stream Analytics](../stream-analytics/index.yml) to run real-time analytic computations on the data streaming from your devices.
+- [Azure Stream Analytics](../stream-analytics/index.yml) to run real-time analytic computations on the data streaming from your devices.
[IoT Central](../iot-central/core/overview-iot-central.md) applications use multiple IoT hubs as part of their scalable and resilient infrastructure.
-IoT Hub has a 99.9% [Service Level Agreement for IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
-
-Each Azure subscription has default quota limits in place to prevent service abuse. These limits could impact the scope of your IoT solution. The current limit on a per-subscription basis is 50 IoT hubs per subscription. You can request quota increases by contacting support. For more information, see [IoT Hub Quotas and Throttling](iot-hub-devguide-quotas-throttling.md). For more details on quota limits, see one of the following articles:
+Each Azure subscription has default quota limits in place to prevent service abuse. These limits could impact the scope of your IoT solution. The current limit on a per-subscription basis is 50 IoT hubs per subscription. You can request quota increases by contacting support. For more information, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md). For more information on quota limits, see one of the following articles:
-* [Azure subscription service limits](../azure-resource-manager/management/azure-subscription-service-limits.md)
+- [Azure subscription service limits](../azure-resource-manager/management/azure-subscription-service-limits.md)
-* [IoT Hub throttling and you](https://azure.microsoft.com/blog/iot-hub-throttling-and-you/)
+- [IoT Hub throttling and you](https://azure.microsoft.com/blog/iot-hub-throttling-and-you/)
## IoT devices
IoT devices differ from other clients such as browsers and mobile apps. Specific
- Might have intermittent, slow, or expensive network connectivity. - Might need to use proprietary, custom, or industry-specific application protocols.
-### Device identity
+## Device identity and authentication
-Every IoT hub has an identity registry that stores information about the devices and modules permitted to connect to it. Before a device or module can connect, there must be an entry for that device or module in the IoT hub's identity registry. A device or module must also authenticate with the IoT hub based on credentials stored in the identity registry.
+Every IoT hub has an identity registry that stores information about the devices and modules permitted to connect to it. Before a device or module can connect, there must be an entry for that device or module in the IoT hub's identity registry. A device or module authenticates with the IoT hub based on credentials stored in the identity registry.
-We support two methods of authentication between the device and the IoT Hub. You can use an SAS token-based authentication or X.509 certificate authentication.
+We support two methods of authentication between the device and the IoT hub. You can use SAS token-based authentication or X.509 certificate authentication.
-The SAS-based token method provides authentication for each call made by the device to IoT Hub by associating the symmetric key to each call. X.509-based authentication allows authentication of an IoT device at the physical layer as part of the Transport Layer Security (TLS) standard connection establishment. The security-token-based method can be used without the X.509 authentication, which is a less secure pattern. The choice between the two methods is primarily dictated by how secure the device authentication needs to be, and availability of secure storage on the device (to store the private key securely).
+The SAS token method provides authentication for each call made by the device to IoT Hub by associating the symmetric key to each call. X.509 authentication allows authentication of an IoT device at the physical layer as part of the Transport Layer Security (TLS) standard connection establishment. The choice between the two methods is primarily dictated by how secure the device authentication needs to be, and availability of secure storage on the device (to store the private key securely).
You can set up and provision many devices at a time using the [IoT Hub Device Provisioning Service](../iot-dps/index.yml).
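As a minimal sketch of SAS-based authentication from the device side, assuming the `azure-iot-device` Python SDK and a placeholder connection string copied from the device's identity registry entry:

```python
# Minimal sketch: connect a device with SAS (symmetric key) authentication.
# Assumes the azure-iot-device package; the connection string is a placeholder.
from azure.iot.device import IoTHubDeviceClient

CONNECTION_STRING = (
    "HostName=<your-hub>.azure-devices.net;"
    "DeviceId=<device-id>;"
    "SharedAccessKey=<device-key>"
)

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
# ... send telemetry, update twin properties, or handle commands here ...
client.shutdown()

# Devices that authenticate with X.509 certificates can instead be created with
# IoTHubDeviceClient.create_from_x509_certificate.
```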
-### Device communication
+## Device communication
After selecting your authentication method, the internet connection between the IoT device and IoT Hub is secured using the Transport Layer Security (TLS) standard. Azure IoT supports TLS 1.2, TLS 1.1, and TLS 1.0, in that order. Support for TLS 1.0 is provided for backward compatibility only. Check TLS support in IoT Hub to see how to configure your hub to use TLS 1.2, which provides the most security.
-### Device communication patterns
-
-Typically, IoT devices send telemetry from the sensors to back-end services in the cloud. However, other types of communication are possible, such as a back-end service sending commands to your devices. Some examples of different types of communication include the following:
+Typically, IoT devices send telemetry from the sensors to back-end services in the cloud. However, other types of communication are possible, such as a back-end service sending commands to your devices. Some examples of different types of communication include the following:
-* A refrigeration truck sending temperature every 5 minutes to an IoT Hub
-* A back-end service sending a command to a device to change the frequency at which it sends telemetry to help diagnose a problem
-* A device monitoring a batch reactor in a chemical plant, sending an alert when the temperature exceeds a certain value
+- A refrigeration truck sending temperature every 5 minutes to an IoT hub.
+- A back-end service sending a command to a device to change the frequency at which it sends telemetry to help diagnose a problem.
+- A device monitoring a batch reactor in a chemical plant, sending an alert when the temperature exceeds a certain value.
-### Device telemetry
+## Device telemetry
-Examples of telemetry received from a device can include sensor data such as speed or temperature, an error message such as missed event, or an information message to indicate the device is in good health. IoT Devices send events to an application to gain insights. Applications may require specific subsets of events for processing or storage at different endpoints.
+Examples of telemetry received from a device can include sensor data such as speed or temperature, an error message such as missed event, or an information message to indicate the device is in good health. IoT devices send events to an application to gain insights. Applications may require specific subsets of events for processing or storage at different endpoints.
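For example, a minimal telemetry sketch from the device side, assuming the `azure-iot-device` Python SDK and placeholder connection string and sensor values:

```python
# Minimal sketch: send one device-to-cloud telemetry message as JSON.
import json
from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

payload = {"temperature": 21.5, "humidity": 60}   # placeholder sensor readings
message = Message(json.dumps(payload))
message.content_type = "application/json"
message.content_encoding = "utf-8"

client.send_message(message)
client.shutdown()
```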
-### Device properties
+## Device properties
-Properties can be read or set from the IoT Hub and can be used to send notifications when an action has completed. An example of a specific property on a device is temperature. This can be a writable property that can be updated on the device or read from a temperature sensor attached to the device.
+Properties can be read or set from the IoT hub and can be used to send notifications when an action has completed. An example of a specific property on a device is temperature. Temperature can be a writable property that can be updated on the device or read from a temperature sensor attached to the device.
-You can enable properties in IoT Hub using [Device Twins](iot-hub-devguide-device-twins.md) or [Plug and Play](../iot-develop/overview-iot-plug-and-play.md).
+You can enable properties in IoT Hub using [Device twins](iot-hub-devguide-device-twins.md) or [Plug and Play](../iot-develop/overview-iot-plug-and-play.md).
To learn more about the differences between device twins and Plug and Play, see [Plug and Play](../iot-develop/concepts-digital-twin.md#device-twins-and-digital-twins).
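A rough sketch of twin properties from the device side, again assuming the `azure-iot-device` Python SDK and a placeholder connection string:

```python
# Minimal sketch: report a property value and read the device twin.
from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

# Write a reported property (for example, the current temperature).
client.patch_twin_reported_properties({"temperature": 21.5})

# Read the full twin, including desired properties set from the cloud.
twin = client.get_twin()
print(twin)

client.shutdown()
```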
-### Device commands
+## Device commands
An example of a command is rebooting a device. IoT Hub implements commands by allowing you to invoke direct methods on devices. [Direct methods](iot-hub-devguide-direct-methods.md) represent a request-reply interaction with a device similar to an HTTP call in that they succeed or fail immediately (after a user-specified timeout). This approach is useful for scenarios where the course of immediate action is different depending on whether the device was able to respond.
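As a sketch of the device side of a direct method, assuming the `azure-iot-device` Python SDK; the `reboot` method name, payloads, and connection string are illustrative placeholders:

```python
# Minimal sketch: respond to a "reboot" direct method invoked from the cloud.
from azure.iot.device import IoTHubDeviceClient, MethodResponse

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

def method_handler(request):
    if request.name == "reboot":
        # ... perform the reboot work here ...
        response = MethodResponse.create_from_method_request(
            request, 200, {"result": "rebooting"}
        )
    else:
        response = MethodResponse.create_from_method_request(
            request, 404, {"result": "unknown method"}
        )
    client.send_method_response(response)

client.on_method_request_received = method_handler

# Keep the sample running so the handler can be invoked; press Enter to exit.
input("Waiting for direct method calls...\n")
client.shutdown()
```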
-### Act on device data
+## Act on device data
-IoT Hub gives you the ability to unlock the value of your device data with other Azure services so you can shift to predictive problem-solving rather than reactive management. Connect your IoT Hub with other Azure services to do machine learning, analytics and AI to act on real-time data, optimize processing, and gain deeper insights.
+IoT Hub gives you the ability to unlock the value of your device data with other Azure services so you can shift to predictive problem-solving rather than reactive management. Connect your IoT hub with other Azure services to do machine learning, analytics, and AI to act on real-time data, optimize processing, and gain deeper insights.
-#### Built-in endpoint collects device data by default
+### Built-in endpoint collects device data by default
A built-in endpoint collects data from your device by default. The data is collected using a request-response pattern over dedicated IoT device endpoints, is available for a maximum duration of seven days, and can be used to take actions on a device. Here is the data accepted by the device endpoint:
-* Send device-to-cloud messages.
-* Receive cloud-to-device messages.
-* Initiate file uploads.
-* Retrieve and update device twin properties.
-* Receive direct method requests.
+- Send device-to-cloud messages.
+- Receive cloud-to-device messages.
+- Initiate file uploads.
+- Retrieve and update device twin properties.
+- Receive direct method requests.
For more information about IoT Hub endpoints, see [IoT Hub Dev Guide Endpoints]( iot-hub-devguide-endpoints.md#list-of-built-in-iot-hub-endpoints)
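To illustrate how a back-end service reads the device-to-cloud messages that arrive at the built-in (Event Hubs-compatible) endpoint, here's a rough sketch assuming the `azure-eventhub` Python SDK and a placeholder connection string copied from the hub's built-in endpoints settings:

```python
# Minimal sketch: read device-to-cloud messages from the built-in endpoint.
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Print each telemetry message as text; a real service would parse and store it.
    print("Partition", partition_context.partition_id, ":", event.body_as_str())

consumer = EventHubConsumerClient.from_connection_string(
    "<event-hub-compatible-connection-string>",
    consumer_group="$Default",
)

with consumer:
    consumer.receive(on_event=on_event, starting_position="-1")  # "-1" = from the start
```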
-#### Message Routing sends data to other endpoints
+### Message routing sends data to other endpoints
Data can also be routed to different services for further processing. As the IoT solution scales out, the number of devices, volume of events, variety of events, and different services also varies. A flexible, scalable, consistent, and reliable method to route events is necessary to serve this pattern. Once a message route has been created, data stops flowing to the built-in-endpoint unless a fallback route has been configured. For a tutorial showing multiple uses of message routing, see the [Routing Tutorial](tutorial-routing.md).
-IoT Hub also integrates with Event Grid which enables you to fan out data to multiple subscribers. Event Grid is a fully managed event service that enables you to easily manage events across many different Azure services and applications. Made for performance and scale, it simplifies building event-driven applications and serverless architectures. The differences between message routing and using Event Grid are explained in the [Message Routing and Event Grid Comparison](iot-hub-event-grid-routing-comparison.md)
+IoT Hub also integrates with Event Grid, which enables you to fan out data to multiple subscribers. Event Grid is a fully managed event service that enables you to easily manage events across many different Azure services and applications. Made for performance and scale, it simplifies building event-driven applications and serverless architectures. The differences between message routing and using Event Grid are explained in the [Message Routing and Event Grid Comparison](iot-hub-event-grid-routing-comparison.md).
## Next steps To try out an end-to-end IoT solution, check out the IoT Hub quickstarts:
-* [Quickstart: Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
+- [Send telemetry from a device to IoT Hub](quickstart-send-telemetry-cli.md)
+- [Send telemetry from an IoT Plug and Play device to IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)
+- [Control a device connected to an IoT hub](quickstart-control-device.md)
To learn more about the ways you can build and deploy IoT solutions with Azure IoT, visit:
-* [Fundamentals: Azure IoT technologies and solutions](../iot-fundamentals/iot-services-and-technologies.md).
+- [What is Azure IoT device and application development](../iot-develop/about-iot-develop.md)
+- [Fundamentals: Azure IoT technologies and solutions](../iot-fundamentals/iot-services-and-technologies.md)
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-file-upload.md
The following how-to guides provide complete, step-by-step instructions to uploa
The device calls the [Create File Upload SAS URI](/rest/api/iothub/device/create-file-upload-sas-uri) REST API or the equivalent API in one of the device SDKs to initiate a file upload.
-**Supported protocols**: AMQP, AMQP-WS, MQTT, MQTT-WS, and HTTPS <br/>
+**Supported protocols**: HTTPS <br/>
**Endpoint**: {iot hub}.azure-devices.net/devices/{deviceId}/files <br/> **Method**: POST
Working with Azure storage APIs is beyond the scope of this article. In addition
The device calls the [Update File Upload Status](/rest/api/iothub/device/update-file-upload-status) REST API or the equivalent API in one of the device SDKs when it completes the file upload. The device should update the file upload status with IoT Hub regardless of whether the upload succeeds or fails.
-**Supported protocols**: AMQP, AMQP-WS, MQTT, MQTT-WS, and HTTPS <br/>
+**Supported protocols**: HTTPS <br/>
**Endpoint**: {iot hub}.azure-devices.net/devices/{deviceId}/files/notifications <br/> **Method**: POST
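Putting the two calls together with the blob upload itself, a minimal device-side sketch might look like the following. It assumes the `azure-iot-device` and `azure-storage-blob` Python SDKs, a placeholder connection string, and a local file named `diagnostics.log`:

```python
# Minimal sketch: get a SAS URI from IoT Hub, upload the blob, then report the result.
from azure.iot.device import IoTHubDeviceClient
from azure.storage.blob import BlobClient

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

# 1. Ask IoT Hub for storage information (container, blob name, SAS token).
info = client.get_storage_info_for_blob("diagnostics.log")
sas_url = "https://{}/{}/{}{}".format(
    info["hostName"], info["containerName"], info["blobName"], info["sasToken"]
)

# 2. Upload the file directly to Azure Blob Storage using the SAS URI.
with open("diagnostics.log", "rb") as data:
    BlobClient.from_blob_url(sas_url).upload_blob(data, overwrite=True)

# 3. Report success (or failure) back to IoT Hub to complete the upload.
client.notify_blob_upload_status(info["correlationId"], True, 200, "Upload succeeded")
client.shutdown()
```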
Services can use notifications to manage uploads. For example, they can trigger
* [Azure Blob Storage documentation](../storage/blobs/index.yml)
-* [Azure IoT device and service SDKs](iot-hub-devguide-sdks.md) lists the various language SDKs you can use when you develop both device and service apps that interact with IoT Hub.
+* [Azure IoT device and service SDKs](iot-hub-devguide-sdks.md) lists the various language SDKs you can use when you develop both device and service apps that interact with IoT Hub.
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-openssl.md
Previously updated : 02/26/2021 Last updated : 02/24/2022 #Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to OpenSSL that I can use to generate test certificates.
subjectKeyIdentifier = hash
## Step 6 - Create a subordinate CA
-From the *subca* directory, create a new serial number in the *rootca/db/serial* file for the subordinate CA certificate.
-
-```bash
- openssl rand -hex 16 > ../rootca/db/serial
-```
-
->[!IMPORTANT]
->You must create a new serial number for every subordinate CA certificate and every device certificate that you create. Different certificates cannot have the same serial number.
- This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isnΓÇÖt strictly necessary. Having a subordinate CA does, however, mimic real world certificate hierarchies in which the root CA is kept offline and subordinate CAs issue client certificates.
-Use the configuration file to generate a private key and a certificate signing request (CSR).
+From the *subca* directory, use the configuration file to generate a private key and a certificate signing request (CSR).
```bash openssl req -new -config subca.conf -out subca.csr -keyout private/subca.key
Submit the CSR to the root CA and use the root CA to issue and sign the subordin
## Step 7 - Demonstrate proof of possession
-You now have both a root CA certificate and a subordinate CA certificate. You can use either one to sign device certificates. The one you choose must be uploaded to your IoT Hub. The following steps assume that you are using the subordinate CA certificate. To upload and register your subordinate CA certificate to your IoT Hub:
+You now have both a root CA certificate and a subordinate CA certificate. You can use either one to sign device certificates. The one you choose must be uploaded to your IoT Hub. The following steps assume that you're using the subordinate CA certificate. To upload and register your subordinate CA certificate to your IoT Hub:
1. In the Azure portal, navigate to your IoTHub and select **Settings > Certificates**.
To generate a client certificate, you must first generate a private key. The fol
openssl genpkey -out device.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048 ```
-Create a certificate signing request (CSR) for the key. You do not need to enter a challenge password or an optional company name. You must, however, enter the device ID in the common name field. You can also enter your own values for the other parameters such as **Country Name**, **Organization Name**, and so on.
+Create a certificate signing request (CSR) for the key. You don't need to enter a challenge password or an optional company name. You must, however, enter the device ID in the common name field. You can also enter your own values for the other parameters such as **Country Name**, **Organization Name**, and so on.
```bash openssl req -new -key device.key -out device.csr
Check that the CSR is what you expect.
openssl req -text -in device.csr -noout ```
-Send the CSR to the subordinate CA for signing into the certificate hierarchy. Specify `client_ext` in the `-extensions` switch. Notice that the `Basic Constraints` in the issued certificate indicate that this certificate is not for a CA. If you are signing multiple certificates, be sure to update the serial number before generating each certificate by using the openssl `rand -hex 16 > db/serial` command.
+Send the CSR to the subordinate CA for signing into the certificate hierarchy. Specify `client_ext` in the `-extensions` switch. Notice that the `Basic Constraints` in the issued certificate indicate that this certificate isn't for a CA. If you're signing multiple certificates, be sure to update the serial number before generating each certificate by using the openssl `rand -hex 16 > db/serial` command.
```bash openssl ca -config subca.conf -in device.csr -out device.crt -extensions client_ext
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
|Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md) |Azure Front Door|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-azure-key-vault-account-and-certificate) |Azure Purview|[Using credentials for source authentication in Azure Purview](../../purview/manage-credentials.md)
+|Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
> [!NOTE] > You must set up the relevant Key Vault access policies to allow the corresponding services to get access to Key Vault.
load-balancer Load Balancer Linux Cli Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-nlb.md
This Azure CLI script example creates everything needed to run several Ubuntu vi
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine/create-vm-nlb/create-vm-nlb.sh "Quick Create VM")]
- ## Clean up deployment Run the following command to remove the resource group, VM, and all related resources.
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
ms.suite: integration
Previously updated : 08/24/2021 Last updated : 02/24/2022 #Customer intent: As a developer, I want to create my first automated integration workflow by using Azure Logic Apps in the Azure portal
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Together, MLflow Tracking and Azure Machine learning allow you to track an exper
With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can do the following tasks,
-+ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models.
++ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md). + Track and manage models in MLflow and Azure Machine Learning model registry. + [Track Azure Databricks training runs](how-to-use-mlflow-azure-databricks.md).
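As a rough sketch of what connecting Azure Machine Learning as the MLflow tracking backend looks like (assuming the `azureml-core` and `mlflow` packages, an existing workspace `config.json`, and a placeholder experiment name):

```python
# Minimal sketch: point MLflow tracking at an Azure Machine Learning workspace.
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()  # reads config.json for the workspace
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("my-mlflow-experiment")  # placeholder experiment name

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.91)  # metric is recorded in the workspace
```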
-Learn more at [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
- ## Train MLflow projects (preview) [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
When you create or attach an AKS cluster, you can configure the cluster to use a
``` >[!IMPORTANT]
-> Azure Machine Learning does not support TLS termination with Internal Load Balancer. Internal Load Balancer has a private IP and that private IP could be on another network and certificate can be recused.
+> If your AKS cluster is configured with an Internal Load Balancer, using a Microsoft-provided certificate isn't supported, and you must use a [custom certificate to enable TLS](how-to-secure-web-service.md#deploy-on-azure-kubernetes-service).
>[!NOTE] > For more information about how to secure inferencing environment, please see [Secure an Azure Machine Learning Inferencing Environment](how-to-secure-inferencing-vnet.md)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
Previously updated : 09/28/2020 Last updated : 02/23/2022 # Customer intent: As a TensorFlow developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
Whether you're developing a TensorFlow model from the ground-up or you're bringi
Run this code on either of these environments: - Azure Machine Learning compute instance - no downloads or installation necessary-
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow** folder. - Your own Jupyter Notebook server
web_paths = [
'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz' ]
-dataset = Dataset.File.from_files(path=web_paths)
+dataset = Dataset.File.from_files(path = web_paths)
``` Use the `register()` method to register the data set to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script.
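A minimal registration sketch, continuing from the snippet above (it assumes the `ws` workspace object created earlier in the article; the dataset name and description are placeholders):

```python
# Minimal sketch: register the file dataset so it can be reused by name later.
dataset = dataset.register(
    workspace=ws,
    name="mnist-dataset",
    description="MNIST training and test files",
    create_new_version=True,
)
```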
For more information on compute targets, see the [what is a compute target](conc
To define the Azure ML [Environment](concept-environments.md) that encapsulates your training script's dependencies, you can either define a custom environment or use an Azure ML curated environment. #### Use a curated environment+ Azure ML provides prebuilt, curated environments if you don't want to define your own environment. Azure ML has several CPU and GPU curated environments for TensorFlow corresponding to different versions of TensorFlow. For more info, see [Azure ML Curated Environments](resource-curated-environments.md). If you want to use a curated environment, you can run the following command instead:
tf_env = Environment.get(workspace=ws, name=curated_env_name)
``` To see the packages included in the curated environment, you can write out the conda dependencies to disk:+ ```python+ tf_env.save_to_directory(path=curated_env_name) ``` Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:+ ```python+ tf_env = Environment.from_conda_specification(name='tensorflow-2.2-gpu', file_path='./conda_dependencies.yml') ``` If you had instead modified the curated environment object directly, you can clone that environment with a new name:+ ```python+ tf_env = tf_env.clone(new_name='tensorflow-2.2-gpu') ```
The [Run object](/python/api/azureml-core/azureml.core.run%28class%29) provides
run = Experiment(workspace=ws, name='Tutorial-TF-Mnist').submit(src) run.wait_for_completion(show_output=True) ```+ ### What happens during run execution+ As the run is executed, it goes through the following stages: - **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
Title: Create a Training Job with the job creation UI
-description: Learn how to use the job creation UI in Azure Machine Learning Studio to create a training job.
+description: Learn how to use the job creation UI in Azure Machine Learning studio to create a training job.
# Create a training job with the job creation UI (preview)
-There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs) with the CLI (v2) (preview)](how-to-train-cli.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning Studio.
+There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs) with the CLI (v2) (preview)](how-to-train-cli.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning studio.
## Prerequisites
Here, the source code is in the `src` subdirectory. The command would be `python
#### Inputs
-There are two ways to do input binding:
+When you use an input in the command, you need to specify the input name. To indicate an input variable, use the form `${{inputs.input_name}}`. For instance, `${{inputs.wiki}}`. You can then refer to it in the command, for instance, `--data ${{inputs.wiki}}`.
-* Input name: When you use an input in the command, you need to specify the input name. To indicate an input variable, use the form `{inputs.input_name}`. For instance, `{inputs.wiki}`. You can then refer to it in the command, for instance, `--data {inputs.wiki}`.
[![Refer input name in the command](media/how-to-train-with-ui/input-command-name.png)](media/how-to-train-with-ui/input-command-name.png)
-* Path: You can use `--data .path` to specify a cloud location. The path is what you enter in the **Path on compute** field.
-[![Refer input path in the command](media/how-to-train-with-ui/input-command-path.png)](media/how-to-train-with-ui/input-command-path.png)
-
->[!NOTE]
->In the **command to start the job**, you must add a period to the **Path on compute** value. For instance, `/data/wikitext-2` becomes `./data/wikitext-2`.
- ## Review and Create Once you've configured your job, choose **Next** to go to the **Review** page. To modify a setting, choose the pencil icon and make the change.
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-overview.md
Learn how to migrate from Studio (classic) to Azure Machine Learning. Azure Mach
This is a guide for a basic "lift and shift" migration. If you want to optimize an existing machine learning workflow, or modernize a machine learning platform, see the [Azure Machine Learning adoption framework](https://aka.ms/mlstudio-classic-migration-repo) for additional resources including digital survey tools, worksheets, and planning templates.
+Please work with your Cloud Solution Architect on the migration.
+ ![Azure ML adoption framework](./media/migrate-overview/aml-adoption-framework.png) ## Recommended approach
To migrate to Azure Machine Learning, we recommend the following approach:
1. Align an actionable Azure Machine Learning adoption plan to business outcomes. 1. Prepare people, processes, and environments for change.
+Please work with your Cloud Solution Architect to define your strategy.
+ See the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo) for planning resources including a planning doc template. ## Step 3: Rebuild your first model
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-experiment.md
After the run finishes, you can check the results of each module:
- **View Log**: View driver and system logs. Use the **70_driver_log** to see information related to your user-submitted script such as errors and exceptions. > [!IMPORTANT]
-> Designer components use open source Python packages, compared to C# packages in Studio (classic). As a result, module output may vary slightly between the designer and Studio (classic).
+> Designer components use open-source Python packages to implement machine learning algorithms. However, Studio (classic) uses a Microsoft internal C# library. Therefore, prediction results may vary between the designer and Studio (classic).
++
+## Save trained model to use in another pipeline
+
+Sometimes you may want to save the model trained in a pipeline and use it in another pipeline later. In Studio (classic), all trained models are saved in the "Trained Models" category in the module list. In the designer, trained models are automatically registered as file datasets with a system-generated name. The naming convention follows the "MD - pipeline draft name - component name - Trained model ID" pattern.
+
+To give a trained model a meaningful name, you can register the output of the **Train Model** component as a **file dataset**. Give it the name you want, for example, linear-regression-model.
+
+![Screenshot showing how to save trained model.](./media/migrate-rebuild-experiment/save-model.png)
+
+You can find the trained model in the "Dataset" category in the component list, or search for it by name. Then connect the trained model to a **Score Model** component to use it for prediction.
+
+![Screenshot showing how to find trained model.](./media/migrate-rebuild-experiment/search-model-in-list.png)
## Next steps
marketplace Add Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/add-manage-users.md
Last updated 01/20/2022
- Owner - Manager
-The **Account Settings** page in Partner Center lets you use Azure AD to manage the users, groups, and Azure AD applications that have access to your Partner Center account. Your account must have Manager-level permissions for the [work account (Azure AD tenant)](company-work-accounts.md) in which you want to add or edit users. To manage users within a different work account / tenant, you will need to sign out and then sign back in as a user with **Manager** permissions on that work account / tenant.
+The **Account Settings** page in Partner Center lets you use Azure AD to manage the users, groups, and Azure AD applications that have access to your Partner Center account. Your account must have Manager-level permissions for the [work account (Azure AD tenant)](company-work-accounts.md) in which you want to add or edit users. To manage users within a different work account/tenant, you will need to sign out and then sign back in as a user with **Manager** permissions on that work account/tenant.
After you are signed in with your work account (Azure AD tenant), you can add and manage users.
marketplace Azure Vm Use Approved Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-use-approved-base.md
description: Learn how to create a virtual machine (VM) offer from an approved b
--++ Last updated 02/23/2022
media-services Live Event Streaming Best Practices Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-streaming-best-practices-guide.md
latency:
5. **Send content that is no higher in resolution than what you plan to stream.** For example, if you're using 720p standard encoding live
- events, you send files that are already at 720p.
+ events, you send a stream that is already at 720p.
6. **Keep your framerate at 30fps or lower unless using pass-through live events.** While we support 60 fps input for live events, our
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. | | **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. | | **UEFI - Secure boot** | Not supported for migration.|
-| **Disk size** | Up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks. </br></br> For existing Azure Migrate projects, you may need to upgrade the replication provider on the Hyper-V host to the latest version to replicate large disks up to 32 TB.|
+| **Disk size** | Up to 2 TB for the OS disk; up to 4 TB for data disks.|
| **Disk number** | A maximum of 16 disks per VM.| | **Encrypted disks/volumes** | Not supported for migration.| | **RDM/passthrough disks** | Not supported for migration.|
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| Australia Southeast | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Canada East | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
+| Canada East | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | China East 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
One advantage of running your workload in Azure is its global reach. The flexibl
| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Japan West | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
+| Japan West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Korea Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Korea South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | North Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| North Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Norway East | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
One advantage of running your workload in Azure is its global reach. The flexibl
| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| UK West | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
+| UK West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server-whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
This release of Azure Database for MySQL - Single Server includes the following updates.
-**Bug fixes**
-
-The MySQL client version 8.0.27 or later is now compatible with Azure Database for MySQL - Single Server. Now you can connect form the MySQL client version 8.0.27 or later created either via mysql.exe or workbench.
-
**Known Issues** Customers in the Japan and East US regions received two maintenance notification emails for this month. The email notification sent for *05-Feb 2022* was sent by mistake, and no changes will be made to the service on that date. You can safely ignore it. We apologize for the inconvenience.
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-version-policy.md
Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.pos
## Next steps - See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md) - See Azure Database for PostgreSQL - Flexible Server [supported versions](flexible-server/concepts-supported-versions.md)-- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](hyperscale/concepts-versions.md)
+- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](hyperscale/reference-versions.md)
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-columnar.md
storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archivin
## Limitations This feature still has significant limitations. See [Hyperscale
-(Citus) limits and limitations](concepts-limits.md#columnar-storage).
+(Citus) limits and limitations](reference-limits.md#columnar-storage).
## Next steps
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
through PgBouncer, follow these steps:
> > If the checkbox does not exist, PgBouncer isn't enabled for your server > group yet. Managed PgBouncer is being rolled out to all [supported
- > regions](concepts-configuration-options.md#regions). Once
+ > regions](resources-regions.md). Once
> enabled in a region, it'll be added to existing server groups in the > region during a [scheduled > maintenance](concepts-maintenance.md) event.
through PgBouncer, follow these steps:
## Next steps
-Discover more about the [limits and limitations](concepts-limits.md)
+Discover more about the [limits and limitations](reference-limits.md)
of Hyperscale (Citus).
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-distributed-data.md
WHERE shardid = 102027;
## Next steps -- Learn how to [choose a distribution column](concepts-choose-distribution-column.md) for distributed tables.
+- Learn how to [choose a distribution column](howto-choose-distribution-column.md) for distributed tables.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
These metrics are available for Hyperscale (Citus) nodes:
||||| |active_connections|Active Connections|Count|The number of active connections to the server.| |cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
-|iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Hyperscale (Citus) throughput](concepts-configuration-options.md)|
+|iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Hyperscale (Citus) throughput](resources-compute.md)|
|memory_percent|Memory percent|Percent|The percentage of memory in use.| |network_bytes_ingress|Network In|Bytes|Network In across active connections.| |network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-nodes.md
WHERE shardid = 102027;
## Next steps -- [Determine your application's type](concepts-app-type.md) to prepare for data modeling
+- [Determine your application's type](howto-app-type.md) to prepare for data modeling
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-private-access.md
consider:
## Limits and limitations
-See Hyperscale (Citus) [limits and limitations](concepts-limits.md)
+See Hyperscale (Citus) [limits and limitations](reference-limits.md)
page. ## Next steps
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-security-overview.md
disabled.
## Limits and limitations
-See Hyperscale (Citus) [limits and limitations](concepts-limits.md)
+See Hyperscale (Citus) [limits and limitations](reference-limits.md)
page. ## Next steps
postgresql Howto App Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-type.md
+
+ Title: Determine application type - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Identify your application for effective distributed data modeling
+++++ Last updated : 07/17/2020++
+# Determining Application Type
+
+Running efficient queries on a Hyperscale (Citus) server group requires that
+tables be properly distributed across servers. The recommended distribution
+varies by the type of application and its query patterns.
+
+There are broadly two kinds of applications that work well on Hyperscale
+(Citus). The first step in data modeling is to identify which of them more
+closely resembles your application.
+
+## At a Glance
+
+| Multi-Tenant Applications | Real-Time Applications |
+|--|-|
+| Sometimes dozens or hundreds of tables in schema | Small number of tables |
+| Queries relating to one tenant (company/store) at a time | Relatively simple analytics queries with aggregations |
+| OLTP workloads for serving web clients | High ingest volume of mostly immutable data |
+| OLAP workloads that serve per-tenant analytical queries | Often centering around large table of events |
+
+## Examples and Characteristics
+
+**Multi-Tenant Application**
+
+> These are typically SaaS applications that serve other companies,
+> accounts, or organizations. Most SaaS applications are inherently
+> relational. They have a natural dimension on which to distribute data
+> across nodes: just shard by tenant\_id.
+>
+> Hyperscale (Citus) enables you to scale out your database to millions of
+> tenants without having to re-architect your application. You can keep the
+> relational semantics you need, like joins, foreign key constraints,
+> transactions, ACID, and consistency.
+>
+> - **Examples**: Websites which host store-fronts for other
+> businesses, such as a digital marketing solution, or a sales
+> automation tool.
+> - **Characteristics**: Queries relating to a single tenant rather
+> than joining information across tenants. This includes OLTP
+> workloads for serving web clients, and OLAP workloads that serve
+> per-tenant analytical queries. Having dozens or hundreds of tables
+> in your database schema is also an indicator for the multi-tenant
+> data model.
+>
+> Scaling a multi-tenant app with Hyperscale (Citus) also requires minimal
+> changes to application code. We have support for popular frameworks like Ruby
+> on Rails and Django.
+
+**Real-Time Analytics**
+
+> Applications needing massive parallelism, coordinating hundreds of cores for
+> fast results to numerical, statistical, or counting queries. By sharding and
+> parallelizing SQL queries across multiple nodes, Hyperscale (Citus) makes it
+> possible to perform real-time queries across billions of records in under a
+> second.
+>
+> Tables in real-time analytics data models are typically distributed by
+> columns like user\_id, host\_id, or device\_id.
+>
+> - **Examples**: Customer-facing analytics dashboards requiring
+> sub-second response times.
+> - **Characteristics**: Few tables, often centering around a big
+> table of device-, site- or user-events and requiring high ingest
+> volume of mostly immutable data. Relatively simple (but
+> computationally intensive) analytics queries involving several
+> aggregations and GROUP BYs.
+
+If your situation resembles either case above, then the next step is to decide
+how to shard your data in the server group. The database administrator's
+choice of distribution columns needs to match the access patterns of typical
+queries to ensure performance.
+
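+As a minimal sketch of the difference (table and column names are hypothetical), the two models mainly differ in the column you pass to the Citus `create_distributed_table` function:
+
+```sql
+-- Multi-tenant app: shard every table by the tenant identifier.
+SELECT create_distributed_table('orders', 'company_id');
+
+-- Real-time analytics app: shard the large events table by an entity identifier.
+SELECT create_distributed_table('events', 'device_id');
+```
+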
+## Next steps
+
+* [Choose a distribution
+ column](howto-choose-distribution-column.md) for tables in your
+ application to distribute data efficiently
postgresql Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-choose-distribution-column.md
+
+ Title: Choose distribution columns – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn how to choose distribution columns in common scenarios in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 12/06/2021++
+# Choose distribution columns in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Choosing each table's distribution column is one of the most important modeling decisions you'll make. Azure Database for PostgreSQL – Hyperscale (Citus) stores rows in shards based on the value of the rows' distribution column.
+
+The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features. An incorrect choice makes the system run slowly and won't support all SQL features across nodes.
+
+This article gives distribution column tips for the two most common Hyperscale (Citus) scenarios.
+
+### Multi-tenant apps
+
+The multi-tenant architecture uses a form of hierarchical database modeling to
+distribute queries across nodes in the server group. The top of the data
+hierarchy is known as the *tenant ID* and needs to be stored in a column on
+each table.
+
+Hyperscale (Citus) inspects queries to see which tenant ID they involve and finds the matching table shard. It
+routes the query to a single worker node that contains the shard. Running a query with
+all relevant data placed on the same node is called colocation.
+
+The following diagram illustrates colocation in the multi-tenant data
+model. It contains two tables, Accounts and Campaigns, each distributed
+by `account_id`. The shaded boxes represent shards. Green shards are stored
+together on one worker node, and blue shards are stored on another worker node. Notice how a join
+query between Accounts and Campaigns has all the necessary data
+together on one node when both tables are restricted to the same
+account\_id.
+
+![Multi-tenant
+colocation](../media/concepts-hyperscale-choosing-distribution-column/multi-tenant-colocation.png)
+
+To apply this design in your own schema, identify
+what constitutes a tenant in your application. Common instances include
+company, account, organization, or customer. The column name will be
+something like `company_id` or `customer_id`. Examine each of your
+queries and ask yourself, would it work if it had additional WHERE
+clauses to restrict all tables involved to rows with the same tenant ID?
+Queries in the multi-tenant model are scoped to a tenant. For
+instance, queries on sales or inventory are scoped within a certain
+store.
+
+#### Best practices
+
+- **Distribute tables by a common tenant\_id column.** For
+ instance, in a SaaS application where tenants are companies, the
+ tenant\_id is likely to be the company\_id.
+- **Convert small cross-tenant tables to reference tables.** When
+ multiple tenants share a small table of information, distribute it
+ as a reference table.
+- **Filter all application queries by tenant\_id.** Each
+  query should request information for one tenant at a time (see the sketch
+  after this list).
+
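+A minimal sketch of these practices, using hypothetical table and column names:
+
+```sql
+-- Distribute tenant-owned tables by the tenant ID.
+SELECT create_distributed_table('campaigns', 'company_id');
+
+-- Keep small, shared lookup data as a reference table on every node.
+SELECT create_reference_table('countries');
+
+-- Scope every application query to a single tenant.
+SELECT count(*)
+FROM campaigns
+WHERE company_id = 42
+  AND created_at > now() - interval '7 days';
+```
+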
+Read the [multi-tenant
+tutorial](./tutorial-design-database-multi-tenant.md) for an example of how to
+build this kind of application.
+
+### Real-time apps
+
+The multi-tenant architecture introduces a hierarchical structure
+and uses data colocation to route queries per tenant. By contrast, real-time
+architectures depend on specific distribution properties of their data
+to achieve highly parallel processing.
+
+We use "entity ID" as a term for distribution columns in the real-time
+model. Typical entities are users, hosts, or devices.
+
+Real-time queries typically ask for numeric aggregates grouped by date or
+category. Hyperscale (Citus) sends these queries to each shard for partial results and
+assembles the final answer on the coordinator node. Queries run fastest when as
+many nodes contribute as possible, and when no single node must do a
+disproportionate amount of work.
+
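+For example, assuming an `events` table already distributed by `device_id` (hypothetical names), an aggregation like the following is computed in parallel: each shard returns a partial count and the coordinator combines the results.
+
+```sql
+SELECT date_trunc('minute', event_time) AS minute,
+       count(*) AS events
+FROM events
+WHERE event_time > now() - interval '1 hour'
+GROUP BY 1
+ORDER BY 1;
+```
+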
+#### Best practices
+
+- **Choose a column with high cardinality as the distribution
+ column.** For comparison, a Status field on an order table with
+ values New, Paid, and Shipped is a poor choice of
+ distribution column. It assumes only those few values, which limits the number of shards that can hold
+ the data, and the number of nodes that can process it. Among columns
+ with high cardinality, it's also good to choose those columns that
+ are frequently used in group-by clauses or as join keys.
+- **Choose a column with even distribution.** If you distribute a
+ table on a column skewed to certain common values, data in the
+ table tends to accumulate in certain shards. The nodes that hold
+ those shards end up doing more work than other nodes.
+- **Distribute fact and dimension tables on their common columns.**
+ Your fact table can have only one distribution key. Tables that join
+ on another key won't be colocated with the fact table. Choose
+ one dimension to colocate based on how frequently it's joined and
+ the size of the joining rows.
+- **Change some dimension tables into reference tables.** If a
+ dimension table can't be colocated with the fact table, you can
+ improve query performance by distributing copies of the dimension
+  table to all of the nodes in the form of a reference table. (Both patterns
+  are shown in the sketch after this list.)
+
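+A short sketch of the colocation and reference-table patterns above, with hypothetical table names:
+
+```sql
+-- Distribute the fact table and one dimension on their shared column,
+-- so joins on device_id stay local to each worker node.
+SELECT create_distributed_table('events', 'device_id');
+SELECT create_distributed_table('devices', 'device_id', colocate_with => 'events');
+
+-- A small dimension that can't share the key becomes a reference table.
+SELECT create_reference_table('device_types');
+```
+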
+Read the [real-time dashboard
+tutorial](./tutorial-design-database-realtime.md) for an example of how to build this kind of application.
+
+### Time-series data
+
+In a time-series workload, applications query recent information while they
+archive old information.
+
+The most common mistake in modeling time-series information in Hyperscale (Citus) is to
+use the timestamp itself as a distribution column. A hash distribution based
+on time distributes times seemingly at random into different shards rather
+than keeping ranges of time together in shards. Queries that involve time
+generally reference ranges of time, for example, the most recent data. This type of
+hash distribution leads to network overhead.
+
+#### Best practices
+
+- **Don't choose a timestamp as the distribution column.** Choose a
+ different distribution column. In a multi-tenant app, use the tenant
+ ID, or in a real-time app use the entity ID.
+- **Use PostgreSQL table partitioning for time instead.** Use table
+ partitioning to break a large table of time-ordered data into
+ multiple inherited tables with each table containing different time
+ ranges. Distributing a Postgres-partitioned table in Hyperscale (Citus)
+  creates shards for the inherited tables, as sketched after this list.
+
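+A minimal sketch of the recommended layout for time-series data (table and column names are hypothetical):
+
+```sql
+-- Distribute by an entity column, partition by time.
+CREATE TABLE events (
+    device_id  bigint,
+    event_time timestamptz NOT NULL,
+    payload    jsonb
+) PARTITION BY RANGE (event_time);
+
+SELECT create_distributed_table('events', 'device_id');
+
+-- Each time range becomes its own partition (and its own set of shards).
+CREATE TABLE events_2022_02 PARTITION OF events
+    FOR VALUES FROM ('2022-02-01') TO ('2022-03-01');
+```
+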
+## Next steps
+
+- Learn how [colocation](concepts-colocation.md) between distributed data helps queries run fast.
+- Discover the distribution column of a distributed table, and other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-compute-quota.md
worker nodes.
## Next steps
-* Learn about other Hyperscale (Citus) [quotas and limits](concepts-limits.md).
+* Learn about other Hyperscale (Citus) [quotas and limits](reference-limits.md).
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-create-users.md
Permissions for the `citus` role:
extensions--even views or extensions normally visible only to superusers. * Execute monitoring functions that may take ACCESS SHARE locks on tables, potentially for a long time.
-* [Create PostgreSQL extensions](concepts-extensions.md) (because
+* [Create PostgreSQL extensions](reference-extensions.md) (because
the role is a member of `azure_pg_admin`). Notably, the `citus` role has some restrictions:
postgresql Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-grow.md
adjusted independently. Adjust the **storage** slider under **Configuration
## Next steps -- Learn more about server group [performance
- options](concepts-configuration-options.md).
+- Learn more about server group [performance options](resources-compute.md).
- [Rebalance distributed table shards](howto-scale-rebalance.md) so that all worker nodes can participate in parallel queries - See the sizes of distributed tables, and other [useful diagnostic
postgresql Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-initial.md
in total equals that of the original instance. In such scenarios we have seen
allowing smaller indices etc. The vCore count is actually the only decision. RAM allocation is currently
-determined based on vCore count, as described in the [Hyperscale (Citus)
-configuration options](concepts-configuration-options.md) page.
-The coordinator node doesn't require as much RAM as workers, but there's
-no way to choose RAM and vCores independently.
+determined based on vCore count, as described in the [compute and
+storage](resources-compute.md) page. The coordinator node doesn't require as
+much RAM as workers, but there's no way to choose RAM and vCores independently.
### Real-time analytics
the current latency for queries in your single-node database and the required
latency in Hyperscale (Citus). Divide current latency by desired latency, and round the result.
-Worker RAM: the best case would be providing enough memory that most
-the working set fits in memory. The type of queries your application uses
-affect memory requirements. You can run EXPLAIN ANALYZE on a query to determine
-how much memory it requires. Remember that vCores and RAM are scaled together
-as described in the [Hyperscale (Citus) configuration
-options](concepts-configuration-options.md) article.
+Worker RAM: the best case would be providing enough memory that most of the
+working set fits in memory. The type of queries your application uses affects
+memory requirements. You can run EXPLAIN ANALYZE on a query to determine how
+much memory it requires. Remember that vCores and RAM are scaled together as
+described in the [compute and storage](resources-compute.md) article.
## Choosing a Hyperscale (Citus) tier
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-rebalance.md
it will again say **Rebalancing is not recommended at this time**.
## Next steps -- Learn more about server group [performance
- options](concepts-configuration-options.md).
+- Learn more about server group [performance options](resources-compute.md).
- [Scale a server group](howto-scale-grow.md) up or out - See the [rebalance_table_shards](reference-functions.md#rebalance_table_shards)
postgresql Howto Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ssl-connection-security.md
+
+ Title: Transport Layer Security (TLS) - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Instructions and information to configure Azure Database for PostgreSQL - Hyperscale (Citus) and associated applications to properly use TLS connections.
+++++ Last updated : 07/16/2020+
+# Configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus)
+The Hyperscale (Citus) coordinator node requires client applications to connect with Transport Layer Security (TLS). Enforcing TLS between the database server and client applications helps keep data confidential in transit. Extra verification settings described below also protect against "man-in-the-middle" attacks.
+
+## Enforcing TLS connections
+Applications use a "connection string" to identify the destination database and settings for a connection. Different clients require different settings. To see a list of connection strings used by common clients, consult the **Connection Strings** section for your server group in the Azure portal.
+
+The TLS parameters `ssl` and `sslmode` vary based on the capabilities of the connector, for example `ssl=true` or `sslmode=require` or `sslmode=required`.
+
+## Ensure your application or framework supports TLS connections
+Some application frameworks don't enable TLS by default for PostgreSQL connections. However, without a secure connection an application can't connect to a Hyperscale (Citus) coordinator node. Consult your application's documentation to learn how to enable TLS connections.
+
+## Applications that require certificate verification for TLS connectivity
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file (.cer) to connect securely. The certificate to connect to an Azure Database for PostgreSQL - Hyperscale (Citus) is located at https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem. Download the certificate file and save it to your preferred location.
+
+> [!NOTE]
+>
+> To check the certificate's authenticity, you can verify its SHA-256
+> fingerprint using the OpenSSL command line tool:
+>
+> ```sh
+> openssl x509 -in DigiCertGlobalRootCA.crt.pem -noout -sha256 -fingerprint
+>
+> # should output:
+> # 43:48:A0:E9:44:4C:78:CB:26:5E:05:8D:5E:89:44:B4:D8:4F:96:62:BD:26:DB:25:7F:89:34:A4:43:C7:01:61
+> ```
+
+### Connect using psql
+The following example shows how to connect to your Hyperscale (Citus) coordinator node using the psql command-line utility. Use the `sslmode=verify-full` connection string setting to enforce TLS certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
+
+Below is an example of the psql connection string:
+```
+psql "sslmode=verify-full sslrootcert=DigiCertGlobalRootCA.crt.pem host=mydemoserver.postgres.database.azure.com dbname=citus user=citus password=your_pass"
+```
+> [!TIP]
+> Confirm that the value passed to `sslrootcert` matches the file path for the certificate you saved.
+
+## Next steps
+Increase security further with [Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-firewall-rules.md).
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
on all server group nodes.
Upgrading PostgreSQL causes more changes than you might imagine, because Hyperscale (Citus) will also upgrade the [database
-extensions](concepts-extensions.md), including the Citus extension.
+extensions](reference-extensions.md), including the Citus extension.
We strongly recommend you to test your application with the new PostgreSQL and Citus version before you upgrade your production environment.
works properly, upgrade the original server group.
## Next steps
-* Learn about [supported PostgreSQL versions](concepts-versions.md).
-* See [which extensions](concepts-extensions.md) are packaged with
+* Learn about [supported PostgreSQL versions](reference-versions.md).
+* See [which extensions](reference-extensions.md) are packaged with
each PostgreSQL version in a Hyperscale (Citus) server group.
postgresql Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-useful-diagnostic-queries.md
The output contains the host and port of the worker database.
Each distributed table in Hyperscale (Citus) has a "distribution column." (For more information, see [Distributed Data
-Modeling](concepts-choose-distribution-column.md).) It can be
+Modeling](howto-choose-distribution-column.md).) It can be
important to know which column it is. For instance, when joining or filtering tables, you may see error messages with hints like, "add a filter to the distribution column."
postgresql Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-extensions.md
+
+ Title: Extensions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Describes the ability to extend the functionality of your database by using extensions in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 02/24/2022+
+# PostgreSQL extensions in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html).
+
+## Use PostgreSQL extensions
+
+PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
+
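+For example, to load the `hstore` extension into the `citus` database:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS hstore;
+```
+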
+> [!NOTE]
+> If `CREATE EXTENSION` fails with a permission denied error, try the
+> `create_extension()` function instead. For instance:
+>
+> ```sql
+> SELECT create_extension('postgis');
+> ```
+
+Azure Database for PostgreSQL - Hyperscale (Citus) currently supports a subset of key extensions as listed here. Extensions other than the ones listed aren't supported. You can't create your own extension with Azure Database for PostgreSQL.
+
+## Extensions supported by Azure Database for PostgreSQL
+
+The following tables list the standard PostgreSQL extensions that are currently supported by Azure Database for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
+
+The versions of each extension installed in a server group sometimes differ based on the version of PostgreSQL (11, 12, 13, or 14). The tables list extension versions per database version.
+
+### Citus extension
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.10 | 10.0.6 | 10.2.4 | 10.2.4 |
+
+### Data types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.2.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 |
+
+### Full-text search extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Functions extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.0 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Index types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
+
+### Language extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
+
+### Miscellaneous extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
++
+### PostGIS extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.4 | 3.0.3 | 3.1.4 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.4 | 3.0.4 | 3.1.4 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.4 | 3.0.4 | 3.1.4 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.4 | 3.0.4 | 3.1.4 |
++
+## pg_stat_statements
+The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+
+The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](../howto-configure-server-parameters-using-portal.md) or the [Azure CLI](../howto-configure-server-parameters-using-cli.md).
+
+There's a tradeoff between the query execution information pg_stat_statements provides and the effect on server performance as it logs each SQL statement. If you aren't actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Some third-party monitoring services might rely on pg_stat_statements to deliver query performance insights, so confirm whether that applies to you before you disable it.
+
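+As a quick sketch of how these statistics are typically read (column names assume PostgreSQL 13 or later; earlier versions expose `total_time` instead of `total_exec_time`):
+
+```sql
+-- Top five statements by total execution time.
+SELECT query, calls, total_exec_time, rows
+FROM pg_stat_statements
+ORDER BY total_exec_time DESC
+LIMIT 5;
+```
+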
+## dblink and postgres_fdw
+
+You can use dblink and postgres\_fdw to connect from one PostgreSQL server to
+another, or to another database in the same server. The receiving server needs
+to allow connections from the sending server through its firewall. To use
+these extensions to connect between Azure Database for PostgreSQL servers or
+Hyperscale (Citus) server groups, set **Allow Azure services and resources to
+access this server group (or server)** to ON. You also need to turn this
+setting ON if you want to use the extensions to loop back to the same server.
+The **Allow Azure services and resources to access this server group** setting
+can be found in the Azure portal page for the Hyperscale (Citus) server group
+under **Networking**. Currently, outbound connections from Azure Database for
+PostgreSQL Single server and Hyperscale (Citus) aren't supported, except for
+connections to other Azure Database for PostgreSQL servers and Hyperscale
+(Citus) server groups.
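+
+As a hypothetical sketch (host, database, and credentials are placeholders), a
+`dblink` query against another server group looks like this:
+
+```sql
+SELECT *
+FROM dblink(
+       'host=other-group-c.postgres.database.azure.com port=5432 dbname=citus user=citus password=your_pass sslmode=require',
+       'SELECT count(*) FROM events')
+     AS t(events_count bigint);
+```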
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
name. The translation is useful to determine the distribution column of a
distributed table. For a more detailed discussion, see [choosing a distribution
-column](concepts-choose-distribution-column.md).
+column](howto-choose-distribution-column.md).
#### Arguments
postgresql Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-limits.md
+
+ Title: Limits and limitations – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Current limits for Hyperscale (Citus) server groups
+++++ Last updated : 01/14/2022++
+# Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations
+
+The following section describes capacity and functional limits in the
+Hyperscale (Citus) service.
+
+## Naming
+
+### Server group name
+
+A Hyperscale (Citus) server group must have a name that is 40 characters or
+shorter.
+
+## Networking
+
+### Maximum connections
+
+Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
+it's important to limit simultaneous connections. Here are the limits we chose
+to keep nodes healthy:
+
+* Coordinator node
+ * Maximum connections
+ * 300 for 0-3 vCores
+ * 500 for 4-15 vCores
+ * 1000 for 16+ vCores
+ * Maximum user connections
+ * 297 for 0-3 vCores
+ * 497 for 4-15 vCores
+ * 997 for 16+ vCores
+* Worker node
+ * Maximum connections
+ * 600
+
+Attempts to connect beyond these limits fail with an error. The system
+reserves three connections for monitoring the nodes, which is why the maximum
+number of user connections is three fewer than the total connection limit.
+
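+To see how close a node is to its limit, you can count the open client connections with a standard PostgreSQL query:
+
+```sql
+-- Count client connections (excludes background workers).
+SELECT count(*) AS open_connections
+FROM pg_stat_activity
+WHERE backend_type = 'client backend';
+```
+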
+#### Connection pooling
+
+You can scale connections further using [connection
+pooling](concepts-connection-pool.md). Hyperscale (Citus) offers a
+managed pgBouncer connection pooler configured for up to 2,000 simultaneous
+client connections.
+
+## Storage
+
+### Storage scaling
+
+Storage on coordinator and worker nodes can be scaled up (increased) but can't
+be scaled down (decreased).
+
+### Storage size
+
+Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
+available storage options and IOPS calculation on the [compute and
+storage](resources-compute.md) page for node and cluster sizes.
+
+## Compute
+
+### Subscription vCore limits
+
+Azure enforces a vCore quota per subscription per region. There are two
+independently adjustable quotas: vCores for coordinator nodes, and vCores for
+worker nodes. The default quota should be more than enough to experiment with
+Hyperscale (Citus). If you do need more vCores for a region in your
+subscription, see how to [adjust compute
+quotas](howto-compute-quota.md).
+
+## PostgreSQL
+
+### Database creation
+
+The Azure portal provides credentials to connect to exactly one database per
+Hyperscale (Citus) server group, the `citus` database. Creating another
+database is currently not allowed, and the CREATE DATABASE command will fail
+with an error.
+
+### Columnar storage
+
+Hyperscale (Citus) currently has these limitations with [columnar
+tables](concepts-columnar.md) (a brief usage sketch follows the list):
+
+* Compression is on disk, not in memory
+* Append-only (no UPDATE/DELETE support)
+* No space reclamation (for example, rolled-back transactions may still consume
+ disk space)
+* No index support, index scans, or bitmap index scans
+* No tidscans
+* No sample scans
+* No TOAST support (large values supported inline)
+* No support for ON CONFLICT statements (except DO NOTHING actions with no
+ target specified).
+* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
+* No support for serializable isolation level
+* Support for PostgreSQL server versions 12+ only
+* No support for foreign keys, unique constraints, or exclusion constraints
+* No support for logical decoding
+* No support for intra-node parallel scans
+* No support for AFTER ... FOR EACH ROW triggers
+* No UNLOGGED columnar tables
+* No TEMPORARY columnar tables
+
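+For context, a columnar table is created with the `USING columnar` clause (a minimal sketch; table names are hypothetical):
+
+```sql
+CREATE TABLE events_archive (
+    device_id  bigint,
+    event_time timestamptz,
+    payload    jsonb
+) USING columnar;
+
+-- Columnar tables are append-only: INSERT and COPY work, UPDATE and DELETE don't.
+INSERT INTO events_archive
+SELECT device_id, event_time, payload FROM events_hot;
+```
+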
+## Next steps
+
+* Learn how to [create a Hyperscale (Citus) server group in the
+ portal](quickstart-create-portal.md).
+* Learn to enable [connection pooling](concepts-connection-pool.md).
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
configuration options for:
* automating timeseries partitioning * parallelizing query execution across shards
-## SQL Functions
+## SQL functions
### Sharding
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
The supported values are:
## Next steps
-* Another form of configuration, besides server parameters, are the resource [configuration options](concepts-configuration-options.md) in a Hyperscale (Citus) server group.
+* Another form of configuration, besides server parameters, are the resource [configuration options](resources-compute.md) in a Hyperscale (Citus) server group.
* The underlying PostgreSQL data base also has [configuration parameters](http://www.postgresql.org/docs/current/static/runtime-config.html).
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
+
+ Title: Supported versions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: PostgreSQL versions available in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 10/01/2021++
+# Supported database versions in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+## PostgreSQL versions
+
+The version of PostgreSQL running in a Hyperscale (Citus) server group is
+customizable during creation. Hyperscale (Citus) currently supports the
+following major [PostgreSQL
+versions](https://www.postgresql.org/docs/release/):
+
+### PostgreSQL version 14
+
+The current minor release is 14.1. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/14/release-14-1.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 13
+
+The current minor release is 13.5. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/13/release-13-5.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 12
+
+The current minor release is 12.9. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/12/release-12-9.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 11
+
+The current minor release is 11.14. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/11/release-11-14.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 10 and older
+
+We don't support PostgreSQL version 10 and older for Azure Database for
+PostgreSQL - Hyperscale (Citus).
+
+## Citus and other extension versions
+
+Depending on which version of PostgreSQL is running in a server group,
+different [versions of PostgreSQL extensions](reference-extensions.md)
+will be installed as well. In particular, Postgres versions 12-14 come with
+Citus 10, and earlier Postgres versions come with Citus 9.5.
+
+## Next steps
+
+* See which [extensions](reference-extensions.md) are installed in
+ which versions.
+* Learn to [create a Hyperscale (Citus) server
+ group](quickstart-create-portal.md).
postgresql Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-compute.md
+
+ Title: Compute and storage – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Options for a Hyperscale (Citus) server group, including node compute and storage
+++++ Last updated : 02/23/2022++
+# Azure Database for PostgreSQL – Hyperscale (Citus) compute and storage
+
+You can select the compute and storage settings independently for
+worker nodes and the coordinator node in a Hyperscale (Citus) server
+group. Compute resources are provided as vCores, which represent
+the logical CPU of the underlying hardware. The storage size for
+provisioning refers to the capacity available to the coordinator
+and worker nodes in your Hyperscale (Citus) server group. The storage
+includes database files, temporary files, transaction logs, and
+the Postgres server logs.
+
+## Standard tier
+
+| Resource | Worker node | Coordinator node |
+|--|--|--|
+| Compute, vCores | 4, 8, 16, 32, 64 | 4, 8, 16, 32, 64 |
+| Memory per vCore, GiB | 8 | 4 |
+| Storage size, TiB | 0.5, 1, 2 | 0.5, 1, 2 |
+| Storage type | General purpose (SSD) | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | One worker node, GiB RAM | Coordinator node, GiB RAM |
+|--|--|--|
+| 4 | 32 | 16 |
+| 8 | 64 | 32 |
+| 16 | 128 | 64 |
+| 32 | 256 | 128 |
+| 64 | 432 | 256 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to each worker and coordinator node.
+
+| Storage size, TiB | Maximum IOPS |
+|-|--|
+| 0.5 | 1,536 |
+| 1 | 3,072 |
+| 2 | 6,148 |
+
+For the entire Hyperscale (Citus) cluster, the aggregated IOPS work out to the
+following values:
+
+| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 TiB, total IOPS |
+|--|--|--|--|
+| 2 | 3,072 | 6,144 | 12,296 |
+| 3 | 4,608 | 9,216 | 18,444 |
+| 4 | 6,144 | 12,288 | 24,592 |
+| 5 | 7,680 | 15,360 | 30,740 |
+| 6 | 9,216 | 18,432 | 36,888 |
+| 7 | 10,752 | 21,504 | 43,036 |
+| 8 | 12,288 | 24,576 | 49,184 |
+| 9 | 13,824 | 27,648 | 55,332 |
+| 10 | 15,360 | 30,720 | 61,480 |
+| 11 | 16,896 | 33,792 | 67,628 |
+| 12 | 18,432 | 36,864 | 73,776 |
+| 13 | 19,968 | 39,936 | 79,924 |
+| 14 | 21,504 | 43,008 | 86,072 |
+| 15 | 23,040 | 46,080 | 92,220 |
+| 16 | 24,576 | 49,152 | 98,368 |
+| 17 | 26,112 | 52,224 | 104,516 |
+| 18 | 27,648 | 55,296 | 110,664 |
+| 19 | 29,184 | 58,368 | 116,812 |
+| 20 | 30,720 | 61,440 | 122,960 |
+
+## Basic tier
+
+The Hyperscale (Citus) [basic tier](concepts-tiers.md) is a server
+group with just one node. Because there isn't a distinction between
+coordinator and worker nodes, it's less complicated to choose compute and
+storage resources.
+
+| Resource | Available options |
+|--|--|
+| Compute, vCores | 2, 4, 8 |
+| Memory per vCore, GiB | 4 |
+| Storage size, GiB | 128, 256, 512 |
+| Storage type | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | GiB RAM |
+|--|--|
+| 2 | 8 |
+| 4 | 16 |
+| 8 | 32 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to the basic tier node.
+
+| Storage size, GiB | Maximum IOPS |
+|-|--|
+| 128 | 384 |
+| 256 | 768 |
+| 512 | 1,536 |
+
+## Next steps
+
+* Learn how to [create a Hyperscale (Citus) server group in the portal](quickstart-create-portal.md)
+* Change [compute quotas](howto-compute-quota.md) for a subscription and region
postgresql Resources Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-pricing.md
+
+ Title: Pricing – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Pricing and how to save with Hyperscale (Citus)
+++++ Last updated : 02/23/2022++
+# Pricing for Azure Database for PostgreSQL – Hyperscale (Citus)
+
+## General pricing
+
+For the most up-to-date pricing information, see the service
+[pricing page](https://azure.microsoft.com/pricing/details/postgresql/).
+To see the cost for the configuration you want, the
+[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)
+shows the monthly cost on the **Configure** tab based on the options you
+select. If you don't have an Azure subscription, you can use the Azure pricing
+calculator to get an estimated price. On the
+[Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+website, select **Add items**, expand the **Databases** category, and choose
+**Azure Database for PostgreSQL – Hyperscale (Citus)** to customize the
+options.
+
+## Prepay for compute resources with reserved capacity
+
+Azure Database for PostgreSQL – Hyperscale (Citus) now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Hyperscale (Citus) reserved capacity, you make an upfront commitment on a Hyperscale (Citus) server group for a one- or three-year period to get a significant discount on the compute costs. To purchase Hyperscale (Citus) reserved capacity, you need to specify the Azure region, reservation term, and billing frequency.
+
+> [!IMPORTANT]
+> This article is about reserved capacity for Azure Database for PostgreSQL – Hyperscale (Citus). For information about reserved capacity for Azure Database for PostgreSQL – Single Server, see [Prepay for Azure Database for PostgreSQL – Single Server compute resources with reserved capacity](../concept-reserved-pricing.md).
+
+You don't need to assign the reservation to specific Hyperscale (Citus) server groups. An already running Hyperscale (Citus) server group or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one year or three years. As soon as you buy a reservation, the Hyperscale (Citus) compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
+
+A reservation doesn't cover software, networking, or storage charges associated with the Hyperscale (Citus) server groups. At the end of the reservation term, the billing benefit expires, and the Hyperscale (Citus) server groups are billed at the pay-as-you-go price. Reservations don't autorenew. For pricing information, see the [Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/hyperscale-citus/).
+
+You can buy Hyperscale (Citus) reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
+
+* You must be in the owner role for at least one Enterprise Agreement (EA) or individual subscription with pay-as-you-go rates.
+* For Enterprise Agreement subscriptions, **Add Reserved Instances** must be enabled in the [EA Portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an Enterprise Agreement admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Hyperscale (Citus) reserved capacity.
+
+For information on how Enterprise Agreement customers and pay-as-you-go customers are charged for reservation purchases, see:
+- [Understand Azure reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+- [Understand Azure reservation usage for your pay-as-you-go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md)
+
+### Determine the right server group size before purchase
+
+The size of the reservation is based on the total amount of compute used by the existing or soon-to-be-deployed coordinator and worker nodes in Hyperscale (Citus) server groups within a specific region.
+
+For example, let's suppose you're running one Hyperscale (Citus) server group with a 16 vCore coordinator and three 8 vCore worker nodes. Further, let's assume you plan to deploy an additional Hyperscale (Citus) server group with a 32 vCore coordinator and two 4 vCore worker nodes within the next month. Let's also suppose you need these resources for at least one year.
+
+In this case, purchase a one-year reservation for:
+
+* Total 16 vCores + 32 vCores = 48 vCores for coordinator nodes
+* Total 3 nodes x 8 vCores + 2 nodes x 4 vCores = 24 + 8 = 32 vCores for worker nodes
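
Before you buy, it can help to list the Hyperscale (Citus) server groups you already run in the target region so you can total their coordinator and worker vCores. The following PowerShell sketch only lists the server groups; it assumes the Az module is installed and that the resource type shown (`Microsoft.DBforPostgreSQL/serverGroupsv2`) matches your deployments, so verify both before relying on it.

```powershell
# Sketch: list Hyperscale (Citus) server groups in one region to help size a reservation.
# Assumption: "Microsoft.DBforPostgreSQL/serverGroupsv2" is the resource type used by your server groups.
Get-AzResource -ResourceType "Microsoft.DBforPostgreSQL/serverGroupsv2" |
    Where-Object { $_.Location -eq "eastus" } |
    Select-Object Name, ResourceGroupName, Location

# Review each server group's coordinator and worker vCores (for example, in the portal),
# then add them up as in the example above to choose the reservation quantities.
```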
+
+### Buy Azure Database for PostgreSQL reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Select **All services** > **Reservations**.
+1. Select **Add**. In the **Purchase reservations** pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
+1. Select the **Hyperscale (Citus) Compute** type to purchase, and click **Select**.
+1. Review the quantity for the selected compute type on the **Products** tab.
+1. Continue to the **Buy + Review** tab to finish your purchase.
+
+The following table describes required fields.
+
+| Field | Description |
+|--|--|
+| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an Enterprise Agreement subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. |
+| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select **Shared**, the vCore reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions within your billing context. For Enterprise Agreement customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator. If you select **Management group**, the reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions that are a part of both the management group and billing scope. If you select **Single subscription**, the vCore reservation discount is applied to Hyperscale (Citus) server groups in this subscription. If you select **Single resource group**, the reservation discount is applied to Hyperscale (Citus) server groups in the selected subscription and the selected resource group within that subscription. |
+| Region | The Azure region that's covered by the Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity reservation. |
+| Term | One year or three years. |
+| Quantity | The amount of compute resources being purchased within the Hyperscale (Citus) reserved capacity reservation. In particular, the number of coordinator or worker node vCores in the selected Azure region that are being reserved and which will get the billing discount. For example, if you're running (or plan to run) Hyperscale (Citus) server groups with the total compute capacity of 64 coordinator node vCores and 32 worker node vCores in the East US region, specify the quantity as 64 and 32 for coordinator and worker nodes, respectively, to maximize the benefit for all servers. |
+++
+### Cancel, exchange, or refund reservations
+
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+### vCore size flexibility
+
+vCore size flexibility helps you scale up or down coordinator and worker nodes within a region, without losing the reserved capacity benefit.
+
+### Need help? Contact us
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+The vCore reservation discount is applied automatically to the number of Hyperscale (Citus) server groups that match the Azure Database for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity reservation through the Azure portal, PowerShell, the Azure CLI, or the API.
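
For example, the following PowerShell sketch changes an existing reservation's scope to a single subscription. It assumes the Az.Reservations module is installed; the reservation order ID, reservation ID, and subscription ID are placeholders, and parameter availability can vary by module version.

```powershell
# Sketch: scope an existing reservation to a single subscription.
# The IDs below are placeholders; replace them with your own values.
Update-AzReservation `
    -ReservationOrderId "00000000-0000-0000-0000-000000000000" `
    -ReservationId "11111111-1111-1111-1111-111111111111" `
    -AppliedScopeType "Single" `
    -AppliedScope "/subscriptions/22222222-2222-2222-2222-222222222222"
```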
+
+To learn more about Azure reservations, see the following articles:
+
+* [What are Azure reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
+* [Manage Azure reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+* [Understand reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
postgresql Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-regions.md
+
+ Title: Regional availability – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Where you can run a Hyperscale (Citus) server group
++++++ Last updated : 02/23/2022++
+# Regional availability for Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Hyperscale (Citus) server groups are available in the following Azure regions:
+
+* Americas:
+ * Brazil South
+ * Canada Central
+ * Central US
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West Central US
+ * West US
+ * West US 2
+* Asia Pacific:
+ * Australia East
+ * Central India
+ * East Asia
+ * Japan East
+ * Japan West
+ * Korea Central
+ * Southeast Asia
+* Europe:
+ * France Central
+ * Germany West Central
+ * North Europe
+ * Switzerland North
+ * UK South
+ * West Europe
+
+Some of these regions may not be initially activated on all Azure
+subscriptions. If you want to use a region from the list above and don't see it
+in your subscription, or if you want to use a region not on this list, open a
+[support
+request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+**Next steps**
+
+Learn how to [create a Hyperscale (Citus) server group in the portal](quickstart-create-portal.md).
postgresql Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-shard.md
and placements. We saw a challenge of using uniqueness and foreign key
constraints, and finally saw how distributed queries work at a high level. * Read more about Hyperscale (Citus) [table types](concepts-nodes.md)
-* Get more tips on [choosing a distribution column](concepts-choose-distribution-column.md)
+* Get more tips on [choosing a distribution column](howto-choose-distribution-column.md)
* Learn the benefits of [table colocation](concepts-colocation.md)
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
This article outlines the process to register an Azure Cosmos database (SQL API)
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)|No|No** |
+| [Yes](#register) | [Yes](#scan)|[No](#scan) | [Yes](#scan)|[Yes](#scan)|No|No** |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
There is only one way to set up authentication for SQL server on-premises:
#### SQL Authentication to register
-The SQL account must have access to the **master** database. This is because the `sys.databases` is in the master database. The Azure Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+Ensure the SQL Server deployment is configured to allow SQL Server and Windows Authentication.
+
+To enable this, within SQL Server Management Studio (SSMS), navigate to "Server Properties" and change from "Windows Authentication Mode" to "SQL Server and Windows Authentication mode".
++
+A change to the server authentication mode requires a restart of the SQL Server instance and SQL Server Agent. You can trigger the restart in SSMS by right-clicking the SQL Server instance and selecting "Restart".
##### Creating a new login and user If you would like to create a new login and user to be able to scan your SQL server, follow the steps below:
+The SQL account must have access to the **master** database, because `sys.databases` is in the master database. The Azure Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+ > [!Note] > All the steps below can be executed using the code provided [here](https://github.com/Azure/Purview-Samples/blob/master/TSQL-Code-Permissions/grant-access-to-on-prem-sql-databases.sql)
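
As a condensed illustration only (not the linked sample script), the T-SQL involved can also be run from PowerShell with the `Invoke-Sqlcmd` cmdlet in the SqlServer module. The login name, password placeholder, and database name below are hypothetical, and your environment may require additional permissions beyond this sketch.

```powershell
# Condensed sketch, not the linked sample script.
# Assumptions: SqlServer PowerShell module installed; login, password, and database names are placeholders.
$server = "MyOnPremSqlServer"

# Create a SQL login and a user in master so the scanner can enumerate sys.databases.
Invoke-Sqlcmd -ServerInstance $server -Query @"
CREATE LOGIN [purview-scan-login] WITH PASSWORD = '<strong password>';
USE master;
CREATE USER [purview-scan-user] FOR LOGIN [purview-scan-login];
"@

# Grant read access in each database that will be scanned (repeat per database).
Invoke-Sqlcmd -ServerInstance $server -Query @"
USE [MyDatabase];
CREATE USER [purview-scan-user] FOR LOGIN [purview-scan-login];
EXEC sp_addrolemember 'db_datareader', 'purview-scan-user';
"@
```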
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-trusted-service-exception.md
The easiest way to test the connection is by running the Import data wizard.
+ [Connect to other Azure resources using a managed identity](search-howto-managed-identities-data-sources.md) + [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md) + [Azure Data Lake Storage Gen2 indexer](search-howto-index-azure-data-lake-storage.md)
-+ [Authenticate with Azure Active Directory](/azure/architecture/framework/security/design-identity-authentication.md)
-+ [About managed identities (Azure Active Directory)](../active-directory/managed-identities-azure-resources/overview.md)
++ [Authenticate with Azure Active Directory](/azure/architecture/framework/security/design-identity-authentication)++ [About managed identities (Azure Active Directory)](../active-directory/managed-identities-azure-resources/overview.md)
sentinel Mitre Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/mitre-coverage.md
+
+ Title: View MITRE coverage for your organization from Microsoft Sentinel | Microsoft Docs
+description: Learn how to view coverage indicator in Microsoft Sentinel for MITRE tactics that are currently covered, and available to configure, for your organization.
++ Last updated : 12/21/2021+++
+# Understand security coverage by the MITRE ATT&CK® framework
+
+> [!IMPORTANT]
+> The MITRE page in Microsoft Sentinel is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+[MITRE ATT&CK](https://attack.mitre.org/#) is a publicly accessible knowledge base of tactics and techniques that are commonly used by attackers, and is created and maintained based on real-world observations. Many organizations use the MITRE ATT&CK knowledge base to develop specific threat models and methodologies that are used to verify security status in their environments.
+
+Microsoft Sentinel analyzes ingested data, not only to [detect threats](detect-threats-built-in.md) and help you [investigate](investigate-cases.md), but also to visualize the nature and coverage of your organization's security status.
+
+This article describes how to use the **MITRE** page in Microsoft Sentinel to view the detections already active in your workspace, and those available for you to configure, to understand your organization's security coverage, based on the tactics and techniques from the MITRE ATT&CK® framework.
++
+Microsoft Sentinel is currently aligned to the MITRE ATT&CK framework, version 9.
+
+## View current MITRE coverage
+
+In Microsoft Sentinel, in the **Threat management** menu on the left, select **MITRE**. By default, both currently active scheduled query and near real-time (NRT) rules are indicated in the coverage matrix.
+
+- **Use the legend at the top-right** to understand how many detections are currently active in your workspace for a specific technique.
+
+- **Use the search bar at the top-left** to search for a specific technique in the matrix, using the technique name or ID, to view your organization's security status for the selected technique.
+
+- **Select a specific technique** in the matrix to view more details on the right. There, use the links to jump to any of the following locations:
+
+ - Select **View technique details** for more information about the selected technique in the MITRE ATT&CK framework knowledge base.
+
+ - Select links to any of the active items to jump to the relevant area in Microsoft Sentinel.
+
+## Simulate possible coverage with available detections
+
+In the MITRE coverage matrix, *simulated* coverage refers to detections that are available, but not currently configured, in your Microsoft Sentinel workspace. View your simulated coverage to understand your organization's possible security status, were you to configure all detections available to you.
+
+In Microsoft Sentinel, in the **General** menu on the left, select **MITRE**.
+
+Select items in the **Simulate** menu to simulate your organization's possible security status.
+
+- **Use the legend at the top-right** to understand how many detections, including analytics rule templates or hunting queries, are available for you to configure.
+
+- **Use the search bar at the top-left** to search for a specific technique in the matrix, using the technique name or ID, to view your organization's simulated security status for the selected technique.
+
+- **Select a specific technique** in the matrix to view more details on the right. There, use the links to jump to any of the following locations:
+
+ - Select **View technique details** for more information about the selected technique in the MITRE ATT&CK framework knowledge base.
+
+ - Select links to any of the simulation items to jump to the relevant area in Microsoft Sentinel.
+
+ For example, select **Hunting queries** to jump to the **Hunting** page. There, you'll see a filtered list of the hunting queries that are associated with the selected technique, and available for you to configure in your workspace.
+
+## Use the MITRE ATT&CK framework in analytics rules and incidents
+
+Having a scheduled rule with MITRE techniques applied running regularly in your Microsoft Sentinel workspace enhances the security status shown for your organization in the MITRE coverage matrix.
+
+- **Analytics rules**:
+
+ - When configuring analytics rules, select specific MITRE techniques to apply to your rule.
+  - When searching for analytics rules, filter the rules displayed by technique to find your rules quicker (a PowerShell sketch for listing rules and their MITRE mappings follows this list).
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md) and [Create custom analytics rules to detect threats](detect-threats-custom.md).
+
+- **Incidents**:
+
+ When incidents are created for alerts that are surfaced by rules with MITRE techniques configured, the techniques are also added to the incidents.
+
+ For more information, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md).
+
+- **Threat hunting**:
+
+ - When creating a new hunting query, select the specific tactics and techniques to apply to your query.
+ - When searching for active hunting queries, filter the queries displayed by tactics by selecting an item from the list above the grid. Select a query to see tactic and technique details on the right.
+ - When creating bookmarks, either use the technique mapping inherited from the hunting query, or create your own mapping.
+
+ For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md) and [Keep track of data during hunting with Microsoft Sentinel](bookmarks.md).
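
Outside the portal, you can also review which MITRE tactics your existing rules cover. The following is a rough PowerShell sketch using the Az.SecurityInsights module; the resource group and workspace names are placeholders, and the property holding the mapping (shown here as `Tactic`) is an assumption that may differ by module version.

```powershell
# Rough sketch: list analytics rules and their MITRE tactics.
# Assumptions: Az.SecurityInsights module installed; names below are placeholders;
# the "Tactic" property name may vary by module version.
$rg = "my-sentinel-rg"
$workspace = "my-sentinel-workspace"

Get-AzSentinelAlertRule -ResourceGroupName $rg -WorkspaceName $workspace |
    Where-Object { $_.Tactic } |
    Select-Object DisplayName, Tactic
```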
+
+## Next steps
+
+For more information, see:
+
+- [MITRE | ATT&CK framework](https://attack.mitre.org/)
+- [MITRE ATT&CK for Industrial Control Systems](https://collaborate.mitre.org/attackics/index.php/Main_Page)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## February 2022
+- [View MITRE support coverage (Public preview)](#view-mitre-support-coverage-public-preview)
- [View Azure Purview data in Microsoft Sentinel](#view-azure-purview-data-in-microsoft-sentinel-public-preview) - [Manually run playbooks based on the incident trigger (Public preview)](#manually-run-playbooks-based-on-the-incident-trigger-public-preview)+
+### View MITRE support coverage (Public preview)
+
+Microsoft Sentinel now provides a new **MITRE** page, which highlights the MITRE tactic and technique coverage you currently have, and can configure, for your organization.
+
+Select items from the **Active** and **Simulated** menus at the top of the page to view the detections currently active in your workspace, and the simulated detections available for you to configure.
+
+For example:
++
+For more information, see [Understand security coverage by the MITRE ATT&CK® framework](mitre-coverage.md).
- [Search across long time spans in large datasets (public preview)](#search-across-long-time-spans-in-large-datasets-public-preview) - [Restore archived logs from search (public preview)](#restore-archived-logs-from-search-public-preview)
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-transfers-locks-settlement.md
The typical mechanism for identifying duplicate message deliveries is by checkin
> * OS update > * Changing properties on the entity (Queue, Topic, Subscription) while holding the lock. >
-> When the lock is lost, Azure Service Bus will generate a LockLostException which will be surfaced on the client application code. In this case, the client's default retry logic should automatically kick in and retry the operation.
+> When the lock is lost, Azure Service Bus will generate a MessageLockLostException which will be surfaced on the client application code. In this case, the client's default retry logic should automatically kick in and retry the operation.
## Renew locks The default value for the lock duration is **30 seconds**. You can specify a different value for the lock duration at the queue or subscription level. The client owning the lock can renew the message lock by using methods on the receiver object. Alternatively, you can use the automatic lock-renewal feature, where you specify the time duration for which you want the lock to keep being renewed.
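
As a rough illustration of changing the lock duration at the queue level, the following PowerShell sketch assumes the Az.ServiceBus module; the resource group, namespace, and queue names are placeholders, and the exact parameter set (for example, `-InputObject`) can differ between module versions.

```powershell
# Rough sketch: raise a queue's lock duration from the default 30 seconds to 1 minute.
# Assumptions: Az.ServiceBus module installed; names are placeholders;
# the parameter used to pass the updated queue can differ by module version.
$rg = "my-resource-group"
$ns = "my-servicebus-namespace"
$queueName = "my-queue"

$queue = Get-AzServiceBusQueue -ResourceGroupName $rg -Namespace $ns -Name $queueName
$queue.LockDuration = "00:01:00"
Set-AzServiceBusQueue -ResourceGroupName $rg -Namespace $ns -Name $queueName -InputObject $queue
```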
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-region-support.md
Title: Service Connector Region Support description: Service Connector region availability and region support list--++ Previously updated : 10/29/2021 Last updated : 02/17/2021
-# Service Connector region support
+# Service Connector Preview region support
-When you create a service connection with Service Connector, the conceptual connection resource is provisioned into the same region with your compute service instance by default. This page shows the region support information and corresponding behavior of Service Connector Public Preview.
+When you create a service connection with Service Connector, the conceptual connection resource is provisioned into the same region as your compute service instance by default. This page shows the region support information and corresponding behavior of Service Connector Public Preview.
## Supported regions with regional endpoint If your compute service instance is located in one of the regions that Service Connector supports below, you can use Service Connector to create and manage service connections.
+- Australia East
+- East US
+- East US 2 EUAP
+- Japan East
+- North Europe
+- UK South
- West Central US - West Europe-- North Europe-- East US - West US 2 ## Supported regions with geographical endpoint
-Your compute service instance might be created in the region that Service Connector has geographical region support. It means that your service connection will be created in a different region from your compute instance. You will see an information banner about the region details when you create a service connection in this case. The region difference may impact your compliance, data residency, and data latency.
+Your compute service instance might be created in a region where Service Connector has geographical region support. It means that your service connection will be created in a different region from your compute instance. In such cases you will see a banner providing some details about the region when you create a service connection. The region difference may impact your compliance, data residency, and data latency.
+
+|Region | Support Region|
+|-||
+|Australia Central |Australia East |
+|Australia Southeast|Australia East |
+|Central US |West US 2 |
+|East US 2 |East US |
+|Japan West |Japan East |
+|UK West |UK South |
+|North Central US |East US |
+|West US |East US |
+|West US 3 |West US 2 |
+|South Central US |West US 2 |
-- East US 2-- West US 3-- South Central US
+## Regions not supported in the public preview
-## Not supported regions in public preview
+In regions where Service Connector isn't supported, you will still find Service Connector CLI commands and the portal node, but you won't be able to create or manage service connections. The product team is working actively to enable more regions.
-You can still see Service Connector CLI command or portal node in the region that Service Connector does support. But you cannot create or manage service connections in these regions. The product team is working actively to enable more regions.
service-fabric Service Fabric Cluster Resource Manager Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-metrics.md
serviceDescription.Metrics.Add(totalCountMetric);
await fabricClient.ServiceManager.CreateServiceAsync(serviceDescription); ```
-Powershell:
+PowerShell:
```posh New-ServiceFabricService -ApplicationName $applicationName -ServiceName $serviceName -ServiceTypeName $serviceTypeName -Stateful -MinReplicaSetSize 3 -TargetReplicaSetSize 3 -PartitionSchemeSingleton -Metric @("ConnectionCount,High,20,5","PrimaryCount,Medium,1,0","ReplicaCount,Low,1,1","Count,Low,1,1")
Let's take our previous example and see what happens when we add some custom metrics.
Let's presume that we initially created the stateful service with the following command:
-Powershell:
+PowerShell:
```posh New-ServiceFabricService -ApplicationName $applicationName -ServiceName $serviceName -ServiceTypeName $serviceTypeName -Stateful -MinReplicaSetSize 3 -TargetReplicaSetSize 3 -PartitionSchemeSingleton -Metric @("MemoryInMb,High,21,11","PrimaryCount,Medium,1,0","ReplicaCount,Low,1,1","Count,Low,1,1")
service-fabric Service Fabric Concepts Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-scalability.md
updateDescription.InstanceCount = 50;
await fabricClient.ServiceManager.UpdateServiceAsync(new Uri("fabric:/app/service"), updateDescription); ```
-Powershell:
+PowerShell:
```posh Update-ServiceFabricService -Stateless -ServiceName $serviceName -InstanceCount 50
serviceDescription.InstanceCount = -1;
await fc.ServiceManager.CreateServiceAsync(serviceDescription); ```
-Powershell:
+PowerShell:
```posh New-ServiceFabricService -ApplicationName $applicationName -ServiceName $serviceName -ServiceTypeName $serviceTypeName -Stateless -PartitionSchemeSingleton -InstanceCount "-1"
service-fabric Service Fabric Containers Volume Logging Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-volume-logging-drivers.md
The Azure Files volume driver is a [Docker volume plugin](https://docs.docker.co
* Follow the instructions in the [Azure Files documentation](../storage/files/storage-how-to-create-file-share.md) to create a file share for the Service Fabric container application to use as volume.
-* You will need [Powershell with the Service Fabric module](./service-fabric-get-started.md) or [SFCTL](./service-fabric-cli.md) installed.
+* You will need [PowerShell with the Service Fabric module](./service-fabric-get-started.md) or [SFCTL](./service-fabric-cli.md) installed.
* If you are using Hyper-V containers, the following snippets need to be added in the ClusterManifest (local cluster) or fabricSettings section in your Azure Resource Manager template (Azure cluster) or ClusterConfig.json (standalone cluster).
In the fabricSettings section in your Azure Resource Manager template (for Azure
## Deploy a sample application using Service Fabric Azure Files volume driver
-### Using Azure Resource Manager via the provided Powershell script (recommended)
+### Using Azure Resource Manager via the provided PowerShell script (recommended)
If your cluster is based in Azure, we recommend deploying applications to it using the Azure Resource Manager application resource model for ease of use and to help move towards the model of maintaining infrastructure as code. This approach eliminates the need to keep track of the app version for the Azure Files volume driver. It also enables you to maintain separate Azure Resource Manager templates for each supported OS. The script assumes you are deploying the latest version of the Azure Files application and takes parameters for OS type, cluster subscription ID, and resource group. You can download the script from the [Service Fabric download site](https://sfazfilevd.blob.core.windows.net/sfazfilevd/DeployAzureFilesVolumeDriver.zip). Note that this automatically sets the ListenPort, which is the port on which the Azure Files volume plugin listens for requests from the Docker daemon, to 19100. You can change it by adding parameter named "listenPort". Ensure that the port does not conflict with any other port that the cluster or your applications uses.
service-fabric Service Fabric Controlled Chaos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-controlled-chaos.md
The [Fault Injection and Cluster Analysis Service](./service-fabric-testability-
Chaos simulates periodic, interleaved faults (both graceful and ungraceful) throughout the cluster over extended periods of time. A graceful fault consists of a set of Service Fabric API calls, for example, restart replica fault is a graceful fault because this is a close followed by an open on a replica. Remove replica, move primary replica, move secondary replica, and move instance are the other graceful faults exercised by Chaos. Ungraceful faults are process exits, like restart node and restart code package.
-Once you have configured Chaos with the rate and the kind of faults, you can start Chaos through C#, Powershell, or REST API to start generating faults in the cluster and in your services. You can configure Chaos to run for a specified time period (for example, for one hour), after which Chaos stops automatically, or you can call StopChaos API (C#, Powershell, or REST) to stop it at any time.
+Once you have configured Chaos with the rate and the kind of faults, you can start Chaos through C#, PowerShell, or REST API to start generating faults in the cluster and in your services. You can configure Chaos to run for a specified time period (for example, for one hour), after which Chaos stops automatically, or you can call StopChaos API (C#, PowerShell, or REST) to stop it at any time.
> [!NOTE] > In its current form, Chaos induces only safe faults, which implies that in the absence of external faults a quorum loss, or data loss never occurs. >
-While Chaos is running, it produces different events that capture the state of the run at the moment. For example, an ExecutingFaultsEvent contains all the faults that Chaos has decided to execute in that iteration. A ValidationFailedEvent contains the details of a validation failure (health or stability issues) that was found during the validation of the cluster. You can invoke the GetChaosReport API (C#, Powershell, or REST) to get the report of Chaos runs. These events get persisted in a [reliable dictionary](./service-fabric-reliable-services-reliable-collections.md), which has a truncation policy dictated by two configurations: **MaxStoredChaosEventCount** (default value is 25000) and **StoredActionCleanupIntervalInSeconds** (default value is 3600). Every *StoredActionCleanupIntervalInSeconds* Chaos checks and all but the most recent *MaxStoredChaosEventCount* events, are purged from the reliable dictionary.
+While Chaos is running, it produces different events that capture the state of the run at the moment. For example, an ExecutingFaultsEvent contains all the faults that Chaos has decided to execute in that iteration. A ValidationFailedEvent contains the details of a validation failure (health or stability issues) that was found during the validation of the cluster. You can invoke the GetChaosReport API (C#, PowerShell, or REST) to get the report of Chaos runs. These events get persisted in a [reliable dictionary](./service-fabric-reliable-services-reliable-collections.md), which has a truncation policy dictated by two configurations: **MaxStoredChaosEventCount** (default value is 25000) and **StoredActionCleanupIntervalInSeconds** (default value is 3600). Every *StoredActionCleanupIntervalInSeconds* Chaos checks and all but the most recent *MaxStoredChaosEventCount* events, are purged from the reliable dictionary.
## Faults induced in Chaos Chaos generates faults across the entire Service Fabric cluster and compresses faults that are seen in months or years into a few hours. The combination of interleaved faults with the high fault rate finds corner cases that may otherwise be missed. This exercise of Chaos leads to a significant improvement in the code quality of the service.
service-fabric Service Fabric Deploy Remove Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-deploy-remove-applications.md
Title: Azure Service Fabric deployment with PowerShell
-description: Learn about removing and deploying applications in Azure Service Fabric and how to perform these actions in Powershell.
+description: Learn about removing and deploying applications in Azure Service Fabric and how to perform these actions in PowerShell.
Last updated 01/19/2018
service-fabric Service Fabric Dnsservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-dnsservice.md
The following example sets the DNS name for a stateful service to `statefulsvc.a
</Service> ```
-### Setting the DNS name for a service using Powershell
-You can set the DNS name for a service when creating it using the `New-ServiceFabricService` Powershell command. The following example creates a new stateless service with the DNS name `service1.application1`
+### Setting the DNS name for a service using PowerShell
+You can set the DNS name for a service when creating it using the `New-ServiceFabricService` PowerShell command. The following example creates a new stateless service with the DNS name `service1.application1`
```powershell New-ServiceFabricService `
service-fabric Service Fabric Linux Windows Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-linux-windows-differences.md
There are some features that are supported on Windows, but not yet on Linux. Eve
* Console redirection (not supported in Linux or Windows production clusters) * The Fault Analysis Service (FAS) on Linux * DNS service for Service Fabric services (DNS service is supported for containers on Linux)
-* CLI command equivalents of certain Powershell commands (list below, most of which apply only to standalone clusters)
+* CLI command equivalents of certain PowerShell commands (list below, most of which apply only to standalone clusters)
* [Differences in log implementation that may affect scalability](service-fabric-concepts-scalability.md#choosing-a-platform)
-## Powershell cmdlets that do not work against a Linux Service Fabric cluster
+## PowerShell cmdlets that do not work against a Linux Service Fabric cluster
* Invoke-ServiceFabricChaosTestScenario * Invoke-ServiceFabricFailoverTestScenario
service-fabric Service Fabric Package Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-package-apps.md
For a compressed application package, [uploading the application package](servic
The deployment mechanism is the same for compressed and uncompressed packages. If the package is compressed, it is stored as such in the cluster image store and it's uncompressed on the node before the application is run. The compression replaces the valid Service Fabric package with the compressed version. The folder must allow write permissions. Running compression on an already compressed package yields no changes.
-You can compress a package by running the Powershell command [Copy-ServiceFabricApplicationPackage](/powershell/module/servicefabric/copy-servicefabricapplicationpackage)
+You can compress a package by running the PowerShell command [Copy-ServiceFabricApplicationPackage](/powershell/module/servicefabric/copy-servicefabricapplicationpackage)
with the `CompressPackage` switch. You can uncompress the package with the same command, using the `UncompressPackage` switch. The following command compresses the package without copying it to the image store. You can copy a compressed package to one or more Service Fabric clusters, as needed, using [Copy-ServiceFabricApplicationPackage](/powershell/module/servicefabric/copy-servicefabricapplicationpackage)
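
As a sketch of that compression-only call (the package path is a placeholder, and the `-SkipCopy` switch is assumed to be available in your Service Fabric PowerShell module version):

```powershell
# Sketch: compress the package in place without copying it to the image store.
# Assumption: the path is a placeholder for your application package folder.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\Temp\MyApplicationType" -CompressPackage -SkipCopy
```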
service-fabric Service Fabric Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-powershell-samples.md
Title: Azure PowerShell Samples - Service Fabric
-description: Learn about the creation and management of Azure Service Fabric clusters, apps, and services using Powershell.
+description: Learn about the creation and management of Azure Service Fabric clusters, apps, and services using PowerShell.
Last updated 11/29/2018
service-fabric Service Fabric Reliable Services Reliable Collections Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-serialization.md
This way, each version can read as much as it can and jump over the remaining part o
* [Serialization and upgrade](service-fabric-application-upgrade-data-serialization.md) * [Developer reference for Reliable Collections](/dotnet/api/microsoft.servicefabric.data.collections#microsoft_servicefabric_data_collections) * [Upgrading your Application Using Visual Studio](service-fabric-application-upgrade-tutorial.md) walks you through an application upgrade using Visual Studio.
- * [Upgrading your Application Using Powershell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
+ * [Upgrading your Application Using PowerShell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
* Control how your application upgrades by using [Upgrade Parameters](service-fabric-application-upgrade-parameters.md). * Learn how to use advanced functionality while upgrading your application by referring to [Advanced Topics](service-fabric-application-upgrade-advanced.md). * Fix common problems in application upgrades by referring to the steps in [Troubleshooting Application Upgrades](service-fabric-application-upgrade-troubleshooting.md).
service-fabric Service Fabric Testability Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-testability-scenarios.md
class Test
PowerShell
-The Service Fabric Powershell module includes two ways to begin a chaos scenario. `Invoke-ServiceFabricChaosTestScenario` is client-based, and if the client machine is shutdown midway through the test, no further faults will be introduced. Alternatively, there is a set of commands meant to keep the test running in the event of machine shutdown. `Start-ServiceFabricChaos` uses a stateful and reliable system service called FaultAnalysisService, ensuring faults will remain introduced until the TimeToRun is up. `Stop-ServiceFabricChaos` can be used to manually stop the scenario, and `Get-ServiceFabricChaosReport` will obtain a report. For more information see the [Azure Service Fabric Powershell reference](/powershell/module/ServiceFabric/New-ServiceFabricService?preserve-view=true&view=azureservicefabricps) and [Inducing controlled chaos in Service Fabric clusters](service-fabric-controlled-chaos.md).
+The Service Fabric PowerShell module includes two ways to begin a chaos scenario. `Invoke-ServiceFabricChaosTestScenario` is client-based, and if the client machine is shutdown midway through the test, no further faults will be introduced. Alternatively, there is a set of commands meant to keep the test running in the event of machine shutdown. `Start-ServiceFabricChaos` uses a stateful and reliable system service called FaultAnalysisService, ensuring faults will remain introduced until the TimeToRun is up. `Stop-ServiceFabricChaos` can be used to manually stop the scenario, and `Get-ServiceFabricChaosReport` will obtain a report. For more information see the [Azure Service Fabric PowerShell reference](/powershell/module/ServiceFabric/New-ServiceFabricService?preserve-view=true&view=azureservicefabricps) and [Inducing controlled chaos in Service Fabric clusters](service-fabric-controlled-chaos.md).
```powershell $connection = "localhost:19000"
service-fabric Service Fabric Tutorial Dotnet App Enable Https Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md
First, export the certificate to a PFX file. Open the certlm.msc application and
In the export wizard, choose **Yes, export the private key** and choose the Personal Information Exchange (PFX) format. Export the file to *C:\Users\sfuser\votingappcert.pfx*.
-Next, install the certificate on the remote cluster using [these provided Powershell scripts](./scripts/service-fabric-powershell-add-application-certificate.md).
+Next, install the certificate on the remote cluster using [these provided PowerShell scripts](./scripts/service-fabric-powershell-add-application-certificate.md).
> [!Warning] > A self-signed certificate is sufficient for development and testing applications. For production applications, use a certificate from a [certificate authority (CA)](https://wikipedia.org/wiki/Certificate_authority) instead of a self-signed certificate.
service-fabric Service Fabric Understand And Troubleshoot With System Health Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-understand-and-troubleshoot-with-system-health-reports.md
The warning report is automatically cleared when all seed nodes become healthy.
For clusters running a Service Fabric version older than 6.5, the warning report needs to be cleared manually. **Users should make sure all the seed nodes become healthy before clearing the report**: if the seed node is Down, users need to bring that seed node up; if the seed node is Removed or Unknown, that seed node needs to be removed from the cluster.
-After all the seed nodes become healthy, use following command from Powershell to [clear the warning report](/powershell/module/servicefabric/send-servicefabricclusterhealthreport):
+After all the seed nodes become healthy, use the following PowerShell command to [clear the warning report](/powershell/module/servicefabric/send-servicefabricclusterhealthreport):
```powershell PS C:\> Send-ServiceFabricClusterHealthReport -SourceId "System.FM" -HealthProperty "SeedNodeStatus" -HealthState OK
service-fabric Service Fabric View Entities Aggregated Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-view-entities-aggregated-health.md
Read more about the [Service Fabric application upgrade](service-fabric-applicat
## Use health evaluations to troubleshoot Whenever there is an issue with the cluster or an application, look at the cluster or application health to pinpoint what is wrong. The unhealthy evaluations provide details about what triggered the current unhealthy state. If you need to, you can drill down into unhealthy child entities to identify the root cause.
-For example, consider an application unhealthy because there is an error report on one of its replicas. The following Powershell cmdlet shows the unhealthy evaluations:
+For example, consider an application unhealthy because there is an error report on one of its replicas. The following PowerShell cmdlet shows the unhealthy evaluations:
```powershell PS D:\ServiceFabric> Get-ServiceFabricApplicationHealth fabric:/WordCount -EventsFilter None -ServicesFilter None -DeployedApplicationsFilter None -ExcludeHealthStatistics
site-recovery Site Recovery Ipconfig Cmdlet Parameter Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-ipconfig-cmdlet-parameter-deprecation.md
This article describes the deprecation, the corresponding implications, and the
Configuring Primary IP Config settings for Failover or Test Failover.
-This cmdlet impacts all the customers of Azure to Azure DR scenario using the cmdlet New-AzRecoveryServicesAsrVMNicConfig in Version _Az Powershell 5.9.0 and above_.
+This cmdlet impacts all the customers of Azure to Azure DR scenario using the cmdlet New-AzRecoveryServicesAsrVMNicConfig in Version _Az PowerShell 5.9.0 and above_.
> [!IMPORTANT] > Customers are advised to take the remediation steps at the earliest to avoid any disruption to their environment.
static-web-apps Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-cli.md
Now that the repository is created, you can create a static web app from the Azu
--login-with-github ```
- # [React](#tab/react)
+ # [Blazor](#tab/blazor)
```azurecli az staticwebapp create \
Now that the repository is created, you can create a static web app from the Azu
--source https://github.com/$GITHUB_USER_NAME/my-first-static-web-app \ --location "eastus2" \ --branch main \
- --app-location "/" \
- --output-location "build" \
+ --app-location "Client" \
+ --output-location "wwwroot" \
--login-with-github ```
- # [Vue](#tab/vue)
+ # [React](#tab/react)
```azurecli az staticwebapp create \
Now that the repository is created, you can create a static web app from the Azu
--source https://github.com/$GITHUB_USER_NAME/my-first-static-web-app \ --location "eastus2" \ --branch main \
- --app-location "/" \
- --output-location "dist" \
+ --app-location "/" \
+ --output-location "build" \
--login-with-github ```
- # [Blazor](#tab/blazor)
+ # [Vue](#tab/vue)
```azurecli az staticwebapp create \
Now that the repository is created, you can create a static web app from the Azu
--source https://github.com/$GITHUB_USER_NAME/my-first-static-web-app \ --location "eastus2" \ --branch main \
- --app-location "Client" \
- --output-location "wwwroot" \
+ --app-location "/" \
+ --output-location "dist" \
--login-with-github ```+ > [!IMPORTANT]
static-web-apps Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-portal.md
After you sign in with GitHub, enter the repository information.
1. Leave the _Api location_ box empty. 1. Type **dist/angular-basic** in the _App artifact location_ box.
+ # [Blazor](#tab/blazor)
+
+ 1. Select **Blazor** from the _Build Presets_ dropdown.
+ 1. Keep the default value of **Client** in the _App location_ box.
+ 1. Leave the _Api location_ box empty.
+ 1. Keep the default value of **wwwroot** in the _App artifact location_ box.
+ # [React](#tab/react) 1. Select **React** from the _Build Presets_ dropdown.
After you sign in with GitHub, enter the repository information.
1. Keep the default value in the _App location_ box. 1. Leave the _Api location_ box empty. 1. Keep the default value in the _App artifact location_ box.
-
- # [Blazor](#tab/blazor)
-
- 1. Select **Blazor** from the _Build Presets_ dropdown.
- 1. Keep the default value of **Client** in the _App location_ box.
- 1. Leave the _Api location_ box empty.
- 1. Keep the default value of **wwwroot** in the _App artifact location_ box.
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md
If you don't already have the [Azure Static Web Apps extension for Visual Studio
:::image type="content" source="media/getting-started/extension-angular.png" alt-text="Application build output location: Angular":::
+ # [Blazor](#tab/blazor)
+
+ :::image type="content" source="media/getting-started/extension-presets-blazor.png" alt-text="A screenshot showing the application presets for Blazor":::
+
+ Enter **Client** as the location for the application files, since this is the root folder of the Blazor project.
+
+ Enter **wwwroot** as the build output location.
+ # [React](#tab/react) :::image type="content" source="media/getting-started/extension-presets-react.png" alt-text="Application presets: React":::
If you don't already have the [Azure Static Web Apps extension for Visual Studio
Enter **dist** as the build output location.
- # [Blazor](#tab/blazor)
-
- :::image type="content" source="media/getting-started/extension-presets-blazor.png" alt-text="A screenshot showing the application presets for Blazor":::
-
- Enter **Client** as the location for the application files, since this is the root folder of the Blazor project.
-
- Enter **wwwroot** as the build output location.
- 1. Once the app is created, a confirmation notification is shown in Visual Studio Code.
storage Encryption Customer Provided Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-customer-provided-keys.md
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 08/18/2021 Last updated : 02/24/2022
Data sets have unique lifecycles. Early in the lifecycle, people access some dat
With the lifecycle management policy, you can: - Transition blobs from cool to hot immediately when they are accessed, to optimize for performance.-- Transition blobs, blob versions, and blob snapshots to a cooler storage tier if these objects have not been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.-- Delete blobs, blob versions, and blob snapshots at the end of their lifecycles.
+- Transition current versions of a blob, previous versions of a blob, or blob snapshots to a cooler storage tier if these objects have not been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
+- Delete current versions of a blob, previous versions of a blob, or blob snapshots at the end of their lifecycles.
- Define rules to be run once per day at the storage account level. - Apply rules to containers or to a subset of blobs, using name prefixes or [blob index tags](storage-manage-find-blobs.md) as filters.
The following sample rule filters the account to run the actions on objects that
- Tier blob to cool tier 30 days after last modification - Tier blob to archive tier 90 days after last modification - Delete blob 2,555 days (seven years) after last modification-- Delete previous blob versions 90 days after creation
+- Delete previous versions 90 days after creation
```json {
The following sample rule filters the account to run the actions on objects that
} ```
+> [!NOTE]
+> The **baseBlob** element in a lifecycle management policy refers to the current version of a blob. The **version** element refers to a previous version.
+ ### Rule filters Filters limit rule actions to a subset of blobs within the storage account. If more than one filter is defined, a logical `AND` runs on all filters.
To learn more about the blob index feature together with known issues and limita
Actions are applied to the filtered blobs when the run condition is met.
-Lifecycle management supports tiering and deletion of blobs, previous blob versions, and blob snapshots. Define at least one action for each rule on base blobs, previous blob versions, or blob snapshots.
+Lifecycle management supports tiering and deletion of current versions, previous versions, and blob snapshots. Define at least one action for each rule.
-| Action | Base Blob | Snapshot | Version
+| Action | Current Version | Snapshot | Previous Versions |
|--|--|--|--|
| tierToCool | Supported for `blockBlob` | Supported | Supported |
| enableAutoTierToHotFromCool | Supported for `blockBlob` | Not supported | Not supported |
Lifecycle management supports tiering and deletion of blobs, previous blob versi
> [!NOTE] > If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, action `delete` is cheaper than action `tierToArchive`. Action `tierToArchive` is cheaper than action `tierToCool`.
-The run conditions are based on age. Base blobs use the last modified time, blob versions use the version creation time, and blob snapshots use the snapshot creation time to track age.
+The run conditions are based on age. Current versions use the last modified time or last access time, previous versions use the version creation time, and blob snapshots use the snapshot creation time to track age.
| Action run condition | Condition value | Description | |--|--|--|
-| daysAfterModificationGreaterThan | Integer value indicating the age in days | The condition for base blob actions |
-| daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for blob version and blob snapshot actions |
-| daysAfterLastAccessTimeGreaterThan | Integer value indicating the age in days | The condition for base blob actions when access tracking is enabled |
+| daysAfterModificationGreaterThan | Integer value indicating the age in days | The condition for actions on a current version of a blob |
+| daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for actions on a previous version of a blob or a blob snapshot |
+| daysAfterLastAccessTimeGreaterThan | Integer value indicating the age in days | The condition for a current version of a blob when access tracking is enabled |
## Examples of lifecycle policies
Some data should only be expired if explicitly marked for deletion. You can conf
} ```
-### Manage versions
+### Manage previous versions
For data that is modified and accessed regularly throughout its lifetime, you can enable blob storage versioning to automatically maintain previous versions of an object. You can create a policy to tier or delete previous versions. The version age is determined by evaluating the version creation time. This policy rule tiers previous versions within container `activedata` that are 90 days or older after version creation to cool tier, and deletes previous versions that are 365 days or older.
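
A rough PowerShell sketch of such a rule follows. It assumes an Az.Storage version that supports the `-BlobVersionAction` parameter; the resource group, account, and container names are placeholders.

```powershell
# Rough sketch: tier previous versions to cool after 90 days and delete them after 365 days.
# Assumptions: Az.Storage supports -BlobVersionAction; names below are placeholders.
$rgName = "my-resource-group"
$accountName = "mystorageaccount"

$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "activedata/" -BlobType blockBlob
$action = Add-AzStorageAccountManagementPolicyAction -BlobVersionAction TierToCool -DaysAfterCreationGreaterThan 90
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BlobVersionAction Delete -DaysAfterCreationGreaterThan 365
$rule = New-AzStorageAccountManagementPolicyRule -Name "manage-previous-versions" -Filter $filter -Action $action

Set-AzStorageAccountManagementPolicy -ResourceGroupName $rgName -StorageAccountName $accountName -Rule $rule
```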
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
Previously updated : 07/23/2021 Last updated : 02/23/2022
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-container-overview.md
Previously updated : 07/06/2021 Last updated : 02/23/2022
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 11/16/2021 Last updated : 02/24/2022
The items that appear in these tables will change over time as support continues
## Standard general-purpose v2 accounts
-| Storage feature | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
+| Storage feature | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
||-|||--| | [Access tier - archive](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Access tier - cold](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) |
The items that appear in these tables will change over time as support continues
| [Change feed](storage-blob-change-feed.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Custom domains](storage-custom-domain-name.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | | [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Encryption scopes](encryption-scope-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Immutable storage](immutable-storage-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
The items that appear in these tables will change over time as support continues
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| [Change feed](storage-blob-change-feed.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Custom domains](storage-custom-domain-name.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | | [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Encryption scopes](encryption-scope-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Immutable storage](immutable-storage-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
The items that appear in these tables will change over time as support continues
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> <sup>3</sup> | ![No](../media/icons/no-icon.png)| ![No](../media/icons/no-icon.png) | | [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
storage Storage Troubleshoot Linux File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-linux-file-connection-problems.md
To close open handles for a file share, directory or file, use the [Close-AzStor
- [Fpart](https://github.com/martymac/fpart) - Sorts files and packs them into partitions. - [Fpsync](https://github.com/martymac/fpart/blob/master/tools/fpsync) - Uses Fpart and a copy tool to spawn multiple instances to migrate data from src_dir to dst_url. - [Multi](https://github.com/pkolano/mutil) - Multi-threaded cp and md5sum based on GNU coreutils.-- Setting the file size in advance, instead of making every write an extending write, helps improve copy speed in scenarios where the file size is known. If extending writes need to be avoided, you can set a destination file size with `truncate - size <size><file>` command. After that, `dd if=<source> of=<target> bs=1M conv=notrunc`command will copy a source file without having to repeatedly update the size of the target file. For example, you can set the destination file size for every file you want to copy (assume a share is mounted under /mnt/share):
- - `$ for i in `` find * -type f``; do truncate --size ``stat -c%s $i`` /mnt/share/$i; done`
- - and then - copy files without extending writes in parallel: `$find * -type f | parallel -j6 dd if={} of =/mnt/share/{} bs=1M conv=notrunc`
+- Setting the file size in advance, instead of making every write an extending write, helps improve copy speed in scenarios where the file size is known. If extending writes need to be avoided, you can set a destination file size with the `truncate --size <size> <file>` command. After that, the `dd if=<source> of=<target> bs=1M conv=notrunc` command will copy a source file without having to repeatedly update the size of the target file. For example, you can set the destination file size for every file you want to copy (assume a share is mounted under /mnt/share):
+ - `for i in $(find * -type f); do truncate --size $(stat -c%s $i) /mnt/share/$i; done`
+ - and then copy files without extending writes in parallel: `find * -type f | parallel -j6 dd if={} of=/mnt/share/{} bs=1M conv=notrunc`
<a id="error115"></a> ## "Mount error(115): Operation now in progress" when you mount Azure Files by using SMB 3.x
The force flag **f** in COPYFILE results in executing **cp -p -f** on Unix. This
Use the storage account user for copying the files: -- `Useadd : [storage account name]`-- `Passwd [storage account name]`-- `Su [storage account name]`-- `Cp -p filename.txt /share`
+- `str_acc_name=[storage account name]`
+- `sudo useradd $str_acc_name`
+- `sudo passwd $str_acc_name`
+- `su $str_acc_name`
+- `cp -p filename.txt /share`
## ls: cannot access '&lt;path&gt;': Input/output error
sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,pa
## Need help? Contact support.
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
The Azure Virtual Desktop agent updates at least once per month.
Here's what's changed in the Azure Virtual Desktop Agent: -- Version 1.0.4009.1500: This update was released in January 2022 and includes the following changes.
+- Version 1.0.4119.1500: This update was released in February 2022 and includes the following changes:
+ - Fixes an issue with arithmetic overflow casting exceptions.
+ - Updated the agent to start the Azure Instance Metadata Service (IMDS) when the agent starts.
+ - Fixes an issue that caused Sandero named pipe service startups to be slow when the VM has no registration information.
+ - General bug fixes and agent improvements.
+- Version 1.0.4009.1500: This update was released in January 2022 and includes the following changes:
- Added logging to better capture agent update telemetry. - Updated the agent's Azure Instance Metadata Service health check to be Azure Stack HCI-friendly - Version 1.0.3855.1400: This update was released December 2021 and has the following changes:
virtual-machines Migration Classic Resource Manager Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-deep-dive.md
You can find the classic deployment model and Resource Manager representations o
| Multiple network interfaces on a VM |Network interfaces |If a VM has multiple network interfaces associated with it, each network interface becomes a top-level resource as part of the migration, along with all the properties. | | Load-balanced endpoint set |Load balancer |In the classic deployment model, the platform assigned an implicit load balancer for every cloud service. During migration, a new load-balancer resource is created, and the load-balancing endpoint set becomes load-balancer rules. | | Inbound NAT rules |Inbound NAT rules |Input endpoints defined on the VM are converted to inbound network address translation rules under the load balancer during the migration. |
-| VIP address |Public IP address with DNS name |The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it. |
+| VIP address |Public IP address with DNS name |The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it. To retain the IP, you can [convert it to a Reserved IP](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#reserve-the-ip-address-of-an-existing-cloud-service) before migration. There will be about 60 seconds of downtime during this change. |
| Virtual network |Virtual network |The virtual network is migrated, with all its properties, to the Resource Manager deployment model. A new resource group is created with the name `-migrated`. | | Reserved IPs |Public IP address with static allocation method |Reserved IPs associated with the load balancer are migrated, along with the migration of the cloud service or the virtual machine. Unassociated reserved IPs can be migrated using [Move-AzureReservedIP](/powershell/module/servicemanagement/azure.service/move-azurereservedip). | | Public IP address per VM |Public IP address with dynamic allocation method |The public IP address associated with the VM is converted as a public IP address resource, with the allocation method set to dynamic. |
virtual-machines Migration Classic Resource Manager Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md
This article catalogs the most common errors and mitigations during the migratio
| Error string | Mitigation | | | |
-| Internal server error |In some cases, this is a transient error that goes away with a retry. If it continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) as it needs investigation of platform logs. <br><br> **NOTE:** Once the incident is tracked by the support team, please do not attempt any self-mitigation as this might have unintended consequences on your environment. |
-| Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it is a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, please remove the web/worker role from the deployment and try migration again. |
-| Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, please [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, please do not attempt any self-mitigation as this might have unintended consequences on your environment. |
-| The virtual network {virtual-network-name} does not exist. |This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * \<VNET name>" |
-| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which is not supported in Azure Resource Manager. It is recommended to uninstall it from the VM before continuing with migration. |XML extensions such as BGInfo 1.\* are not supported in Azure Resource Manager. Therefore, these extensions cannot be migrated. If these extensions are left installed on the virtual machine, they are automatically uninstalled before completing the migration. |
+| Internal server error |In some cases, this is a transient error that goes away with a retry. If it continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) as it needs investigation of platform logs. <br><br> **NOTE:** Once the incident is tracked by the support team, please don't attempt any self-mitigation as this might have unintended consequences on your environment. |
+| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it's a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, please remove the web/worker role from the deployment and try migration again. |
+| Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, please [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, please don't attempt any self-mitigation as this might have unintended consequences on your environment. |
+| The virtual network {virtual-network-name} doesn't exist. |This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * \<VNET name>" |
+| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. If these extensions are left installed on the virtual machine, they're automatically uninstalled before completing the migration. |
| VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete |This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, please follow the workaround at https://aka.ms/vmbackupmigration |
-| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status is not being reported from the VM. Hence, this VM cannot be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration. <br><br> VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM cannot be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration. <br><br> VM Agent for VM {vm-name} in HostedService {hosted-service-name} is reporting the overall agent status as Not Ready. Hence, the VM may not be migrated, if it has a migratable extension. Ensure that the VM Agent is reporting overall agent status as Ready. Refer to https://aka.ms/classiciaasmigrationfaqs. |Azure guest agent & VM Extensions need outbound internet access to the VM storage account to populate their status. Common causes of status failure include <li> a Network Security Group that blocks outbound access to the internet <li> If the VNET has on premises DNS servers and DNS connectivity is lost <br><br> If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration. |
-| Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availabilities Sets. |Currently, only hosted services that have 1 or less Availability sets can be migrated. To work around this problem, move the additional availability sets, and Virtual machines in those availability sets, to a different hosted service. |
-| Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name because it has VMs that are not part of the Availability Set even though the HostedService contains one. |The workaround for this scenario is to either move all the virtual machines into a single Availability set or remove all Virtual machines from the Availability set in the hosted service. |
+| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status isn't being reported from the VM. Hence, this VM can't be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration. <br><br> VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM can't be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration. <br><br> VM Agent for VM {vm-name} in HostedService {hosted-service-name} is reporting the overall agent status as Not Ready. Hence, the VM may not be migrated, if it has a migratable extension. Ensure that the VM Agent is reporting overall agent status as Ready. Refer to https://aka.ms/classiciaasmigrationfaqs. |Azure guest agent & VM Extensions need outbound internet access to the VM storage account to populate their status. Common causes of status failure include <li> a Network Security Group that blocks outbound access to the internet <li> If the VNET has on premises DNS servers and DNS connectivity is lost <br><br> If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration. |
+| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availability Sets. |Currently, only hosted services that have one or fewer Availability sets can be migrated. To work around this problem, move the additional availability sets, and Virtual machines in those availability sets, to a different hosted service. |
+| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has VMs that aren't part of the Availability Set even though the HostedService contains one. |The workaround for this scenario is to either move all the virtual machines into a single Availability set or remove all Virtual machines from the Availability set in the hosted service. |
| Storage account/HostedService/Virtual Network {virtual-network-name} is in the process of being migrated and hence cannot be changed |This error happens when the "Prepare" migration operation has been completed on the resource and an operation that would make a change to the resource is triggered. Because of the lock on the management plane after "Prepare" operation, any changes to the resource are blocked. To unlock the management plane, you can run the "Commit" migration operation to complete migration or the "Abort" migration operation to roll back the "Prepare" operation. |
-| Migration is not allowed for HostedService {hosted-service-name} because it has VM {vm-name} in State: RoleStateUnknown. Migration is allowed only when the VM is in one of the following states - Running, Stopped, Stopped Deallocated. |The VM might be undergoing through a state transition, which usually happens when during an update operation on the HostedService such as a reboot, extension installation etc. It is recommended for the update operation to complete on the HostedService before trying migration. |
-| Deployment {deployment-name} in HostedService {hosted-service-name} contains a VM {vm-name} with Data Disk {data-disk-name} whose physical blob size {size-of-the-vhd-blob-backing-the-data-disk} bytes does not match the VM Data Disk logical size {size-of-the-data-disk-specified-in-the-vm-api} bytes. Migration will proceed without specifying a size for the data disk for the Azure Resource Manager VM. | This error happens if you've resized the VHD blob without updating the size in the VM API model. Detailed mitigation steps are outlined [below](#vm-with-data-disk-whose-physical-blob-size-bytes-does-not-match-the-vm-data-disk-logical-size-bytes).|
+| Migration isn't allowed for HostedService {hosted-service-name} because it has VM {vm-name} in State: RoleStateUnknown. Migration is allowed only when the VM is in one of the following states - Running, Stopped, Stopped Deallocated. |The VM might be undergoing a state transition, which usually happens during an update operation on the HostedService such as a reboot, extension installation etc. It's recommended that the update operation completes on the HostedService before trying migration. |
+| Deployment {deployment-name} in HostedService {hosted-service-name} contains a VM {vm-name} with Data Disk {data-disk-name} whose physical blob size {size-of-the-vhd-blob-backing-the-data-disk} bytes doesn't match the VM Data Disk logical size {size-of-the-data-disk-specified-in-the-vm-api} bytes. Migration will proceed without specifying a size for the data disk for the Azure Resource Manager VM. | This error happens if you've resized the VHD blob without updating the size in the VM API model. Detailed mitigation steps are outlined [below](#vm-with-data-disk-whose-physical-blob-size-bytes-does-not-match-the-vm-data-disk-logical-size-bytes).|
| A storage exception occurred while validating data disk {data disk name} with media link {data disk Uri} for VM {VM name} in Cloud Service {Cloud Service name}. Please ensure that the VHD media link is accessible for this virtual machine | This error can happen if the disks of the VM have been deleted or are not accessible anymore. Please make sure the disks for the VM exist.|
-| VM {vm-name} in HostedService {cloud-service-name} contains Disk with MediaLink {vhd-uri} which has blob name {vhd-blob-name} that is not supported in Azure Resource Manager. | This error occurs when the name of the blob has a "/" in it which is not supported in Compute Resource Provider currently. |
-| Migration is not allowed for Deployment {deployment-name} in HostedService {cloud-service-name} as it is not in the regional scope. Please refer to https:\//aka.ms/regionalscope for moving this deployment to regional scope. | In 2014, Azure announced that networking resources will move from a cluster level scope to regional scope. See [https://aka.ms/regionalscope](https://aka.ms/regionalscope) for more details. This error happens when the deployment being migrated has not had an update operation, which automatically moves it to a regional scope. The best work-around is to either add an endpoint to a VM, or a data disk to the VM, and then retry migration. <br> See [How to set up endpoints on a classic virtual machine in Azure](/previous-versions/azure/virtual-machines/windows/classic/setup-endpoints#create-an-endpoint) or [Attach a data disk to a virtual machine created with the classic deployment model](./linux/attach-disk-portal.md)|
-| Migration is not supported for Virtual Network {vnet-name} because it has non-gateway PaaS deployments. | This error occurs when you have non-gateway PaaS deployments such as Application Gateway or API Management services that are connected to the Virtual Network.|
+| VM {vm-name} in HostedService {cloud-service-name} contains Disk with MediaLink {vhd-uri} which has blob name {vhd-blob-name} that isn't supported in Azure Resource Manager. | This error occurs when the name of the blob has a "/" in it which isn't supported in Compute Resource Provider currently. |
+| Migration isn't allowed for Deployment {deployment-name} in HostedService {cloud-service-name} as it isn't in the regional scope. Please refer to https:\//aka.ms/regionalscope for moving this deployment to regional scope. | In 2014, Azure announced that networking resources will move from a cluster level scope to regional scope. See [https://aka.ms/regionalscope](https://aka.ms/regionalscope) for more details. This error happens when the deployment being migrated has not had an update operation, which automatically moves it to a regional scope. The best work-around is to either add an endpoint to a VM, or a data disk to the VM, and then retry migration. <br> See [How to set up endpoints on a classic virtual machine in Azure](/previous-versions/azure/virtual-machines/windows/classic/setup-endpoints#create-an-endpoint) or [Attach a data disk to a virtual machine created with the classic deployment model](./linux/attach-disk-portal.md)|
+| Migration isn't supported for Virtual Network {vnet-name} because it has non-gateway PaaS deployments. | This error occurs when you have non-gateway PaaS deployments such as Application Gateway or API Management services that are connected to the Virtual Network.|
+| Management operations on VM are disallowed because migration is in progress | This error occurs because the VM is in the Prepare state and is therefore locked for any update/delete operation. Call Abort using PowerShell/CLI on the VM to roll back the migration and unlock the VM for update/delete operations. Calling Commit will also unlock the VM, but will commit the migration to Azure Resource Manager. |
## Detailed mitigations
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
+
+ Title: NV A10 v5-series
+description: Specifications for the NV A10 v5-series VMs.
++++ Last updated : 02/01/2022+++
+# NVadsA10 v5-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The NVadsA10 v5-series virtual machines are powered by [Nvidia A10](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) GPUs and AMD EPYC 74F3V (Milan) CPUs with a base frequency of 3.4 GHz and an all-cores peak frequency of 4.0 GHz. With the NVadsA10 v5-series, Azure is introducing virtual machines with partial Nvidia GPUs. Pick the right-sized virtual machine for GPU-accelerated graphics applications and virtual desktops, ranging from 1/6 of a GPU with a 4-GiB frame buffer to a full A10 GPU with a 24-GiB frame buffer.
+
+<br>
+
+[ACU](acu.md): Not Available<br>
+[Premium Storage](premium-storage-performance.md): Supported<br>
+[Premium Storage caching](premium-storage-performance.md): Supported<br>
+[Ultra Disks](disks-types.md#ultra-disks): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/ultra-disk-storage-for-hpc-and-gpu-vms/ba-p/2189312) about availability, usage and performance) <br>
+[Live Migration](maintenance-and-updates.md): Not Supported<br>
+[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
+[VM Generation Support](generation-2.md): Generation 1 and 2<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+<br>
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU partition | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (MBps) |
+| | | | | | | | |
+| Standard_NV6ads_A10_v5 |6 |55 |180 | 1/6 | 4 | 4 | 2 / 5000 |
+| Standard_NV12ads_A10_v5 |12 |110 |360 | 1/3 | 6 | 4 | 2 / 10000 |
+| Standard_NV18ads_A10_v5 |18 |220 |720 | 1/2 | 12 | 8 | 4 / 20000 |
+| Standard_NV36ads_A10_v5 |36 |440 |720 | 1 | 24 | 16 | 4 / 40000 |
+| Standard_NV36adms_A10_v5 |36 |880 |720 | 1 | 24 | 32 | 8 / 80000 |
+| Standard_NV72ads_A10_v5 |72 |880 |1400 | 2 | 48 | 32 | 8 / 80000 |
+
+<sup>1</sup> NVadsA10 v5-series VMs feature AMD Simultaneous Multithreading (SMT) technology.
+++
+## Supported operating systems and drivers
+
+To take advantage of the GPU capabilities of Azure NVadsA10v5-series VMs, Nvidia GPU drivers must be installed.
+
+During preview, you need to manually install the Nvidia GPU-P driver for [Linux](https://download.microsoft.com/download/4/3/9/439aea00-a02d-4875-8712-d1ab46cf6a73/NVIDIA-Linux-x86_64-510.47.03-grid-azure.run) and [Windows](https://download.microsoft.com/download/8/d/2/8d228f28-56e2-4e60-bdde-a1dccfe94869/511.65_grid_win10_win11_server2016_server2019_server2022_64bit_Azure_swl.exe). We'll release updated drivers before GA and include them in extensions and all the standard documentation pages.
++++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+
+For more information on disk types, see [What disk types are available in Azure?](disks-types.md)
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Copy Managed Disks To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-to-same-or-different-subscription.md
+
+ Title: Copy managed disks to same or different subscription - CLI Sample
+description: Azure CLI Script Sample - Copy (or move) managed disks to the same or a different subscription
+documentationcenter: storage
+++
+ms.devlang: azurecli
++ Last updated : 02/23/2022++++
+# Copy managed disks to same or different subscription with CLI
+
+This script copies a managed disk to the same or a different subscription, but within the same region. The copy works only when the subscriptions are part of the same Azure AD tenant.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name mySourceResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to create a new managed disk in the target subscription using the `Id` of the source managed disk. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az disk show](/cli/azure/disk) | Gets all the properties of a managed disk using the name and resource group properties of the managed disk. The `Id` property is used to copy the managed disk to a different subscription. |
+| [az disk create](/cli/azure/disk) | Copies a managed disk by creating a new managed disk in a different subscription using the `Id` and name of the parent managed disk. |
+
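+The following is a minimal sketch of that sequence, assuming hypothetical names (`mySourceResourceGroupName`, `myTargetResourceGroupName`, `mySourceDisk`) and a hypothetical target subscription ID; it isn't the full sample script.
+
+```azurecli-interactive
+# Get the resource Id of the source managed disk.
+diskId=$(az disk show --name mySourceDisk --resource-group mySourceResourceGroupName --query id --output tsv)
+
+# Switch to the target subscription, then create the copy from the source disk Id.
+az account set --subscription "<target-subscription-id>"
+az disk create --name myTargetDisk --resource-group myTargetResourceGroupName --source "$diskId"
+```
+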
+## Next steps
+
+[Create a virtual machine from a managed disk](./virtual-machines-linux-cli-sample-create-vm-from-managed-os-disks.md?toc=%2fpowershell%2fmodule%2ftoc.json)
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine and managed disks CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-machines Copy Managed Disks Vhd To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-vhd-to-storage-account.md
+
+ Title: Copy a managed disk to a storage account - CLI
+description: Azure CLI sample - Export or copy a managed disk to a storage account.
+documentationcenter: storage
++++++ Last updated : 02/23/2022++++
+# Export/Copy a managed disk to a storage account using the Azure CLI
+
+This script exports the underlying VHD of a managed disk to a storage account in the same or a different region. It first generates the SAS URI of the managed disk and then uses it to copy the VHD to a storage account. Use this script to copy managed disks to another region for regional expansion. If you want to publish the VHD file of a managed disk in Azure Marketplace, you can use this script to copy the VHD file to a storage account and then generate a SAS URI of the copied VHD to publish it in the Marketplace.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to generate the SAS URI for a managed disk and copy the underlying VHD to a storage account using the SAS URI. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az disk grant-access](/cli/azure/disk#az-disk-grant-access) | Generates read-only SAS that is used to copy the underlying VHD file to a storage account or download it to on-premises |
+| [az storage blob copy start](/cli/azure/storage/blob/copy) | Copies a blob asynchronously from one storage account to another |
+
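+The following is a minimal sketch of that sequence, assuming hypothetical names (`myResourceGroupName`, `myDisk`, a destination storage account `mydestinationaccount` with a `vhds` container, and its account key); it isn't the full sample script.
+
+```azurecli-interactive
+# Generate a read-only SAS for the managed disk (valid for one hour).
+sas=$(az disk grant-access --resource-group myResourceGroupName --name myDisk --duration-in-seconds 3600 --access-level Read --query accessSas --output tsv)
+
+# Copy the underlying VHD to the destination storage account asynchronously.
+az storage blob copy start --destination-blob myDisk.vhd --destination-container vhds \
+  --account-name mydestinationaccount --account-key "<destination-account-key>" --source-uri "$sas"
+```
+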
+## Next steps
+
+[Create a managed disk from a VHD](virtual-machines-cli-sample-create-managed-disk-from-vhd.md)
+
+[Create a virtual machine from a managed disk](virtual-machines-linux-cli-sample-create-vm-from-managed-os-disks.md)
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine and managed disks CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md).
virtual-machines Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-same-or-different-subscription.md
+
+ Title: Copy managed disk snapshot to a subscription - CLI Sample
+description: Azure CLI Script Sample - Copy (or move) snapshot of a managed disk to same or different subscription with CLI
+documentationcenter: storage
++++++ Last updated : 02/23/2022++++
+# Copy snapshot of a managed disk to same or different subscription with CLI
+
+This script copies a snapshot of a managed disk to the same or a different subscription. Use this script for the following scenarios:
+
+- Migrate a snapshot in Premium storage (Premium_LRS) to Standard storage (Standard_LRS or Standard_ZRS) to reduce your cost.
+- Migrate a snapshot from locally redundant storage (Premium_LRS, Standard_LRS) to zone redundant storage (Standard_ZRS) to benefit from the higher reliability of ZRS storage.
+- Move a snapshot to a different subscription in the same region for longer retention.
+
+> [!NOTE]
+> Both subscriptions must be located under the same tenant.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name mySourceResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to create a snapshot in the target subscription using the `Id` of the source snapshot. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az snapshot show](/cli/azure/snapshot) | Gets all the properties of a snapshot using the name and resource group properties of the snapshot. The `Id` property is used to copy the snapshot to a different subscription. |
+| [az snapshot create](/cli/azure/snapshot) | Copies a snapshot by creating a snapshot in different subscription using the `Id` and name of the parent snapshot. |
+
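+The following is a minimal sketch of that sequence, assuming hypothetical names (`mySourceResourceGroupName`, `myTargetResourceGroupName`, `mySnapshot`) and a hypothetical target subscription ID; it isn't the full sample script.
+
+```azurecli-interactive
+# Get the resource Id of the source snapshot.
+snapshotId=$(az snapshot show --name mySnapshot --resource-group mySourceResourceGroupName --query id --output tsv)
+
+# Switch to the target subscription, then create the copy from the source snapshot Id.
+# The --sku is optional; Standard_ZRS illustrates the migration-to-ZRS scenario above.
+az account set --subscription "<target-subscription-id>"
+az snapshot create --name mySnapshotCopy --resource-group myTargetResourceGroupName --source "$snapshotId" --sku Standard_ZRS
+```
+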
+## Next steps
+
+[Create a virtual machine from a snapshot](./virtual-machines-linux-cli-sample-create-vm-from-snapshot.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine and managed disks CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-machines Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-storage-account.md
+
+ Title: Copy a snapshot to a storage account in another region using the CLI
+description: Azure CLI Script Sample - Export/Copy snapshot as VHD to a storage account in same or different region.
+documentationcenter: storage
++++
+ms.devlang: azurecli
++ Last updated : 02/23/2022++++
+# Export/Copy a snapshot to a storage account in different region with CLI
+
+This script exports a managed snapshot to a storage account in a different region. It first generates the SAS URI of the snapshot and then uses that URI to copy the snapshot to a storage account in a different region. Use this script to maintain backups of your managed disks in a different region for disaster recovery.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to generate a SAS URI for a managed snapshot and copy the snapshot to a storage account using the SAS URI. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az snapshot grant-access](/cli/azure/snapshot) | Generates read-only SAS that is used to copy underlying VHD file to a storage account or download it to on-premises |
+| [az storage blob copy start](/cli/azure/storage/blob/copy) | Copies a blob asynchronously from one storage account to another |
+
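+The following is a minimal sketch of that sequence, assuming hypothetical names (`myResourceGroupName`, `mySnapshot`, a destination storage account `mytargetaccount` in the target region with a `snapshots` container, and its account key); it isn't the full sample script.
+
+```azurecli-interactive
+# Generate a read-only SAS for the snapshot (valid for one hour).
+sas=$(az snapshot grant-access --resource-group myResourceGroupName --name mySnapshot --duration-in-seconds 3600 --query accessSas --output tsv)
+
+# Copy the snapshot's VHD to a storage account in the destination region.
+az storage blob copy start --destination-blob mySnapshot.vhd --destination-container snapshots \
+  --account-name mytargetaccount --account-key "<destination-account-key>" --source-uri "$sas"
+```
+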
+## Next steps
+
+[Create a managed disk from a VHD](virtual-machines-cli-sample-create-managed-disk-from-vhd.md?toc=%2fcli%2fmodule%2ftoc.json)
+
+[Create a virtual machine from a managed disk](virtual-machines-linux-cli-sample-create-vm-from-managed-os-disks.md?toc=%2fcli%2fmodule%2ftoc.json)
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine and managed disks CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
+
+ Title: Create managed disk from snapshot (Linux) - CLI sample
+description: Azure CLI Script Sample - Create a managed disk from a snapshot
+
+documentationcenter: storage
+++
+tags: azure-service-management
+
+ms.assetid:
+
+ms.devlang: azurecli
+
+ vm-linux
+ Last updated : 02/23/2022++++
+# Create a managed disk from a snapshot with CLI (Linux)
+
+This script creates a managed disk from a snapshot. Use it to restore a virtual machine from snapshots of OS and data disks. Create OS and data managed disks from the respective snapshots and then create a new virtual machine by attaching the managed disks. You can also restore the data disks of an existing VM by attaching data disks created from snapshots.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to create a managed disk from a snapshot. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az snapshot show](/cli/azure/snapshot) | Gets all the properties of a snapshot using the name and resource group properties of the snapshot. The `Id` property is used to create the managed disk. |
+| [az disk create](/cli/azure/disk) | Creates a managed disk using snapshot Id of a managed snapshot |
+
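+The following is a minimal sketch of that sequence, assuming hypothetical names (`myResourceGroupName`, `mySnapshot`, `myManagedDisk`); it isn't the full sample script.
+
+```azurecli-interactive
+# Get the resource Id of the snapshot.
+snapshotId=$(az snapshot show --name mySnapshot --resource-group myResourceGroupName --query id --output tsv)
+
+# Create a managed disk from the snapshot.
+az disk create --name myManagedDisk --resource-group myResourceGroupName --sku Premium_LRS --source "$snapshotId"
+```
+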
+## Next steps
+
+[Create a virtual machine by attaching a managed disk as OS disk](./virtual-machines-linux-cli-sample-create-vm-from-managed-os-disks.md?toc=%2fcli%2fmodule%2ftoc.json)
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine and managed disks CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-machines Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-vhd.md
+
+ Title: Create a managed disk from a VHD file in the same account - CLI sample
+description: Azure CLI Script Sample - Create a managed disk from a VHD file in a storage account in the same subscription
+documentationcenter: storage
++++
+ms.devlang: azurecli
++ Last updated : 02/23/2022++++
+# Create a managed disk from a VHD file in a storage account in the same subscription with CLI (Linux)
+
+This script creates a managed disk from a VHD file in a storage account in the same subscription. Use this script to import a specialized (not generalized/sysprepped) VHD to a managed OS disk to create a virtual machine, or to import a data VHD to a managed data disk.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to create a managed disk from a VHD. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az disk create](/cli/azure/disk) | Creates a managed disk using URI of a VHD in a storage account in the same subscription |
+
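+The following is a minimal sketch of that command, assuming a hypothetical VHD URI and names (`myResourceGroupName`, `myManagedDisk`, `mystorageaccount`); it isn't the full sample script.
+
+```azurecli-interactive
+# Create a managed disk from a VHD blob in a storage account in the same subscription.
+az disk create --name myManagedDisk --resource-group myResourceGroupName \
+  --source https://mystorageaccount.blob.core.windows.net/vhds/myDisk.vhd
+```
+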
+## Next steps
+
+[Create a virtual machine by attaching a managed disk as OS disk](./virtual-machines-linux-cli-sample-create-vm-from-managed-os-disks.md?toc=%2fcli%2fmodule%2ftoc.json)
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine and managed disks CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
+
+ Title: Create a VM by attaching a managed disk as OS disk - CLI Sample
+description: Azure CLI Script Sample - Create a VM by attaching a managed disk as OS disk
+
+documentationcenter: virtual-machines
++
+editor: ramankum
+tags: azure-service-management
+
+ms.assetid:
+
+ms.devlang: azurecli
+
+ vm-linux
+ Last updated : 02/23/2022++++
+# Create a virtual machine using an existing managed OS disk with CLI
+
+This script creates a virtual machine by attaching an existing managed disk as the OS disk. Use this script in the following scenarios:
+
+* Create a VM from an existing managed OS disk that was copied from a managed disk in different subscription
+* Create a VM from an existing managed disk that was created from a specialized VHD file
+* Create a VM from an existing managed OS disk that was created from a snapshot
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to get managed disk properties, attach a managed disk to a new VM, and create a VM. Each item in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az disk show](/cli/azure/disk) | Gets managed disk properties using the disk name and resource group name. The `Id` property is used to attach a managed disk to a new VM. |
+| [az vm create](/cli/azure/vm) | Creates a VM using a managed OS disk |
+
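+The following is a minimal sketch of that sequence, assuming hypothetical names (`myResourceGroupName`, `myManagedOsDisk`, `myVM`) and a Linux OS disk; it isn't the full sample script.
+
+```azurecli-interactive
+# Get the resource Id of the existing managed OS disk.
+osDiskId=$(az disk show --name myManagedOsDisk --resource-group myResourceGroupName --query id --output tsv)
+
+# Create a VM by attaching the managed disk as its OS disk.
+az vm create --name myVM --resource-group myResourceGroupName --attach-os-disk "$osDiskId" --os-type linux
+```
+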
+## Next steps
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
+
+ Title: Create a VM from a snapshot - CLI Sample
+description: Azure CLI Script Sample - Create a VM from a snapshot
+
+documentationcenter: virtual-machines
++
+editor: ramankum
+tags: azure-service-management
+
+ms.assetid:
+
+ms.devlang: azurecli
+
+ vm-linux
+ Last updated : 02/23/2022++++
+# Create a virtual machine from a snapshot with CLI
+
+This script creates a virtual machine from a snapshot of an OS disk.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli-interactive
+az group delete --name myResourceGroupName
+```
+
+## Sample reference
+
+This script uses the following commands to create a managed disk, virtual machine, and all related resources. Each command in the table links to command-specific documentation. A minimal example follows the table.
+
+| Command | Notes |
+|||
+| [az snapshot show](/cli/azure/snapshot) | Gets a snapshot using the snapshot name and resource group name. The `Id` property of the returned object is used to create a managed disk. |
+| [az disk create](/cli/azure/disk) | Creates managed disks from a snapshot using snapshot Id, disk name, storage type, and size |
+| [az vm create](/cli/azure/vm) | Creates a VM using a managed OS disk |
+
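+The following is a minimal sketch of that sequence, assuming hypothetical names (`myResourceGroupName`, `myOsSnapshot`, `myOsDisk`, `myVM`) and a Linux OS disk; it isn't the full sample script.
+
+```azurecli-interactive
+# Create a managed disk from the OS disk snapshot.
+snapshotId=$(az snapshot show --name myOsSnapshot --resource-group myResourceGroupName --query id --output tsv)
+az disk create --name myOsDisk --resource-group myResourceGroupName --sku Premium_LRS --source "$snapshotId"
+
+# Create a VM by attaching the new managed disk as its OS disk.
+az vm create --name myVM --resource-group myResourceGroupName --attach-os-disk myOsDisk --os-type linux
+```
+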
+## Next steps
+
+For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+
+Additional virtual machine CLI script samples can be found in the [Azure Linux VM documentation](../linux/cli-samples.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureDigitalTwins** | Azure Digital Twins.<br/><br/>**Note**: This tag or the IP addresses covered by this tag can be used to restrict access to endpoints configured for event routes. | Inbound | No | Yes | | **AzureEventGrid** | Azure Event Grid. | Both | No | No | | **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | Azure Front Door. | Both | No | No |
+| **AzureHealthcareAPIs** | The IP addresses covered by this tag can be used to restrict access to Azure Health Data Services. | Both | No | Yes |
| **AzureInformationProtection** | Azure Information Protection.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory**, **AzureFrontDoor.Frontend** and **AzureFrontDoor.FirstParty** tags. | Outbound | No | No | | **AzureIoTHub** | Azure IoT Hub. | Outbound | Yes | No | | **AzureKeyVault** | Azure Key Vault.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory** tag. | Outbound | Yes | Yes |
virtual-wan Monitor Point To Site Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-point-to-site-connections.md
The Azure workbook is now ready to be created. We'll use a mix of built-in funct
``` 1. To see the results, select the blue button **Run Query** to see the results.
-1. If yous see the following error, then navigate back to the file (vpnstatfile.json) in the storage container's blob, and regenerate the SAS URL. Then paste the updated SAS URL in the query.
+1. If you see the following error, then navigate back to the file (vpnstatfile.json) in the storage container's blob, and regenerate the SAS URL. Then paste the updated SAS URL in the query.
:::image type="content" source="./media/monitor-point-to-site-connections/workbook-error.png" alt-text="Screenshot shows error when running query in workbook."::: 1. Save the workbook to return to it later.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
When you choose to deploy a security partner provider to protect Internet access
For more information regarding the available options third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
+### Why am I seeing a message and button called "Update router to latest software version" in portal?
+
+The Virtual WAN team has been working on upgrading virtual routers from their current cloud service infrastructure to Virtual Machine Scale Set (VMSS) based deployments. This enables the virtual hub router to be availability zone aware and to scale out more effectively during high CPU usage. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button.
+
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, because this operation requires deployment of new VMSS-based virtual hub routers, expect a downtime of about 30 minutes per hub. Within a single Virtual WAN resource, update hubs one at a time instead of updating multiple hubs at the same time. When the Router Version says "Latest", the hub is done updating.
+ ## Next steps * For more information about Virtual WAN, see [About Virtual WAN](virtual-wan-about.md).
vpn-gateway Vpn Gateway Forced Tunneling Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-forced-tunneling-rm.md
Forced tunneling in Azure is configured using virtual network custom user-define
* **On-premises routes:** To the Azure VPN gateway. * **Default route:** Directly to the Internet. Packets destined to the private IP addresses not covered by the previous two routes are dropped. * This procedure uses user-defined routes (UDR) to create a routing table to add a default route, and then associate the routing table to your VNet subnet(s) to enable forced tunneling on those subnets.
-* Forced tunneling must be associated with a VNet that has a route-based VPN gateway. You need to set a "default site" among the cross-premises local sites connected to the virtual network. Also, the on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
+* Forced tunneling must be associated with a VNet that has a route-based VPN gateway. Your forced tunneling configuration will override the default route for any subnet in its VNet. You need to set a "default site" among the cross-premises local sites connected to the virtual network. Also, the on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
* ExpressRoute forced tunneling is not configured via this mechanism, but instead, is enabled by advertising a default route via the ExpressRoute BGP peering sessions. For more information, see the [ExpressRoute Documentation](https://azure.microsoft.com/documentation/services/expressroute/). * When both a VPN gateway and an ExpressRoute gateway are deployed in the same VNet, user-defined routes (UDRs) are no longer needed, because the ExpressRoute gateway will advertise the configured "default site" into the VNet. A minimal CLI sketch of the UDR configuration follows this list.
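
The following is a minimal sketch of the UDR setup using the Azure CLI, assuming hypothetical names (`myResourceGroupName`, `myVNet`, `FrontendSubnet`); the article's own procedure may use PowerShell or the portal instead.

```azurecli-interactive
# Create a route table and add a default route that sends 0.0.0.0/0 to the VPN gateway.
az network route-table create --name myForcedTunnelRouteTable --resource-group myResourceGroupName
az network route-table route create --route-table-name myForcedTunnelRouteTable --resource-group myResourceGroupName \
  --name DefaultToOnPremises --address-prefix 0.0.0.0/0 --next-hop-type VirtualNetworkGateway

# Associate the route table with the subnet(s) that should be force tunneled.
az network vnet subnet update --vnet-name myVNet --name FrontendSubnet --resource-group myResourceGroupName \
  --route-table myForcedTunnelRouteTable
```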