Updates from: 05/31/2023 01:12:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Previously updated : 12/29/2022 Last updated : 05/29/2023
Manage your Azure AD B2C environment.
| Use version control for your custom policies | Consider using GitHub, Azure Repos, or another cloud-based version control system for your Azure AD B2C custom policies. | | Use the Microsoft Graph API to automate the management of your B2C tenants | Microsoft Graph APIs:<br/>Manage [Identity Experience Framework](/graph/api/resources/trustframeworkpolicy?preserve-view=true&view=graph-rest-beta) (custom policies)<br/>[Keys](/graph/api/resources/trustframeworkkeyset?preserve-view=true&view=graph-rest-beta)<br/>[User Flows](/graph/api/resources/identityuserflow?preserve-view=true&view=graph-rest-beta) | | Integrate with Azure DevOps | A [CI/CD pipeline](deploy-custom-policies-devops.md) makes moving code between different environments easy and ensures production readiness always. |
-| Custom policy deployment | Azure AD B2C relies on caching to deliver performance to your end users. When you deploy a custom policy using whatever method, expect a delay of up to **30 minutes** for your users to see the changes. As a result of this behavior, consider the following practices when you deploy your custom policies: <br> - If you're deploying to a development environment, set the `DeploymentMode` attribute to `Development` in your custom policy file's `<TrustFrameworkPolicy>` element. <br> - Deploy your updated policy files to a production environment when traffic in your app is low. <br> - When you deploy to a production environment to update existing policy files, upload the updated files with new name(s), and then update your app reference to the new name(s). You can then remove the old policy files afterwards.<br> - You can set the `DeploymentMode` to `Development` in a production environment to bypass the caching behavior. However, we don't recommend this practice. If you [Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md), all claims sent to and from identity providers are collected, which is a security and performance risk. |
+| Deploy custom policy | Azure AD B2C relies on caching to deliver performance to your end users. When you deploy a custom policy using whatever method, expect a delay of up to **30 minutes** for your users to see the changes. As a result of this behavior, consider the following practices when you deploy your custom policies: <br> - If you're deploying to a development environment, set the `DeploymentMode` attribute to `Development` in your custom policy file's `<TrustFrameworkPolicy>` element. <br> - Deploy your updated policy files to a production environment when traffic in your app is low. <br> - When you deploy to a production environment to update existing policy files, upload the updated files with new name(s), and then update your app reference to the new name(s). You can then remove the old policy files afterwards.<br> - You can set the `DeploymentMode` to `Development` in a production environment to bypass the caching behavior. However, we don't recommend this practice. If you [Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md), all claims sent to and from identity providers are collected, which is a security and performance risk. |
+| Deploy app registration updates | When you modify your application registration in your Azure AD B2C tenant, such as updating the application's redirect URI, expect a delay of up to **2 hours** for the changes to take effect in the production environment. We recommend that you modify your application registration in your production environment when traffic in your app is low.|
| Integrate with Azure Monitor | [Audit log events](view-audit-logs.md) are only retained for seven days. [Integrate with Azure Monitor](azure-monitor.md) to retain the logs for long-term use, or integrate with third-party security information and event management (SIEM) tools to gain insights into your environment. | | Setup active alerting and monitoring | [Track user behavior](./analytics-with-application-insights.md) in Azure AD B2C using Application Insights. |
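The `DeploymentMode` attribute called out in the deployment guidance above sits on the policy's root element. A minimal illustrative fragment (tenant and policy names are placeholders, not from the article):

```xml
<!-- Illustrative fragment only; tenant and policy IDs are placeholders. -->
<TrustFrameworkPolicy
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  TenantId="contoso.onmicrosoft.com"
  PolicyId="B2C_1A_TrustFrameworkExtensions"
  PublicPolicyUri="http://contoso.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions"
  DeploymentMode="Development">
  <!-- policy content -->
</TrustFrameworkPolicy>
```

Remove the attribute, or set it to `Production`, before the policy ships, for the security and performance reasons noted in the table above.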
active-directory-b2c Claimsschema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsschema.md
The **DataType** element supports the following values:
|boolean|Represents a Boolean (`true` or `false`) value.| |date| Represents an instant in time, typically expressed as a date of a day. The value of the date follows ISO 8601 convention.| |dateTime|Represents an instant in time, typically expressed as a date and time of day. The value of the date follows ISO 8601 convention during runtime and is converted to UNIX epoch time when issued as a claim into the token.|
-|duration|Represents a time interval in years, months, days, hours, minutes, and seconds. The format is `PnYnMnDTnHnMnS`, where `P` indicates a positive value, or `N` a negative value. `nY` is the number of years followed by a literal `Y`. `nMo` is the number of months followed by a literal `Mo`. `nD` is the number of days followed by a literal `D`. Examples: `P21Y` represents 21 years. `P1Y2Mo` represents one year, and two months. `P1Y2Mo5D` represents one year, two months, and five days. `P1Y2M5DT8H5M620S` represents one year, two months, five days, eight hours, five minutes, and twenty seconds. |
+|duration|Represents a time interval in years, months, days, hours, minutes, and seconds. The format is `PnYnMnDTnHnMnS`, where `P` indicates a positive value, or `N` a negative value. `nY` is the number of years followed by a literal `Y`. `nMo` is the number of months followed by a literal `Mo`. `nD` is the number of days followed by a literal `D`. Examples: `P21Y` represents 21 years. `P1Y2Mo` represents one year, and two months. `P1Y2Mo5D` represents one year, two months, and five days. `P1Y2M5DT8H5M20S` represents one year, two months, five days, eight hours, five minutes, and twenty seconds. |
|phoneNumber|Represents a phone number. | |int| Represents a number between -2,147,483,648 and 2,147,483,647.| |long| Represents a number between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807. |
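The `dateTime` runtime-to-token conversion described in the table above can be sketched as follows; the sample claim value is an assumption for illustration, not taken from the docs:

```python
from datetime import datetime

# Sketch of the dateTime behavior described above: the claim holds an
# ISO 8601 value at runtime and is issued into the token as UNIX epoch
# seconds. The sample value below is illustrative only.
iso_value = "2023-05-29T12:00:00Z"
dt = datetime.fromisoformat(iso_value.replace("Z", "+00:00"))
epoch = int(dt.timestamp())
print(epoch)  # 1685361600
```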
The **UserInputType** element supports the following user input types:
|Password | `string` |Password text box.| |RadioSingleSelect |`string` | Collection of radio buttons. The claim value is the selected value.| |Readonly | `boolean`, `date`, `dateTime`, `duration`, `int`, `long`, `string`| Read-only text box. |
-|TextBox |`boolean`, `int`, `string` |Single-line text box. |
+|TextBox |`boolean`, `int`, `phoneNumber`, `string` |Single-line text box. |
#### TextBox
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui-with-html.md
To create a public container in Blob storage, perform the following steps:
1. Under **Data storage** in the left-hand menu, select **Containers**. 1. Select **+ Container**. 1. For **Name**, enter *root*. The name can be a name of your choosing, for example *contoso*, but we use *root* in this example for simplicity.
-1. For **Public access level**, select **Blob**.
+1. For **Public access level**, select **Blob**. By selecting the **Blob** option, you allow anonymous, public read-only access to this container.
1. Select **Create** to create the container. 1. Select **root** to open the new container.
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/embedded-login.md
The **Sources** attribute contains the URI of your web application. Add a space
- The URI must be trusted and owned by your application. - The URI must use the https scheme. - The full URI of the web app must be specified. Wildcards are not supported.
+- The **JourneyFraming** element only allows site URLs with a **two- to seven-character** top-level domain (TLD) to align with commonly recognized TLDs.
In addition, we recommend that you also block your own domain name from being embedded in an iframe by setting the `Content-Security-Policy` and `X-Frame-Options` headers respectively on your application pages. This will mitigate security concerns around older browsers related to nested embedding of iframes.
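As a sketch of the header recommendation above, a minimal WSGI app might emit both headers; the app and the chosen policy values (`'none'`/`DENY`) are illustrative assumptions, not prescribed by the article:

```python
# Minimal WSGI sketch of the frame-blocking guidance above. The app and
# header values are hypothetical examples; adjust them to your own pages.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # Modern browsers: forbid any site from framing this page.
        ("Content-Security-Policy", "frame-ancestors 'none'"),
        # Older browsers that predate CSP Level 2.
        ("X-Frame-Options", "DENY"),
    ]
    start_response("200 OK", headers)
    return [b"<html>application page</html>"]
```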
active-directory-b2c Enable Authentication Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app.md
export const b2cPolicies = {
export const msalConfig: Configuration = { auth: { clientId: '<your-MyApp-application-ID>',
- authority: b2cPolicies.authorities.signUpSignIn,
+ authority: b2cPolicies.authorities.signUpSignIn.authority,
knownAuthorities: [b2cPolicies.authorityDomain], redirectUri: '/', },
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
You configure localized resources elements for the content definition and any la
<LocalizedString ElementType="UxElement" StringId="local_intro_email">#Iniciar sesión con su cuenta existente</LocalizedString> <LocalizedString ElementType="UxElement" StringId="invalid_email">#Escriba una dirección de correo electrónico válida</LocalizedString> <LocalizedString ElementType="UxElement" StringId="unknown_error">#Tenemos problemas para iniciar su sesión. Vuelva a intentarlo más tarde. </LocalizedString>
- <LocalizedString ElementType="UxElement" StringId="email_pattern">^[a-zA-Z0-9.!#$%&amp;'^_`{}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="email_pattern">^[a-zA-Z0-9.!#$%&amp;'^_`\{\}~\-]+@[a-zA-Z0-9\-]+(?:\.[a-zA-Z0-9\-]+)*$</LocalizedString>
<LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfInvalidPassword">#Su contraseña es incorrecta.</LocalizedString> <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfClaimsPrincipalDoesNotExist">#Parece que no podemos encontrar su cuenta.</LocalizedString> <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfOldPasswordUsed">#Parece que ha usado una contraseña antigua.</LocalizedString>
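Decoded from the XML above (`&amp;` becomes `&`), the overridden `email_pattern` can be exercised directly. This is an illustrative check using Python's regex engine, not the B2C page runtime:

```python
import re

# The updated email_pattern from the LocalizedString above, with &amp;
# decoded to &. Python regex semantics are assumed for illustration.
EMAIL_PATTERN = r"^[a-zA-Z0-9.!#$%&'^_`\{\}~\-]+@[a-zA-Z0-9\-]+(?:\.[a-zA-Z0-9\-]+)*$"

def looks_valid(value: str) -> bool:
    return re.fullmatch(EMAIL_PATTERN, value) is not None

print(looks_valid("usuario.uno@contoso.com"))  # True
print(looks_valid("no-at-sign.example"))       # False
```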
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
The following IDs are used for a content definition with an ID of `api.signupors
| `logonIdentifier_email` | Email Address | `< 2.0.0` | | `requiredField_email` | Please enter your email | `< 2.0.0` | | `invalid_email` | Please enter a valid email address | `< 2.0.0` |
-| `email_pattern` | ```^[a-zA-Z0-9.!#$%&''\*+/=?^\_\`{\|}~-]+@[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)\*$``` | `< 2.0.0` |
+| `email_pattern` | ```^[a-zA-Z0-9.!#$%&'*+\/=?^_`\{\|\}~\-]+@[a-zA-Z0-9\-]+(?:\\.[a-zA-Z0-9\-]+)\*$``` | `< 2.0.0` |
| `local_intro_username` | Sign in with your user name | `< 2.0.0` | | `logonIdentifier_username` | Username | `< 2.0.0` | | `requiredField_username` | Please enter your user name | `< 2.0.0` |
The following IDs are used for a content definition having an ID of `api.localac
| `alert_message` | Are you sure that you want to cancel entering your details? | | `ver_intro_msg` | Verification is necessary. Please click Send button. | | `ver_input` | Verification code |
+| `required_field_descriptive` | {0} is required |
### Sign-up and self-asserted pages disclaimer links
The following example shows the use of some of the user interface elements in th
<LocalizedString ElementType="UxElement" StringId="initial_intro">Please provide the following details.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="preloader_alt">Please wait</LocalizedString> <LocalizedString ElementType="UxElement" StringId="required_field">This information is required.</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="required_field_descriptive">{0} is required</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="ver_but_edit">Change e-mail</LocalizedString> <LocalizedString ElementType="UxElement" StringId="ver_but_resend">Send new code</LocalizedString> <LocalizedString ElementType="UxElement" StringId="ver_but_send">Send verification code</LocalizedString>
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md
To create an instance of Application Insights in your subscription, follow these
UserJourneyRecorderEndpoint="urn:journeyrecorder:applicationinsights" ```
-1. If it doesn't already exist, add a `<UserJourneyBehaviors>` child node to the `<RelyingParty>` node. It must be located after `<DefaultUserJourney ReferenceId="UserJourney Id" from your extensions policy, or equivalent (for example:SignUpOrSigninWithAAD" />`.
+1. If it doesn't already exist, add a `<UserJourneyBehaviors>` child node to the `<RelyingParty>` node. It must be located after `<DefaultUserJourney ReferenceId="UserJourney Id" from your extensions policy, or equivalent (for example:SignUpOrSigninWithAAD" />`. See [RelyingParty schema reference](./relyingparty.md) for a complete order of the **RelyingParty** child elements.
1. Add the following node as a child of the `<UserJourneyBehaviors>` element. Make sure to replace `{Your Application Insights Key}` with the Application Insights **Instrumentation Key** that you recorded earlier. ```xml
To create an instance of Application Insights in your subscription, follow these
... <RelyingParty> <DefaultUserJourney ReferenceId="UserJourney ID from your extensions policy, or equivalent (for example: SignUpOrSigninWithAzureAD)" />
+ <Endpoints>
+ <!--points to refresh token journey when app makes refresh token request-->
+ <Endpoint Id="Token" UserJourneyReferenceId="RedeemRefreshToken" />
+ </Endpoints>
<UserJourneyBehaviors> <JourneyInsights TelemetryEngine="ApplicationInsights" InstrumentationKey="{Your Application Insights Key}" DeveloperMode="true" ClientEnabled="false" ServerEnabled="true" TelemetryVersion="1.0.0" /> </UserJourneyBehaviors>
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
# Migrate from MFA Server to Azure AD Multi-Factor Authentication
-Multifactor authentication (MFA) is important to securing your infrastructure and assets from bad actors. Azure AD Multi-Factor Authentication Server (MFA Server) isn’t available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) Multi-Factor Authentication.
+Multifactor authentication (MFA) is important to securing your infrastructure and assets from bad actors. Azure AD Multi-Factor Authentication Server (MFA Server) isn't available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) Multi-Factor Authentication.
In this article, we assume that you have a hybrid environment where:
There are multiple possible end states to your migration, depending on your goal
| <br> | Goal: Decommission MFA Server ONLY | Goal: Decommission MFA Server and move to Azure AD Authentication | Goal: Decommission MFA Server and AD FS | |||-|--|
-|MFA provider | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. |
+|MFA provider | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. |
|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless single sign-on (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. | |Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure AD Multi-Factor Authentication. | If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-mfa-user-authentication.md).
-If you can’t move your user authentication, see the step-by-step guidance for [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md).
+If you can't move your user authentication, see the step-by-step guidance for [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-mfa-with-federation.md).
## Prerequisites
If you can't move your user authentication, see the step-by-step guidance for
## Considerations for all migration paths Migrating from MFA Server to Azure AD Multi-Factor Authentication involves more than just moving the registered MFA phone numbers.
-Microsoft’s MFA server can be integrated with many systems, and you must evaluate how these systems are using MFA Server to understand the best ways to integrate with Azure AD Multi-Factor Authentication.
+Microsoft's MFA server can be integrated with many systems, and you must evaluate how these systems are using MFA Server to understand the best ways to integrate with Azure AD Multi-Factor Authentication.
### Migrating MFA user information
Others might include:
## Next steps -- [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md)
+- [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-mfa-with-federation.md)
- [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-mfa-user-authentication.md) - [How to use the MFA Server Migration Utility](how-to-mfa-server-migration-utility.md)
active-directory How To Migrate Mfa Server To Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md
+
+ Title: Migrate to Azure AD MFA with federation
+description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA with federation
+ Last updated : 05/23/2023
+# Migrate to Azure AD MFA with federation
+
+Moving your multi-factor authentication (MFA) solution to Azure Active Directory (Azure AD) is a great first step in your journey to the cloud. Consider also moving to Azure AD for user authentication in the future. For more information, see the process for migrating to Azure AD MFA with cloud authentication.
+
+To migrate to Azure AD MFA with federation, the Azure AD MFA authentication provider is installed on AD FS. The Azure AD relying party trust and other relying party trusts are configured to use Azure AD MFA for migrated users.
+
+The following diagram shows the migration process.
+
+ ![Flow chart of the migration process. Process areas and headings in this document are in the same order](./media/how-to-migrate-mfa-server-to-mfa-with-federation/mfa-federation-flow.png)
+
+## Create migration groups
+
+To create new Conditional Access policies, you'll need to assign those policies to groups. You can use Azure AD security groups or Microsoft 365 Groups for this purpose. You can also create or sync new ones.
+
+You'll also need an Azure AD security group for iteratively migrating users to Azure AD MFA. These groups are used in your claims rules.
+
+Don't reuse groups that are used for security. If you're using a security group to secure a group of high-value apps with a Conditional Access policy, only use the group for that purpose.
+
+## Prepare AD FS
+
+### Upgrade AD FS server farm to 2019, FBL 4
+
+In AD FS 2019, you can specify additional authentication methods for a relying party, such as an application. You use group membership to determine the authentication provider. By specifying an additional authentication method, you can transition to Azure AD MFA while keeping other authentication intact during the transition. For more information, see [Upgrading to AD FS in Windows Server 2016 using a WID database](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server). The article covers both upgrading your farm to AD FS 2019 and upgrading your FBL to 4.
+
+### Configure claims rules to invoke Azure AD MFA
+
+Now that Azure AD MFA is an additional authentication method, you can assign groups of users to use it. You do so by configuring claims rules on your relying party trusts. By using groups, you can control which authentication provider is called globally or by application. For example, you can call Azure AD MFA for users who have registered for combined security information, while calling MFA Server for those who haven't.
+
+ > [!NOTE]
+ > Claims rules require an on-premises security group. Before making changes to claims rules, back them up.
++
+#### Back up rules
+
+Before configuring new claims rules, back up your rules. You'll need to restore these rules as a part of your clean-up steps.
+
+Depending on your configuration, you may also need to copy the rule and append the new rules being created for the migration.
+
+To view global rules, run:
+
+```powershell
+Get-AdfsAdditionalAuthenticationRule
+```
+
+To view relying party trusts, run the following command and replace RPTrustName with the name of the relying party trust claims rule:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+#### Access control policies
+
+> [!NOTE]
+> Access control policies can't be configured so that a specific authentication provider is invoked based on group membership.
+
+
+To transition from access control policies to additional authentication rules, run the following command for each of your Relying Party Trusts using the MFA Server authentication provider:
++
+```powershell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AccessControlPolicyName $Null
+```
+
+
+
+This command will move the logic from your current Access Control Policy into Additional Authentication Rules.
++
+#### Set up the group, and find the SID
+
+You'll need to have a specific group in which you place users for whom you want to invoke Azure AD MFA. You'll need the security identifier (SID) for that group.
+
+To find the group SID, use the following command, with your group name:
+
+`Get-ADGroup "GroupName"`
+
+ ![Image of screen shot showing the results of the Get-ADGroup script.](./media/how-to-migrate-mfa-server-to-mfa-user-authentication/find-the-sid.png)
+
+#### Setting the claims rules to call Azure AD MFA
+
+The following PowerShell cmdlets invoke Azure AD MFA for users in the group when not on the corporate network. Replace "YourGroupSid" with the SID found by running the above cmdlet.
+
+Make sure you review the [How to Choose Additional Auth Providers in 2019](/windows-server/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server).
+
+ > [!IMPORTANT]
+ > Back up your claims rules
+
+
+
+#### Set global claims rule
+
+Run the following PowerShell cmdlet:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+
+
+The command returns your current additional authentication rules for your relying party trust. Append the following rules to your current claim rules:
+
+```console
+c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");
+```
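The decision those appended rules encode is simple: route by group membership. A Python paraphrase, where the function and sample SIDs are hypothetical and for illustration only:

```python
# Hypothetical paraphrase of the two appended claims rules: users whose
# group SIDs include the migration group's SID get Azure AD MFA; all
# other users stay on MFA Server.
def pick_authn_provider(user_group_sids, migration_group_sid):
    if migration_group_sid in user_group_sids:
        return "AzureMfaAuthentication"
    return "AzureMfaServerAuthentication"

print(pick_authn_provider({"S-1-5-21-111", "S-1-5-21-999"}, "S-1-5-21-999"))
# AzureMfaAuthentication
```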
+
+The following example assumes your current claim rules are configured to prompt for MFA when users connect from outside your network. This example includes the additional rules that you need to append.
+
+```PowerShell
+Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules 'c:[type ==
+"http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"http://schemas.microsoft.com/claims/multipleauthn" );
+ c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");'
+```
++
+#### Set per-application claims rule
+
+This example modifies claim rules on a specific relying party trust (application), and includes the information you must append.
+
+```PowerShell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AdditionalAuthenticationRules 'c:[type ==
+"http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"http://schemas.microsoft.com/claims/multipleauthn" );
+c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");'
+```
++
+### Configure Azure AD MFA as an authentication provider in AD FS
+
+To configure Azure AD MFA for AD FS, you must configure each AD FS server. If you have multiple AD FS servers in your farm, you can configure them remotely using Azure AD PowerShell.
+
+For step-by-step directions on this process, see [Configure the AD FS servers](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) in the article [Configure Azure AD MFA as authentication provider with AD FS](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa).
+
+Once you've configured the servers, you can add Azure AD MFA as an additional authentication method.
+
+![Screen shot showing the Edit authentication methods screen with Azure AD MFA and Azure Multi-factor authentication Server selected](./media/how-to-migrate-mfa-server-to-mfa-user-authentication/edit-authentication-methods.png)
+
+## Prepare Azure AD and implement migration
+
+This section covers final steps before migrating user MFA settings.
+
+### Set federatedIdpMfaBehavior to enforceMfaByFederatedIdp
+
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain has a Microsoft Graph PowerShell security setting named **federatedIdpMfaBehavior**. You can set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` so Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD redirects the request to the federated identity provider to perform MFA. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
+
+ >[!NOTE]
+ > The **federatedIdpMfaBehavior** setting is a new version of the **SupportsMfa** property of the [New-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration) cmdlet.
+
+For domains that set the **SupportsMfa** property, these rules determine how **federatedIdpMfaBehavior** and **SupportsMfa** work together:
+
+- Switching between **federatedIdpMfaBehavior** and **SupportsMfa** isn't supported.
+- Once **federatedIdpMfaBehavior** property is set, Azure AD ignores the **SupportsMfa** setting.
+- If the **federatedIdpMfaBehavior** property is never set, Azure AD will continue to honor the **SupportsMfa** setting.
+- If **federatedIdpMfaBehavior** or **SupportsMfa** isn't set, Azure AD will default to `acceptIfMfaDoneByFederatedIdp` behavior.
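The precedence rules in the list above can be paraphrased as a small decision function; the helper itself is hypothetical, only the precedence order comes from the docs:

```python
# Hypothetical helper paraphrasing the precedence list above:
# federatedIdpMfaBehavior, once set, wins; otherwise the legacy
# SupportsMfa flag is honored; otherwise Azure AD defaults to
# acceptIfMfaDoneByFederatedIdp.
def effective_setting(federated_idp_mfa_behavior=None, supports_mfa=None):
    if federated_idp_mfa_behavior is not None:
        return federated_idp_mfa_behavior        # SupportsMfa is ignored
    if supports_mfa is not None:
        return f"SupportsMfa={supports_mfa}"     # legacy flag still honored
    return "acceptIfMfaDoneByFederatedIdp"       # default behavior

print(effective_setting("enforceMfaByFederatedIdp", supports_mfa=False))
# enforceMfaByFederatedIdp
```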
+
+You can check the status of **federatedIdpMfaBehavior** by using [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true).
+
+```powershell
+Get-MgDomainFederationConfiguration -DomainId yourdomain.com
+```
+
+You can also check the status of your **SupportsMfa** flag with [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration):
+
+```powershell
+Get-MgDomainFederationConfiguration -DomainId yourdomain.com
+```
+
+The following example shows how to set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` by using Graph PowerShell.
+
+#### Request
+<!-- {
+ "blockType": "request",
+ "name": "update_internaldomainfederation"
+}
+-->
+``` http
+PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/6601d14b-d113-8f64-fda2-9b5ddda18ecc
+Content-Type: application/json
+{
+ "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"
+}
+```
++
+#### Response
+>**Note:** The response object shown here might be shortened for readability.
+<!-- {
+ "blockType": "response",
+ "truncated": true,
+ "@odata.type": "microsoft.graph.internalDomainFederation"
+}
+-->
+``` http
+HTTP/1.1 200 OK
+Content-Type: application/json
+{
+ "@odata.type": "#microsoft.graph.internalDomainFederation",
+ "id": "6601d14b-d113-8f64-fda2-9b5ddda18ecc",
+ "issuerUri": "http://contoso.com/adfs/services/trust",
+ "metadataExchangeUri": "https://sts.contoso.com/adfs/services/trust/mex",
+ "signingCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "passiveSignInUri": "https://sts.contoso.com/adfs/ls",
+ "preferredAuthenticationProtocol": "wsFed",
+ "activeSignInUri": "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed",
+ "signOutUri": "https://sts.contoso.com/adfs/ls",
+ "promptLoginBehavior": "nativeSupport",
+ "isSignedAuthenticationRequestRequired": true,
+ "nextSigningCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "signingCertificateUpdateStatus": {
+ "certificateUpdateResult": "Success",
+ "lastRunDateTime": "2021-08-25T07:44:46.2616778Z"
+ },
+ "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"
+}
+```
++
+### Configure Conditional Access policies if needed
+
+If you use Conditional Access to determine when users are prompted for MFA, you shouldn't need to change your policies.
+
+If your federated domain(s) have SupportsMfa set to false, analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
+
+After creating conditional access policies to enforce the same controls as AD FS, you can back up and remove your claim rules customizations on the Azure AD Relying Party.
+
+For more information, see the following resources:
+
+* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
+
+* [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
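Conditional Access policies can also be created programmatically through Microsoft Graph (`POST /identity/conditionalAccess/policies`). The following is a sketch of a policy body that requires MFA for a pilot group; the group ID is a hypothetical placeholder, and starting in report-only mode is one common way to validate a policy before enforcing it:

```python
import json

# Hypothetical pilot group ID -- replace with your own group's object ID.
PILOT_GROUP_ID = "11111111-2222-3333-4444-555555555555"

# Body for POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
policy = {
    "displayName": "Require MFA - MFA Server migration pilot",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {"includeGroups": [PILOT_GROUP_ID]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],  # require multifactor authentication
    },
}
print(json.dumps(policy, indent=2))
```

Once the pilot validates the behavior, the `state` can be switched to `enabled` and the user scope widened.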
++
+## Register users for Azure AD MFA
+
+This section covers how users can register for combined security information (MFA and self-service password reset) and how to migrate their existing MFA settings. Microsoft Authenticator can be used in passwordless mode, and it can also be used as a second factor for MFA with either registration method.
+
+### Register for combined security registration (recommended)
+
+We recommend having your users register for combined security information, which is a single place to register their authentication methods and devices for both MFA and SSPR.
+
+Microsoft provides communication templates that you can provide to your users to guide them through the combined registration process.
+These include templates for email, posters, table tents, and various other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen.
+
+We recommend that you [secure the security registration process with Conditional Access](../conditional-access/howto-conditional-access-policy-registration.md) that requires the registration to occur from a trusted device or location. For information on tracking registration statuses, see [Authentication method activity for Azure Active Directory](howto-authentication-methods-activity.md).
+
+ > [!NOTE]
+ > Users who must register their combined security information from a non-trusted location or device can be issued a Temporary Access Pass or alternatively, temporarily excluded from the policy.
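Registration status can also be pulled programmatically from the `userRegistrationDetails` report (`GET /reports/authenticationMethods/userRegistrationDetails` on the `beta` endpoint). A sketch that flags users who still need to register, assuming the response has already been fetched and decoded; the sample records are illustrative, not real data:

```python
# Illustrative records shaped like the userRegistrationDetails resource;
# in practice these come from
# GET https://graph.microsoft.com/beta/reports/authenticationMethods/userRegistrationDetails
registration_details = [
    {"userPrincipalName": "alice@contoso.com", "isMfaRegistered": True},
    {"userPrincipalName": "bob@contoso.com", "isMfaRegistered": False},
]

# Users who haven't registered an MFA method yet.
not_registered = [
    record["userPrincipalName"]
    for record in registration_details
    if not record["isMfaRegistered"]
]
print(not_registered)  # candidates for a registration campaign
```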
+
+### Migrate MFA settings from MFA Server
+
+You can use the [MFA Server Migration utility](how-to-mfa-server-migration-utility.md) to synchronize registered MFA settings for users from MFA Server to Azure AD.
+You can synchronize phone numbers, hardware tokens, and device registrations such as Microsoft Authenticator settings.
+
+### Add users to the appropriate groups
+
+* If you created new conditional access policies, add the appropriate users to those groups.
+
+* If you created on-premises security groups for claims rules, add the appropriate users to those groups.
+
+We don't recommend reusing security groups for multiple purposes. If you're using a security group to secure a group of high-value apps with a Conditional Access policy, use that group only for that purpose.
+
+## Monitoring
+
+Azure AD MFA registration can be monitored using the [Authentication methods usage & insights report](https://portal.azure.com/). This report can be found in Azure AD. Select **Monitoring**, then select **Usage & insights**.
+
+In Usage & insights, select **Authentication methods**.
+
+Detailed Azure AD MFA registration information can be found on the Registration tab. You can drill down to view a list of registered users by selecting the **Users capable of Azure multi-factor authentication** hyperlink.
+
+ ![Image of Authentication methods activity screen showing user registrations to MFA](./media/how-to-migrate-mfa-server-to-mfa-with-federation/authentication-methods.png)
+
+## Cleanup steps
+
+Once you've completed the migration to Azure AD MFA and are ready to decommission the MFA Server, do the following three things:
+
+1. Revert your claims rules on AD FS to their pre-migration configuration.
+
+1. Remove MFA Server as an authentication provider in AD FS. This ensures all users use Azure AD MFA, because it becomes the only additional authentication method enabled.
+
+1. Decommission the MFA Server.
+
+### Revert claims rules on AD FS and remove MFA Server authentication provider
+
+To restore the backed-up claims rules, follow the steps under Configure claims rules to invoke Azure AD MFA, and remove any `AzureMfaServerAuthentication` claims rules.
+
+For example, remove the following from the rule(s):
+
+
+```console
+c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSID"]) => issue(Type =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");
+```
+
+### Disable MFA Server as an authentication provider in AD FS
+
+This change ensures only Azure AD MFA is used as an authentication provider.
+
+1. Open the **AD FS management console**.
+
+1. Under **Services**, right-click on **Authentication Methods**, and select **Edit Multi-factor Authentication Methods**.
+
+1. Uncheck the box next to **Azure Multi-Factor Authentication Server**.
+
+### Decommission the MFA Server
+
+Follow your enterprise server decommissioning process to remove the MFA Servers in your environment.
+
+Possible considerations when decommissioning the MFA Servers include:
+
+* Review the MFA Server logs to ensure no users or applications are using it before you remove the server.
+
+* Uninstall Multi-Factor Authentication Server from the Control Panel on the server.
+
+* Optionally, back up and then clean up the logs and data directories that are left behind.
+
+* Uninstall the Multi-Factor Authentication Web Server SDK, if applicable, including any files left over in the inetpub\wwwroot\MultiFactorAuthWebServiceSdk and/or MultiFactorAuth directories.
+
+* For MFA Server versions prior to 8.0, it may also be necessary to remove the Multi-Factor Auth Phone App Web Service.
+
+## Next Steps
+
+- [Deploy password hash synchronization](../hybrid/whatis-phs.md)
+- [Learn more about Conditional Access](../conditional-access/overview.md)
+- [Migrate applications to Azure AD](../manage-apps/migrate-application-authentication-to-azure-active-directory.md)
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 05/17/2023 Last updated : 05/30/2023
To unblock a user, complete the following steps:
## Report suspicious activity
-A preview of **Report Suspicious Activity**, the updated MFA **Fraud Alert** feature, is now available. When an unknown and suspicious MFA prompt is received, users can report the fraud attempt by using Microsoft Authenticator or through their phone. These alerts are integrated with [Identity Protection](../identity-protection/overview-identity-protection.md) for more comprehensive coverage and capability.
+**Report suspicious activity**, the updated **MFA Fraud Alert** feature, is now available. When an unknown and suspicious MFA prompt is received, users can report the fraud attempt by using Microsoft Authenticator or through their phone. These alerts are integrated with [Identity Protection](../identity-protection/overview-identity-protection.md) for more comprehensive coverage and capability.
Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](../identity-protection/concept-identity-protection-policies.md).
-To enable **Report Suspicious Activity** from the Authentication Methods Settings:
+To enable **Report suspicious activity** from the Authentication Methods Settings:
1. In the Azure portal, click **Azure Active Directory** > **Security** > **Authentication Methods** > **Settings**.
-1. Set **Report Suspicious Activity** to **Enabled**.
+1. Set **Report suspicious activity** to **Enabled**.
1. Select **All users** or a specific group.

### View suspicious activity events
Once a user has reported a prompt as suspicious, the risk should be investigated
### Report suspicious activity and fraud alert
-**Report Suspicious Activity** and the legacy **Fraud Alert** implementation can operate in parallel. You can keep your tenant-wide **Fraud Alert** functionality in place while you start to use **Report Suspicious Activity** with a targeted test group.
+**Report suspicious activity** and the legacy **Fraud Alert** implementation can operate in parallel. You can keep your tenant-wide **Fraud Alert** functionality in place while you start to use **Report suspicious activity** with a targeted test group.
-If **Fraud Alert** is enabled with Automatic Blocking, and **Report Suspicious Activity** is enabled, the user will be added to the blocklist and set as high-risk and in-scope for any other policies configured. These users will need to be removed from the blocklist and have their risk remediated to enable them to sign in with MFA.
+If **Fraud Alert** is enabled with Automatic Blocking, and **Report suspicious activity** is enabled, the user will be added to the blocklist and set as high-risk and in-scope for any other policies configured. These users will need to be removed from the blocklist and have their risk remediated to enable them to sign in with MFA.
## Notifications
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
Previously updated : 03/14/2023 Last updated : 05/23/2023
The `error` field has several possible values - review the protocol documentatio
| AADSTS7000215 | Invalid client secret is provided. Developer error - the app is attempting to sign in without the necessary or correct authentication parameters.| | AADSTS7000218 | The request body must contain the following parameter: 'client_assertion' or 'client_secret'. | | AADSTS7000222 | InvalidClientSecretExpiredKeysProvided - The provided client secret keys are expired. Visit the Azure portal to create new keys for your app, or consider using certificate credentials for added security: [https://aka.ms/certCreds](./active-directory-certificate-credentials.md) |
+| AADSTS700229 | ForbiddenTokenType - Only app-only tokens may be used as Federated Identity Credentials for the AAD issuer. Use an app-only access token (generated during a client credentials flow) instead of a user-delegated access token (representing a request coming from a user context). |
| AADSTS700005 | InvalidGrantRedeemAgainstWrongTenant - Provided Authorization Code is intended to use against other tenant, thus rejected. OAuth2 Authorization Code must be redeemed against same tenant it was acquired for (/common or /{tenant-ID} as appropriate) | | AADSTS1000000 | UserNotBoundError - The Bind API requires the Azure AD user to also authenticate with an external IDP, which hasn't happened yet. | | AADSTS1000002 | BindCompleteInterruptError - The bind completed successfully, but the user must be informed. |
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-out-saml-protocol.md
Previously updated : 11/25/2022 Last updated : 05/30/2023
# Single Sign-Out SAML Protocol
-Azure Active Directory (Azure AD) supports the SAML 2.0 web browser single sign-out profile. For single sign-out to work correctly, the **LogoutURL** for the application must be explicitly registered with Azure AD during application registration. If the app is [added to the Azure App Gallery](../manage-apps/v2-howto-app-gallery-listing.md) then this value can be set by default. Otherwise, the value must be determined and set by the person adding the app to their Azure AD tenant. Azure AD uses the LogoutURL to redirect users after they're signed out.
+Azure Active Directory (Azure AD) supports the SAML 2.0 web browser single sign-out profile. For single sign-out to work correctly, the **LogoutURL** for the application must be explicitly registered with Azure AD during application registration.
-Azure AD supports redirect binding (HTTP GET), and not HTTP POST binding.
+If the app is [added to the Azure App Gallery](../manage-apps/v2-howto-app-gallery-listing.md) then this value can be set by default. Otherwise, the value must be determined and set by the person adding the app to their Azure AD tenant. Azure AD uses the **LogoutURL** to redirect users after they're signed out. Azure AD supports redirect binding (HTTP GET), and not HTTP POST binding.
The following diagram shows the workflow of the Azure AD single sign-out process.
The `Issuer` element in a `LogoutRequest` must exactly match one of the **Servic
The value of the `NameID` element must exactly match the `NameID` of the user that is being signed out.

> [!NOTE]
-> During SAML logout request, the `NameID` value is not considered by Azure Active Directory.
-> If a single user session is active, Azure Active Directory will automatically select that session and the SAML logout will proceed.
-> If multiple user sessions are active, Azure Active Directory will enumerate the active sessions for user selection. After user selection, the SAML logout will proceed.
+> During SAML logout request, the `NameID` value is not considered by Azure AD.
+> If a single user session is active, Azure AD will automatically select that session and the SAML logout will proceed.
+> If multiple user sessions are active, Azure AD will enumerate the active sessions for user selection. After user selection, the SAML logout will proceed.
## LogoutResponse

Azure AD sends a `LogoutResponse` in response to a `LogoutRequest` element. The following excerpt shows a sample `LogoutResponse`.

```
<samlp:LogoutResponse ID="_f0961a83-d071-4be5-a18c-9ae7b22987a4" Version="2.0" IssueInstant="2013-03-18T08:49:24.405Z" InResponseTo="iddce91f96e56747b5ace6d2e2aa9d4f8c" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
-    <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://sts.windows.net/82869000-6ad1-48f0-8171-272ed18796e9/</Issuer>
+    <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://login.microsoftonline.com/82869000-6ad1-48f0-8171-272ed18796e9/</Issuer>
    <samlp:Status>
        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success" />
    </samlp:Status>
</samlp:LogoutResponse>
```

Azure AD sets the `ID`, `Version` and `IssueInstant` values in the `LogoutResponse` element. It also sets the `InResponseTo` element to the value of the `ID` attribute of the `LogoutRequest` that elicited the response.

### Issuer
-Azure AD sets this value to `https://login.microsoftonline.com/<TenantIdGUID>/` where \<TenantIdGUID> is the tenant ID of the Azure AD tenant.
-To evaluate the value of the `Issuer` element, use the value of the **App ID URI** provided during application registration.
+Azure AD sets this value to `https://login.microsoftonline.com/<TenantIdGUID>/` where \<TenantIdGUID> is the tenant ID of the Azure AD tenant.
+
+To correctly identify the issuer element, use the value `https://login.microsoftonline.com/<TenantIdGUID>/` as shown in the sample LogoutResponse. This URL format identifies the Azure AD tenant as the issuer, representing the authority responsible for issuing the response.
### Status Azure AD uses the `StatusCode` element in the `Status` element to indicate the success or failure of sign-out. When the sign-out attempt fails, the `StatusCode` element can also contain custom error messages.
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
description: Sign in Azure AD users by using the Microsoft identity platform's i
Previously updated : 02/14/2023 Last updated : 05/30/2023
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| | | | | `tenant` | Required | You can use the `{tenant}` value in the path of the request to control who can sign in to the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more information, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.| | `client_id` | Required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
-| `response_type` | Required | Must include `code` for OpenID Connect sign-in. |
+| `response_type` | Required | Must include `id_token` for OpenID Connect sign-in. |
| `redirect_uri` | Recommended | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except that it must be URL-encoded. If not present, the endpoint will pick one registered `redirect_uri` at random to send the user back to. | | `scope` | Required | A space-separated list of scopes. For OpenID Connect, it must include the scope `openid`, which translates to the **Sign you in** permission in the consent UI. You might also include other scopes in this request for requesting consent. | | `nonce` | Required | A value generated and sent by your app in its request for an ID token. The same `nonce` value is included in the ID token returned to your app by the Microsoft identity platform. To mitigate token replay attacks, your app should verify the `nonce` value in the ID token is the same value it sent when requesting the token. The value is typically a unique, random string. |
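Assembled from the parameters in the table above, a sign-in request URL can be built like the following sketch. The client ID matches the sample request, while the redirect URI and `nonce` are placeholder values, and `response_mode=form_post` is one typical choice for receiving the ID token:

```python
from urllib.parse import urlencode

TENANT = "common"
params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",  # sample app ID from the table
    "response_type": "id_token",     # OpenID Connect sign-in
    "redirect_uri": "http://localhost/myapp/",  # placeholder; must match a registered URI
    "scope": "openid",               # required for OpenID Connect
    "response_mode": "form_post",    # have the token POSTed back to the app
    "nonce": "678910",               # placeholder; use a unique random string per request
}
authorize_url = (
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```

Your app should verify that the `nonce` in the returned ID token matches the value it sent, as the table notes.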
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Security defaults make it easier to help protect your organization from these id
If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation.
+> [!NOTE]
+> To help protect organizations, we're always working to improve the security of Microsoft account services. As part of this effort, free tenants that aren't actively using multifactor authentication for all their users are periodically notified about the automatic enablement of the security defaults setting. After this setting is enabled, all users in the organization will need to register for multifactor authentication. To avoid confusion, refer to the email you received; alternatively, you can [disable security defaults](#disabling-security-defaults) after it's enabled.
+
+To enable security defaults in your directory:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator.
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
Last updated 05/26/2023-+
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
Previously updated : 01/10/2023 Last updated : 05/30/2023 -+ # Usage and insights in Azure Active Directory
-With the Azure Active Directory (Azure AD) **Usage and insights** reports, you can get an application-centric view of your sign-in data. Usage & insights also includes a report on authentication methods activity. You can find answers to the following questions:
+With the Azure Active Directory (Azure AD) **Usage and insights** reports, you can get an application-centric view of your sign-in data. Usage & insights includes a report on authentication methods, service principal sign-ins, and application credential activity. You can find answers to the following questions:
-* What are the top used applications in my organization?
-* What applications have the most failed sign-ins?
-* What are the top sign-in errors for each application?
+* What are the top used applications in my organization?
+* What applications have the most failed sign-ins?
+* What are the top sign-in errors for each application?
+* What was the date of the last sign-in for an application?
-This article provides an overview of three reports that look sign-in data.
+## Prerequisites
-## Access Usage & insights
-
-Accessing the data from Usage and insights requires:
+To access the data from Usage and insights you must have:
* An Azure AD tenant * An Azure AD premium (P1/P2) license to view the sign-in data
-* A user in the Global Administrator, Security Administrator, Security Reader, or Reports Reader roles.
+* A user in the Reports Reader, Security Reader, Security Administrator, or Global Administrator role.
+
+## Access Usage and insights
+
+You can access the Usage and insights reports from the Azure portal and using Microsoft Graph.
-To access Usage & insights:
+### To access Usage & insights in the portal:
1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role. 1. Go to **Azure Active Directory** > **Usage & insights**.
-The **Usage & insights** report is also available from the **Enterprise applications** area of Azure AD. All users can access their own sign-ins at the [My Sign-Ins portal](https://mysignins.microsoft.com/security-info).
+The **Usage & insights** reports are also available from the **Enterprise applications** area of Azure AD. All users can access their own sign-ins at the [My Sign-Ins portal](https://mysignins.microsoft.com/security-info).
-## View the Usage & insights reports
+### To access Usage & insights using Microsoft Graph:
-There are currently three reports available in Azure AD Usage & insights. All three reports use sign-in data to provide helpful information an application usage and authentication methods.
+The reports can be viewed and managed using Microsoft Graph on the `/beta` endpoint in Graph Explorer.
-### Azure AD application activity (preview)
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+
+Refer to the section on each report in this article for the specific objects and parameters to include. For more information, see the [Microsoft Graph documentation for Identity and access reports](/graph/api/resources/report-identity-access).
+
+## Azure AD application activity (preview)
The **Azure AD application activity (preview)** report shows the list of applications with one or more sign-in attempts. Any application activity during the selected date range appears in the report. The report allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate. It's possible that activity for a deleted application may appear in the report if the activity took place during the selected date range and before the application was deleted. Other scenarios could include a user attempting to sign in to an application that doesn't have a service principal associated with the app. For these types of scenarios, you may need to review the audit logs or sign-in logs to investigate further.
-Select the **View sign in activity** link for an application to view more details. The sign-in graph per application counts interactive user sign-ins. The details of any sign-in failures appears below the table.
+To view the details of the sign-in activity for an application, select the **View sign-in activity** link for the application.
![Screenshot shows Usage and insights for Application activity where you can select a range and view sign-in activity for different apps.](./media/concept-usage-insights-report/usage-insights-overview.png)
-Select a day in the application usage graph to see a detailed list of the sign-in activities for the application. This detailed list is actually the sign-in log with the filter set to the selected application and date.
+The sign-in activity graph uses interactive user sign-ins. Select a day in the application usage graph to see a detailed list of the sign-in activities for the application. This detailed list is actually the sign-in log with the filter set to the selected application and date. The details of any sign-in failures appear below the table.
![Screenshot of the sign-in activity details for a selected application.](./media/concept-usage-insights-report/application-activity-sign-in-detail.png)
-### AD FS application activity
+### Application activity using Microsoft Graph
+
+You can view the `applicationSignInSummary` or `applicationSignInDetailedSummary` of Azure AD application activity with Microsoft Graph.
+
+Add the following query to view the **sign-in summary**, then select the **Run query** button.
+
+ ```http
+ GET https://graph.microsoft.com/beta/reports/getAzureADApplicationSignInSummary(period='{period}')
+ ```
+
+Add the following query to view the **sign-in details**, then select the **Run query** button.
+
+ ```http
+ GET https://graph.microsoft.com/beta/reports/applicationSignInDetailedSummary/{id}
+ ```
+
+For more information, see [Application sign-in in Microsoft Graph](/graph/api/resources/applicationsigninsummary?view=graph-rest-beta&preserve-view=true).
+
+## AD FS application activity
The **AD FS application activity** report in Usage & insights lists all Active Directory Federated Services (AD FS) applications in your organization that have had an active user login to authenticate in the last 30 days. These applications have not been migrated to Azure AD for authentication.
-### Authentication methods activity
+Viewing the AD FS application activity using Microsoft Graph retrieves a list of `relyingPartyDetailedSummary` objects, which identify the relying parties configured for a particular Federation Service.
+
+Add the following query, then select the **Run query** button.
+
+ ```http
+ GET https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary
+ ```
+
+For more information, see [AD FS application activity in Microsoft Graph](/graph/api/resources/relyingpartydetailedsummary?view=graph-rest-beta&preserve-view=true).
+
+## Authentication methods activity
The **Authentication methods activity** in Usage & insights displays visualizations of the different authentication methods used by your organization. The **Registration tab** displays statistics of users registered for each of your available authentication methods. Select the **Usage** tab at the top of the page to see actual usage for each authentication method.
Looking for the details of a user and their authentication methods? Look at the
Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You'll be able to see the method used to attempt to register or reset an authentication method.
+## Service principal sign-in activity (preview)
+
+The **Service principal sign-in activity (preview)** report provides the last activity date for every service principal. It shows whether the service principal was used as a client or resource app, and whether it was used in an app-only or delegated context.
+
+[ ![Screenshot of the service principal sign-in activity report.](./media/concept-usage-insights-report/service-principal-sign-ins.png) ](./media/concept-usage-insights-report/service-principal-sign-ins.png#lightbox)
+
+Select the **View more details** link to locate the client and object IDs for the application as well as specific service principal sign-in activity.
+
+[ ![Screenshot of the service principal sign-in activity details.](./media/concept-usage-insights-report/service-principal-sign-in-activity-details.png) ](./media/concept-usage-insights-report/service-principal-sign-in-activity-details.png#lightbox)
+
+### Service principal sign-in activity using Microsoft Graph
+
+The `servicePrincipalSignInActivity` reports can be viewed using Microsoft Graph in Graph Explorer.
+
+Add the following query to retrieve the service principal sign-in activity, then select the **Run query** button.
+
+```http
+GET https://graph.microsoft.com/beta/reports/servicePrincipalSignInActivities/{id}
+```
+
+The following is an example of the response:
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#reports/servicePrincipalSignInActivities",
+ "id": "ODNmNDUyOTYtZmI4Zi00YWFhLWEzOTktYWM1MTA4NGUwMmI3",
+ "appId": "83f45296-fb8f-4aaa-a399-ac51084e02b7",
+ "delegatedClientSignInActivity": {
+ "lastSignInDateTime": "2021-01-01T00:00:00Z",
+ "lastSignInRequestId": "2d245633-0f48-4b0e-8c04-546c2bcd61f5"
+ },
+ "delegatedResourceSignInActivity": {
+ "lastSignInDateTime": "2021-02-01T00:00:00Z",
+ "lastSignInRequestId": "d2b4c623-f930-42b5-9519-7851ca604b16"
+ },
+ "applicationAuthenticationClientSignInActivity": {
+ "lastSignInDateTime": "2021-03-01T00:00:00Z",
+ "lastSignInRequestId": "b71f24ec-f212-4306-b2ae-c229e15805ea"
+ },
+ "applicationAuthenticationResourceSignInActivity": {
+ "lastSignInDateTime": "2021-04-01T00:00:00Z",
+ "lastSignInRequestId": "53e6981f-2272-4deb-972c-c8272aca986d"
+ },
+ "lastSignInActivity": {
+ "lastSignInDateTime": "2021-04-01T00:00:00Z",
+ "lastSignInRequestId": "cd9733e8-d75a-468f-a63d-6e82bd48c05e"
+ }
+}
+```
+
+For more information, see [List service principal activity in Microsoft Graph](/graph/api/reportroot-list-serviceprincipalsigninactivities?view=graph-rest-beta&preserve-view=true).
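The `lastSignInActivity` timestamps lend themselves to simple hygiene checks. A sketch that flags service principals with no sign-in inside a chosen window, assuming the record has been decoded into a dict shaped like the sample response above:

```python
from datetime import datetime, timedelta, timezone

def is_stale(activity, cutoff_days=90, now=None):
    """Return True if the service principal's last sign-in is older than the cutoff."""
    now = now or datetime.now(timezone.utc)
    last = datetime.fromisoformat(
        activity["lastSignInActivity"]["lastSignInDateTime"].replace("Z", "+00:00")
    )
    return now - last > timedelta(days=cutoff_days)

# Record shaped like the sample response above.
record = {"lastSignInActivity": {"lastSignInDateTime": "2021-04-01T00:00:00Z"}}
print(is_stale(record, cutoff_days=90,
               now=datetime(2021, 8, 1, tzinfo=timezone.utc)))  # True: ~122 days old
```

Stale service principals surfaced this way are candidates for review before disabling or removing them.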
+
+## Application credential activity (preview)
+
+The **Application credential activity (preview)** report provides the last activity date for every application credential, along with the credential type (certificate or client secret), the last used date, and the expiration date. With this report, you can view the expiration dates of all your application credentials in one place.
+
+To view the details of the application credential activity, select the **View more details** link for a credential.
+
+[ ![Screenshot of the app credential activity report.](media/concept-usage-insights-report/app-credential-activity.png) ](media/concept-usage-insights-report/app-credential-activity.png#lightbox)
+
+The details include the application object, service principal, and resource IDs, and whether the credential origin is the application or the service principal.
+
+[ ![Screenshot of the app credential activity details.](media/concept-usage-insights-report/app-credential-activity-details.png) ](media/concept-usage-insights-report/app-credential-activity-details.png#lightbox)
+
+### Application credential activity using Microsoft Graph
+
+Application credential activity can be viewed and managed using Microsoft Graph on the `/beta` endpoint. You can get the application credential sign-in activity by entity `id`, `keyId`, and `appId`.
+
+To get started, follow these instructions to work with `appCredentialSignInActivity` using Microsoft Graph in Graph Explorer.
+
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Add the following query to retrieve the application credential activity, then select the **Run query** button.
+
+ ```http
+ GET https://graph.microsoft.com/beta/reports/appCredentialSignInActivities/{id}
+ ```
+The following is an example of the response:
+
+```json
+{
+ "@odata.type": "#microsoft.graph.appCredentialSignInActivity",
+ "id": "ODNmNDUyOTYtZmI4Zi00YWFhLWEzOTktYWM1MTA4NGUwMmI3fGFwcGxpY2F0aW9u",
+ "keyId": "83f45296-fb8f-4aaa-a399-ac51084e02b7",
+ "keyType": "certificate",
+ "keyUsage": "sign",
+ "appId": "f4d9654f-0305-4072-878c-8bf266dfe146",
+ "appObjectId": "6920caa5-1cae-4bc8-bf59-9c0b8495d240",
+ "servicePrincipalObjectId": "cf533854-9fb7-4c01-9c0e-f68922ada8b6",
+ "resourceId": "a89dc091-a671-4da4-9fcf-3ef06bdf3ac3",
+ "credentialOrigin": "application",
+ "expirationDate": "2021-04-01T21:36:48-8:00",
+ "signInActivity": {
+ "lastSignInDateTime": "2021-04-01T00:00:00-8:00",
+ "lastSignInRequestId": "b0a282a3-68ec-4ec8-aef0-290ed4350271"
+ }
+}
+```
+
+For more information, see [Application credential activity in Microsoft Graph](/graph/api/resources/appcredentialsigninactivity?view=graph-rest-beta&preserve-view=true).
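To work with this data programmatically, you can parse the response shown above. The following is a minimal, illustrative Python sketch (not part of the Microsoft Graph SDK) that summarizes a single `appCredentialSignInActivity` entry and flags expired credentials. The payload is the sample response above, with the UTC offset normalized to `-08:00` so standard ISO-8601 parsers accept it; calling the live endpoint would additionally require an Azure AD access token.

```python
import json
from datetime import datetime, timezone

# Sample appCredentialSignInActivity payload (from the response above),
# with the UTC offset normalized to -08:00 for ISO-8601 parsing.
sample = json.loads("""
{
  "keyId": "83f45296-fb8f-4aaa-a399-ac51084e02b7",
  "keyType": "certificate",
  "appId": "f4d9654f-0305-4072-878c-8bf266dfe146",
  "credentialOrigin": "application",
  "expirationDate": "2021-04-01T21:36:48-08:00",
  "signInActivity": {"lastSignInDateTime": "2021-04-01T00:00:00-08:00"}
}
""")

def summarize_credential(activity: dict, now: datetime) -> str:
    """Return a one-line summary noting whether the credential has expired."""
    expires = datetime.fromisoformat(activity["expirationDate"])
    status = "EXPIRED" if expires <= now else "valid"
    return f"{activity['keyType']} {activity['keyId']} ({activity['credentialOrigin']}): {status}"

print(summarize_credential(sample, datetime.now(timezone.utc)))
```

A report listing could apply the same summary to each entry returned by `GET /beta/reports/appCredentialSignInActivities`.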
+ ## Next steps - [Learn about the sign-ins report](concept-sign-ins.md)
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Last updated 04/18/2023
# What's new in Azure Advisor? Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## May 2023
+### Service retirement workbook
+
+It's important to be aware of upcoming Azure service and feature retirements to understand their impact on your workloads and plan migration. The [Service Retirement workbook](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/workbooks) provides a single, centralized, resource-level view of service retirements and helps you assess impact, evaluate options, and plan migration.
+The workbook includes 35 services and features planned for retirement. You can view planned retirement dates, the list and map of impacted resources, and information to help you take the necessary actions.
+
+To learn more, visit [Prepare migration of your workloads impacted by service retirements](advisor-how-to-plan-migration-workloads-service-retirement.md).
+ ## April 2023 ### VM/VMSS right-sizing recommendations with custom lookback period
api-management Self Hosted Gateway Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-support-policies.md
We have the following tagging strategy for the [self-hosted gateway container im
* Supported third-party open-source projects, for example: Open Telemetry and DAPR (Distributed Application Runtime).
-## Microsoft does not provide technical support for the following examples
+### Microsoft does not provide technical support for the following examples
* Questions about how to use the self-hosted gateway inside Kubernetes. For example, Microsoft Support doesn't provide advice on how to create custom ingress controllers, service mesh, use application workloads, or apply third-party or open-source software packages or tools.
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Once you've completed all of the above steps, you can start migration. Make sure
When migration is complete, you'll have an App Service Environment v3, and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
-If your migration included a custom domain suffix, for App Service Environment v3, the custom domain will no longer be shown in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously.
+If your migration included a custom domain suffix, the domain was shown in the **Essentials** section of the **Overview** page of the portal for App Service Environment v1/v2, but it is no longer shown there in App Service Environment v3. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously.
:::image type="content" source="./media/migration/custom-domain-suffix-app-service-environment-v3.png" alt-text="Screenshot that shows how to access custom domain suffix configuration for App Service Environment v3.":::
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
A frontend IP address is associated to a *listener*, which checks for incoming r
> > **Outbound Rule**: (no specific requirement)
+> [!IMPORTANT]
+> **The default domain name behavior for V1 SKU**:
+> - Deployments before May 1, 2023: These deployments continue to have default domain names like "string".cloudapp.net mapped to the application gateway's public IP address.
+> - Deployments after May 1, 2023: For deployments after this date, there is NO default domain name mapped to the gateway's public IP address. You must manually configure a domain name by mapping its DNS record to the gateway's IP address.
+ ## Next steps - [Learn about listener configuration](configuration-listeners.md)
application-gateway Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/retirement-faq.md
If you have an Application Gateway V1, [Migration from v1 to v2](./migrate-v1-v2
### Can Microsoft migrate this data for me? No, Microsoft cannot migrate user's data on their behalf. Users must do the migration themselves by using the self-serve options provided.
-Application Gateway v1 is built on some legacy components and customers have deployed the gateways in many different ways which makes the one click seamless migration a near impossible goal to achieve. Hence customer involvement is required for migration.
+Application Gateway v1 is built on legacy components, and customers have deployed the gateways in many different ways in their architectures, which is why customer involvement is required for migration. This also allows users to plan the migration during a maintenance window, which helps ensure that the migration is successful with minimal downtime for the user's applications.
### What is the time required for migration?
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Title: Create a Python function using Visual Studio Code - Azure Functions description: Learn how to create a Python function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 10/24/2022 Last updated : 05/29/2023 ms.devlang: python zone_pivot_groups: python-mode-functions
In this article, you use Visual Studio Code to create a Python function that res
This article covers both Python programming models supported by Azure Functions. Use the selector at the top to choose your programming model.
+>[!NOTE]
+>There is now a v2 programming model for creating Python functions. To create your first function using [the new v2 programming model](create-first-function-vs-code-python.md?pivots=python-mode-decorators), select **v2** at the top of the article.
>[!NOTE] >The v2 programming model provides a decorator based approach to create functions. To learn more about the v2 programming model, see the [Developer Reference Guide](functions-reference-python.md). Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
azure-government Documentation Accelerate Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/documentation-accelerate-compliance.md
cloud: gov documentationcenter: '' -+ ms.assetid: na Previously updated : 01/05/2021 Last updated : 05/30/2023
Microsoft is able to scale through its partners. Scale is what will allow us to
## Publishing to Azure Marketplace
-1. Join the Partner Network - ItΓÇÖs a requirement for publishing but easy to sign up. Instructions are located here: [Ensure you have a MPN ID and Partner Center Account](../../marketplace/create-account.md#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace).
+1. Join the Partner Network - It's a requirement for publishing but easy to sign up. Instructions are located here: [Ensure you have an MCPP ID and Partner Center Account](../../marketplace/create-account.md#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace).
2. Enable your partner center account as Publisher / Developer for Marketplace, follow the instructions [here](../../marketplace/create-account.md). 3. With an enabled Partner Center Account, publish listing as a SaaS App as instructed [here](../../marketplace/create-new-saas-offer.md).
For a list of existing Azure Marketplace offerings in this space, visit [this pa
* Free [training on FedRAMP](https://www.fedramp.gov/training/). * FedRAMP [templates](https://www.fedramp.gov/templates/) to help you with program requirements. * Get familiar with the [FedRAMP Marketplace](https://marketplace.fedramp.gov/#/products).
- * Are you a partner and want to join our program? Fill out the [form](https://aka.ms/partnerazcl).
- * Learn more about [Azure Blueprints](../../governance/blueprints/overview.md) and review [samples](../../governance/blueprints/samples/index.md).
- * To learn how Azure Blueprints help you when using Azure Policy review the [blog post](https://azure.microsoft.com/blog/new-azure-blueprint-simplifies-compliance-with-nist-sp-800-53/).
+ * Learn more about [Azure Compliance Offerings per market and industry](https://learn.microsoft.com/azure/compliance/).
## Next steps
-Review the documentation above. If you are still facing issues reach out to [Azure Government Partner Inquiries](mailto:azgovpartinf@microsoft.com).
+Review the documentation above.
+Review the Azure Marketplace [Publishing guide by offer type](https://learn.microsoft.com/partner-center/marketplace/publisher-guide-by-offer-type) for further tips and troubleshooting.
+If you are still facing issues, open a ticket in Partner Center.
azure-government Documentation Government Csp Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-application.md
na Previously updated : 08/31/2021 Last updated : 05/30/2023 # Azure Government CSP application process
Before being able to apply for CSP or any other programs that run under the Micr
The process begins with a request for an Azure Government tenant. For more information and to begin the validation, see [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=CSP). Once complete, you should receive a tenant to activate your enrollment in the Cloud Solution Provider program for the US government. Validation steps are shown below: -- Be a Microsoft Partner (have an MPN ID).
+- Be enrolled in the [Microsoft Cloud Partner Program](/partner-center/mpn-overview) (have an MCPP ID).
- Verification of legitimacy of the Company, Systems Integrator, Distributor, or Independent Software Vendor (ISV) applying for the tenant. - Verification of business engagements with government customers (for example, proof of services rendered to government agencies, statements of works, evidence of being part of GSA Schedule). - If you already have an Azure Government tenant, you can use your existing credentials to complete the CSP application.
The application process includes:
- Estimation of potential revenue - Company validation via Dun and Bradstreet - Email verification
+- Verification of an active enrollment in the Advanced Support for Partners program or Premier Support for Partners program. For more information, see [Partner support plans](https://partner.microsoft.com/support/partnersupport).
- Acceptance of [Terms and Conditions](https://download.microsoft.com/download/2/C/8/2C8CAC17-FCE7-4F51-9556-4D77C7022DF5/MCRA2018_AOC_USGCC_ENG_Feb2019_CR.pdf) After the validation has been completed and terms have been signed, you are ready to transact. For more information on billing, see [Azure plan](/partner-center/azure-plan-lp).
After the validation has been completed and terms have been signed, you are read
## Next steps
-Once you have onboarded and are ready to create your first customer, make sure to review [Resources for building your Government CSP practice](https://devblogs.microsoft.com/azuregov/resources-for-building-your-government-csp-practice/). If you have any more questions, contact the [Azure Government CSP program](mailto:azgovcsp@microsoft.com).
+Once you have onboarded and are ready to create your first customer, make sure to review [Resources for building your Government CSP practice](https://devblogs.microsoft.com/azuregov/resources-for-building-your-government-csp-practice/). For further documentation, see the [FAQ for US Government cloud](/partner-center/faq-for-us-govt-cloud). For all other questions, open a ticket in Partner Center.
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Title: Drawing tool events | Microsoft Azure Maps description: This article demonstrates how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK-- Previously updated : 12/05/2019++ Last updated : 05/23/2023
When using drawing tools on a map, it's useful to react to certain events as the
| `drawingmodechanged` | Fired when the drawing mode has changed. The new drawing mode is passed into the event handler. | | `drawingstarted` | Fired when the user starts drawing a shape or puts a shape into edit mode. |
-The following code shows how the events in the Drawing Tools module work. Draw shapes on the map and watch as the events fire.
+For a complete working sample that shows how the events in the Drawing Tools module work, see [Drawing tool events] in the [Azure Maps Samples]. In this sample, you can draw shapes on the map and watch as the events fire.
+The following image shows a screenshot of the complete working sample that demonstrates how the events in the Drawing Tools module work.
++
+<!--
<br/> <iframe height="500" scrolling="no" title="Drawing tools events" src="https://codepen.io/azuremaps/embed/dyPMRWo?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
The following code shows how the events in the Drawing Tools module work. Draw s
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
-<br/>
+-->
## Examples
Let's see some common scenarios that use the drawing tools events.
This code demonstrates how to monitor an event of a user drawing shapes. For this example, the code monitors shapes of polygons, rectangles, and circles. Then, it determines which data points on the map are within the drawn area. The `drawingcomplete` event is used to trigger the select logic. In the select logic, the code loops through all the data points on the map. It checks if there's an intersection of the point and the area of the drawn shape. This example makes use of the open-source [Turf.js](https://turfjs.org/) library to perform a spatial intersection calculation.
+For a complete working sample of how to use the drawing tools to draw polygon areas on the map with points within them that can be selected, see [Select data in drawn polygon area] in the [Azure Maps Samples].
++
+<!--
<br/> <iframe height="500" scrolling="no" title="Select data in drawn polygon area" src="https://codepen.io/azuremaps/embed/XWJdeja?height=500&theme-id=default&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/XWJdeja'>Select data in drawn polygon area</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>-
-<br/>
+-->
### Draw and search in polygon area This code searches for points of interests inside the area of a shape after the user finished drawing the shape. You can modify and execute the code by clicking 'Edit on Code pen' on the top-right corner of the frame. The `drawingcomplete` event is used to trigger the search logic. If the user draws a rectangle or polygon, a search inside geometry is performed. If a circle is drawn, the radius and center position is used to perform a point of interest search. The `drawingmodechanged` event is used to determine when the user switches to the drawing mode, and this event clears the drawing canvas.
+For a complete working sample of how to use the drawing tools to search for points of interests within drawn areas, see [Draw and search polygon area] in the [Azure Maps Samples].
++
+<!--
<br/> <iframe height="500" scrolling="no" title="Draw and search in polygon area" src="https://codepen.io/azuremaps/embed/eYmZGNv?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/eYmZGNv'>Draw and search in polygon area</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>-
-<br/>
+-->
### Create a measuring tool The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information.
-<br/>
+For a complete working sample of how to use the drawing tools to measure distances and areas, see [Create a measuring tool] in the [Azure Maps Samples].
+
+<!--
<iframe height="500" scrolling="no" title="Measuring tool" src="https://codepen.io/azuremaps/embed/RwNaZXe?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/RwNaZXe'>Measuring tool</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>-
-<br/>
+-->
## Next steps
Check out more code samples:
> [!div class="nextstepaction"] > [Code sample page](https://aka.ms/AzureMapsSamples)+
+[Azure Maps Samples]: https://samples.azuremaps.com
+[Drawing tool events]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=drawing-tools-events
+[Select data in drawn polygon area]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=select-data-in-drawn-polygon-area
+[Draw and search polygon area]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=draw-and-search-polygon-area
+[Create a measuring tool]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=create-a-measuring-tool
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
If your IT security policies don't allow computers on your network to connect to
Before you start, review the following requirements. >[!Note]
->From April 14, 2023, System Center Operations Manager versions lower than [2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents&preserve-view=true) will stop sending data to Log Analytics workspaces. Ensure that your agents are on System Center Operations Manager Agent version 10.19.10177.0 ([2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents&preserve-view=true) or later) or 10.22.10056.0 ([2022 RTM](/system-center/scom/release-build-versions?view=sc-om-2022#agents&preserve-view=true)) and that the System Center Operations Manager Management Group version is the System Center Operations Manager 2022 and 2019 UR3 or later version.
+>From June 30, 2023, System Center Operations Manager versions lower than [2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents&preserve-view=true) will stop sending data to Log Analytics workspaces. Ensure that your agents are on System Center Operations Manager Agent version 10.19.10177.0 ([2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents&preserve-view=true) or later) or 10.22.10056.0 ([2022 RTM](/system-center/scom/release-build-versions?view=sc-om-2022#agents&preserve-view=true)), and that the System Center Operations Manager Management Group version is System Center Operations Manager 2022, or 2019 UR3 or later.
* Azure Monitor supports: * System Center Operations Manager 2022
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Alerts triggered by these alert rules contain a payload that uses the [common al
1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
-1. (Optional) If you've configured action groups for this alert rule, you can add custom properties in key:value pairs to add more information to the alert notification payload in the <a name="custom-props">**Custom properties**</a> section. Add the property **Name** and **Value** for the custom property you want included in the payload.
+
+ > [!NOTE]
+ > We're continually adding more regions for regional data processing.
+
+1. (Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add custom properties as key:value pairs to the alert notification payload to include more information in it. Add the property **Name** and **Value** for the custom property you want included in the payload.
You can also use custom properties to extract and manipulate data from alert payloads that use the common schema. You can use those values in the action group webhook or logic app.
- The format for extracting values from the [common schema](alerts-common-schema.md), use a "$", and then the path of the common schema field inside curly brackets. For example: `${data.essentials.monitorCondition}`.
+ To extract values from the common schema, use a "$" followed by the path of the [common schema](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-common-schema) field inside curly brackets. For example: `${data.essentials.monitorCondition}`.
++ In the following examples, values in the **Custom properties** section use data from the payload: **Example 1**
- This example creates an "Additional Details" tag with data about the "evaluation window start time" and "evaluation window end time".
+
+ This example creates an "Additional Details" tag with data regarding the "window start time" and "window end time".
+ - **Name:** "Additional Details" - **Value:** "Evaluation windowStartTime: \${data.alertContext.condition.windowStartTime}. windowEndTime: \${data.alertContext.condition.windowEndTime}" - **Result:** "AdditionalDetails:Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z" **Example 2**
- This example adds data about the reason the alert was fired or resolved.
+ This example adds data regarding the reason for firing or resolving the alert.
+ - **Name:** "Alert \${data.essentials.monitorCondition} reason" - **Value:** "\${data.alertContext.condition.allOf[0].metricName} \${data.alertContext.condition.allOf[0].operator} \${data.alertContext.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.alertContext.condition.allOf[0].metricValue}" - **Result:** Example results could be something like:
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule."::: + > [!NOTE] > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts. + 1. On the **Details** tab, define the **Project details**. - Select the **Subscription**. - Select the **Resource group**.
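The `${...}` extraction syntax described above walks a dotted path through the common-schema payload. As a rough illustration of the idea (not Azure Monitor's actual implementation), the following Python sketch resolves such placeholders against a sample payload; the `resolve` helper and the sample payload are hypothetical, written only to show how path extraction, including list indexes like `allOf[0]`, can work:

```python
import re

# Sample of the common alert schema fields used in the examples above.
payload = {
    "data": {
        "essentials": {"monitorCondition": "Fired"},
        "alertContext": {
            "condition": {
                "windowStartTime": "2023-04-04T14:39:24.492Z",
                "windowEndTime": "2023-04-04T14:44:24.492Z",
            }
        },
    }
}

def resolve(template: str, payload: dict) -> str:
    """Replace each ${path.to.field} placeholder with the value at that path."""
    def lookup(match: re.Match) -> str:
        value = payload
        # Walk the dotted path; support list indexing like allOf[0].
        for part in match.group(1).split("."):
            m = re.fullmatch(r"(\w+)\[(\d+)\]", part)
            if m:
                value = value[m.group(1)][int(m.group(2))]
            else:
                value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

print(resolve(
    "Evaluation windowStartTime: ${data.alertContext.condition.windowStartTime}. "
    "windowEndTime: ${data.alertContext.condition.windowEndTime}",
    payload,
))
```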
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md
Title: Optimize log alert queries | Microsoft Docs description: This article gives recommendations for writing efficient alert queries. Previously updated : 2/23/2022 Last updated : 5/30/2023 # Optimize log alert queries
Log alert rules using [cross-resource queries](../logs/cross-workspace-query.md)
```Kusto union
-app('Contoso-app1').requests,
-app('Contoso-app2').requests,
-workspace('Contoso-workspace1').Perf
+app('00000000-0000-0000-0000-000000000001').requests,
+app('00000000-0000-0000-0000-000000000002').requests,
+workspace('00000000-0000-0000-0000-000000000003').Perf
``` >[!NOTE]
-> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, see [Upgrade legacy rules management to the current Azure Monitor Log Alerts API](/previous-versions/azure/azure-monitor/alerts/alerts-log-api-switch) to learn about switching.
+> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, see [Upgrade legacy rules management to the current Azure Monitor Log Alerts API](./alerts-log-api-switch.md) to learn about switching.
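Cross-resource union queries like the one above can also be generated programmatically when the set of resources changes often. The following Python sketch is illustrative only (the `build_union_query` helper and the GUIDs are made up); it assembles a union string in the same shape as the example:

```python
def build_union_query(app_ids, workspace_id, table_app="requests", table_ws="Perf"):
    """Build a cross-resource union query like the example above.

    app_ids and workspace_id are illustrative GUIDs, not real resources.
    """
    parts = [f"app('{app_id}').{table_app}" for app_id in app_ids]
    parts.append(f"workspace('{workspace_id}').{table_ws}")
    return "union\n" + ",\n".join(parts)

query = build_union_query(
    ["00000000-0000-0000-0000-000000000001",
     "00000000-0000-0000-0000-000000000002"],
    "00000000-0000-0000-0000-000000000003",
)
print(query)
```

Using resource IDs (GUIDs) rather than display names, as shown, keeps the generated query unambiguous.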
## Examples
azure-monitor Prometheus Authorization Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-authorization-proxy.md
Title: Azure Active Directory authorization proxy description: Azure Active Directory authorization proxy -+ Last updated 07/10/2022
The Azure Active Directory authorization proxy is a reverse proxy, which can be
> The remote write example in this article uses Prometheus remote write to write data to Azure Monitor. Onboarding your AKS cluster to Prometheus automatically installs Prometheus on your cluster and sends data to your workspace. ## Deployment
-The proxy can be deployed with custom templates using release image or as helm chart. Both deployments contain the same customizable parameters. These parameters are described in the [Parameters](#parameters) table.
+The proxy can be deployed with custom templates using the release image or as a Helm chart. Both deployments contain the same customizable parameters. These parameters are described in the [Parameters](#parameters) table.
+
+For more information, see the [Azure Active Directory authentication proxy](https://github.com/Azure/aad-auth-proxy) project.
The following examples show how to deploy the proxy for remote write and for querying data from Azure Monitor.
Before deploying the proxy, find your managed identity and assign it the `Monito
spec: containers: - name: aad-auth-proxy
- image: mcr.microsoft.com/azuremonitor/auth-proxy/prod/aad-auth-proxy/images/aad-auth-proxy:aad-auth-proxy-0.1.0-main-04-11-2023-623473b0
+ image: mcr.microsoft.com/azuremonitor/auth-proxy/prod/aad-auth-proxy/images/aad-auth-proxy:0.1.0-main-05-24-2023-b911fe1c
imagePullPolicy: Always ports: - name: auth-port
Before deploying the proxy, find your managed identity and assign it the `Monito
```bash helm install aad-auth-proxy oci://mcr.microsoft.com/azuremonitor/auth-proxy/prod/aad-auth-proxy/helmchart/aad-auth-proxy \
- --version 0.1.0 \
+ --version 0.1.0-main-05-24-2023-b911fe1c \
-n observability \ --set targetHost=https://proxy-test-abc123.eastus-1.metrics.ingest.monitor.azure.com \ --set identityType=userAssigned \
Before deploying the proxy, find your managed identity and assign it the `Monito
## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write ## remoteWrite:
- - url: "http://azuremonitor-ingestion.observability.svc.cluster.local/dataCollectionRules/dcr-abc123d987e654f3210abc1def234567/streams/ Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"
+ - url: "http://azuremonitor-ingestion.observability.svc.cluster.local/dataCollectionRules/dcr-abc123d987e654f3210abc1def234567/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"
``` 1. Apply the remote write configuration.
+> [!NOTE]
+> For the latest proxy image version, see the [release notes](https://github.com/Azure/aad-auth-proxy/blob/main/RELEASENOTES.md).
+ ### Check that the proxy is ingesting data Check that the proxy is successfully ingesting metrics by checking the pod's logs, or by querying the Azure Monitor workspace.
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
description: This article describes how you can query against resources from mul
Previously updated : 04/01/2023 Last updated : 05/30/2023
There are two methods to query data that's stored in multiple workspaces and app
## Cross-resource query limits
-* The number of Application Insights resources and Log Analytics workspaces that you can include in a single query is limited to 100.
-* Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](/previous-versions/azure/azure-monitor/alerts/alerts-log-api-switch).
+* The number of Application Insights components and Log Analytics workspaces that you can include in a single query is limited to 100.
+* Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).
* References to a cross resource, such as another workspace, should be explicit and can't be parameterized. See [Identify workspace resources](#identify-workspace-resources) for examples. ## Query across Log Analytics workspaces and from Application Insights
To reference another workspace in your query, use the [workspace](../logs/worksp
### Identify workspace resources
-You can identify a workspace in one of several ways:
+You can identify a workspace using one of these IDs:
* **Workspace ID**: A workspace ID is the unique, immutable, identifier assigned to each workspace represented as a globally unique identifier (GUID). `workspace("00000000-0000-0000-0000-000000000000").Update | count`
-* **Azure Resource ID**: This ID is the Azure-defined unique identity of the workspace. You use the Resource ID when the resource name is ambiguous. For workspaces, the format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/workspaces/workspaceName*.
+* **Azure Resource ID**: This ID is the Azure-defined unique identity of the workspace. For workspaces, the format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/workspaces/workspaceName*.
For example:
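As an illustration, the two identifier forms can be told apart with a small check. This is a hypothetical helper for tooling, not part of any SDK; the GUID pattern and the resource-provider marker are assumptions based on the formats above:

```javascript
// Hypothetical helper: classify which workspace() identifier form a string uses.
// A workspace ID is a GUID; an Azure Resource ID contains the workspaces provider path.
const GUID_PATTERN = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function workspaceIdKind(identifier) {
  if (GUID_PATTERN.test(identifier)) return 'workspace-id';
  if (identifier.toLowerCase().includes('/providers/microsoft.operationalinsights/workspaces/')) {
    return 'resource-id';
  }
  return 'unknown';
}
```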
### Identify an application

The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights.
-You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following IDs:
+You can identify an app using one of these IDs:
* **ID**: This ID is the app GUID of the application. `app("00000000-0000-0000-0000-000000000000").requests | count`
-* **Azure Resource ID**: This ID is the Azure-defined unique identity of the app. You use the resource ID when the resource name is ambiguous. The format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/components/componentName*.
+* **Azure Resource ID**: This ID is the Azure-defined unique identity of the app. The format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/components/componentName*.
For example:
### Perform a query across multiple resources

You can query multiple resources from any of your resource instances. These resources can be workspaces and apps combined.
-Example for a query across two workspaces:
+Example for a query across three workspaces:
```
union Update,
- workspace("").Update, workspace("00000000-0000-0000-0000-000000000000").Update
+ workspace("00000000-0000-0000-0000-000000000001").Update,
+ workspace("00000000-0000-0000-0000-000000000002").Update
| where TimeGenerated >= ago(1h)
| where UpdateState == "Needed"
| summarize dcount(Computer) by Classification
```

## Use a cross-resource query for multiple resources
-When you use cross-resource queries to correlate data from multiple Log Analytics workspaces and Application Insights resources, the query can become complex and difficult to maintain. You should make use of [functions in Azure Monitor log queries](./functions.md) to separate the query logic from the scoping of the query resources. This method simplifies the query structure. The following example demonstrates how you can monitor multiple Application Insights resources and visualize the count of failed requests by application name.
+When you use cross-resource queries to correlate data from multiple Log Analytics workspaces and Application Insights components, the query can become complex and difficult to maintain. You should make use of [functions in Azure Monitor log queries](./functions.md) to separate the query logic from the scoping of the query resources. This method simplifies the query structure. The following example demonstrates how you can monitor multiple Application Insights components and visualize the count of failed requests by application name.
-Create a query like the following example that references the scope of Application Insights resources. The `withsource= SourceApp` command adds a column that designates the application name that sent the log. [Save the query as a function](./functions.md#create-a-function) with the alias `applicationsScoping`.
+Create a query like the following example that references the scope of Application Insights components. The `withsource= SourceApp` command adds a column that designates the application name that sent the log. [Save the query as a function](./functions.md#create-a-function) with the alias `applicationsScoping`.
```Kusto
-// crossResource function that scopes my Application Insights resources
+// crossResource function that scopes my Application Insights components
union withsource= SourceApp
app('00000000-0000-0000-0000-000000000000').requests,
app('00000000-0000-0000-0000-000000000001').requests,
```
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
Until September 18, 2023, some data security-related data types collected [Micro
- ProtectionStatus
- Update
- UpdateSummary
-
+- CommonSecurityLog
## Set the daily cap

### Log Analytics workspace
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
The managed identity service generates the *principalId* GUID when you create th
## Link a workspace to a cluster
-When a Log Analytics workspace is linked to a dedicated cluster, new data ingested to the workspace, is routed to the cluster while existing data remains in the existing Log Analytics cluster. If the dedicated cluster is configured with customer-managed keys (CMK), new ingested data is encrypted with your key. The system abstracts the data location, you can query data as usual while the system performs cross-cluster queries in the background.
+When a Log Analytics workspace is linked to a dedicated cluster, the workspace's billing plan is changed to the cluster's plan, new data ingested to the workspace is routed to the cluster, and existing data remains in the Log Analytics cluster. Linking a workspace has no effect on data ingestion and query experiences.
+
+If the dedicated cluster is configured with customer-managed keys (CMK), newly ingested data is encrypted with your key. The system abstracts the data location; you can query data as usual while the system performs cross-cluster queries in the background.
A cluster can be linked to up to 1,000 workspaces. Linked workspaces must be located in the same region as the cluster. To prevent data fragmentation, a workspace can't be linked to a cluster more than twice a month.
Content-type: application/json
### Unlink a workspace from cluster
-You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per-GB, data ingested to cluster before the unlink operation remains in the cluster, and new data to workspace get ingested to Log Analytics.
+You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per-GB, data ingested to the cluster before the unlink operation remains in the cluster, and new data for the workspace gets ingested to Log Analytics.
> [!WARNING]
> Unlinking a workspace does not move workspace data out of the cluster. Any data collected for the workspace while it was linked to the cluster remains in the cluster for the retention period defined in the workspace, and is accessible as long as the cluster isn't deleted.
azure-signalr Concept Upstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-upstream.md
To enable managed identity in your SignalR service instance and grant it Key Vau
3. Replace your sensitive text with the below syntax in the upstream endpoint URL Pattern:
   ```
   {@Microsoft.KeyVault(SecretUri=<secret-identity>)}
- ``
+ ```
`<secret-identity>` is the full data-plane URI of a secret in Key Vault, optionally including a version, e.g., https://myvault.vault.azure.net/secrets/mysecret/ or https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931. A complete reference would look like the following:
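A reference of that shape can be assembled with a small helper like this. The function name is hypothetical and is not part of the SignalR SDK; it merely illustrates the reference syntax above:

```javascript
// Hypothetical helper: wrap a Key Vault secret URI in the upstream URL reference syntax.
function keyVaultReference(secretUri) {
  return `{@Microsoft.KeyVault(SecretUri=${secretUri})}`;
}
```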
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
When audio effects are retrieved in the closed caption files, they are retrieved
Audio Effects in closed captions file is retrieved with the following logic employed:
-* `Silence` event type will not be added to the closed captions
-* Maximum duration to show an event I 5 seconds
-* Minimum timer duration to show an event is 700 milliseconds
+* `Silence` event type will not be added to the closed captions.
+* Maximum duration to show an event is 5 seconds.
+* Minimum timer duration to show an event is 700 milliseconds.
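The filtering rules above can be sketched as follows. The event shape with `type`, `startMs`, and `endMs` fields is an assumption for illustration, not the Video Indexer output format:

```javascript
// Sketch of the closed-caption filtering rules: drop Silence events and
// events shorter than the 700 ms minimum display duration.
const MIN_DURATION_MS = 700;

function captionableEvents(events) {
  return events.filter(
    (e) => e.type !== 'Silence' && e.endMs - e.startMs >= MIN_DURATION_MS
  );
}
```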
## Adding audio effects in closed caption files
backup Backup Azure Dataprotection Use Rest Api Backup Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-backup-disks.md
Title: Back up Azure Disks using Azure Data Protection REST API. description: In this article, learn how to configure, initiate, and manage backup operations of Azure Disks using REST API.- Previously updated : 10/06/2021+ Last updated : 05/30/2023 ms.assetid: 6050a941-89d7-4b27-9976-69898cc34cde +

# Back up Azure Disks using Azure Data Protection via REST API

This article describes how to manage backups for Azure Disks via REST API.
+Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for managed disks by automating periodic creation of snapshots and retaining them for a configured duration using a backup policy. You can manage the disk snapshots with zero infrastructure cost and without the need for custom scripting or any management overhead. This is a crash-consistent backup solution that takes point-in-time backups of a managed disk using incremental snapshots, with support for multiple backups per day. It's also an agentless solution and doesn't impact production application performance. It supports backup and restore of both OS and data disks (including shared disks), whether or not they're currently attached to a running Azure virtual machine.
+ For information on the Azure Disk backup region availability, supported scenarios, and limitations, see the [support matrix](disk-backup-support-matrix.md).

## Prerequisites
You need to assign a few permissions via RBAC to the vault (represented by vault
### Prepare the request to configure backup
-Once the relevant permissions are set to the vault and the disk, and the vault and policy are configured, we can prepare the request to configure backup. The following is the request body to configure backup for an Azure Disk. The Azure Resource Manager ID (ARM ID) of the Azure Disk and its details are mentioned in the _datasourceinfo_ section and the policy information is present in the _policyinfo_ section where the snapshot resource group is provided as one of the policy parameters.
+Once the relevant permissions are set to the vault and the disk, and the vault and policy are configured, we can prepare the request to configure backup. The following is the request body to configure backup for an Azure Disk. The Azure Resource Manager ID (ARM ID) of the Azure Disk and its details are mentioned in the `datasourceinfo` section and the policy information is present in the `policyinfo` section where the snapshot resource group is provided as one of the policy parameters.
```json {
POST https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
The [request body](#prepare-the-request-to-configure-backup) that we prepared earlier will be used to provide details of the Azure Disk to be protected.
-#### Example request body
+**Example request body**
```json {
The [request body](#prepare-the-request-to-configure-backup) that we prepared ea
Backup request validation is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created and 200 (OK) when that operation completes.
|Name |Type |Description |
|---|---|---|
It returns two responses: 202 (Accepted) when another operation is created and t
|200 OK | [OperationJobExtendedInfo](/rest/api/dataprotection/backup-instances/validate-for-backup#operationjobextendedinfo) | Accepted |
| Other Status codes | [CloudError](/rest/api/dataprotection/backup-instances/validate-for-backup#clouderror) | Error response describing why the operation failed |
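The 202-then-200 pattern means the tracking operation must be polled until it reaches a terminal state. A minimal polling sketch, assuming a caller-supplied `getStatus` function that returns an object with a `status` field (the `'Inprogress'` status value is an assumption for illustration):

```javascript
// Hypothetical polling loop for an Azure async operation. getStatus is any
// async function returning { status }; 'Inprogress' is treated as non-terminal.
async function pollOperation(getStatus, delayMs = 1000, maxAttempts = 30) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await getStatus();
    if (status !== 'Inprogress') return status; // e.g. 'Succeeded' or 'Failed'
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Operation did not complete in time');
}
```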
-##### Example responses for validate backup request
+**Example responses for validate backup request**
-###### Error response
+##### Error response
If the given disk is already protected, it returns the response as HTTP 400 (Bad request) and states that the given disk is protected to a backup vault along with details.
X-Powered-By: ASP.NET
} ```
-###### Tracking response
+##### Track response
If the datasource is unprotected, then the API proceeds for further validations and creates a tracking operation.
DELETE "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/Test
*DELETE* protection is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
|Name |Type |Description |
|---|---|---|
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
For a demonstration of these scenarios in Speech Studio, view this [introductory
In Speech Studio, the following Speech service features are available as project types:
-* [Real-time speech to text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech to text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech to text works on your audio samples. To explore the full functionality, see [What is speech to text?](speech-to-text.md).
+* [Real-time speech to text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech to text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech to text works on your audio samples. To explore the full functionality, see [What is speech to text](speech-to-text.md).
+
+* [Batch speech to text](https://aka.ms/speechstudio/batchspeechtotext): Quickly test batch transcription capabilities to transcribe a large amount of audio in storage and receive results asynchronously. To learn more about batch speech to text, see [Batch speech to text overview](batch-transcription.md).
* [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).

* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
+* [Speech Translation](https://aka.ms/speechstudio/speechtranslation): Quickly test and translate speech into other languages of your choice with low latency. To explore the full functionality, see [What is speech translation](speech-translation.md).
+
* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=tts). Bring your scenarios to life with highly expressive and human-like neural voices.

* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text to speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
communication-services Number Lookup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-concept.md
+
+ Title: Number Lookup API concepts in Azure Communication Services
+
+description: Learn about Communication Services Number Lookup API concepts.
+++++ Last updated : 05/02/2023++++
+# Number Lookup overview
++
+Azure Communication Services enables you to retrieve insights and look up a specific phone number using the Azure Communication Services Number Lookup SDK. It's part of the Phone Numbers SDK and can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Azure Communication Services Number Lookup allows you to reliably retrieve number insights before engaging with end users.
++
+## Number Lookup features
+
+Key features of Azure Communication Services Number Lookup include:
+
+- **Simple:** Our API is easy to integrate with your application. We provide detailed documentation to guide you through the process, and our team of experts is always available to assist you.
+- **High Accuracy:** We gather data from the most reliable suppliers to ensure that you receive accurate data. Our data is updated regularly to guarantee the highest quality possible.
+- **High Velocity:** Our API is designed to deliver fast and accurate data, even when dealing with high volumes of data. It's optimized for speed and performance to ensure you always receive the information you need quickly and reliably.
+- **Number Capability Check:** Our API provides the associated number type, which generally helps determine whether an SMS can be sent to a particular number. This helps to avoid frustrating attempts to send messages to non-SMS-capable numbers.
+- **Carrier Details:** We provide information about the country of destination and carrier information, which helps to estimate potential costs and find alternative messaging methods (for example, sending an email).
+
+## Value Proposition
+
+The main benefits the solution provides to ACS customers can be summarized as follows:
+- **Reduce Cost:** Optimize your communication expenses by sending messages only to phone numbers that are SMS-ready.
+- **Increase efficiency:** Better target customers based on subscribers' data (name, type, location, and so on). You can also decide on the best communication channel to choose based on status (for example, SMS or email while roaming instead of calls).
+
+## Key Use Cases
+
+- **Validate that a number can receive SMS before you send it:** Check whether a number has SMS capabilities and decide whether a different communication channel is needed.
+ *Contoso Bank collected the phone numbers of people who are interested in its services on its site. Contoso wants to send an invite to register for a promotional offer. Before sending the link to the offer, Contoso checks whether SMS is a possible channel for the number the customer provided on the site, and doesn't waste money sending SMS to non-mobile numbers.*
+- **Estimate the total cost of an SMS campaign before you launch it:** Get the current carrier of the target number and compare it with the list of known carrier surcharges.
+*Contoso, a marketing company, wants to launch a large SMS campaign to promote a product. Contoso checks the current carrier details for the different numbers it's targeting with this campaign to estimate the cost based on what ACS charges it.*
+
+![Diagram showing a Number Lookup use case.](../numbers/mvp-use-case.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Number Lookup API](../../quickstarts/telephony/number-lookup.md)
+
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Number Lookup SDK](../numbers/number-lookup-sdk.md)
communication-services Number Lookup Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-sdk.md
+
+ Title: Number Lookup SDK overview for Azure Communication Services
+
+description: Provides an overview of the Number Lookup SDK and its offerings.
+++++ Last updated : 05/02/2023++++
+# Number Lookup SDK overview
++
+Azure Communication Services Number Lookup is part of the Phone Numbers SDK. You can use it in your applications to add additional checks before sending an SMS or placing a call.
+
+## Number Lookup SDK capabilities
+
+The following list presents the set of features that are currently available in our SDKs.
+
+| Group of features | Capability | .NET | JS | Java | Python |
+| -- | - | | - | - | |
+| Core Capabilities | Get Number Type | ✔️ | ❌ | ❌ | ❌ |
+| | Get Carrier registered name | ✔️ | ❌ | ❌ | ❌ |
+| | Get associated Mobile Network Code, if available (two or three decimal digits used to identify the network operator within a country) | ✔️ | ❌ | ❌ | ❌ |
+| | Get associated Mobile Country Code, if available (three decimal digits used to identify the country of a mobile operator) | ✔️ | ❌ | ❌ | ❌ |
+| Phone Number | All number types in E164 format | ✔️ | ❌ | ❌ | ❌ |
++
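Since the table above notes that numbers are handled in E.164 format, a quick client-side shape check can catch malformed input before calling the API. This is a minimal sketch under the usual E.164 assumptions (a `+` prefix, a non-zero first digit, at most 15 digits), not an SDK function:

```javascript
// Rough E.164 shape check: '+', a non-zero first digit, at most 15 digits total.
// This validates format only; it doesn't prove the number is assigned or reachable.
function looksLikeE164(phoneNumber) {
  return /^\+[1-9]\d{1,14}$/.test(phoneNumber);
}
```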
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Number Lookup API](../../quickstarts/telephony/number-lookup.md)
+
+- [Number Lookup Concept](../numbers/number-lookup-concept.md)
communication-services Call Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md
Last updated 08/10/2021
-zone_pivot_groups: acs-plat-ios-android
+zone_pivot_groups: acs-plat-ios-android-windows
#Customer intent: As a developer, I want to display the call transcription state on the client.
When using call transcription you may want to let your users know that a call is
[!INCLUDE [Call transcription client-side iOS](./includes/call-transcription/call-transcription-ios.md)]
::: zone-end
+
## Next steps
- [Learn how to manage video](./manage-video.md)
- [Learn how to manage calls](./manage-calls.md)
communication-services Record Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md
Last updated 08/10/2021
-zone_pivot_groups: acs-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to manage call recording on the client so that my users can record calls.
zone_pivot_groups: acs-web-ios-android
[!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include-document.md)]
-[Call recording](../../concepts/voice-video-calling/call-recording.md), lets your users record their calls made with Azure Communication Services. Here we'll learn how to manage recording on the client side. Before this can work you will need to setup [server side](../../quickstarts/voice-video-calling/call-recording-sample.md) recording.
+[Call recording](../../concepts/voice-video-calling/call-recording.md) lets your users record their calls made with Azure Communication Services. Here we learn how to manage recording on the client side. Before this can work, you'll need to set up [server side](../../quickstarts/voice-video-calling/call-recording-sample.md) recording.
## Prerequisites
zone_pivot_groups: acs-web-ios-android
[!INCLUDE [Record Calls Client-side iOS](./includes/record-calls/record-calls-ios.md)]
::: zone-end
+
## Next steps
- [Learn how to manage calls](./manage-calls.md)
- [Learn how to manage video](./manage-video.md)
-- [Learn how to transcribe calls](./call-transcription.md)
+- [Learn how to transcribe calls](./call-transcription.md)
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
## Install the notation CLI and AKV plugin
-1. Install notation v1.0.0-rc.5 on a Linux environment. You can also download the package for other environments by following the [Notation installation guide](https://notaryproject.dev/docs/installation/cli/).
+1. Install notation v1.0.0-rc.7 on a Linux environment. You can also download the package for other environments by following the [Notation installation guide](https://notaryproject.dev/docs/installation/cli/).
```bash
# Download, extract and install
- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.5/notation_1.0.0-rc.5_linux_amd64.tar.gz
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.7/notation_1.0.0-rc.7_linux_amd64.tar.gz
tar xvzf notation.tar.gz
# Copy the notation cli to the desired bin directory in your PATH
cp ./notation /usr/local/bin
```
-2. Install the notation Azure Key Vault plugin for remote signing and verification.
+2. Install the notation Azure Key Vault plugin on a Linux environment for remote signing and verification. You can also download the package for other environments by following the [Notation AKV plugin installation guide](https://github.com/Azure/notation-azure-kv#installation-the-akv-plugin).
> [!NOTE] > The plugin directory varies depending upon the operating system being used. The directory path below assumes Ubuntu. Please read the [Notation directory structure for system configuration](https://notaryproject.dev/docs/concepts/directory-structure/) for more information.
In this tutorial:
# Download the plugin
curl -Lo notation-azure-kv.tar.gz \
- https://github.com/Azure/notation-azure-kv/releases/download/v0.6.0/notation-azure-kv_0.6.0_Linux_amd64.tar.gz
+ https://github.com/Azure/notation-azure-kv/releases/download/v1.0.0-rc.2/notation-azure-kv_1.0.0-rc.2_linux_amd64.tar.gz
# Extract to the plugin directory
tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv
cosmos-db How To Javascript Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-create-container.md
+
+ Title: Create a container in Azure Cosmos DB for NoSQL using JavaScript
+description: Learn how to create a container in your Azure Cosmos DB for NoSQL account using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 05/17/2023+++
+# Create a container in Azure Cosmos DB for NoSQL using JavaScript
++
+Containers in Azure Cosmos DB store sets of items. Before you can create, query, or manage items, you must first create a container.
+
+## Name a container
+
+In Azure Cosmos DB, a container is analogous to a table in a relational database. When you create a container, the container name forms a segment of the URI used to access the container resource and any child items.
+
+Here are some quick rules when naming a container:
+
+- Keep container names between 3 and 63 characters long.
+- Container names can only contain lowercase letters, numbers, or the dash (-) character.
+- Container names must start with a lowercase letter or number.
+
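The naming rules above can be collapsed into a single pattern check. This helper is a hypothetical sketch for pre-validating input, not part of the Cosmos DB SDK:

```javascript
// First character: lowercase letter or digit; remaining 2-62 characters:
// lowercase letters, digits, or dashes; total length 3-63.
function isValidContainerName(name) {
  return /^[a-z0-9][a-z0-9-]{2,62}$/.test(name);
}
```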
+Once created, the URI for a container is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/colls/<container-name>``
+
+## Create a container
+
+Get a [Database](how-to-javascript-create-database.md) object, then create a [Container](/javascript/api/@azure/cosmos/container):
+
+* [createIfNotExists](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-createifnotexists) - Creates a container if it doesn't exist. If it already exists, the existing container is returned.
+* [create](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-create) - Creates a container. If it already exists, an error statusCode is returned.
+
+```javascript
+const containerName = 'myContainer';
+
+// Possible results:
+// Create then return container
+// Return existing container
+// Return error statusCode
+const { statusCode, container } = await database.containers.createIfNotExists({ id: containerName });
+
+// Possible results:
+// Create then return container
+// Return error statusCode, reason includes container already exists
+const { statusCode: createStatusCode, container: createdContainer } = await database.containers.create({ id: containerName });
+```
+
+The statusCode is an HTTP response code. A successful response is in the 200-299 range.
+
+## Access a container
+
+A container is accessed from the [Container](/javascript/api/@azure/cosmos/container) object either directly or chained from the [CosmosClient](/javascript/api/@azure/cosmos/cosmosclient) or [Database](/javascript/api/@azure/cosmos/database) objects.
+
+```javascript
+const databaseName = 'myDb';
+const containerName = 'myContainer';
+
+// Chained - assumes database and container already exist
+const container = client.database(databaseName).container(containerName);
+
+// Direct - read the database to confirm it exists, then get a container reference
+const database = client.database(databaseName);
+const { statusCode } = await database.read();
+if (statusCode < 400) {
+  const container2 = database.container(containerName);
+}
+
+// Query - assumes database and container already exist
+const { resources } = await client.database(databaseName).containers
+.query({
+ query: `SELECT * FROM root r where r.id =@containerId`,
+ parameters: [
+ {
+ name: '@containerId',
+ value: containerName
+ }
+ ]
+})
+.fetchAll();
+```
+
+Access by object:
+* [Containers](/javascript/api/@azure/cosmos/containers) (plural): Create or query containers.
+* [Container](/javascript/api/@azure/cosmos/container) (singular): Delete container, work with items.
+++
+## Delete a container
+
+Once you get the [Container](/javascript/api/@azure/cosmos/container) object, you can use the Container object to [delete](/javascript/api/@azure/cosmos/container#@azure-cosmos-container-delete) the container:
+
+```javascript
+const { statusCode } = await container.delete();
+```
+
+The statusCode is an HTTP response code. A successful response is in the 200-299 range.
+
+## Next steps
+
+Now that you've created a container, use the next guide to create items.
+
+> [!div class="nextstepaction"]
+> [Create an item](how-to-javascript-create-item.md)
cosmos-db How To Javascript Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-create-database.md
+
+ Title: Create a database in Azure Cosmos DB for NoSQL using JavaScript
+description: Learn how to create a database in your Azure Cosmos DB for NoSQL account using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 05/17/2023+++
+# Create a database in Azure Cosmos DB for NoSQL using JavaScript
++
+Databases in Azure Cosmos DB are units of management for one or more containers. Before you can create or manage containers, you must first create a database.
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+- Keep database names between 3 and 63 characters long.
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
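That URI can be assembled from its parts. A small sketch with a hypothetical helper name, to make the format concrete:

```javascript
// Build the data-plane URI for a database from the account and database names.
function databaseUri(accountName, databaseName) {
  return `https://${accountName}.documents.azure.com/dbs/${databaseName}`;
}
```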
+
+## Create a database
+
+Once you create the [CosmosClient](/javascript/api/@azure/cosmos/cosmosclient), use the client to create a [Database](/javascript/api/@azure/cosmos/database) from two different calls:
+
+* [createIfNotExists](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) - Creates a database if it doesn't exist. If it already exists, the existing database is returned.
+* [create](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-create) - Creates a database. If it already exists, an error statusCode is returned.
+
+```javascript
+const databaseName = 'myDb';
+
+// Possible results:
+// Create then return database
+// Return existing database
+// Return error statusCode
+const {statusCode, database } = await client.databases.createIfNotExists({ id: databaseName });
+
+// Possible results:
+// Create then return database
+// Return error statusCode, reason includes database already exists
+const { statusCode: createStatusCode, database: createdDatabase } = await client.databases.create({ id: databaseName });
+```
+
+The statusCode is an HTTP response code. A successful response is in the 200-299 range.
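Because the SDK surfaces raw HTTP status codes, a success check can be factored into a small function. `isSuccess` here is a hypothetical convenience helper, not an SDK API.

```javascript
// Hypothetical helper: true when an HTTP status code is in the 200-299 success range.
function isSuccess(statusCode) {
  return statusCode >= 200 && statusCode <= 299;
}

console.log(isSuccess(201)); // true  (created)
console.log(isSuccess(409)); // false (conflict, such as "database already exists")
```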
+
+## Access a database
+
+Access an existing database through the [Database](/javascript/api/@azure/cosmos/database) object, either directly from the [CosmosClient](/javascript/api/@azure/cosmos/cosmosclient) or through a query result.
+
+```javascript
+const databaseName = 'myDb';
+
+// Direct - assumes database already exists
+const { database, statusCode } = await client.database(databaseName).read();
+
+// Query - assumes database already exists
+const { resources } = await client.databases
+.query({
+ query: `SELECT * FROM root r where r.id =@dbId`,
+ parameters: [
+ {
+ name: '@dbId',
+ value: databaseName
+ }
+ ]
+})
+.fetchAll();
+```
+
+Access by object:
+* [Databases](/javascript/api/@azure/cosmos/databases) (plural): Used for creating new databases, or querying/reading all databases.
+* [Database](/javascript/api/@azure/cosmos/database) (singular): Used for reading, updating, or deleting an existing database by ID or accessing containers belonging to that database.
+
+## Delete a database
+
+Once you have the [Database](/javascript/api/@azure/cosmos/database) object, use its [delete](/javascript/api/@azure/cosmos/database#@azure-cosmos-database-delete) method to delete the database:
+
+```javascript
+const {statusCode } = await database.delete();
+```
+
+The statusCode is an HTTP response code. A successful response is in the 200-299 range.
+
+## Next steps
+
+Now that you've created a database, use the next guide to create containers.
+
+> [!div class="nextstepaction"]
+> [Create a container](how-to-javascript-create-container.md)
cosmos-db How To Javascript Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-create-item.md
+
+ Title: Create an item in Azure Cosmos DB for NoSQL using JavaScript
+description: Learn how to create an item in your Azure Cosmos DB for NoSQL account using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 05/17/2023+++
+# Create an item in Azure Cosmos DB for NoSQL using JavaScript
++
+Items in Azure Cosmos DB represent a specific entity stored within a container. In the API for NoSQL, an item consists of JSON-formatted data with a unique identifier.
+
+## Item, item definition, and item response
+
+In the JavaScript SDK, the three objects related to an item have different purposes.
+
+|Name|Operations|
+|--|--|
+|[Item](/javascript/api/@azure/cosmos/item)|Functionality including **Read**, **Patch**, **Replace**, **Delete**.|
+|[ItemDefinition](/javascript/api/@azure/cosmos/itemdefinition)|Your custom data object. Includes `id` and `ttl` properties automatically.|
+|[ItemResponse](/javascript/api/@azure/cosmos/itemresponse)|Includes `statusCode`, `item`, and other properties.|
+
+Use the properties of the **ItemResponse** object to understand the result of the operation.
+
+* **statusCode**: HTTP status code. A successful response is in the 200-299 range.
+* **activityId**: Unique identifier for the operation such as create, read, replace, or delete.
+* **etag**: Entity tag associated with the data. Use for optimistic concurrency, caching, and conditional requests.
+* **item**: [Item](/javascript/api/@azure/cosmos/item) object used to perform operations such as read, replace, delete.
+* **resource**: Your custom data.
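A response-handling sketch using these properties is shown below. The `summarize` function and the mocked response objects are hypothetical, since a real `ItemResponse` comes from a live SDK call.

```javascript
// Hypothetical helper: turn an ItemResponse-shaped object into a short summary.
function summarize(response) {
  if (response.statusCode < 200 || response.statusCode > 299) {
    return `Operation failed with status ${response.statusCode}`;
  }
  return `Operation succeeded for item '${response.resource.id}'`;
}

// Mocked response objects for illustration only.
console.log(summarize({ statusCode: 201, resource: { id: '2' } })); // Operation succeeded for item '2'
console.log(summarize({ statusCode: 404, resource: undefined }));   // Operation failed with status 404
```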
+
+## Create a unique identifier for an item
+
+The unique identifier is a distinct string that identifies an item within a container. The ``id`` property is the only required property when creating a new JSON document. For example, this JSON document is a valid item in Azure Cosmos DB:
+
+```json
+{
+ "id": "unique-string-2309509"
+}
+```
+
+Within the scope of a container, two items can't share the same unique identifier.
+
+> [!IMPORTANT]
+> The ``id`` property is case-sensitive. Properties named ``ID``, ``Id``, ``iD``, and ``_id`` are treated as arbitrary JSON properties.
+
+Once created, the URI for an item is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/docs/<item-resource-identifier>``
+
+When referencing the item using a URI, use the system-generated *resource identifier* instead of the ``id`` field. For more information about system-generated item properties in Azure Cosmos DB for NoSQL, see [properties of an item](../resource-model.md#properties-of-an-item)
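Since only the lowercase `id` spelling is honored, a defensive check can flag miscased variants before an insert. `findMiscasedIds` is a hypothetical helper for illustration, not an SDK function.

```javascript
// Hypothetical helper: list property names that look like `id` but are
// cased differently, and would therefore be treated as arbitrary JSON properties.
function findMiscasedIds(doc) {
  return Object.keys(doc).filter(
    (key) => key !== 'id' && key.toLowerCase() === 'id'
  );
}

console.log(findMiscasedIds({ Id: '1', name: 'board' })); // ['Id']
console.log(findMiscasedIds({ id: '1', name: 'board' })); // []
```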
+
+## Create an item
+
+Create an item with the container's [items](/javascript/api/@azure/cosmos/container#@azure-cosmos-container-items) object using the [create](/javascript/api/@azure/cosmos/items) method.
+
+```javascript
+const { statusCode, item, resource, activityId, etag} = await container.items.create({
+ id: '2',
+ category: 'gear-surf-surfboards',
+ name: 'Sunnox Surfboard',
+ quantity: 8,
+ sale: true
+ });
+```
+
+## Access an item
+
+Access an item through the [Item](/javascript/api/@azure/cosmos/item) object. It can be accessed from the [Container](/javascript/api/@azure/cosmos/container) object directly, or chained from the [Database](/javascript/api/@azure/cosmos/database) or [CosmosClient](/javascript/api/@azure/cosmos/cosmosclient) objects.
+
+```javascript
+// Chained, then use a method of the Item object such as `read`
+const { statusCode, item, resource, activityId, etag} = await client.database(databaseId).container(containerId).item(itemId).read();
+```
+
+Access by object:
+* [Items](/javascript/api/@azure/cosmos/items) (plural): Create, batch, watch change feed, read all, upsert, or query items.
+* [Item](/javascript/api/@azure/cosmos/item) (singular): Read, patch, replace, or delete an item.
+
+## Replace an item
+
+Replace an item's data by using the [Item](/javascript/api/@azure/cosmos/item) object's [replace](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-replace) method.
+
+```javascript
+const { statusCode, item, resource, activityId, etag} = await item.replace({
+ id: '2',
+ category: 'gear-surf-surfboards-retro',
+ name: 'Sunnox Surfboard Retro',
+ quantity: 5,
+ sale: false
+ });
+```
+
+## Read an item
+
+Read the most current data with the [Item](/javascript/api/@azure/cosmos/item) object's [read](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read) method.
+
+```javascript
+const { statusCode, item, resource, activityId, etag} = await item.read();
+```
+
+## Delete an item
+
+Delete the item with the [Item](/javascript/api/@azure/cosmos/item) object's [delete](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-delete) method.
+
+```javascript
+const { statusCode, item, activityId, etag} = await item.delete();
+```
+
+## Next steps
+
+Now that you've created various items, use the next guide to query for items.
+
+> [!div class="nextstepaction"]
+> [Query items](how-to-javascript-query-items.md)
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-get-started.md
+
+ Title: Get started with Azure Cosmos DB for NoSQL using JavaScript
+description: Get started developing a JavaScript application that works with Azure Cosmos DB for NoSQL. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for NoSQL endpoint.
++++
+ms.devlang: javascript
+ Last updated : 07/06/2022+++
+# Get started with Azure Cosmos DB for NoSQL using JavaScript
++
+This article shows you how to connect to Azure Cosmos DB for NoSQL using the JavaScript SDK. Once connected, you can perform operations on databases, containers, and items.
+
+[Package (npm)](https://www.npmjs.com/package/@azure/cosmos) | [Samples](samples-nodejs.md) | [API reference](/javascript/api/@azure/cosmos) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cosmosdb/cosmos) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+- [Node.js LTS](https://nodejs.org/)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Set up your local project
+
+1. Create a new directory for your JavaScript project in a bash shell.
+
+ ```bash
+ mkdir cosmos-db-nosql-javascript-samples && cd ./cosmos-db-nosql-javascript-samples
+ ```
+
+1. Create a new JavaScript application by using the [``npm init``](https://docs.npmjs.com/cli/v6/commands/npm-init) command with the ``-y`` flag to accept all default values.
+
+ ```bash
+ npm init -y
+ ```
+
+1. Install the required dependency for the Azure Cosmos DB for NoSQL JavaScript SDK.
+
+ ```bash
+ npm install @azure/cosmos
+ ```
+
+## <a id="connect-to-azure-cosmos-db-sql-api"></a>Connect to Azure Cosmos DB for NoSQL
+
+To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to an API for NoSQL account using the **CosmosClient** class:
+
+- [Connect with an API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key)
+- [Connect with an API for NoSQL connection string](#connect-with-a-connection-string)
+- [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
+
+### Connect with an endpoint and key
+
+The most common constructor for **CosmosClient** has two parameters:
+
+| Parameter | Example value | Description |
+| | | |
+| ``accountEndpoint`` | ``COSMOS_ENDPOINT`` environment variable | API for NoSQL endpoint to use for all requests |
+| ``authKeyOrResourceToken`` | ``COSMOS_KEY`` environment variable | Account key or resource token to use when authenticating |
+
+#### Retrieve your account endpoint and key
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Create a shell variable for *resourceGroupName*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-javascript-howto-rg"
+ ```
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Get the API for NoSQL endpoint *URI* for the account using the [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show) command.
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query "documentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``az cosmosdb keys list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "keys" \
+ --query "primaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Create a shell variable for *RESOURCE_GROUP_NAME*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos-javascript-howto-rg"
+ ```
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Get the API for NoSQL endpoint *URI* for the account using the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ }
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property "DocumentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "Keys"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "PrimaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+##### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos-javascript-howto-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the existing Azure Cosmos DB for NoSQL account page.
+
+1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL API account page. The Keys option is highlighted in the navigation menu.":::
+
+1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL API account.":::
+++
+To use the **URI** and **PRIMARY KEY** values within your code, persist them to new environment variables on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_ENDPOINT = "<cosmos-account-URI>"
+$env:COSMOS_KEY = "<cosmos-account-PRIMARY-KEY>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_ENDPOINT="<cosmos-account-URI>"
+export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
+```
+++
+#### Create CosmosClient with account endpoint and key
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` and ``COSMOS_KEY`` environment variables as parameters.
+
+```javascript
+const endpoint = process.env.COSMOS_ENDPOINT;
+const key = process.env.COSMOS_KEY;
+
+const client = new CosmosClient({ endpoint, key });
+```
+
+### Connect with a connection string
+
+Another constructor for **CosmosClient** contains only a single parameter:
+
+| Parameter | Example value | Description |
+| | | |
+| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the API for NoSQL account |
+
+#### Retrieve your account connection string
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``az cosmosdb keys list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "connection-strings" \
+ --query "connectionStrings[?description == \`Primary SQL Connection String\`] | [0].connectionString"
+ ```
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "ConnectionStrings"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "Primary SQL Connection String" -First 1
+ ```
+
+##### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos-javascript-howto-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the existing Azure Cosmos DB for NoSQL account page.
+
+1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option.
+
+1. Record the value from the **PRIMARY CONNECTION STRING** field.
++
+To use the **PRIMARY CONNECTION STRING** value within your code, persist it to a new environment variable on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_CONNECTION_STRING = "<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_CONNECTION_STRING="<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+++
+#### Create CosmosClient with connection string
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_CONNECTION_STRING`` environment variable as the only parameter.
+
+```javascript
+// New instance of CosmosClient class using a connection string
+const cosmosClient = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
+```
+
+### Connect using the Microsoft Identity Platform
+
+To connect to your API for NoSQL account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal depends on where you host your application code. The table below serves as a quick reference guide.
+
+| Where the application runs | Security principal |
+|--|--|
+| Local machine (developing and testing) | User identity or service principal |
+| Azure | Managed identity |
+| Servers or clients outside of Azure | Service principal |
+
+#### Import @azure/identity
+
+The **@azure/identity** npm package contains core authentication functionality that is shared among all Azure SDK libraries.
+
+1. Import the [@azure/identity](https://www.npmjs.com/package/@azure/identity) npm package using the ``npm install`` command.
+
+ ```bash
+ npm install @azure/identity
+ ```
+
+1. In your code editor, add the dependencies.
+
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
+ ```
+
+#### Create CosmosClient with default credential implementation
+
+If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/javascript/api/@azure/identity/defaultazurecredential) instance. Then create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
+
+```javascript
+const { CosmosClient } = require("@azure/cosmos");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+const endpoint = process.env.COSMOS_ENDPOINT;
+const credential = new DefaultAzureCredential();
+
+const cosmosClient = new CosmosClient({
+ endpoint,
+ aadCredentials: credential
+});
+```
+
+#### Create CosmosClient with a custom credential implementation
+
+If you plan to deploy the application out of Azure, you can obtain an OAuth token by using other classes in the [@azure/identity client library for JavaScript](/javascript/api/@azure/identity/). These other classes also derive from the ``TokenCredential`` class.
+
+For this example, we create a [``ClientSecretCredential``](/javascript/api/@azure/identity/clientsecretcredential) instance by using client and tenant identifiers, along with a client secret.
+
+You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
+
+```javascript
+const { CosmosClient } = require("@azure/cosmos");
+const { ClientSecretCredential } = require("@azure/identity");
+
+const endpoint = process.env.COSMOS_ENDPOINT;
+
+const credential = new ClientSecretCredential(
+  process.env.AAD_TENANT_ID,
+  process.env.AAD_CLIENT_ID,
+  process.env.AAD_CLIENT_SECRET
+);
+
+const cosmosClient = new CosmosClient({
+ endpoint,
+ aadCredentials: credential
+});
+```
+
+## Build your application
+
+As you build your application, your code will primarily interact with four types of resources:
+
+- The API for NoSQL account, which is the unique top-level namespace for your Azure Cosmos DB data.
+
+- Databases, which organize the containers in your account.
+
+- Containers, which contain a set of individual items in your database.
+
+- Items, which represent a JSON document in your container.
+
+The following diagram shows the relationship between these resources.
+
+ *Diagram of the resource hierarchy: an Azure Cosmos DB account contains databases, each database contains containers, and each container contains items.*
+
+Each type of resource is represented by one or more associated classes. Here's a list of the most common classes:
+
+| Class | Description |
+|||
+| [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) | This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service. |
+| [``Database``](/javascript/api/@azure/cosmos/database) | This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it. |
+| [``Container``](/javascript/api/@azure/cosmos/container) | This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it. |
+
+The following guides show you how to use each of these classes to build your application.
+
+| Guide | Description |
+|--||
+| [Create a database](how-to-javascript-create-database.md) | Create databases |
+| [Create a container](how-to-javascript-create-container.md) | Create containers |
+| [Create and read an item](how-to-javascript-create-item.md) | Point read a specific item |
+| [Query items](how-to-javascript-query-items.md) | Query multiple items |
+
+## See also
+
+- [npm package](https://www.npmjs.com/package/@azure/cosmos)
+- [Samples](samples-nodejs.md)
+- [API reference](/javascript/api/@azure/cosmos/)
+- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cosmosdb/cosmos)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+
+## Next steps
+
+Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB for NoSQL using JavaScript](how-to-javascript-create-database.md)
cosmos-db How To Javascript Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-query-items.md
+
+ Title: Query items in Azure Cosmos DB for NoSQL using JavaScript
+description: Learn how to query items in your Azure Cosmos DB for NoSQL account using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 05/17/2023+++
+# Query items in Azure Cosmos DB for NoSQL using JavaScript
+
+Items in Azure Cosmos DB represent entities stored within a container. In the API for NoSQL, an item consists of JSON-formatted data with a unique identifier. When you issue queries using the API for NoSQL, results are returned as a JSON array of JSON documents.
+
+## Query items using SQL
+
+Azure Cosmos DB for NoSQL supports the use of Structured Query Language (SQL) to perform queries on items in containers. A simple SQL query like ``SELECT * FROM products`` returns all items and properties from a container. Queries can be even more complex and include specific field projections, filters, and other common SQL clauses:
+
+```sql
+SELECT
+ p.name,
+ p.quantity
+FROM
+ products p
+WHERE
+ p.quantity > 500
+```
+
+To learn more about the SQL syntax for Azure Cosmos DB for NoSQL, see [Getting started with SQL queries](query/getting-started.md).
+
+## Query an item
+
+Create an array of matched items from the container's [items](/javascript/api/@azure/cosmos/container#@azure-cosmos-container-items) object using the [query](/javascript/api/@azure/cosmos/items) method.
+
+```javascript
+const querySpec = {
+ query: `SELECT * FROM ${container.id} f WHERE f.name = @name`,
+ parameters: [{
+ name: "@name",
+ value: "Sunnox Surfboard",
+ }],
+};
+const { resources } = await container.items.query(querySpec).fetchAll();
+
+for (const product of resources) {
+ console.log(`${product.name}, ${product.quantity} in stock `);
+}
+```
+
+The [query](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method returns a [QueryIterator](/javascript/api/@azure/cosmos/queryiterator) object. Use the iterator's [fetchAll](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchall) method to retrieve all the results. The QueryIterator also provides [fetchNext](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext), [hasMoreResults](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-hasmoreresults), and other methods to help you use the results.
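The `hasMoreResults`/`fetchNext` paging pattern can be sketched as follows. The iterator here is mocked, because a real `QueryIterator` requires a live account; only the loop shape matches the SDK.

```javascript
// Mocked QueryIterator-shaped object that yields predefined pages of results.
function makeMockIterator(pages) {
  let index = 0;
  return {
    hasMoreResults: () => index < pages.length,
    fetchNext: async () => ({ resources: pages[index++] }),
  };
}

// Drain an iterator page by page, collecting every result.
async function drainIterator(iterator) {
  const all = [];
  while (iterator.hasMoreResults()) {
    const { resources } = await iterator.fetchNext();
    all.push(...resources);
  }
  return all;
}

const iterator = makeMockIterator([[{ id: '1' }, { id: '2' }], [{ id: '3' }]]);
drainIterator(iterator).then((items) => console.log(items.length)); // 3
```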
+
+## Next steps
+
+Now that you've queried multiple items, try one of our end-to-end tutorials with the API for NoSQL.
+
+> [!div class="nextstepaction"]
+> [Build a Node.js web app by using the JavaScript SDK to manage an API for NoSQL account in Azure Cosmos DB](tutorial-nodejs-web-app.md)
cosmos-db Howto Ingest Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-blob-storage.md
Previously updated : 01/30/2023 Last updated : 05/12/2023 # How to ingest data using pg_azure_storage in Azure Cosmos DB for PostgreSQL
pg_size_pretty | 5257 kB
content_type | application/x-gzip ```
-You can filter the output either by using a regular SQL `WHERE` clause, or by using the `prefix` parameter of the `blob_list` UDF. The latter will filter the returned rows on the Azure Blob Storage side.
+You can filter the output either by using a regular SQL `WHERE` clause, or by using the `prefix` parameter of the `blob_list` UDF. The latter filters the returned rows on the Azure Blob Storage side.
> [!NOTE]
Currently the extension supports the following file formats:
### Load data with blob_get()
-The `COPY` command is convenient, but limited in flexibility. Internally COPY uses the `blob_get` function, which you can use directly to manipulate data in much more complex scenarios.
+The `COPY` command is convenient, but limited in flexibility. Internally COPY uses the `blob_get` function, which you can use directly to manipulate data in more complex scenarios.
```sql SELECT *
INSERT 0 264308
Congratulations, you just learned how to load data into Azure Cosmos DB for PostgreSQL directly from Azure Blob Storage.
-Learn how to create a [real-time dashboard](tutorial-design-database-realtime.md) with Azure Cosmos DB for PostgreSQL.
+- Learn how to create a [real-time dashboard](tutorial-design-database-realtime.md) with Azure Cosmos DB for PostgreSQL.
+- Learn more about [pg_azure_storage](reference-pg-azure-storage.md).
cosmos-db Reference Pg Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-pg-azure-storage.md
+
+ Title: pg_azure_storage extension
+description: Reference documentation for using Azure Blob Storage extension
+++++ Last updated : 05/30/2023++
+# pg_azure_storage extension
++
+The pg_azure_storage extension allows you to load data in multiple file formats directly from Azure Blob Storage into your Azure Cosmos DB for PostgreSQL cluster. Containers with the "Private" or "Blob" access level require adding a private access key.
+
+You can create the extension from psql by running:
+```postgresql
+SELECT create_extension('azure_storage');
+```
+
+## COPY FROM
+
+`COPY FROM` copies data from a file, hosted on a file system or in Azure Blob Storage, into an SQL table (appending the data to whatever is in the table already). The command is helpful when dealing with large datasets, significantly reducing the time and resources required for data transfer.
+
+```postgresql
+COPY table_name [ ( column_name [, ...] ) ]
+FROM { 'filename' | PROGRAM 'command' | STDIN | Azure_blob_url}
+ [ [ WITH ] ( option [, ...] ) ]
+ [ WHERE condition ]
+```
+> [!NOTE]
+> The supported syntax and options are the same as for the native PostgreSQL [COPY](https://www.postgresql.org/docs/current/sql-copy.html) command, with the following exceptions:
+>
+> - `FREEZE [ boolean ]`
+> - `HEADER MATCH`
+>
+> The `COPY TO` syntax isn't yet supported.
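As a hedged illustration, the command might be used like this to append rows from a CSV blob to an existing table. The storage account, container, file, and table names below are hypothetical.

```postgresql
-- Hypothetical example: append rows from a CSV blob to the "events" table.
-- Assumes access to a private container was configured beforehand,
-- for example with azure_storage.account_add.
COPY events
FROM 'https://mystorageaccount.blob.core.windows.net/mycontainer/events.csv'
WITH (format 'csv');
```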
+
+### Arguments
+#### Azure_blob_url
+The URL of a block blob in Azure Blob Storage, in the form ``https://<storage-account>.blob.core.windows.net/<container>/<blob>``. Objects in blob storage can be accessed from anywhere in the world via HTTPS.
+
+### Option
+#### format
+Specifies the format of the file. Currently, the extension supports the following formats:
+
+| **Format** | **Description** |
+||-|
+| csv | Comma-separated values format used by PostgreSQL COPY |
+| tsv | Tab-separated values, the default PostgreSQL COPY format |
+| binary | Binary PostgreSQL COPY format |
+| text | A file containing a single text value (for example, large JSON or XML) |
+
+## azure_storage.account_add
+The function adds access to a storage account.
+
+```postgresql
+azure_storage.account_add
+ (account_name_p text
+ ,account_key_p text);
+```
+### Arguments
+#### account_name_p
+An Azure blob storage (ABS) account contains all of your ABS objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your ABS that is accessible from anywhere in the world over HTTPS.
+
+#### account_key_p
+Your Azure blob storage (ABS) access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. The account key is stored in a table that is accessible by the postgres superuser, azure_storage_admin and all roles granted those admin permissions. To see which storage accounts exist, use the function account_list.
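For example, registering a storage account and its key might look like this. The account name is hypothetical, and the placeholder must be replaced with a real access key.

```postgresql
-- Hypothetical example: register a storage account with its access key.
SELECT azure_storage.account_add('mystorageaccount', '<access-key>');
```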
+
+## azure_storage.account_remove
+The function revokes access to a storage account.
+
+```sql
+azure_storage.account_remove
+ (account_name_p text);
+```
+
+### Arguments
+#### account_name_p
+Azure blob storage (ABS) account contains all of your ABS objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your ABS that is accessible from anywhere in the world over HTTPS.
+
+## azure_storage.account_user_add
+The function grants a role access to a storage account.
+
+```postgresql
+azure_storage.account_user_add
+ ( account_name_p text
+ , user_p regrole);
+```
+
+### Arguments
+#### account_name_p
+An Azure blob storage (ABS) account contains all of your ABS objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your ABS that is accessible from anywhere in the world over HTTPS.
+
+#### user_p
+A user-created role visible on the cluster.
+
+> [!NOTE]
+> The `account_user_add`, `account_add`, `account_remove`, and `account_user_remove` functions require setting permissions on each individual node in the cluster.
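+As a sketch (the account and role names are hypothetical), granting an existing role access might look like:
+
+```sql
+-- Hypothetical: allow the role "support" to use the "pgquickstart" account.
+SELECT azure_storage.account_user_add('pgquickstart', 'support');
+```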
+
+## azure_storage.account_user_remove
+The function removes a role's access to a storage account.
+
+```postgresql
+azure_storage.account_user_remove
+ (account_name_p text
+ ,user_p regrole);
+```
+
+### Arguments
+#### account_name_p
+An Azure blob storage (ABS) account contains all of your ABS objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your ABS that is accessible from anywhere in the world over HTTPS.
+
+#### user_p
+A user-created role visible on the cluster.
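+A sketch of revoking a role's access (the account and role names are hypothetical):
+
+```sql
+-- Hypothetical: revoke the "support" role's access to the "pgquickstart" account.
+SELECT azure_storage.account_user_remove('pgquickstart', 'support');
+```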
+
+## azure_storage.account_list
+The function lists the accounts and roles that have access to Azure Blob Storage.
+
+```postgresql
+azure_storage.account_list
+ (OUT account_name text
+ ,OUT allowed_users regrole[]
+ )
+Returns TABLE;
+```
+
+### Arguments
+#### account_name
+Azure blob storage (ABS) account contains all of your ABS objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your ABS that is accessible from anywhere in the world over HTTPS.
+
+#### allowed_users
+Lists the roles that have access to the Azure Blob Storage account.
+
+### Return Type
+TABLE
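+For example, to check which accounts are registered and which roles can use them:
+
+```sql
+SELECT account_name, allowed_users
+FROM azure_storage.account_list();
+```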
+
+## azure_storage.blob_list
+The function lists the available blob files within a user container with their properties.
+
+```postgresql
+azure_storage.blob_list
+ (account_name text
+ ,container_name text
+ ,prefix text DEFAULT ''::text
+ ,OUT path text
+ ,OUT bytes bigint
+ ,OUT last_modified timestamp with time zone
+ ,OUT etag text
+ ,OUT content_type text
+ ,OUT content_encoding text
+ ,OUT content_hash text
+ )
+Returns SETOF record;
+```
+
+### Arguments
+#### account_name
+The storage account name provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTPS.
+
+#### container_name
+A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.
+A container name must be a valid DNS name, as it forms part of the unique URI used to address the container or its blobs. Follow these rules when naming a container:
+
+* Container names can be between 3 and 63 characters long.
+* Container names must start with a letter or number, and can contain only lowercase letters, numbers, and the dash (-) character.
+* Two or more consecutive dash characters aren't permitted in container names.
+
+The URI for a container is similar to:
+`https://myaccount.blob.core.windows.net/mycontainer`
+
+#### prefix
+Filters the listing to blobs whose names begin with the given prefix. Pass an empty string (the default) to list all blobs in the container.
+#### path
+Fully qualified path of the blob within the container.
+#### bytes
+Size of file object in bytes.
+#### last_modified
+The time when the blob content was last modified.
+#### etag
+An ETag property is used for optimistic concurrency during updates. It isn't a timestamp as there's another property called Timestamp that stores the last time a record was updated. For example, if you load an entity and want to update it, the ETag must match what is currently stored. Setting the appropriate ETag is important because if you have multiple users editing the same item, you don't want them overwriting each other's changes.
+#### content_type
+The `Content-Type` property of the blob, which indicates the MIME type of the blob's content (for example, `text/csv` or `application/octet-stream`).
+#### content_encoding
+Azure Storage allows you to define Content-Encoding property on a blob. For compressed content, you could set the property to be GZIP. When the browser accesses the content, it automatically decompresses the content.
+#### content_hash
+This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the hash that has arrived with the one that was sent. If the two hashes don't match, the operation fails with error code 400 (Bad Request).
+
+### Return Type
+SETOF record
+
+> [!NOTE]
+> **Permissions**
+> Now you can list containers set to Private and Blob access levels for that storage, but only as the `citus` user, which has the `azure_storage_admin` role granted to it. If you create a new user named `support`, it isn't allowed to access container contents by default.
+
+## azure_storage.blob_get
+The function loads the content of one or more files from a container, with support for filtering or manipulating the data before import.
+
+```postgresql
+azure_storage.blob_get
+ (account_name text
+ ,container_name text
+ ,path text
+ ,decoder text DEFAULT 'auto'::text
+ ,compression text DEFAULT 'auto'::text
+ ,options jsonb DEFAULT NULL::jsonb
+ )
+RETURNS SETOF record;
+```
+There's an overloaded version of the function that takes a `rec` parameter, which lets you conveniently define the output record format.
+```postgresql
+azure_storage.blob_get
+ (account_name text
+ ,container_name text
+ ,path text
+ ,rec anyelement
+ ,decoder text DEFAULT 'auto'::text
+ ,compression text DEFAULT 'auto'::text
+ ,options jsonb DEFAULT NULL::jsonb
+ )
+RETURNS SETOF anyelement;
+```
+
+### Arguments
+#### account_name
+The storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTPS.
+#### container_name
+A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.
+A container name must be a valid DNS name, as it forms part of the unique URI used to address the container or its blobs.
+#### path
+Blob name existing in the container.
+#### rec
+Defines the output record structure.
+#### decoder
+Specifies the blob format. The decoder can be set to `auto` (the default) or any of the following values:
+| **Format** | **Description** |
+||-|
+| csv | Comma-separated values format used by PostgreSQL COPY |
+| tsv | Tab-separated values, the default PostgreSQL COPY format |
+| binary | Binary PostgreSQL COPY format |
+| text | A file containing a single text value (for example, large JSON or XML) |
+
+#### compression
+Defines the compression format. Available options are `auto`, `gzip`, and `none`. The `auto` option (the default) guesses the compression based on the file extension (`.gz` == gzip). The `none` option forces ignoring the extension and not attempting to decode. The `gzip` option forces use of the gzip decoder (for when you have a gzipped file with a nonstandard extension). The extension currently doesn't support any other compression formats.
+#### options
+For handling custom headers, custom separators, escape characters, and so on, `options` works similarly to the options of the `COPY` command in PostgreSQL. Build the value with one of the utility functions and pass it to the `blob_get` function.
+
+### Return Type
+SETOF Record
+
+> [!NOTE]
+> There are four utility functions that are called as a parameter within `blob_get` and help build values for its `options` argument. Each utility function is designated for the decoder matching its name.
+
+## azure_storage.options_csv_get
+The function is a utility function that's called as a parameter within `blob_get` and is useful for decoding CSV content.
+
+```postgresql
+azure_storage.options_csv_get
+ (delimiter text DEFAULT NULL::text
+ ,null_string text DEFAULT NULL::text
+ ,header boolean DEFAULT NULL::boolean
+ ,quote text DEFAULT NULL::text
+ ,escape text DEFAULT NULL::text
+ ,force_not_null text[] DEFAULT NULL::text[]
+ ,force_null text[] DEFAULT NULL::text[]
+ ,content_encoding text DEFAULT NULL::text
+ )
+Returns jsonb;
+```
+
+### Arguments
+#### delimiter
+Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. It must be a single one-byte character.
+
+#### null_string
+Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings.
+
+#### header
+Specifies that the file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table.
+
+#### quote
+Specifies the quoting character to be used when a data value is quoted. The default is double-quote. It must be a single one-byte character.
+
+#### escape
+Specifies the character that should appear before a data character that matches the QUOTE value. The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). It must be a single one-byte character.
+
+#### force_not_null
+Don't match the specified columns' values against the null string. In the default case where the null string is empty, it means that empty values are read as zero-length strings rather than nulls, even when they aren't quoted.
+
+#### force_null
+Match the specified columns' values against the null string, even if it has been quoted, and if a match is found set the value to NULL. In the default case where the null string is empty, it converts a quoted empty string into NULL.
+
+#### content_encoding
+Specifies that the file is encoded in the encoding_name. If the option is omitted, the current client encoding is used.
+
+### Return Type
+jsonb
+
+## azure_storage.options_copy
+The function is a utility function that's called as a parameter within `blob_get` to build `COPY`-style options.
+
+```postgresql
+azure_storage.options_copy
+ (delimiter text DEFAULT NULL::text
+ ,null_string text DEFAULT NULL::text
+ ,header boolean DEFAULT NULL::boolean
+ ,quote text DEFAULT NULL::text
+ ,escape text DEFAULT NULL::text
+ ,force_quote text[] DEFAULT NULL::text[]
+ ,force_not_null text[] DEFAULT NULL::text[]
+ ,force_null text[] DEFAULT NULL::text[]
+ ,content_encoding text DEFAULT NULL::text
+ )
+Returns jsonb;
+```
+
+### Arguments
+#### delimiter
+Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. It must be a single one-byte character.
+
+#### null_string
+Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings.
+
+#### header
+Specifies that the file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table.
+
+#### quote
+Specifies the quoting character to be used when a data value is quoted. The default is double-quote. It must be a single one-byte character.
+
+#### escape
+Specifies the character that should appear before a data character that matches the QUOTE value. The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). It must be a single one-byte character.
+
+#### force_quote
+Forces quoting to be used for all non-NULL values in each specified column. NULL output is never quoted. If * is specified, non-NULL values are quoted in all columns.
+
+#### force_not_null
+Don't match the specified columns' values against the null string. In the default case where the null string is empty, it means that empty values are read as zero-length strings rather than nulls, even when they aren't quoted.
+
+#### force_null
+Match the specified columns' values against the null string, even if it has been quoted, and if a match is found set the value to NULL. In the default case where the null string is empty, it converts a quoted empty string into NULL.
+
+#### content_encoding
+Specifies that the file is encoded in the encoding_name. If the option is omitted, the current client encoding is used.
+
+### Return Type
+jsonb
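+As a sketch (reusing the sample account and a hypothetical pipe-delimited file name), the returned jsonb value can be passed to `blob_get` through its `options` parameter:
+
+```sql
+SELECT * FROM azure_storage.blob_get
+        ('pgquickstart'
+        ,'publiccontainer'
+        ,'events_pipe.csv'
+        ,NULL::events
+        ,options := azure_storage.options_copy(delimiter := '|', header := 'true')
+        );
+```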
+
+## azure_storage.options_tsv
+The function is a utility function that's called as a parameter within `blob_get`. It's useful for decoding TSV content.
+
+```postgresql
+azure_storage.options_tsv
+ (delimiter text DEFAULT NULL::text
+ ,null_string text DEFAULT NULL::text
+ ,content_encoding text DEFAULT NULL::text
+ )
+Returns jsonb;
+```
+
+### Arguments
+#### delimiter
+Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. It must be a single one-byte character.
+
+#### null_string
+Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings.
+
+#### content_encoding
+Specifies that the file is encoded in the encoding_name. If the option is omitted, the current client encoding is used.
+
+### Return Type
+jsonb
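+A sketch of decoding a tab-separated blob with a custom null string (the file name is a hypothetical placeholder):
+
+```sql
+SELECT * FROM azure_storage.blob_get
+        ('pgquickstart'
+        ,'publiccontainer'
+        ,'events.tsv'
+        ,NULL::events
+        ,options := azure_storage.options_tsv(null_string := 'NULL')
+        );
+```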
+
+## azure_storage.options_binary
+The function acts as a utility function called as a parameter within blob_get. It's useful for decoding the binary content.
+
+```postgresql
+azure_storage.options_binary
+ (content_encoding text DEFAULT NULL::text)
+Returns jsonb;
+```
+
+### Arguments
+#### content_encoding
+Specifies that the file is encoded in the encoding_name. If this option is omitted, the current client encoding is used.
+
+### Return Type
+jsonb
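+A sketch of reading a blob stored in PostgreSQL binary COPY format (the file name is a hypothetical placeholder):
+
+```sql
+SELECT * FROM azure_storage.blob_get
+        ('pgquickstart'
+        ,'publiccontainer'
+        ,'events.bin'
+        ,NULL::events
+        ,decoder := 'binary'
+        ,options := azure_storage.options_binary()
+        );
+```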
+
+> [!NOTE]
+> **Permissions**
+> Now you can list containers set to Private and Blob access levels for that storage, but only as the `citus` user, which has the `azure_storage_admin` role granted to it. If you create a new user named `support`, it isn't allowed to access container contents by default.
+
+## Examples
+The examples use a sample Azure storage account (`pgquickstart`) with custom files uploaded to cover different use cases. Start by creating the table that's used across the set of examples.
+```sql
+CREATE TABLE IF NOT EXISTS public.events
+ (
+ event_id bigint
+ ,event_type text
+ ,event_public boolean
+ ,repo_id bigint
+ ,payload jsonb
+ ,repo jsonb
+ ,user_id bigint
+ ,org jsonb
+ ,created_at timestamp without time zone
+ );
+```
+
+### Adding access key of storage account (mandatory for access level = private)
+The example illustrates adding an access key for the storage account so that you can query it from a session on the Azure Cosmos DB for PostgreSQL cluster.
+
+```sql
+SELECT azure_storage.account_add('pgquickstart', 'SECRET_ACCESS_KEY');
+```
+> [!TIP]
+> In your storage account, open **Access keys**. Copy the **Storage account name** and copy the **Key** from **key1** section (you have to select **Show** next to the key first).
+
+### Removing access key of storage account
+The example illustrates removing the access key for a storage account. Removing the key removes access to files hosted in private containers in the storage account.
+
+```sql
+SELECT azure_storage.account_remove('pgquickstart');
+```
+
+### List the objects within a `public` container on Azure storage account
+The following example illustrates accessing the available files within the public container.
+```sql
+SELECT * FROM azure_storage.blob_list('pgquickstart','publiccontainer');
+```
+
+### List the objects within a `private` container on Azure storage account (adding access key is mandatory)
+The following example illustrates accessing the available files within the private container.
+```sql
+SELECT * FROM azure_storage.blob_list('pgquickstart','privatecontainer');
+```
+
+### List the objects with a specific prefix within the public container
+The following example illustrates listing all the available files whose names start with a given prefix.
+
+```sql
+SELECT * FROM azure_storage.blob_list('pgquickstart','publiccontainer','e');
+```
+Alternatively
+
+```sql
+SELECT * FROM azure_storage.blob_list('pgquickstart','publiccontainer') WHERE path LIKE 'e%';
+```
+
+### Read content from an object in a container
+The `blob_get` function retrieves a file from blob storage. For `blob_get` to know how to parse the data, you can pass a value (`NULL::table_name`) whose type has the same format as the file.
+
+```sql
+SELECT * FROM azure_storage.blob_get
+ ('pgquickstart'
+ ,'publiccontainer'
+ ,'events.csv.gz'
+ , NULL::events)
+LIMIT 5;
+```
+
+Alternatively, we can explicitly define the columns in the `FROM` clause.
+
+```sql
+SELECT * FROM azure_storage.blob_get('pgquickstart','publiccontainer','events.csv')
+AS res (
+        event_id BIGINT
+        ,event_type TEXT
+        ,event_public BOOLEAN
+        ,repo_id BIGINT
+ ,payload JSONB
+ ,repo JSONB
+ ,user_id BIGINT
+ ,org JSONB
+ ,created_at TIMESTAMP WITHOUT TIME ZONE)
+LIMIT 5;
+```
+
+### Use decoder option
+The example illustrates the use of the `decoder` option. Normally the format is inferred from the file extension, but when the file name doesn't have a matching extension, you can pass the `decoder` argument.
+
+```sql
+SELECT * FROM azure_storage.blob_get
+ ('pgquickstart'
+ ,'publiccontainer'
+ ,'events'
+ , NULL::events
+ , decoder := 'csv')
+LIMIT 5;
+```
+
+### Use compression with decoder option
+The example shows how to force the use of gzip decompression on a gzip-compressed file that doesn't have the standard `.gz` extension.
+
+```sql
+SELECT * FROM azure_storage.blob_get
+ ('pgquickstart'
+ ,'publiccontainer'
+ ,'events-compressed'
+ , NULL::events
+ , decoder := 'csv'
+ , compression := 'gzip')
+LIMIT 5;
+```
+
+### Import filtered content & modify before loading from csv format object
+The example illustrates how to filter and modify the content imported from an object in a container before loading it into a SQL table.
+
+```sql
+SELECT concat('P-',event_id::text) FROM azure_storage.blob_get
+ ('pgquickstart'
+ ,'publiccontainer'
+ ,'events.csv'
+ , NULL::events)
+WHERE event_type='PushEvent'
+LIMIT 5;
+```
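+To actually load the filtered rows into the table rather than just selecting them, the same pattern can feed an `INSERT` (a sketch based on the sample `events` table above):
+
+```sql
+-- Hypothetical: import only PushEvent rows into the events table.
+INSERT INTO events
+SELECT * FROM azure_storage.blob_get
+        ('pgquickstart'
+        ,'publiccontainer'
+        ,'events.csv'
+        , NULL::events)
+WHERE event_type = 'PushEvent';
+```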
+
+### Query content from file with headers, custom separators, escape characters
+The example illustrates the use of the `options` argument for processing files with headers, custom separators, escape characters, and so on. You can pass `COPY`-style options to the `blob_get` function by using utility functions such as `azure_storage.options_csv_get`.
+```sql
+SELECT * FROM azure_storage.blob_get
+ ('pgquickstart'
+ ,'publiccontainer'
+ ,'events_pipe.csv'
+ ,NULL::events
+ ,options := azure_storage.options_csv_get(delimiter := '|' , header := 'true')
+ );
+```
+
+### Aggregation query on content of an object in the container
+The example illustrates the ability to directly perform analysis on the data without importing the set into the database.
+```sql
+SELECT event_type,COUNT(1) FROM azure_storage.blob_get
+ ('pgquickstart'
+ ,'publiccontainer'
+ ,'events.csv'
+ , NULL::events)
+GROUP BY event_type
+ORDER BY 2 DESC
+LIMIT 5;
+```
+
+## Next Steps
+
+Learn more about analyzing the dataset, along with alternative options.
+
+> [!div class="nextstepaction"]
+> [How to ingest data using pg_azure_storage](howto-ingest-azure-blob-storage.md)
cost-management-billing Understand Rhel Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-rhel-reservation-charges.md
For example, if you buy a plan for Red Hat Linux Enterprise Server for a VM with
- 1 deployed VMs with 1 to 4 vCPUs, - or 0.46 or about 46% of Red Hat Enterprise Linux costs for a VM with 5 or more vCPUs.
-### Red Hat Enterprise Linux
-
-Azure portal marketplace names:
-- Red Hat Enterprise Linux 6.7-- Red Hat Enterprise Linux 6.8-- Red Hat Enterprise Linux 6.9-- Red Hat Enterprise Linux 6.10-- Red Hat Enterprise Linux 7-- Red Hat Enterprise Linux 7.2-- Red Hat Enterprise Linux 7.3-- Red Hat Enterprise Linux 7.4-- Red Hat Enterprise Linux 7.5-- Red Hat Enterprise Linux 7.6-- Red Hat Enterprise Linux 8.2-
-[Check Red Hat Enterprise Linux meters that the plan applies to](https://phoenixnap.com/kb/how-to-check-redhat-version)
+For more information, see [Review SUSE VM usage before you buy](understand-suse-reservation-charges.md#review-suse-vm-usage-before-you-buy).
## Next steps
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
If you're a billing administrator, use following steps to view and manage all re
- If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one. 1. In the left menu, select **Products + services** > **Reservations**. 1. The complete list of reservations for your EA enrollment or billing profile is shown.
-1. Billing administrators can take ownership of a reservation by selecting one or multiple reservations, selecting **Grant access** and selecting **Grant access** in the window that appears.
+1. Billing administrators can take ownership of a reservation by selecting one or multiple reservations, selecting **Grant access**, and selecting **Grant access** in the window that appears. For a Microsoft Customer Agreement, the user should be in the same Azure Active Directory (Azure AD) tenant (directory) as the reservation.
### Add billing administrators
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Learn more about [CSPM](concept-cloud-security-posture-management.md).
Agentless Container Posture provides the following capabilities:
+- [Agentless discovery and visibility](#agentless-discovery-and-visibility-within-kubernetes-components) within Kubernetes components.
+- [Agentless container registry vulnerability assessment](#agentless-container-registry-vulnerability-assessment), using the image scanning results of your Azure Container Registry (ACR) with cloud security explorer.
- Using Kubernetes [attack path analysis](concept-attack-path.md) to visualize risks and threats to Kubernetes environments.--- Using cloud security explorer for risk hunting by querying various risk scenarios.--- Viewing security insights, such as internet exposure, and other pre-defined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).--- Agentless discovery and visibility within Kubernetes components.--- Agentless container registry vulnerability assessment, using the image scanning results of your Azure Container Registry (ACR) with cloud security explorer.- - Using [cloud security explorer](how-to-manage-cloud-security-explorer.md) for risk hunting by querying various risk scenarios.
+- Viewing security insights, such as internet exposure, and other predefined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
-- Viewing security insights, such as internet exposure, and other predefined security scenarios. For more information, search for Kubernetes in the [list of Insights](attack-path-reference.md#cloud-security-graph-components-list).--- [Agentless discovery and visibility within Kubernetes components](#agentless-discovery-and-visibility-within-kubernetes-components)--- [Container registry vulnerability assessment](#container-registry-vulnerability-assessment) ## Agentless discovery and visibility within Kubernetes components
By enabling the Agentless discovery for Kubernetes extension, the following proc
- **Bind**: Upon discovery of an AKS cluster, MDC performs an AKS bind operation between the created identity and the Kubernetes role ΓÇ£Microsoft.Security/pricings/microsoft-defender-operatorΓÇ¥. The role is visible via API and gives MDC data plane read permission inside the cluster.
-## Container registry vulnerability assessment
+## Agentless Container registry vulnerability assessment
- Container registry vulnerability assessment scans images in your Azure Container Registry (ACR) to provide recommendations for improving your posture by remediating vulnerabilities.
defender-for-cloud Create Custom Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/create-custom-recommendations.md
Last updated 03/26/2023 + # Create custom recommendations and security standards Recommendations give you suggestions on how to better secure your resources.
There are three elements involved when creating and managing custom recommendati
|Aspect|Details| |-|:-| |Required/preferred environmental requirements| This preview includes only AWS and GCP recommendations. <br> This feature will be part of the Defender CSPM plan in the future. |
-| Required roles & permissions | Subscription Owner / Contributor |
+| Required roles & permissions | Security Admin |
|Clouds:| :::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | ## Create a custom recommendation
You can use the following links to learn more about Kusto queries:
- [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/) - [Must Learn KQL Part 1: Tools and Resources](https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/) - [What are security policies, initiatives, and recommendations?](security-policy-concept.md)++
defender-for-cloud Plan Defender For Servers Data Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md
description: Review data residency and workspace design for Microsoft Defender f
Previously updated : 11/06/2022 Last updated : 05/30/2023 # Plan data residency and workspaces for Defender for Servers
You can store your server information in the default workspace or you can use a
### If I enable Defender for Clouds Servers plan on the subscription level, do I need to enable it on the workspace level?
-When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspaces automatically. Connect to the default workspace by selecting **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
+When you enable the Servers plan on the subscription level, Defender for Cloud enables the Servers plan on your default workspaces automatically. Connect to the default workspace by selecting **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
:::image type="content" source="media/plan-defender-for-servers-data-workspace/connect-workspace.png" alt-text="Screenshot showing how to auto-provision Defender for Cloud to manage your workspaces.":::
-However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
+However, if you're using a custom workspace in place of the default workspace, you need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
-If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (MDVM/Qualys), and Just-in-Time VM access.
+If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation appears on the Recommendations page. This recommendation gives you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (MDVM/Qualys), and Just-in-Time VM access.
Enabling the Servers plan on both the subscription and its connected workspaces won't incur a double charge. The system will identify each unique VM.
Yes. If you configure your Log Analytics agent to send data to two or more diffe
### Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?
-You'll get 500-MB free data ingestion per day, for every VM connected to the workspace. Specifically for the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) that are directly collected by Defender for Cloud.
+You receive a daily allowance of 500 MB of free data ingestion for each virtual machine (VM) connected to the workspace. This allocation specifically applies to the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) collected directly by Defender for Cloud.
-This data is a daily rate averaged across all nodes. Your total daily free limit is equal to **[number of machines] x 500 MB**. So even if some machines send 100 MB and others send 800 MB, if the total doesn't exceed your total daily free limit, you won't be charged extra.
+The data allowance is a daily rate calculated across all connected machines. Your total daily free limit is equal to the **[number of machines] x 500 MB**. So even if on a given day some machines send 100 MB and others send 800 MB, if the total data from all machines doesn't exceed your daily free limit, you won't be charged extra.
### What data types are included in the 500-MB data daily allowance?+ Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security): - [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
You can view your data usage in two different ways, the Azure portal, or by runn
**To view your usage in the Azure portal**:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to **Log Analytics workspaces**.
You can also view estimated costs under different pricing tiers by selecting :::
**To view your usage by using a script**:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to **Log Analytics workspaces** > **Logs**.
You may want to manage your costs and limit the amount of data collected for a solution.
> Solution targeting has been deprecated because the Log Analytics agent is being replaced with the Azure Monitor agent and solutions in Azure Monitor are being replaced with insights. You can continue to use solution targeting if you already have it configured, but it is not available in new regions.
> The feature will not be supported after August 31, 2024.
> Regions that support solution targeting until the deprecation date are:
->
+>
> | Region code | Region name |
> | :-- | :-- |
> | CCAN | canadacentral |
You may want to manage your costs and limit the amount of data collected for a solution.
>
> | Air-gapped clouds | Region code | Region name |
> | :-- | :-- | :-- |
-> | UsNat | EXE | usnateast |
-> | UsNat | EXW | usnatwest |
-> | UsGov | FF | usgovvirginia |
-> | China | MC | ChinaEast2 |
-> | UsGov | PHX | usgovarizona |
-> | UsSec | RXE | usseceast |
-> | UsSec | RXW | ussecwest |
+> | UsNat | EXE | usnateast |
+> | UsNat | EXW | usnatwest |
+> | UsGov | FF | usgovvirginia |
+> | China | MC | ChinaEast2 |
+> | UsGov | PHX | usgovarizona |
+> | UsSec | RXE | usseceast |
+> | UsSec | RXW | ussecwest |
## Next steps
defender-for-cloud Tutorial Protect Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-protect-resources.md
Title: Access & application controls tutorial - Microsoft Defender for Cloud
+ Title: Protect your VMs with Microsoft Defender for Servers
description: This tutorial shows you how to configure a just-in-time VM access policy and an application control policy. Last updated 01/08/2023
-# Tutorial: Protect your resources with Microsoft Defender for Cloud
+# Tutorial: Protect your VMs with Microsoft Defender for Servers
Defender for Cloud limits your exposure to threats by using access and application controls to block malicious activity. Just-in-time (JIT) virtual machine (VM) access reduces your exposure to attacks by enabling you to deny persistent access to VMs. Instead, you provide controlled and audited access to VMs only when needed. Adaptive application controls help harden VMs against malware by controlling which applications can run on your VMs. Defender for Cloud uses machine learning to analyze the processes running in the VM and helps you apply allowlist rules using this intelligence.

In this tutorial you'll learn how to:

> [!div class="checklist"]
+>
> * Configure a just-in-time VM access policy
> * Configure an application control policy

## Prerequisites

To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. A free trial is available. To upgrade, see [Enable enhanced protections](enable-enhanced-security.md).

## Manage VM access

JIT VM access can be used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.

Management ports don't need to be open always. They only need to be open while you're connected to the VM, for example to perform management or maintenance tasks. When just-in-time is enabled, Defender for Cloud uses Network Security Group (NSG) rules, which restrict access to management ports so they can't be targeted by attackers.
Management ports don't need to be open always. They only need to be open while you're connected to the VM.
Follow the guidance in [Secure your management ports with just-in-time access](just-in-time-access-usage.md).

## Harden VMs against malware

Adaptive application controls help you define a set of applications that are allowed to run on configured resource groups, which among other benefits helps harden your VMs against malware. Defender for Cloud uses machine learning to analyze the processes running in the VM and helps you apply allowlist rules using this intelligence.

Follow the guidance in [Use adaptive application controls to reduce your machines' attack surfaces](adaptive-application-controls.md).

## Next steps

In this tutorial, you learned how to limit your exposure to threats by:

> [!div class="checklist"]
+>
> * Configuring a just-in-time VM access policy to provide controlled and audited access to VMs only when needed
> * Configuring an adaptive application controls policy to control which applications can run on your VMs
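The just-in-time idea covered in this tutorial can be pictured with a minimal sketch — a conceptual toy model only, not how Defender for Cloud or NSG rules are implemented (the class and method names are hypothetical):

```python
from datetime import datetime, timedelta

class JitPort:
    """Toy model of a just-in-time management port: closed by default,
    opened only for an approved, time-boxed access window."""

    def __init__(self, port):
        self.port = port
        self.open_until = None  # no active window means the port is closed

    def request_access(self, now, hours=3):
        # An approved JIT request opens the port for a limited window.
        self.open_until = now + timedelta(hours=hours)

    def allows(self, now):
        # Traffic is allowed only while a window is active.
        return self.open_until is not None and now <= self.open_until

rdp = JitPort(3389)
t0 = datetime(2023, 5, 30, 9, 0)
assert not rdp.allows(t0)                        # denied: no active window
rdp.request_access(t0, hours=3)
assert rdp.allows(t0 + timedelta(hours=1))       # allowed inside the window
assert not rdp.allows(t0 + timedelta(hours=4))   # window expired, closed again
```

The key property the sketch demonstrates is the default-deny posture: access is the exception, granted per request and automatically revoked.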
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-portfolio-overview-os-support.md
Most of the Linux Operating Systems (OS) are covered by both agents. The agents
The Defender for IoT micro agent also supports Yocto as an open source.
-For additional information on supported operating systems, or to request access to the source code so you can incorporate it as a part of the device's firmware, contact your account manager, or send an email to <defender_micro_agent@microsoft.com>.
+For additional information on supported operating systems, or to request access to the source code so you can incorporate it as a part of the device's firmware, contact your account manager.
For a more granular view of the micro agent-operating system dependencies, see [Linux dependencies](concept-micro-agent-linux-dependencies.md#linux-dependencies).
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This article describes an OT sensor deployment on a virtual appliance using Microsoft Hyper-V.
|**Status** | Supported |

> [!IMPORTANT]
-> Versions 22.2.x of the sensor are incompatible with Hyper-V. Until the issue has been resolved, we recommend using either version 22.3.x or 22.1.7.
+> Versions 22.2.x of the sensor are incompatible with Hyper-V. Until the issue is resolved, we recommend using version 22.3.x or later.
## Prerequisites
defender-for-iot Install Software Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md
Before installing Microsoft Defender for IoT, make sure that you have:
This step is performed by your deployment teams.
+> [!NOTE]
+> There is no need to pre-install an operating system on the VM; the sensor installation includes the operating system image.
## Download software files from the Azure portal

Download the OT sensor software from Defender for IoT in the Azure portal.
This procedure describes how to install OT monitoring software on an OT network
**To install your software**:
-> [!NOTE]
-> There is no need to pre-install an operating system on the VM, the sensor installation includes the operating system image.
1. Mount the ISO file onto your hardware appliance or VM using one of the following options:

    - **Physical media** – burn the ISO file to your external storage, and then boot from the media.
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Title: Configure a lab to use a remote desktop gateway
+ Title: Secure labs with remote desktop gateways
description: Learn how to configure a remote desktop gateway in Azure DevTest Labs for secure access to lab VMs without exposing RDP ports. Previously updated : 05/19/2023 Last updated : 05/30/2023 # Configure and use a remote desktop gateway in Azure DevTest Labs
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 05/22/2023 Last updated : 05/29/2023
The following table shows connectivity locations and the service providers for e
| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS, National Telecom UIH | | **Berlin** | [NTT GDC](https://services.global.ntt/en-us/newsroom/ntt-ltd-announces-access-to-microsoft-azure-expressroute-at-ntts-berlin-1-data-center) | 1 | Germany North | Supported | Colt, Equinix, NTT Global DataCenters EMEA|
-| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect, Equinix |
-| **Busan** | [LG CNS](https://www.lgcns.com/en/business/cloud/datacenter/) | 2 | Korea South | n/a | LG CNS |
-| **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | Supported | Colt, Equinix, NTT Global DataCenters EMEA|
-| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | Cirion Technologies, Equinix |
+| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect, Equinix |
| **Busan** | [LG CNS](https://www.lgcns.com/business/cloud/datacenter/) | 2 | Korea South | n/a | LG CNS | | **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | Supported | Ascenty | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC |
The following table shows connectivity locations and the service providers for e
| **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
-| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion |
-| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | Supported | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems |
| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion, KPN, Orange |
-| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | Supported | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems, Verizon, Zayo |
+| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | Supported | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems, Verizon, Zayo |
| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX, Deutsche Telekom AG, Equinix, InterCloud | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt, Equinix, InterCloud, Megaport, Swisscom | | **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |
If you're remote and don't have fiber connectivity or want to explore other conn
| **London** | BICS, Equinix, euNetworks| Bezeq International Ltd., CoreAzure, Epsilon Telecommunications Limited, Exponential E, HSO, NexGen Networks, Proximus, Tamares Telecom, Zain | | **Los Angeles** | Equinix |Crown Castle, Spectrum Enterprise, Transtelco | | **Madrid** | Level3 | Zertia |
-| **Montreal** | Cologix| Airgate Technologies, Inc. Aptum Technologies, Rogers, Zirro |
+| **Montreal** | Cologix| Airgate Technologies, Inc. Aptum Technologies, Oncore Cloud Services Inc., Rogers, Zirro |
| **Mumbai** | Tata Communications | Tata Teleservices | | **New York** |Equinix, Megaport | Altice Business, Crown Castle, Spectrum Enterprise, Webair | | **Paris** | Equinix | Proximus |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 05/22/2023 Last updated : 05/29/2023
If you're remote and don't have fiber connectivity, or you want to explore other
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Teraco | Cape Town, Johannesburg | | **[NexGen Networks](https://www.nexgen-net.com/nexgen-networks-direct-connect-microsoft-azure-expressroute.html)** | Interxion | London | | **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam, Frankfurt |
-| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Toronto |
+| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Montreal, Toronto |
| **[POST Telecom Luxembourg](https://business.post.lu/grandes-entreprises/telecom-ict/telecom)**| Equinix | Amsterdam | | **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**| Equinix | Amsterdam, Dublin, London, Paris | | **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt |
firewall Basic Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/basic-features.md
Azure Firewall Basic includes the following features:
- Multiple public IP addresses
- Azure Monitor logging
- Certifications
+To compare Azure Firewall features for all Firewall SKUs, see [Choose the right Azure Firewall SKU to meet your needs](choose-firewall-sku.md).
## Built-in high availability
firewall Choose Firewall Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/choose-firewall-sku.md
Title: Choosing the right Azure Firewall SKU to meet your needs
+ Title: Choose the right Azure Firewall SKU to meet your needs
description: Learn about the different Azure Firewall SKUs and how to choose the right one for your needs.
Last updated 03/15/2023
-# Choosing the right Azure Firewall SKU to meet your needs
+# Choose the right Azure Firewall SKU to meet your needs
Azure Firewall now supports three different SKUs to cater to a wide range of customer use cases and preferences.

- Azure Firewall Premium is recommended to secure highly sensitive applications (such as payment processing). It supports advanced threat protection capabilities like malware and TLS inspection.
-- Azure Firewall Standard is recommended for customers looking for Layer 3–Layer 7 firewall and needs auto-scaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.
+- Azure Firewall Standard is recommended for customers who need a Layer 3–Layer 7 firewall with autoscaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.
- Azure Firewall Basic is recommended for SMB customers with throughput needs of 250 Mbps.
-
-Let's take a closer look at the features across the three Azure Firewall SKUs.
+## Feature comparison
+
+Take a closer look at the features across the three Azure Firewall SKUs:
++
+## Next steps
+
+- [Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md)
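The SKU guidance in this article could be condensed into a rough decision helper — a sketch of the documented recommendations only, not an official sizing tool (the function name and parameters are hypothetical):

```python
def recommend_sku(throughput_mbps, needs_tls_inspection=False):
    """Map rough requirements onto the documented Azure Firewall SKU guidance."""
    if needs_tls_inspection:
        # Premium adds advanced threat protection such as TLS inspection.
        return "Premium"
    if throughput_mbps <= 250:
        # Basic targets SMB-scale workloads up to ~250 Mbps.
        return "Basic"
    # Standard covers L3-L7 filtering with autoscaling up to 30 Gbps.
    return "Standard"

print(recommend_sku(100))                               # Basic
print(recommend_sku(5000))                              # Standard
print(recommend_sku(1000, needs_tls_inspection=True))   # Premium
```

In practice the choice also depends on the feature comparison above (threat intelligence, DNS proxy, web categories), so treat throughput as only the first filter.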
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Azure Firewall includes the following features:
- Web categories
- Certifications
+To compare Azure Firewall features for all Firewall SKUs, see [Choose the right Azure Firewall SKU to meet your needs](choose-firewall-sku.md).
+ ## Built-in high availability High availability is built in, so no extra load balancers are required and there's nothing you need to configure.
firewall Firewall Multi Hub Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-multi-hub-spoke.md
Previously updated : 05/22/2023 Last updated : 05/30/2023
Here's an example route table for the spoke subnets connected to Hub-01:
## Next steps

-- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
+- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Azure Firewall Basic is similar to Firewall Standard, but has the following main
- Fixed scale unit to run the service on two virtual machine backend instances.
- Recommended for environments with an estimated throughput of 250 Mbps.
-To deploy a Basic Firewall, see [Deploy and configure Azure Firewall Basic and policy using the Azure portal](deploy-firewall-basic-portal-policy.md).
+To learn more about Azure Firewall Basic, see [Azure Firewall Basic features](basic-features.md).
+
+## Feature comparison
+
+To compare all Firewall SKU features, see [Choose the right Azure Firewall SKU to meet your needs](choose-firewall-sku.md).
## Azure Firewall Manager
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Azure Firewall Premium includes the following features:
- **URL filtering** - extends Azure Firewall's FQDN filtering capability to consider an entire URL along with any additional path. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
- **Web categories** - administrators can allow or deny user access to website categories such as gambling websites, social media websites, and others.
+To compare Azure Firewall features for all Firewall SKUs, see [Choose the right Azure Firewall SKU to meet your needs](choose-firewall-sku.md).
## TLS inspection

The TLS (Transport Layer Security) protocol primarily provides cryptography for privacy, integrity, and authenticity using certificates between two or more communicating applications. It runs in the application layer and is widely used to encrypt the HTTP protocol.
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Azure Front Door private link is available in the following regions:
Origin support for direct private endpoint connectivity is currently limited to:

* Storage (Azure Blobs)
* App Services
-* Internal load balancers
+* Internal load balancers, or any services that expose internal load balancers, such as Azure Kubernetes Service, Azure Container Apps, or Azure Red Hat OpenShift
* Storage Static Website

The Azure Front Door Private Link feature is region agnostic, but for the best latency, you should always pick the Azure region closest to your origin when you enable an Azure Front Door Private Link endpoint.

## Next steps
The Azure Front Door Private Link feature is region agnostic but for the best la
* Learn how to [connect Azure Front Door Premium to a storage account origin with Private Link](standard-premium/how-to-enable-private-link-storage-account.md). * Learn how to [connect Azure Front Door Premium to an internal load balancer origin with Private Link](standard-premium/how-to-enable-private-link-internal-load-balancer.md). * Learn how to [connect Azure Front Door Premium to a storage static website origin with Private Link](how-to-enable-private-link-storage-static-website.md).+
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-jobs.md
Consider using jobs when you need to schedule and track progress any of the foll
## Job lifecycle
-Jobs are initiated by the solution back end and maintained by IoT Hub. You can initiate a job through a service-facing URI (`PUT https://<iot hub>/jobs/v2/<jobID>?api-version=2021-04-12`) and query for progress on an executing job through a service-facing URI (`GET https://<iot hub>/jobs/v2/<jobID?api-version=2021-04-12`). To refresh the status of running jobs once a job is initiated, run a job query.
+Jobs are initiated by the solution back end and maintained by IoT Hub. You can initiate a job through a service-facing URI (`PUT https://<iot hub>/jobs/v2/<jobID>?api-version=2021-04-12`) and query for progress on an executing job through a service-facing URI (`GET https://<iot hub>/jobs/v2/<jobID>?api-version=2021-04-12`). To refresh the status of running jobs once a job is initiated, run a job query. There is no explicit purge of job history, but job history records have a time to live (TTL) of 30 days.
> [!NOTE]
> When you initiate a job, property names and values can only contain US-ASCII printable alphanumeric characters, except any in the following set: `$ ( ) < > @ , ; : \ " / [ ] ? = { } SP HT`
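The service-facing URIs described above follow one simple pattern; a small helper makes that shape explicit (the helper name and hub hostname are hypothetical, and authentication headers are omitted):

```python
def job_uri(hub_hostname, job_id, api_version="2021-04-12"):
    """Build the service-facing URI used to create (PUT) or
    query (GET) a scheduled job on an IoT hub."""
    return f"https://{hub_hostname}/jobs/v2/{job_id}?api-version={api_version}"

# The same URI is used with PUT to initiate the job and GET to poll it.
uri = job_uri("myhub.azure-devices.net", "job-42")
print(uri)
# https://myhub.azure-devices.net/jobs/v2/job-42?api-version=2021-04-12
```

A real client would send this URI with a SAS-token `Authorization` header and a JSON job payload, but the addressing scheme itself is just job ID plus API version.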
Other reference topics in the IoT Hub developer guide include:
To try out some of the concepts described in this article, see the following IoT Hub tutorial: * [Schedule and broadcast jobs](schedule-jobs-node.md)++
load-balancer Ipv6 Configure Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/ipv6-configure-template-json.md
Title: Deploy an IPv6 dual stack application with Basic Load Balancer in Azure virtual network - Resource Manager template description: This article shows how to deploy an IPv6 dual stack application in Azure virtual network using Azure Resource Manager VM templates.-+ documentationcenter: na -+
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - CLI
description: Learn how to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer using Azure CLI. -+ Last updated 04/10/2023
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - PowerShell
description: This article shows how deploy an IPv6 dual stack application in Azure virtual network using Azure PowerShell. -+ Last updated 04/10/2023
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile,
Previously updated : 06/25/2018 Last updated : 05/30/2023
This example creates the following items:
* A NAT rule to translate all incoming traffic on port 3391 to port 3389 for remote desktop protocol (RDP).\*
* A load balancer rule to balance all incoming traffic on port 80 to port 80 on the addresses in the back-end pool.
-\* NAT rules are associated with a specific virtual-machine instance behind the load balancer. The network traffic that arrives on port 3389 is sent to the specific virtual machine and port that's associated with the NAT rule. You must specify a protocol (UDP or TCP) for a NAT rule. You cannot assign both protocols to the same port.
+\* NAT rules are associated with a specific virtual-machine instance behind the load balancer. The network traffic that arrives on port 3389 is sent to the specific virtual machine and port that's associated with the NAT rule. You must specify a protocol (UDP or TCP) for a NAT rule. You can't assign both protocols to the same port.
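The NAT-rule constraints in the footnote above — each rule names exactly one protocol, and a frontend port can't carry both — can be sketched as a toy validator (a conceptual illustration with hypothetical names, not the Azure resource model):

```python
def add_nat_rule(rules, frontend_port, backend_port, protocol):
    """Toy validator for the constraints described above: a NAT rule must
    specify Tcp or Udp, and one frontend port can't be assigned twice."""
    if protocol not in ("Tcp", "Udp"):
        raise ValueError("a NAT rule must specify Tcp or Udp")
    if frontend_port in rules:
        raise ValueError(f"frontend port {frontend_port} already has a rule")
    rules[frontend_port] = (backend_port, protocol)

rules = {}
# The RDP translation from the example: frontend 3391 -> backend 3389 over TCP.
add_nat_rule(rules, 3391, 3389, "Tcp")
print(rules[3391])  # (3389, 'Tcp')
```

Trying `add_nat_rule(rules, 3391, 3389, "Udp")` afterward raises `ValueError`, mirroring the rule that both protocols can't share the same port.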
1. Set up the PowerShell variables:
load-balancer Load Balancer Ipv6 Internet Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-ps.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile,
Previously updated : 09/25/2017 Last updated : 05/30/2023
The following diagram illustrates the load balancing solution being deployed in
![Load balancer scenario](./media/load-balancer-ipv6-internet-ps/lb-ipv6-scenario.png)
-In this scenario you will create the following Azure resources:
+In this scenario you'll create the following Azure resources:
* an Internet-facing Load Balancer with an IPv4 and an IPv6 Public IP address
* two load balancing rules to map the public VIPs to the private endpoints
This example creates the following items:
$RDPprobe = New-AzLoadBalancerProbeConfig -Name 'RDPprobe' -Protocol Tcp -Port 3389 -IntervalInSeconds 15 -ProbeCount 2 ```
- For this example, we are going to use the TCP probes.
+ For this example, we're going to use the TCP probes.
3. Create a load balancer rule.
load-balancer Load Balancer Multiple Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-cli.md
Previously updated : 06/25/2018 Last updated : 05/30/2023
machine-learning Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/regression.md
AutoML creates a number of pipelines in parallel that try different algorithms a
Additional configurations|Description
Primary metric| Main metric used for scoring your model. [Learn more about model metrics](..//how-to-configure-auto-train.md#primary-metric).
- Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](../how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+ Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](../v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](../how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.
Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](../how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
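The exit-criterion settings in the table can be pictured with a small sketch — an illustration of the "stop when any criterion is met" logic only, not AutoML's internals (function and parameter names are hypothetical):

```python
def should_stop(elapsed_hours, best_score, iterations_done,
                max_hours=1.0, score_threshold=None, max_iterations=25):
    """Stop the training job when any configured exit criterion is met."""
    if elapsed_hours >= max_hours:
        return True                      # training job time exhausted
    if score_threshold is not None and best_score >= score_threshold:
        return True                      # metric score threshold reached
    return iterations_done >= max_iterations  # iteration budget spent

print(should_stop(0.5, 0.80, 10))                        # False: nothing met
print(should_stop(0.5, 0.96, 10, score_threshold=0.95))  # True: threshold hit
print(should_stop(1.2, 0.80, 10))                        # True: time limit hit
```

Because the criteria are OR-ed together, setting a metric score threshold lets a run finish early even when time and iteration budgets remain.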
machine-learning Component Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/component-reference.md
Each component represents a set of code that can run independently and perform a
For help with choosing algorithms, see * [How to select algorithms](../how-to-select-algorithms.md)
-* [Azure Machine Learning Algorithm Cheat Sheet](../algorithm-cheat-sheet.md)
+* [Azure Machine Learning Algorithm Cheat Sheet](../v1/algorithm-cheat-sheet.md)
> [!TIP] > In any pipeline in the designer, you can get information about a specific component. Select the **Learn more** link in the component card when hovering on the component in the component list, or in the right pane of the component.
machine-learning Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-model.md
In Azure Machine Learning, creating and using a machine learning model is typica
Model interpretability makes it possible to comprehend the ML model and to present the underlying basis for its decision-making in a way that is understandable to humans.
-Currently **Train Model** component supports [using interpretability package to explain ML models](../how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs). Following built-in algorithms are supported:
+Currently, the **Train Model** component supports [using the interpretability package to explain ML models](../v1/how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs). The following built-in algorithms are supported:
- Linear Regression - Neural Network Regression
After the pipeline run completes, you can visit the **Explanations** tab in the right pane.
![Screenshot showing model explanation charts](./media/module/train-model-explanations-tab.gif)
-To learn more about using model explanations in Azure Machine Learning, refer to the how-to article about [Interpret ML models](../how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs).
+To learn more about using model explanations in Azure Machine Learning, refer to the how-to article about [Interpret ML models](../v1/how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs).
## Results
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
This article explains deep learning vs. machine learning and how they fit into the broader category of artificial intelligence. Learn about deep learning solutions you can build on Azure Machine Learning, such as fraud detection, voice and facial recognition, sentiment analysis, and time series forecasting.
-For guidance on choosing algorithms for your solutions, see the [Machine Learning Algorithm Cheat Sheet](./algorithm-cheat-sheet.md?WT.mc_id=docs-article-lazzeri).
+For guidance on choosing algorithms for your solutions, see the [Machine Learning Algorithm Cheat Sheet](./v1/algorithm-cheat-sheet.md?WT.mc_id=docs-article-lazzeri).
## Deep learning, machine learning, and AI
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Designer only shows the assets that you created and named in your workspace. You
Designer is a tool that lets you create pipelines with your assets in a visual way. When you use designer, you'll encounter two concepts related to pipelines: pipeline draft and pipeline jobs.
-![pipeline-draft-and-pipeline-job-list](./media/concept-designer/pipeline-draft-and-job.png)
- :::image type="content" source="./media/concept-designer/pipeline-draft-and-job.png" alt-text="Screenshot of pipeline draft and pipeline job list." lightbox= "./media/concept-designer/pipeline-draft-and-job.png"::: ### Pipeline draft
You can edit your pipeline and then submit again. After submitting, you can see
## Next step -- [Create pipeline with components (UI)](./how-to-create-component-pipelines-ui.md)
+- [Create pipeline with components (UI)](./how-to-create-component-pipelines-ui.md)
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
forecasting_job.set_training(
To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio UI how-to](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment). > [!NOTE]
-> * When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled.
+> * When you enable DNN for experiments created with the SDK, [best model explanations](./v1/how-to-machine-learning-interpretability-automl.md) are disabled.
> * DNN support for forecasting in Automated Machine Learning is not supported for runs initiated in Databricks. > * GPU compute types are recommended when DNN training is enabled
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
## Next steps * Learn more about [How to deploy an AutoML model to an online endpoint](how-to-deploy-automl-endpoint.md).
-* Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md).
+* Learn about [Interpretability: model explanations in automated machine learning (preview)](./v1/how-to-machine-learning-interpretability-automl.md).
* Learn about [how AutoML builds forecasting models](./concept-automl-forecasting-methods.md). * Learn how to [configure AutoML for various forecasting scenarios](./how-to-automl-forecasting-faq.md#what-modeling-configuration-should-i-use).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
The following table shows the accepted settings for featurization.
|Featurization Configuration | Description | | - | - |
-|`"mode": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
+|`"mode": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](./v1/how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
|`"mode": 'off'`| Indicates featurization step shouldn't be done automatically.| |`"mode":`&nbsp;`'custom'`| Indicates customized featurization step should be used.|
Automated ML offers options for you to monitor and evaluate your training result
* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
-* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
+* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](./v1/how-to-configure-auto-features.md#featurization-transparency).
From Azure Machine Learning UI at the model's page you can also view the hyperparameters used when training a particular model and also view and customize the internal model's training code used.
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Logs can help you diagnose errors and warnings, or track performance metrics lik
> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md). > [!TIP]
-> For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](how-to-track-designer-experiments.md).
+> For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](./v1/how-to-track-designer-experiments.md).
## Prerequisites
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
This article describes methods you can use for model interpretability in Azure M
## Why model interpretability is important to model debugging
-When you're using machine learning models in ways that affect peopleΓÇÖs lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as:
+When you're using machine learning models in ways that affect people's lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as:
* Model debugging: Why did my model make this mistake? How can I improve my model?
-* Human-AI collaboration: How can I understand and trust the modelΓÇÖs decisions?
+* Human-AI collaboration: How can I understand and trust the model's decisions?
* Regulatory compliance: Does my model satisfy legal requirements?
-The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the ΓÇ£diagnoseΓÇ¥ stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a machine learning model. It provides multiple views into a modelΓÇÖs behavior:
+The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the "diagnose" stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a machine learning model. It provides multiple views into a model's behavior:
* Global explanations: For example, what features affect the overall behavior of a loan allocation model?
-* Local explanations: For example, why was a customerΓÇÖs loan application approved or rejected?
+* Local explanations: For example, why was a customer's loan application approved or rejected?
You can also observe model explanations for a selected cohort as a subgroup of data points. This approach is valuable when, for example, you're assessing fairness in model predictions for individuals in a particular demographic group. The **Local explanation** tab of this component also represents a full data visualization, which is great for general eyeballing of the data and looking at differences between correct and incorrect predictions of each cohort.
The capabilities of this component are founded by the [InterpretML](https://inte
Use interpretability when you need to:
-* Determine how trustworthy your AI systemΓÇÖs predictions are by understanding what features are most important for the predictions.
+* Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
* Approach the debugging of your model by understanding it first and identifying whether the model is using healthy features or merely false correlations. * Uncover potential sources of unfairness by understanding whether the model is basing predictions on sensitive features or on features that are highly correlated with them.
-* Build user trust in your modelΓÇÖs decisions by generating local explanations to illustrate their outcomes.
+* Build user trust in your model's decisions by generating local explanations to illustrate their outcomes.
* Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans. ## How to interpret your model
Interpret-Community serves as the host for the following supported explainers, a
| Guided gradCAM | Guided GradCAM is a popular explanation method for deep neural networks that provides insights into the learned representations of the model. It generates a visualization of the input features that contribute most to a particular output class, by combining the gradient-based approach of guided backpropagation with the localization approach of GradCAM. Specifically, it computes the gradients of the output class with respect to the feature maps of the last convolutional layer in the network, and then weights each feature map according to the importance of its activation for that class. This produces a high-resolution heatmap that highlights the most discriminative regions of the input image for the given output class. Guided GradCAM can be used to explain a wide range of deep learning models, including CNNs, RNNs, and transformers. Additionally, by incorporating guided backpropagation, it ensures that the visualization is meaningful and interpretable, avoiding spurious activations and negative contributions. | AutoML | Image Multi-class Classification, Image Multi-label Classification | | Integrated Gradients | Integrated Gradients is a popular explanation method for deep neural networks that provides insights into the contribution of each input feature to a given prediction. It computes the integral of the gradient of the output class with respect to the input image, along a straight path between a baseline image and the actual input image. This path is typically chosen to be a linear interpolation between the two images, with the baseline being a neutral image that has no salient features. By integrating the gradient along this path, Integrated Gradients provides a measure of how each input feature contributes to the prediction, allowing for an attribution map to be generated. This map highlights the most influential input features, and can be used to gain insights into the model's decision-making process. 
Integrated Gradients can be used to explain a wide range of deep learning models, including CNNs, RNNs, and transformers. Additionally, it's a theoretically grounded technique that satisfies a set of desirable properties, such as sensitivity, implementation invariance, and completeness. | AutoML | Image Multi-class Classification, Image Multi-label Classification | | XRAI | [XRAI](https://arxiv.org/pdf/1906.02825.pdf) is a novel region-based saliency method based on Integrated Gradients (IG). It over-segments the image and iteratively tests the importance of each region, coalescing smaller regions into larger segments based on attribution scores. This strategy yields high quality, tightly bounded saliency regions that outperform existing saliency techniques. XRAI can be used with any DNN-based model as long as there's a way to cluster the input features into segments through some similarity metric. | AutoML | Image Multi-class Classification, Image Multi-label Classification |
-| D-RISE | D-RISE is a model agnostic method for creating visual explanations for the predictions of object detection models. By accounting for both the localization and categorization aspects of object detection, D-RISE can produce saliency maps that highlight parts of an image that most contribute to the prediction of the detector. Unlike gradient-based methods, D-RISE is more general and doesn't need access to the inner workings of the object detector; it only requires access to the inputs and outputs of the model. The method can be applied to one-stage detectors (for example, YOLOv3), two-stage detectors (for example, Faster-RCNN), and Vision Transformers (for example, DETR, OWL-ViT). <br> D-Rise provides the saliency map by creating random masks of the input image and will send it to the object detector with the random masks of the input image. By assessing the change of the object detectorΓÇÖs score, it aggregates all the detections with each mask and produce a final saliency map. | Model Agnostic | Object Detection |
+| D-RISE | D-RISE is a model-agnostic method for creating visual explanations for the predictions of object detection models. By accounting for both the localization and categorization aspects of object detection, D-RISE can produce saliency maps that highlight parts of an image that most contribute to the prediction of the detector. Unlike gradient-based methods, D-RISE is more general and doesn't need access to the inner workings of the object detector; it only requires access to the inputs and outputs of the model. The method can be applied to one-stage detectors (for example, YOLOv3), two-stage detectors (for example, Faster-RCNN), and Vision Transformers (for example, DETR, OWL-ViT). <br> D-RISE produces the saliency map by creating random masks of the input image and sending them to the object detector. By assessing the change in the object detector's score, it aggregates the detections across all masks and produces a final saliency map. | Model Agnostic | Object Detection |
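The Integrated Gradients method described in the table above can be sketched numerically on a toy model. This is an illustrative example only, not an Azure Machine Learning or Interpret-Community API: it averages gradients along the straight-line path from a baseline to the input, scales by the input difference, and verifies the completeness property (attributions sum to `f(x) - f(baseline)`).

```python
import numpy as np

# Toy differentiable "model" with a closed-form gradient.
def f(x, w):
    return np.sum(w * x**2)

def grad_f(x, w):
    return 2 * w * x

def integrated_gradients(x, baseline, w, steps=200):
    # Average the gradient along the straight path from baseline to x,
    # then scale by the input difference (Riemann approximation of the
    # path integral in the Integrated Gradients definition).
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([grad_f(baseline + a * (x - baseline), w) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0])
baseline = np.zeros(2)          # neutral baseline with no salient features
w = np.array([3.0, 0.5])

attr = integrated_gradients(x, baseline, w)
# Completeness check: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x, w) - f(baseline, w))
```

For this quadratic model the attributions come out to `w * x**2` per feature, so the per-feature values directly match each feature's contribution to the output.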
### Supported in Python SDK v1
You can run the explanation remotely on Azure Machine Learning Compute and log t
* Learn how to generate the Responsible AI dashboard via [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md). * Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard. * Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
-* Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
+* Learn how to enable [interpretability for automated machine learning models](./v1/how-to-machine-learning-interpretability-automl.md).
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
az group delete -g <resource-group-name>
For more information, see the [az ml workspace delete](/cli/azure/ml/workspace#az-ml-workspace-delete) documentation.
-If you accidentally deleted your workspace, are still able to retrieve your notebooks. For more information, see the [workspace deletion](./how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
+If you accidentally deleted your workspace, you're still able to retrieve your notebooks. For more information, see the [workspace deletion](./v1/how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
## Troubleshooting
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
When you no longer need a workspace, delete it.
[!INCLUDE [machine-learning-delete-workspace](../../includes/machine-learning-delete-workspace.md)]
-If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](./how-to-high-availability-machine-learning.md#workspace-deletion).
+If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](./v1/how-to-high-availability-machine-learning.md#workspace-deletion).
# [Python SDK](#tab/python)
machine-learning How To Share Data Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-data-across-workspaces-with-registries.md
az ml data create -f local-folder.yml
For more information on creating data assets in a workspace, see [How to create data assets](how-to-create-data-assets.md).
-The data asset created in the workspace can be shared to a registry. From the registry, it can be used in multiple workspaces. You can also change the name and version when sharing the data from workspace to registry. Sharing a data asset from a workspace to a registry uses the `--path` parameter to reference the data asset to be shared. Valid path formats are:
+The data asset created in the workspace can be shared to a registry. From the registry, it can be used in multiple workspaces. Note that the share command takes the optional `--share-with-name` and `--share-with-version` parameters; if you don't pass them, the data is shared with the same name and version it has in the workspace.
-* `azureml://subscriptions/<subscription-id>/resourcegroup/<resource-group-name>/data/<data-asset-name>/versions/<version-number>`
-* `azureml://resourcegroup/<resource-group-name>/data/<data-asset-name>/versions/<version-number>`
-* `azureml://data/<data-asset-name>/versions/<version-number>`
-
-The following example demonstrates using the `--path` parameter to share a data asset. Replace `<registry-name>` with the name of the registry that the data will be shared to. Replace `<resourceGroupName>` with the name of the resource group that contains the Azure Machine Learning workspace where the data asset is registered:
+The following example demonstrates using the share command to share a data asset. Replace `<registry-name>` with the name of the registry that the data will be shared to.
```azurecli
-az ml data create --registry-name <registry-name> --path azureml://resourcegroup/<resourceGroupName>/data/local-folder-example-titanic/versions/1
+az ml data share --name local-folder-example-titanic --version <version-in-workspace> --share-with-name <name-in-registry> --share-with-version <version-in-registry> --registry-name <registry-name>
``` # [Python SDK](#tab/python)
For more information on creating data assets in a workspace, see [How to create
The data asset created in the workspace can be shared to a registry, and from there it can be used in multiple workspaces. You can also change the name and version when sharing the data from the workspace to the registry.
-```python
-# Fetch the data from the workspace
-data_in_workspace = ml_client_workspace.data.get(name="titanic-dataset", version="1")
-print("data from workspace:\n\n", data_in_workspace)
-
-# Change the format to one that the registry understands:
-# Note the asset ID when printing the `data_ready_to_copy` object.
-data_ready_to_copy = ml_client_workspace.data._prepare_to_copy(data_in_workspace)
-print("\n\ndata ready to copy:\n\n", data_ready_to_copy)
+Note that the share function takes the optional `share_with_name` and `share_with_version` parameters; if you don't pass them, the data is shared with the same name and version it has in the workspace.
-# Copy the data from the workspace to the registry
-ml_client_registry.data.create_or_update(data_ready_to_copy).wait()
+```python
+# Sharing data from workspace to registry
+ml_client_workspace.data.share(
+ name="titanic-dataset",
+ version="1",
+ registry_name="<REGISTRY_NAME>",
+ share_with_name=<name-in-registry>,
+ share_with_version=<version-in-registry>,
+)
```
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
The mAP, precision and recall values are logged at an epoch-level for image obje
While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model explanations dashboard to measure and report the relative contributions of dataset features. See how to [view the explanations dashboard in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#model-explanations-preview).
-For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](how-to-machine-learning-interpretability-automl.md).
+For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](./v1/how-to-machine-learning-interpretability-automl.md).
> [!NOTE] > Interpretability, best model explanation, is not available for automated ML forecasting experiments that recommend the following algorithms as the best model or ensemble:
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
Additional configurations|Description | Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
- Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+ Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](./v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels). Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary. Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
After your experiment completes, you can test the model(s) that automated ML gen
To better understand your model, you can see which data features (raw or engineered) influenced the model's predictions with the model explanations dashboard.
-The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations](how-to-machine-learning-interpretability-aml.md#visualizations).
+The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations](./v1/how-to-machine-learning-interpretability-aml.md#visualizations).
To get explanations for a particular model,
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
For details on how to create a deployment configuration and deploy a registered
Model interpretability allows you to understand why your models made predictions, and the underlying feature importance values. The SDK includes various packages for enabling model interpretability features, both at training and inference time, for local and deployed models.
-See how to [enable interpretability features](../how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
+See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
For general information on how model explanations and feature importance can be enabled in other areas of the SDK outside of automated machine learning, see the [concept article on interpretability](../how-to-machine-learning-interpretability.md) .
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
logging.basicConfig(level=logging.DEBUG)
Azure Machine Learning can also log information from other sources during training, such as automated machine learning runs, or Docker containers that run the jobs. These logs aren't documented, but if you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
-For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](../how-to-track-designer-experiments.md)
+For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](how-to-track-designer-experiments.md)
## Example notebooks
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-machine-learning-interpretability-automl.md
You can visualize the feature importance chart in your workspace in [Azure Machi
[![Machine Learning Interpretability Architecture](./media/how-to-machine-learning-interpretability-automl/automl-explanation.png)](./media/how-to-machine-learning-interpretability-automl/automl-explanation.png#lightbox)
-For more information on the explanation dashboard visualizations and specific plots, please refer to the [how-to doc on interpretability](../how-to-machine-learning-interpretability-aml.md).
+For more information on the explanation dashboard visualizations and specific plots, please refer to the [how-to doc on interpretability](how-to-machine-learning-interpretability-aml.md).
## Next steps
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
You can also delete the resource group, which deletes the workspace and all othe
az group delete -g <resource-group-name> ```
-If you accidentally deleted your workspace, are still able to retrieve your notebooks. For more information, see the [workspace deletion](../how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
+If you accidentally deleted your workspace, you're still able to retrieve your notebooks. For more information, see the [workspace deletion](how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
## Troubleshooting
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
When you no longer need a workspace, delete it.
[!INCLUDE [machine-learning-delete-workspace](../../../includes/machine-learning-delete-workspace.md)]
-If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](../how-to-high-availability-machine-learning.md#workspace-deletion).
+If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](how-to-high-availability-machine-learning.md#workspace-deletion).
Delete the workspace `ws`:
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for Mar
| East US |40.71.8.203, 40.71.83.113 |40.121.158.30|191.238.6.43 | | East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | | | France Central | 40.79.137.0, 40.79.129.1 | | |
-| France South | 40.79.177.0 | | |
+| France South | 40.79.177.0, 40.79.176.40 | | |
| Germany Central | 51.4.144.100 | | | | Germany North | 51.116.56.0 | | | Germany North East | 51.5.144.179 | | |
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
ms. Previously updated : 12/12/2022 Last updated : 05/31/2023 # Assessment - Common questions
-This article answers common questions about assessments in Azure Migrate. If you've other questions, check these resources:
+This article answers common questions about assessments in Azure Migrate. If you have other questions, check these resources:
- [General questions](resources-faq.md) about Azure Migrate - Questions about the [Azure Migrate appliance](common-questions-appliance.md)
Review the supported geographies for [public](migrate-support-matrix.md#public-c
## How many servers can I discover with an appliance?
-You can discover up to 10,000 servers from VMware environment, up to 5,000 servers from Hyper-V environment, and up to 1000 physical servers by using a single appliance. If you've more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
+You can discover up to 10,000 servers from a VMware environment, up to 5,000 servers from a Hyper-V environment, and up to 1,000 physical servers by using a single appliance. If you have more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
## How do I choose the assessment type?
By design, in Hyper-V if maximum memory provisioned is less than what is require
## I see a banner on my assessment that the assessment now also considers processor parameters. What will be the impact of recalculating the assessment?
-The assessment now considers processor parameters such as number of operational cores, sockets, etc. and calculating its optimal performance over a period in a simulated environment. This is done to benchmark all processor-based available processor information. Recalculate your assessments to see the updated recommendations.
+The assessment now considers processor parameters, such as the number of operational cores and sockets, and calculates the processor's optimal performance over a period in a simulated environment. This benchmarking is based on all available processor information. Recalculate your assessments to see the updated recommendations.
-The processor benchmark numbers are now considered along with the resource utilization to ensure, we match the processor performance of your on-premises VMware environment and recommend the target Azure SKU sizes accordingly. This is a way to further improve the assessment recommendations to match your performance needs more closely.
+The processor benchmark numbers are now considered along with the resource utilization to ensure that we match the processor performance of your on-premises VMware, Hyper-V, and physical servers and recommend the target Azure SKU sizes accordingly. This is a way to further improve the assessment recommendations to match your performance needs more closely.
-Due to this, the target Azure VM cost can differ from your earlier assessments of the same target. Also, the number of cores allocated in the target Azure SKU could also vary if the processor performance of target is a match for your on-premises VMware environment.
+Due to this, the target Azure VM cost can differ from your earlier assessments of the same target. The number of cores allocated in the target Azure SKU can also vary if the processor performance of the target is a match for your on-premises VMware, Hyper-V, and physical servers.
## For scenarios where customers choose "as on premises", is there any impact due to processor benchmarking?
No, there's no impact because we don't consider processor benchmarking for the as-on-premises scenario.
## I see an increase in my monthly costs after I recalculate my assessments. Is this the most optimized cost for me?
-If you've selected all available options for your "VM Series" in your assessment settings, you will get the most optimized cost recommendation for your VMs. However, if you choose only some of the available options for the VM series, the recommendation might skip the most optimized option for you while assigning you an Azure VM SKU while matching your processor performance numbers.
+If you've selected all available options for your "VM Series" in your assessment settings, you'll get the most optimized cost recommendation for your VMs. However, if you choose only some of the available options for the VM series, the recommendation might skip the most optimized option when assigning an Azure VM SKU that matches your processor performance numbers.
## Why can't I see all Azure VM families in the Azure VM assessment properties?
migrate How To Delete Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-delete-project.md
Title: Delete an Azure Migrate project description: In this article, learn how you can delete an Azure Migrate project by using the Azure portal.--
++ Last updated 04/14/2021
migrate How To Migrate Vmware Vms With Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-migrate-vmware-vms-with-cmk-disks.md
Title: Migrate VMware virtual machines to Azure with server-side encryption(SSE) and customer-managed keys(CMK) using the Migration and modernization tool description: Learn how to migrate VMware VMs to Azure with server-side encryption(SSE) and customer-managed keys(CMK) using the Migration and modernization tool --
++ Last updated 12/12/2022
migrate Hyper V Migration Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/hyper-v-migration-architecture.md
Title: How does Hyper-V migration work in Azure Migrate? description: Learn about Hyper-V migration with Azure Migrate --++ ms. Last updated 12/12/2022
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Title: Azure Migrate support matrix description: Provides a summary of support settings and limitations for the Azure Migrate service.--++ ms. Last updated 01/03/2023
migrate Migrate V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md
Title: Work with the previous version of Azure Migrate description: Describes how to work with the previous version of Azure Migrate.--++ ms. Last updated 03/08/2023
migrate Prepare Isv Movere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-isv-movere.md
Title: Prepare Azure Migrate to work with an ISV tool/Movere description: This article describes how to prepare Azure Migrate to work with an ISV tool or Movere, and then how to start using the tool. --++ ms. Last updated 10/15/2020
migrate Resources Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/resources-faq.md
Title: Azure Migrate FAQ description: Get answers to common questions about the Azure Migrate service.--
++ Last updated 12/12/2022
migrate Troubleshoot General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-general.md
Title: Troubleshoot Azure Migrate issues | Microsoft Docs description: Provides an overview of known issues in the Azure Migrate service, as well as troubleshooting tips for common errors.--
++ Last updated 07/01/2021
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
Title: Troubleshoot network connectivity issues | Microsoft Docs description: Provides troubleshooting tips for common errors in using Azure Migrate with private endpoints. - Last updated 12/12/2022
migrate Troubleshoot Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-project.md
Title: Troubleshoot Azure Migrate projects description: Helps you to troubleshoot issues with creating and managing Azure Migrate projects.--
++ Last updated 02/18/2022
network-watcher Nsg Flow Logs Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-policy-portal.md
Title: Manage NSG flow logs by using Azure Policy
+ Title: Manage NSG flow logs using Azure Policy
-description: Learn how to use built-in policies to audit network security groups and deploy Azure Network Watcher NSG flow logs.
+description: Learn how to use Azure Policy built-in policies to audit network security groups and deploy Azure Network Watcher NSG flow logs.
Previously updated : 04/30/2023 Last updated : 05/30/2023
-# Manage NSG flow logs by using Azure Policy
+# Manage NSG flow logs using Azure Policy
Azure Policy helps you enforce organizational standards and assess compliance at scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. To learn more about Azure Policy, see [What is Azure Policy?](../governance/policy/overview.md) and [Quickstart: Create a policy assignment to identify non-compliant resources](../governance/policy/assign-policy-portal.md). In this article, you learn how to use two built-in policies to manage your setup of network security group (NSG) flow logs. The first policy flags any network security group that doesn't have flow logs enabled. The second policy automatically deploys flow logs for network security groups that don't have them enabled.
-## Audit network security groups by using a built-in policy
+## Audit network security groups using a built-in policy
The **Flow logs should be configured for every network security group** policy audits all existing network security groups in a scope by checking all Azure Resource Manager objects of type `Microsoft.Network/networkSecurityGroups`. This policy then checks for linked flow logs via the flow logs property of the network security group, and it flags any network security group that doesn't have flow logs enabled.
-To audit your flow logs by using the built-in policy:
+To audit your flow logs using the built-in policy:
1. Sign in to the [Azure portal](https://portal.azure.com).
To audit your flow logs by using the built-in policy:
1. Select **Resource compliance** to get a list of all non-compliant network security groups.
- :::image type="content" source="./media/nsg-flow-logs-policy-portal/audit-policy-compliance-details.png" alt-text="Screenshot of the page for audit policy compliance in the Azure portal." lightbox="./media/nsg-flow-logs-policy-portal/audit-policy-compliance-details.png":::
+ :::image type="content" source="./media/nsg-flow-logs-policy-portal/audit-policy-compliance-details.png" alt-text="Screenshot of the Policy compliance page that shows the noncompliant resources based on the audit policy." lightbox="./media/nsg-flow-logs-policy-portal/audit-policy-compliance-details.png":::
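
The portal steps above can also be approximated from the command line. The following is only a sketch under assumptions: the assignment name and subscription ID are placeholders (not from this article), and it assumes a recent Azure CLI where the policy commands below exist with these parameters.

```azurecli
# Sketch only: the assignment name and subscription ID are placeholders.
# Look up the built-in audit definition by its display name.
definition=$(az policy definition list \
    --query "[?displayName=='Flow logs should be configured for every network security group'].name" \
    --output tsv)

# Assign the audit policy at subscription scope.
az policy assignment create \
    --name "audit-nsg-flow-logs" \
    --policy "$definition" \
    --scope "/subscriptions/<subscription-id>"

# After the next evaluation cycle, list the non-compliant network security groups.
az policy state list \
    --filter "policyAssignmentName eq 'audit-nsg-flow-logs' and complianceState eq 'NonCompliant'" \
    --query "[].resourceId" \
    --output table
```

Compliance evaluation isn't immediate after assignment, so the state query may return no results until an evaluation cycle has run.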
-## Deploy and configure NSG flow logs by using a built-in policy
+## Deploy and configure NSG flow logs using a built-in policy
The **Deploy a flow log resource with target network security group** policy checks all existing network security groups in a scope by checking all Azure Resource Manager objects of type `Microsoft.Network/networkSecurityGroups`. It then checks for linked flow logs via the flow logs property of the network security group. If the property doesn't exist, the policy deploys a flow log.
To assign the *deployIfNotExists* policy:
:::image type="content" source="./media/nsg-flow-logs-policy-portal/policy-scope.png" alt-text="Screenshot of selecting the scope of the policy in the Azure portal." lightbox="./media/nsg-flow-logs-policy-portal/policy-scope.png":::
-1. Select the ellipsis (**...**) next to **Policy definition** to choose the built-in policy that you want to assign. Enter *flow log* in the search box, and the select the **Built-in** filter. From the search results, select **Deploy a flow log resource with target network security group**, and then select **Add**.
+1. Select the ellipsis (**...**) next to **Policy definition** to choose the built-in policy that you want to assign. Enter *flow log* in the search box, and then select the **Built-in** filter. From the search results, select **Deploy a flow log resource with target network security group**, and then select **Add**.
:::image type="content" source="./media/nsg-flow-logs-policy-portal/deploy-policy.png" alt-text="Screenshot of selecting the deployment policy in the Azure portal." lightbox="./media/nsg-flow-logs-policy-portal/deploy-policy.png":::
To assign the *deployIfNotExists* policy:
| **Create a remediation task** | Select the checkbox if you want the policy to affect existing resources. |
| **Create a Managed Identity** | Select the checkbox. |
| **Type of Managed Identity** | Select the type of managed identity that you want to use. |
- | **System assigned identity location** | Select the region of your system-assigned identity. |
+ | **System assigned identity location** | Select the region of your system assigned identity. |
| **Scope** | Select the scope of your user-assigned identity. |
| **Existing user assigned identities** | Select your user-assigned identity. |
To assign the *deployIfNotExists* policy:
1. Select **Resource compliance** to get a list of all non-compliant network security groups.
- :::image type="content" source="./media/nsg-flow-logs-policy-portal/deploy-policy-compliance-details.png" alt-text="Screenshot of the page for deployment policy compliance in the Azure portal." lightbox="./media/nsg-flow-logs-policy-portal/deploy-policy-compliance-details.png":::
+ :::image type="content" source="./media/nsg-flow-logs-policy-portal/deploy-policy-compliance-details.png" alt-text="Screenshot of the Policy compliance page that shows the noncompliant resources." lightbox="./media/nsg-flow-logs-policy-portal/deploy-policy-compliance-details.png":::
+
+1. Allow the policy time to run, evaluate, and deploy flow logs for all non-compliant network security groups. Then select **Resource compliance** again to check the status of the network security groups (you won't see noncompliant network security groups if the policy completed its remediation).
+
+ :::image type="content" source="./media/nsg-flow-logs-policy-portal/deploy-policy-compliance-details-compliant-resources.png" alt-text="Screenshot of the Policy compliance page that shows all resources are compliant." lightbox="./media/nsg-flow-logs-policy-portal/deploy-policy-compliance-details-compliant-resources.png":::
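
The assignment and remediation steps above can be sketched with the Azure CLI as well. This is a hedged sketch under assumptions: the names, region, and subscription ID are placeholders, the built-in definition name isn't reproduced here, and the parameters of this specific definition are omitted. A *deployIfNotExists* assignment needs a managed identity with a role assignment, and existing resources need an explicit remediation task.

```azurecli
# Sketch only: names, region, and subscription ID are placeholders.
# Assign the deployment policy with a system-assigned managed identity,
# which deployIfNotExists policies need in order to create resources.
az policy assignment create \
    --name "deploy-nsg-flow-logs" \
    --policy "<built-in-definition-name>" \
    --scope "/subscriptions/<subscription-id>" \
    --mi-system-assigned \
    --location "eastus" \
    --role "Contributor" \
    --identity-scope "/subscriptions/<subscription-id>"

# New and updated network security groups are remediated automatically;
# existing ones need an explicit remediation task.
az policy remediation create \
    --name "remediate-existing-nsgs" \
    --policy-assignment "deploy-nsg-flow-logs"
```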
## Next steps

- To learn more about NSG flow logs, see [Flow logs for network security groups](./network-watcher-nsg-flow-logging-overview.md).
- To learn about using built-in policies with traffic analytics, see [Manage traffic analytics using Azure Policy](./traffic-analytics-policy-portal.md).
-- To learn how to use an Azure Resource Manager template (ARM template) to deploy flow logs and traffic analytics, see [Configure NSG flow logs using an Azure Resource Manager template](./quickstart-configure-network-security-group-flow-logs-from-arm-template.md).
+- To learn how to use an Azure Resource Manager (ARM) template to deploy flow logs and traffic analytics, see [Configure NSG flow logs using an Azure Resource Manager template](./quickstart-configure-network-security-group-flow-logs-from-arm-template.md).
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-policy-portal.md
Title: Manage traffic analytics using Azure Policy
-description: Learn how to use Azure built-in policies to manage the deployment of Azure Network Watcher traffic analytics.
+description: Learn how to use Azure Policy built-in policies to audit Azure Network Watcher NSG flow logs and manage the deployment of traffic analytics.
Previously updated : 05/10/2023 Last updated : 05/30/2023
In this article, you learn how to use three built-in policies available for [Azu
## Audit flow logs using a built-in policy
-**Network Watcher flow logs should have traffic analytics enabled** policy audits all existing Azure Resource Manager objects of type `Microsoft.Network/networkWatchers/flowLogs` and checks if traffic analytics is enabled via the `networkWatcherFlowAnalyticsConfiguration.enabled` property of the flow logs resource. It flags the flow logs resource that has the property set to false.
+The **Network Watcher flow logs should have traffic analytics enabled** policy audits all existing Azure Resource Manager objects of type `Microsoft.Network/networkWatchers/flowLogs` and checks whether traffic analytics is enabled via the `networkWatcherFlowAnalyticsConfiguration.enabled` property of the flow logs resource. The policy then flags any flow logs resource that has the property set to false.
-To assign policy and audit your flow logs, follow these steps:
+To audit your flow logs by using the built-in policy:
1. Sign in to the [Azure portal](https://portal.azure.com).
To assign policy and audit your flow logs, follow these steps:
1. Select **Review + create** and then **Create**.
- :::image type="content" source="./media/traffic-analytics-policy-portal/assign-audit-policy.png" alt-text="Screenshot of Basics tab to assign an audit policy in the Azure portal.":::
+ :::image type="content" source="./media/traffic-analytics-policy-portal/assign-audit-policy.png" alt-text="Screenshot of the Basics tab to assign an audit policy in the Azure portal.":::
> [!NOTE]
> This policy doesn't require any parameters. It also doesn't contain any role definitions, so you don't need to create role assignments for the managed identity in the **Remediation** tab.

1. Select **Compliance**. Search for the name of your assignment and then select it.
- :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy-compliance.png" alt-text="Screenshot of Compliance page showing the audit policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/audit-policy-compliance.png":::
+ :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy-compliance.png" alt-text="Screenshot of the Compliance page showing the audit policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/audit-policy-compliance.png":::
1. **Resource compliance** lists all non-compliant flow logs.
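
For a single flow log, the property this policy audits can also be inspected directly. The following is a hedged sketch: the region and flow log name are placeholders, and it assumes a recent Azure CLI where `az network watcher flow-log show` accepts `--location` and `--name`.

```azurecli
# Sketch only: the region and flow log name are placeholders.
# Print whether traffic analytics is enabled on one flow log resource --
# the same property the audit policy checks.
az network watcher flow-log show \
    --location "eastus" \
    --name "<flow-log-name>" \
    --query "flowAnalyticsConfiguration.networkWatcherFlowAnalyticsConfiguration.enabled"
```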
To assign either of the two *deployIfNotExists* policies, follow these steps:
:::image type="content" source="./media/traffic-analytics-policy-portal/azure-portal.png" alt-text="Screenshot of searching for policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/azure-portal.png":::
-1. Select **Assignments**, then select on **Assign Policy**.
+1. Select **Assignments**, and then select **Assign policy**.
:::image type="content" source="./media/traffic-analytics-policy-portal/assign-policy.png" alt-text="Screenshot of selecting Assign policy button in the Azure portal.":::
-1. Select the ellipsis **...** next to **Scope** to choose your Azure subscription that has the flow logs that you want the policy to audit. You can also choose the resource group that has the flow logs. After you made your selections, select **Select** button.
+1. Select the ellipsis **...** next to **Scope** to choose your Azure subscription that has the flow logs that you want the policy to audit. You can also choose the resource group that has the flow logs. After you make your selections, choose the **Select** button.
:::image type="content" source="./media/traffic-analytics-policy-portal/policy-scope.png" alt-text="Screenshot of selecting the scope of the policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/policy-scope.png":::
-1. Select the ellipsis **...** next to **Policy definition** to choose the built-in policy that you want to assign. Enter *traffic analytics* in the search box, and select **Built-in** filter. From the search results, select **Configure network security groups to use specific workspace, storage account and flow log retention policy for traffic analytics** and then select **Add**.
+1. Select the ellipsis **...** next to **Policy definition** to choose the built-in policy that you want to assign. Enter *traffic analytics* in the search box, and select the **Built-in** filter. From the search results, select **Configure network security groups to use specific workspace, storage account and flow log retention policy for traffic analytics** and then select **Add**.
:::image type="content" source="./media/traffic-analytics-policy-portal/deploy-policy.png" alt-text="Screenshot of selecting a deployIfNotExists policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/deploy-policy.png":::
To assign either of the two *deployIfNotExists* policies, follow these steps:
:::image type="content" source="./media/traffic-analytics-policy-portal/assign-deploy-policy-basics.png" alt-text="Screenshot of the Basics tab of assigning a deploy policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/assign-deploy-policy-basics.png":::
-1. Select **Next** button twice or select **Parameters** tab. Enter or select the following values:
+1. Select the **Next** button twice, or select the **Parameters** tab. Then, enter or select the following values:
| Setting | Value |
| --- | --- |
To assign either of the two *deployIfNotExists* policies, follow these steps:
:::image type="content" source="./media/traffic-analytics-policy-portal/deploy-policy-compliance.png" alt-text="Screenshot of Compliance page showing the deploy policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/deploy-policy-compliance.png":::
-1. **Resource compliance** lists all non-compliant flow logs.
+1. Select **Resource compliance** to get a list of all non-compliant flow logs.
:::image type="content" source="./media/traffic-analytics-policy-portal/deploy-policy-compliance-details.png" alt-text="Screenshot showing details of the deploy policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/deploy-policy-compliance-details.png":::
In such a scenario, the managed identity must be manually granted access. Go to th
## Next steps

-- Learn about [NSG flow logs built-in policies](./nsg-flow-logs-policy-portal.md)
-- Learn more about [traffic analytics](./traffic-analytics.md)
+- Learn about [NSG flow logs built-in policies](./nsg-flow-logs-policy-portal.md).
+- Learn more about [traffic analytics](./traffic-analytics.md).
+- To learn how to use an Azure Resource Manager (ARM) template to deploy flow logs and traffic analytics, see [Configure NSG flow logs using an Azure Resource Manager template](./quickstart-configure-network-security-group-flow-logs-from-arm-template.md).
networking Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/cli-samples.md
- Title: Azure CLI Samples - Networking
-description: Learn about Azure CLI samples for networking that include connectivity between Azure resources and for load balancing and traffic direction.
--- Previously updated : 03/23/2023----
-# Azure CLI Samples for networking
-
-The following table includes links to bash scripts built using the Azure CLI.
-
-| Script | Description |
-|-|-|
-|**Connectivity between Azure resources**||
-| [Create a virtual network for multi-tier applications](./scripts/virtual-network-cli-sample-multi-tier-application.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. |
-| [Peer two virtual networks](./scripts/virtual-network-cli-sample-peer-two-virtual-networks.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates and connects two virtual networks in the same region. |
-| [Route traffic through a network virtual appliance](./scripts/virtual-network-cli-sample-route-traffic-through-nva.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates a virtual network with front-end and back-end subnets and a VM that is able to route traffic between the two subnets. |
-| [Filter inbound and outbound VM network traffic](./scripts/virtual-network-filter-network-traffic.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS and SSH. Outbound traffic to the Internet from the back-end subnet isn't permitted. |
-|**Load balancing and traffic direction**||
-| [Load balance multiple websites on VMs](./scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates two VMs with multiple IP configurations, joined to an Azure Availability Set, accessible through an Azure Load Balancer. |
-| [Direct traffic across multiple regions for high application availability](./scripts/traffic-manager-cli-websites-high-availability.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. |
-| | |
networking Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/powershell-samples.md
- Title: Azure PowerShell Samples - Networking
-description: Learn about Azure PowerShell samples for networking, including a sample for creating a virtual network for multi-tier applications.
---- Previously updated : 03/23/2023--
-# Azure PowerShell Samples for networking
-
-The following table includes links to scripts for Azure PowerShell.
-
-| Script | Description |
-|-|-|
-|**Connectivity between Azure resources**||
-| [Create a virtual network for multi-tier applications](./scripts/virtual-network-powershell-sample-multi-tier-application.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP, while traffic to the back-end subnet is limited to SQL, port 1433. |
-| [Peer two virtual networks](./scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates and connects two virtual networks in the same region. |
-| [Route traffic through a network virtual appliance](./scripts/virtual-network-powershell-sample-route-traffic-through-nva.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates a virtual network with front-end and back-end subnets and a VM that is able to route traffic between the two subnets. |
-| [Filter inbound and outbound VM network traffic](./scripts/virtual-network-powershell-filter-network-traffic.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS. Outbound traffic to the Internet from the back-end subnet isn't permitted. |
-|**Load balancing and traffic direction**||
-| [Load balance traffic to VMs for high availability](./scripts/load-balancer-windows-powershell-sample-nlb.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates several virtual machines in a highly available and load balanced configuration. |
-| [Direct traffic across multiple regions for high application availability](./scripts/traffic-manager-powershell-websites-high-availability.md?toc=%2fazure%2fnetworking%2ftoc.json) | Creates two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. |
-| | |
networking Virtual Network Cli Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-cli-sample-multi-tier-application.md
- Title: Azure CLI script sample - Create a network for multi-tier applications
-description: Azure CLI script sample - Create a virtual network for multi-tier applications.
--- Previously updated : 03/23/2023-----
-# Use an Azure CLI script sample to create a network for multi-tier applications
-
-This script sample creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. After running the script, you'll have two virtual machines, one in each subnet that you can deploy web server and MySQL software to.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/virtual-network-multi-tier-application/virtual-network-multi-tier-application.sh "Virtual network for multi-tier application")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```azurecli
-az group delete --name $resourceGroup --yes
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and front-end subnet. |
-| [az network subnet create](/cli/azure/network/vnet/subnet) | Creates a back-end subnet. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates a public IP address to access the VM from the Internet. |
-| [az network nic create](/cli/azure/network/nic) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [az network nsg create](/cli/azure/network/nsg) | Creates network security groups (NSG) that are associated to the front-end and back-end subnets. |
-| [az network nsg rule create](/cli/azure/network/nsg/rule) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [az vm create](/cli/azure/vm) | Creates virtual machines and attaches a NIC to each VM. This command also specifies the virtual machine image to use and administrative credentials. |
-| [az group delete](/cli/azure/group) | Deletes a resource group and all resources it contains. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-More networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md)
networking Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
- Title: Azure CLI Script Sample - Peer two virtual networks
-description: Use an Azure CLI script sample to create and connect two virtual networks in the same region through the Azure network.
--- Previously updated : 03/23/2023----
-# Use an Azure CLI sample script to connect two virtual networks
-
-This script creates and connects two virtual networks in the same region through the Azure network. After running the script, you'll create a peering between two virtual networks.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/peer-two-virtual-networks/peer-two-virtual-networks.sh "Peer two networks")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```azurecli
-az group delete --name $resourceGroup --yes
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual machine, and all related resources. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and subnet. |
-| [az network vnet peering create](/cli/azure/network/vnet/peering) | Creates a peering between two virtual networks. |
-| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-More networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md).
networking Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
- Title: Azure CLI script sample - Route traffic through a network virtual appliance
-description: Azure CLI script sample - Route traffic through a firewall network virtual appliance.
--- Previously updated : 03/23/2023----
-# Use an Azure CLI script to route traffic through a network virtual appliance
-
-This script sample creates a virtual network with front-end and back-end subnets. It also creates a VM with IP forwarding enabled to route traffic between the two subnets. After running the script you can deploy network software, such as a firewall application, to the VM.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/route-traffic-through-nva/route-traffic-through-nva.sh "Route traffic through a network virtual appliance")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```azurecli
-az group delete --name $resourceGroup --yes
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and front-end subnet. |
-| [az network subnet create](/cli/azure/network/vnet/subnet) | Creates back-end and DMZ subnets. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates a public IP address to access the VM from the Internet. |
-| [az network nic create](/cli/azure/network/nic) | Creates a virtual network interface and enables IP forwarding for it. |
-| [az network nsg create](/cli/azure/network/nsg) | Creates a network security group (NSG). |
-| [az network nsg rule create](/cli/azure/network/nsg/rule) | Creates NSG rules that allow HTTP and HTTPS ports inbound to the VM. |
-| [az network vnet subnet update](/cli/azure/network/vnet/subnet)| Associates the NSGs and route tables to subnets. |
-| [az network route-table create](/cli/azure/network/route-table#az-network-route-table-create)| Creates a route table for all routes. |
-| [az network route-table route create](/cli/azure/network/route-table/route#az-network-route-table-route-create)| Creates routes to route traffic between subnets and the Internet through the VM. |
-| [az vm create](/cli/azure/vm) | Creates a virtual machine and attaches the NIC to it. This command also specifies the virtual machine image to use and administrative credentials. |
-| [az group delete](/cli/azure/group) | Deletes a resource group and all resources it contains. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-More networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md).
networking Virtual Network Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-filter-network-traffic.md
- Title: Azure CLI script sample - Filter VM network traffic
-description: Use an Azure CLI script to filter inbound and outbound virtual machine (VM) network traffic with front-end and back-end subnets.
---- Previously updated : 03/23/2023---
-# Use an Azure CLI script to filter inbound and outbound VM network traffic
-
-This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS, and SSH, while outbound traffic to the Internet from the back-end subnet isn't permitted. After running the script, you'll have one virtual machine with two NICs. Each NIC is connected to a different subnet.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/filter-network-traffic/filter-network-traffic.sh "Filter VM network traffic")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```azurecli
-az group delete --name $resourceGroup --yes
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and front-end subnet. |
-| [az network subnet create](/cli/azure/network/vnet/subnet) | Creates a back-end subnet. |
-| [az network vnet subnet update](/cli/azure/network/vnet/subnet) | Associates NSGs to subnets. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates a public IP address to access the VM from the Internet. |
-| [az network nic create](/cli/azure/network/nic) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [az network nsg create](/cli/azure/network/nsg) | Creates network security groups (NSG) that are associated to the front-end and back-end subnets. |
-| [az network nsg rule create](/cli/azure/network/nsg/rule) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [az vm create](/cli/azure/vm) | Creates virtual machines and attaches a NIC to each VM. This command also specifies the virtual machine image to use and administrative credentials. |
-| [az group delete](/cli/azure/group) | Deletes a resource group and all resources it contains. |
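
The NSG-rule step above can be sketched as a dry run (hypothetical resource group and NSG names; nothing here calls Azure — the command strings are only composed and printed). NSG rules are evaluated in priority order, lowest number first, so an Allow rule at priority 100 is matched before any broader rule with a higher number.

```azurecli
#!/usr/bin/env bash
# Hypothetical names for illustration only; no Azure call is made.
rg="rg-filter-sample"   # hypothetical resource group
nsg="nsg-frontend"      # hypothetical NSG name

# Compose one inbound TCP rule: name, destination port, priority, access.
build_nsg_rule_cmd() {
  printf 'az network nsg rule create --resource-group %s --nsg-name %s --name %s --destination-port-ranges %s --priority %s --access %s --protocol Tcp --direction Inbound' \
    "$rg" "$nsg" "$1" "$2" "$3" "$4"
}

build_nsg_rule_cmd Allow-HTTP  80  100 Allow
echo
build_nsg_rule_cmd Allow-HTTPS 443 110 Allow
```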
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-More networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md).
networking Virtual Network Powershell Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-filter-network-traffic.md
- Title: Azure PowerShell script sample - Filter VM network traffic
-description: Azure PowerShell script sample - Filter inbound and outbound VM network traffic.
--- Previously updated : 05/02/2023-----
-# Filter inbound and outbound VM network traffic
-
-This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS, while outbound traffic to the Internet from the back-end subnet isn't permitted. After running the script, you'll have one virtual machine with two NICs. Each NIC is connected to a different subnet.
-
-If needed, install Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure.
--
-## Sample script
--
-[!code-powershell[main](../../../powershell_scripts/virtual-network/filter-network-traffic/filter-network-traffic.ps1 "Filter VM network traffic")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates a subnet configuration object |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and front-end subnet. |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates security rules to be assigned to a network security group. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) | Associates NSGs to subnets. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address to access the VM from the Internet. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig) | Creates a VM configuration. This configuration includes information such as VM name, operating system, and administrative credentials. The configuration is used during VM creation. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Create a virtual machine. |
-|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
networking Virtual Network Powershell Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-sample-multi-tier-application.md
- Title: Azure PowerShell script sample - Create a network for multi-tier applications
-description: Azure PowerShell script sample - Create a virtual network for multi-tier applications.
--- Previously updated : 03/23/2023-----
-# Create a network for multi-tier applications
-
-This script sample creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. After running the script, you'll have two virtual machines, one in each subnet, that you can deploy web server and MySQL software to.
-
-If needed, install Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure.
--
-## Sample script
--
-[!code-powershell[main](../../../powershell_scripts/virtual-network/virtual-network-multi-tier-application/virtual-network-multi-tier-application.ps1 "Virtual network for multi-tier application")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name $rgName
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and front-end subnet. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates a back-end subnet. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address to access the VM from the Internet. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates network security groups (NSG) that are associated to the front-end and back-end subnets. |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates virtual machines and attaches a NIC to each VM. This command also specifies the virtual machine image to use and administrative credentials. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group and all resources it contains. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
networking Virtual Network Powershell Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md
- Title: Azure PowerShell Script Sample - Peer two virtual networks
-description: Create and connect two virtual networks in the same region. Use the Azure script for two peer virtual networks to connect the networks through Azure.
--- Previously updated : 03/23/2023----
-# Peer two virtual networks
-
-This script creates and connects two virtual networks in the same region through the Azure network. After running the script, you'll have a peering between the two virtual networks.
-
-If needed, install Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure.
--
-## Sample script
--
-[!code-azurepowershell[main](../../../powershell_scripts/virtual-network/peer-two-virtual-networks/peer-two-virtual-networks.ps1 "Peer two networks")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual machine, and all related resources. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)| Creates an Azure virtual network and subnet. |
-| [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) | Creates a peering between two virtual networks. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
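
Peering is directional: the `Add-AzVirtualNetworkPeering` step in the table must run once from each virtual network before traffic can flow both ways. A dry-run sketch using the equivalent Azure CLI call (hypothetical resource group and virtual network names; nothing here calls Azure — the command strings are only composed and printed):

```azurecli
#!/usr/bin/env bash
# Hypothetical names for illustration only; no Azure call is made.
rg="rg-peering-sample"   # hypothetical resource group

# Compose a one-direction peering from a local vnet to a remote vnet.
build_peering_cmd() {
  printf 'az network vnet peering create --resource-group %s --name %s-to-%s --vnet-name %s --remote-vnet %s --allow-vnet-access' \
    "$rg" "$1" "$2" "$1" "$2"
}

build_peering_cmd myVirtualNetwork1 myVirtualNetwork2   # first direction
echo
build_peering_cmd myVirtualNetwork2 myVirtualNetwork1   # reverse direction
```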
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
networking Virtual Network Powershell Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-sample-route-traffic-through-nva.md
- Title: Azure PowerShell script sample - Route traffic through a network virtual appliance
-description: Azure PowerShell script sample - Route traffic through a firewall network virtual appliance.
--- Previously updated : 03/23/2023----
-# Route traffic through a network virtual appliance
-
-This script sample creates a virtual network with front-end and back-end subnets. It also creates a VM with IP forwarding enabled to route traffic between the two subnets. After running the script, you can deploy network software, such as a firewall application, to the VM.
-
-If needed, install Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure.
--
-## Sample script
--
-[!code-powershell[main](../../../powershell_scripts/virtual-network/route-traffic-through-nva/route-traffic-through-nva.ps1 "Route traffic through a network virtual appliance")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and front-end subnet. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates back-end and DMZ subnets. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address to access the VM from the Internet. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates a virtual network interface and enables IP forwarding for it. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates a network security group (NSG). |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates NSG rules that allow HTTP and HTTPS ports inbound to the VM. |
-| [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig)| Associates the NSGs and route tables to subnets. |
-| [New-AzRouteTable](/powershell/module/az.network/new-azroutetable)| Creates a route table for all routes. |
-| [New-AzRouteConfig](/powershell/module/az.network/new-azrouteconfig)| Creates routes to route traffic between subnets and the Internet through the VM. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates a virtual machine and attaches the NIC to it. This command also specifies the virtual machine image to use and administrative credentials. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group and all resources it contains. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
New-AzResourceGroup -Name $resourceGroup -Location $location

```powershell
$suffix=Get-Random # random suffix for the Service Principal
$spDisplayName="sp-$resourceGroup-$suffix"
-$azureADAppSp = New-AzADServicePrincipal -DisplayName $displayName -Role Contributor
+$azureADAppSp = New-AzADServicePrincipal -DisplayName $spDisplayName -Role Contributor
New-AzRoleAssignment -ObjectId $azureADAppSp.Id -RoleDefinitionName 'User Access Administrator' -ResourceGroupName $resourceGroup -ObjectType 'ServicePrincipal'
New-AzRoleAssignment -ObjectId $azureADAppSp.Id -RoleDefinitionName 'Contributor' -ResourceGroupName $resourceGroup -ObjectType 'ServicePrincipal'
```

```powershell
$aadClientSecretDigest = ConvertTo-SecureString -String $azureADAppSp.PasswordCredentials.SecretText -AsPlainText -Force
-$aadClientSecretDigest = ConvertTo-SecureString -String $azureADAppSp.PasswordCredentials.SecretText -AsPlainText -Force
```

### Get the service principal for the OpenShift resource provider - PowerShell
Write-Verbose (ConvertTo-Json $templateParams) -Verbose
```powershell
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroup @templateParams `
-    -TemplateParameterFile azuredeploy.json
+    -TemplateFile azuredeploy.json
```

::: zone-end
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
You can create an Azure Database for PostgreSQL server in one of three different
| VM series | B-series | Ddsv4-series, <br> Dsv3-series | Edsv4-series, <br> Esv3-series |
| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 20(v4), 32, 48, 64 |
| Memory per vCore | Variable | 4 GB | 6.75 to 8 GB |
-| Storage size | 32 GB to 16 TB | 32 GB to 16 TB | 32 GB to 16 TB |
+| Storage size | 32 GB to 32 TB | 32 GB to 32 TB | 32 GB to 32 TB |
| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |

To choose a pricing tier, use the following table as a starting point.
Compute resources can be selected based on the tier, vCores and memory size. vCo
The detailed specifications of the available server types are as follows:
-| SKU Name | vCores | Memory Size | Max Supported IOPS | Max Supported I/O bandwidth |
-|-|--|-|--|--|
-| **Burstable** | | | | |
-| B1ms | 1 | 2 GiB | 640 | 10 MiB/sec |
-| B2s | 2 | 4 GiB | 1280 | 15 MiB/sec |
-| B2ms | 2 | 4 GiB | 1700 | 22.5 MiB/sec |
-| B4ms | 4 | 8 GiB | 2400 | 35 MiB/sec |
-| B8ms | 8 | 16 GiB | 3100 | 50 MiB/sec |
-| B12ms | 12 | 24 GiB | 3800 | 50 MiB/sec |
-| B16ms | 16 | 32 GiB | 4300 | 50 MiB/sec |
-| B20ms | 20 | 40 GiB | 5000 | 50 MiB/sec |
-| **General Purpose** | | | | |
+| SKU Name | vCores | Memory Size | Max Supported IOPS | Max Supported I/O bandwidth |
+|-|--|-|-- |--|
+| **Burstable** | | | | |
+| B1ms | 1 | 2 GiB | 640 | 10 MiB/sec |
+| B2s | 2 | 4 GiB | 1280 | 15 MiB/sec |
+| B2ms | 2 | 4 GiB | 1700 | 22.5 MiB/sec |
+| B4ms | 4 | 8 GiB | 2400 | 35 MiB/sec |
+| B8ms | 8 | 16 GiB | 3100 | 50 MiB/sec |
+| B12ms | 12 | 24 GiB | 3800 | 50 MiB/sec |
+| B16ms | 16 | 32 GiB | 4300 | 50 MiB/sec |
+| B20ms | 20 | 40 GiB | 5000 | 50 MiB/sec |
+| **General Purpose** | | | | |
| D2s_v3 / D2ds_v4 | 2 | 8 GiB | 3200 | 48 MiB/sec |
| D4s_v3 / D4ds_v4 | 4 | 16 GiB | 6400 | 96 MiB/sec |
| D8s_v3 / D8ds_v4 | 8 | 32 GiB | 12800 | 192 MiB/sec |
| D16s_v3 / D16ds_v4 | 16 | 64 GiB | 18000 | 384 MiB/sec |
-| D32s_v3 / D32ds_v4 | 32 | 128 GiB | 18000 | 750 MiB/sec |
-| D48s_v3 / D48ds_v4 | 48 | 192 GiB | 18000 | 750 MiB/sec |
-| D64s_v3 / D64ds_v4 | 64 | 256 GiB | 18000 | 750 MiB/sec |
-| **Memory Optimized** | | | | |
+| D32s_v3 / D32ds_v4 | 32 | 128 GiB | 18000 | 900 MiB/sec |
+| D48s_v3 / D48ds_v4 | 48 | 192 GiB | 18000 | 900 MiB/sec |
+| D64s_v3 / D64ds_v4 | 64 | 256 GiB | 18000 | 900 MiB/sec |
+| **Memory Optimized** | | | | |
| E2s_v3 / E2ds_v4 | 2 | 16 GiB | 3200 | 48 MiB/sec |
| E4s_v3 / E4ds_v4 | 4 | 32 GiB | 6400 | 96 MiB/sec |
| E8s_v3 / E8ds_v4 | 8 | 64 GiB | 12800 | 192 MiB/sec |
| E16s_v3 / E16ds_v4 | 16 | 128 GiB | 18000 | 384 MiB/sec |
-| E20ds_v4 | 20 | 160 GiB | 18000 | 480 MiB/sec |
-| E32s_v3 / E32ds_v4 | 32 | 256 GiB | 18000 | 750 MiB/sec |
-| E48s_v3 / E48ds_v4 | 48 | 384 GiB | 18000 | 750 MiB/sec |
-| E64s_v3 / E64ds_v4 | 64 | 432 GiB | 18000 | 750 MiB/sec |
+| E20ds_v4 | 20 | 160 GiB | 18000 | 480 MiB/sec |
+| E32s_v3 / E32ds_v4 | 32 | 256 GiB | 18000 | 900 MiB/sec |
+| E48s_v3 / E48ds_v4 | 48 | 384 GiB | 18000 | 900 MiB/sec |
+| E64s_v3 / E64ds_v4 | 64 | 432 GiB | 18000 | 900 MiB/sec |
## Storage
Storage is available in the following fixed sizes:
| 4 TiB | 7,500 |
| 8 TiB | 16,000 |
| 16 TiB | 18,000 |
+| 32 TiB | 20,000 |
Note that IOPS are also constrained by your VM type. Even though you can select any storage size independent of the server type, you may not be able to use all IOPS that the storage provides, especially when you choose a server with a small number of vCores.
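
To illustrate the rule above: the IOPS you can actually use is the smaller of the VM-type limit and the storage-size limit. A tiny helper (figures taken from this article's tables):

```azurecli
#!/usr/bin/env bash
# Effective IOPS is the minimum of the VM-type limit and the storage-size limit.
effective_iops() {  # args: vm_iops storage_iops
  if [ "$1" -lt "$2" ]; then echo "$1"; else echo "$2"; fi
}

effective_iops 3200 5000    # D2s_v3 (3200 IOPS) with 1 TiB storage (5000 IOPS): VM-limited -> 3200
effective_iops 18000 2300   # D16s_v3 (18000 IOPS) with 512 GiB storage (2300 IOPS): storage-limited -> 2300
```

The starred entries in the table below are exactly the cells where the first argument (the VM limit) wins.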
You can monitor your I/O consumption in the Azure portal or by using Azure CLI c
### Maximum IOPS for your configuration
-|SKU Name |Storage Size in GiB |32 |64 |128 |256 |512 |1,024|2,048|4,096|8,192 |16,384|
+|SKU Name |Storage Size in GiB |32 |64 |128 |256 |512 |1,024|2,048|4,096|8,192 |16,384|32,768|
|--||||-|-|--|--|--|--||||
-| |Maximum IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|**Burstable** | | | | | | | | | | | |
-|B1ms |640 IOPS |120|240|500 |640*|640* |640* |640* |640* |640* |640* |
-|B2s |1280 IOPS |120|240|500 |1100|1280*|1280*|1280*|1280*|1280* |1280* |
-|B2ms |1280 IOPS |120|240|500 |1100|1700*|1700*|1700*|1700*|1700* |1700* |
-|B4ms |1280 IOPS |120|240|500 |1100|2300 |2400*|2400*|2400*|2400* |2400* |
-|B8ms |1280 IOPS |120|240|500 |1100|2300 |3100*|3100*|3100*|3100* |3100* |
-|B12ms |1280 IOPS |120|240|500 |1100|2300 |3800*|3800*|3800*|3800* |3800* |
-|B16ms |1280 IOPS |120|240|500 |1100|2300 |4300*|4300*|4300*|4300* |4300* |
-|B20ms |1280 IOPS |120|240|500 |1100|2300 |5000 |5000*|5000*|5000* |5000* |
+| |Maximum IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|**Burstable** | | | | | | | | | | | | |
+|B1ms |640 IOPS |120|240|500 |640*|640* |640* |640* |640* |640* |640* |640* |
+|B2s |1280 IOPS |120|240|500 |1100|1280*|1280*|1280*|1280*|1280* |1280* |1280* |
+|B2ms |1280 IOPS |120|240|500 |1100|1700*|1700*|1700*|1700*|1700* |1700* |1700* |
+|B4ms |1280 IOPS |120|240|500 |1100|2300 |2400*|2400*|2400*|2400* |2400* |2400* |
+|B8ms |1280 IOPS |120|240|500 |1100|2300 |3100*|3100*|3100*|3100* |3100* |3100* |
+|B12ms |1280 IOPS |120|240|500 |1100|2300 |3800*|3800*|3800*|3800* |3800* |3800* |
+|B16ms |1280 IOPS |120|240|500 |1100|2300 |4300*|4300*|4300*|4300* |4300* |4300* |
+|B20ms |1280 IOPS |120|240|500 |1100|2300 |5000 |5000*|5000*|5000* |5000* |5000* |
|**General Purpose** | | | | | | | | | | | |
-|D2s_v3 / D2ds_v4 |3200 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |
-|D4s_v3 / D4ds_v4 |6,400 IOPS |120|240|500 |1100|2300 |5000 |6400*|6400*|6400* |6400* |
-|D8s_v3 / D8ds_v4 |12,800 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |12800*|12800*|
-|D16s_v3 / D16ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|D32s_v3 / D32ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|D48s_v3 / D48ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|D64s_v3 / D64ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
+|D2s_v3 / D2ds_v4 |3200 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |3200* |
+|D4s_v3 / D4ds_v4 |6,400 IOPS |120|240|500 |1100|2300 |5000 |6400*|6400*|6400* |6400* |6400* |
+|D8s_v3 / D8ds_v4 |12,800 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |12800*|12800*|12800*|
+|D16s_v3 / D16ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|D32s_v3 / D32ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|D48s_v3 / D48ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|D64s_v3 / D64ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
|**Memory Optimized**| | | | | | | | | | | |
-|E2s_v3 / E2ds_v4 |3200 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |
-|E4s_v3 / E4ds_v4 |6,400 IOPS |120|240|500 |1100|2300 |5000 |6400*|6400*|6400* |6400* |
-|E8s_v3 / E8ds_v4 |12,800 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |12800*|12800*|
-|E16s_v3 / E16ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|E20ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|E32s_v3 / E32ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|E48s_v3 / E48ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
-|E64s_v3 / E64ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |
+|E2s_v3 / E2ds_v4 |3200 IOPS |120|240|500 |1100|2300 |3200*|3200*|3200*|3200* |3200* |3200* |
+|E4s_v3 / E4ds_v4 |6,400 IOPS |120|240|500 |1100|2300 |5000 |6400*|6400*|6400* |6400* |6400* |
+|E8s_v3 / E8ds_v4 |12,800 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |12800*|12800*|12800*|
+|E16s_v3 / E16ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|E20ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|E32s_v3 / E32ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|E48s_v3 / E48ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
+|E64s_v3 / E64ds_v4 |18,000 IOPS |120|240|500 |1100|2300 |5000 |7500 |7500 |16000 |18000 |20000 |
When marked with a \*, IOPS are limited by the VM type you selected. Otherwise IOPS are limited by the selected storage size.
When marked with a \*, IOPS are limited by the VM type you selected. Otherwise I
### Maximum I/O bandwidth (MiB/sec) for your configuration
-|SKU Name |Storage Size, GiB |32 |64 |128 |256 |512 |1,024 |2,048 |4,096 |8,192 |16,384|
-|--|-| | |- |- |-- |-- |-- |-- || |
-| |**Storage Bandwidth, MiB/sec** |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
-|**Burstable** | | | | | | | | | | | |
-|B1ms |10 MiB/sec |10* |10* |10* |10* |10* |10* |10* |10* |10* |10* |
-|B2s |15 MiB/sec |15* |15* |15* |15* |15* |15* |15* |15* |15* |15* |
-|B2ms |22.5 MiB/sec |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |
-|B4ms |15 MiB/sec |32 |35* |35* |35* |35* |35* |35* |35* |35* |35* |
-|B8ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |
-|B12ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |
-|B16ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |
-|B20ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |
-|**General Purpose** | | | | | | | | | | | |
-|D2s_v3 / D2ds_v4 |48 MiB/sec |25 |48* |48* |48* |48* |48* |48* |48* |48* |48* |
-|D4s_v3 / D4ds_v4 |96 MiB/sec |25 |50 |96* |96* |96* |96* |96* |96* |96* |96* |
-|D8s_v3 / D8ds_v4 |192 MiB/sec |25 |50 |100 |125 |150 |192* |192* |192* |192* |192* |
-|D16s_v3 / D16ds_v4 |384 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |384* |384* |
-|D32s_v3 / D32ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
-|D48s_v3 / D48ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
-|D64s_v3 / Dd64ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
-|**Memory Optimized**| | | | | | | | | | | |
-|E2s_v3 / E2ds_v4 |48 MiB/sec |25 |48* |48* |48* |48* |48* |48* |48* |48* |48* |
-|E4s_v3 / E4ds_v4 |96 MiB/sec |25 |50 |96* |96* |96* |96* |96* |96* |96* |96* |
-|E8s_v3 / E8ds_v4 |192 MiB/sec |25 |50 |100 |125 |150 |192* |192* |192* |192* |192* |
-|E16s_v3 / E16ds_v4 |384 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |384* |384* |
-|E20ds_v4 |480 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |480* |480* |
-|E32s_v3 / E32ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
-|E48s_v3 / E48ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
-|E64s_v3 / E64ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |
+|SKU Name |Storage Size, GiB |32 |64 |128 |256 |512 |1,024 |2,048 |4,096 |8,192 |16,384|32,768|
+|--|-| | |- |- |-- |-- |-- |-- ||||
+| |**Storage Bandwidth, MiB/sec** |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+|**Burstable** | | | | | | | | | | | | |
+|B1ms |10 MiB/sec |10* |10* |10* |10* |10* |10* |10* |10* |10* |10* |10* |
+|B2s |15 MiB/sec |15* |15* |15* |15* |15* |15* |15* |15* |15* |15* |15* |
+|B2ms |22.5 MiB/sec |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |22.5* |
+|B4ms |15 MiB/sec |32 |35* |35* |35* |35* |35* |35* |35* |35* |35* |35* |
+|B8ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |50* |
+|B12ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |50* |
+|B16ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |50* |
+|B20ms |15 MiB/sec |32 |50 |50* |50* |50* |50* |50* |50* |50* |50* |50* |
+|**General Purpose** | | | | | | | | | | | | |
+|D2s_v3 / D2ds_v4 |48 MiB/sec |25 |48* |48* |48* |48* |48* |48* |48* |48* |48* |48* |
+|D4s_v3 / D4ds_v4 |96 MiB/sec |25 |50 |96* |96* |96* |96* |96* |96* |96* |96* |96* |
+|D8s_v3 / D8ds_v4 |192 MiB/sec |25 |50 |100 |125 |150 |192* |192* |192* |192* |192* |192* |
+|D16s_v3 / D16ds_v4 |384 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |384* |384* |384* |
+|D32s_v3 / D32ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+|D48s_v3 / D48ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+|D64s_v3 / Dd64ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+|**Memory Optimized**| | | | | | | | | | | | |
+|E2s_v3 / E2ds_v4 |48 MiB/sec |25 |48* |48* |48* |48* |48* |48* |48* |48* |48* |48* |
+|E4s_v3 / E4ds_v4 |96 MiB/sec |25 |50 |96* |96* |96* |96* |96* |96* |96* |96* |96* |
+|E8s_v3 / E8ds_v4 |192 MiB/sec |25 |50 |100 |125 |150 |192* |192* |192* |192* |192* |192* |
+|E16s_v3 / E16ds_v4 |384 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |384* |384* |384* |
+|E20ds_v4 |480 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |480* |480* |480* |
+|E32s_v3 / E32ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+|E48s_v3 / E48ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
+|E64s_v3 / E64ds_v4 |750 MiB/sec |25 |50 |100 |125 |150 |200 |250 |250 |500 |750 |900 |
When marked with a \*, I/O bandwidth is limited by the VM type you selected. Otherwise I/O bandwidth is limited by the selected storage size.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Is-db-alive is a database server availability metric for Azure Postgres Flexible Server.
|Display Name |Metric ID |Unit |Description |Dimension |Default enabled|
|-|-|-|--|||
-|**Database Is Alive** (Preview) |`is-db-alive` |Count |Indicates if the database is up or not |N/a |Yes |
+|**Database Is Alive** (Preview) |`is_db_alive` |Count |Indicates if the database is up or not |N/a |Yes |
#### Considerations when using the Database availability metrics
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* Postgres 15 is now available in public preview for Azure Database for PostgreSQL - Flexible Server in limited regions.
* General availability: [Pgvector extension](how-to-use-pgvector.md) for Azure Database for PostgreSQL - Flexible Server.
* General availability: [Azure Key Vault Managed HSM](./concepts-data-encryption.md#using-azure-key-vault-managed-hsm) with Azure Database for PostgreSQL - Flexible Server.
+* General availability: [32 TB Storage](./concepts-compute-storage.md) with Azure Database for PostgreSQL - Flexible Server
## Release: April 2023 * Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL – Flexible Server.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net<br/>inference.ml.azure.com | | SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net |
-| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net <br/> privatelink.applicationinsights.azure.com| monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net <br/> applicationinsights.azure.com |
+| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com | | Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | {regionName}.privatelink.afs.azure.net | {regionName}.afs.azure.net | | Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
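As a sketch of how the recommended zones resolve, assuming a hypothetical storage account named `mysa`: inside the virtual network, the public FQDN CNAMEs through the `privatelink` zone, which inserts one extra label:

```bash
# The privatelink name adds a "privatelink" label ahead of the zone name.
public_fqdn="mysa.blob.core.windows.net"
privatelink_fqdn="${public_fqdn/blob.core/privatelink.blob.core}"
echo "$privatelink_fqdn"
# Verify from a VM inside the VNet (should return the private endpoint IP):
# nslookup "$public_fqdn"
```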
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Select an area for resources about how to integrate SAP and Azure in that space.
| Area | Description |
| - | -- |
+| [Azure OpenAI service](#azure-openai-service) | Learn how to integrate your SAP workloads with Azure OpenAI service. |
+| [Microsoft Copilot](#microsoft-copilot) | Learn how to integrate your SAP workloads with Microsoft Copilots. |
| [SAP RISE managed workloads](rise-integration.md#sap-btp-connectivity) | Learn how to integrate your SAP RISE managed workloads with Azure services. | | [Microsoft Office](#microsoft-office) | Learn about Office Add-ins in Excel, doing SAP Principal Propagation with Office 365, SAP Analytics Cloud and Data Warehouse Cloud integration and more. | | [Microsoft Teams](#microsoft-teams) | Discover collaboration scenarios boosting your daily productivity by interacting with your SAP applications directly from Microsoft Teams. |
Select an area for resources about how to integrate SAP and Azure in that space.
| [Threat Monitoring with Microsoft Sentinel for SAP](#microsoft-sentinel) | Learn how to best secure your SAP workload with Microsoft Sentinel, prevent incidents from happening and detect and respond to threats in real-time with this [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution. | | [SAP Business Technology Platform (BTP)](#sap-btp) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. |
+### Azure OpenAI service
+
+For more information about integration with [Azure OpenAI service](/azure/cognitive-services/openai/overview), see the following Azure documentation:
+
+- [Microsoft AI SDK for SAP](https://microsoft.github.io/aisdkforsapabap/docs/intro)
+- [ABAP SDK for Azure](https://github.com/microsoft/ABAP-SDK-for-Azure)
+
+Also see these SAP resources:
+
+- [Empower SAP RISE enterprise users with Azure OpenAI in multi-cloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/)
+- [Consume OpenAI services (GPT) through CAP & SAP BTP, AI Core](https://github.com/SAP-samples/azure-openai-aicore-cap-api)
+- [SAP SuccessFactors Helps HR Solve Skills Gap with Generative AI | SAP News](https://news.sap.com/2023/05/sap-successfactors-helps-hr-solve-skills-gap-with-generative-ai/)
+
+### Microsoft Copilot
+
+For more information about integration with [Microsoft 365 Copilot](https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/), see the following Microsoft resources:
+
+- [The synergy of market leaders: Exploring Microsoft and SAP's game-changing collaboration | blog](https://azure.microsoft.com/blog/the-synergy-of-market-leaders-exploring-microsoft-and-saps-game-changing-collaboration/)
+
+Also see these SAP resources:
+
+- [The future of work is now: An update on generative AI at SAP SuccessFactors](https://blogs.sap.com/2023/05/15/the-future-of-work-is-now-an-update-on-generative-ai-at-sap-successfactors/)
+- [SAP and Microsoft Collaborate on Joint Generative AI Offerings to Help Customers Address the Talent Gap | SAP News](https://news.sap.com/2023/05/sap-microsoft-joint-generative-ai-offerings-talent-gap/)
+ ### Microsoft Office For more information about integration with Microsoft Office, see the following Azure documentation:
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 04/21/2023 Last updated : 05/30/2023
Any entity trying to access Azure Active Directory (Azure AD) identity services
| [DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | | [DigiCert Global Root G3](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | | [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
-| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/archived/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
+| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/archived/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 29c87039f4dbfdb94dbcda6ca792836b<br>ee68c3e94ab5d55eb9395116424e25b0cadd9009 |
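To check a downloaded root certificate against the thumbprint column, you can print its SHA-1 fingerprint with `openssl`. This is a hedged sketch: the self-signed certificate below is only a stand-in so the command is runnable, and the DER encoding of the downloaded `.crt` is an assumption:

```bash
# Demonstration with a throwaway self-signed certificate; the real check
# would target the downloaded root .crt instead.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=example" 2>/dev/null
openssl x509 -in cert.pem -noout -fingerprint -sha1
# For a DER-encoded download such as DigiCertGlobalRootG2.crt (encoding is
# an assumption), add -inform DER and compare the printed value, colons
# removed, with the thumbprint column above:
# openssl x509 -inform DER -in DigiCertGlobalRootG2.crt -noout -fingerprint -sha1
```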
### Subordinate Certificate Authorities
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-aws.md
The script takes the following actions:
### Prerequisites
-You must have PowerShell and the AWS CLI on your machine.
+- Install the Amazon Web Services solution from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content (Public preview)](sentinel-solutions-deploy.md).
+
+- You must have PowerShell and the AWS CLI on your machine.
+ - [Installation instructions for PowerShell](/powershell/scripting/install/installing-powershell)
+ - [Installation instructions for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
+
- - [Installation instructions for PowerShell](/powershell/scripting/install/installing-powershell)
- - [Installation instructions for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
### Instructions
To run the script to set up the connector, use the following steps:
1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
-1. Select **Amazon Web Services S3** from the data connectors gallery, and in the details pane, select **Open connector page**.
+1. Select **Amazon Web Services S3** from the data connectors gallery.
+
+ If you don't see the connector, install the Amazon Web Services solution from the **Content Hub** in Microsoft Sentinel.
+1. In the details pane for the connector, select **Open connector page**.
1. In the **Configuration** section, under **1. Set up your AWS environment**, expand **Setup with PowerShell script (recommended)**. 1. Follow the on-screen instructions to download and extract the [AWS S3 Setup Script](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/ConfigAwsS3DataConnectorScripts.zip?raw=true) (link downloads a zip file containing the main setup script and helper scripts) from the connector page.
Microsoft recommends using the automatic setup script to deploy this connector.
- Create a [standard Simple Queue Service (SQS) queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-create-queue.html) in AWS.
+- Install the Amazon Web Services solution from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+ ### Instructions The manual setup consists of the following steps:
The manual setup consists of the following steps:
#### Create an AWS assumed role and grant access to the AWS Sentinel account
-1. In Microsoft Sentinel, select **Data connectors** and then select the **Amazon Web Services S3** line in the table and in the AWS pane to the right, select **Open connector page**.
+1. In Microsoft Sentinel, select **Data connectors**.
+
+1. Select **Amazon Web Services S3** from the data connectors gallery.
+ If you don't see the connector, install the Amazon Web Services solution from the **Content Hub** in Microsoft Sentinel.
+
+1. In the details pane for the connector, select **Open connector page**.
1. Under **Configuration**, copy the **External ID (Workspace ID)** and paste it aside. 1. In your AWS management console, under **Security, Identity & Compliance**, select **IAM**.
Learn how to [troubleshoot Amazon Web Services S3 connector issues](aws-s3-troub
## Prerequisites
-You must have write permission on the Microsoft Sentinel workspace.
+- You must have write permission on the Microsoft Sentinel workspace.
+- Install the Amazon Web Services solution from the Content Hub in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
> [!NOTE] > Microsoft Sentinel collects CloudTrail management events from all regions. It is recommended that you do not stream events from one region to another. ## Connect AWS CloudTrail
-1. In Microsoft Sentinel, select **Data connectors** and then select the **Amazon Web Services** line in the table and in the AWS pane to the right, select **Open connector page**.
+1. In Microsoft Sentinel, select **Data connectors**.
+
+1. Select **Amazon Web Services** from the data connectors gallery.
+
+ If you don't see the connector, install the Amazon Web Services solution from the **Content Hub** in Microsoft Sentinel.
+
+1. In the details pane for the connector, select **Open connector page**.
1. Follow the instructions under **Configuration** using the following steps.
sentinel Connect Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-active-directory.md
# Connect Azure Active Directory (Azure AD) data to Microsoft Sentinel
-> [!IMPORTANT]
-> As indicated below, some of the available log types are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-- You can use Microsoft Sentinel's built-in connector to collect data from [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) and stream it into Microsoft Sentinel. The connector allows you to stream the following log types: - [**Sign-in logs**](../active-directory/reports-monitoring/concept-all-sign-ins.md), which contain information about interactive user sign-ins where a user provides an authentication factor.
You can use Microsoft Sentinel's built-in connector to collect data from [Azure
- [**Provisioning logs**](../active-directory/reports-monitoring/concept-provisioning-logs.md) (also in **PREVIEW**), which contain system activity information about users, groups, and roles provisioned by the Azure AD provisioning service.
+> [!IMPORTANT]
+> Some of the available log types are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Prerequisites
You can use Microsoft Sentinel's built-in connector to collect data from [Azure
- Your user must be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) roles on the tenant you want to stream the logs from. - Your user must have read and write permissions to the Azure AD diagnostic settings in order to be able to see the connection status.
+- Install the solution for **Azure Active Directory** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
## Connect to Azure Active Directory
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-data-sources.md
# Microsoft Sentinel data connectors - After you onboard Microsoft Sentinel into your workspace, you can use data connectors to start ingesting your data into Microsoft Sentinel. Microsoft Sentinel comes with many out of the box connectors for Microsoft services, which you can integrate in real time. For example, the Microsoft 365 Defender connector is a [service-to-service connector](#service-to-service-integration-for-data-connectors) that integrates data from Office 365, Azure Active Directory (Azure AD), Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps. You can also enable built-in connectors to the broader security ecosystem for non-Microsoft products. For example, you can use [Syslog](#syslog), [Common Event Format (CEF)](#common-event-format-cef), or [REST APIs](#rest-api-integration-for-data-connectors) to connect your data sources with Microsoft Sentinel.
-Learn about [types of Microsoft Sentinel data connectors](data-connectors-reference.md) or learn about the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
-
-The Microsoft Sentinel **Data connectors** page shows the full list of connectors and their status in your workspace.
+The Microsoft Sentinel **Data connectors** page shows the list of connectors installed in your workspace and their status.
:::image type="content" source="media/collect-data/collect-data-page.png" alt-text="Screenshot of the data connectors gallery." lightbox="media/collect-data/collect-data-page.png":::
+For more data connectors, install the solution or standalone content items from the content hub. For more information, see the following articles:
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+- [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md)
+- [Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md)
+ <a name="agent-options"></a> <a name="data-connection-methods"></a> <a name="map-data-types-with-microsoft-sentinel-connection-options"></a> + ## Enable a data connector
-Select the connector you want to connect, and then select **Open connector page**.
+From the **Data connectors** page, select the active or custom connector you want to connect, and then select **Open connector page**. If you don't see the data connector you want, install the solution or standalone content items from the **Content Hub**.
-- Once you fulfill all the prerequisites listed in the **Instructions** tab, the connector page describes how to ingest the data to Microsoft Sentinel. It may take some time for data to start arriving. After you connect, you see a summary of the data in the **Data received** graph, and the connectivity status of the data types.
+Once you fulfill all the prerequisites listed in the **Instructions** tab, the connector page describes how to ingest the data to Microsoft Sentinel. It may take some time for data to start arriving. After you connect, you see a summary of the data in the **Data received** graph, and the connectivity status of the data types.
- :::image type="content" source="media/collect-data/opened-connector-page.png" alt-text="Screenshot showing how to configure data connectors." border="false":::
+ :::image type="content" source="media/collect-data/opened-connector-page.png" alt-text="Screenshot showing how to configure data connectors." border="false":::
-- In the **Next steps** tab, you'll see more content for the specific data type: Sample queries, visualization workbooks, and analytics rule templates to help you detect and investigate threats.
+In the **Next steps** tab, you'll see more content for the specific data type: Sample queries, visualization workbooks, and analytics rule templates to help you detect and investigate threats.
- :::image type="content" source="media/collect-data/data-insights.png" alt-text="Screenshot showing the data connector Next steps tab." border="false":::
+ :::image type="content" source="media/collect-data/data-insights.png" alt-text="Screenshot showing the data connector Next steps tab." border="false":::
Learn about your specific data connector in the [data connectors reference](data-connectors-reference.md).
sentinel Connect Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-defender-for-cloud.md
# Connect Microsoft Defender for Cloud alerts to Microsoft Sentinel
-## Background
-
-> [!NOTE]
-> - Microsoft Defender for Cloud was formerly known as Azure Security Center.
-> - Defender for Cloud's enhanced security features were formerly known collectively as Azure Defender.
- [Microsoft Defender for Cloud](../defender-for-cloud/index.yml)'s integrated cloud workload protections allow you to detect and quickly respond to threats across hybrid and multi-cloud workloads. This connector allows you to stream [security alerts from Defender for Cloud](../defender-for-cloud/alerts-reference.md) into Microsoft Sentinel, so you can view, analyze, and respond to Defender alerts, and the incidents they generate, in a broader organizational threat context. As [Microsoft Defender for Cloud Defender plans](../defender-for-cloud/defender-for-cloud-introduction.md#protect-cloud-workloads) are enabled per subscription, this data connector is also enabled or disabled separately for each subscription.
+Microsoft Defender for Cloud was formerly known as Azure Security Center. Defender for Cloud's enhanced security features were formerly known collectively as Azure Defender.
++ [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-### Alert synchronization
+## Alert synchronization
- When you connect Microsoft Defender for Cloud to Microsoft Sentinel, the status of security alerts that get ingested into Microsoft Sentinel is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert will display as closed in Microsoft Sentinel as well. - Changing the status of an alert in Defender for Cloud will *not* affect the status of any Microsoft Sentinel **incidents** that contain the Microsoft Sentinel alert, only that of the alert itself.
-### Bi-directional alert synchronization
+## Bi-directional alert synchronization
-- Enabling **bi-directional sync** will automatically sync the status of original security alerts with that of the Microsoft Sentinel incidents that contain those alerts. So, for example, when a Microsoft Sentinel incident containing a security alert is closed, the corresponding original alert will be closed in Microsoft Defender for Cloud automatically.
+Enabling **bi-directional sync** will automatically sync the status of original security alerts with that of the Microsoft Sentinel incidents that contain those alerts. So, for example, when a Microsoft Sentinel incident containing a security alert is closed, the corresponding original alert will be closed in Microsoft Defender for Cloud automatically.
## Prerequisites
As [Microsoft Defender for Cloud Defender plans](../defender-for-cloud/defender-
- You will need the `SecurityInsights` resource provider to be registered for each subscription where you want to enable the connector. Review the guidance on the [resource provider registration status](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) and the ways to register it. - To enable bi-directional sync, you must have the **Contributor** or **Security Admin** role on the relevant subscription.
+- Install the solution for **Microsoft Defender for Cloud** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
## Connect to Microsoft Defender for Cloud
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Using the link provided below, you will run a script on the designated machine t
[!INCLUDE [data-connector-prereq](includes/data-connector-prereq.md)]
+Install the product solution from the **Content Hub** in Microsoft Sentinel. If the product isn't listed, install the solution for **Common Event Format**. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+ Your machine must meet the following requirements: - **Hardware (physical/virtual)**
If your devices are sending Syslog and CEF logs over TLS (because, for example,
## Run the deployment script
-1. From the Microsoft Sentinel navigation menu, select **Data connectors**. Select the connector for your product from the connectors gallery (or the **Common Event Format (CEF)** if your product isn't listed), and then the **Open connector page** button on the lower right.
-
+1. In Microsoft Sentinel, select **Data connectors**.
+1. Select the connector for your product from the connectors gallery. If your product isn't listed, select **Common Event Format (CEF)**.
+1. In the details pane for the connector, select **Open connector page**.
1. On the connector page, in the instructions under **1.2 Install the CEF collector on the Linux machine**, copy the link provided under **Run the following script to install and apply the CEF collector**. If you don't have access to that page, copy the link from the text below (copying and pasting the **Workspace ID** and **Primary Key** from above in place of the placeholders):
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
Last updated 02/01/2023
# Connect data from Microsoft 365 Defender to Microsoft Sentinel - Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention (DLP)** and **Azure Active Directory Identity Protection (AADIP)**. The connector also lets you stream **advanced hunting** events from *all* of the above Defender components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics. For more information about incident integration and advanced hunting event collection, see [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
-> [!IMPORTANT]
->
-> The Microsoft 365 Defender connector is now generally available!
+ The Microsoft 365 Defender connector is now generally available.
## Prerequisites - You must have a valid license for Microsoft 365 Defender, as described in [Microsoft 365 Defender prerequisites](/microsoft-365/security/mtp/prerequisites).
For more information about incident integration and advanced hunting event colle
- Your user must have read and write permissions on your Microsoft Sentinel workspace. - To make any changes to the connector settings, your user must be a member of the same Azure Active Directory tenant with which your Microsoft Sentinel workspace is associated.
+- Install the solution for **Microsoft 365 Defender** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
### Prerequisites for Active Directory sync via MDI
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
There are three steps to configuring Syslog collection:
- **Configure the Log Analytics agent itself**. This is done from within Microsoft Sentinel, and the configuration is sent to all installed agents.
+## Prerequisites
+
+Before you begin, install the solution for **Syslog** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+ ## Configure your Linux machine or appliance 1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
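A minimal sketch of the device-side piece of the first step: a hypothetical rsyslog forwarding rule, where the forwarder address `10.0.0.4` and UDP port 514 are placeholder assumptions (a single `@` means UDP, `@@` means TCP):

```bash
# Written to a local file here; on the sending device it belongs in
# /etc/rsyslog.d/ followed by a restart of the rsyslog service.
cat <<'EOF' > 50-forward.conf
*.* @10.0.0.4:514
EOF
cat 50-forward.conf
```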
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
Number of disks | 2, including the OS disk - 80 GB and a data disk - 620 GB
**Component** | **Requirement** |
-Operating system | Windows Server 2019, Windows Server 2016
+Operating system | Windows Server 2019
Operating system locale | English (en-*) Windows Server roles | Don't enable these roles: <br> - Active Directory Domain Services <br>- Internet Information Services <br> - Hyper-V Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Disaster recovery of physical servers | Replication of on-premises Windows/Linux
**Server** | **Requirements** | **Details** | |
-vCenter Server | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7, 6.5, 6.0, or 5.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
-vSphere hosts | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7, 6.5, 6.0, or 5.5 | We recommend that vSphere hosts and vCenter servers are located in the same network as the process server. By default the process server runs on the configuration server. [Learn more](vmware-physical-azure-config-process-server-overview.md).
+vCenter Server | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7 or 6.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
+vSphere hosts | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7 or 6.5 | We recommend that vSphere hosts and vCenter servers are located in the same network as the process server. By default the process server runs on the configuration server. [Learn more](vmware-physical-azure-config-process-server-overview.md).
## Azure Site Recovery replication appliance
spring-apps How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-outbound-public-ip.md
An Azure Spring Apps service has one or more outbound public IP addresses. The n
The outbound public IP addresses are usually constant and remain the same, but there are exceptions.
+> [!IMPORTANT]
+> If the Azure Spring Apps instance is deployed in your own virtual network, the static outbound IP might change after you stop and start the service instance.
+ ## When outbound IPs change Each Azure Spring Apps instance has a set number of outbound public IP addresses at any given time. Any outbound connection from the applications, such as to a back-end database, uses one of the outbound public IP addresses as the origin IP address. The IP address is selected randomly at runtime, so your back-end service must open its firewall to all the outbound IP addresses.
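Since the back-end firewall must allow every outbound address, it helps to list the current set first. A hedged sketch with Azure CLI, where the resource names and the exact JMESPath are assumptions, so the query itself is left commented:

```bash
# Placeholders: substitute your own resource group and service instance.
resource_group="<resource-group>"
service_name="<service-instance>"
echo "Checking outbound IPs for ${service_name}"
# az spring show --resource-group "$resource_group" --name "$service_name" \
#   --query "properties.networkProfile.outboundIps.publicIps" --output tsv
```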
static-web-apps Deploy Nextjs Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-hybrid.md
Begin by initializing a new Next.js application.
1. Initialize the application using `npm init`. If you are prompted to install `create-next-app`, say yes. ```bash
- npm init next-app@latest --typescript
+ npm init next-app@next-12-3-2 --typescript
``` 1. When prompted for an app name, enter **nextjs-app**.
Begin by initializing a new Next.js application.
1. Stop the development server by pressing **CMD/CTRL + C**.
-## Deploy your static website
+## Configure your Next.js app for deployment to Static Web Apps
+
+To configure your Next.js app for deployment to Static Web Apps, enable the standalone feature for your Next.js project. This step reduces the size of your Next.js project to ensure it's below the size limits for Static Web Apps. Refer to the [standalone](#enable-standalone-feature) section for more information.
+
+```js
+module.exports = {
+ output: "standalone",
+}
+```
+
+## Deploy your Next.js app
The following steps show how to link your app to Azure Static Web Apps. Once in Azure, you can deploy the application to a production environment.
static-web-apps Publish Gatsby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-gatsby.md
The following steps show you how to create a new static site app and deploy it t
| _Repository_ | Select **gatsby-static-web-app**. | | _Branch_ | Select **main**. |
+ > [!NOTE]
+ > If you don't see any repositories, you may need to authorize Azure Static Web Apps on GitHub.
+ > Browse to your GitHub account and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
+ 1. In the _Build Details_ section, select **Gatsby** from the _Build Presets_ drop-down and keep the default values. ### Review and create
static-web-apps Publish Hugo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-hugo.md
The following steps show you how to create a new static site app and deploy it t
| _Repository_ | Select **hugo-static-app**. | | _Branch_ | Select **main**. |
+ > [!NOTE]
+ > If you don't see any repositories, you may need to authorize Azure Static Web Apps on GitHub.
+ > Browse to your GitHub account and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
+ 1. In the _Build Details_ section, select **Hugo** from the _Build Presets_ drop-down and keep the default values. ### Review and create
static-web-apps Publish Jekyll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-jekyll.md
The following steps show you how to create a new static site app and deploy it t
| _Repository_ | Select **jekyll-static-app**. | | _Branch_ | Select **main**. |
+ > [!NOTE]
+ > If you don't see any repositories, you may need to authorize Azure Static Web Apps on GitHub.
+ > Browse to your GitHub account and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
+ 1. In the _Build Details_ section, select **Custom** from the _Build Presets_ drop-down and keep the default values. 1. In the _App location_ box, enter **./**.
static-web-apps Publish Vuepress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-vuepress.md
The following steps show you how to create a new static site app and deploy it t
| _Repository_ | Select **vuepress-static-app**. | | _Branch_ | Select **main**. |
+ > [!NOTE]
+ > If you don't see any repositories, you may need to authorize Azure Static Web Apps on GitHub.
+ > Browse to your GitHub account and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
+ 1. In the _Build Details_ section, select **VuePress** from the _Build Presets_ drop-down and keep the default values. ### Review and create
storage Access Tiers Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-best-practices.md
description: Learn about best practice guidelines that help you use access tiers
Previously updated : 05/02/2023 Last updated : 05/30/2023
This article provides best practice guidelines that help you use access tiers to
## Choose the most cost-efficient access tiers
-You can reduce costs by placing blob data into the most cost-efficient access tiers. Choose from three tiers that are designed to optimize your costs around data use. For example, the hot tier has a higher storage cost but lower read cost. Therefore, if you plan to access data frequently, the hot tier might be the most cost-efficient choice. If you plan to read data less frequently, the cool or archive tier might make the most sense because it raises the cost of reading data while reducing the cost of storing data.
+You can reduce costs by placing blob data into the most cost-efficient access tiers. Choose from four tiers that are designed to optimize your costs around data use. For example, the hot tier has a higher storage cost but a lower read cost. Therefore, if you plan to access data frequently, the hot tier might be the most cost-efficient choice. If you plan to read data less frequently, the cool, cold, or archive tier might make the most sense, because those tiers raise the cost of reading data while reducing the cost of storing it.
To identify the most optimal access tier, try to estimate what percentage of the data will be read on a monthly basis. The following chart shows the impact on monthly spending given various read percentages. > [!div class="mx-imgBorder"] > ![Chart that shows a bar for each tier which represents the monthly cost based on percentage read pattern](./media/access-tiers-best-practices/read-pattern-access-tiers.png)
-To model and analyze the cost of using cool versus archive storage, see [Archive versus cool](archive-cost-estimation.md#archive-versus-cool). You can apply similar modeling techniques to compare the cost of hot to cool or archive.
+To model and analyze the cost of using cool or cold versus archive storage, see [Archive versus cold and cool](archive-cost-estimation.md#archive-versus-cold-and-cool). You can apply similar modeling techniques to compare the cost of hot to cool, cold, or archive.
+
+> [!IMPORTANT]
+> The cold tier is currently in PREVIEW. To learn more, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
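To get a concrete feel for how the read percentage drives tier choice, here's a rough Python sketch that estimates the break-even point between the archive and cool tiers. It assumes the fictitious sample prices and the 10,240-GB / 200,000-transaction scenario quoted later in this digest (under archive-cost-estimation.md); all function names and values are illustrative, not an official calculator.

```python
# Rough break-even model: storage cost plus a read-fraction-scaled
# retrieval/read cost, using fictitious sample prices (illustrative only).
def monthly_cost(read_fraction, stored_gb, tx, storage_per_gb,
                 retrieval_per_gb, read_op_price):
    return (stored_gb * storage_per_gb
            + read_fraction * (stored_gb * retrieval_per_gb + tx * read_op_price))

STORED_GB, TX = 10_240, 200_000
archive = lambda f: monthly_cost(f, STORED_GB, TX, 0.00099, 0.02, 0.0005)
cool = lambda f: monthly_cost(f, STORED_GB, TX, 0.0152, 0.01, 0.000001)

# Scan read fractions to find where archive becomes more expensive than cool.
crossover = next(f / 100 for f in range(101) if archive(f / 100) > cool(f / 100))
print(f"archive costs more once ~{crossover:.0%} of the data is read monthly")
```

Under these sample prices the crossover sits around a 72% monthly read rate; with real prices the exact figure will differ, but the shape of the trade-off is the same.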
## Migrate data directly to the most cost-efficient access tiers
storage Archive Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md
description: Learn how to calculate the cost of storing and maintaining data in
Previously updated : 11/09/2022 Last updated : 05/30/2023
# Estimate the cost of archiving data
-The archive tier is an offline tier for storing data that is rarely accessed. The archive access tier has the lowest storage cost. However, this tier has higher data retrieval costs with a higher latency as compared to the hot and cool tiers.
+The archive tier is an offline tier for storing data that is rarely accessed. The archive access tier has the lowest storage cost. However, this tier has higher data retrieval costs and higher latency compared to the hot, cool, and cold tiers.
This article explains how to calculate the cost of using archive storage and then presents a few example scenarios.
If you upload a blob by using the [Put Block](/rest/api/storageservices/put-bloc
###### Set Blob Tier
-If you use the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation to move a blob from the cool or hot tier to the archive tier, you're charged the price of an **archive** write operation.
+If you use the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation to move a blob from the cool, cold, or hot tier to the archive tier, you're charged the price of an **archive** write operation.
#### The cost to store
For example (assuming the sample pricing), if you plan to store 10 TB archived b
#### The cost to rehydrate
-Blobs in the archive tier are offline and can't be read or modified. To read or modify data in an archived blob, you must first rehydrate the blob to an online tier (either the hot or cool tier).
+Blobs in the archive tier are offline and can't be read or modified. To read or modify data in an archived blob, you must first rehydrate the blob to an online tier (the hot, cool, or cold tier).
You can calculate the cost to rehydrate data by adding the <u>cost to retrieve data</u> to the <u>cost of reading the data</u>.
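As a minimal illustration, the formula above can be sketched in Python using this article's fictitious sample archive prices; the function name and default values are illustrative, not an official calculator.

```python
# Rehydration cost = cost to retrieve + cost to read.
# Defaults use the fictitious sample archive prices from this article:
# $0.02 per GB retrieved and $0.0005 per read operation.
def rehydrate_cost(gb, read_ops, retrieval_per_gb=0.02, read_op_price=0.0005):
    return gb * retrieval_per_gb + read_ops * read_op_price

# Retrieving 1,024 GB with 20,000 read transactions:
print(round(rehydrate_cost(1_024, 20_000), 2))  # 30.48
```

This matches the $30.48 rehydration figure shown for the archive column in the comparison table later in this article.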
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
> [!TIP] > To view these costs over 12 months, open the **Continuous Tiering** tab of this [workbook](https://azure.github.io/Storage/docs/backup-and-archive/azure-archive-storage-cost-estimation/azure-archive-storage-cost-estimation.xlsx). You can modify the values in that worksheet to estimate your costs.
-## Archive versus cool
+## Archive versus cold and cool
-Archive storage is the lowest cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers a near real-time read latency with a lower price than that the hot tier. Understanding your access requirements will help you to choose between the cool and archive tiers.
+Archive storage is the lowest-cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers near real-time read latency at a lower price than the hot tier. Understanding your access requirements will help you choose between the cool, cold, and archive tiers.
+
+The following table compares the cost of archive storage with the cost of cold and cool storage by using the [Sample prices](#sample-prices) that appear in this article. This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size. It also assumes that each month about 10% of the stored capacity (1,024 GB) is read, using 10% of the total transactions (20,000).
+
+> [!IMPORTANT]
+> The cold tier is currently in PREVIEW. To learn more, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
-The following table compares the cost of archive storage with the cost of cold storage by using the [Sample prices](#sample-prices) that appear in this article. This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size to archive. It also assumes 1 read each month about 10% of stored capacity (1024 GB), and 10% of total transactions (20,000).
<br><br> <table> <tr> <th>Cost factor</th> <th>Archive</th>
+ <th>Cold</th>
<th>Cool</th> </tr> <tr> <td>Write transactions</td> <td>200,000</td> <td>200,000</td>
+ <td>200,000</td>
</tr> <tr> <td>Price of a single write operation</td> <td>$0.00001</td>
+ <td>$0.000018</td>
<td>$0.00001</td> </tr> <tr> <td><strong>Cost to write (transactions * price of a write operation)</strong></td> <td><strong>$2.00</strong></td>
+ <td><strong>$3.60</strong></td>
<td><strong>$2.00</strong></td> </tr> <tr> <td>Total file size (GB)</td> <td>10,240</td> <td>10,240</td>
+ <td>10,240</td>
</tr> <tr> <td>Data prices (pay-as-you-go)</td> <td>$0.00099</td>
+ <td>$0.0036</td>
<td>$0.0152</td> </tr> <tr> <td><strong>Cost to store (file size * data price)</strong></td> <td><strong>$10.14</strong></td>
+ <td><strong>$36.86</strong></td>
<td><strong>$155.65</strong></td> </tr> <tr> <td>Data retrieval size</td> <td>1,024</td> <td>1,024</td>
+ <td>1,024</td>
</tr> <tr> <td>Price of data retrieval per GB</td> <td>$0.02</td>
+ <td>$0.03</td>
<td>$0.01</td> </tr> <tr> <td>Number of read transactions</td> <td>20,000</td> <td>20,000</td>
+ <td>20,000</td>
</tr> <tr> <td>Price of a single read operation</td> <td>$0.0005</td>
+ <td>$0.00001</td>
<td>$0.000001</td> </tr> <tr> <td><strong>Cost to rehydrate (cost to retrieve + cost to read)</strong></td> <td><strong>$30.48</strong></td>
+ <td><strong>$30.92</strong></td>
<td><strong>$10.26</strong></td> </tr> <tr> <td><strong>Monthly cost</strong></td> <td><strong>$42.62</strong></td>
+ <td><strong>$71.38</strong></td>
<td><strong>$167.91</strong></td> </tr> </table>
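The comparison table above can be reproduced from the sample prices with a short Python sketch (a rough model built on the fictitious prices in this article, not an official calculator; the dictionary layout and function name are illustrative):

```python
# Reproduce the monthly-cost comparison from the sample prices in this article
# (fictitious prices -- a rough model, not an official calculator).
SAMPLE_PRICES = {
    # tier: (write $/op, storage $/GB, retrieval $/GB, read $/op)
    "archive": (0.00001, 0.00099, 0.02, 0.0005),
    "cold": (0.000018, 0.0036, 0.03, 0.00001),
    "cool": (0.00001, 0.0152, 0.01, 0.000001),
}

def monthly_cost(tier, writes=200_000, stored_gb=10_240,
                 read_gb=1_024, reads=20_000):
    """Write cost + storage cost + rehydrate cost (retrieve + read)."""
    write_op, storage, retrieval, read_op = SAMPLE_PRICES[tier]
    return round(writes * write_op + stored_gb * storage
                 + read_gb * retrieval + reads * read_op, 2)

for tier in SAMPLE_PRICES:
    print(f"{tier}: ${monthly_cost(tier):.2f}")
# archive: $42.62, cold: $71.38, cool: $167.91
```

Swapping in your own prices from the official pricing pages gives the same comparison for your region and redundancy option.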
This article uses the following fictitious prices.
> [!IMPORTANT] > These prices are meant only as examples, and should not be used to calculate your costs.
-| Price factor | Archive | Cool |
-|-|-|--|
-| Price of write transactions (per 10,000) | $0.10 | $0.10 |
-| Price of a single write operation (cost / 10,000) | $0.00001 | $0.00001 |
-| Data prices (pay-as-you-go) | $0.00099 | $0.0152 |
-| Price of read transactions (per 10,000) | $5.00 | $0.01 |
-| Price of a single read operation (cost / 10,000) | $0.0005 | $0.000001 |
-| Price of high priority read transactions (per 10,000) | $50.00 | N/A |
-| Price of data retrieval (per GB) | $0.02 | $0.01 |
-| Price of high priority data retrieval (per GB) | $0.10 | N/A |
+| Price factor | Archive | Cold | Cool |
+|-|-|--|--|
+| Price of write transactions (per 10,000) | $0.10 | $0.18 | $0.10 |
+| Price of a single write operation (cost / 10,000) | $0.00001 | $0.000018 | $0.00001 |
+| Data prices (pay-as-you-go) | $0.00099 | $0.0036 | $0.0152 |
+| Price of read transactions (per 10,000) | $5.00 | $0.10 | $0.01 |
+| Price of a single read operation (cost / 10,000) | $0.0005 | $0.00001 | $0.000001 |
+| Price of high priority read transactions (per 10,000) | $50.00 | N/A | N/A |
+| Price of data retrieval (per GB) | $0.02 | $0.03 | $0.01 |
+| Price of high priority data retrieval (per GB) | $0.10 | N/A | N/A |
For official prices, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
If the replication status for a blob in the source account indicates failure, th
## Billing
-There's not cost to configure object replication. This includes the task of enabling change feed, enabling versioning, as well as adding replication policies. However, object replication incurs costs on read and write transactions against the source and destination accounts, as well as egress charges for the replication of data from the source account to the destination account and read charges to process change feed.
+There is no cost to configure object replication. This includes enabling change feed, enabling versioning, and adding replication policies. However, object replication incurs costs on read and write transactions against the source and destination accounts, as well as egress charges for replicating data from the source account to the destination account and read charges for processing the change feed.
Here's a breakdown of the costs. To find the price of each cost component, see [Azure Blob Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
Here's a breakdown of the costs. To find the price of each cost component, see [
||Cost of network egress<sup>2</sup>| + <sup>1</sup> See [Blob versioning pricing and Billing](versioning-overview.md#pricing-and-billing). <sup>2</sup> This includes only blob versions created since the last replication completed.
Here's a breakdown of the costs. To find the price of each cost component, see [
<sup>3</sup> See [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/). + ## Next steps - [Configure object replication](object-replication-configure.md) - [Prevent object replication across Azure Active Directory tenants](object-replication-prevent-cross-tenant-policies.md) - [Blob versioning](versioning-overview.md) - [Change feed support in Azure Blob Storage](storage-blob-change-feed.md)++
storage Storage Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-java.md
Title: Azure Storage samples using Java description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the Java storage client libraries.--+ Last updated 10/01/2020
storage Storage Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples.md
Last updated 10/01/2020
-+ # Azure Storage samples
storage Storage Stored Access Policy Define Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-stored-access-policy-define-dotnet.md
ms.devlang: csharp-+ # Create a stored access policy with .NET
async static Task CreateStoredAccessPolicyAsync(string containerName)
## Resources
-For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](../blobs/blob-v11-samples-dotnet.md#create-a-stored-access-policy).
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](../blobs/blob-v11-samples-dotnet.md#create-a-stored-access-policy).
storage Storage Use Data Movement Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-data-movement-library.md
Last updated 06/16/2020
ms.devlang: csharp-+ # Transfer data with the Data Movement library
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
description: Learn how to install and configure Azure Container Storage Preview
Previously updated : 05/15/2023 Last updated : 05/30/2023
az aks nodepool update --resource-group <resource group> --cluster-name <cluster
## Assign Contributor role to AKS managed identity
-Azure Container Service is a separate service from AKS, so you'll need to grant permissions to allow Azure Container Storage to provision storage for your cluster. Specifically, you must assign the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) Azure RBAC built-in role to the AKS managed identity. You'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role for your Azure subscription in order to do this. If you don't have sufficient permissions, ask your admin to perform these steps.
+Azure Container Storage is a separate service from AKS, so you'll need to grant permissions to allow Azure Container Storage to provision storage for your cluster. Specifically, you must assign the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) Azure RBAC built-in role to the AKS managed identity. You can do this using the Azure portal or Azure CLI. You'll need the [Owner](../../role-based-access-control/built-in-roles.md#owner) role for your Azure subscription to do this. If you don't have sufficient permissions, ask your admin to perform these steps.
+
+# [Azure portal](#tab/portal)
1. Sign into the [Azure portal](https://portal.azure.com?azure-portal=true), and search for and select **Kubernetes services**. 1. Locate and select your AKS cluster. Select **Settings** > **Properties** from the left navigation.
Azure Container Service is a separate service from AKS, so you'll need to grant
1. Under **Select**, search for and select the managed identity with your cluster name and `-agentpool` appended. 1. Select **Review + assign**.
-Run the following command to assign Contributor role to AKS managed identity. Remember to replace `<resource-group>` and `<cluster-name>` with your own values.
+# [Azure CLI](#tab/cli)
+
+Run the following commands to assign the Contributor role to the AKS managed identity. Remember to replace `<resource-group>` and `<cluster-name>` with your own values.
```azurecli-interactive export AKS_MI_OBJECT_ID=$(az aks show --name <cluster-name> --resource-group <resource-group> --query "identityProfile.kubeletidentity.objectId" -o tsv) export AKS_NODE_RG=$(az aks show --name <cluster-name> --resource-group <resource-group> --query "nodeResourceGroup" -o tsv)- az role assignment create --assignee $AKS_MI_OBJECT_ID --role "Contributor" --resource-group "$AKS_NODE_RG" ```
-
++ ## Install Azure Container Storage The initial install uses Azure Arc CLI commands to download a new extension. Replace `<cluster-name>` and `<resource-group>` with your own values. The `<name>` value can be whatever you want; it's just a label for the extension you're installing.
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Last updated 05/04/2023 -+ # Configure Elastic SAN networking Preview
storage File Sync Troubleshoot Sync Group Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-group-management.md
Last updated 10/25/2022 -+ # Troubleshoot Azure File Sync sync group management
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
Last updated 03/31/2023 -+ # SMB file shares in Azure Files
storage Files Troubleshoot Linux Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-linux-nfs.md
Last updated 02/21/2023 -+ # Troubleshoot NFS Azure file shares
storage Storage Dotnet How To Use Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-dotnet-how-to-use-files.md
Last updated 10/02/2020
ms.devlang: csharp-+ # Develop for Azure Files with .NET
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
Title: Migrate to Azure file shares
-description: Learn about migrations to Azure file shares and find your migration guide.
+description: Learn how to migrate to Azure file shares and find your migration guide.
Previously updated : 3/18/2020 Last updated : 05/30/2023 # Migrate to Azure file shares
-This article covers the basic aspects of a migration to Azure file shares.
-
-This article contains migration basics and a table of migration guides. These guides help you move your files into Azure file shares. The guides are organized based on where your data is and what deployment model (cloud-only or hybrid) you're moving to.
+This article covers the basic aspects of a migration to Azure file shares and contains a table of migration guides. These guides help you move your files into Azure file shares. The guides are organized based on where your data is and what deployment model (cloud-only or hybrid) you're moving to.
## Migration basics
The following table classifies Microsoft tools and their current suitability for
| :-: | :-- | :- | :- | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| RoboCopy | Supported. Azure file shares can be mounted as network drives. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Azure File Sync | Natively integrated into Azure file shares. | Full fidelity.* |
+|![Yes, recommended](medi) | Supported. | Full fidelity.* |
|![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Storage Migration Service | Indirectly supported. Azure file shares can be mounted as network drives on SMS target servers. | Full fidelity.* | |![Yes, recommended](medi) to load files onto the device)| Supported. </br>(Data Box Disks does not support large file shares) | Data Box and Data Box Heavy fully support metadata. </br>Data Box Disks does not preserve file metadata. | |![Not fully recommended](medi) |
storage Storage Files Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-endpoints.md
We recommend reading [Azure Files networking considerations](storage-files-netwo
You can configure your endpoints to restrict network access to your storage account. There are two approaches to restricting access to a storage account to a virtual network: -- [Create one or more private endpoints for the storage account](#create-a-private-endpoint) and restrict all access to the public endpoint. This ensures that only traffic originating from within the desired virtual networks can access the Azure file shares within the storage account.
+- [Create one or more private endpoints for the storage account](#create-a-private-endpoint) and restrict all access to the public endpoint. This ensures that only traffic originating from within the desired virtual networks can access the Azure file shares within the storage account.
+*See [Private Link cost](https://azure.microsoft.com/pricing/details/private-link/).
- [Restrict the public endpoint to one or more virtual networks](#restrict-public-endpoint-access). This works by using a capability of the virtual network called *service endpoints*. When you restrict the traffic to a storage account via a service endpoint, you are still accessing the storage account via the public IP address, but access is only possible from the locations you specify in your configuration. ### Create a private endpoint
When you restrict the storage account to specific virtual networks, you are allo
- [Azure Files networking considerations](storage-files-networking-overview.md) - [Configuring DNS forwarding for Azure Files](storage-files-networking-dns.md)-- [Configuring S2S VPN for Azure Files](storage-files-configure-s2s-vpn.md)
+- [Configuring S2S VPN for Azure Files](storage-files-configure-s2s-vpn.md)
storage Storage Java How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-java-how-to-use-file-storage.md
Last updated 05/26/2021-+
If you would like to learn more about other Azure storage APIs, follow these lin
- [Transfer data with the AzCopy Command-Line Utility](../common/storage-use-azcopy-v10.md) - [Troubleshoot Azure Files](files-troubleshoot.md)
-For related code samples using deprecated Java version 8 SDKs, see [Code samples using Java version 8](files-samples-java-v8.md).
+For related code samples using deprecated Java version 8 SDKs, see [Code samples using Java version 8](files-samples-java-v8.md).
storage Queues V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v11-samples-dotnet.md
+ Last updated 04/26/2023
storage Queues V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v2-samples-python.md
+ Last updated 04/26/2023
To delete a queue and all the messages contained in it, call the [`delete_queue`
```python print("Deleting queue: " + queue_name) queue_service.delete_queue(queue_name)
-```
+```
storage Queues V8 Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v8-samples-java.md
+ Last updated 04/26/2023
catch (Exception e)
// Output the stack trace. e.printStackTrace(); }
-```
+```
storage Storage Dotnet How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-dotnet-how-to-use-queues.md
ms.devlang: csharp-+ # Get started with Azure Queue Storage using .NET
storage Storage Java How To Use Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-java-how-to-use-queue-storage.md
ms.devlang: java-+ # How to use Queue Storage from Java
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-performance-checklist.md
ms.devlang: csharp-+ <!-- docutune:casing "Timeout and Server Busy errors" -->
storage Storage Quickstart Queues Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-dotnet.md
ms.devlang: csharp-+ # Quickstart: Azure Queue Storage client library for .NET
storage Storage Quickstart Queues Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-java.md
ms.devlang: java-+ # Quickstart: Azure Queue Storage client library for Java
storage Storage Tutorial Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-tutorial-queues.md
ms.devlang: csharp-+ # Customer intent: As a developer, I want to use queues in my app so that my service will scale automatically during high demand times without losing data.
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/storage-performance-checklist.md
Last updated 10/10/2019 ms.devlang: csharp-+ # Performance and scalability checklist for Table storage
If you are performing batch inserts and then retrieving ranges of entities toget
- [Scalability and performance targets for Table storage](scalability-targets.md) - [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/tables/toc.json)-- [Status and error codes](/rest/api/storageservices/Status-and-Error-Codes2)
+- [Status and error codes](/rest/api/storageservices/Status-and-Error-Codes2)
storsimple Storsimple Data Manager Dotnet Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-dotnet-jobs.md
Title: Use .NET SDK for Microsoft Azure StorSimple Data Manager jobs
description: Learn how to use the .NET SDK within the StorSimple Data Manager service to transform StorSimple device data. + Last updated 08/22/2022
Perform the following steps to use .NET to launch a data transformation job.
## Next steps
-[Use StorSimple Data Manager UI to transform your data](storsimple-data-manager-ui.md).
+[Use StorSimple Data Manager UI to transform your data](storsimple-data-manager-ui.md).
stream-analytics Connect Job To Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/connect-job-to-vnet.md
Last updated 01/04/2021-+ # Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNet)
If your jobs need to connect to other input or output types, you could write fro
* [Create and remove Private Endpoints in Stream Analytics clusters](./private-endpoints.md) * [Connect to Event Hubs in a VNet using Managed Identity authentication](./event-hubs-managed-identity.md)
-* [Connect to Blob storage and ADLS Gen2 in a VNet using Managed Identity authentication](./blob-output-managed-identity.md)
+* [Connect to Blob storage and ADLS Gen2 in a VNet using Managed Identity authentication](./blob-output-managed-identity.md)
stream-analytics Custom Deserializer Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/custom-deserializer-examples.md
Last updated 6/16/2021-+ # Read input in any format using .NET custom deserializers (Preview)
stream-analytics Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/custom-deserializer.md
description: This doc demonstrates how to create a custom .NET deserializer for
+ Last updated 01/12/2023
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/machine-learning-udf.md
Last updated 03/31/2022-+ # Integrate Azure Stream Analytics with Azure Machine Learning
The following JSON is an example request from the previous query:
## Optimize the performance for Azure Machine Learning UDFs
-When you deploy your model to Azure Kubernetes Service, you can [profile your model to determine resource utilization](../machine-learning/how-to-deploy-profile-model.md). You can also [enable App Insights for your deployments](../machine-learning/how-to-enable-app-insights.md) to understand request rates, response times, and failure rates.
+When you deploy your model to Azure Kubernetes Service, you can [profile your model to determine resource utilization](../machine-learning/v1/how-to-deploy-profile-model.md). You can also [enable App Insights for your deployments](../machine-learning/v1/how-to-enable-app-insights.md) to understand request rates, response times, and failure rates.
If you have a scenario with high event throughput, you may need to change the following parameters in Stream Analytics to achieve optimal performance with low end-to-end latencies:
stream-analytics Stream Analytics Clean Up Your Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-clean-up-your-job.md
Last updated 06/21/2019-+ # Stop or delete your Azure Stream Analytics job
stream-analytics Stream Analytics Dotnet Management Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-dotnet-management-sdk.md
description: Get started with Stream Analytics Management .NET SDK. Learn how to
Last updated 3/12/2021-+ # Management .NET SDK: Set up and run analytics jobs using the Azure Stream Analytics API for .NET Learn how to set up and run analytics jobs using the Stream Analytics API for .NET using the Management .NET SDK. Set up a project, create input and output sources, transformations, and start and stop jobs. For your analytics jobs, you can stream data from Blob storage or from an event hub.
You've learned the basics of using a .NET SDK to create and run analytics jobs.
[stream.analytics.developer.guide]: stream-analytics-developer-guide.md [stream.analytics.scale.jobs]: stream-analytics-scale-jobs.md [stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference
-[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
+[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
stream-analytics Stream Analytics Edge Csharp Udf Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md
Last updated 6/09/2021-+ # Develop .NET Standard user-defined functions for Azure Stream Analytics jobs (Preview)
stream-analytics Stream Analytics Parsing Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-parsing-json.md
Last updated 05/25/2023-+ # Parse JSON and Avro data in Azure Stream Analytics
stream-analytics Visual Studio Code Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/visual-studio-code-custom-deserializer.md
description: This tutorial demonstrates how to create a custom .NET deserializer
+ Last updated 01/21/2023
synapse-analytics Data Explorer Ingest Event Hub Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-python.md
+ # Create an Event Hub data connection for Azure Synapse Data Explorer by using Python (Preview)
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has Workaround| |Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) is not getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has Workaround| |Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has Workaround|
+|Azure Synapse Apache Spark pool|[Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines](#failed-to-write-to-sql-dedicated-pool-from-synapse-spark-using-azure-synapse-dedicated-sql-pool-connector-for-apache-spark-when-using-notebooks-in-pipelines)|Has Workaround|
## Azure Synapse Analytics serverless SQL pool active known issues summary
Deleting a Synapse workspace fails with the error message:
**Workaround**: The problem can be mitigated by retrying the delete operation. The engineering team is aware of this behavior and working on a fix.
+## Azure Synapse Analytics Apache Spark pool active known issues summary
+
+The following are known issues with Synapse Spark pools.
+
+### Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines
+
+When you use the Azure Synapse Dedicated SQL Pool Connector for Apache Spark to write to an Azure Synapse dedicated SQL pool from notebooks in pipelines, you may see an error message:
+
+`com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: COPY statement input file schema discovery failed: Cannot bulk load. The file does not exist or you don't have file access rights.`
+
+**Workaround**: The engineering team is aware of this behavior and is working on a fix. Until then, use either of the following steps to work around the problem:
+- Set spark config through notebook:
+<br/>`spark.conf.set("spark.synapse.runAsMsi", "true")`
+- Or set spark config at [pool level](spark/apache-spark-azure-create-spark-configuration.md#create-an-apache-spark-configuration).
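Either workaround amounts to running the connector's `COPY` statement as the workspace managed identity. A minimal notebook-cell sketch, assuming the Synapse Spark runtime (which provides the `spark` session and the preinstalled dedicated SQL pool connector); the storage path and three-part table name are placeholders, not from the source:

```python
# Synapse notebook cell — requires the Synapse Spark runtime.
# Run the COPY statement as the workspace managed identity (MSI):
spark.conf.set("spark.synapse.runAsMsi", "true")

# Then write with the dedicated SQL pool connector as usual.
# Placeholder path and <database>.<schema>.<table> name:
df = spark.read.parquet("abfss://container@account.dfs.core.windows.net/data/")
df.write.synapsesql("MyDatabase.dbo.MyTable")
```

Setting the option at the pool level instead applies it to every session on that pool, so no notebook changes are needed.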
++ ## Recently Closed Known issues |Synapse Component|Issue|Status|Date Resolved
synapse-analytics Quickstart Read From Gen2 To Pandas Dataframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-read-from-gen2-to-pandas-dataframe.md
Last updated 07/11/2022 -+ # Quickstart: Read data from ADLS Gen2 to Pandas dataframe in Azure Synapse Analytics
Converting to Pandas.
- [Create a serverless Apache Spark pool](get-started-analyze-spark.md#create-a-serverless-apache-spark-pool) - [How to use file mount/unmount API in Synapse](spark/synapse-file-mount-api.md) - [Azure Architecture Center: Explore data in Azure Blob storage with the pandas Python package](/azure/architecture/data-science-process/explore-data-blob)-- [Tutorial: Use Pandas to read/write Azure Data Lake Storage Gen2 data in serverless Apache Spark pool in Synapse Analytics](spark/tutorial-use-pandas-spark-pool.md)
+- [Tutorial: Use Pandas to read/write Azure Data Lake Storage Gen2 data in serverless Apache Spark pool in Synapse Analytics](spark/tutorial-use-pandas-spark-pool.md)
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Last updated 04/18/2022 -+ # Azure Synapse Runtime for Apache Spark 2.4 (EOLA)
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Last updated 11/28/2022 -+ # Azure Synapse Runtime for Apache Spark 3.1 (EOLA)
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Last updated 11/28/2022 -+
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
Last updated 11/17/2022 -+
synapse-analytics Apache Spark Azure Machine Learning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-machine-learning-tutorial.md
+ Last updated 06/30/2020
-
# Tutorial: Train a model in Python with automated machine learning
synapse-analytics Apache Spark Data Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-data-visualization.md
+ Last updated 09/13/2020 # Visualize data
synapse-analytics Apache Spark Delta Lake Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md
Last updated 02/15/2022-+ zone_pivot_groups: programming-languages-spark-all-minus-sql-r
synapse-analytics Apache Spark Job Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-job-definitions.md
+ Last updated 10/16/2020
In this section, you add an Apache Spark job definition into a pipeline.
## Next steps Next you can use Azure Synapse Studio to create Power BI datasets and manage Power BI data. Advance to the [Linking a Power BI workspace to a Synapse workspace](../quickstart-power-bi.md) article to learn more. -
synapse-analytics Apache Spark Manage Session Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-manage-session-packages.md
Last updated 02/20/2023 +
sdf_len(sc, 5) %>%
## Next steps - View the default libraries: [Apache Spark version support](apache-spark-version-support.md)-- Manage the packages outside Synapse Studio portal: [Manage packages through Az commands and REST APIs](apache-spark-manage-packages-outside-ui.md)
+- Manage the packages outside Synapse Studio portal: [Manage packages through Az commands and REST APIs](apache-spark-manage-packages-outside-ui.md)
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
+ Last updated 02/10/2023 zone_pivot_groups: programming-languages-spark-all-minus-sql-r
productIndex2:abfss://datasets@hyperspacebenchmark.dfs.core.windows.net/hyperspa
## Next steps * [Project Hyperspace](https://microsoft.github.io/hyperspace/)
-* [Azure Synapse Analytics](../index.yml)
+* [Azure Synapse Analytics](../index.yml)
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
+ Last updated 09/26/2022
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
+ Last updated 11/17/2022
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Last updated 09/10/2020
zone_pivot_groups: programming-languages-spark-all-minus-sql-+ # Introduction to Microsoft Spark Utilities
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
+ Last updated 05/01/2020
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Last updated 11/22/2019 -+ # Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Query Parquet Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-parquet-files.md
Your first step is to **create a database** with a datasource that references [N
## Dataset
-[NYC Yellow Taxi](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/) dataset is used in this sample. You can query Parquet files the same way you [read CSV files](query-parquet-files.md). The only difference is that the `FILEFORMAT` parameter should be set to `PARQUET`. Examples in this article show the specifics of reading Parquet files.
+[NYC Yellow Taxi](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/) dataset is used in this sample. You can query Parquet files the same way you [read CSV files](query-single-csv-file.md). The only difference is that the `FILEFORMAT` parameter should be set to `PARQUET`. Examples in this article show the specifics of reading Parquet files.
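As an illustration, a minimal serverless SQL pool query over Parquet files might look like the following sketch (the storage URL is a placeholder, and the `OPENROWSET` form shown here selects the Parquet reader explicitly):

```sql
-- Sketch: query Parquet files with serverless SQL pool OPENROWSET.
-- The storage path below is a placeholder, not from the source.
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/mycontainer/nyc-taxi/*.parquet',
        FORMAT = 'PARQUET'
     ) AS rows;
```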
## Query set of parquet files
synapse-analytics Resource Consumption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resource-consumption-models.md
Last updated 04/15/2020 -+ # Synapse SQL resource consumption
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
Last updated 05/19/2021 -+
traffic-manager Traffic Manager Configure Performance Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-performance-routing-method.md
Previously updated : 03/20/2017 Last updated : 05/30/2023 # Configure the performance traffic routing method
-The Performance traffic routing method allows you to direct traffic to the endpoint with the lowest latency from the client's network. Typically, the datacenter with the lowest latency is the closest in geographic distance. This traffic routing method cannot account for real-time changes in network configuration or load.
+The Performance traffic routing method allows you to direct traffic to the endpoint with the lowest latency from the client's network. Typically, the region with the lowest latency is the closest in geographic distance. This traffic routing method can't account for real-time changes in network configuration or load.
-## To configure performance routing method
+## Prerequisites
-1. From a browser, sign in to the [Azure portal](https://portal.azure.com). If you don't already have an account, you can sign up for a [free one-month trial](https://azure.microsoft.com/free/).
-2. In the portal's search bar, search for the **Traffic Manager profiles** and then click the profile name that you want to configure the routing method for.
-3. In the **Traffic Manager profile** blade, verify that both the cloud services and websites that you want to include in your configuration are present.
-4. In the **Settings** section, click **Configuration**, and in the **Configuration** blade, complete as follows:
- 1. For **traffic routing method settings**, for **Routing method** select **Performance**.
- 2. Set the **Endpoint monitor settings** identical for all every endpoint within this profile as follows:
- 1. Select the appropriate **Protocol**, and specify the **Port** number.
- 2. For **Path** type a forward slash */*. To monitor endpoints, you must specify a path and filename. A forward slash "/" is a valid entry for the relative path and implies that the file is in the root directory (default).
- 3. At the top of the page, click **Save**.
-5. Test the changes in your configuration as follows:
- 1. In the portal's search bar, search for the Traffic Manager profile name and click the Traffic Manager profile in the results that the displayed.
- 2. In the **Traffic Manager** profile blade, click **Overview**.
- 3. The **Traffic Manager profile** blade displays the DNS name of your newly created Traffic Manager profile. This can be used by any clients (for example, by navigating to it using a web browser) to get routed to the right endpoint as determined by the routing type. In this case all requests are routed to the endpoint with the lowest latency from the client's network.
-6. Once your Traffic Manager profile is working, edit the DNS record on your authoritative DNS server to point your company domain name to the Traffic Manager domain name.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Create a resource group
+Create a resource group for the Traffic Manager profile.
+1. Sign in to the Azure portal at https://portal.azure.com.
+1. On the left pane of the Azure portal, select **Resource groups**.
+1. In **Resource groups**, at the top of the page, select **Add**.
+1. In **Resource group name**, type the name *myResourceGroupTM1*. For **Resource group location**, select **East US**, and then select **OK**.
+
+## Create a Traffic Manager profile with performance routing method
+
+Create a Traffic Manager profile that directs user traffic to the endpoint with the lowest latency from the client's network.
+
+1. On the top left-hand side of the screen, select **Create a resource** > **Networking** > **Traffic Manager profile** > **Create**.
+1. In **Create Traffic Manager profile**, enter or select the following information, accept the defaults for the remaining settings, and then select **Create**:
+
+ | Setting | Value |
+ | | |
+ | Name | Enter a unique name for your Traffic Manager profile. |
+ | Routing method | Select the **Performance** routing method. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroupTM1**. |
   | Location | This setting refers to the location of the resource group. It has no effect on the Traffic Manager profile, which is deployed globally. |
+++
+ :::image type="content" source="media/traffic-manager-performance-routing-method/create-traffic-manager-performance-routing-method.png" alt-text="Screenshot of creating a traffic manager profile with performance routing.":::
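If you prefer scripting over the portal, an equivalent profile can be created with the Azure CLI. This is a hedged sketch, not from the source: all names are placeholders, and it assumes you're already signed in with `az login`:

```shell
# Create the resource group and a Performance-routed Traffic Manager profile.
# Requires the Azure CLI and an authenticated session; names are placeholders.
az group create --name myResourceGroupTM1 --location eastus

az network traffic-manager profile create \
    --name myTMProfile \
    --resource-group myResourceGroupTM1 \
    --routing-method Performance \
    --unique-dns-name mytmprofile-unique  # becomes mytmprofile-unique.trafficmanager.net
```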
+
+## To configure performance routing method on an existing Traffic Manager profile
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+1. In the portal's search bar, search for **Traffic Manager profiles**, and then select the profile name that you want to configure the routing method for.
+1. In the **Traffic Manager profile** overview page, verify that both the cloud services and websites that you want to include in your configuration are present.
+1. In the **Settings** section, select **Configuration**, and in the **Configuration** blade, complete the settings as follows:
+
+ | Setting | Value |
+ | | |
+ |**Routing method** | Performance |
   | **DNS time to live (TTL)** | This value controls how often the client's local caching name server queries the Traffic Manager system for updated DNS entries. In this example, we chose the default **60 seconds**. |
+ | **Endpoint monitor settings** | |
+ | **Protocol** | In this example we chose the default **HTTP**. |
+ |**Port** | In this example we chose the default port **80**. |
+ | **Path** | For **Path** type a forward slash */*. To monitor endpoints, you must specify a path and filename. A forward slash "/" is a valid entry for the relative path and implies that the file is in the root directory (default). |
+
+1. At the top of the page, select **Save**.
+
+ :::image type="content" source="media/traffic-manager-performance-routing-method/traffic-manager-performance-routing-method.png" alt-text="Screenshot of configuring a traffic manager profile with performance routing.":::
+## Test the performance routing method
+
+Test the changes in your configuration as follows:
+
+1. In the portal's search bar, search for the Traffic Manager profile name, and then select the Traffic Manager profile in the results that are displayed.
+1. The **Traffic Manager profile** overview displays the DNS name of your newly created Traffic Manager profile. This name can be used by any client (for example, by navigating to it using a web browser) to get routed to the right endpoint as determined by the routing type. In this case, all requests are routed to the endpoint with the lowest latency from the client's network.
+1. Once your Traffic Manager profile is working, edit the DNS record on your authoritative DNS server to point your company domain name to the Traffic Manager domain name.
-![Configuring performance traffic routing method using Traffic Manager][1]
## Next steps
The Performance traffic routing method allows you to direct traffic to the endpo
- Learn about [priority routing method](traffic-manager-configure-priority-routing-method.md). - Learn about [geographic routing method](traffic-manager-configure-geographic-routing-method.md). - Learn how to [test Traffic Manager settings](traffic-manager-testing-settings.md).-
-<!--Image references-->
-[1]: ./media/traffic-manager-performance-routing-method/traffic-manager-performance-routing-method.png
update-center Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md
Title: Troubleshoot known issues with update management center (preview) description: The article provides details on the known issues and troubleshooting any problems with update management center (preview). Previously updated : 04/21/2022 Last updated : 05/30/2023
The following troubleshooting steps apply to the Azure VMs related to the patch
### Azure Linux VM
-To verify if the Microsoft Azure Virtual Machine Agent (VM Agent) is running, has triggered appropriate actions on the machine, and the sequence number for the AutoPatching request, check the agent log for more details in `/var/log/waagent.log`. Every AutoPatching request has a unique sequence number associated with it on the machine. Look for a log similar to: `2021-01-20T16:57:00.607529Z INFO ExtHandler`.
+To verify that the Microsoft Azure Virtual Machine Agent (VM Agent) is running and has triggered the appropriate actions on the machine, and to find the sequence number for the Auto-Patching request, check the agent log for more details in `/var/log/waagent.log`. Every Auto-Patching request has a unique sequence number associated with it on the machine. Look for a log entry similar to: `2021-01-20T16:57:00.607529Z INFO ExtHandler`.
-The package directory for the extension is `/var/lib/waagent/Microsoft.CPlat.Core.Edp.LinuxPatchExtension-<version>` and in the `/status` subfolder is a `<sequence number>.status` file, which includes a brief description of the actions performed during a single AutoPatching request, and the status. It also includes a short list of errors that occurred while applying updates.
+The package directory for the extension is `/var/lib/waagent/Microsoft.CPlat.Core.Edp.LinuxPatchExtension-<version>` and in the `/status` subfolder is a `<sequence number>.status` file, which includes a brief description of the actions performed during a single Auto-Patching request, and the status. It also includes a short list of errors that occurred while applying updates.
To review the logs related to all actions performed by the extension, check for more details in `/var/log/azure/Microsoft.CPlat.Core.Edp.LinuxPatchExtension/`. It includes the following two log files of interest: * `<seq number>.core.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process.
-* `<Date and Time>_<Handler action>.ext.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For AutoPatching, the `<Date and Time>_Enable.ext.log` has details on whether the specific patch operation was invoked.
+* `<Date and Time>_<Handler action>.ext.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke a specific patch operation. This log contains details about the wrapper. For Auto-Patching, the `<Date and Time>_Enable.ext.log` has details on whether the specific patch operation was invoked.
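As a quick illustration of scanning that agent log, the sketch below filters a sample file in the same format (the `/tmp` path and sample entry are stand-ins for `/var/log/waagent.log` on a real VM):

```shell
# Write one sample entry in the waagent.log format (stand-in for the real log),
# then filter it the way you would on the VM:
#   grep 'INFO ExtHandler' /var/log/waagent.log | tail -n 5
printf '2021-01-20T16:57:00.607529Z INFO ExtHandler\n' > /tmp/waagent-sample.log
grep 'INFO ExtHandler' /tmp/waagent-sample.log | tail -n 5
```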
### Azure Windows VM
-To verify if the Microsoft Azure Virtual Machine Agent (VM Agent) is running, has triggered appropriate actions on the machine, and the sequence number for the AutoPatching request, check the agent log for more details in `C:\WindowsAzure\Logs\AggregateStatus`. The package directory for the extension is `C:\Packages\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`.
+To verify that the Microsoft Azure Virtual Machine Agent (VM Agent) is running and has triggered the appropriate actions on the machine, and to find the sequence number for the Auto-Patching request, check the agent log for more details in `C:\WindowsAzure\Logs\AggregateStatus`. The package directory for the extension is `C:\Packages\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`.
To review the logs related to all actions performed by the extension, check for more details in `C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. It includes the following two log files of interest: * `WindowsUpdateExtension.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process.
-* `CommandExecution.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For AutoPatching, the log has details on whether the specific patch operation was invoked.
+* `CommandExecution.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke a specific patch operation. This log contains details about the wrapper. For Auto-Patching, the log has details on whether the specific patch operation was invoked.
### Arc-enabled servers
For Arc-enabled servers, review the [troubleshoot VM extensions](../azure-arc/se
To review the logs related to all actions performed by the extension, on Windows check for more details in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest: * `WindowsUpdateExtension.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process.
-* `cmd_execution_<numeric>_stdout.txt`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For AutoPatching, the log has details on whether the specific patch operation was invoked.
+* `cmd_execution_<numeric>_stdout.txt`: There is a wrapper above the patch action, which is used to manage the extension and invoke a specific patch operation. This log contains details about the wrapper. For Auto-Patching, the log has details on whether the specific patch operation was invoked.
* `cmd_excution_<numeric>_stderr.txt` ## Known issues
+### Scenario: Unable to apply patches for the shutdown machines
+
+#### Issue
+
+Patches aren't applied to machines that are in a shutdown state, and those machines may also lose their associated maintenance configurations or schedules.
+
+#### Cause
+
+The machines are in a shutdown state.
+
+#### Resolution
+
+Keep your machines turned on for at least 15 minutes before the scheduled update. For more information, see [Shut down machines](../virtual-machines/maintenance-configurations.md#shut-down-machines).
++ ### Scenario: Patch run failed with Maintenance window exceeded property showing true even if time remained #### Issue
When you view an update deployment in **Update History**, the property **Failed
* No updates are shown. * One or more updates are in a **Pending** state.
-* Reboot status is **Required**, but a reboot was not attempted even when the reboot setting passed was `IfRequired` or `Always`.
+* Reboot status is **Required**, but a reboot wasn't attempted even when the reboot setting passed was `IfRequired` or `Always`.
#### Cause
-During an update deployment, it checks for maintenance window utilization at multiple steps. Ten minutes of the maintenance window is reserved for reboot at any point. Before getting a list of missing updates or downloading/installing any update (except Windows service pack updates), it checks to verify if there are 15 minutes + 10 minutes for reboot (that is, 25 mins left in the maintenance window).
-For Windows service pack updates, we check for 20 minutes + 10 minutes for reboot (that is, 30 minutes). If the deployment doesn't have the sufficient left, it skips the scan/download/install of updates. The deployment run then checks if a reboot is needed and if there is ten minutes left in the maintenance window. If there is, the deployment triggers a reboot, otherwise the reboot is skipped. In such cases, the status is updated to **Failed**, and the Maintenance window exceeded property is updated to ***true**. For cases where the time left is less than 25 minutes, updates are not scanned or attempted for installation.
+During an update deployment, maintenance window utilization is checked at multiple steps. Ten minutes of the maintenance window are reserved for a reboot at any point. Before getting a list of missing updates or downloading/installing any update (except Windows service pack updates), the deployment verifies that 15 minutes plus the 10-minute reboot reserve remain (that is, 25 minutes left in the maintenance window).
+For Windows service pack updates, the check is for 20 minutes plus 10 minutes for reboot (that is, 30 minutes). If the deployment doesn't have sufficient time left, it skips the scan/download/install of updates. The deployment run then checks whether a reboot is needed and whether at least ten minutes are left in the maintenance window. If there are, the deployment triggers a reboot; otherwise, the reboot is skipped. In such cases, the status is updated to **Failed**, and the Maintenance window exceeded property is updated to **true**. For cases where the time left is less than 25 minutes, updates aren't scanned or attempted for installation.
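The timing rules above can be sketched as a small check. This is a hypothetical helper illustrating the documented thresholds, not actual service code:

```python
REBOOT_RESERVE_MIN = 10  # minutes always reserved for a reboot

def will_attempt_updates(minutes_left: int, service_pack: bool = False) -> bool:
    """True if enough maintenance window remains to scan/download/install."""
    required = (20 if service_pack else 15) + REBOOT_RESERVE_MIN
    return minutes_left >= required

def will_attempt_reboot(minutes_left: int) -> bool:
    """A reboot is only triggered if the 10-minute reserve is still available."""
    return minutes_left >= REBOOT_RESERVE_MIN

print(will_attempt_updates(25))                     # regular updates: 25 min suffices
print(will_attempt_updates(29, service_pack=True))  # service packs need 30 min
```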
More details can be found by reviewing the logs in the file path provided in the error message of the deployment run.
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/automatic-migration.md
Title: Migrate automatically from Azure Virtual Desktop (classic) - Azure
description: How to migrate automatically from Azure Virtual Desktop (classic) to Azure Virtual Desktop by using the migration module. -+ Last updated 01/31/2022
virtual-desktop Configure Validation Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-validation-environment.md
Last updated 03/01/2023 -+ # Configure a host pool as a validation environment
virtual-desktop Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/security.md
Last updated 07/14/2021 -+
Reducing these potential threats requires a fault-proof configuration, patch man
## Next steps
-Find our recommended guidelines for configuring security for your Azure Virtual Desktop deployment at our [security best practices](./../security-guide.md).
+Find our recommended guidelines for configuring security for your Azure Virtual Desktop deployment at our [security best practices](./../security-guide.md).
virtual-machine-scale-sets Orchestration Modes Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/orchestration-modes-api-comparison.md
Last updated 11/22/2022 -+ # Orchestration modes API comparison
Not supported on Flexible Virtual Machine Scale Sets.
## Next steps > [!div class="nextstepaction"]
-> [Learn about the different Orchestration Modes](virtual-machine-scale-sets-orchestration-modes.md)
+> [Learn about the different Orchestration Modes](virtual-machine-scale-sets-orchestration-modes.md)
virtual-machines Disks Enable Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-performance.md
Last updated 03/14/2023 + # Preview - Increase IOPS and throughput limits for Azure Premium SSDs and Standard SSD/HDDs
virtual-machines Dsc Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-credentials.md
Last updated 03/06/2023--+ # Pass credentials to the Azure DSCExtension handler
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
vm-windows
Last updated 03/28/2023 -+ ms.devlang: azurecli
virtual-machines Hpccompute Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-windows.md
vm-windows+ Last updated 04/06/2023
virtual-machines Issues Using Vm Extensions Python 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/issues-using-vm-extensions-python-3.md
tags: top-support-issue,azure-resource-manager+
virtual-machines Network Watcher Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md
Last updated 05/25/2023 -+ # Update the Network Watcher extension to the latest version
If you have auto-upgrade set to true for the Network Watcher extension, reboot your VM.
## Support If you need more help at any point in this article, see the Network Watcher extension documentation for [Linux](./network-watcher-linux.md) or [Windows](./network-watcher-windows.md). You can also contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get support**. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).-
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/infrastructure-automation.md
-+ Last updated 02/25/2023
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
**Applies to:** :heavy_check_mark: Linux VMs
-Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user interface to create Azure resources. This quickstart shows you how to use the Azure portal to deploy a Linux virtual machine (VM) running Ubuntu 18.04 LTS. To see your VM in action, you also SSH to the VM and install the NGINX web server.
+Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user interface to create Azure resources. This quickstart shows you how to use the Azure portal to deploy a Linux virtual machine (VM) running Ubuntu Server 22.04 LTS. To see your VM in action, you also SSH to the VM and install the NGINX web server.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Sign in to the [Azure portal](https://portal.azure.com).
![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
-1. Under **Instance details**, enter *myVM* for the **Virtual machine name**, and choose *Ubuntu 18.04 LTS - Gen2* for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing are dependent on your region and subscription.
+1. Under **Instance details**, enter *myVM* for the **Virtual machine name**, and choose *Ubuntu Server 22.04 LTS - Gen2* for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing are dependent on your region and subscription.
:::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image, and size.":::
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
+ Last updated 01/25/2023 - # Azure Metadata Service: Scheduled Events for Linux VMs
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
Last updated 04/06/2023 --+ #Customer intent: As an IT administrator or developer, I want learn about cloud-init so that I customize and configure Linux VMs in Azure on first boot to minimize the number of post-deployment configuration tasks required.
virtual-machines Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/java.md
Last updated 10/09/2021-+ # Create and manage Windows VMs in Azure using Java
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
+ Last updated 06/01/2020 ms.reviewer: mimckitt
virtual-machines Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/demo.md
Last updated 03/30/2020
+ # Set up Micro Focus CICS BankDemo for Micro Focus Enterprise Developer 4.0 on Azure
Congratulations! You are now running a CICS application in Azure using Micro Focus Enterprise Developer.
- [Mainframe Migration - Portal](/archive/blogs/azurecat/mainframe-migration-to-azure-portal) - [Virtual Machines](../../../linux/overview.md) - [Troubleshooting](/troubleshoot/azure/virtual-machines/welcome-virtual-machines)-- [Demystifying mainframe to Azure migration](https://azure.microsoft.com/resources/demystifying-mainframe-to-azure-migration/en-us/)
+- [Demystifying mainframe to Azure migration](https://azure.microsoft.com/resources/demystifying-mainframe-to-azure-migration/en-us/)
virtual-network-manager Create Virtual Network Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-powershell.md
Last updated 04/12/2023-+ # Quickstart: Create a mesh network topology with Azure Virtual Network Manager by using Azure PowerShell
virtual-network-manager How To Block Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-network-traffic-powershell.md
Last updated 03/22/2023-+ # How to block network traffic with Azure Virtual Network Manager - Azure PowerShell
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
Last updated 03/22/2023-+ #customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
virtual-network-manager How To Create Hub And Spoke Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke-powershell.md
Last updated 05/01/2023-+ # Create a hub and spoke topology in Azure - PowerShell
virtual-network-manager Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/resource-manager-template-samples.md
description: This article has links to Azure Resource Manager template examples
+ Last updated 03/28/2023
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
Title: How Accelerated Networking works in Linux and FreeBSD VMs description: How Accelerated Networking Works in Linux and FreeBSD VMs-+ vm-linux Last updated 04/18/2023-+ # How Accelerated Networking works in Linux and FreeBSD VMs
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
Title: Accelerated Networking overview description: Learn how Accelerated Networking can improve the networking performance of Azure VMs.-+ Last updated 04/18/2023-+ # Accelerated Networking (AccelNet) overview
virtual-network Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/cli-samples.md
- Title: Azure CLI samples for virtual network
-description: Learn about various sample scripts you can use for completing tasks in the Azure CLI, including creating a virtual network for multi-tier applications.
---- Previously updated : 04/04/2023----
-# Azure CLI samples for virtual network
-
-The following table includes links to bash scripts with Azure CLI commands:
-
-| Script | Description |
-|-|-|
-| [Create a virtual network for multi-tier applications](./scripts/virtual-network-cli-sample-multi-tier-application.md) | Creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. |
-| [Peer two virtual networks](./scripts/virtual-network-cli-sample-peer-two-virtual-networks.md) | Creates and connects two virtual networks in the same region. |
-| [Route traffic through a network virtual appliance](./scripts/virtual-network-cli-sample-route-traffic-through-nva.md) | Creates a virtual network with front-end and back-end subnets and a VM that is able to route traffic between the two subnets. |
-| [Filter inbound and outbound VM network traffic](./scripts/virtual-network-cli-sample-filter-network-traffic.md) | Creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS, and SSH. Outbound traffic to the internet from the back-end subnet isn't permitted. |
-|[Configure IPv4 + IPv6 dual stack virtual network with Standard Load Balancer](./scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md)|Deploys dual-stack (IPv4+IPv6) virtual network with two VMs and an Azure Standard Load Balancer with IPv4 and IPv6 public IP addresses. |
-|[Quickstart: Create and test a NAT gateway - Azure CLI](../virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md)|Create and validate a NAT gateway using a virtual machine. |
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Title: Use Azure CLI to create a Windows or Linux VM with Accelerated Networking description: Use Azure CLI to create and manage virtual machines that have Accelerated Networking enabled for improved network performance.-+ Last updated 04/18/2023-+ # Use Azure CLI to create a Windows or Linux VM with Accelerated Networking
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
Title: Use PowerShell to create a VM with Accelerated Networking description: Use Azure PowerShell to create and manage Windows virtual machines that have Accelerated Networking enabled for improved network performance. -+
vm-windows
Last updated 03/20/2023-+ # Use Azure PowerShell to create a VM with Accelerated Networking
virtual-network Routing Preference Azure Kubernetes Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-azure-kubernetes-service-cli.md
Last updated 10/01/2021-+ ms.devlang: azurecli
virtual-network Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/powershell-samples.md
- Title: Azure PowerShell samples for virtual network
-description: Learn about Azure PowerShell samples for managing virtual networks, including a sample for creating a virtual network for multi-tier applications.
---- Previously updated : 04/04/2023--
-# Azure PowerShell samples for virtual network
-
-The following table includes links to Azure PowerShell scripts:
-
-| Script | Description |
-|-|-|
-| [Create a virtual network for multi-tier applications](./scripts/virtual-network-powershell-sample-multi-tier-application.md) | Creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP, while traffic to the back-end subnet is limited to SQL, port 1433. |
-| [Peer two virtual networks](./scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md) | Creates and connects two virtual networks in the same region. |
-| [Route traffic through a network virtual appliance](./scripts/virtual-network-powershell-sample-route-traffic-through-nva.md) | Creates a virtual network with front-end and back-end subnets and a VM that is able to route traffic between the two subnets. |
-| [Filter inbound and outbound VM network traffic](./scripts/virtual-network-powershell-sample-filter-network-traffic.md) | Creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS. Outbound traffic to the internet from the back-end subnet isn't permitted. |
-| [Configure IPv4 + IPv6 dual stack virtual network with Standard Load Balancer](./scripts/virtual-network-powershell-sample-ipv6-dual-stack-standard-load-balancer.md)|Deploys dual-stack (IPv4+IPv6) virtual network with two VMs and an Azure Standard Load Balancer with IPv4 and IPv6 public IP addresses. |
-| [Quickstart: Create and test a NAT gateway - Azure PowerShell](../virtual-network/nat-gateway/quickstart-create-nat-gateway-powershell.md) | Create and validate a NAT gateway using a virtual machine. |
virtual-network Virtual Network Cli Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-filter-network-traffic.md
- Title: Filter VM network traffic - Azure CLI script sample
-description: Filter inbound and outbound virtual machine (VM) network traffic using an Azure CLI script sample.
------ Previously updated : 02/03/2022----
-# Filter inbound and outbound VM network traffic using an Azure CLI script sample
-
-This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS, and SSH, while outbound traffic to the internet from the back-end subnet isn't permitted. After running the script, you'll have one virtual machine with two NICs. Each NIC is connected to a different subnet.
---
-## Sample script
--
-### Run the script
--
-## Clean up deployment
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and front-end subnet. |
-| [az network subnet create](/cli/azure/network/vnet/subnet) | Creates a back-end subnet. |
-| [az network vnet subnet update](/cli/azure/network/vnet/subnet) | Associates NSGs to subnets. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates a public IP address to access the VM from the internet. |
-| [az network nic create](/cli/azure/network/nic) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [az network nsg create](/cli/azure/network/nsg) | Creates network security groups (NSG) that are associated to the front-end and back-end subnets. |
-| [az network nsg rule create](/cli/azure/network/nsg/rule) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [az vm create](/cli/azure/vm) | Creates virtual machines and attaches a NIC to each VM. This command also specifies the virtual machine image to use and administrative credentials. |
-| [az group delete](/cli/azure/group) | Deletes a resource group and all resources it contains. |
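The flow in the table above can be sketched as a minimal Azure CLI script. The resource names, address prefixes, and rule values below are illustrative assumptions, not taken from the sample script itself:

```shell
# Minimal sketch of the front-end/back-end NSG flow (illustrative names and prefixes).
resourceGroup=test-rg
location=eastus

az group create --name $resourceGroup --location $location

# Virtual network with a front-end subnet, then a separate back-end subnet.
az network vnet create --resource-group $resourceGroup --name myVNet \
  --address-prefix 10.0.0.0/16 --subnet-name Frontend --subnet-prefix 10.0.1.0/24
az network subnet create --resource-group $resourceGroup --vnet-name myVNet \
  --name Backend --address-prefix 10.0.2.0/24

# NSG limiting inbound front-end traffic to HTTP, HTTPS, and SSH.
az network nsg create --resource-group $resourceGroup --name frontend-nsg
az network nsg rule create --resource-group $resourceGroup --nsg-name frontend-nsg \
  --name Allow-Web-SSH --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 80 443 22

# Associate the NSG with the front-end subnet.
az network vnet subnet update --resource-group $resourceGroup --vnet-name myVNet \
  --name Frontend --network-security-group frontend-nsg
```

A second NSG with a Deny rule for destination `Internet` would complete the back-end restriction described above.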
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional virtual network CLI script samples can be found in [Virtual network CLI samples](../cli-samples.md).
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md
- Title: Azure CLI script sample - Configure IPv6 frontend - Standard Load Balancer-
-description: Learn how to configure IPv6 endpoints in a virtual network script sample using Standard Load Balancer.
------ Previously updated : 02/03/2022----
-# Configure IPv6 endpoints in virtual network script sample using Standard Load Balancer (preview)
-
-This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet, a Standard Load Balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
---
-## Sample script
--
-### Run the script
--
-> [!TIP]
-> You can view the IPv6 dual stack virtual network in Azure portal on the virtual network page.
-> The dual stack virtual network shows the two NICs with both IPv4 and IPv6 configurations in the dual stack subnet.
-
-## Clean up deployment
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) | Creates an Azure virtual network and subnet. |
-| [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) | Creates a public IP address with a static IP address and an associated DNS name. |
-| [az network lb create](/cli/azure/network/lb#az-network-lb-create) | Creates an Azure load balancer. |
-| [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create) | Creates a load balancer probe. A load balancer probe is used to monitor each VM in the load balancer set. If any VM becomes inaccessible, traffic is not routed to the VM. |
-| [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create) | Creates a load balancer rule. In this sample, a rule is created for port 80. As HTTP traffic arrives at the load balancer, it is routed to port 80 on one of the VMs in the LB set. |
-| [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) | Creates a load balancer Network Address Translation (NAT) rule. NAT rules map a port of the load balancer to a port on a VM. In this sample, a NAT rule is created for SSH traffic to each VM in the load balancer set. |
-| [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. |
-| [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) | Creates an NSG rule to allow inbound traffic. In this sample, port 22 is opened for SSH traffic. |
-| [az network nic create](/cli/azure/network/nic#az-network-nic-create) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
-| [az vm create](/cli/azure/vm#az-vm-create) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. |
-| [az group delete](/cli/azure/group#az-group-delete) | Deletes a resource group including all nested resources. |
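The dual-frontend pattern in the table above needs one public IP per address family on the same Standard load balancer. A minimal sketch, with illustrative resource names that are assumptions rather than the sample's own:

```shell
# One Standard-SKU public IP per address family (names are illustrative).
az network public-ip create --resource-group test-rg --name public-ip-v4 \
  --sku Standard --version IPv4
az network public-ip create --resource-group test-rg --name public-ip-v6 \
  --sku Standard --version IPv6

# Create the load balancer with the IPv4 frontend, then add the IPv6
# frontend as a second frontend IP configuration on the same resource.
az network lb create --resource-group test-rg --name myLB \
  --sku Standard --public-ip-address public-ip-v4
az network lb frontend-ip create --resource-group test-rg --lb-name myLB \
  --name frontend-v6 --public-ip-address public-ip-v6
```

Load-balancing rules then reference the matching frontend configuration, so IPv4 and IPv6 traffic flow through the same backend pool of dual-stack NICs.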
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack.md
- Title: Azure CLI script sample - Configure IPv6 frontend-
-description: Use an Azure CLI script sample to configure IPv6 endpoints and deploy a dual stack (IPv4 + IPv6) application in Azure.
------ Previously updated : 02/03/2022----
-# Configure IPv6 endpoints in virtual network script sample
-
-This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet, a load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
---
-## Sample script
--
-### Run the script
--
-> [!TIP]
-> You can view the IPv6 dual stack virtual network in Azure portal on the virtual network page.
-> The dual stack virtual network shows the two NICs with both IPv4 and IPv6 configurations in the dual stack subnet.
-
-## Clean up deployment
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) | Creates an Azure virtual network and subnet. |
-| [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) | Creates a public IP address with a static IP address and an associated DNS name. |
-| [az network lb create](/cli/azure/network/lb#az-network-lb-create) | Creates an Azure load balancer. |
-| [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create) | Creates a load balancer probe. A load balancer probe is used to monitor each VM in the load balancer set. If any VM becomes inaccessible, traffic is not routed to the VM. |
-| [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create) | Creates a load balancer rule. In this sample, a rule is created for port 80. As HTTP traffic arrives at the load balancer, it is routed to port 80 on one of the VMs in the LB set. |
-| [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) | Creates a load balancer Network Address Translation (NAT) rule. NAT rules map a port of the load balancer to a port on a VM. In this sample, a NAT rule is created for SSH traffic to each VM in the load balancer set. |
-| [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. |
-| [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) | Creates an NSG rule to allow inbound traffic. In this sample, port 22 is opened for SSH traffic. |
-| [az network nic create](/cli/azure/network/nic#az-network-nic-create) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
-| [az vm create](/cli/azure/vm#az-vm-create) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. |
-| [az group delete](/cli/azure/group#az-group-delete) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
virtual-network Virtual Network Cli Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-multi-tier-application.md
- Title: Create a VNet for multi-tier applications - Azure CLI script sample
-description: Create a virtual network for multi-tier applications - Azure CLI script sample.
------ Previously updated : 02/03/2022----
-# Create a virtual network for multi-tier applications using an Azure CLI script sample
-
-This script sample creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. After running the script, you have two virtual machines, one in each subnet, that you can deploy a web server and MySQL software to.
---
-## Sample script
--
-### Run the script
--
-## Clean up deployment
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and front-end subnet. |
-| [az network subnet create](/cli/azure/network/vnet/subnet) | Creates a back-end subnet. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates a public IP address to access the VM from the internet. |
-| [az network nic create](/cli/azure/network/nic) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [az network nsg create](/cli/azure/network/nsg) | Creates network security groups (NSG) that are associated to the front-end and back-end subnets. |
-| [az network nsg rule create](/cli/azure/network/nsg/rule) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [az vm create](/cli/azure/vm) | Creates virtual machines and attaches a NIC to each VM. This command also specifies the virtual machine image to use and administrative credentials. |
-| [az group delete](/cli/azure/group) | Deletes a resource group and all resources it contains. |
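The back-end restriction described above (MySQL only, port 3306) can be sketched with the NSG commands from the table. Resource names and the source prefix are illustrative assumptions:

```shell
# Illustrative: allow only MySQL (TCP 3306) from the front-end subnet into the back end.
az network nsg create --resource-group test-rg --name backend-nsg
az network nsg rule create --resource-group test-rg --nsg-name backend-nsg \
  --name Allow-MySql --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 3306

# Attach the NSG to the back-end subnet so the rule takes effect.
az network vnet subnet update --resource-group test-rg --vnet-name myVNet \
  --name Backend --network-security-group backend-nsg
```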
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional virtual network CLI script samples can be found in [Virtual network CLI samples](../cli-samples.md).
virtual-network Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
- Title: Route traffic via network virtual appliance - Azure CLI script sample
-description: Route traffic through a firewall network virtual appliance - Azure CLI script sample.
------ Previously updated : 02/03/2022----
-# Route traffic through a network virtual appliance - Azure CLI script sample
-
-This script sample creates a virtual network with front-end and back-end subnets. It also creates a VM with IP forwarding enabled to route traffic between the two subnets. After running the script, you can deploy network software, such as a firewall application, to the VM.
---
-## Sample script
--
-### Run the script
--
-## Clean up deployment
--
-```azurecli
-az group delete --name $resourceGroup
-```
-
-## Sample reference
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates an Azure virtual network and front-end subnet. |
-| [az network subnet create](/cli/azure/network/vnet/subnet) | Creates back-end and DMZ subnets. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates a public IP address to access the VM from the internet. |
-| [az network nic create](/cli/azure/network/nic) | Creates a virtual network interface and enables IP forwarding for it. |
-| [az network nsg create](/cli/azure/network/nsg) | Creates a network security group (NSG). |
-| [az network nsg rule create](/cli/azure/network/nsg/rule) | Creates NSG rules that allow HTTP and HTTPS ports inbound to the VM. |
-| [az network vnet subnet update](/cli/azure/network/vnet/subnet)| Associates the NSGs and route tables to subnets. |
-| [az network route-table create](/cli/azure/network/route-table#az-network-route-table-create)| Creates a route table for all routes. |
-| [az network route-table route create](/cli/azure/network/route-table/route#az-network-route-table-route-create)| Creates routes to route traffic between subnets and the internet through the VM. |
-| [az vm create](/cli/azure/vm) | Creates a virtual machine and attaches the NIC to it. This command also specifies the virtual machine image to use and administrative credentials. |
-| [az group delete](/cli/azure/group) | Deletes a resource group and all resources it contains. |
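The routing steps in the table above can be sketched as follows; the NVA's private IP, the address prefix, and the resource names are illustrative assumptions, not values from the sample script:

```shell
# Sketch: force front-end traffic through an NVA at 10.0.2.4 (illustrative IP).
az network route-table create --resource-group test-rg --name nva-route-table

# User-defined route sending back-end-bound traffic to the NVA as next hop.
az network route-table route create --resource-group test-rg \
  --route-table-name nva-route-table --name to-backend \
  --address-prefix 10.0.3.0/24 --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

# Associate the route table with the front-end subnet.
az network vnet subnet update --resource-group test-rg --vnet-name myVNet \
  --name Frontend --route-table nva-route-table
```

The NVA's NIC must have IP forwarding enabled (as the `az network nic create` step notes) or it will drop the redirected traffic.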
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional virtual network CLI script samples can be found in [Virtual network CLI samples](../cli-samples.md).
virtual-network Virtual Network Powershell Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-filter-network-traffic.md
- Title: Filter VM network traffic - Azure PowerShell script sample
-description: Filter inbound and outbound VM network traffic - Azure PowerShell script sample.
--- Previously updated : 03/23/2023----
-# Filter inbound and outbound VM network traffic script sample
-
-This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS, while outbound traffic to the internet from the back-end subnet isn't permitted. After running the script, you have one virtual machine with two NICs. Each NIC is connected to a different subnet.
-
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-InstalledModule -Name Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
--
-## Sample script
--
-[!code-azurepowershell-interactive[main](../../../powershell_scripts/virtual-network/filter-network-traffic/filter-network-traffic.ps1 "Filter VM network traffic")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources:
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup -Force
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates a subnet configuration object. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and front-end subnet. |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates security rules to be assigned to a network security group. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) | Associates NSGs to subnets. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address to access the VM from the internet. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig) | Creates a VM configuration. This configuration includes information such as VM name, operating system, and administrative credentials. The configuration is used during VM creation. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates a virtual machine. |
-|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More virtual network PowerShell script samples can be found in [Virtual network PowerShell samples](../powershell-samples.md).
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack-standard-load-balancer.md
- Title: Azure PowerShell script sample - Configure IPv6 frontend with an Azure Standard Load Balancer
-description: Learn about configuring an IPv6 frontend in a virtual network script sample with an Azure Standard Load Balancer.
- Previously updated : 04/04/2023
-# Configure IPv6 frontend in virtual network script sample with an Azure Standard Load Balancer
-
-This article outlines the steps to create a dual stack virtual network, a load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure PowerShell installed locally or Azure Cloud Shell.
-- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, use `Update-Module -Name Az.Network`.
-
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-```azurepowershell
-# Deploys Dual-stack (IPv4+IPv6) Azure virtual network with 2 VMs and Standard Load Balancer with IPv4 and IPv6 public IPs
-
-# Create resource group to contain the deployment
- $rg = New-AzResourceGroup `
- -ResourceGroupName "dsRG1" `
- -Location "east us"
-
-# Create the public IPs needed for the deployment
- $PublicIP_v4 = New-AzPublicIpAddress `
- -Name "dsPublicIP_v4" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Static `
- -IpAddressVersion IPv4 `
- -Sku Standard
-
-$PublicIP_v6 = New-AzPublicIpAddress `
- -Name "dsPublicIP_v6" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Static `
- -IpAddressVersion IPv6 `
- -Sku Standard
-
-$RdpPublicIP_1 = New-AzPublicIpAddress `
- -Name "RdpPublicIP_1" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Static `
- -Sku Standard `
- -IpAddressVersion IPv4
-
-$RdpPublicIP_2 = New-AzPublicIpAddress `
- -Name "RdpPublicIP_2" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Static `
- -Sku Standard `
- -IpAddressVersion IPv4
-
-# Create front-end IP address
-$frontendIPv4 = New-AzLoadBalancerFrontendIpConfig `
- -Name "dsLbFrontEnd_v4" `
- -PublicIpAddress $PublicIP_v4
-
-$frontendIPv6 = New-AzLoadBalancerFrontendIpConfig `
- -Name "dsLbFrontEnd_v6" `
- -PublicIpAddress $PublicIP_v6
-
-# Configure back-end address pool
-$backendPoolv4 = New-AzLoadBalancerBackendAddressPoolConfig `
- -Name "dsLbBackEndPool_v4"
-$backendPoolv6 = New-AzLoadBalancerBackendAddressPoolConfig `
- -Name "dsLbBackEndPool_v6"
-
-# Create load balancer rule
-$lbrule_v4 = New-AzLoadBalancerRuleConfig `
- -Name "dsLBrule_v4" `
- -FrontendIpConfiguration $frontendIPv4 `
- -BackendAddressPool $backendPoolv4 `
- -Protocol Tcp `
- -FrontendPort 80 `
- -BackendPort 80
-
-$lbrule_v6 = New-AzLoadBalancerRuleConfig `
- -Name "dsLBrule_v6" `
- -FrontendIpConfiguration $frontendIPv6 `
- -BackendAddressPool $backendPoolv6 `
- -Protocol Tcp `
- -FrontendPort 80 `
- -BackendPort 80
-
-# Create Standard Load Balancer
-$lb = New-AzLoadBalancer `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "MyLoadBalancer" `
- -Sku "Standard" `
- -FrontendIpConfiguration $frontendIPv4,$frontendIPv6 `
- -BackendAddressPool $backendPoolv4,$backendPoolv6 `
- -LoadBalancingRule $lbrule_v4,$lbrule_v6
-
-# Create availability set
-$avset = New-AzAvailabilitySet `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "dsAVset" `
- -PlatformFaultDomainCount 2 `
- -PlatformUpdateDomainCount 2 `
- -Sku aligned
-
-# Create network security group and rules
-$rule1 = New-AzNetworkSecurityRuleConfig `
- -Name 'myNetworkSecurityGroupRuleRDP' `
- -Description 'Allow RDP' `
- -Access Allow `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 100 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 3389
-
-$rule2 = New-AzNetworkSecurityRuleConfig `
- -Name 'myNetworkSecurityGroupRuleHTTP' `
- -Description 'Allow HTTP' `
- -Access Allow `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 200 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 80
-
-$nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "dsNSG1" `
- -SecurityRules $rule1,$rule2
-
-#Create virtual network and subnet
-# Create dual stack subnet
-$subnet = New-AzVirtualNetworkSubnetConfig `
- -Name "dsSubnet" `
- -AddressPrefix "10.0.0.0/24","fd00:db8:deca:deed::/64"
-
-# Create the virtual network
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "dsVnet" `
- -AddressPrefix "10.0.0.0/16","fd00:db8:deca::/48" `
- -Subnet $subnet
-
-#Create network interfaces (NICs)
-$Ip4Config=New-AzNetworkInterfaceIpConfig `
- -Name dsIp4Config `
- -Subnet $vnet.subnets[0] `
- -PrivateIpAddressVersion IPv4 `
- -LoadBalancerBackendAddressPool $backendPoolv4 `
- -PublicIpAddress $RdpPublicIP_1
-
-$Ip6Config=New-AzNetworkInterfaceIpConfig `
- -Name dsIp6Config `
- -Subnet $vnet.subnets[0] `
- -PrivateIpAddressVersion IPv6 `
- -LoadBalancerBackendAddressPool $backendPoolv6
-
-$NIC_1 = New-AzNetworkInterface `
- -Name "dsNIC1" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -NetworkSecurityGroupId $nsg.Id `
- -IpConfiguration $Ip4Config,$Ip6Config
-
-$Ip4Config=New-AzNetworkInterfaceIpConfig `
- -Name dsIp4Config `
- -Subnet $vnet.subnets[0] `
- -PrivateIpAddressVersion IPv4 `
- -LoadBalancerBackendAddressPool $backendPoolv4 `
- -PublicIpAddress $RdpPublicIP_2
-
-$NIC_2 = New-AzNetworkInterface `
- -Name "dsNIC2" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -NetworkSecurityGroupId $nsg.Id `
- -IpConfiguration $Ip4Config,$Ip6Config
-
-# Create virtual machines
-$cred = get-credential -Message "DUAL STACK VNET SAMPLE: Please enter the Administrator credential to log into the VMs"
-
-$vmsize = "Standard_A2"
-$ImagePublisher = "MicrosoftWindowsServer"
-$imageOffer = "WindowsServer"
-$imageSKU = "2019-Datacenter"
-
-$vmName= "dsVM1"
-$VMconfig1 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_1.Id 3> $null
-$VM1 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig1
-
-$vmName= "dsVM2"
-$VMconfig2 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_2.Id 3> $null
-$VM2 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig2
-
-#End Of Script
-
-```
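As a quick sanity check on the addressing in the script above, the standard-library `ipaddress` module can confirm that each dual-stack subnet prefix fits inside the virtual network's address space:

```python
# Verify the script's subnet prefixes fall inside the VNet address space.
import ipaddress

vnet_prefixes   = [ipaddress.ip_network("10.0.0.0/16"),
                   ipaddress.ip_network("fd00:db8:deca::/48")]
subnet_prefixes = [ipaddress.ip_network("10.0.0.0/24"),
                   ipaddress.ip_network("fd00:db8:deca:deed::/64")]

for subnet in subnet_prefixes:
    # Compare only against the VNet prefix of the same IP version.
    parents = [p for p in vnet_prefixes if p.version == subnet.version]
    assert any(subnet.subnet_of(p) for p in parents), f"{subnet} is outside the VNet"
    print(f"{subnet} is inside the VNet address space")
```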
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources:
-
-```powershell
-Remove-AzResourceGroup -Name <resourcegroupname> -Force
-```
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|---|---|
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates a subnet configuration. This configuration is used with the virtual network creation process. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and subnet. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address with a static IP address and an associated DNS name. |
-| [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer) | Creates an Azure load balancer. |
-| [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) | Creates a load balancer probe. A load balancer probe is used to monitor each VM in the load balancer set. If any VM becomes inaccessible, traffic isn't routed to the VM. |
-| [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig) | Creates a load balancer rule. In this sample, a rule is created for port 80. As HTTP traffic arrives at the load balancer, it's routed to port 80 on one of the VMs in the load balancer set. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates an NSG rule to allow inbound traffic. In this sample, port 3389 is opened for RDP traffic and port 80 for HTTP traffic. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
-| [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig) | Creates a VM configuration. This configuration includes information such as VM name, operating system, and administrative credentials. The configuration is used during VM creation. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack.md
- Title: Azure PowerShell script sample - Configure IPv6 endpoints
-description: Configure IPv6 endpoints in virtual network with an Azure PowerShell script and find links to command-specific documentation to help with the PowerShell sample.
- Previously updated : 04/05/2023
-# Configure IPv6 endpoints in virtual network with Azure PowerShell script sample
-
-This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet. A load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs are also deployed.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure PowerShell installed locally or Azure Cloud Shell.
-- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, use `Update-Module -Name Az.Network`.
-
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-```azurepowershell
-# Dual-Stack VNET with 2 VMs.ps1
-# Deploys Dual-stack (IPv4+IPv6) Azure virtual network with 2 VMs and Basic Load Balancer with IPv4 and IPv6 public IPs
-
-# Create resource group to contain the deployment
- $rg = New-AzResourceGroup `
- -ResourceGroupName "dsRG1" `
- -Location "east us"
-
-# Create the public IPs needed for the deployment
- $PublicIP_v4 = New-AzPublicIpAddress `
- -Name "dsPublicIP_v4" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Dynamic `
- -IpAddressVersion IPv4
-
- $PublicIP_v6 = New-AzPublicIpAddress `
- -Name "dsPublicIP_v6" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Dynamic `
- -IpAddressVersion IPv6
-
- $RdpPublicIP_1 = New-AzPublicIpAddress `
- -Name "RdpPublicIP_1" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Dynamic `
- -IpAddressVersion IPv4
-
- $RdpPublicIP_2 = New-AzPublicIpAddress `
- -Name "RdpPublicIP_2" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -AllocationMethod Dynamic `
- -IpAddressVersion IPv4
-
-# Create Basic Load Balancer
-$frontendIPv4 = New-AzLoadBalancerFrontendIpConfig `
- -Name "dsLbFrontEnd_v4" `
- -PublicIpAddress $PublicIP_v4
-
-$frontendIPv6 = New-AzLoadBalancerFrontendIpConfig `
- -Name "dsLbFrontEnd_v6" `
- -PublicIpAddress $PublicIP_v6
-
-$backendPoolv4 = New-AzLoadBalancerBackendAddressPoolConfig `
- -Name "dsLbBackEndPool_v4"
-$backendPoolv6 = New-AzLoadBalancerBackendAddressPoolConfig `
- -Name "dsLbBackEndPool_v6"
-
-$lbrule_v4 = New-AzLoadBalancerRuleConfig `
- -Name "dsLBrule_v4" `
- -FrontendIpConfiguration $frontendIPv4 `
- -BackendAddressPool $backendPoolv4 `
- -Protocol Tcp `
- -FrontendPort 80 `
- -BackendPort 80
-
-$lbrule_v6 = New-AzLoadBalancerRuleConfig `
- -Name "dsLBrule_v6" `
- -FrontendIpConfiguration $frontendIPv6 `
- -BackendAddressPool $backendPoolv6 `
- -Protocol Tcp `
- -FrontendPort 80 `
- -BackendPort 80
-
-$lb = New-AzLoadBalancer `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "MyLoadBalancer" `
- -Sku "Basic" `
- -FrontendIpConfiguration $frontendIPv4,$frontendIPv6 `
- -BackendAddressPool $backendPoolv4,$backendPoolv6 `
- -LoadBalancingRule $lbrule_v4,$lbrule_v6
-
-# Create availability set
-$avset = New-AzAvailabilitySet `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "dsAVset" `
- -PlatformFaultDomainCount 2 `
- -PlatformUpdateDomainCount 2 `
- -Sku aligned
-
-# Create network security group and rules
-$rule1 = New-AzNetworkSecurityRuleConfig `
- -Name 'myNetworkSecurityGroupRuleRDP' `
- -Description 'Allow RDP' `
- -Access Allow `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 100 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 3389
-
-$rule2 = New-AzNetworkSecurityRuleConfig `
- -Name 'myNetworkSecurityGroupRuleHTTP' `
- -Description 'Allow HTTP' `
- -Access Allow `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 200 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 80
-
-$nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "dsNSG1" `
- -SecurityRules $rule1,$rule2
-
-#Create virtual network and subnet
-# Create dual stack subnet config
-$subnet = New-AzVirtualNetworkSubnetConfig `
- -Name "dsSubnet" `
- -AddressPrefix "10.0.0.0/24","fd00:db8:deca:deed::/64"
-
-# Create the virtual network
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -Name "dsVnet" `
- -AddressPrefix "10.0.0.0/16","fd00:db8:deca::/48" `
- -Subnet $subnet
-
- #Create network interfaces (NICs)
- $Ip4Config=New-AzNetworkInterfaceIpConfig `
- -Name dsIp4Config `
- -Subnet $vnet.subnets[0] `
- -PrivateIpAddressVersion IPv4 `
- -LoadBalancerBackendAddressPool $backendPoolv4 `
- -PublicIpAddress $RdpPublicIP_1
-
- $Ip6Config=New-AzNetworkInterfaceIpConfig `
- -Name dsIp6Config `
- -Subnet $vnet.subnets[0] `
- -PrivateIpAddressVersion IPv6 `
- -LoadBalancerBackendAddressPool $backendPoolv6
-
- $NIC_1 = New-AzNetworkInterface `
- -Name "dsNIC1" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -NetworkSecurityGroupId $nsg.Id `
- -IpConfiguration $Ip4Config,$Ip6Config
-
- $Ip4Config=New-AzNetworkInterfaceIpConfig `
- -Name dsIp4Config `
- -Subnet $vnet.subnets[0] `
- -PrivateIpAddressVersion IPv4 `
- -LoadBalancerBackendAddressPool $backendPoolv4 `
- -PublicIpAddress $RdpPublicIP_2
-
- $NIC_2 = New-AzNetworkInterface `
- -Name "dsNIC2" `
- -ResourceGroupName $rg.ResourceGroupName `
- -Location $rg.Location `
- -NetworkSecurityGroupId $nsg.Id `
- -IpConfiguration $Ip4Config,$Ip6Config
-
-# Create virtual machines
-$cred = get-credential -Message "DUAL STACK VNET SAMPLE: Please enter the Administrator credential to log into the VMs"
-
-$vmsize = "Standard_A2"
-$ImagePublisher = "MicrosoftWindowsServer"
-$imageOffer = "WindowsServer"
-$imageSKU = "2016-Datacenter"
-
-$vmName= "dsVM1"
-$VMconfig1 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_1.Id 3> $null
-$VM1 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig1
-
-$vmName= "dsVM2"
-$VMconfig2 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_2.Id 3> $null
-$VM2 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig2
-
-#End Of Script
-
-```
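The availability set in the script above uses two fault domains and two update domains so the two VMs never share a rack or a maintenance window. The Python sketch below is an illustrative round-robin model of that spreading, not the actual Azure placement algorithm:

```python
# Illustrative round-robin spread of VMs across fault/update domains
# (an assumption for explanation; not the real Azure allocator).
FAULT_DOMAINS = 2
UPDATE_DOMAINS = 2

def spread(vm_names):
    """Map each VM to a (fault_domain, update_domain) pair round-robin."""
    return {vm: (i % FAULT_DOMAINS, i % UPDATE_DOMAINS)
            for i, vm in enumerate(vm_names)}

placement = spread(["dsVM1", "dsVM2"])
print(placement)
# With two domains, the two VMs land in different fault domains,
# so a single hardware failure can't take down both.
assert placement["dsVM1"][0] != placement["dsVM2"][0]
```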
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources:
-
-```powershell
-Remove-AzResourceGroup -Name <resourcegroupname> -Force
-```
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|---|---|
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates a subnet configuration. This configuration is used with the virtual network creation process. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and subnet. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address with a static IP address and an associated DNS name. |
-| [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer) | Creates an Azure load balancer. |
-| [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) | Creates a load balancer probe. A load balancer probe is used to monitor each VM in the load balancer set. If any VM becomes inaccessible, traffic isn't routed to the VM. |
-| [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig) | Creates a load balancer rule. In this sample, a rule is created for port 80. As HTTP traffic arrives at the load balancer, it's routed to port 80 on one of the VMs in the load balancer set. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates an NSG rule to allow inbound traffic. In this sample, port 3389 is opened for RDP traffic and port 80 for HTTP traffic. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
-| [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig) | Creates a VM configuration. This configuration includes information such as VM name, operating system, and administrative credentials. The configuration is used during VM creation. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
virtual-network Virtual Network Powershell Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-route-traffic-through-nva.md
- Title: Route traffic via NVA - Azure PowerShell script sample
-description: Azure PowerShell script sample - Route traffic through a firewall NVA.
- Previously updated : 03/23/2023
-# Route traffic through a network virtual appliance script sample
-
-This script sample creates a virtual network with front-end and back-end subnets. It also creates a VM with IP forwarding enabled to route traffic between the two subnets. After running the script, you can deploy network software, such as a firewall application, to the VM.
-
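A route table that steers traffic through the NVA relies on longest-prefix matching: the most specific route that contains the destination wins. The sketch below models that selection in Python; the prefixes and the NVA next-hop address `10.0.1.4` are illustrative assumptions, not values taken from the sample script.

```python
# Illustrative longest-prefix-match route selection (not Azure code).
import ipaddress

routes = [
    {"prefix": "10.0.2.0/24", "next_hop": "10.0.1.4"},   # back-end subnet via NVA
    {"prefix": "10.0.0.0/16", "next_hop": "VnetLocal"},  # rest of the VNet directly
    {"prefix": "0.0.0.0/0",   "next_hop": "10.0.1.4"},   # internet-bound via NVA
]

def next_hop(destination):
    """Pick the route whose prefix is the most specific match."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    return max(matches,
               key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)["next_hop"]

print(next_hop("10.0.2.5"))   # back-end traffic goes through the NVA
print(next_hop("10.0.3.7"))   # other VNet traffic stays local
print(next_hop("8.8.8.8"))    # internet-bound traffic goes through the NVA
```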
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Az PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Sample script
-
-[!code-azurepowershell-interactive[main](../../../powershell_scripts/virtual-network/route-traffic-through-nva/route-traffic-through-nva.ps1 "Route traffic through a network virtual appliance")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources:
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup -Force
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|---|---|
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and front-end subnet. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates back-end and DMZ subnets. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address to access the VM from the internet. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates a virtual network interface and enables IP forwarding for it. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates a network security group (NSG). |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates NSG rules that allow HTTP and HTTPS ports inbound to the VM. |
-| [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig)| Associates the NSGs and route tables to subnets. |
-| [New-AzRouteTable](/powershell/module/az.network/new-azroutetable)| Creates a route table for all routes. |
-| [New-AzRouteConfig](/powershell/module/az.network/new-azrouteconfig)| Creates routes to route traffic between subnets and the internet through the VM. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates a virtual machine and attaches the NIC to it. This command also specifies the virtual machine image to use and administrative credentials. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group and all resources it contains. |
-
-## Next steps
-
-For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More virtual network PowerShell script samples can be found in [Virtual network PowerShell samples](../powershell-samples.md).
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
Title: DPDK in an Azure Linux VM
description: Learn the benefits of the Data Plane Development Kit (DPDK) and how to set up the DPDK on a Linux virtual machine.
Last updated : 04/24/2023
# Set up DPDK in a Linux virtual machine
virtual-network Virtual Network Manage Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-subnet.md
+ Last updated 03/20/2023
virtual-wan User Groups About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-about.md
description: Learn about using user groups to assign IP addresses from specific
Previously updated : 03/31/2023 Last updated : 05/29/2023
-# About user groups and IP address pools for P2S User VPNs - Preview
+# About user groups and IP address pools for P2S User VPNs
You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article describes the different configurations and parameters the Virtual WAN P2S VPN gateway uses to determine user groups and assign IP addresses. For configuration steps, see [Configure user groups and IP address pools for P2S User VPNs](user-groups-create.md).
virtual-wan User Groups Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-create.md
description: Learn how to configure user groups and assign IP addresses from spe
Previously updated : 03/31/2023 Last updated : 05/29/2023
-# Configure user groups and IP address pools for P2S User VPNs - Preview
+# Configure user groups and IP address pools for P2S User VPNs
P2S User VPNs provide the capability to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article helps you configure user groups, group members, and prioritize groups. For more information about working with user groups, see [About user groups](user-groups-about.md).
virtual-wan User Groups Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-radius.md
description: Learn how to configure RADIUS/NPS for user groups to assign IP addr
Previously updated : 03/31/2023 Last updated : 05/29/2023
-# RADIUS - Configure NPS for vendor-specific attributes - P2S user groups - Preview
+# RADIUS - Configure NPS for vendor-specific attributes - P2S user groups
The following section describes how to configure Windows Server Network Policy Server (NPS) to authenticate users so that it responds to Access-Request messages with the Vendor-Specific Attribute (VSA) used for user group support in Virtual WAN point-to-site VPN. The following steps assume that your Network Policy Server is already registered to Active Directory. The steps may vary depending on the vendor/version of your NPS server.
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
description: Learn what's new with Azure Virtual WAN such as the latest release
Previously updated : 05/24/2023 Last updated : 05/30/2023
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS feed.
| Type | Area | Name | Description | Date added | Limitations |
|---|---|---|---|---|---|
+|Feature|Remote User connectivity/Point-to-site VPN |[User Groups and IP address pools for P2S User VPNs](user-groups-about.md) |Ability to configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials.|May 2023| |
|Feature|Remote User connectivity/Point-to-site VPN|[Global profile include/exclude](global-hub-profile.md#include-or-exclude-a-hub-from-a-global-profile)|Ability to mark a point-to-site gateway as "excluded", meaning users who connect to the global profile won't be load-balanced to that gateway.|February 2022| |
|Feature|Remote User connectivity/Point-to-site VPN|[Forced tunneling for P2S VPN](how-to-forced-tunnel.md)|Ability to force all traffic to Azure Virtual WAN for egress.|October 2021|Only available for Azure VPN Client version 2:1900:39.0 or newer.|
|Feature|Remote User connectivity/Point-to-site VPN|[macOS Azure VPN client](openvpn-azure-ad-client-mac.md)|General Availability of the Azure VPN Client for macOS.|August 2021| |
The following features are currently in gated public preview. After working with
|Type of preview|Feature|Description|Contact alias|Limitations|
|---|---|---|---|---|
|Managed preview|Route-maps|This feature allows you to perform route aggregation, route filtering, and modify BGP attributes for your routes in Virtual WAN.|preview-route-maps@microsoft.com|Known limitations are displayed here: [About Route-maps (preview)](route-maps-about.md#key-considerations).|
-|Managed preview|Configure user groups and IP address pools for P2S User VPNs| This feature allows you to configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**.|| Known limitations are displayed here: [Configure User Groups and IP address pools for P2S User VPNs (preview)](user-groups-create.md).|
|Managed preview|Aruba EdgeConnect SD-WAN|Deployment of Aruba EdgeConnect SD-WAN NVA into the Virtual WAN hub.|preview-vwan-aruba@microsoft.com| |
|Managed preview|Checkpoint NGFW|Deployment of Checkpoint NGFW NVA into the Virtual WAN hub.|DL-vwan-support-preview@checkpoint.com, previewinterhub@microsoft.com|Same limitations as routing intent. Doesn't support internet inbound scenario.|
|Managed preview|Fortinet NGFW/SD-WAN|Deployment of Fortinet dual-role SD-WAN/NGFW NVA into the Virtual WAN hub.|azurevwan@fortinet.com, previewinterhub@microsoft.com|Same limitations as routing intent. Doesn't support internet inbound scenario.|