Updates from: 08/18/2021 03:03:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-network-topology.md
This article explains how to optimize traffic flow and network topology considerations when using Azure AD Application Proxy
When an application is published through Azure AD Application Proxy, traffic from the users to the applications flows through three connections:
1. The user connects to the Azure AD Application Proxy service public endpoint on Azure
-1. The Application Proxy service connects to the Application Proxy connector
+1. The Application Proxy connector connects to the Application Proxy service (outbound)
1. The Application Proxy connector connects to the target application

:::image type="content" source="./media/application-proxy-network-topology/application-proxy-three-hops.png" alt-text="Diagram showing traffic flow from user to target application." lightbox="./media/application-proxy-network-topology/application-proxy-three-hops.png":::
You can also consider using one other variant in this situation. If most users i
- [Enable Application Proxy](application-proxy-add-on-premises-application.md)
- [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md)
- [Enable Conditional Access](./application-proxy-integrate-with-sharepoint-server.md)
-- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
+- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension.md
There are two factors that affect which authentication methods are available wit
> [!NOTE]
> When you deploy the NPS extension, use these factors to evaluate which methods are available for your users. If your RADIUS client supports PAP, but the client UX doesn't have input fields for a verification code, then phone call and mobile app notification are the two supported options.
>
- > Also, regardless of the authentication protocol that's used (PAP, CHAP, or EAP), if your MFA method is text-based (SMS, mobile app verification code, or OATH hardware token) and requires the user to enter a code or text in the VPN client UI input field, the authentication might succeed. *But* any RADIUS attributes that are configured in the Network Access Policy are *not* forwarded to the RADIUS cient (the Network Access Device, like the VPN gateway). As a result, the VPN client might have more access than you want it to have, or less access or no access.
+ > Also, regardless of the authentication protocol that's used (PAP, CHAP, or EAP), if your MFA method is text-based (SMS, mobile app verification code, or OATH hardware token) and requires the user to enter a code or text in the VPN client UI input field, the authentication might succeed. *But* any RADIUS attributes that are configured in the Network Access Policy are *not* forwarded to the RADIUS client (the Network Access Device, like the VPN gateway). As a result, the VPN client might have more access than you want it to have, or less access or no access.
* The input methods that the client application (VPN, Netscaler server, or other) can handle. For example, does the VPN client have some means to allow the user to type in a verification code from a text or mobile app?
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
The set of optional claims available by default for applications to use are list
| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.|
| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user clicks on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you are operating in a guest scenario, where the user is from another tenant, then you must still provide a tenant identifier in the sign in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however it is exposed. |
| `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | |
-| `tenant_ctry` | Resource tenant's country | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
+| `tenant_ctry` | Resource tenant's country/region | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
| `tenant_region_scope` | Region of the resource tenant | JWT | | |
| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable.|
| `verified_primary_email` | Sourced from the user's PrimaryAuthoritativeEmail | JWT | | |
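Claims such as `login_hint`, `sid`, and `idtyp` travel in the JWT's base64url-encoded payload segment. As a rough sketch (the token below is fabricated, and real code must verify the signature with a proper library such as MSAL before trusting any claim), the payload can be inspected like this:

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a dict as a JWT segment (padding stripped)."""
    raw = json.dumps(data).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment WITHOUT verifying the signature.

    Illustration only: production code must validate the token before
    trusting any claim it carries.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A fabricated token carrying the optional claims discussed above
claims = {"login_hint": "opaque-hint-value", "sid": "session-id", "idtyp": "app"}
token = f"{b64url({'alg': 'none'})}.{b64url(claims)}."

decoded = decode_jwt_payload(token)
print(decoded["login_hint"])  # opaque-hint-value
```

An API receiving such a token could, per the table above, check `idtyp == "app"` to distinguish app-only tokens from app+user tokens.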
active-directory Groups Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-troubleshooting.md
+ # Troubleshoot and resolve groups issues

## Troubleshooting group creation issues
-**I disabled security group creation in the Azure portal but groups can still be created via Powershell**
+**I disabled security group creation in the Azure portal but groups can still be created via Powershell**
The **User can create security groups in Azure portals** setting in the Azure portal controls whether or not non-admin users can create security groups in the Access panel or the Azure portal. It does not control security group creation via PowerShell. To disable group creation for non-admin users in PowerShell:
1. Verify that non-admin users are allowed to create groups:
-
```powershell
Get-MsolCompanyInformation | Format-List UsersPermissionToCreateGroupsEnabled
```
-
2. If it returns `UsersPermissionToCreateGroupsEnabled : True`, then non-admin users can create groups. To disable this feature:
-
- ```
+ ```powershell
Set-MsolCompanySettings -UsersPermissionToCreateGroupsEnabled $False
```
-<br/>**I received a max groups allowed error when trying to create a Dynamic Group in Powershell**<br/>
+**I received a max groups allowed error when trying to create a Dynamic Group in Powershell**
If you receive a message in PowerShell indicating _Dynamic group policies max allowed groups count reached_, this means you have reached the max limit for Dynamic groups in your organization. The max number of Dynamic groups per organization is 5,000. To create any new Dynamic groups, you'll first need to delete some existing Dynamic groups. There's no way to increase the limit.

## Troubleshooting dynamic memberships for groups
-**I configured a rule on a group but no memberships get updated in the group**<br/>
-1. Verify the values for user or device attributes in the rule. Ensure there are users that satisfy the rule.
-For devices, check the device properties to ensure any synced attributes contain the expected values.<br/>
+**I configured a rule on a group but no memberships get updated in the group**
+1. Verify the values for user or device attributes in the rule. Ensure there are users that satisfy the rule.
+For devices, check the device properties to ensure any synced attributes contain the expected values.
2. Check the membership processing status to confirm that it is complete. You can check the [membership processing status](groups-create-rule.md#check-processing-status-for-a-rule) and the last updated date on the **Overview** page for the group. If everything looks good, please allow some time for the group to populate. Depending on the size of your Azure AD organization, the group may take up to 24 hours to populate for the first time or after a rule change.
-**I configured a rule, but now the existing members of the rule are removed**<br/>This is expected behavior. Existing members of the group are removed when a rule is enabled or changed. The users returned from evaluation of the rule are added as members to the group.
+**I configured a rule, but now the existing members of the rule are removed**
+This is expected behavior. Existing members of the group are removed when a rule is enabled or changed. The users returned from evaluation of the rule are added as members to the group.
-**I don't see membership changes instantly when I add or change a rule, why not?**<br/>Dedicated membership evaluation is done periodically in an asynchronous background process. How long the process takes is determined by the number of users in your directory and the size of the group created as a result of the rule. Typically, directories with small numbers of users will see the group membership changes in less than a few minutes. Directories with a large number of users can take 30 minutes or longer to populate.
+**I don't see membership changes instantly when I add or change a rule, why not?**
+Dedicated membership evaluation is done periodically in an asynchronous background process. How long the process takes is determined by the number of users in your directory and the size of the group created as a result of the rule. Typically, directories with small numbers of users will see the group membership changes in less than a few minutes. Directories with a large number of users can take 30 minutes or longer to populate.
-**How can I force the group to be processed now?**<br/>
-Currently, there is no way to automatically trigger the group to be processed on demand. However, you can manually trigger the reprocessing by updating the membership rule to add a whitespace at the end.
+**How can I force the group to be processed now?**
+Currently, there is no way to automatically trigger the group to be processed on demand. However, you can manually trigger the reprocessing by updating the membership rule to add a whitespace at the end.
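The "add a whitespace" trick can be scripted. The helper below is hypothetical (it is not part of any Azure AD SDK); it just produces a rule string that differs only by a trailing space, which is enough to make the saved rule text change without changing which users it matches:

```python
def nudge_membership_rule(rule: str) -> str:
    """Return a trivially different rule string to trigger reprocessing.

    Toggles a single trailing space: the rule's semantics are unchanged,
    but the text differs, so saving it forces re-evaluation.
    """
    return rule[:-1] if rule.endswith(" ") else rule + " "

rule = '(user.department -eq "Sales")'
nudged = nudge_membership_rule(rule)
print(nudged != rule, nudged.strip() == rule.strip())  # True True
```

Applying the helper twice returns the original text, so repeated nudges never accumulate whitespace.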
-**I encountered a rule processing error**<br/>The following table lists common dynamic membership rule errors and how to correct them.
+**I encountered a rule processing error**
+The following table lists common dynamic membership rule errors and how to correct them.
| Rule parser error | Error usage | Corrected usage |
| | | |
| Error: Attribute not supported. |(user.invalidProperty -eq "Value") |(user.department -eq "value")<br/><br/>Make sure the attribute is on the [supported properties list](groups-dynamic-membership.md#supported-properties). |
| Error: Operator is not supported on attribute. |(user.accountEnabled -contains true) |(user.accountEnabled -eq true)<br/><br/>The operator used is not supported for the property type (in this example, -contains cannot be used on type boolean). Use the correct operators for the property type. |
-| Error: Query compilation error. | 1. (user.department -eq "Sales") (user.department -eq "Marketing")<br>2. (user.userPrincipalName -match "*@domain.ext") | 1. Missing operator. Use -and or -or two join predicates<br>(user.department -eq "Sales") -or (user.department -eq "Marketing")<br>2. Error in regular expression used with -match<br>(user.userPrincipalName -match ".*@domain.ext")<br>or alternatively: (user.userPrincipalName -match "@domain.ext$") |
+| Error: Query compilation error. | 1. (user.department -eq "Sales") (user.department -eq "Marketing")<br>2. (user.userPrincipalName -match "\*@domain.ext") | 1. Missing operator. Use -and or -or to join predicates<br>(user.department -eq "Sales") -or (user.department -eq "Marketing")<br>2. Error in regular expression used with -match<br>(user.userPrincipalName -match ".\*@domain.ext")<br>or alternatively: (user.userPrincipalName -match "@domain.ext$") |
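Because `-match` takes a regular expression, the second error in the table can be reproduced with any regex engine. A quick sketch in Python (the UPN value is invented):

```python
import re

# "*@domain.ext" is not a valid regular expression: "*" has nothing to repeat
try:
    re.compile("*@domain.ext")
    valid = True
except re.error:
    valid = False
print(valid)  # False

# Either corrected pattern from the table matches the intended UPNs
upn = "alice@domain.ext"
print(bool(re.search(r".*@domain.ext", upn)))  # True
print(bool(re.search(r"@domain\.ext$", upn)))  # True
```

The anchored form `@domain\.ext$` is the stricter of the two, since it requires the suffix at the end of the string rather than anywhere in it.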
## Next steps
These articles provide additional information on Azure Active Directory.
* [Managing access to resources with Azure Active Directory groups](../fundamentals/active-directory-manage-groups.md)
* [Application Management in Azure Active Directory](../manage-apps/what-is-application-management.md)
* [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md)
-* [Integrating your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-ps-examples.md
Output:
ObjectId DisplayName Licenses
-- -- --
7023a314-6148-4d7b-b33f-6c775572879a EMS E5 – Licensed users EMSPREMIUM
-cf41f428-3b45-490b-b69f-a349c8a4c38e PowerBi - Licensed users POWER\_BI\_STANDARD
+cf41f428-3b45-490b-b69f-a349c8a4c38e PowerBi - Licensed users POWER_BI_STANDARD
962f7189-59d9-4a29-983f-556ae56f19a5 O365 E3 - Licensed users ENTERPRISEPACK
c2652d63-9161-439b-b74e-fcd8228a7074 EMSandOffice {ENTERPRISEPREMIUM,EMSPREMIUM}
```
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-risks.md
These risks can be calculated in real-time or calculated offline using Microsoft
| | | |
| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their login telemetry (IP address, location, device, etc.) for potentially malicious intent. |
| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. Among several other factors, this machine learning algorithm takes into account the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second, indicating that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. |
-| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as time active and authentication from unfamiliar IP address. |
-| Token Issuer Anomaly | Offline | This risk detection indicates there is unusual activity with known attack patterns, such as updating trusted realm federation settings or changing a signing certificate. |
+| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. |
+| Token Issuer Anomaly | Offline |This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns. |
| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. |
| Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. |
| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that's not already in the list of familiar locations. Newly created users will be in "learning mode" for a period of time in which unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols do not have modern properties such as client ID, there is limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. |
active-directory Subscription Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/subscription-requirements.md
To use Azure Active Directory (Azure AD) Privileged Identity Management (PIM), a
## Valid licenses
-You will need [!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)] to use PIM and all of it's settings. Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
+You will need an Azure AD license to use PIM and all of its settings. Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required. [!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)]
## Licenses you must have
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
By using Azure Monitor, you can collect custom metrics via your application tele
## Send custom metrics
-For this tutorial, we deploy a Linux VM that runs the Ubuntu 16.04 LTS operating system. The Telegraf agent is supported for most Linux operating systems. Both Debian and RPM packages are available along with unpackaged Linux binaries on the [InfluxData download portal](https://portal.influxdata.com/downloads). See this [Telegraf installation guide](https://docs.influxdata.com/telegraf/v1.8/introduction/installation/) for additional installation instructions and options.
+For this tutorial, we deploy a Linux VM that runs the Ubuntu 18.04 LTS operating system. The Telegraf agent is supported for most Linux operating systems. Both Debian and RPM packages are available along with unpackaged Linux binaries on the [InfluxData download portal](https://portal.influxdata.com/downloads). See this [Telegraf installation guide](https://docs.influxdata.com/telegraf/v1.8/introduction/installation/) for additional installation instructions and options.
Sign in to the [Azure portal](https://portal.azure.com).
Create a new Linux VM:
1. Select the **Create a resource** option from the left-hand navigation pane.
1. Search for **Virtual Machine**.
-1. Select **Ubuntu 16.04 LTS** and select **Create**.
+1. Select **Ubuntu 18.04 LTS** and select **Create**.
1. Provide a VM name like **MyTelegrafVM**.
1. Leave the disk type as **SSD**. Then provide a **username**, such as **azureuser**.
1. For **Authentication type**, select **Password**. Then enter a password you'll use later to SSH into this VM.
Paste the SSH connection command into a shell, such as Azure Cloud Shell or Bash
To install the Telegraf Debian package onto the VM, run the following commands from your SSH session:
-```cmd
+```bash
# download the package to the VM
-wget https://dl.influxdata.com/telegraf/releases/telegraf_1.8.0~rc1-1_amd64.deb
-# install the package
-sudo dpkg -i telegraf_1.8.0~rc1-1_amd64.deb
+curl -s https://repos.influxdata.com/influxdb.key | sudo apt-key add -
+source /etc/lsb-release
+echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
```
-Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands:
-```cmd
+Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands:
+
+```bash
# generate the new Telegraf config file in the current directory
telegraf --input-filter cpu:mem --output-filter azure_monitor config > azm-telegraf.conf
sudo cp azm-telegraf.conf /etc/telegraf/telegraf.conf
Finally, to have the agent start using the new configuration, we force the agent to stop and start by running the following commands:
-```cmd
+```bash
# stop the telegraf agent on the VM
sudo systemctl stop telegraf
# start the telegraf agent on the VM to ensure it picks up the latest configuration
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Use a metric measurement rule with the following query.
Heartbeat
| summarize TimeGenerated=max(TimeGenerated) by Computer
| extend Duration = datetime_diff('minute',now(),TimeGenerated)
-| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,1) |
+| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m)
```

**Single alert**
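For intuition, the corrected query's per-computer logic can be sketched in Python (the heartbeat records below are invented sample data, not the Log Analytics schema):

```python
from datetime import datetime, timedelta

now = datetime(2021, 8, 18, 12, 0)

# Invented sample heartbeat records: (computer, TimeGenerated)
heartbeats = [
    ("vm1", now - timedelta(minutes=32)),
    ("vm1", now - timedelta(minutes=2)),
    ("vm2", now - timedelta(minutes=47)),
]

# summarize TimeGenerated=max(TimeGenerated) by Computer
latest = {}
for computer, t in heartbeats:
    latest[computer] = max(latest.get(computer, t), t)

# extend Duration = datetime_diff('minute', now(), TimeGenerated)
duration = {c: int((now - t).total_seconds() // 60) for c, t in latest.items()}
print(duration)  # {'vm1': 2, 'vm2': 47}
```

Here `vm2` has not sent a heartbeat for 47 minutes, so it would breach a typical "no heartbeat in 15 minutes" threshold while `vm1` would not.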
cdn Cdn Improve Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-improve-performance.md
These profiles support the following compression encodings:
- gzip (GNU zip)
- DEFLATE
- bzip2
-- brotli
-
-If the request supports more than one compression type, those compression types take precedence over brotli compression.
-When a request for an asset specifies brotli compression (HTTP header is `Accept-Encoding: br`) and the request results in a cache miss, Azure CDN performs brotli compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache.
+Azure CDN from Verizon does not support brotli compression. When the HTTP request has the header `Accept-Encoding: br`, the CDN responds with an uncompressed response.
### Azure CDN Standard from Akamai profiles
cdn Cdn Manage Expiration Of Cloud Service Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-manage-expiration-of-cloud-service-content.md
The preferred method for setting a web server's `Cache-Control` header is to use
![CDN custom caching rules example](./media/cdn-manage-expiration-of-cloud-service-content/cdn-custom-caching-rules-example.png)
- The first custom caching rule sets a cache duration of four hours for any files in the `/webfolder1` folder on the origin server specified by your endpoint. The second rule overrides the first rule for the `file1.txt` file only and sets a cache duration of two hours for it.
+ The first custom caching rule sets a cache duration of four days for any files in the `/webfolder1` folder on the origin server specified by your endpoint. The second rule overrides the first rule for the `file1.txt` file only and sets a cache duration of two days for it.
1. Select **Save**.
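For reference, those portal durations correspond to the following `Cache-Control: max-age` values in seconds (a quick sanity check, not CDN configuration syntax):

```python
# Translate the caching-rule durations above into Cache-Control max-age seconds
four_days = 4 * 24 * 60 * 60
two_days = 2 * 24 * 60 * 60
print(f"Cache-Control: max-age={four_days}")  # Cache-Control: max-age=345600
print(f"Cache-Control: max-age={two_days}")   # Cache-Control: max-age=172800
```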
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-msft-http-debug-headers.md
X-Cache: TCP_REMOTE_HIT | This header is returned when the content is served fro
X-Cache: TCP_MISS | This header is returned when there is a cache miss, and the content is served from the Origin.
X-Cache: PRIVATE_NOSTORE | This header is returned when the request cannot be cached as Cache-Control response header is set to either private or no-store.
X-Cache: CONFIG_NOCACHE | This header is returned when the request is configured not to cache in the CDN profile.
+X-Cache: N/A | This header is returned when the request was denied by Signed URL and Rules Set.
For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend).
cognitive-services Extractive Summarization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/extractive-summarization.md
Each document has the following parameters
All documents in one request share the following parameters. These parameters can be specified in the `tasks` definition in the request.
* `model-version` to specify which version of the model to use, with `latest` being the default. For more information, see [Model version](../concepts/model-versioning.md)
* `sentenceCount` to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20.
-* `sortyby` to specify in what order the extracted sentences will be returned. The accepted values for `sortBy` are `Offset` and `Rank`, with `Offset` being the default. The value `Offset` is the start position of a sentence in the original document.
+* `sortBy` to specify in what order the extracted sentences will be returned. The accepted values for `sortBy` are `Offset` and `Rank`, with `Offset` being the default. The value `Offset` is the start position of a sentence in the original document.
```json
{
In this article, you learned concepts and workflow for extractive summarization
* [Text Analytics overview](../overview.md)
* [What's new](../whats-new.md)
-* [Model versions](../concepts/model-versioning.md)
+* [Model versions](../concepts/model-versioning.md)
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
Last updated 03/08/2020
In this document, we will primarily focus on learning fundamental concepts with various examples to explore the ability to create parameterized data pipelines within Azure Data Factory. Parameterization and dynamic expressions are such notable additions to ADF because they can save a tremendous amount of time and allow for a much more flexible Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) solution, which will dramatically reduce the cost of solution maintenance and speed up the implementation of new features into existing pipelines. These gains are because parameterization minimizes the amount of hard coding and increases the number of reusable objects and processes in a solution.
-## Azure data factory UI and parameters
+## Azure Data Factory UI and parameters
-If you are new to Azure data factory parameter usage in ADF user interface, please review [Data factory UI for linked services with parameters](./parameterize-linked-services.md#ui-experience) and [Data factory UI for metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md#data-factory-ui) for visual explanation.
+If you are new to Azure Data Factory parameter usage in ADF user interface, please review [Data Factory UI for linked services with parameters](./parameterize-linked-services.md#ui-experience) and [Data Factory UI for metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md#data-factory-ui) for visual explanation.
## Parameter and expression concepts
These functions are useful inside conditions, they can be used to evaluate any t
## Detailed examples for practice
-### Detailed Azure data factory copy pipeline with parameters
+### Detailed Azure Data Factory copy pipeline with parameters
-This [Azure Data factory copy pipeline parameter passing tutorial](https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-data-factory-passing-parameters/Azure%20data%20Factory-Whitepaper-PassingParameters.pdf) walks you through how to pass parameters between a pipeline and activity as well as between the activities.
+This [Azure Data Factory copy pipeline parameter passing tutorial](https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-data-factory-passing-parameters/Azure%20data%20Factory-Whitepaper-PassingParameters.pdf) walks you through how to pass parameters between a pipeline and activity as well as between the activities.
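As a minimal illustration of parameter passing (the pipeline and parameter names below are invented, and this fragment is a sketch rather than a complete pipeline definition), a pipeline parameter can be consumed through a dynamic expression:

```json
{
  "name": "CopyFiles",
  "parameters": { "folderName": { "type": "String" } },
  "typeProperties": {
    "source": {
      "folderPath": {
        "value": "@concat('input/', pipeline().parameters.folderName)",
        "type": "Expression"
      }
    }
  }
}
```

At run time, `@concat('input/', pipeline().parameters.folderName)` resolves against the value supplied for `folderName`, so one pipeline definition can serve many folders.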
### Detailed Mapping data flow pipeline with parameters
Please follow [Metadata driven pipeline with parameters](./how-to-use-trigger-pa
## Next steps
-For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
+For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
This article builds on the information in the [overview article](./event-hubs-ab
## Namespace
-An Event Hubs namespace is a management container for event hubs (or topics, in Kafka parlance). It provides DNS integrated network endpoints and a range of access control and network integration management features such as [IP filtering](event-hubs-ip-filtering.md), [virtual network service endpoint](event-hubs-service-endpoints.md), and [Private Link](private-link-service.md) and
+An Event Hubs namespace is a management container for event hubs (or topics, in Kafka parlance). It provides DNS integrated network endpoints and a range of access control and network integration management features such as [IP filtering](event-hubs-ip-filtering.md), [virtual network service endpoint](event-hubs-service-endpoints.md), and [Private Link](private-link-service.md).
:::image type="content" source="./media/event-hubs-features/namespace.png" alt-text="Image showing an Event Hubs namespace":::
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-environments.md
The following diagram shows three environment definitions. Two of them have diff
![Diagram of environment caching as Docker images](./media/concept-environments/environment-caching.png) >[!IMPORTANT]
-> If you create an environment with an unpinned package dependency, for example ```numpy```, that environment will keep using the package version installed _at the time of environment creation_. Also, any future environment with matching definition will keep using the old version.
-
-To update the package, specify a version number to force image rebuild, for example ```numpy==1.18.1```. New dependencies, including nested ones, will be installed that might break a previously working scenario.
+> * If you create an environment with an unpinned package dependency, for example, `numpy`, the environment uses the package version that was *installed when the environment was created*. Also, any future environment that uses a matching definition will use the original version.
+>
+> To update the package, specify a version number to force image rebuild, for example, `numpy==1.18.1`. New dependencies, including nested ones, will be installed, and they might break a previously working scenario.
+>
+> * Using an unpinned base image like `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04` in your environment definition results in the environment being rebuilt every time the latest tag is updated. It's assumed that you want to keep up to date with the latest version for various reasons, such as addressing vulnerabilities and picking up system updates and patches.
> [!WARNING] > The [Environment.build](/python/api/azureml-core/azureml.core.environment.environment#build-workspace--image-build-compute-none-) method will rebuild the cached image, with the possible side effect of updating unpinned packages and breaking reproducibility for all environment definitions that correspond to that cached image.
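The caching behavior described in the note above can be illustrated with a small sketch. This is not the actual Azure ML implementation — just the general idea that cached images are keyed on the definition text, not on the package versions actually installed:

```python
import hashlib

def environment_cache_key(dependencies):
    """Illustrative only: cached images are looked up by a hash of the
    environment definition text, not by the versions actually installed."""
    canonical = "\n".join(sorted(dependencies))
    return hashlib.sha256(canonical.encode()).hexdigest()

# An unpinned definition hashes the same forever, so the old image keeps being reused.
assert environment_cache_key(["numpy"]) == environment_cache_key(["numpy"])

# Pinning a version changes the definition text, which forces a rebuild.
assert environment_cache_key(["numpy==1.18.1"]) != environment_cache_key(["numpy"])
```

This is why pinning versions (`numpy==1.18.1` rather than `numpy`) both forces the rebuild when you want one and keeps builds reproducible afterward.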
postgresql Howto Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-migrate-using-export-and-import.md
Follow these steps to export and import your PostgreSQL database.
To export your existing PostgreSQL database on-premises or in a VM to a SQL script file, run the following command in your existing environment: ```bash
-pg_dump ΓÇô-host=<host> --username=<name> --dbname=<database name> --file=<database>.sql
+pg_dump --host=<host> --username=<name> --dbname=<database name> --file=<database>.sql
``` For example, if you have a local server and a database called **testdb** in it: ```bash
psql --file=testdb.sql --host=mydemoserver.database.windows.net --port=5432 --us
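If you script the migration, the same commands can be assembled programmatically. A minimal Python sketch, using the placeholder host and database names from the examples above, builds the `pg_dump` invocation as an argument list for `subprocess.run`:

```python
def pg_dump_command(host, username, dbname, outfile):
    # Builds the pg_dump invocation shown above as an argument list,
    # suitable for subprocess.run(cmd, check=True).
    return [
        "pg_dump",
        f"--host={host}",
        f"--username={username}",
        f"--dbname={dbname}",
        f"--file={outfile}",
    ]

cmd = pg_dump_command("localhost", "postgres", "testdb", "testdb.sql")
assert cmd[0] == "pg_dump" and "--dbname=testdb" in cmd
```

Passing an argument list (rather than a shell string) avoids quoting issues when names contain special characters.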
## Next steps - To migrate a PostgreSQL database using dump and restore, see [Migrate your PostgreSQL database using dump and restore](howto-migrate-using-dump-and-restore.md).-- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
+- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
### Storage |Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--|
-| Azure Blob storage (including Data Lake Storage Gen2) | All public regions<br/> All Government regions | Supported on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for blob storage.](tutorial-private-endpoint-storage-portal.md) |
+| Azure Blob storage (including Data Lake Storage Gen2) | All public regions<br/> All Government regions | Supported only on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for blob storage.](tutorial-private-endpoint-storage-portal.md) |
| Azure Files | All public regions<br/> All Government regions | | GA <br/> [Learn how to create Azure Files network endpoints.](../storage/files/storage-files-networking-endpoints.md) | | Azure File Sync | All public regions | | GA <br/> [Learn how to create Azure Files network endpoints.](../storage/file-sync/file-sync-networking-endpoints.md) |
-| Azure Queue storage | All public regions<br/> All Government regions | Supported on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for queue storage.](tutorial-private-endpoint-storage-portal.md) |
-| Azure Table storage | All public regions<br/> All Government regions | Supported on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for table storage.](tutorial-private-endpoint-storage-portal.md) |
+| Azure Queue storage | All public regions<br/> All Government regions | Supported only on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for queue storage.](tutorial-private-endpoint-storage-portal.md) |
+| Azure Table storage | All public regions<br/> All Government regions | Supported only on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for table storage.](tutorial-private-endpoint-storage-portal.md) |
| Azure Batch | All public regions except: Germany CENTRAL, Germany NORTHEAST <br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) | ### Web
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net | | Azure Synapse Analytics (Microsoft.Synapse/workspaces) / SqlOnDemand | privatelink.sql.azuresynapse.net | sqlondemand.azuresynapse.net | | Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net |
+| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) / Web | privatelink.azuresynapse.net | azuresynapse.net |
| Storage account (Microsoft.Storage/storageAccounts) / Blob (blob, blob_secondary) | privatelink.blob.core.windows.net | blob.core.windows.net | | Storage account (Microsoft.Storage/storageAccounts) / Table (table, table_secondary) | privatelink.table.core.windows.net | table.core.windows.net | | Storage account (Microsoft.Storage/storageAccounts) / Queue (queue, queue_secondary) | privatelink.queue.core.windows.net | queue.core.windows.net |
For Azure services, use the recommended zone names as described in the following
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net | | Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) / redisCache | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net | | Azure Purview (Microsoft.Purview)| privatelink.purview.azure.com | purview.azure.com |
+| Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
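Most — though not all — entries in the table follow a simple convention: the recommended private DNS zone is the public zone prefixed with `privatelink.`. A sketch of that rule of thumb; note that exceptions exist (for example, the Synapse SqlOnDemand sub-resource maps to `privatelink.sql.azuresynapse.net`, not `privatelink.sqlondemand.azuresynapse.net`), so always confirm against the table:

```python
def recommended_private_zone(public_zone):
    """Rule of thumb only -- verify against the table above, since some
    sub-resources (e.g. Synapse SqlOnDemand) don't follow the pattern."""
    return "privatelink." + public_zone

assert recommended_private_zone("blob.core.windows.net") == "privatelink.blob.core.windows.net"
assert recommended_private_zone("redis.cache.windows.net") == "privatelink.redis.cache.windows.net"
```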
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-firewall-and-proxy.md
The following table describes the required domains for communication:
If &ast;.afs.azure.net or &ast;.one.microsoft.com is too broad, you can limit the server's communication to only the explicit regional instances of the Azure File Sync service. Which instance(s) to choose depends on the region of the storage sync service you have deployed and registered the server to. The URL for that region is called the "Primary endpoint URL" in the table below.
-For business continuity and disaster recovery (BCDR) reasons you may have specified your Azure file shares in a globally redundant (GRS) storage account. If that is the case, then your Azure file shares will fail over to the paired region in the event of a lasting regional outage. Azure File Sync uses the same regional pairings as storage. So if you use GRS storage accounts, you need to enable additional URLs to allow your server to talk to the paired region for Azure File Sync. The table below calls this "Paired region". Additionally, there is a traffic manager profile URL that needs to be enabled as well. This will ensure network traffic can be seamlessly re-routed to the paired region in the event of a fail-over and is called "Discovery URL" in the table below.
+For business continuity and disaster recovery (BCDR) reasons you may have created your Azure file shares in a storage account that is configured for geo-redundant storage (GRS). If that is the case, your Azure file shares will fail over to the paired region in the event of a lasting regional outage. Azure File Sync uses the same regional pairings as storage. So if you use GRS storage accounts, you need to enable additional URLs to allow your server to talk to the paired region for Azure File Sync. The table below calls this "Paired region". Additionally, there is a traffic manager profile URL that needs to be enabled as well. This will ensure network traffic can be seamlessly re-routed to the paired region in the event of a fail-over and is called "Discovery URL" in the table below.
| Cloud | Region | Primary endpoint URL | Paired region | Discovery URL | |--|--|-|||
For business continuity and disaster recovery (BCDR) reasons you may have specif
| Government | US Gov Arizona | https:\//usgovarizona01.afs.azure.us | US Gov Texas | https:\//tm-usgovarizona01.afs.azure.us | | Government | US Gov Texas | https:\//usgovtexas01.afs.azure.us | US Gov Arizona | https:\//tm-usgovtexas01.afs.azure.us | -- If you use locally redundant (LRS) or zone redundant (ZRS) storage accounts, you only need to enable the URL listed under "Primary endpoint URL".
+- If you use a storage account configured for locally redundant storage (LRS) or zone redundant storage (ZRS), you only need to enable the URL listed under "Primary endpoint URL".
-- If you use globally redundant (GRS) storage accounts, enable three URLs.
+- If you use a storage account configured for GRS, enable three URLs.
**Example:** You deploy a storage sync service in `"West US"` and register your server with it. The URLs to allow the server to communicate to for this case are:
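The allow-list logic described above can be sketched as a small helper. The endpoint values below are copied from the Government rows of the table; the helper itself is illustrative only, not an official API:

```python
# Values copied from the Government rows of the table above.
REGIONS = {
    "US Gov Arizona": {
        "primary": "https://usgovarizona01.afs.azure.us",
        "paired_region": "US Gov Texas",
        "discovery": "https://tm-usgovarizona01.afs.azure.us",
    },
    "US Gov Texas": {
        "primary": "https://usgovtexas01.afs.azure.us",
        "paired_region": "US Gov Arizona",
        "discovery": "https://tm-usgovtexas01.afs.azure.us",
    },
}

def urls_to_allow(region, redundancy):
    # LRS/ZRS need only the primary endpoint URL; GRS also needs the
    # paired region's primary URL and the discovery (traffic manager) URL.
    entry = REGIONS[region]
    urls = [entry["primary"]]
    if redundancy == "GRS":
        urls.append(REGIONS[entry["paired_region"]]["primary"])
        urls.append(entry["discovery"])
    return urls

assert urls_to_allow("US Gov Arizona", "ZRS") == ["https://usgovarizona01.afs.azure.us"]
assert len(urls_to_allow("US Gov Arizona", "GRS")) == 3
```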
virtual-desktop Start Virtual Machine Connect Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/start-virtual-machine-connect-faq.md
To configure the deallocation policy:
>[!NOTE] >Make sure to set the time limit for the "End a disconnected session" policy to a value greater than five minutes. A low time limit can cause users' sessions to end if their network loses connection for too long, resulting in lost work.
-Signing users out won't deallocate their VMs. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md).
+Signing users out won't deallocate their VMs. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Scale session hosts using Azure Automation](set-up-scaling-script.md) for pooled host pools.
## Can users turn off the VM from their clients?
-Yes. Users can shut down the VM by using the Start menu within their session, just like they would with a physical machine. However, shutting down the VM won't deallocate the VM. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md).
+Yes. Users can shut down the VM by using the Start menu within their session, just like they would with a physical machine. However, shutting down the VM won't deallocate the VM. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Scale session hosts using Azure Automation](set-up-scaling-script.md) for pooled host pools.
## Next steps
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dav4-dasv4-series.md
Dav4-series sizes are based on the 2.35 GHz AMD EPYC<sup>TM</sup> 7452 processor
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> <br>
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-json.md
If you do not specify a buildTimeoutInMinutes value, or set it to 0, this will u
If you find you need more time for customizations to complete, set this value to the time you think you need, plus a little overhead. But do not set it too high, because you might have to wait for the build to time out before seeing an error.
+> [!NOTE]
+> If you don't set the value to 0, the minimum supported value is 6 minutes; values 1 through 5 will fail.
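The rule in the note above can be expressed as a small validation helper. This is illustrative only — the actual check happens server-side in the Image Builder service:

```python
DEFAULT_BUILD_TIMEOUT_MINUTES = 240  # service default, used when the value is 0

def resolve_build_timeout(build_timeout_in_minutes):
    # 0 means "use the service default"; 1-5 are rejected; 6 or more override it.
    if build_timeout_in_minutes == 0:
        return DEFAULT_BUILD_TIMEOUT_MINUTES
    if build_timeout_in_minutes < 6:
        raise ValueError("buildTimeoutInMinutes must be 0 or at least 6")
    return build_timeout_in_minutes

assert resolve_build_timeout(0) == 240
assert resolve_build_timeout(90) == 90
```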
## Properties: customize
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-troubleshoot.md
Image Builder service uses port 22 (Linux) or 5986 (Windows) to connect to the bui
#### Solution Review your scripts for firewall changes/enablement, or changes to SSH or WinRM, and ensure any changes allow for constant connectivity between the service and build VM on the ports above. For more information on Image Builder networking, please review the [requirements](./image-builder-networking.md).
+### JWT errors in log early in the build
+
+#### Error
+Early in the build process, the build fails and the log indicates a JWT error:
+
+```text
+PACKER OUT Error: Failed to prepare build: "azure-arm"
+PACKER ERR
+PACKER OUT
+PACKER ERR * client_jwt will expire within 5 minutes, please use a JWT that is valid for at least 5 minutes
+PACKER OUT 1 error(s) occurred:
+```
+
+#### Cause
+The `buildTimeoutInMinutes` value in the template is set to between 1 and 5 minutes.
+
+#### Solution
+As described in [Create an Azure Image Builder template](./image-builder-json.md), `buildTimeoutInMinutes` must be set to 0 to use the default or to at least 6 minutes to override it. Change the timeout in your template to 0, or to a value of 6 minutes or more.
+ ## DevOps task ### Troubleshooting the task
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/image-builder.md
This article is to show you how you can create a customized Windows image using
- identity - providing an identity for Azure Image Builder to use during the build
-You can also specify a `buildTimeoutInMinutes`. The default is 240 minutes, and you can increase a build time to allow for longer running builds.
+You can also specify a `buildTimeoutInMinutes`. The default is 240 minutes, and you can increase a build time to allow for longer running builds. The minimum allowed value is 6 minutes; shorter values will cause errors.
We will be using a sample .json template to configure the image. The .json file we are using is here: [helloImageTemplateWin.json](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json).