Updates from: 01/25/2021 04:04:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/troubleshoot-with-application-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-with-application-insights.md
@@ -22,7 +22,7 @@ This article provides steps for collecting logs from Active Directory B2C (Azure
The detailed activity logs described here should be enabled **ONLY** during the development of your custom policies. > [!WARNING]
-> Do not set the `DeploymentMode` to `Developer` in production environments. Logs collect all claims sent to and from identity providers. You as the developer assume responsibility for any personal data collected in your Application Insights logs. These detailed logs are collected only when the policy is placed in **DEVELOPER MODE**.
+> Do not set the `DeploymentMode` to `Development` in production environments. Logs collect all claims sent to and from identity providers. You as the developer assume responsibility for any personal data collected in your Application Insights logs. These detailed logs are collected only when the policy is placed in **DEVELOPER MODE**.
## Set up Application Insights
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-nps-extension-advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension-advanced.md
@@ -19,9 +19,6 @@ ms.collection: M365-identity-device-management
The Network Policy Server (NPS) extension extends your cloud-based Azure AD Multi-Factor Authentication features into your on-premises infrastructure. This article assumes that you already have the extension installed, and now want to know how to customize the extension for your needs.
-> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ## Alternate login ID Since the NPS extension connects to both your on-premises and cloud directories, you might encounter an issue where your on-premises user principal names (UPNs) don't match the names in the cloud. To solve this problem, use alternate login IDs.
@@ -51,7 +48,7 @@ To configure an IP allowed list, go to `HKLM\SOFTWARE\Microsoft\AzureMfa` and co
> [!NOTE] > This registry key is not created by default by the installer and an error appears in the AuthZOptCh log when the service is restarted. This error in the log can be ignored, but if this registry key is created and left empty (even when not needed), the error message does not return.
-When a request comes in from an IP address that exists in the `IP_WHITELIST`, two-step verification is skipped. The IP list is compared to the IP address that is provided in the *ratNASIPAddress* attribute of the RADIUS request. If a RADIUS request comes in without the ratNASIPAddress attribute, the following warning is logged: "P_WHITE_LIST_WARNING::IP Whitelist is being ignored as source IP is missing in RADIUS request in NasIpAddress attribute."
+When a request comes in from an IP address that exists in the `IP_WHITELIST`, two-step verification is skipped. The IP list is compared to the IP address that is provided in the *ratNASIPAddress* attribute of the RADIUS request. If a RADIUS request comes in without the ratNASIPAddress attribute, a warning is logged: "IP_WHITE_LIST_WARNING::IP Whitelist is being ignored as the source IP is missing in the RADIUS request NasIpAddress attribute."
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-health-adfs-risky-ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip.md
@@ -33,9 +33,6 @@ Additionally, it is possible for a single IP address to attempt multiple logins
> To access preview, Global Admin or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permission is required.   >
-> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ## What is in the report? The failed sign in activity client IP addresses are aggregated through Web Application Proxy servers. Each item in the Risky IP report shows aggregated information about failed AD FS sign-in activities which exceed designated threshold. It provides the following information: ![Screenshot that shows a Risky IP report with column headers highlighted.](./media/how-to-connect-health-adfs/report4a.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso-quick-start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
@@ -24,9 +24,6 @@ ms.collection: M365-identity-device-management
Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they are on their corporate desktops that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without needing any additional on-premises components.
-> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- To deploy Seamless SSO, follow these steps. ## Step 1: Check the prerequisites
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/reference-connect-government-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-government-cloud.md
@@ -20,9 +20,6 @@ This article describes considerations for integrating a hybrid environment with
> [!NOTE] > To integrate a Microsoft Active Directory environment (either on-premises or hosted in an IaaS that is part of the same cloud instance) with the Azure Government cloud, you need to upgrade to the latest release of [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594).
-> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- For a full list of United States government Department of Defense endpoints, refer to the [documentation](/office365/enterprise/office-365-u-s-government-dod-endpoints). ## Azure AD Pass-through Authentication
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/g-suite-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
@@ -20,9 +20,6 @@ This tutorial describes the steps you need to perform in both G Suite and Azure
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ## Capabilities supported > [!div class="checklist"] > * Create users in G Suite
app-service https://docs.microsoft.com/en-us/azure/app-service/samples-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-cli.md
@@ -36,8 +36,8 @@ The following table includes links to bash scripts built using the Azure CLI.
| [Connect an app to a storage account](./scripts/cli-connect-to-storage.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an App Service app and a storage account, then adds the storage connection string to the app settings. | | [Connect an app to an Azure Cache for Redis](./scripts/cli-connect-to-redis.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and an Azure Cache for Redis, then adds the Redis connection details to the app settings. | | [Connect an app to Cosmos DB](./scripts/cli-connect-to-documentdb.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and a Cosmos DB, then adds the Cosmos DB connection details to the app settings. |
-|**Back up and restore app**||
-| [Back up an app](./scripts/cli-backup-onetime.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and creates a one-time backup for it. |
+|**Backup and restore app**||
+| [Backup an app](./scripts/cli-backup-onetime.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and creates a one-time backup for it. |
| [Create a scheduled backup for an app](./scripts/cli-backup-scheduled.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and creates a scheduled backup for it. | | [Restores an app from a backup](./scripts/cli-backup-restore.md?toc=%2fcli%2fazure%2ftoc.json) | Restores an App Service app from a backup. | |**Monitor app**||
app-service https://docs.microsoft.com/en-us/azure/app-service/scripts/cli-backup-onetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-backup-onetime.md
@@ -1,5 +1,5 @@
---
-title: 'CLI: Back up an app'
+title: 'CLI: Backup an app'
description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to back up an app. author: msangapu-msft tags: azure-service-management
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-backend-health-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
@@ -18,9 +18,6 @@ Overview
By default, Azure Application Gateway probes backend servers to check their health status and to check whether they're ready to serve requests. Users can also create custom probes to mention the host name, the path to be probed, and the status codes to be accepted as Healthy. In each case, if the backend server doesn't respond successfully, Application Gateway marks the server as Unhealthy and stops forwarding requests to the server. After the server starts responding successfully, Application Gateway resumes forwarding the requests.
-> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### How to check backend health To check the health of your backend pool, you can use the
@@ -286,7 +283,7 @@ For more information about how to extract and upload Trusted Root Certificates i
**Message:** The root certificate of the server certificate used by the backend does not match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to
-whitelist the backend
+allowlist the backend.
**Cause:** End-to-end SSL with Application Gateway v2 requires the backend server's certificate to be verified in order to deem the server Healthy.
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-data-controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller.md
@@ -49,7 +49,7 @@ Regardless of the option you choose, during the creation process you will need t
- **Data controller username** - Any username for the data controller administrator user. - **Data controller password** - A password for the data controller administrator user. - **Name of your Kubernetes namespace** - the name of the Kubernetes namespace that you want to create the data controller in.-- **Connectivity mode** - The [connectivity mode](./connectivity.md) of your cluster. Currently only "indirect" is supported.
+- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md).
- **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created. - **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created. - **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compare-azure-government-global-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
@@ -425,31 +425,36 @@ Azure Information Protection Premium is part of the [Enterprise Mobility + Secur
The following Azure Security Center **features are not currently available** in Azure Government: - **1st and 3rd party integrations**
- - The Qualys Vulnerability Assessment agent.
+ - [Connect AWS account](../security-center/quickstart-onboard-aws.md)
+ - [Connect GCP account](../security-center/quickstart-onboard-gcp.md)
+ - [Integrated vulnerability assessment for machines (powered by Qualys)](../security-center/deploy-vulnerability-assessment-vm.md).
>[!NOTE] >Security Center internal assessments are provided to discover security misconfigurations, based on Common Configuration Enumeration such as password policy, windows FW rules, local machine audit and security policy, and additional OS hardening settings. - **Threat detection**
- - *Specific detections*: Detections based on VM log periodic batches, Azure core router network logs, threat intelligence reports, and detections for App Service.
+ - [Azure Defender for App Service](../security-center/defender-for-app-service-introduction.md).
+ - [Azure Defender for Key Vault](../security-center/defender-for-key-vault-introduction.md)
+ - *Specific detections*: Detections based on VM log periodic batches, Azure core router network logs, and threat intelligence reports.
+ >[!NOTE] >Near real-time alerts generated based on security events and raw data collected from the VMs are captured and displayed.
- - *Security incidents*: The aggregation of alerts for a resource, known as a security incident.
- - *Threat intelligence enrichment*: Geo-enrichment and the threat intelligence option.
- - *UEBA for Azure resources*: Integration with Microsoft Cloud App Security for user and entity behavior analytics on Azure resources.
- - *Advanced threat detection*: Azure Security Center standard tier in Azure Government does not support threat detection for App Service.
+- **Environment hardening**
+ - [Adaptive network hardening](../security-center/security-center-adaptive-network-hardening.md)
+
+- **Preview features**
+ - [Recommendation exemption rules](../security-center/exempt-resource.md)
+ - [Azure Defender for Resource Manager](../security-center/defender-for-resource-manager-introduction.md)
+ - [Azure Defender for DNS](../security-center/defender-for-dns-introduction.md)
-- **Server protection**
- - *OS Security Configuration*: Vulnerability specific metadata, such as the potential impact and countermeasures for OS security configuration vulnerabilities.
**Azure Security Center FAQ**
-For Azure Security Center FAQ, see [Azure Security Center frequently asked questions public documentation](../security-center/faq-general.md).
-Additional FAQ for Azure Security Center in Azure Government are listed below.
+For Azure Security Center FAQ, see [Azure Security Center frequently asked questions public documentation](../security-center/faq-general.md). Additional FAQ for Azure Security Center in Azure Government are listed below.
**What will customers be charged for Azure Security Center in Azure Government?**
-The standard tier of Azure Security Center is free for the first 30 days. Should you choose to continue to use public preview or generally available standard features beyond 30 days, we automatically start to charge for the service.
+Azure Security Center's integrated cloud workload protection platform (CWPP), Azure Defender, brings advanced, intelligent protection of your Azure and hybrid resources and workloads. Azure Defender is free for the first 30 days. Should you choose to continue to use public preview or generally available features of Azure Defender beyond 30 days, we automatically start to charge for the service.
**Is Azure Security Center available for DoD customers?** Azure Security Center is deployed on Azure Government regions but not DoD regions. Azure resources created in DoD regions can still utilize Security Center capabilities. However, using it will result in Security Center collected data being moved out from DoD regions and stored in Azure Government regions. By default, all Security Center features which collect and store data are disabled for resources hosted in DoD regions. The type of data collected and stored varies depending on the selected feature. Customers who want to enable Azure Security Center features for DoD resources are recommended to consider data residency before doing so.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
@@ -326,11 +326,3 @@ import com.microsoft.applicationinsights.web.internal.ThreadContext;
RequestTelemetry requestTelemetry = ThreadContext.getRequestTelemetryContext().getHttpRequestTelemetry(); requestTelemetry.setName("myname"); ```-
-> [!NOTE]
-> All other operations on a `RequestTelemetry` retrieved from
-> `ThreadContext.getRequestTelemetryContext().getHttpRequestTelemetry()` besides those described above,
-> will fail fast and throw an exception to let you know that is undefined behavior under the 3.0 agent.
->
-> If you need interop for any other methods on `RequestTelemetry` please let us know by opening an issue
-> https://github.com/microsoft/ApplicationInsights-Java/issues.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python-request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/opencensus-python-request.md
@@ -15,9 +15,6 @@ Incoming request data is collected using OpenCensus Python and its various integ
First, instrument your Python application with latest [OpenCensus Python SDK](./opencensus-python.md).
-> [!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ## Tracking Django applications 1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/) and instrument your application with the `django` middleware. Incoming requests sent to your `django` application will be tracked.
@@ -32,7 +29,7 @@ First, instrument your Python application with latest [OpenCensus Python SDK](./
) ```
-3. Make sure AzureExporter is properly configured in your `settings.py` under `OPENCENSUS`. For requests from urls that you do not wish to track, add them to `BLACKLIST_PATHS`.
+3. Make sure AzureExporter is properly configured in your `settings.py` under `OPENCENSUS`. For requests from urls that you do not wish to track, add them to `EXCLUDELIST_PATHS`.
```python OPENCENSUS = {
@@ -41,7 +38,7 @@ First, instrument your Python application with latest [OpenCensus Python SDK](./
'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter( connection_string="InstrumentationKey=<your-ikey-here>" )''',
- 'BLACKLIST_PATHS': ['https://example.com'], <--- These sites will not be traced if a request is sent to it.
+ 'EXCLUDELIST_PATHS': ['https://example.com'], <--- These sites will not be traced if a request is sent to it.
} } ```
@@ -73,7 +70,7 @@ First, instrument your Python application with latest [OpenCensus Python SDK](./
```
-2. You can also configure your `flask` application through `app.config`. For requests from urls that you do not wish to track, add them to `BLACKLIST_PATHS`.
+2. You can also configure your `flask` application through `app.config`. For requests from urls that you do not wish to track, add them to `EXCLUDELIST_PATHS`.
```python app.config['OPENCENSUS'] = {
@@ -82,7 +79,7 @@ First, instrument your Python application with latest [OpenCensus Python SDK](./
'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter( connection_string="InstrumentationKey=<your-ikey-here>", )''',
- 'BLACKLIST_PATHS': ['https://example.com'], <--- These sites will not be traced if a request is sent to it.
+ 'EXCLUDELIST_PATHS': ['https://example.com'], <--- These sites will not be traced if a request is sent to it.
} } ```
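For orientation, here's a minimal, self-contained Flask sketch of the configuration described above. It's an illustration only: the connection string and excluded URL are placeholders, and the setting name (`EXCLUDELIST_PATHS` in recent `opencensus-ext-flask` releases, `BLACKLIST_PATHS` in older ones) depends on the version you have installed.

```python
from flask import Flask
from opencensus.ext.flask.flask_middleware import FlaskMiddleware

app = Flask(__name__)

# Trace configuration read by the middleware; requests sent to URLs listed in
# EXCLUDELIST_PATHS are not tracked.
app.config['OPENCENSUS'] = {
    'TRACE': {
        'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1.0)',
        'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
            connection_string="InstrumentationKey=<your-ikey-here>",
        )''',
        'EXCLUDELIST_PATHS': ['https://example.com'],
    }
}

# The middleware picks up the OPENCENSUS settings from app.config.
middleware = FlaskMiddleware(app)

@app.route('/')
def hello():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(host='localhost', port=8080)
```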
@@ -99,7 +96,7 @@ First, instrument your Python application with latest [OpenCensus Python SDK](./
'.pyramid_middleware.OpenCensusTweenFactory') ```
-2. You can configure your `pyramid` tween directly in the code. For requests from urls that you do not wish to track, add them to `BLACKLIST_PATHS`.
+2. You can configure your `pyramid` tween directly in the code. For requests from urls that you do not wish to track, add them to `EXCLUDELIST_PATHS`.
```python settings = {
@@ -109,7 +106,7 @@ First, instrument your Python application with latest [OpenCensus Python SDK](./
'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter( connection_string="InstrumentationKey=<your-ikey-here>", )''',
- 'BLACKLIST_PATHS': ['https://example.com'], <--- These sites will not be traced if a request is sent to it.
+ 'EXCLUDELIST_PATHS': ['https://example.com'], <--- These sites will not be traced if a request is sent to it.
} } }
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/status-monitor-v2-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-get-started.md
@@ -48,7 +48,7 @@ Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense
### Enable monitoring ```powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force
-Enable-ApplicationInsightsMonitoring -InstrumentationKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Enable-ApplicationInsightsMonitoring -ConnectionString xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
@@ -66,7 +66,7 @@ Expand-Archive -LiteralPath $pathToZip -DestinationPath $pathInstalledModule
``` ### Enable monitoring ```powershell
-Enable-ApplicationInsightsMonitoring -InstrumentationKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Enable-ApplicationInsightsMonitoring -ConnectionString xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/faq.md
@@ -341,7 +341,9 @@ This is possible if your code sends such data. It can also happen if variables i
**All** octets of the client web address are always set to 0 after the geo location attributes are looked up.
-### My Instrumentation Key is visible in my web page source.
+The [Application Insights JavaScript SDK](app/javascript.md) does not include any personal data in its autocollection by default. However, some personal data used in your application may be picked up by the SDK (for example, full names in `window.title` or account IDs in XHR URL query parameters). For custom personal data masking, add a [telemetry initializer](app/api-filtering-sampling.md#javascript-web-applications).
+
+### My Instrumentation Key is visible in my web page source.
* This is common practice in monitoring solutions. * It can't be used to steal your data.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/customer-managed-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/customer-managed-keys.md
@@ -121,11 +121,53 @@ These settings can be updated in Key Vault via CLI and PowerShell:
## Create cluster
-> [!NOTE]
-> Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): System-assigned and User-assigned and each can be based depending your scenario. System-assigned managed identity is simpler and it's created automatically with the cluster creation when identity `type` is set as "*SystemAssigned*" -- this identity can be used later to grant the cluster access to your Key Vault. If you want to create a cluster while Customer-managed key is defined at cluster creation time, you should have a key defined and User-assigned identity granted in your Key Vault beforehand, then create the cluster with these settings: identity `type` as "*UserAssigned*", `UserAssignedIdentities` with the identity's resource ID and `keyVaultProperties` with key details.
+Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): System-assigned and User-assigned, but only a single identity can be defined in a cluster, depending on your scenario.
+- System-assigned managed identity is simpler and is generated automatically during cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant the cluster access to your Key Vault.
+
+ Identity settings in cluster for System-assigned managed identity
+ ```json
+ {
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ }
+ ```
+
+- If you want to configure a Customer-managed key at cluster creation, you should have a key defined and a User-assigned identity granted access in your Key Vault beforehand, then create the cluster with these settings: identity `type` as "*UserAssigned*" and `UserAssignedIdentities` with the resource ID of the identity.
+
+ Identity settings in cluster for User-assigned managed identity
+ ```json
+ {
+   "identity": {
+     "type": "UserAssigned",
+     "userAssignedIdentities": {
+       "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/UserAssignedIdentities/<cluster-assigned-managed-identity>": {}
+     }
+   }
+ }
+ ```
> [!IMPORTANT]
-> Currently you can't defined Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet) and you can use System-assigned managed identity in this case.
+> You can't use Customer-managed key with User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
+
+```json
+{
+  "identity": {
+    "type": "SystemAssigned"
+  }
+}
+```
+
+With:
+
+```json
+{
+  "identity": {
+    "type": "UserAssigned",
+    "userAssignedIdentities": {
+      "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/UserAssignedIdentities/<user-assigned-managed-identity-name>": {}
+    }
+  }
+}
+```
+ Follow the procedure illustrated in [Dedicated Clusters article](../log-query/logs-dedicated-clusters.md#creating-a-cluster).
@@ -240,15 +282,13 @@ Follow the procedure illustrated in [Dedicated Clusters article](../log-query/lo
## Key revocation
-You can revoke access to data by disabling your key, or deleting the cluster's access policy in your Key Vault.
- > [!IMPORTANT]
-> - If your cluster is set with User-assigned managed identity, setting `UserAssignedIdentities` with `None` suspends the cluster and prevents access to your data, but you can't revert the revocation and activate the cluster without opening support request. This limitation isn't applied to System-assigned managed identity.
-> - The recommended key revocation action is by disabling your key in your Key Vault.
+> - The recommended way to revoke access to your data is by disabling your key, or deleting the access policy in your Key Vault.
+> - Setting the cluster's `identity` `type` to "None" also revokes access to your data, but this approach isn't recommended, since you can't revert the revocation by re-setting the `identity` in the cluster without opening a support request.
-The cluster storage will always respect changes in key permissions within an hour or sooner and storage will become unavailable. Any new data ingested to workspaces linked with your cluster gets dropped and won't be recoverable, data becomes inaccessible and queries on these workspaces fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Ingested data in last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This gets deleted on key revocation operation and becomes inaccessible as well.
+The cluster storage will always respect changes in key permissions within an hour or sooner and storage will become unavailable. Any new data ingested to workspaces linked with your cluster gets dropped and won't be recoverable, data becomes inaccessible and queries on these workspaces fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Ingested data in last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This gets deleted on key revocation operation and becomes inaccessible.
-The cluster's storage periodically polls your Key Vault to attempt to unwrap the encryption key and once accessed, data ingestion and query resume within 30 minutes.
+The cluster's storage periodically checks your Key Vault to attempt to unwrap the encryption key and once accessed, data ingestion and query are resumed within 30 minutes.
## Key rotation
@@ -256,7 +296,7 @@ Customer-managed key rotation requires an explicit update to the cluster with th
All your data remains accessible after the key rotation operation, since data always encrypted with Account Encryption Key (AEK) while AEK is now being encrypted with your new Key Encryption Key (KEK) version in Key Vault.
-## Customer-managed key for queries
+## Customer-managed key for saved queries
The query language used in Log Analytics is expressive and can contain sensitive information in comments you add to queries or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need to save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log-alerts* queries encrypted with your key in your own storage account when connected to your workspace.
@@ -407,7 +447,7 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
- If your cluster is set with User-assigned managed identity, setting `UserAssignedIdentities` with `None` suspends the cluster and prevents access to your data, but you can't revert the revocation and activate the cluster without opening support request. This limitation isn't applied to System-assigned managed identity.
- - Currently you can't defined Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet) and you can use System-assigned managed identity in this case.
+ - You can't use Customer-managed key with User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
## Troubleshooting
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/synchronize-vnet-dns-servers-setting-on-virtual-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/synchronize-vnet-dns-servers-setting-on-virtual-cluster.md new file mode 100644
@@ -0,0 +1,73 @@
+---
+title: Synchronize virtual network DNS servers setting on SQL Managed Instance virtual cluster
+description: Learn how to synchronize the virtual network DNS servers setting on a SQL Managed Instance virtual cluster.
+services: sql-database
+ms.service: sql-managed-instance
+author: srdan-bozovic-msft
+ms.author: srbozovi
+ms.topic: how-to
+ms.date: 01/17/2021
+---
+
+# Synchronize virtual network DNS servers setting on SQL Managed Instance virtual cluster
+[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
+
+This article explains when and how to synchronize the virtual network DNS servers setting on a SQL Managed Instance virtual cluster.
+
+## When to synchronize the DNS setting
+
+There are a few scenarios (for example, db mail, linked servers to other SQL Server instances in your cloud or hybrid environment) that require private host names to be resolved from SQL Managed Instance. In this case, you need to configure a custom DNS inside Azure. See [Configure a custom DNS for Azure SQL Managed Instance](custom-dns-configure.md) for details.
+
+If this change is implemented after the [virtual cluster](connectivity-architecture-overview.md#virtual-cluster-connectivity-architecture) hosting the Managed Instance is created, you'll need to synchronize the DNS servers setting on the virtual cluster with the virtual network configuration.
+
+> [!IMPORTANT]
+> Synchronizing the DNS servers setting will affect all of the Managed Instances hosted in the virtual cluster.
+
+## How to synchronize the DNS setting
+
+### Azure RBAC permissions required
+
+The user synchronizing the DNS server configuration needs one of the following Azure roles:
+
+- Subscription Owner role, or
+- Managed Instance Contributor role, or
+- Custom role with the following permission:
+ - `Microsoft.Sql/virtualClusters/updateManagedInstanceDnsServers/action`
+
+### Use Azure PowerShell
+
+Get the virtual network where the DNS servers setting has been updated.
+
+```PowerShell
+$ResourceGroup = 'enter resource group of virtual network'
+$VirtualNetworkName = 'enter virtual network name'
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroup -Name $VirtualNetworkName
+```
+Use PowerShell command [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) to synchronize DNS servers configuration for all the virtual clusters in the subnet.
+
+```PowerShell
+Get-AzSqlVirtualCluster `
+ | where SubnetId -match $virtualNetwork.Id `
+ | select Id `
+ | Invoke-AzResourceAction -Action updateManagedInstanceDnsServers -Force
+```
+### Use the Azure CLI
+
+Get the virtual network where the DNS servers setting has been updated.
+
+```azurecli
+resourceGroup="auto-failover-group"
+virtualNetworkName="vnet-fog-eastus"
+virtualNetwork=$(az network vnet show -g $resourceGroup -n $virtualNetworkName --query "id" -otsv)
+```
+
+Use Azure CLI command [az resource invoke-action](/cli/azure/resource?view=azure-cli-latest#az_resource_invoke_action) to synchronize DNS servers configuration for all the virtual clusters in the subnet.
+
+```azurecli
+az sql virtual-cluster list --query "[? contains(subnetId,'$virtualNetwork')].id" -o tsv \
+ | az resource invoke-action --action updateManagedInstanceDnsServers --ids @-
+```
+## Next steps
+
+- Learn more about configuring a custom DNS [Configure a custom DNS for Azure SQL Managed Instance](custom-dns-configure.md).
+- For an overview, see [What is Azure SQL Managed Instance?](sql-managed-instance-paas-overview.md).
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
@@ -17,7 +17,7 @@ You can use [Azure Backup](./backup-overview.md) to protect Azure Disks. This ar
## Supported regions
-Azure Disk Backup is available in preview in the following regions: West Central US.
+Azure Disk Backup is available in preview in the following regions: West Central US, Korea Central, Korea South.
More regions will be announced when they become available.
@@ -63,4 +63,4 @@ More regions will be announced when they become available.
## Next steps -- [Back up Azure Managed Disks](backup-managed-disks.md)\ No newline at end of file
+- [Back up Azure Managed Disks](backup-managed-disks.md)
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
@@ -12,12 +12,13 @@ ms.custom:
# Create a Cloud Service (extended support) using ARM templates
+This tutorial explains how to create a Cloud Service (extended support) deployment using [ARM templates](https://docs.microsoft.com/azure/azure-resource-manager/templates/overview).
+ > [!IMPORTANT] > Cloud Services (extended support) is currently in public preview. > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This tutorial explains how to create a Cloud Service (extended support) deployment using [ARM templates](https://docs.microsoft.com/azure/azure-resource-manager/templates/overview).
## Before you begin 1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
@@ -450,4 +451,4 @@ This tutorial explains how to create a Cloud Service (extended support) deployme
## Next steps - Review [frequently asked questions](faq.md) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).\ No newline at end of file
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-visual-studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-visual-studio.md
@@ -13,7 +13,7 @@ ms.author: ghogen
# Create and deploy a Azure Cloud Service (extended support) using Visual Studio
-Starting with Visual Studio 2019 version 16.9 Preview 1, you can work with Cloud Services (extended support) using Azure Resource Manager, which greatly simplifies and modernizes maintenance and management of Azure resources. You can also convert an existing Cloud Service project to an extended support Cloud Service project.
+Starting with [Visual Studio 2019 version 16.9](https://visualstudio.microsoft.com/vs/preview/) (currently in preview), you can work with cloud services using Azure Resource Manager (ARM), which greatly simplifies and modernizes maintenance and management of Azure resources. This is enabled by a new Azure service referred to as Cloud Services (extended support). You can publish an existing cloud service to Cloud Services (extended support). For information on this Azure service, see [Cloud Services (extended support) documentation](overview.md).
> [!IMPORTANT] > Cloud Services (extended support) is currently in public preview.
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/overview.md
@@ -9,7 +9,6 @@ ms.reviewer: mimckitt
ms.date: 10/13/2020 ms.custom: ---
-
# About Azure Cloud Services (extended support) > [!IMPORTANT]
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-diagnostic-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-diagnostic-logs.md
@@ -102,7 +102,7 @@ Here's a sample entry in the JSON-formatted request log. Each blob has one root
"callerIpAddress": "::ffff:1.1.1.1", "correlationId": "4a11c709-05f5-417c-a98d-6e81b3e29c58", "identity": "1808bd5f-62af-45f4-89d8-03c5e81bac30",
- "properties": {"HttpMethod":"GET","Path":"/webhdfs/v1/Samples/Outputs/Drivers.csv","RequestContentLength":0,"ClientRequestId":"3b7adbd9-3519-4f28-a61c-bd89506163b8","StartTime":"2016-07-07T21:02:52.472Z","EndTime":"2016-07-07T21:02:53.456Z"}
+ "properties": {"HttpMethod":"GET","Path":"/webhdfs/v1/Samples/Outputs/Drivers.csv","RequestContentLength":0,"StoreIngressSize":0 ,"StoreEgressSize":4096,"ClientRequestId":"3b7adbd9-3519-4f28-a61c-bd89506163b8","StartTime":"2016-07-07T21:02:52.472Z","EndTime":"2016-07-07T21:02:53.456Z","QueryParameters":"api-version=<version>&op=<operationName>"}
} , . . . .
@@ -134,6 +134,7 @@ Here's a sample entry in the JSON-formatted request log. Each blob has one root
| EndTime |String |The time at which the server sent a response | | StoreIngressSize |Long |Size in bytes ingressed to Data Lake Store | | StoreEgressSize |Long |Size in bytes egressed from Data Lake Store |
+| QueryParameters |String |Description: These are the http query parameters. Example 1: api-version=2014-01-01&op=getfilestatus Example 2: op=APPEND&append=true&syncFlag=DATA&filesessionid=bee3355a-4925-4435-bb4d-ceea52811aeb&leaseid=bee3355a-4925-4435-bb4d-ceea52811aeb&offset=28313319&api-version=2017-08-01 |
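As a hypothetical illustration of consuming the new field (not part of the article), the `QueryParameters` string in a request-log entry can be split with Python's standard library:

```python
import json
from urllib.parse import parse_qs

# Abbreviated request-log entry, shaped like the sample shown above.
entry = json.loads("""{
  "properties": {
    "HttpMethod": "GET",
    "Path": "/webhdfs/v1/Samples/Outputs/Drivers.csv",
    "QueryParameters": "api-version=2014-01-01&op=getfilestatus"
  }
}""")

params = parse_qs(entry["properties"]["QueryParameters"])
print(params["op"][0])            # getfilestatus
print(params["api-version"][0])   # 2014-01-01
```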
### Audit logs Here's a sample entry in the JSON-formatted audit log. Each blob has one root object called **records** that contains an array of log objects
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-horizon-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-horizon-sdk.md
@@ -880,7 +880,7 @@ You can also use values from protocols previously parsed to extract additional i
For example, for the value, which is based on TCP, you can use the values from IPv4 layer. From this layer you can extract values such as the source of the packet, and the destination.
-In order to achieve this, the JSON configuration file needs to be updated using the `whitelist` property.
+In order to achieve this, the JSON configuration file needs to be updated using the `whitelists` property.
## Allow list (data mining) fields
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/exceptions-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/exceptions-dotnet.md
@@ -3,8 +3,7 @@ title: Azure Event Hubs - .NET exceptions
description: This article provides a list of Azure Event Hubs .NET messaging exceptions and suggested actions. services: event-hubs documentationcenter: na
-author: ShubhaVijayasarathy
-manager: timlt
+author: spelluru
ms.service: event-hubs ms.devlang: na
@@ -13,7 +12,7 @@ ms.tgt_pltfrm: na
ms.workload: na ms.custom: seodec18 ms.date: 09/23/2020
-ms.author: shvija
+ms.author: spelluru
---
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/access-fhir-postman-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/access-fhir-postman-tutorial.md
@@ -27,7 +27,7 @@ In order to use Postman, the following details are needed:
- Your FHIR server URL, for example `https://MYACCOUNT.azurehealthcareapis.com` - The identity provider `Authority` for your FHIR server, for example, `https://login.microsoftonline.com/{TENANT-ID}`-- The configured `audience`. This is is usually the URL of the FHIR server, e.g. `https://MYACCOUNT.azurehealthcareapis.com` or just `https://azurehealthcareapis.com`.
+- The configured `audience`. This is usually the URL of the FHIR server, e.g. `https://<FHIR-SERVER-NAME>.azurehealthcareapis.com` or just `https://azurehealthcareapis.com`.
- The `client_id` (or application ID) of the [client application](register-confidential-azure-ad-client-app.md) you will be using to access the FHIR service. - The `client_secret` (or application secret) of the client application.
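If you want to script the same flow that Postman performs with these details, here's a rough Python sketch. It assumes the Azure AD v1.0 client-credentials flow against the `Authority` above, with the configured `audience` passed as the `resource`; all values are placeholders, not values from the article.

```python
import requests

tenant_id = "<TENANT-ID>"
fhir_url = "https://MYACCOUNT.azurehealthcareapis.com"  # your FHIR server URL
audience = fhir_url                                     # the configured audience
client_id = "<client-id>"
client_secret = "<client-secret>"

# Request a token from the identity provider authority (Azure AD v1.0 endpoint).
token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": audience,
    },
)
access_token = token_response.json()["access_token"]

# Call the FHIR service with the bearer token, for example the capability statement.
metadata = requests.get(
    f"{fhir_url}/metadata",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(metadata.status_code)
```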
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/convert-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/convert-data.md new file mode 100644
@@ -0,0 +1,152 @@
+---
+title: Data conversion for Azure API for FHIR
+description: Use the $convert-data endpoint and customize converter templates to convert data in Azure API for FHIR.
+services: healthcare-apis
+author: ranvijaykumar
+ms.service: healthcare-apis
+ms.subservice: fhir
+ms.topic: overview
+ms.date: 01/19/2021
+ms.author: ranku
+---
++
+# How to convert data to FHIR
+
+The $convert-data custom endpoint in the Azure API for FHIR is meant for data conversion from different formats to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports HL7v2 to FHIR conversion.
+
+## Use the $convert-data endpoint
+
+`https://<<FHIR service base URL>>/$convert-data`
+
+$convert-data takes a [Parameters](http://hl7.org/fhir/parameters.html) resource in the request body as described below:
+
+**Parameters Resource:**
+
+| Parameter Name | Description | Accepted values |
+| ----------- | ----------- | ----------- |
+| inputData | Data to be converted. | A valid value of JSON String datatype|
+| inputDataType | Data type of input. | ```HL7v2``` |
+| templateCollectionReference | Reference to a template collection. It can be a reference either to the **Default templates**, or a custom template image that is registered with Azure API for FHIR. See below to learn about customizing the templates, hosting those on ACR, and registering to the Azure API for FHIR. | ```microsofthealth/fhirconverter:default```, \<RegistryServer\>/\<imageName\>@\<imageDigest\> |
+| rootTemplate | The root template to use while transforming the data. | ```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04``` |
+
+> [!WARNING]
+> Default templates help you get started quickly. However, these may get updated when we upgrade the Azure API for FHIR. In order to have consistent data conversion behavior across different versions of Azure API for FHIR, you must host your own copy of templates on an Azure Container Registry, register those to the Azure API for FHIR, and use them in your API calls as described later.
+
+**Sample request:**
+
+```json
+{
+ "resourceType": "Parameters",
+ "parameter": [
+ {
+ "name": "inputData",
+ "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII\nEVN|A01|20200508131015|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|\nPID|1|3735064194^^^SIMULATOR MRN^MRN|3735064194^^^SIMULATOR MRN^MRN~2021051528^^^NHSNBR^NHSNMBR||Kinmonth^Joanna^Chelsea^^Ms^^CURRENT||19870624000000|F|||89 Transaction House^Handmaiden Street^Wembley^^FV75 4GJ^GBR^HOME||020 3614 5541^HOME|||||||||C^White - Other^^^||||||||\nPD1|||FAMILY PRACTICE^^12345|\nPV1|1|I|OtherWard^MainRoom^Bed 183^Simulated Hospital^^BED^Main Building^4|28b|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|||CAR|||||||||16094728916771313876^^^^visitid||||||||||||||||||||||ARRIVED|||20200508131015||"
+ },
+ {
+ "name": "inputDataType",
+ "valueString": "Hl7v2"
+ },
+ {
+ "name": "templateCollectionReference",
+ "valueString": "microsofthealth/fhirconverter:default"
+ },
+ {
+ "name": "rootTemplate",
+ "valueString": "ADT_A01"
+ }
+ ]
+}
+```
+
+**Sample response:**
+
+```json
+{
+ "resourceType": "Bundle",
+ "type": "transaction",
+ "entry": [
+ {
+ "fullUrl": "urn:uuid:9d697ec3-48c3-3e17-db6a-29a1765e22c6",
+ "resource": {
+ "resourceType": "Patient",
+ "id": "9d697ec3-48c3-3e17-db6a-29a1765e22c6",
+ ...
+ ...
+ "request": {
+ "method": "PUT",
+ "url": "Location/50becdb5-ff56-56c6-40a1-6d554dca80f0"
+ }
+ }
+ ]
+}
+```
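As a sketch of how the operation could be called programmatically, the following Python snippet posts the Parameters resource shown in the sample request. It assumes you already have a bearer token for the FHIR service; the base URL, token, and HL7v2 message are placeholders.

```python
import requests

fhir_base_url = "https://<FHIR-service-base-URL>"
access_token = "<access-token>"  # acquired from Azure AD for the FHIR service

# Parameters resource, mirroring the sample request above.
parameters = {
    "resourceType": "Parameters",
    "parameter": [
        {"name": "inputData", "valueString": "<HL7v2 message>"},
        {"name": "inputDataType", "valueString": "Hl7v2"},
        {"name": "templateCollectionReference",
         "valueString": "microsofthealth/fhirconverter:default"},
        {"name": "rootTemplate", "valueString": "ADT_A01"},
    ],
}

response = requests.post(
    f"{fhir_base_url}/$convert-data",
    json=parameters,
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()
bundle = response.json()  # a FHIR Bundle, as in the sample response above
print(bundle["resourceType"])
```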
+
+## Customize templates
+
+You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize the templates as per your needs. The extension provides an interactive editing experience, and makes it easy to download Microsoft-published templates and sample data. See the documentation in the extension for details.
+
+## Host and use templates
+
+It is strongly recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+
+1. Push the templates to your Azure Container Registry.
+1. Enable Managed Identity on your Azure API for FHIR instance.
+1. Provide access of the ACR to the Azure API for FHIR Managed Identity.
+1. Register the ACR servers in the Azure API for FHIR.
+
+### Push templates to Azure Container Registry
+
+After creating an ACR instance, you can use the _FHIR Converter: Push Templates_ command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push the customized templates to the ACR. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
+
+### Enable Managed Identity on Azure API for FHIR
+
+Browse to your instance of Azure API for FHIR service in the Azure portal and select the **Identity** blade.
+Change the status to **On** to enable managed identity in Azure API for FHIR.
+
+![Enable Managed Identity](media/convert-data/fhir-mi-enabled.png)
+
+### Provide access of the ACR to Azure API for FHIR
+
+Navigate to Access Control (IAM) blade in your ACR instance and select _Add Role Assignments_.
+
+![ACR Role Assignment](media/convert-data/fhir-acr-role-assignment.png)
+
+Grant AcrPull role to your Azure API for FHIR service instance.
+
+![Add Role](media/convert-data/fhir-acr-role-add.png)
+
+### Register the ACR servers in Azure API for FHIR
+
+You can register up to twenty ACR servers in the Azure API for FHIR.
+
+Install the healthcareapis Azure CLI extension if needed:
+
+```powershell
+az extension add -n healthcareapis
+```
+
+Register the ACR servers with the Azure API for FHIR following the examples below:
+
+#### Register a single ACR server
+
+```powershell
+az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io" --resource-group fhir-test --resource-name fhirtest2021
+```
+
+#### Register multiple ACR servers
+
+```powershell
+az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.azurecr.io" --resource-group fhir-test --resource-name fhirtest2021
+```
+
+### Verify
+
+Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter.
+
+`<RegistryServer>/<imageName>@<imageDigest>`
+
+## Known issues and workarounds
+
+- Some default template files contain UTF-8 BOM. As a result, the generated ID values will contain a BOM character. This may create an issue with FHIR server. The workaround is to pull Microsoft templates using VS Code Extension, and push those to your own ACR after removing the BOM characters from _ID/_Procedure.liquid_, _ID/_Provenance.liquid_, and _ID/_Immunization.liquid_.
+
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/customer-managed-key.md
@@ -10,27 +10,38 @@ ms.date: 09/28/2020
ms.author: matjazl ---
-# Configure customer-managed keys
+# Configure customer-managed keys at rest
When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself.
-In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault (AKV). Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you will have the option to specify an AKV key URI. We will pass this key to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data. To get started, you can refer to the following links:
+In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you will have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data. To get started, you can refer to the following links:
- [Register the Azure Cosmos DB resource provider for your Azure subscription](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) -- [Configure your AKV instance](../cosmos-db/how-to-setup-cmk.md#configure-your-azure-key-vault-instance)-- [Add an access policy to your AKV instance](../cosmos-db/how-to-setup-cmk.md#add-an-access-policy-to-your-azure-key-vault-instance)-- [Generate a key in AKV](../cosmos-db/how-to-setup-cmk.md#generate-a-key-in-azure-key-vault)
+- [Configure your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#configure-your-azure-key-vault-instance)
+- [Add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-an-access-policy-to-your-azure-key-vault-instance)
+- [Generate a key in Azure Key Vault](../cosmos-db/how-to-setup-cmk.md#generate-a-key-in-azure-key-vault)
-After creating your Azure API for FHIR account on Azure portal, you can see a "Data Encryption" configuration option under the "Database Settings" on the "Additional Settings" tab. By default, the service-managed key option will be chosen. You can specify your AKV key here by selecting "Customer-managed key" option. You can enter the copied key URI here.
+## Specify the Azure Key Vault key
-:::image type="content" source="media/bring-your-own-key/bring-your-own-key-create.png" alt-text="Create Azure API for FHIR":::
+When creating your Azure API for FHIR account on Azure portal, you can see a "Data Encryption" configuration option under the "Database Settings" on the "Additional Settings" tab. By default, the service-managed key option will be chosen.
-Or, you can choose your key from the KeyPicker:
+You can choose your key from the KeyPicker:
:::image type="content" source="media/bring-your-own-key/bring-your-own-key-keypicker.png" alt-text="KeyPicker":::
+Or you can specify your Azure Key Vault key here by selecting "Customer-managed key" option. You can enter the key URI here:
+
+:::image type="content" source="media/bring-your-own-key/bring-your-own-key-create.png" alt-text="Create Azure API for FHIR":::
+ For existing FHIR accounts, you can view the key encryption choice (service- or customer-managed key) in "Database" blade as below. The configuration option can't be modified once chosen. However, you can modify and update your key. :::image type="content" source="media/bring-your-own-key/bring-your-own-key-database.png" alt-text="Database":::
-In addition, you can create a new version of the specified key, after which your data will get encrypted with the new version without any service interruption. You can also remove access to the key to remove access to the data.
\ No newline at end of file
+In addition, you can create a new version of the specified key, after which your data will get encrypted with the new version without any service interruption. You can also remove access to the key to remove access to the data. When the key is disabled, queries will result in an error. If the key is re-enabled, queries will succeed again.
+
+## Next steps
+
+In this article, you learned how to configure customer-managed keys at rest. Next, you can check out the Azure Cosmos DB FAQ section:
+
+>[!div class="nextstepaction"]
+>[Cosmos DB: how to setup CMK](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-cmk#frequently-asked-questions)
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/fhir-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir-faq.md
@@ -2,12 +2,12 @@
title: FAQs about FHIR services in Azure - Azure API for FHIR description: Get answers to frequently asked questions about the Azure API for FHIR, such as the storage location of data behind FHIR APIs and version support. services: healthcare-apis
-author: matjazl
+author: caitlinv39
ms.service: healthcare-apis ms.subservice: fhir ms.topic: reference
-ms.date: 08/03/2020
-ms.author: matjazl
+ms.date: 1/21/2021
+ms.author: cavoeg
--- # Frequently asked questions about the Azure API for FHIR
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/fhir-features-supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir-features-supported.md
@@ -2,11 +2,11 @@
title: Supported FHIR features in Azure - Azure API for FHIR description: This article explains which features of the FHIR specification that are implemented in Azure API for FHIR services: healthcare-apis
-author: matjazl
+author: caitlinv39
ms.service: healthcare-apis ms.subservice: fhir ms.topic: reference
-ms.date: 02/07/2019
+ms.date: 1/21/2021
ms.author: cavoeg ---
@@ -142,16 +142,18 @@ Currently, the allowed actions for a given role are applied *globally* on the AP
The performance of the system is dependent on the number of RUs, concurrent connections, and the type of operations you are performing (Put, Post, etc.). Below are some general ranges of what you can expect based on configured RUs. In general, performance scales linearly with an increase in RUs:
-| # of RUs | Resources/sec |
-|----------|---------------|
-| 400 | 5-10 |
-| 1,000 | 100-150 |
-| 10,000 | 225-400 |
-| 100,000 | 2,500-4,000 |
+| # of RUs | Resources/sec | Max Storage (GB)* |
+|----------|---------------|--------|
+| 400 | 5-10 | 40 |
+| 1,000 | 100-150 | 100 |
+| 10,000 | 225-400 | 1,000 |
+| 100,000 | 2,500-4,000 | 10,000 |
+
+Note: Cosmos DB requires a minimum throughput of 10 RU/s per GB of storage. For example, storing 1,000 GB of resources requires at least 10,000 RU/s provisioned, which corresponds to the maximum storage values in the table above. For more information, check out [Cosmos DB service quotas](../cosmos-db/concepts-limits.md).
## Next steps

In this article, you've read about the supported FHIR features in Azure API for FHIR. Next, deploy the Azure API for FHIR.

>[!div class="nextstepaction"]
->[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
\ No newline at end of file
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/concepts-iot-pnp-bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-iot-pnp-bridge.md
@@ -3,7 +3,7 @@ title: IoT Plug and Play bridge | Microsoft Docs
description: Understand the IoT Plug and Play bridge and how to use it to connect existing devices attached to a Windows or Linux gateway as IoT Plug and Play devices. author: usivagna ms.author: ugans
-ms.date: 09/22/2020
+ms.date: 1/20/2021
ms.topic: conceptual ms.service: iot-pnp services: iot-pnp
@@ -33,7 +33,7 @@ IoT Plug and Play bridge supports the following types of peripherals by default,
|[SerialPnP adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/serialpnp/Readme.md) connects devices that communicate over a serial connection. |Yes|Yes|
|[Windows USB peripherals](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/coredevicehealth_adapter.md) uses a list of adapter-supported device interface classes to connect devices that have a specific hardware ID. |Yes|Not Applicable|
-To learn how to extend the IoT Plug and Play bridge to support additional device protocols, see [Build, deploy, and extend the IoT Plug and Play bridge](howto-build-deploy-extend-pnp-bridge.md).
+To learn how to extend the IoT Plug and Play bridge to support additional device protocols, see [Extend the IoT Plug and Play bridge](howto-author-pnp-bridge-adapter.md). To learn how to build and deploy the IoT Plug and Play bridge, see [Build and deploy the IoT Plug and Play bridge](howto-build-deploy-extend-pnp-bridge.md).
## IoT Plug and Play bridge architecture
@@ -145,6 +145,7 @@ You can also download and view the source code of [IoT Plug and Play bridge on G
Now that you have an overview of the architecture of IoT Plug and Play bridge, the next steps are to learn more about:

-- [How to use IoT Plug and Play bridge](./howto-use-iot-pnp-bridge.md)
-- [Build, deploy, and extend IoT Plug and Play bridge](howto-build-deploy-extend-pnp-bridge.md)
+- [How to connect an IoT Plug and Play bridge sample running on Linux or Windows to IoT Hub](./howto-use-iot-pnp-bridge.md)
+- [Build and deploy IoT Plug and Play bridge](howto-build-deploy-extend-pnp-bridge.md)
+- [Extend IoT Plug and Play bridge](howto-author-pnp-bridge-adapter.md)
- [IoT Plug and Play bridge on GitHub](https://github.com/Azure/iot-plug-and-play-bridge)
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/howto-author-pnp-bridge-adapter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-author-pnp-bridge-adapter.md new file mode 100644
@@ -0,0 +1,496 @@
+---
+title: How to build an adapter for the IoT Plug and Play bridge | Microsoft Docs
+description: Identify the IoT Plug and Play bridge adapter components. Learn how to extend the bridge by writing your own adapter.
+author: usivagna
+ms.author: ugans
+ms.date: 1/20/2021
+ms.topic: how-to
+ms.service: iot-pnp
+services: iot-pnp
+
+# As a device builder, I want to understand the IoT Plug and Play bridge, and learn how to build an IoT Plug and Play bridge adapter.
+---
+# Extend the IoT Plug and Play bridge
+The [IoT Plug and Play bridge](concepts-iot-pnp-bridge.md#iot-plug-and-play-bridge-architecture) lets you connect the existing devices attached to a gateway to your IoT hub. You use the bridge to map IoT Plug and Play interfaces to the attached devices. An IoT Plug and Play interface defines the telemetry that a device sends, the properties synchronized between the device and the cloud, and the commands that the device responds to. You can install and configure the open-source bridge application on Windows or Linux gateways. Additionally, the bridge can be run as an Azure IoT Edge runtime module.
+
+This article explains in detail how to:
+
+- Extend the IoT Plug and Play bridge with an adapter.
+- Implement common callbacks for a bridge adapter.
+
+For a simple example that shows how to use the bridge, see [How to connect the IoT Plug and Play bridge sample that runs on Linux or Windows to IoT Hub](howto-use-iot-pnp-bridge.md).
+
+The guidance and samples in this article assume basic familiarity with [Azure Digital Twins](../digital-twins/overview.md) and [IoT Plug and Play](overview-iot-plug-and-play.md). Additionally, this article assumes familiarity with how to [Build and deploy the IoT Plug and Play bridge](howto-build-deploy-extend-pnp-bridge.md).
+
+## Design guide to extend the IoT Plug and Play bridge with an adapter
+
+To extend the capabilities of the bridge, you can author your own bridge adapters.
+
+The bridge uses adapters to:
+
+- Establish a connection between a device and the cloud.
+- Enable data flow between a device and the cloud.
+- Enable device management from the cloud.
+
+Every bridge adapter must:
+
+- Create a digital twins interface.
+- Use the interface to bind device-side functionality to cloud-based capabilities such as telemetry, properties, and commands.
+- Establish control and data communication with the device hardware or firmware.
+
+Each bridge adapter interacts with a specific type of device based on how the adapter connects to and interacts with the device. Even if communication with a device uses a handshaking protocol, a bridge adapter may have multiple ways to interpret the data from the device. In this scenario, the bridge adapter uses information for the adapter in the configuration file to determine the *interface configuration* the adapter should use to parse the data.
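+
+For example, an adapter that can interpret more than one wire format from its devices might read a setting from its section of the configuration file to pick the interface configuration to apply. The following minimal sketch is an illustration only: it uses the parson library (the same JSON types the bridge samples use), and `sensor_format` is a hypothetical adapter setting, not one defined by the bridge itself.
+
+```c
+#include <string.h>
+#include "parson.h"
+
+// Possible data formats this hypothetical adapter knows how to parse.
+typedef enum { SENSOR_FORMAT_CSV, SENSOR_FORMAT_BINARY } SENSOR_FORMAT;
+
+// Reads the hypothetical "sensor_format" setting from the adapter's component
+// configuration and falls back to CSV when the setting is absent or unrecognized.
+static SENSOR_FORMAT MyAdapter_GetSensorFormat(const JSON_Object* AdapterComponentConfig)
+{
+    const char* format = json_object_get_string(AdapterComponentConfig, "sensor_format");
+
+    if (format != NULL && strcmp(format, "binary") == 0)
+    {
+        return SENSOR_FORMAT_BINARY;
+    }
+    return SENSOR_FORMAT_CSV;
+}
+```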
+
+To interact with the device, a bridge adapter uses a communication protocol supported by the device and APIs provided either by the underlying operating system, or the device vendor.
+
+To interact with the cloud, a bridge adapter uses APIs provided by the Azure IoT Device C SDK to send telemetry, create digital twin interfaces, send property updates, and create callback functions for property updates and commands.
+
+### Create a bridge adapter
+
+The bridge expects a bridge adapter to implement the APIs defined in the [_PNP_ADAPTER](https://github.com/Azure/iot-plug-and-play-bridge/blob/9964f7f9f77ecbf4db3b60960b69af57fd83a871/pnpbridge/src/pnpbridge/inc/pnpadapter_api.h#L296) interface:
+
+```c
+typedef struct _PNP_ADAPTER {
+ // Identity of the IoT Plug and Play adapter that is retrieved from the config
+ const char* identity;
+
+ PNPBRIDGE_ADAPTER_CREATE createAdapter;
+ PNPBRIDGE_COMPONENT_CREATE createPnpComponent;
+ PNPBRIDGE_COMPONENT_START startPnpComponent;
+ PNPBRIDGE_COMPONENT_STOP stopPnpComponent;
+ PNPBRIDGE_COMPONENT_DESTROY destroyPnpComponent;
+ PNPBRIDGE_ADAPTER_DESTOY destroyAdapter;
+} PNP_ADAPTER, * PPNP_ADAPTER;
+```
+
+In this interface:
+
+- `PNPBRIDGE_ADAPTER_CREATE` creates the adapter and sets up the interface management resources. An adapter may also rely on global adapter parameters for adapter creation. This function is called once for a single adapter.
+- `PNPBRIDGE_COMPONENT_CREATE` creates the digital twin client interfaces and binds the callback functions. The adapter initiates the communication channel to the device. The adapter may set up the resources to enable the telemetry flow but doesn't start reporting telemetry until `PNPBRIDGE_COMPONENT_START` is called. This function is called once for each interface component in the configuration file.
+- `PNPBRIDGE_COMPONENT_START` is called to let the bridge adapter start forwarding telemetry from the device to the digital twin client. This function is called once for each interface component in the configuration file.
+- `PNPBRIDGE_COMPONENT_STOP` stops the telemetry flow.
+- `PNPBRIDGE_COMPONENT_DESTROY` destroys the digital twin client and associated interface resources. This function is called once for each interface component in the configuration file when the bridge is torn down or when a fatal error occurs.
+- `PNPBRIDGE_ADAPTER_DESTROY` cleans up the bridge adapter resources.
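+
+As a starting point, the following skeleton sketches an adapter that wires stub functions into a `PNP_ADAPTER` instance. It is illustrative only: the callback signatures shown here are an assumption, so check them against `pnpadapter_api.h` and the environmental sensor sample before you build.
+
+```c
+// Hypothetical skeleton for "MyPnpAdapter". The signatures are assumptions for
+// illustration; the authoritative definitions are in pnpadapter_api.h.
+#include "pnpadapter_api.h"
+
+static IOTHUB_CLIENT_RESULT MyAdapter_CreateAdapter(
+    const JSON_Object* AdapterGlobalConfig, PNPBRIDGE_ADAPTER_HANDLE AdapterHandle)
+{
+    // Called once per adapter: allocate state shared by all components.
+    (void)AdapterGlobalConfig; (void)AdapterHandle;
+    return IOTHUB_CLIENT_OK;
+}
+
+static IOTHUB_CLIENT_RESULT MyAdapter_CreatePnpComponent(
+    PNPBRIDGE_ADAPTER_HANDLE AdapterHandle, const char* ComponentName,
+    const JSON_Object* AdapterComponentConfig, PNPBRIDGE_COMPONENT_HANDLE BridgeComponentHandle)
+{
+    // Called once per component: open the device connection and bind the
+    // property and command callbacks, but don't send telemetry yet.
+    (void)AdapterHandle; (void)ComponentName;
+    (void)AdapterComponentConfig; (void)BridgeComponentHandle;
+    return IOTHUB_CLIENT_OK;
+}
+
+static IOTHUB_CLIENT_RESULT MyAdapter_StartPnpComponent(
+    PNPBRIDGE_ADAPTER_HANDLE AdapterHandle, PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+    // Start forwarding telemetry from the device to the digital twin client.
+    (void)AdapterHandle; (void)PnpComponentHandle;
+    return IOTHUB_CLIENT_OK;
+}
+
+static IOTHUB_CLIENT_RESULT MyAdapter_StopPnpComponent(PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+    // Stop the telemetry flow.
+    (void)PnpComponentHandle;
+    return IOTHUB_CLIENT_OK;
+}
+
+static IOTHUB_CLIENT_RESULT MyAdapter_DestroyPnpComponent(PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+    // Release per-component resources and close the device connection.
+    (void)PnpComponentHandle;
+    return IOTHUB_CLIENT_OK;
+}
+
+static IOTHUB_CLIENT_RESULT MyAdapter_DestroyAdapter(PNPBRIDGE_ADAPTER_HANDLE AdapterHandle)
+{
+    // Release adapter-wide resources.
+    (void)AdapterHandle;
+    return IOTHUB_CLIENT_OK;
+}
+
+PNP_ADAPTER MyPnpAdapter = {
+    .identity = "my-pnp-adapter",   // must match the adapter identity used in the configuration file
+    .createAdapter = MyAdapter_CreateAdapter,
+    .createPnpComponent = MyAdapter_CreatePnpComponent,
+    .startPnpComponent = MyAdapter_StartPnpComponent,
+    .stopPnpComponent = MyAdapter_StopPnpComponent,
+    .destroyPnpComponent = MyAdapter_DestroyPnpComponent,
+    .destroyAdapter = MyAdapter_DestroyAdapter
+};
+```
+
+The exported `MyPnpAdapter` instance is the symbol that the adapter manifest references when you enable the adapter, as shown later in this article.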
+
+### Bridge core interaction with bridge adapters
+
+The following list outlines what happens when the bridge starts:
+
+1. When the bridge starts, the bridge adapter manager looks through each interface component defined in the configuration file and calls `PNPBRIDGE_ADAPTER_CREATE` on the appropriate adapter. The adapter may use global adapter configuration parameters to set up resources to support the various *interface configurations*.
+1. For every device in the configuration file, the bridge manager initiates interface creation by calling `PNPBRIDGE_COMPONENT_CREATE` in the appropriate bridge adapter.
+1. The adapter receives any optional adapter configuration settings for the interface component and uses this information to set up connections to the device.
+1. The adapter creates the digital twin client interfaces and binds the callback functions for property updates and commands. Establishing device connections shouldn't block the return of the callbacks after digital twin interface creation succeeds. The active device connection is independent of the active interface client the bridge creates. If a connection fails, the adapter assumes the device is inactive. The bridge adapter can choose to retry making this connection.
+1. After the bridge adapter manager creates all the interface components specified in the configuration file, it registers all the interfaces with Azure IoT Hub. Registration is a blocking, asynchronous call. When the call completes, it triggers a callback in the bridge adapter that can then start handling property and command callbacks from the cloud.
+1. The bridge adapter manager then calls `PNPBRIDGE_COMPONENT_START` on each component and the bridge adapter starts reporting telemetry to the digital twin client.
+
+### Design guidelines
+
+Follow these guidelines when you develop a new bridge adapter:
+
+- Determine which device capabilities are supported and what the interface definition of the components using this adapter looks like.
+- Determine what interface and global parameters your adapter needs defined in the configuration file.
+- Identify the low-level device communication required to support the component properties and commands.
+- Determine how the adapter should parse the raw data from the device and convert it to the telemetry types that the IoT Plug and Play interface definition specifies.
+- Implement the bridge adapter interface described previously.
+- Add the new adapter to the adapter manifest and build the bridge.
+
+### Enable a new bridge adapter
+
+You enable adapters in the bridge by adding a reference in [adapter_manifest.c](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/src/adapters/src/shared/adapter_manifest.c):
+
+```c
+ extern PNP_ADAPTER MyPnpAdapter;
+ PPNP_ADAPTER PNP_ADAPTER_MANIFEST[] = {
+ .
+ .
+ &MyPnpAdapter
+ }
+```
+
+> [!IMPORTANT]
+> Bridge adapter callbacks are invoked sequentially. An adapter shouldn't block a callback because this prevents the bridge core from making progress.
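+
+For example, expanding the start callback from the earlier skeleton, an adapter can hand long-running device I/O to a worker thread and return immediately instead of blocking the bridge core. The sketch below is an assumption for illustration: it uses `ThreadAPI_Create` from the azure-c-shared-utility library that the bridge samples link against, and `MY_ADAPTER_COMPONENT` is a hypothetical per-component context type.
+
+```c
+#include <stdbool.h>
+#include "pnpadapter_api.h"
+#include "azure_c_shared_utility/threadapi.h"
+#include "azure_c_shared_utility/xlogging.h"
+
+// Hypothetical per-component context; a real adapter defines its own.
+typedef struct MY_ADAPTER_COMPONENT_TAG
+{
+    THREAD_HANDLE workerThread;
+    volatile bool shuttingDown;
+    unsigned int pollingIntervalMs;
+} MY_ADAPTER_COMPONENT;
+
+// Worker that polls the device and sends telemetry until asked to stop.
+static int MyAdapter_TelemetryWorker(void* context)
+{
+    MY_ADAPTER_COMPONENT* component = (MY_ADAPTER_COMPONENT*)context;
+    while (!component->shuttingDown)
+    {
+        // Read the device and send telemetry here (see the telemetry example later in this article).
+        ThreadAPI_Sleep(component->pollingIntervalMs);
+    }
+    return 0;
+}
+
+// Start callback: spawn the worker rather than blocking inside the callback.
+static IOTHUB_CLIENT_RESULT MyAdapter_StartPnpComponent(
+    PNPBRIDGE_ADAPTER_HANDLE AdapterHandle, PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+    (void)AdapterHandle;
+    MY_ADAPTER_COMPONENT* component = PnpComponentHandleGetContext(PnpComponentHandle);
+
+    if (ThreadAPI_Create(&component->workerThread, MyAdapter_TelemetryWorker, component) != THREADAPI_OK)
+    {
+        LogError("My adapter:: unable to create the telemetry worker thread");
+        return IOTHUB_CLIENT_ERROR;
+    }
+    return IOTHUB_CLIENT_OK;
+}
+```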
+
+### Sample camera adapter
+
+The [Camera adapter readme](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/src/adapters/src/Camera/readme.md) describes a sample camera adapter that you can enable.
+
+## Code examples for common adapter scenarios/callbacks
+
+The following section provides details on how an adapter for the bridge can implement callbacks for a number of common scenarios and usages. This section covers the following callbacks:
+- [Receive property update (cloud to device)](#receive-property-update-cloud-to-device)
+- [Report a property update (device to cloud)](#report-a-property-update-device-to-cloud)
+- [Send telemetry (device to cloud)](#send-telemetry-device-to-cloud)
+- [Receive command update callback from the cloud and process it on the device side (cloud to device)](#receive-command-update-callback-from-the-cloud-and-process-it-on-the-device-side-cloud-to-device)
+- [Respond to command update on the device side (device to cloud)](#respond-to-command-update-on-the-device-side-device-to-cloud)
+
+The examples below are based on the [environmental sensor sample adapter](https://github.com/Azure/iot-plug-and-play-bridge/tree/master/pnpbridge/src/adapters/samples/environmental_sensor).
+
+### Receive property update (cloud to device)
+The first step is to register a callback function:
+
+```c
+PnpComponentHandleSetPropertyUpdateCallback(BridgeComponentHandle, EnvironmentSensor_ProcessPropertyUpdate);
+```
+The next step is to implement the callback function to read the property update on the device:
+
+```c
+void EnvironmentSensor_ProcessPropertyUpdate(
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle,
+ const char* PropertyName,
+ JSON_Value* PropertyValue,
+ int version,
+ void* userContextCallback
+)
+{
+ // User context for the callback is set to the IoT Hub client handle, and therefore can be type-cast to the client handle type
+ SampleEnvironmentalSensor_ProcessPropertyUpdate(userContextCallback, PropertyName, PropertyValue, version, PnpComponentHandle);
+}
+
+// SampleEnvironmentalSensor_ProcessPropertyUpdate receives updated properties from the server. This implementation
+// acts as a simple dispatcher to the functions to perform the actual processing.
+void SampleEnvironmentalSensor_ProcessPropertyUpdate(
+ void * ClientHandle,
+ const char* PropertyName,
+ JSON_Value* PropertyValue,
+ int version,
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+ if (strcmp(PropertyName, sampleEnvironmentalSensorPropertyBrightness) == 0)
+ {
+ SampleEnvironmentalSensor_BrightnessCallback(ClientHandle, PropertyName, PropertyValue, version, PnpComponentHandle);
+ }
+ else
+ {
+ // If the property is not implemented by this interface, presently we only record a log message but do not have a mechanism to report back to the service
+ LogError("Environmental Sensor Adapter:: Property name <%s> is not associated with this interface", PropertyName);
+ }
+}
+
+// Process a property update for bright level.
+static void SampleEnvironmentalSensor_BrightnessCallback(
+ void * ClientHandle,
+ const char* PropertyName,
+ JSON_Value* PropertyValue,
+ int version,
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+ IOTHUB_CLIENT_RESULT iothubClientResult;
+ STRING_HANDLE jsonToSend = NULL;
+ char targetBrightnessString[32];
+
+ LogInfo("Environmental Sensor Adapter:: Brightness property invoked...");
+
+ PENVIRONMENT_SENSOR EnvironmentalSensor = PnpComponentHandleGetContext(PnpComponentHandle);
+
+ if (json_value_get_type(PropertyValue) != JSONNumber)
+ {
+ LogError("JSON field %s is not a number", PropertyName);
+ }
+ else if(EnvironmentalSensor == NULL || EnvironmentalSensor->SensorState == NULL)
+ {
+ LogError("Environmental sensor device context not initialized correctly.");
+ }
+ else if (SampleEnvironmentalSensor_ValidateBrightness(json_value_get_number(PropertyValue)))
+ {
+ EnvironmentalSensor->SensorState->brightness = (int) json_value_get_number(PropertyValue);
+ if (snprintf(targetBrightnessString, sizeof(targetBrightnessString),
+ g_environmentalSensorBrightnessResponseFormat, EnvironmentalSensor->SensorState->brightness) < 0)
+ {
+ LogError("Unable to create target brightness string for reporting result");
+ }
+ else if ((jsonToSend = PnP_CreateReportedPropertyWithStatus(EnvironmentalSensor->SensorState->componentName,
+ PropertyName, targetBrightnessString, PNP_STATUS_SUCCESS, g_environmentalSensorPropertyResponseDescription,
+ version)) == NULL)
+ {
+ LogError("Unable to build reported property response");
+ }
+ else
+ {
+ const char* jsonToSendStr = STRING_c_str(jsonToSend);
+ size_t jsonToSendStrLen = strlen(jsonToSendStr);
+
+ if ((iothubClientResult = SampleEnvironmentalSensor_RouteReportedState(ClientHandle, PnpComponentHandle, (const unsigned char*)jsonToSendStr, jsonToSendStrLen,
+ SampleEnvironmentalSensor_PropertyCallback,
+ (void*) &EnvironmentalSensor->SensorState->brightness)) != IOTHUB_CLIENT_OK)
+ {
+ LogError("Environmental Sensor Adapter:: SampleEnvironmentalSensor_RouteReportedState for brightness failed, error=%d", iothubClientResult);
+ }
+ else
+ {
+ LogInfo("Environmental Sensor Adapter:: Successfully queued Property update for Brightness for component=%s", EnvironmentalSensor->SensorState->componentName);
+ }
+
+ STRING_delete(jsonToSend);
+ }
+ }
+}
+
+```
+
+### Report a property update (device to cloud)
+At any point after your component is created, your device can report properties to the cloud with status:
+```c
+// Environmental sensor's read-only property, device state indicating whether it's online or not
+//
+static const char sampleDeviceStateProperty[] = "state";
+static const unsigned char sampleDeviceStateData[] = "true";
+static const int sampleDeviceStateDataLen = sizeof(sampleDeviceStateData) - 1;
+
+// Sends a reported property for device state of this simulated device.
+IOTHUB_CLIENT_RESULT SampleEnvironmentalSensor_ReportDeviceStateAsync(
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle,
+ const char * ComponentName)
+{
+
+ IOTHUB_CLIENT_RESULT iothubClientResult = IOTHUB_CLIENT_OK;
+ STRING_HANDLE jsonToSend = NULL;
+
+ if ((jsonToSend = PnP_CreateReportedProperty(ComponentName, sampleDeviceStateProperty, (const char*) sampleDeviceStateData)) == NULL)
+ {
+ LogError("Unable to build reported property response for propertyName=%s, propertyValue=%s", sampleDeviceStateProperty, sampleDeviceStateData);
+ }
+ else
+ {
+ const char* jsonToSendStr = STRING_c_str(jsonToSend);
+ size_t jsonToSendStrLen = strlen(jsonToSendStr);
+
+ if ((iothubClientResult = SampleEnvironmentalSensor_RouteReportedState(NULL, PnpComponentHandle, (const unsigned char*)jsonToSendStr, jsonToSendStrLen,
+ SampleEnvironmentalSensor_PropertyCallback, (void*)sampleDeviceStateProperty)) != IOTHUB_CLIENT_OK)
+ {
+ LogError("Environmental Sensor Adapter:: Unable to send reported state for property=%s, error=%d",
+ sampleDeviceStateProperty, iothubClientResult);
+ }
+ else
+ {
+ LogInfo("Environmental Sensor Adapter:: Sending device information property to IoTHub. propertyName=%s, propertyValue=%s",
+ sampleDeviceStateProperty, sampleDeviceStateData);
+ }
+
+ STRING_delete(jsonToSend);
+ }
+
+ return iothubClientResult;
+}
+
+// Routes the reported property for device or module client. This function can be called either by passing a valid client handle or by passing
+// a NULL client handle after components have been started such that the client handle can be extracted from the PnpComponentHandle
+IOTHUB_CLIENT_RESULT SampleEnvironmentalSensor_RouteReportedState(
+ void * ClientHandle,
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle,
+ const unsigned char * ReportedState,
+ size_t Size,
+ IOTHUB_CLIENT_REPORTED_STATE_CALLBACK ReportedStateCallback,
+ void * UserContextCallback)
+{
+ IOTHUB_CLIENT_RESULT iothubClientResult = IOTHUB_CLIENT_OK;
+
+ PNP_BRIDGE_CLIENT_HANDLE clientHandle = (ClientHandle != NULL) ?
+ (PNP_BRIDGE_CLIENT_HANDLE) ClientHandle : PnpComponentHandleGetClientHandle(PnpComponentHandle);
+
+ if ((iothubClientResult = PnpBridgeClient_SendReportedState(clientHandle, ReportedState, Size,
+ ReportedStateCallback, UserContextCallback)) != IOTHUB_CLIENT_OK)
+ {
+ LogError("IoTHub client call to _SendReportedState failed with error code %d", iothubClientResult);
+ goto exit;
+ }
+ else
+ {
+ LogInfo("IoTHub client call to _SendReportedState succeeded");
+ }
+
+exit:
+ return iothubClientResult;
+}
+
+```
+
+### Send telemetry (device to cloud)
+```c
+//
+// SampleEnvironmentalSensor_SendTelemetryMessagesAsync is periodically invoked by the caller to
+// send telemetry containing the current temperature and humidity (in both cases random numbers
+// so this sample will work on platforms without these sensors).
+//
+IOTHUB_CLIENT_RESULT SampleEnvironmentalSensor_SendTelemetryMessagesAsync(
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle)
+{
+ IOTHUB_CLIENT_RESULT result = IOTHUB_CLIENT_OK;
+ IOTHUB_MESSAGE_HANDLE messageHandle = NULL;
+ PENVIRONMENT_SENSOR device = PnpComponentHandleGetContext(PnpComponentHandle);
+
+ float currentTemperature = 20.0f + ((float)rand() / RAND_MAX) * 15.0f;
+ float currentHumidity = 60.0f + ((float)rand() / RAND_MAX) * 20.0f;
+
+ char currentMessage[128];
+ sprintf(currentMessage, "{\"%s\":%.3f, \"%s\":%.3f}", SampleEnvironmentalSensor_TemperatureTelemetry,
+ currentTemperature, SampleEnvironmentalSensor_HumidityTelemetry, currentHumidity);
+
+ if ((messageHandle = PnP_CreateTelemetryMessageHandle(device->SensorState->componentName, currentMessage)) == NULL)
+ {
+ LogError("Environmental Sensor Adapter:: PnP_CreateTelemetryMessageHandle failed.");
+ }
+ else if ((result = SampleEnvironmentalSensor_RouteSendEventAsync(PnpComponentHandle, messageHandle,
+ SampleEnvironmentalSensor_TelemetryCallback, device)) != IOTHUB_CLIENT_OK)
+ {
+ LogError("Environmental Sensor Adapter:: SampleEnvironmentalSensor_RouteSendEventAsync failed, error=%d", result);
+ }
+
+ IoTHubMessage_Destroy(messageHandle);
+
+ return result;
+}
+
+// Routes the sending asynchronous events for device or module client
+IOTHUB_CLIENT_RESULT SampleEnvironmentalSensor_RouteSendEventAsync(
+ PNPBRIDGE_COMPONENT_HANDLE PnpComponentHandle,
+ IOTHUB_MESSAGE_HANDLE EventMessageHandle,
+ IOTHUB_CLIENT_EVENT_CONFIRMATION_CALLBACK EventConfirmationCallback,
+ void * UserContextCallback)
+{
+ IOTHUB_CLIENT_RESULT iothubClientResult = IOTHUB_CLIENT_OK;
+ PNP_BRIDGE_CLIENT_HANDLE clientHandle = PnpComponentHandleGetClientHandle(PnpComponentHandle);
+ if ((iothubClientResult = PnpBridgeClient_SendEventAsync(clientHandle, EventMessageHandle,
+ EventConfirmationCallback, UserContextCallback)) != IOTHUB_CLIENT_OK)
+ {
+ LogError("IoTHub client call to _SendEventAsync failed with error code %d", iothubClientResult);
+ goto exit;
+ }
+ else
+ {
+ LogInfo("IoTHub client call to _SendEventAsync succeeded");
+ }
+
+exit:
+ return iothubClientResult;
+}
+
+```
+### Receive command update callback from the cloud and process it on the device side (cloud to device)
+```c
+// SampleEnvironmentalSensor_ProcessCommandUpdate receives commands from the server. This implementation acts as a simple dispatcher
+// to the functions to perform the actual processing.
+int SampleEnvironmentalSensor_ProcessCommandUpdate(
+ PENVIRONMENT_SENSOR EnvironmentalSensor,
+ const char* CommandName,
+ JSON_Value* CommandValue,
+ unsigned char** CommandResponse,
+ size_t* CommandResponseSize)
+{
+ if (strcmp(CommandName, sampleEnvironmentalSensorCommandBlink) == 0)
+ {
+ return SampleEnvironmentalSensor_BlinkCallback(EnvironmentalSensor, CommandValue, CommandResponse, CommandResponseSize);
+ }
+ else if (strcmp(CommandName, sampleEnvironmentalSensorCommandTurnOn) == 0)
+ {
+ return SampleEnvironmentalSensor_TurnOnLightCallback(EnvironmentalSensor, CommandValue, CommandResponse, CommandResponseSize);
+ }
+ else if (strcmp(CommandName, sampleEnvironmentalSensorCommandTurnOff) == 0)
+ {
+ return SampleEnvironmentalSensor_TurnOffLightCallback(EnvironmentalSensor, CommandValue, CommandResponse, CommandResponseSize);
+ }
+ else
+ {
+ // If the command is not implemented by this interface, by convention we return a 404 error to server.
+ LogError("Environmental Sensor Adapter:: Command name <%s> is not associated with this interface", CommandName);
+ return SampleEnvironmentalSensor_SetCommandResponse(CommandResponse, CommandResponseSize, sampleEnviromentalSensor_NotImplemented);
+ }
+}
+
+// Implement the callback to process the command "blink". Information pertaining to the request is
+// specified in the CommandValue parameter, and the callback fills out data it wishes to
+// return to the caller on the service in CommandResponse.
+
+static int SampleEnvironmentalSensor_BlinkCallback(
+ PENVIRONMENT_SENSOR EnvironmentalSensor,
+ JSON_Value* CommandValue,
+ unsigned char** CommandResponse,
+ size_t* CommandResponseSize)
+{
+ int result = PNP_STATUS_SUCCESS;
+ int BlinkInterval = 0;
+
+ LogInfo("Environmental Sensor Adapter:: Blink command invoked. It has been invoked %d times previously", EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled);
+
+ if (json_value_get_type(CommandValue) != JSONNumber)
+ {
+ LogError("Cannot retrieve blink interval for blink command");
+ result = PNP_STATUS_BAD_FORMAT;
+ }
+ else
+ {
+ BlinkInterval = (int)json_value_get_number(CommandValue);
+ LogInfo("Environmental Sensor Adapter:: Blinking with interval=%d second(s)", BlinkInterval);
+ EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled++;
+ EnvironmentalSensor->SensorState->blinkInterval = BlinkInterval;
+
+ result = SampleEnvironmentalSensor_SetCommandResponse(CommandResponse, CommandResponseSize, sampleEnviromentalSensor_BlinkResponse);
+ }
+
+ return result;
+}
+
+```
+### Respond to command update on the device side (device to cloud)
+
+```c
+ static int SampleEnvironmentalSensor_BlinkCallback(
+ PENVIRONMENT_SENSOR EnvironmentalSensor,
+ JSON_Value* CommandValue,
+ unsigned char** CommandResponse,
+ size_t* CommandResponseSize)
+ {
+ int result = PNP_STATUS_SUCCESS;
+ int BlinkInterval = 0;
+
+ LogInfo("Environmental Sensor Adapter:: Blink command invoked. It has been invoked %d times previously", EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled);
+
+ if (json_value_get_type(CommandValue) != JSONNumber)
+ {
+ LogError("Cannot retrieve blink interval for blink command");
+ result = PNP_STATUS_BAD_FORMAT;
+ }
+ else
+ {
+ BlinkInterval = (int)json_value_get_number(CommandValue);
+ LogInfo("Environmental Sensor Adapter:: Blinking with interval=%d second(s)", BlinkInterval);
+ EnvironmentalSensor->SensorState->numTimesBlinkCommandCalled++;
+ EnvironmentalSensor->SensorState->blinkInterval = BlinkInterval;
+
+ result = SampleEnvironmentalSensor_SetCommandResponse(CommandResponse, CommandResponseSize, sampleEnviromentalSensor_BlinkResponse);
+ }
+
+ return result;
+ }
+
+ // SampleEnvironmentalSensor_SetCommandResponse is a helper that fills out a command response
+ static int SampleEnvironmentalSensor_SetCommandResponse(
+ unsigned char** CommandResponse,
+ size_t* CommandResponseSize,
+ const unsigned char* ResponseData)
+ {
+ int result = PNP_STATUS_SUCCESS;
+ if (ResponseData == NULL)
+ {
+ LogError("Environmental Sensor Adapter:: Response Data is empty");
+ *CommandResponseSize = 0;
+ return PNP_STATUS_INTERNAL_ERROR;
+ }
+
+ *CommandResponseSize = strlen((char*)ResponseData);
+ memset(CommandResponse, 0, sizeof(*CommandResponse));
+
+ // Allocate a copy of the response data to return to the invoker. Caller will free this.
+ if (mallocAndStrcpy_s((char**)CommandResponse, (char*)ResponseData) != 0)
+ {
+ LogError("Environmental Sensor Adapter:: Unable to allocate response data");
+ result = PNP_STATUS_INTERNAL_ERROR;
+ }
+
+ return result;
+}
+```
+
+## Next steps
+
+To learn more about the IoT Plug and Play bridge, visit the [IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge) GitHub repository.
iot-pnp https://docs.microsoft.com/en-us/azure/iot-pnp/howto-build-deploy-extend-pnp-bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-build-deploy-extend-pnp-bridge.md
@@ -1,9 +1,9 @@
---
-title: How to build, deploy, and extend IoT Plug and Play bridge | Microsoft Docs
-description: Identify the IoT Plug and Play bridge components. Learn how to extend the bridge, and how to run it on IoT devices, gateways, and as an IoT Edge module.
+title: How to build and deploy IoT Plug and Play bridge | Microsoft Docs
+description: Identify the IoT Plug and Play bridge components. Learn how to run it on IoT devices, gateways, and as an IoT Edge module.
author: usivagna ms.author: ugans
-ms.date: 12/11/2020
+ms.date: 1/20/2021
ms.topic: how-to ms.service: iot-pnp services: iot-pnp
@@ -11,14 +11,13 @@ services: iot-pnp
# As a device builder, I want to understand the IoT Plug and Play bridge, learn how to extend it, and learn how to run it on IoT devices, gateways, and as an IoT Edge module. ---
-# Build, deploy, and extend the IoT Plug and Play bridge
+# Build and deploy the IoT Plug and Play bridge
-The IoT Plug and Play bridge lets you connect the existing devices attached to a gateway to your IoT hub. You use the bridge to map IoT Plug and Play interfaces to the attached devices. An IoT Plug and Play interface defines the telemetry that a device sends, the properties synchronized between the device and the cloud, and the commands that the device responds to. You can install and configure the open-source bridge application on Windows or Linux gateways.
+The [IoT Plug and Play bridge](concepts-iot-pnp-bridge.md#iot-plug-and-play-bridge-architecture) lets you connect the existing devices attached to a gateway to your IoT hub. You use the bridge to map IoT Plug and Play interfaces to the attached devices. An IoT Plug and Play interface defines the telemetry that a device sends, the properties synchronized between the device and the cloud, and the commands that the device responds to. You can install and configure the open-source bridge application on Windows or Linux gateways. Additionally, the bridge can be run as an Azure IoT Edge runtime module.
This article explains in detail how to:

- Configure a bridge.
-- Extend a bridge by creating new adapters.
- How to build and run the bridge in various environments.

For a simple example that shows how to use the bridge, see [How to connect the IoT Plug and Play bridge sample that runs on Linux or Windows to IoT Hub](howto-use-iot-pnp-bridge.md).
@@ -75,97 +74,6 @@ The [configuration file schema](https://github.com/Azure/iot-plug-and-play-bridg
When the bridge runs as an IoT Edge module on an IoT Edge runtime, the configuration file is sent from the cloud as an update to the `PnpBridgeConfig` desired property. The bridge waits for this property update before it configures the adapters and components.
-## Extend the bridge
-
-To extend the capabilities of the bridge, you can author your own bridge adapters.
-
-The bridge uses adapters to:
--- Establish a connection between a device and the cloud.-- Enable data flow between a device and the cloud.-- Enable device management from the cloud.-
-Every bridge adapter must:
--- Create a digital twins interface.-- Use the interface to bind device-side functionality to cloud-based capabilities such as telemetry, properties, and commands.-- Establish control and data communication with the device hardware or firmware.-
-Each bridge adapter interacts with a specific type of device based on how the adapter connects to and interacts with the device. Even if communication with a device uses a handshaking protocol, a bridge adapter may have multiple ways to interpret the data from the device. In this scenario, the bridge adapter uses information for the adapter in the configuration file to determine the *interface configuration* the adapter should use to parse the data.
-
-To interact with the device, a bridge adapter uses a communication protocol supported by the device and APIs provided either by the underlying operating system, or the device vendor.
-
-To interact with the cloud, a bridge adapter uses APIs provided by the Azure IoT Device C SDK to send telemetry, create digital twin interfaces, send property updates, and create callback functions for property updates and commands.
-
-### Create a bridge adapter
-
-The bridge expects a bridge adapter to implement the APIs defined in the [_PNP_ADAPTER](https://github.com/Azure/iot-plug-and-play-bridge/blob/9964f7f9f77ecbf4db3b60960b69af57fd83a871/pnpbridge/src/pnpbridge/inc/pnpadapter_api.h#L296) interface:
-
-```c
-typedef struct _PNP_ADAPTER {
- // Identity of the IoT Plug and Play adapter that is retrieved from the config
- const char* identity;
-
- PNPBRIDGE_ADAPTER_CREATE createAdapter;
- PNPBRIDGE_COMPONENT_CREATE createPnpComponent;
- PNPBRIDGE_COMPONENT_START startPnpComponent;
- PNPBRIDGE_COMPONENT_STOP stopPnpComponent;
- PNPBRIDGE_COMPONENT_DESTROY destroyPnpComponent;
- PNPBRIDGE_ADAPTER_DESTOY destroyAdapter;
-} PNP_ADAPTER, * PPNP_ADAPTER;
-```
-
-In this interface:
--- `PNPBRIDGE_ADAPTER_CREATE` creates the adapter and sets up the interface management resources. An adapter may also rely on global adapter parameters for adapter creation. This function is called once for a single adapter.-- `PNPBRIDGE_COMPONENT_CREATE` creates the digital twin client interfaces and binds the callback functions. The adapter initiates the communication channel to the device. The adapter may set up the resources to enable the telemetry flow but doesn't start reporting telemetry until `PNPBRIDGE_COMPONENT_START` is called. This function is called once for each interface component in the configuration file.-- `PNPBRIDGE_COMPONENT_START` is called to let the bridge adapter start forwarding telemetry from the device to the digital twin client. This function is called once for each interface component in the configuration file.-- `PNPBRIDGE_COMPONENT_STOP` stops the telemetry flow.-- `PNPBRIDGE_COMPONENT_DESTROY` destroys the digital twin client and associated interface resources. This function is called once for each interface component in the configuration file when the bridge is torn down or when a fatal error occurs.-- `PNPBRIDGE_ADAPTER_DESTROY` cleans up the bridge adapter resources.-
-### Bridge core interaction with bridge adapters
-
-The following list outlines what happens when the bridge starts:
-
-1. When the bridge starts, the bridge adapter manager looks through each interface component defined in the configuration file and calls `PNPBRIDGE_ADAPTER_CREATE` on the appropriate adapter. The adapter may use global adapter configuration parameters to set up resources to support the various *interface configurations*.
-1. For every device in the configuration file, the bridge manager initiates interface creation by calling `PNPBRIDGE_COMPONENT_CREATE` in the appropriate bridge adapter.
-1. The adapter receives any optional adapter configuration settings for the interface component and uses this information to set up connections to the device.
-1. The adapter creates the digital twin client interfaces and binds the callback functions for property updates and commands. Establishing device connections shouldn't block the return of the callbacks after digital twin interface creation succeeds. The active device connection is independent of the active interface client the bridge creates. If a connection fails, the adapter assumes the device is inactive. The bridge adapter can choose to retry making this connection.
-1. After the bridge adapter manger creates all the interface components specified in the configuration file, it registers all the interfaces with Azure IoT Hub. Registration is a blocking, asynchronous call. When the call completes, it triggers a callback in the bridge adapter that can then start handling property and command callbacks from the cloud.
-1. The bridge adapter manager then calls `PNPBRIDGE_INTERFACE_START` on each component and the bridge adapter starts reporting telemetry to the digital twin client.
-
-### Design guidelines
-
-Follow these guidelines when you develop a new bridge adapter:
--- Determine which device capabilities are supported and what the interface definition of the components using this adapter looks like.-- Determine what interface and global parameters your adapter needs defined in the configuration file.-- Identify the low-level device communication required to support the component properties and commands.-- Determine how the adapter should parse the raw data from the device and convert it to the telemetry types that the IoT Plug and Play interface definition specifies.-- Implement the bridge adapter interface described previously.-- Add the new adapter to the adapter manifest and build the bridge.-
-### Enable a new bridge adapter
-
-You enable adapters in the bridge by adding a reference in [adapter_manifest.c](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/src/adapters/src/shared/adapter_manifest.c):
-
-```c
- extern PNP_ADAPTER MyPnpAdapter;
- PPNP_ADAPTER PNP_ADAPTER_MANIFEST[] = {
- .
- .
- &MyPnpAdapter
- }
-```
-
-> [!IMPORTANT]
-> Bridge adapter callbacks are invoked sequentially. An adapter shouldn't block a callback because this prevents the bridge core from making progress.
-
-### Sample camera adapter
-
-The [Camera adapter readme](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/src/adapters/src/Camera/readme.md) describes a sample camera adapter that you can enable.
- ## Build and run the bridge on an IoT device or gateway | Platform | Supported |
@@ -375,7 +283,6 @@ Launch VS Code, open the command palette, enter *Remote WSL: Open folder in WSL*
Open the *pnpbridge\Dockerfile.amd64* file. Edit the environment variable definitions as follows:

```dockerfile
-ENV IOTHUB_DEVICE_CONNECTION_STRING="{Add your device connection string here}"
ENV PNP_BRIDGE_ROOT_MODEL_ID="dtmi:com:example:RootPnpBridgeSampleDevice;1"
ENV PNP_BRIDGE_HUB_TRACING_ENABLED="false"
ENV IOTEDGE_WORKLOADURI="something"
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -489,8 +489,8 @@ Learn more about [image instance segmentation labeling](how-to-label-images.md).
+ Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter. + **azureml-train-automl-runtime** + Improved console output when best model explanations fail.
- + Renamed "backlist_models" input parameter to "blocked_models".
- + Renamed "whitelist_models" input parameter to "allowed_models".
+ + Renamed input parameter to "blocked_models" to remove a sensitive term.
+ + Renamed input parameter to "allowed_models" to remove a sensitive term.
+ Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
private-link https://docs.microsoft.com/en-us/azure/private-link/create-private-link-service-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/create-private-link-service-portal.md
@@ -169,7 +169,7 @@ In this section, you'll create a load balancer rule:
4. Leave the rest of the defaults and then select **OK**.
-## Create a Private Link service
+## Create a private link service
In this section, you'll create a Private Link service behind a standard load balancer.
@@ -215,9 +215,115 @@ In this section, you'll create a Private Link service behind a standard load bal
12. Select **Create** in the **Review + create** tab.
+Your private link service is created and can receive traffic. If you want to see traffic flows, configure your application behind your standard load balancer.
+
+## Create private endpoint
+
+In this section, you'll map the private link service to a private endpoint. A virtual network contains the private endpoint for the private link service. This virtual network contains the resources that will access your private link service.
+
+### Create private endpoint virtual network
+
+1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+
+2. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |------------------|-----------------------------------------------------------------|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **CreatePrivLinkService-rg** |
+ | **Instance details** | |
+ | Name | Enter **myVNetPE** |
+ | Region | Select **East US 2** |
+
+3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+4. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--------------------|----------------------------|
+ | IPv4 address space | Enter **11.1.0.0/16** |
+
+5. Under **Subnet name**, select the word **default**.
+
+6. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--------------------|----------------------------|
+ | Subnet name | Enter **mySubnetPE** |
+ | Subnet address range | Enter **11.1.0.0/24** |
+
+7. Select **Save**.
+
+8. Select the **Review + create** tab or select the **Review + create** button.
+
+9. Select **Create**.
+
+### Create private endpoint
+
+1. On the upper-left side of the screen in the portal, select **Create a resource** > **Networking** > **Private Link**, or in the search box enter **Private Link**.
+
+2. Select **Create**.
+
+3. In **Private Link Center**, select **Private endpoints** in the left-hand menu.
+
+4. In **Private endpoints**, select **+ Add**.
+
+5. In the **Basics** tab of **Create a private endpoint**, enter, or select this information:
+
+ | Setting | Value |
+ | ------- | ----- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePrivLinkService-rg**. You created this resource group in the previous section.|
+ | **Instance details** | |
+ | Name | Enter **myPrivateEndpoint**. |
+ | Region | Select **East US 2**. |
+
+6. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page.
+
+7. In **Resource**, enter or select this information:
+
+ | Setting | Value |
+ | ------- | ----- |
+ | Connection method | Select **Connect to an Azure resource in my directory**. |
+ | Subscription | Select your subscription. |
+ | Resource type | Select **Microsoft.Network/privateLinkServices**. |
+ | Resource | Select **myPrivateLinkService**. |
+
+8. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen.
+
+9. In **Configuration**, enter or select this information:
+
+ | Setting | Value |
+ | ------- | ----- |
+ | **Networking** | |
+ | Virtual Network | Select **myVNetPE**. |
+ | Subnet | Select **mySubnetPE**. |
+
+10. Select the **Review + create** tab, or the **Review + create** button at the bottom of the screen.
+
+11. Select **Create**.
+
+### IP address of private endpoint
+
+In this section, you'll find the IP address of the private endpoint that corresponds with the load balancer and private link service.
+
+1. In the left-hand column of the Azure portal, select **Resource groups**.
+
+2. Select the **CreatePrivLinkService-rg** resource group.
+
+3. In the **CreatePrivLinkService-rg** resource group, select **myPrivateEndpoint**.
+
+4. In the **Overview** page of **myPrivateEndpoint**, select the name of the network interface associated with the private endpoint. The network interface name begins with **myPrivateEndpoint.nic**.
+
+5. In the **Overview** page of the private endpoint nic, the IP address of the endpoint is displayed in **Private IP address**.
+
+ ## Clean up resources
-When you're done using the Private Link service, delete the resource group to clean up the resources used in this quickstart.
+When you're done using the private link service, delete the resource group to clean up the resources used in this quickstart.
1. Enter **CreatePrivLinkService-rg** in the search box at the top of the portal, and select **CreatePrivLinkService-rg** from the search results.

1. Select **Delete resource group**.
@@ -229,7 +335,8 @@ When you're done using the Private Link service, delete the resource group to cl
In this quickstart, you:

* Created a virtual network and internal Azure Load Balancer.
-* Created a private link service
+* Created a private link service.
+* Created a virtual network and a private endpoint for the private link service.
To learn more about Azure Private endpoint, continue to:

> [!div class="nextstepaction"]
private-link https://docs.microsoft.com/en-us/azure/private-link/create-private-link-service-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/create-private-link-service-powershell.md
@@ -6,7 +6,7 @@ author: asudbring
# Customer intent: As someone with a basic network background, but is new to Azure, I want to create an Azure private link service ms.service: private-link ms.topic: how-to
-ms.date: 01/20/2021
+ms.date: 01/24/2021
ms.author: allensu ---
@@ -152,7 +152,11 @@ $ipsettings = @{
$ipconfig = New-AzPrivateLinkServiceIpConfig @ipsettings

## Place the load balancer frontend configuration into a variable. ##
-$fe = Get-AzLoadBalancer -Name 'myLoadBalancer' | Get-AzLoadBalancerFrontendIpConfig
+$par = @{
+ Name = 'myLoadBalancer'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+}
+$fe = Get-AzLoadBalancer @par | Get-AzLoadBalancerFrontendIpConfig
## Create the private link service for the load balancer. ##
$privlinksettings = @{
@@ -163,6 +167,129 @@ $privlinksettings = @{
    IpConfiguration = $ipconfig
}
New-AzPrivateLinkService @privlinksettings
+
+```
+
+Your private link service is created and can receive traffic. If you want to see traffic flows, configure your application behind your standard load balancer.
+
+## Create private endpoint
+
+In this section, you'll map the private link service to a private endpoint. A virtual network contains the private endpoint for the private link service. This virtual network contains the resources that will access your private link service.
+
+### Create private endpoint virtual network
+
+* Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork).
+
+```azurepowershell-interactive
+## Create backend subnet config ##
+$subnet = @{
+ Name = 'mySubnetPE'
+ AddressPrefix = '11.1.0.0/24'
+ PrivateEndpointNetworkPolicies = 'Disabled'
+}
+$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
+
+## Create the virtual network ##
+$net = @{
+ Name = 'myVNetPE'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+ Location = 'eastus2'
+ AddressPrefix = '11.1.0.0/16'
+ Subnet = $subnetConfig
+}
+$vnetpe = New-AzVirtualNetwork @net
+
+```
+
+### Create endpoint and connection
+
+* Use [Get-AzPrivateLinkService](/powershell/module/az.network/get-azprivatelinkservice) to place the configuration of the private link service you created early into a variable for later use.
+
+* Use [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/new-azprivatelinkserviceconnection) to create the connection configuration.
+
+* Use [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) to create the endpoint.
+
+```azurepowershell-interactive
+## Place the private link service configuration into variable. ##
+$par1 = @{
+ Name = 'myPrivateLinkService'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+}
+$pls = Get-AzPrivateLinkService @par1
+
+## Create the private link configuration and place in variable. ##
+$par2 = @{
+ Name = 'myPrivateLinkConnection'
+ PrivateLinkServiceId = $pls.Id
+}
+$plsConnection = New-AzPrivateLinkServiceConnection @par2
+
+## Place the virtual network into a variable. ##
+$par3 = @{
+ Name = 'myVNetPE'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+}
+$vnetpe = Get-AzVirtualNetwork @par3
+
+## Create private endpoint ##
+$par4 = @{
+ Name = 'MyPrivateEndpoint'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+ Location = 'eastus2'
+ Subnet = $vnetpe.subnets[0]
+ PrivateLinkServiceConnection = $plsConnection
+}
+New-AzPrivateEndpoint @par4 -ByManualRequest
+```
+
+### Approve the private endpoint connection
+
+In this section, you'll approve the connection you created in the previous steps.
+
+* Use [Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnnection) to approve the connection.
+
+```azurepowershell-interactive
+## Place the private link service configuration into variable. ##
+$par1 = @{
+ Name = 'myPrivateLinkService'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+}
+$pls = Get-AzPrivateLinkService @par1
+
+$par2 = @{
+ Name = $pls.PrivateEndpointConnections[0].Name
+ ServiceName = 'myPrivateLinkService'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+ Description = 'Approved'
+}
+Approve-AzPrivateEndpointConnection @par2
+
+```
+
+### IP address of private endpoint
+
+In this section, you'll find the IP address of the private endpoint that corresponds with the load balancer and private link service.
+
+* Use [Get-AzPrivateEndpoint](/powershell/module/az.network/get-azprivateendpoint) to retrieve the IP address.
+
+```azurepowershell-interactive
+## Get private endpoint and the IP address and place in a variable for display. ##
+$par1 = @{
+ Name = 'myPrivateEndpoint'
+ ResourceGroupName = 'CreatePrivLinkService-rg'
+ ExpandResource = 'networkinterfaces'
+}
+$pe = Get-AzPrivateEndpoint @par1
+
+## Display the IP address by expanding the variable. ##
+$pe.NetworkInterfaces[0].IpConfigurations[0].PrivateIpAddress
+```
+
+```bash
+❯ $pe.NetworkInterfaces[0].IpConfigurations[0].PrivateIpAddress
+11.1.0.4
```

## Clean up resources
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-tutorial-aml-custom-skill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-aml-custom-skill.md
@@ -1,18 +1,18 @@
---
-title: "Tutorial: Create and deploy a custom skill with Azure Machine Learning"
+title: "Example: Create and deploy a custom skill with Azure Machine Learning"
titleSuffix: Azure Cognitive Search
-description: This tutorial demonstrates how to use Azure Machine Learning to build and deploy a custom skill for Azure Cognitive Search's AI enrichment pipeline.
+description: This example demonstrates how to use Azure Machine Learning to build and deploy a custom skill for Azure Cognitive Search's AI enrichment pipeline.
manager: nitinme author: HeidiSteen ms.author: heidist ms.service: cognitive-search
-ms.topic: tutorial
+ms.topic: conceptual
ms.date: 09/25/2020 ---
-# Tutorial: Build and deploy a custom skill with Azure Machine Learning
+# Example: Build and deploy a custom skill with Azure Machine Learning
-In this tutorial, you will use the [hotel reviews dataset](https://www.kaggle.com/datafiniti/hotel-reviews) (distributed under the Creative Commons license [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.txt)) to create a [custom skill](./cognitive-search-aml-skill.md) using Azure Machine Learning to extract aspect-based sentiment from the reviews. This allows for the assignment of positive and negative sentiment within the same review to be correctly ascribed to identified entities like staff, room, lobby, or pool.
+In this example, you will use the [hotel reviews dataset](https://www.kaggle.com/datafiniti/hotel-reviews) (distributed under the Creative Commons license [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.txt)) to create a [custom skill](./cognitive-search-aml-skill.md) using Azure Machine Learning to extract aspect-based sentiment from the reviews. This allows for the assignment of positive and negative sentiment within the same review to be correctly ascribed to identified entities like staff, room, lobby, or pool.
To train the aspect-based sentiment model in Azure Machine Learning, you will be using the [nlp recipes repository](https://github.com/microsoft/nlp-recipes/tree/master/examples/sentiment_analysis/absa). The model will then be deployed as an endpoint on an Azure Kubernetes cluster. Once deployed, the endpoint is added to the enrichment pipeline as an AML skill for use by the Cognitive Search service.
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-tutorial-blob-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-blob-dotnet.md
@@ -8,7 +8,7 @@ author: MarkHeff
ms.author: maheff ms.service: cognitive-search ms.topic: tutorial
-ms.date: 10/05/2020
+ms.date: 01/23/2021
ms.custom: devx-track-csharp ---
@@ -19,8 +19,8 @@ If you have unstructured text or images in Azure Blob storage, an [AI enrichment
In this tutorial, you will learn how to: > [!div class="checklist"]
-> * Set up a development environment.
-> * Define a pipeline that over blobs using OCR, language detection, entity and key phrase recognition.
+> * Set up a development environment
+> * Define a pipeline that uses OCR, language detection, and entity and key phrase recognition.
> * Execute the pipeline to invoke transformations, and to create and load a search index. > * Explore results using full text search and a rich query syntax.
@@ -28,9 +28,11 @@ If you don't have an Azure subscription, open a [free account](https://azure.mic
## Overview
-This tutorial uses C# and the **Azure.Search.Documents** client library to create a data source, index, indexer, and skillset.
+This tutorial uses C# and the [**Azure.Search.Documents** client library](/dotnet/api/overview/azure/search.documents-readme) to create a data source, index, indexer, and skillset.
-The skillset uses built-in skills based on Cognitive Services APIs. Steps in the pipeline include Optical Character Recognition (OCR) on images, language detection on text, key phrase extraction, and entity recognition (organizations). New information is stored in new fields that you can leverage in queries, facets, and filters.
+The indexer connects to a blob container that's specified in the data source object, and sends all indexed content to an existing search index.
+
+The skillset is attached to the indexer. It uses built-in skills from Microsoft to find and extract information. Steps in the pipeline include Optical Character Recognition (OCR) on images, language detection on text, key phrase extraction, and entity recognition (organizations). New information created by the pipeline is stored in new fields in an index. Once the index is populated, you can use the fields in queries, facets, and filters.
## Prerequisites
search https://docs.microsoft.com/en-us/azure/search/search-create-app-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-create-app-portal.md
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: quickstart
-ms.date: 09/25/2020
+ms.date: 01/23/2021
--- # Quickstart: Create a demo app in the portal (Azure Cognitive Search)
@@ -68,8 +68,9 @@ In Azure Cognitive Search, faceted navigation is a cumulative filtering experien
> [!TIP] > You can view the full index schema in the portal. Look for the **Index definition (JSON)** link in each index's overview page. Fields that qualify for faceted navigation have "filterable: true" and "facetable: true" attributes.
-Accept the current selection of facets and continue to the next page.
+1. In the wizard, select the **Sidebar** tab at the top of the page. You will see a list of all fields that are attributed as filterable and facetable in the index.
+1. Accept the current selection of faceted fields and continue to the next page.
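Outside the wizard, the same facetable fields can be requested programmatically. Below is a minimal, hypothetical sketch using the **Azure.Search.Documents** `SearchClient`; the endpoint, index name, and field names are assumptions, not values from this quickstart.

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var searchClient = new SearchClient(
    new Uri("https://my-demo-service.search.windows.net"),   // assumed endpoint
    "realestate-us-sample-index",                            // assumed index name
    new AzureKeyCredential("<query-api-key>"));

// Ask the service to compute facet buckets for fields marked "facetable" in the index.
var options = new SearchOptions();
options.Facets.Add("city");
options.Facets.Add("region");

var response = searchClient.Search<SearchDocument>("*", options);

foreach (FacetResult facet in response.Value.Facets["city"])
{
    // Each bucket pairs a field value with the number of matching documents.
    Console.WriteLine($"{facet.AsValueFacetResult<string>().Value}: {facet.Count}");
}
```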
## Add typeahead
@@ -81,19 +82,43 @@ The following screenshot shows options in the wizard, juxtaposed with a rendered
:::image type="content" source="media/search-create-app-portal/suggestions.png" alt-text="Query suggestion configuration":::
+## Add suggestions
+
+Suggestions refer to automated query prompts that are attached to the search box. Cognitive Search supports two: *autocompletion*, which finishes a partially entered search term, and *suggestions*, a dropdown list of potential matching documents based on the partial term.
+
+The wizard supports suggestions, and the fields that can provide suggested results are derived from a [`Suggesters`](index-add-suggesters.md) construct in the index:
+
+```JSON
+ "suggesters": [
+ {
+ "name": "sg",
+ "searchMode": "analyzingInfixMatching",
+ "sourceFields": [
+ "number",
+ "street",
+ "city",
+ "region",
+ "postCode",
+ "tags"
+ ]
+ }
+ ]
+```
+
+1. In the wizard, select the **Suggestions** tab at the top of the page. You will see a list of all fields that are designated in the index schema as suggestion providers.
+
+1. Accept the current selection and continue to the next page.
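For reference, the two behaviors map to two query calls in the **Azure.Search.Documents** library. The following is a hypothetical sketch against the `sg` suggester shown above; the endpoint and index name are placeholders.

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var searchClient = new SearchClient(
    new Uri("https://my-demo-service.search.windows.net"),   // assumed endpoint
    "realestate-us-sample-index",                            // assumed index name
    new AzureKeyCredential("<query-api-key>"));

// Autocomplete: finish a partially typed term using the "sg" suggester.
AutocompleteResults completions = searchClient.Autocomplete("sea", "sg").Value;
foreach (var item in completions.Results)
{
    Console.WriteLine(item.Text);
}

// Suggestions: return matching documents for a dropdown list.
SuggestResults<SearchDocument> suggestions = searchClient.Suggest<SearchDocument>("sea", "sg").Value;
foreach (var match in suggestions.Results)
{
    Console.WriteLine(match.Text);
}
```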
+ ## Create, download and execute
-1. Select **Create demo app** to generate the HTML file.
+1. Select **Create demo app** at the bottom of the page to generate the HTML file.
1. When prompted, select **Download your app** to download the file.
-1. Open the file. You should see a page similar to the following screenshot. Enter a term and use filters to narrow results.
+1. Open the file and click the Search button. This action executes a query, which can be an empty query (`*`) that returns an arbitrary result set. The page should look similar to the following screenshot. Enter a term and use filters to narrow results.
The underlying index is composed of fictitious, generated data that has been duplicated across documents, and descriptions sometimes do not match the image. You can expect a more cohesive experience when you create an app based on your own indexes. :::image type="content" source="media/search-create-app-portal/run-app.png" alt-text="Run the app"::: - ## Clean up resources When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
@@ -104,7 +129,7 @@ If you are using a free service, remember that you are limited to three indexes,
## Next steps
-While the default app is useful for initial exploration and small tasks, reviewing the APIs early on will help you understand the concepts and workflow on a deeper level:
+The demo app is useful for prototyping because you can simulate an end-user experience without having to write JavaScript or front-end code. For more information about front-end features, start with faceted navigation:
> [!div class="nextstepaction"]
-> [Create an index using .NET SDK](./search-get-started-dotnet.md)
\ No newline at end of file
+> [How to build a facet filter](search-filters-facets.md)
\ No newline at end of file
search https://docs.microsoft.com/en-us/azure/search/search-create-service-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-create-service-portal.md
@@ -8,14 +8,14 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: quickstart
-ms.date: 10/14/2020
+ms.date: 01/23/2021
--- # Quickstart: Create an Azure Cognitive Search service in the portal
-Azure Cognitive Search is a standalone resource used to plug a search experience into custom apps. Cognitive Search integrates easily with other Azure services, with apps on network servers, or with software running on other cloud platforms.
+[Azure Cognitive Search](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps. You can integrate it easily with other Azure services that provide data or additional processing, with apps on network servers, or with software running on other cloud platforms.
-In this article, learn how to create a resource in the [Azure portal](https://portal.azure.com/).
+In this article, learn how to create a search service in the [Azure portal](https://portal.azure.com/).
[![Animated GIF](./media/search-create-service-portal/AnimatedGif-AzureSearch-small.gif)](./media/search-create-service-portal/AnimatedGif-AzureSearch.gif#lightbox)
@@ -27,7 +27,7 @@ The following service properties are fixed for the lifetime of the service - cha
* Service name becomes part of the URL endpoint ([review tips](#name-the-service) for helpful service names). * [Service tier](search-sku-tier.md) affects billing and sets an upward limit on capacity. Some features are not available on the free tier.
-* Service region can determine the availability of certain scenarios. If you need [high security features](search-security-overview.md) or [AI enrichment](cognitive-search-concept-intro.md), you will need to place Azure Cognitive Search in the same region as other services, or in regions that provide the feature in question.
+* Service region can determine the availability of certain scenarios. If you need [high security features](search-security-overview.md) or [AI enrichment](cognitive-search-concept-intro.md), you will need to create Azure Cognitive Search in the same region as other services, or in regions that provide the feature in question.
## Subscribe (free or paid)
@@ -39,7 +39,7 @@ Alternatively, [activate MSDN subscriber benefits](https://azure.microsoft.com/p
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Click the plus sign ("+ Create Resource") in the top-left corner.
+1. Click the plus sign (**"+ Create Resource"**) in the top-left corner.
1. Use the search bar to find "Azure Cognitive Search" or navigate to the resource through **Web** > **Azure Cognitive Search**.
search https://docs.microsoft.com/en-us/azure/search/search-indexer-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-tutorial.md
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: tutorial
-ms.date: 09/25/2020
+ms.date: 01/23/2021
ms.custom: devx-track-csharp #Customer intent: As a developer, I want an introduction the indexing Azure SQL data for Azure Cognitive Search. ---
@@ -104,14 +104,14 @@ API calls require the service URL and an access key. A search service is created
1. In Solution Explorer, open **appsettings.json** to provide connection information.
-1. For `searchServiceName`, if the full URL is "https://my-demo-service.search.windows.net", the service name to provide is "my-demo-service".
+1. For `SearchServiceEndPoint`, if the full URL on the service overview page is "https://my-demo-service.search.windows.net", then the value to provide is that URL.
1. For `AzureSqlConnectionString`, the string format is similar to this: `"Server=tcp:{your_dbname}.database.windows.net,1433;Initial Catalog=hotels-db;Persist Security Info=False;User ID={your_username};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"` ```json {
- "SearchServiceName": "<placeholder-Azure-Search-service-name>",
- "SearchServiceAdminApiKey": "<placeholder-admin-key-for-Azure-Search>",
+ "SearchServiceEndPoint": "<placeholder-search-url>",
+ "SearchServiceAdminApiKey": "<placeholder-admin-key-for-search-service>",
"AzureSqlConnectionString": "<placeholder-ADO.NET-connection-string", } ```
@@ -127,11 +127,12 @@ Indexers require a data source object and an index. Relevant code is in two file
### In hotel.cs
-The index schema defines the fields collection, including attributes specifying allowed operations, such as whether a field is full-text searchable, filterable, or sortable as shown in the following field definition for HotelName.
+The index schema defines the fields collection, including attributes specifying allowed operations, such as whether a field is full-text searchable, filterable, or sortable as shown in the following field definition for HotelName. A [SearchableField](/dotnet/api/azure.search.documents.indexes.models.searchablefield) is full-text searchable by definition. Other attributes are assigned explicitly.
```csharp . . .
-[IsSearchable, IsFilterable, IsSortable]
+[SearchableField(IsFilterable = true, IsSortable = true)]
+[JsonPropertyName("hotelName")]
public string HotelName { get; set; } . . . ```
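To put that single field in context, a minimal, hypothetical version of the class could look like the following. The other field names and attributes are illustrative, not the tutorial's exact model.

```csharp
using System.Text.Json.Serialization;
using Azure.Search.Documents.Indexes;

public class Hotel
{
    [SimpleField(IsKey = true, IsFilterable = true)]
    [JsonPropertyName("hotelId")]
    public string HotelId { get; set; }

    // SearchableField is full-text searchable by definition; other attributes are explicit.
    [SearchableField(IsFilterable = true, IsSortable = true)]
    [JsonPropertyName("hotelName")]
    public string HotelName { get; set; }

    [SearchableField(AnalyzerName = "en.lucene")]
    [JsonPropertyName("description")]
    public string Description { get; set; }

    [SimpleField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
    [JsonPropertyName("baseRate")]
    public double? BaseRate { get; set; }
}
```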
@@ -140,59 +141,73 @@ A schema can also include other elements, including scoring profiles for boostin
### In Program.cs
-The main program includes logic for creating a client, an index, a data source, and an indexer. The code checks for and deletes existing resources of the same name, under the assumption that you might run this program multiple times.
-
-The data source object is configured with settings that are specific to Azure SQL Database resources, including [partial or incremental indexing](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#capture-new-changed-and-deleted-rows) for leveraging the built-in [change detection features](/sql/relational-databases/track-changes/about-change-tracking-sql-server) of Azure SQL. The demo hotels database in Azure SQL has a "soft delete" column named **IsDeleted**. When this column is set to true in the database, the indexer removes the corresponding document from the Azure Cognitive Search index.
-
- ```csharp
- Console.WriteLine("Creating data source...");
-
- DataSource dataSource = DataSource.AzureSql(
- name: "azure-sql",
- sqlConnectionString: configuration["AzureSQLConnectionString"],
- tableOrViewName: "hotels",
- deletionDetectionPolicy: new SoftDeleteColumnDeletionDetectionPolicy(
- softDeleteColumnName: "IsDeleted",
- softDeleteMarkerValue: "true"));
- dataSource.DataChangeDetectionPolicy = new SqlIntegratedChangeTrackingPolicy();
-
- searchService.DataSources.CreateOrUpdateAsync(dataSource).Wait();
- ```
-
-An indexer object is platform-agnostic, where configuration, scheduling, and invocation are the same regardless of the source. This example indexer includes a schedule, a reset option that clears indexer history, and calls a method to create and run the indexer immediately.
-
- ```csharp
- Console.WriteLine("Creating Azure SQL indexer...");
- Indexer indexer = new Indexer(
- name: "azure-sql-indexer",
- dataSourceName: dataSource.Name,
- targetIndexName: index.Name,
- schedule: new IndexingSchedule(TimeSpan.FromDays(1)));
- // Indexers contain metadata about how much they have already indexed
- // If we already ran the sample, the indexer will remember that it already
- // indexed the sample data and not run again
- // To avoid this, reset the indexer if it exists
- exists = await searchService.Indexers.ExistsAsync(indexer.Name);
- if (exists)
- {
- await searchService.Indexers.ResetAsync(indexer.Name);
- }
-
- await searchService.Indexers.CreateOrUpdateAsync(indexer);
-
- // We created the indexer with a schedule, but we also
- // want to run it immediately
- Console.WriteLine("Running Azure SQL indexer...");
-
- try
- {
- await searchService.Indexers.RunAsync(indexer.Name);
- }
- catch (CloudException e) when (e.Response.StatusCode == (HttpStatusCode)429)
- {
+The main program includes logic for creating [an indexer client](/dotnet/api/azure.search.documents.indexes.searchindexerclient), an index, a data source, and an indexer. The code checks for and deletes existing resources of the same name, under the assumption that you might run this program multiple times.
+
+The data source object is configured with settings that are specific to Azure SQL Database resources, including [partial or incremental indexing](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#capture-new-changed-and-deleted-rows) for leveraging the built-in [change detection features](/sql/relational-databases/track-changes/about-change-tracking-sql-server) of Azure SQL. The source demo hotels database in Azure SQL has a "soft delete" column named **IsDeleted**. When this column is set to true in the database, the indexer removes the corresponding document from the Azure Cognitive Search index.
+
+```csharp
+Console.WriteLine("Creating data source...");
+
+var dataSource =
+ new SearchIndexerDataSourceConnection(
+ "hotels-sql-ds",
+ SearchIndexerDataSourceType.AzureSql,
+ configuration["AzureSQLConnectionString"],
+ new SearchIndexerDataContainer("hotels"));
+
+indexerClient.CreateOrUpdateDataSourceConnection(dataSource);
+```
+
+An indexer object is platform-agnostic, where configuration, scheduling, and invocation are the same regardless of the source. This example indexer includes a schedule, a reset option that clears indexer history, and calls a method to create and run the indexer immediately. To create or update an indexer, use [CreateOrUpdateIndexerAsync](/dotnet/api/azure.search.documents.indexes.searchindexerclient.createorupdateindexerasync).
+
+```csharp
+Console.WriteLine("Creating Azure SQL indexer...");
+
+var schedule = new IndexingSchedule(TimeSpan.FromDays(1))
+{
+ StartTime = DateTimeOffset.Now
+};
+
+var parameters = new IndexingParameters()
+{
+ BatchSize = 100,
+ MaxFailedItems = 0,
+ MaxFailedItemsPerBatch = 0
+};
+
+// Indexer declarations require a data source and search index.
+// Common optional properties include a schedule, parameters, and field mappings
+// The field mappings below are redundant due to how the Hotel class is defined, but
+// we included them anyway to show the syntax
+var indexer = new SearchIndexer("hotels-sql-idxr", dataSource.Name, searchIndex.Name)
+{
+ Description = "Data indexer",
+ Schedule = schedule,
+ Parameters = parameters,
+ FieldMappings =
+ {
+ new FieldMapping("_id") {TargetFieldName = "HotelId"},
+ new FieldMapping("Amenities") {TargetFieldName = "Tags"}
+ }
+};
+
+await indexerClient.CreateOrUpdateIndexerAsync(indexer);
+```
+
+Indexer runs are usually scheduled, but during development you might want to run the indexer immediately using [RunIndexerAsync](/dotnet/api/azure.search.documents.indexes.searchindexerclient.runindexerasync).
+
+```csharp
+Console.WriteLine("Running Azure SQL indexer...");
+
+try
+{
+ await indexerClient.RunIndexerAsync(indexer.Name);
+}
+catch (CloudException e) when (e.Response.StatusCode == (HttpStatusCode)429)
+{
Console.WriteLine("Failed to run indexer: {0}", e.Response.Content);
- }
- ```
+}
+```
## 4 - Build the solution
@@ -202,9 +217,9 @@ Press F5 to build and run the solution. The program executes in debug mode. A co
Your code runs locally in Visual Studio, connecting to your search service on Azure, which in turn connects to Azure SQL Database and retrieves the dataset. With this many operations, there are several potential points of failure. If you get an error, check the following conditions first:
-+ Search service connection information that you provide is limited to the service name in this tutorial. If you entered the full URL, operations stop at index creation, with a failure to connect error.
++ Search service connection information that you provide is the full URL. If you entered just the service name, operations stop at index creation, with a failure to connect error.
-+ Database connection information in **appsettings.json**. It should be the ADO.NET connection string obtained from the portal, modified to include a username and password that are valid for your database. The user account must have permission to retrieve data. Your local client IP address must be allowed access.
++ Database connection information in **appsettings.json**. It should be the ADO.NET connection string obtained from the portal, modified to include a username and password that are valid for your database. The user account must have permission to retrieve data. Your local client IP address must be allowed inbound access through the firewall. + Resource limits. Recall that the Free tier has limits of 3 indexes, indexers, and data sources. A service at the maximum limit cannot create new objects.
search https://docs.microsoft.com/en-us/azure/search/search-semi-structured-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-semi-structured-data.md
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: tutorial
-ms.date: 09/25/2020
+ms.date: 01/25/2021
#Customer intent: As a developer, I want an introduction the indexing Azure blob data for Azure Cognitive Search. ---
@@ -95,13 +95,13 @@ REST calls require the service URL and an access key on every request. A search
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
-:::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Get an HTTP endpoint and access key" border="false":::
+ :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Get an HTTP endpoint and access key" border="false":::
All requests require an api-key on every request sent to your service. Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it. ## 2 - Set up Postman
-Start Postman and set up an HTTP request. If you are unfamiliar with this tool, see [Explore Azure Cognitive Search REST APIs](search-get-started-rest.md).
+Start Postman and set up an HTTP request. If you are unfamiliar with this tool, see [Create a search index using REST APIs](search-get-started-rest.md).
The request methods for every call in this tutorial are **POST** and **GET**. You'll make three API calls to your search service to create a data source, an index, and an indexer. The data source includes a pointer to your storage account and your JSON data. Your search service makes the connection when loading the data.
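The tutorial itself uses Postman for these calls, but the same pattern applies from any HTTP client: every request goes to the service URL with the `api-key` header. The following is a hypothetical C# sketch of the first call (creating the data source) with placeholder names and connection strings throughout.

```csharp
using System;
using System.Net.Http;
using System.Text;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("api-key", "<admin-api-key>");   // placeholder admin key

// Create Data Source: POST to the datasources collection.
string url = "https://my-demo-service.search.windows.net/datasources?api-version=2020-06-30";
string body = @"{
  ""name"": ""demo-blob-ds"",
  ""type"": ""azureblob"",
  ""credentials"": { ""connectionString"": ""<storage-connection-string>"" },
  ""container"": { ""name"": ""<blob-container-name>"" }
}";

HttpResponseMessage response = await http.PostAsync(
    url, new StringContent(body, Encoding.UTF8, "application/json"));

Console.WriteLine((int)response.StatusCode);   // expect 201 Created on success
```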
@@ -155,7 +155,7 @@ The [Create Data Source API](/rest/api/searchservice/create-data-source) creates
``` ## 4 - Create an index
-
+ The second call is [Create Index API](/rest/api/searchservice/create-index), creating an Azure Cognitive Search index that stores all searchable data. An index specifies all the parameters and their attributes. 1. Set the endpoint of this call to `https://[service name].search.windows.net/indexes?api-version=2020-06-30`. Replace `[service name]` with the name of your search service.
search https://docs.microsoft.com/en-us/azure/search/tutorial-multiple-data-sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-multiple-data-sources.md
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist ms.service: cognitive-search ms.topic: tutorial
-ms.date: 10/13/2020
+ms.date: 01/23/2020
ms.custom: devx-track-csharp ---
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
@@ -113,6 +113,8 @@ To generate a benign Microsoft Defender for Endpoint test alert:
1. To review the alert in Security Center, go to **Security alerts** > **Suspicious PowerShell CommandLine**. 1. From the investigation window, select the link to go to the Microsoft Defender for Endpoint portal.
+ > [!TIP]
+ > The alert is triggered with **Informational** severity.
## FAQ for Security Center's integrated Microsoft Defender for Endpoint
security-center https://docs.microsoft.com/en-us/azure/security-center/upcoming-changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 01/21/2021
+ms.date: 01/24/2021
ms.author: memildin ---
@@ -27,11 +27,41 @@ If you're looking for the latest release notes, you'll find them in the [What's
## Planned changes
+- [Kubernetes workload protection recommendations will soon be released for General Availability (GA)](#kubernetes-workload-protection-recommendations-will-soon-be-released-for-general-availability-ga)
- [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) - [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation) - [35 preview recommendations added to increase coverage of Azure Security Benchmark](#35-preview-recommendations-being-added-to-increase-coverage-of-azure-security-benchmark)
+### Kubernetes workload protection recommendations will soon be released for General Availability (GA)
+
+**Estimated date for change:** January 2021
+
+The Kubernetes workload protection recommendations described in [Protect your Kubernetes workloads](kubernetes-workload-protections.md) are currently in preview. While a recommendation is in preview, it doesn't render a resource unhealthy, and isn't included in the calculations of your secure score.
+
+These recommendations will soon be released for General Availability (GA) and so *will* be included in the score calculation. If you haven't remediated them already, this might result in a slight impact on your secure score.
+
+Remediate them wherever possible (learn how in [Remediate recommendations in Azure Security Center](security-center-remediate-recommendations.md)).
+
+The Kubernetes workload protection recommendations are:
+
+- Azure Policy add-on for Kubernetes should be installed and enabled on your clusters
+- Container CPU and memory limits should be enforced
+- Privileged containers should be avoided
+- Immutable (read-only) root filesystem should be enforced for containers
+- Container with privilege escalation should be avoided
+- Running containers as root user should be avoided
+- Containers sharing sensitive host namespaces should be avoided
+- Least privileged Linux capabilities should be enforced for containers
+- Usage of pod HostPath volume mounts should be restricted to a known list
+- Containers should listen on allowed ports only
+- Services should listen on allowed ports only
+- Usage of host networking and ports should be restricted
+- Overriding or disabling of containers AppArmor profile should be restricted
+- Container images should be deployed only from trusted registries
+
+Learn more about these recommendations in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
+ ### Two recommendations from "Apply system updates" security control being deprecated **Estimated date for change:** February 2021
security https://docs.microsoft.com/en-us/azure/security/fundamentals/management-monitoring-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/management-monitoring-overview.md
@@ -14,7 +14,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: na
-ms.date: 10/28/2019
+ms.date: 01/24/2021
ms.author: terrylan ---
@@ -113,15 +113,18 @@ Learn more:
## Security Center
-Azure Security Center helps you prevent, detect, and respond to threats. Security Center gives you increased visibility into, and control over, the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions. It helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
+Azure Security Center helps you prevent, detect, and respond to threats. Security Center gives you increased visibility into, and control over, the security of your Azure resources as well as those in your hybrid cloud environment.
+
+Security Center performs continuous security assessments of your connected resources and compares their configuration and deployment against the [Azure Security Benchmark](../benchmarks/introduction.md) to provide detailed security recommendations tailored for your environment.
Security Center helps you optimize and monitor the security of your Azure resources by:
-* Enabling you to define policies for your Azure subscription resources according to:
- * Your company's security needs.
- * The type of applications or sensitivity of the data in each subscription.
-* Monitoring the state of your Azure virtual machines, networking, and applications.
-* Providing a list of prioritized security alerts, including alerts from integrated partner solutions. It also provides the information that you need to quickly investigate an attack and recommendations on how to remediate it.
+- Enabling you to define policies for your Azure subscription resources according to:
+ - Your organization's security needs.
+ - The type of applications or sensitivity of the data in each subscription.
+ - Any industry or regulatory standards or benchmarks you apply to your subscriptions.
+- Monitoring the state of your Azure virtual machines, networking, and applications.
+- Providing a list of prioritized security alerts, including alerts from integrated partner solutions. It also provides the information that you need to quickly investigate an attack and recommendations on how to remediate it.
Learn more:
security https://docs.microsoft.com/en-us/azure/security/fundamentals/threat-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/threat-detection.md
@@ -14,7 +14,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: na
-ms.date: 11/21/2017
+ms.date: 01/24/2021
ms.author: TomSh ---
@@ -132,21 +132,25 @@ You can create and manage DSC resources that are hosted in Azure and apply them
## Azure Security Center
-Azure Security Center helps protect your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions. Within the service, you can define polices against both your Azure subscriptions and [resource groups](../../azure-resource-manager/management/manage-resources-portal.md) for greater granularity.
+Azure Security Center helps protect your hybrid cloud environment. By performing continuous security assessments of your connected resources, it's able to provide detailed security recommendations for the discovered vulnerabilities.
-![Azure Security Center diagram](./media/threat-detection/azure-threat-detection-fig8.png)
+Security Center's recommendations are based on the [Azure Security Benchmark](../benchmarks/introduction.md) - the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud centric security.
+
+Security Center's integrated cloud workload protection platform (CWPP), **Azure Defender**, brings advanced, intelligent, protection of your Azure and hybrid resources and workloads. Enabling Azure Defender brings a range of additional security features (see [Introduction to Azure Defender](../../security-center/azure-defender.md)). The Azure Defender dashboard in Security Center provides visibility and control of the CWP features for your environment:
+
+:::image type="content" source="../../security-center/media/azure-defender/sample-defender-dashboard.png" alt-text="An example of the Azure Defender dashboard" lightbox="../../security-center/media/azure-defender/sample-defender-dashboard.png":::
Microsoft security researchers are constantly on the lookout for threats. They have access to an expansive set of telemetry gained from Microsoft's global presence in the cloud and on-premises. This wide-reaching and diverse collection of datasets enables Microsoft to discover new attack patterns and trends across its on-premises consumer and enterprise products, as well as its online services. Thus, Security Center can rapidly update its detection algorithms as attackers release new and increasingly sophisticated exploits. This approach helps you keep pace with a fast-moving threat environment.
-![Security Center threat detection](./media/threat-detection/azure-threat-detection-fig9.jpg)
+:::image type="content" source="../../security-center/media/security-center-managing-and-responding-alerts/alerts-page.png" alt-text="Azure Security Center's security alerts list":::
-Security Center threat detection works by automatically collecting security information from your Azure resources, the network, and connected partner solutions. It analyzes this information, correlating information from multiple sources, to identify threats.
+Azure Defender automatically collects security information from your resources, the network, and connected partner solutions. It analyzes this information, correlating information from multiple sources, to identify threats.
-Security alerts are prioritized in Security Center along with recommendations on how to remediate the threat.
+Azure Defender alerts are prioritized in Security Center along with recommendations on how to remediate the threats.
-Security Center employs advanced security analytics, which go far beyond signature-based approaches. Breakthroughs in big data and [machine learning](https://azure.microsoft.com/blog/machine-learning-in-azure-security-center/) technologies are used to evaluate events across the entire cloud fabric. Advanced analytics can detect threats that would be impossible to identify through manual approaches and predicting the evolution of attacks. These security analytics types are covered in the next sections.
+Security Center employs advanced security analytics, which go far beyond signature-based approaches. Breakthroughs in big data and [machine learning](https://azure.microsoft.com/blog/machine-learning-in-azure-security-center/) technologies are used to evaluate events across the entire cloud. Advanced analytics can detect threats that would be impossible to identify through manual approaches and predict the evolution of attacks. These security analytics types are covered in the next sections.
### Threat intelligence
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cef-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-agent.md
@@ -38,6 +38,11 @@ In this step, you will designate and configure the Linux machine that will forwa
- The Linux machine must not be connected to any Azure workspaces before you install the Log Analytics agent.
+- Your Linux machine must have a minimum of **4 CPU cores and 8 GB RAM**.
+
+ > [!NOTE]
+ > - A single log forwarder machine using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
+ - You may need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**. ## Run the deployment script
@@ -47,7 +52,7 @@ In this step, you will designate and configure the Linux machine that will forwa
1. Under **1.2 Install the CEF collector on the Linux machine**, copy the link provided under **Run the following script to install and apply the CEF collector**, or from the text below (applying the Workspace ID and Primary Key in place of the placeholders): ```bash
- sudo wget -O https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py [WorkspaceID] [Workspace Primary Key]
+ sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py [WorkspaceID] [Workspace Primary Key]
``` 1. While the script is running, check to make sure you don't get any error or warning messages.
@@ -90,8 +95,8 @@ Choose a syslog daemon to see the appropriate description.
- Downloads the installation script for the Log Analytics (OMS) Linux agent. ```bash
- wget -O https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/
- onboard_agent.sh
+ wget -O onboard_agent.sh https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/
+ master/installer/scripts/onboard_agent.sh
``` - Installs the Log Analytics agent.
@@ -156,8 +161,8 @@ Choose a syslog daemon to see the appropriate description.
- Downloads the installation script for the Log Analytics (OMS) Linux agent. ```bash
- wget -O https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/
- onboard_agent.sh
+ wget -O onboard_agent.sh https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/
+ master/installer/scripts/onboard_agent.sh
``` - Installs the Log Analytics agent.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cef-verify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-verify.md
@@ -40,7 +40,7 @@ Be aware that it may take about 20 minutes until your logs start to appear in **
1. Run the following script on the log forwarder (applying the Workspace ID in place of the placeholder) to check connectivity between your security solution, the log forwarder, and Azure Sentinel. This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br> ```bash
- sudo wget -O https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]
``` - You may get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-common-event-format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-common-event-format.md
@@ -76,6 +76,12 @@ Make sure the Linux machine you use as a log forwarder is running one of the fol
Make sure your machine also meets the following requirements:
+- Capacity
+ - Your machine must have a minimum of **4 CPU cores and 8 GB RAM**.
+
+ > [!NOTE]
+ > - A single log forwarder machine using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
+ - Permissions - You must have elevated permissions (sudo) on your machine.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-syslog.md
@@ -116,8 +116,12 @@ This detection requires a specific configuration of the Syslog data connector:
2. Allow sufficient time for syslog information to be collected. Then, navigate to **Azure Sentinel - Logs**, and copy and paste the following query:
- ```console
- Syslog | where Facility in ("authpriv","auth")| extend c = extract( "Accepted\\s(publickey|password|keyboard-interactive/pam)\\sfor ([^\\s]+)",1,SyslogMessage)| where isnotempty(c) | count
+ ```kusto
+ Syslog
+ | where Facility in ("authpriv","auth")
+ | extend c = extract( "Accepted\\s(publickey|password|keyboard-interactive/pam)\\sfor ([^\\s]+)",1,SyslogMessage)
+ | where isnotempty(c)
+ | count
``` Change the **Time range** if required, and select **Run**.
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/how-to-enable-replication-proximity-placement-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
@@ -88,6 +88,46 @@ $diskconfigs += $OSDiskReplicationConfig, $DataDisk1ReplicationConfig
$TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id ```
+When enabling replication for multiple data disks, use the following PowerShell script -
+
+```azurepowershell
+#Get the resource group that the virtual machine must be created in when failed over.
+$RecoveryRG = Get-AzResourceGroup -Name "a2ademorecoveryrg" -Location "West US 2"
+
+#Specify replication properties for each disk of the VM that is to be replicated (create disk replication configuration)
+#Make sure to replace the variable $OSdiskName with your OS disk name.
+
+#OS Disk
+$OSdisk = Get-AzDisk -DiskName $OSdiskName -ResourceGroupName "A2AdemoRG"
+$OSdiskId = $OSdisk.Id
+$RecoveryOSDiskAccountType = $OSdisk.Sku.Name
+$RecoveryReplicaDiskAccountType = $OSdisk.Sku.Name
+
+$OSDiskReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $OSdiskId -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryOSDiskAccountType
+
+$diskconfigs = @()
+# Use += to append; a fixed-size PowerShell array created with @() does not support .Add()
+$diskconfigs += $OSDiskReplicationConfig
+
+#Data disk
+
+# Add data disks
+Foreach( $disk in $VM.StorageProfile.DataDisks)
+{
+ # Look up the disk for the current loop iteration by its name
+ $datadisk = Get-AzDisk -DiskName $disk.Name -ResourceGroupName "A2AdemoRG"
+ $dataDiskId1 = $datadisk[0].Id
+ $RecoveryReplicaDiskAccountType = $datadisk[0].Sku.Name
+ $RecoveryTargetDiskAccountType = $datadisk[0].Sku.Name
+ $DataDisk1ReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id `
+ -DiskId $dataDiskId1 -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType `
+ -RecoveryTargetDiskAccountType $RecoveryTargetDiskAccountType
+ $diskconfigs += $DataDisk1ReplicationConfig
+}
+
+#Start replication by creating replication protected item. Using a GUID for the name of the replication protected item to ensure uniqueness of name.
+
+$TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id
+```
+ When enabling zone to zone replication with PPG, the command to start replication will be exchanged with the PowerShell cmdlet - ```azurepowershell
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-faq.md
@@ -340,6 +340,14 @@ Yes, you can use the alternate location recovery to failback to a different host
* [For VMware virtual machines](concepts-types-of-failback.md#alternate-location-recovery-alr) * [For Hyper-V virtual machines](hyper-v-azure-failback.md#fail-back-to-an-alternate-location)
+### What is the difference between Complete Migration, Commit and Disable Replication?
+
+Once a machine has been failed over from the source location to the target location, there are three options available for you to choose from. Each serves a different purpose -
+
+1. **Complete Migration** means that you will not go back to the source location anymore. You have migrated over to the target region and are now done. Selecting **Complete Migration** internally triggers **Commit** and then **Disable Replication**.
+2. **Commit** means that this is not the end of your replication process. The replication item along with all the configuration will remain, and you can select **Re-protect** at a later point in time to enable replication of your machines back to the source region.
+3. **Disable Replication** will disable the replication and remove all the related configuration. It won't affect the already existing machine in the target region.
+ ## Automation ### Can I automate Site Recovery scenarios with an SDK?
storsimple https://docs.microsoft.com/en-us/azure/storsimple/storsimple-configure-mpio-on-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-configure-mpio-on-linux.md
@@ -16,10 +16,6 @@ This procedure is applicable to all the models of StorSimple 8000 series devices
> [!NOTE] > This procedure cannot be used for a StorSimple Cloud Appliance. For more information, see how to configure host servers for your cloud appliance.
-> [!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-- ## About multipathing The multipathing feature allows you to configure multiple I/O paths between a host server and a storage device. These I/O paths are physical SAN connections that can include separate cables, switches, network interfaces, and controllers. Multipathing aggregates the I/O paths, to configure a new device that is associated with all of the aggregated paths.
@@ -48,7 +44,7 @@ The multipath.conf has five sections:
- **System level defaults** *(defaults)*: You can override system level defaults. - **Blacklisted devices** *(blacklist)*: You can specify the list of devices that should not be controlled by device-mapper.-- **Blacklist exceptions** *(blacklist_exceptions)*: You can identify specific devices to be treated as multipath devices even if listed in the blacklist.
+- **Blacklist exceptions** *(blacklist_exceptions)*: You can identify specific devices to be treated as multipath devices even if listed in the blocklist.
- **Storage controller specific settings** *(devices)*: You can specify configuration settings that will be applied to devices that have Vendor and Product information. - **Device specific settings** *(multipaths)*: You can use this section to fine-tune the configuration settings for individual LUNs.
@@ -209,12 +205,12 @@ The multipath-supported devices can be automatically discovered and configured.
``` ### Step 2: Configure multipathing for StorSimple volumes
-By default, all devices are black listed in the multipath.conf file and will be bypassed. You will need to create blacklist exceptions to allow multipathing for volumes from StorSimple devices.
+By default, all devices are blocklisted in the multipath.conf file and will be bypassed. You will need to create blocklist exceptions to allow multipathing for volumes from StorSimple devices.
1. Edit the `/etc/multipath.conf` file. Type: `vi /etc/multipath.conf`
-1. Locate the blacklist_exceptions section in the multipath.conf file. Your StorSimple device needs to be listed as a blacklist exception in this section. You can uncomment relevant lines in this file to modify it as shown below (use only the specific model of the device you are using):
+1. Locate the blacklist_exceptions section in the multipath.conf file. Your StorSimple device needs to be listed as a blocklist exception in this section. You can uncomment relevant lines in this file to modify it as shown below (use only the specific model of the device you are using):
```config blacklist_exceptions {
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/n-series-driver-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/n-series-driver-setup.md
@@ -18,9 +18,6 @@ If you choose to install NVIDIA GPU drivers manually, this article provides supp
For N-series VM specs, storage capacities, and disk details, see [GPU Linux VM sizes](../sizes-gpu.md?toc=/azure/virtual-machines/linux/toc.json).
-> [!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- [!INCLUDE [virtual-machines-n-series-linux-support](../../../includes/virtual-machines-n-series-linux-support.md)] ## Install CUDA drivers on N-series VMs
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-general https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-general.md
@@ -18,7 +18,7 @@ General purpose VM sizes provide balanced CPU-to-memory ratio. Ideal for testing
- The [Av2-series](av2-series.md) VMs can be deployed on a variety of hardware types and processors. A-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. The size is throttled, based upon the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories. > [!NOTE]
- > The A8 – A11 VMs are planned for retirement on 3/2021. For more information, see [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/).
+ > The A8, A9, A10, and A11 VMs are planned for retirement on 3/2021. For more information, see [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/). These VM sizes are in the original "A_v1" series, NOT "v2".
- [B-series burstable](sizes-b-series-burstable.md) VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web servers, small databases and development and test environments. These workloads typically have burstable performance requirements. The B-Series provides these customers the ability to purchase a VM size with a price conscious baseline performance that allows the VM instance to build up credits when the VM is utilizing less than its base performance. When the VM has accumulated credit, the VM can burst above the VM's baseline using up to 100% of the CPU when your application requires the higher CPU performance.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
@@ -14,7 +14,7 @@ ms.subservice: workloads
ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure-services
-ms.date: 01/18/2021
+ms.date: 01/23/2021
ms.author: juergent ms.custom: H1Hack27Feb2017
@@ -81,6 +81,7 @@ In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- 01/23/2021: Introduce the functionality of HANA data volume partitioning as functionality to stripe I/O operations against HANA data files across different Azure disks or NFS shares without using a disk volume manager in articles [SAP HANA Azure virtual machine storage configurations](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-storage) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
- 01/18/2021: Added support of Azure net Apps Files based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_oracle) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp) - 01/11/2021: Minor changes in [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to adjust commands to work for both RHEL8 and RHEL7, and ENSA1 and ENSA2 - 01/05/2021: Changes in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md), revising the recommended configuration to allow SAP Host Agent to manage the local port range
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md
@@ -13,7 +13,7 @@ ms.subservice: workloads
ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure
-ms.date: 01/18/2021
+ms.date: 01/23/2021
ms.author: juergent ms.custom: H1Hack27Feb2017
@@ -58,7 +58,13 @@ Important to understand is the performance relationship the size and that there
The table below demonstrates that it could make sense to create a large "Standard" volume to store backups and that it does not make sense to create an "Ultra" volume larger than 12 TB because the physical bandwidth capacity of a single LIF would be exceeded.
-The maximum throughput for a LIF and a single Linux session is between 1.2 and 1.4 GB/s.
+The maximum throughput for a LIF and a single Linux session is between 1.2 and 1.4 GB/s. If you require more throughput for /hana/data, you can use SAP HANA data volume partitioning to stripe the I/O activity during data reload or HANA savepoints across multiple HANA data files that are located on multiple NFS shares. For more details on HANA data volume striping read these articles:
+
+- [The HANA Administrator's Guide](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/40b2b2a880ec4df7bac16eae3daef756.html?q=hana%20data%20volume%20partitioning)
+- [Blog about SAP HANA – Partitioning Data Volumes](https://blogs.sap.com/2020/10/07/sap-hana-partitioning-data-volumes/)
+- [SAP Note #2400005](https://launchpad.support.sap.com/#/notes/2400005)
+- [SAP Note #2700123](https://launchpad.support.sap.com/#/notes/2700123)
+ | Size | Throughput Standard | Throughput Premium | Throughput Ultra | | --- | --- | --- | --- |
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
@@ -13,7 +13,7 @@ ms.subservice: workloads
ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure
-ms.date: 11/26/2020
+ms.date: 01/23/2021
ms.author: juergent ms.custom: H1Hack27Feb2017
@@ -59,11 +59,23 @@ Some guiding principles in selecting your storage configuration for HANA can be
- Decide on the type of storage based on [Azure Storage types for SAP workload](./planning-guide-storage.md) and [Select a disk type](../../disks-types.md) - The overall VM I/O throughput and IOPS limits in mind when sizing or deciding for a VM. Overall VM storage throughput is documented in the article [Memory optimized virtual machine sizes](../../sizes-memory.md) - When deciding for the storage configuration, try to stay below the overall throughput of the VM with your **/hana/data** volume configuration. Writing savepoints, SAP HANA can be aggressive issuing I/Os. It is easily possible to push up to throughput limits of your **/hana/data** volume when writing a savepoint. If your disk(s) that build the **/hana/data** volume have a higher throughput than your VM allows, you could run into situations where throughput utilized by the savepoint writing is interfering with throughput demands of the redo log writes. A situation that can impact the application throughput-- If you are using Azure premium storage, the least expensive configuration is to use logical volume managers to build stripe sets to build the **/hana/data** and **/hana/log** volumes+ > [!IMPORTANT] > The suggestions for the storage configurations are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you are not utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or in contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required and least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload. +
+## Stripe sets versus SAP HANA data volume partitioning
+With Azure premium storage, you may achieve the best price/performance ratio by striping the **/hana/data** and/or **/hana/log** volume across multiple Azure disks, rather than deploying larger disk volumes that provide more IOPS or throughput than needed. So far this has been accomplished with the LVM and MDADM volume managers, which are part of Linux. The method of striping disks is decades old and well known. As beneficial as striped volumes are for reaching the IOPS or throughput capabilities you may need, they add complexity around managing the striped volumes, especially when the volumes need to be extended in capacity. At least for **/hana/data**, SAP introduced an alternative method that achieves the same goal as striping across multiple Azure disks. Since SAP HANA 2.0 SPS03, the HANA indexserver can stripe its I/O activity across multiple HANA data files that are located on different Azure disks. The advantage is that you don't have to create and manage a striped volume across different Azure disks. The SAP HANA functionality of data volume partitioning is described in detail in:
+
+- [The HANA Administrator's Guide](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/40b2b2a880ec4df7bac16eae3daef756.html?q=hana%20data%20volume%20partitioning)
+- [Blog about SAP HANA – Partitioning Data Volumes](https://blogs.sap.com/2020/10/07/sap-hana-partitioning-data-volumes/)
+- [SAP Note #2400005](https://launchpad.support.sap.com/#/notes/2400005)
+- [SAP Note #2700123](https://launchpad.support.sap.com/#/notes/2700123)
+
+Reading through the details, it's apparent that this functionality removes the complexity of volume manager-based stripe sets. You will also notice that HANA data volume partitioning is not limited to Azure block storage, such as Azure premium storage. You can use this functionality as well to stripe across NFS shares in case these shares have IOPS or throughput limitations.
++ ## Linux I/O Scheduler mode Linux has several different I/O scheduling modes. Common recommendation through Linux vendors and SAP is to reconfigure the I/O scheduler mode for disk volumes from the **mq-deadline** or **kyber** mode to the **noop** (non-multiqueue) or **none** for (multiqueue) mode. Details are referenced in [SAP Note #1984787](https://launchpad.support.sap.com/#/notes/1984787).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/os-upgrade-hana-large-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/os-upgrade-hana-large-instance.md
@@ -22,9 +22,6 @@ This document describes the details on operating system upgrades on the HANA Lar
>[!NOTE] >The OS upgrade is customer's responsibility, Microsoft operations support can guide you to the key areas to watch out during the upgrade. You should consult your operating system vendor as well before you plan for an upgrade.
-> [!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- During HLI unit provisioning, the Microsoft operations team installs the operating system. Over the time, you are required to maintain the operating system (Example: Patching, tuning, upgrading etc.) on the HLI unit.
web-application-firewall https://docs.microsoft.com/en-us/azure/web-application-firewall/ag/application-gateway-crs-rulegroups-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
@@ -14,9 +14,6 @@ ms.topic: conceptual
Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits. This is done through rules that are defined based on the OWASP core rule sets 3.1, 3.0, or 2.2.9. These rules can be disabled on a rule-by-rule basis. This article contains the current rules and rule sets offered.
-> [!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ## Core rule sets The Application Gateway WAF comes pre-configured with CRS 3.0 by default. But you can choose to use CRS 3.1 or CRS 2.2.9 instead. CRS 3.1 offers new rule sets defending against Java infections, an initial set of file upload checks, fixed false positives, and more. CRS 3.0 offers reduced false positives compared with CRS 2.2.9. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md).
@@ -257,7 +254,7 @@ The following rule groups and rules are available when using Web Application Fir
|941150|XSS Filter - Category 5 = Disallowed HTML Attributes| |941160|NoScript XSS InjectionChecker: HTML Injection| |941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
+|941180|Node-Validator Blocklist Keywords|
|941190|XSS using style sheets| |941200|XSS using VML frames| |941210|XSS using obfuscated Javascript|
@@ -485,7 +482,7 @@ The following rule groups and rules are available when using Web Application Fir
|941130|XSS Filter - Category 3 = Attribute Vector| |941140|XSS Filter - Category 4 = Javascript URI Vector| |941150|XSS Filter - Category 5 = Disallowed HTML Attributes|
-|941180|Node-Validator Blacklist Keywords|
+|941180|Node-Validator Blocklist Keywords|
|941190|XSS using style sheets| |941200|XSS using VML frames| |941210|XSS using obfuscated Javascript|