Updates from: 07/29/2021 03:08:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/javascript-and-page-layout.md
zone_pivot_groups: b2c-policy-type
-# JavaScript and page layout versions in Azure Active Directory B2C
+# Enable JavaScript and page layout versions in Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
Azure AD B2C provides a set of packaged content containing HTML, CSS, and JavaSc
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
-## Select a page layout version
+## Begin setting up a page layout version
If you intend to enable JavaScript client-side code, the elements you base your JavaScript on must be immutable. If they're not immutable, any changes could cause unexpected behavior on your user pages. To prevent these issues, enforce the use of a page layout and specify a page layout version to ensure the content definitions you've based your JavaScript on are immutable. Even if you don't intend to enable JavaScript, you can specify a page layout version for your pages.
For information about the different page layout versions, see the [Page layout v
::: zone pivot="b2c-custom-policy"
-Select a [page layout](contentdefinitions.md#select-a-page-layout) for the user interface elements of your application.
+To specify a page layout version for your custom policy pages:
-Define a [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. Learn how to [Migrating to page layout](contentdefinitions.md#migrating-to-page-layout) with page version.
+1. Select a [page layout](contentdefinitions.md#select-a-page-layout) for the user interface elements of your application.
+1. Define a [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_.
The following example shows the content definition identifiers and the corresponding **DataUri** with page contract:
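A minimal sketch of one such content definition follows; the element name and version number here are illustrative, so substitute the page layout version that applies to your policy:

```xml
<ContentDefinition Id="api.signuporsignin">
  <!-- The DataUri must contain the word "contract" and pin an explicit page layout version. -->
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2</DataUri>
</ContentDefinition>
```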
You enable script execution by adding the **ScriptExecution** element to the [Re
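A minimal sketch of where that element lives, assuming the standard custom policy schema (the user journey name is a placeholder):

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <UserJourneyBehaviors>
    <!-- Allow client-side JavaScript to run on pages rendered by this policy. -->
    <ScriptExecution>Allow</ScriptExecution>
  </UserJourneyBehaviors>
  <!-- TechnicalProfile element omitted for brevity -->
</RelyingParty>
```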
Follow these guidelines when you customize the interface of your application using JavaScript:
-- Don't bind a click event on `<a>` HTML elements.
-- Don't take a dependency on Azure AD B2C code or comments.
-- Don't change the order or hierarchy of Azure AD B2C HTML elements. Use an Azure AD B2C policy to control the order of the UI elements.
+- Don't
+ - bind a click event on `<a>` HTML elements.
+ - take a dependency on Azure AD B2C code or comments.
+ - change the order or hierarchy of Azure AD B2C HTML elements. Use an Azure AD B2C policy to control the order of the UI elements.
- You can call any RESTful service with these considerations:
  - You may need to set your RESTful service CORS to allow client-side HTTP calls.
  - Make sure your RESTful service is secure and uses only the HTTPS protocol.
In the code, replace `termsOfUseUrl` with the link to your terms of use agreemen
## Next steps
-Find more information about how you can customize the user interface of your applications in [Customize the user interface of your application in Azure Active Directory B2C](customize-ui-with-html.md).
+Find more information about how to [Customize the user interface of your application in Azure Active Directory B2C](customize-ui-with-html.md).
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
The Azure portal is a convenient way to configure provisioning for individual ap
1. Upon successful sign-in, you'll see the user account details in the left-hand pane.
### Retrieve the gallery application template identifier
-Applications in the Azure AD application gallery each have an [application template](/graph/api/applicationtemplate-list?tabs=http&view=graph-rest-beta) that describes the metadata for that application. Using this template, you can create an instance of the application and service principal in your tenant for management. Retrieve the identifier of the application template for **AWS Single-Account Access** and from the response, record the value of the **id** property to use later in this tutorial.
+Applications in the Azure AD application gallery each have an [application template](/graph/api/applicationtemplate-list?tabs=http&view=graph-rest-beta&preserve-view=true) that describes the metadata for that application. Using this template, you can create an instance of the application and service principal in your tenant for management. Retrieve the identifier of the application template for **AWS Single-Account Access** and from the response, record the value of the **id** property to use later in this tutorial.
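In full, the filter request for this step looks like the following sketch (the display name value is the gallery name quoted above; the beta endpoint is assumed):

```http
GET https://graph.microsoft.com/beta/applicationTemplates?$filter=displayName eq 'AWS Single-Account Access'
```

From the response, record the **id** of the returned template; it is used to instantiate the application in the next step.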
#### Request
GET https://graph.microsoft.com/beta/applicationTemplates?$filter=displayName eq
```http
HTTP/1.1 200 OK
Content-type: application/json
+
{
  "value": [
    {
Content-type: application/json
### Create the gallery application
-Use the template ID retrieved for your application in the last step to [create an instance](/graph/api/applicationtemplate-instantiate?tabs=http&view=graph-rest-beta) of the application and service principal in your tenant.
+Use the template ID retrieved for your application in the last step to [create an instance](/graph/api/applicationtemplate-instantiate?tabs=http&view=graph-rest-beta&preserve-view=true) of the application and service principal in your tenant.
#### Request
Use the template ID retrieved for your application in the last step to [create a
```msgraph-interactive
POST https://graph.microsoft.com/beta/applicationTemplates/{id}/instantiate
Content-type: application/json
+
{
  "displayName": "AWS Contoso"
}
Content-type: application/json
```http
HTTP/1.1 201 OK
Content-type: application/json
+
{
  "application": {
    "objectId": "cbc071a6-0fa5-4859-8g55-e983ef63df63",
Content-type: application/json
### Retrieve the template for the provisioning connector
-Applications in the gallery that are enabled for provisioning have templates to streamline configuration. Use the request below to [retrieve the template for the provisioning configuration](/graph/api/synchronization-synchronizationtemplate-list?tabs=http&view=graph-rest-beta). Note that you will need to provide the ID. The ID refers to the preceding resource, which in this case is the servicePrincipal resource.
+Applications in the gallery that are enabled for provisioning have templates to streamline configuration. Use the request below to [retrieve the template for the provisioning configuration](/graph/api/synchronization-synchronizationtemplate-list?tabs=http&view=graph-rest-beta&preserve-view=true). Note that you will need to provide the ID. The ID refers to the preceding resource, which in this case is the servicePrincipal resource.
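A sketch of the complete call, assuming the beta synchronization templates path, where `{id}` is the object ID of the service principal created earlier:

```http
GET https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/templates
```

Record the template's **id** from the response (for this app it is `aws`); that value is the `templateId` used when creating the provisioning job.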
#### Request
GET https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/temp
#### Response
```http
HTTP/1.1 200 OK
+
{
  "value": [
    {
HTTP/1.1 200 OK
```
### Create the provisioning job
-To enable provisioning, you'll first need to [create a job](/graph/api/synchronization-synchronizationjob-post?tabs=http&view=graph-rest-beta). Use the following request to create a provisioning job. Use the templateId from the previous step when specifying the template to be used for the job.
+To enable provisioning, you'll first need to [create a job](/graph/api/synchronization-synchronizationjob-post?tabs=http&view=graph-rest-beta&preserve-view=true). Use the following request to create a provisioning job. Use the templateId from the previous step when specifying the template to be used for the job.
#### Request
```msgraph-interactive
POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs
Content-type: application/json
+
{
  "templateId": "aws"
}
Content-type: application/json
```http
HTTP/1.1 201 OK
Content-type: application/json
+
{
  "id": "{jobId}",
  "templateId": "aws",
Content-type: application/json
### Test the connection to the application
-Test the connection with the third-party application. The following example is for an application that requires a client secret and secret token. Each application has its own requirements. Applications often use a base address in place of a client secret. To determine what credentials your app requires, go to the provisioning configuration page for your application, and in developer mode, click **test connection**. The network traffic will show the parameters used for credentials. For a full list of credentials, see [synchronizationJob: validateCredentials](/graph/api/synchronization-synchronizationjob-validatecredentials?tabs=http&view=graph-rest-beta). Most applications, such as Azure Databricks, rely on a BaseAddress and SecretToken. The BaseAddress is referred to as a tenant URL in the Azure portal.
+Test the connection with the third-party application. The following example is for an application that requires a client secret and secret token. Each application has its own requirements. Applications often use a base address in place of a client secret. To determine what credentials your app requires, go to the provisioning configuration page for your application, and in developer mode, click **test connection**. The network traffic will show the parameters used for credentials. For a full list of credentials, see [synchronizationJob: validateCredentials](/graph/api/synchronization-synchronizationjob-validatecredentials?tabs=http&view=graph-rest-beta&preserve-view=true). Most applications, such as Azure Databricks, rely on a BaseAddress and SecretToken. The BaseAddress is referred to as a tenant URL in the Azure portal.
#### Request
```msgraph-interactive
POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{id}/validateCredentials
+
{
  "credentials": [
- { "key": "ClientSecret", "value": "xxxxxxxxxxxxxxxxxxxxx" },
- { "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx" }
+ {
+ "key": "ClientSecret", "value": "xxxxxxxxxxxxxxxxxxxxx"
+ },
+ {
+ "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx"
+ }
  ]
}
```
HTTP/1.1 204 No Content
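For an application that relies on a BaseAddress and SecretToken instead of a client secret, as described above, the same call might look like the following sketch; the tenant URL value is a placeholder:

```http
POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{id}/validateCredentials

{
  "credentials": [
    { "key": "BaseAddress", "value": "https://app.contoso.com/scim" },
    { "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx" }
  ]
}
```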
### Save your credentials
-Configuring provisioning requires establishing a trust between Azure AD and the application. Authorize access to the third-party application. The following example is for an application that requires a client secret and a secret token. Each application has its own requirements. Review the [API documentation](/graph/api/synchronization-synchronizationjob-validatecredentials?tabs=http&view=graph-rest-beta) to see the available options.
+Configuring provisioning requires establishing a trust between Azure AD and the application. Authorize access to the third-party application. The following example is for an application that requires a client secret and a secret token. Each application has its own requirements. Review the [API documentation](/graph/api/synchronization-synchronizationjob-validatecredentials?tabs=http&view=graph-rest-beta&preserve-view=true) to see the available options.
#### Request
```msgraph-interactive
PUT https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/secr
{ "value": [
- { "key": "ClientSecret", "value": "xxxxxxxxxxxxxxxxxxxxx" },
- { "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx" }
+ {
+ "key": "ClientSecret", "value": "xxxxxxxxxxxxxxxxxxxxx"
+ },
+ {
+ "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx"
+ }
  ]
}
```
HTTP/1.1 204 No Content
```
## Step 4: Start the provisioning job
-Now that the provisioning job is configured, use the following command to [start the job](/graph/api/synchronization-synchronizationjob-start?tabs=http&view=graph-rest-beta).
+Now that the provisioning job is configured, use the following command to [start the job](/graph/api/synchronization-synchronizationjob-start?tabs=http&view=graph-rest-beta&preserve-view=true).
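A minimal sketch of that call, assuming the `start` action of the linked beta API, where `{jobId}` comes from the job-creation response:

```http
POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{jobId}/start
```

A successful start returns `HTTP/1.1 204 No Content`.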
#### Request
GET https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs
```http
HTTP/1.1 200 OK
Content-type: application/json
-Content-length: 2577
+
{
  "id": "{jobId}",
  "templateId": "aws",
Content-length: 2577
### Monitor provisioning events using the provisioning logs
-In addition to monitoring the status of the provisioning job, you can use the [provisioning logs](/graph/api/provisioningobjectsummary-list?tabs=http&view=graph-rest-beta) to query for all the events that are occurring. For example, query for a particular user and determine if they were successfully provisioned.
+In addition to monitoring the status of the provisioning job, you can use the [provisioning logs](/graph/api/provisioningobjectsummary-list?tabs=http&view=graph-rest-beta&preserve-view=true) to query for all the events that are occurring. For example, query for a particular user and determine if they were successfully provisioned.
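The endpoint accepts OData query parameters; as a sketch, assuming `$filter` on `jobId` is supported in your tenant, you could scope the results to the job created earlier:

```http
GET https://graph.microsoft.com/beta/auditLogs/provisioning?$filter=jobId eq '{jobId}'
```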
#### Request
```msgraph-interactive
GET https://graph.microsoft.com/beta/auditLogs/provisioning
```http
HTTP/1.1 200 OK
Content-type: application/json
+
{
  "@odata.context": "https://graph.microsoft.com/beta/$metadata#auditLogs/provisioning",
  "value": [
Content-type: application/json
```
## See also
-- [Review the synchronization Microsoft Graph documentation](/graph/api/resources/synchronization-overview?view=graph-rest-beta)
+- [Review the synchronization Microsoft Graph documentation](/graph/api/resources/synchronization-overview?view=graph-rest-beta&preserve-view=true)
- [Integrating a custom SCIM app with Azure AD](./use-scim-to-provision-users-and-groups.md)
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-authenticator-app.md
The Microsoft Authenticator app provides an additional level of security to your Azure AD work or school account or your Microsoft account and is available for [Android](https://go.microsoft.com/fwlink/?linkid=866594) and [iOS](https://go.microsoft.com/fwlink/?linkid=866594). With the Microsoft Authenticator app, users can authenticate in a passwordless way during sign-in, or as an additional verification option during self-service password reset (SSPR) or Azure AD Multi-Factor Authentication events.
-Users may receive a notification through the mobile app for them to approve or deny, or use the Authenticator app to generate an OAUTH verification code that can be entered in a sign-in interface. If you enable both a notification and verification code, users who register the Authenticator app can use either method to verify their identity.
+Users may receive a notification through the mobile app for them to approve or deny, or use the Authenticator app to generate an OATH verification code that can be entered in a sign-in interface. If you enable both a notification and verification code, users who register the Authenticator app can use either method to verify their identity.
To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Authenticator app](howto-authentication-passwordless-phone.md).
active-directory Concept Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-writeback.md
Previously updated : 07/14/2020 Last updated : 07/28/2021
When a federated or password hash synchronized user attempts to reset or change
* If the writeback service is down, the user is informed that their password can't be reset right now.
1. Next, the user passes the appropriate authentication gates and reaches the **Reset password** page.
1. The user selects a new password and confirms it.
-1. When the user selects **Submit**, the plaintext password is encrypted with a symmetric key created during the writeback setup process.
+1. When the user selects **Submit**, the plaintext password is encrypted with a public key created during the writeback setup process.
1. The encrypted password is included in a payload that gets sent over an HTTPS channel to your tenant-specific service bus relay (that is set up for you during the writeback setup process). This relay is protected by a randomly generated password that only your on-premises installation knows.
1. After the message reaches the service bus, the password-reset endpoint automatically wakes up and sees that it has a reset request pending.
1. The service then looks for the user by using the cloud anchor attribute. For this lookup to succeed, the following conditions must be met:
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
People who enabled phone sign-in from the Microsoft Authenticator app see a mess
To use passwordless phone sign-in with the Microsoft Authenticator app, the following prerequisites must be met:
-- Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Azure Multi-Factor Auth Connector must be enabled to allow users to register for push notifications for phone sign-in.
+- Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when it's set up for push notifications, so a user has a backup sign-in method even if their device doesn't have connectivity.
+
+ Azure Multi-Factor Auth Connector must be enabled to allow users to register for push notifications for phone sign-in.
![Screenshot of Azure Multi-Factor Auth Connector enabled.](media/howto-authentication-passwordless-phone/connector.png)
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Previously updated : 06/24/2021 Last updated : 07/28/2021
Users in scope for these policies will get redirected to the [Interrupt mode of
- If **Temporary Access Pass sign in was blocked due to User Credential Policy** appears during sign-in with a Temporary Access Pass, check the following:
  - The user has a multi-use Temporary Access Pass while the authentication method policy requires a one-time Temporary Access Pass.
  - A one-time Temporary Access Pass was already used.
+- If Temporary Access Pass sign-in was blocked due to User Credential Policy, check that the user is in scope for the TAP policy.
## Next steps
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-windows.md
The following limitations apply to using SSPR from the Windows sign-in screen:
- *HideFastUserSwitching* is set to enabled or 1
- *DontDisplayLastUserName* is set to enabled or 1
- *NoLockScreen* is set to enabled or 1
+ - *BlockNonAdminUserInstall* is set to enabled or 1
- *EnableLostMode* is set on the device
- Explorer.exe is replaced with a custom shell
- The combination of the following specific three settings can cause this feature to not work.
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Previously updated : 04/20/2021 Last updated : 07/28/2021
# Conditional Access: Securing security info registration
-Securing when and how users register for Azure AD Multi-Factor Authentication and self-service password reset is possible with user actions in a Conditional Access policy. This feature is available to organizations who have enabled the [combined registration](../authentication/concept-registration-mfa-sspr-combined.md). This functionality allows organizations to treat the registration process like any application in a Conditional Access policy and use the full power of Conditional Access to secure the experience.
+Securing when and how users register for Azure AD Multi-Factor Authentication and self-service password reset is possible with user actions in a Conditional Access policy. This feature is available to organizations who have enabled the [combined registration](../authentication/concept-registration-mfa-sspr-combined.md). This functionality allows organizations to treat the registration process like any application in a Conditional Access policy and use the full power of Conditional Access to secure the experience. Users signing in to the Microsoft Authenticator app or enabling passwordless phone sign-in are subject to this policy.
Some organizations in the past may have used trusted network location or device compliance as a means to secure the registration experience. With the addition of [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) in Azure AD, administrators can provision time-limited credentials to their users that allow them to register from any device or location. Temporary Access Pass credentials satisfy Conditional Access requirements for multi-factor authentication.
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-plan.md
If your Windows 10 domain joined devices are [Azure AD registered](concept-azure
To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the SCP configuration is done on the devices and not in AD. More details on how to accomplish this can be found in the article [controlled validation of hybrid Azure AD join](hybrid-azuread-join-control.md). It is also important for organizations to understand that certain Azure AD capabilities will not work in a single forest, multiple Azure AD tenants configurations.
- [Device writeback](../hybrid/how-to-connect-device-writeback.md) will not work. This affects [Device based Conditional Access for on-premise apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
- [Groups writeback](../hybrid/how-to-connect-group-writeback.md) will not work. This affects writeback of Office 365 Groups to a forest with Exchange installed.
-- [Seamless SSO](../hybrid/how-to-connect-sso.md) will not work. This affects SSO scenarios that organizations may be using on cross OS/broowser platforms, for example iOS/Linux with Firefox, Safari, Chrome without the Windows 10 extension.
+- [Seamless SSO](../hybrid/how-to-connect-sso.md) will not work. This affects SSO scenarios that organizations may be using on cross OS/browser platforms, for example iOS/Linux with Firefox, Safari, Chrome without the Windows 10 extension.
- [Hybrid Azure AD join for Windows down-level devices in managed environment](./hybrid-azuread-join-managed-domains.md#enable-windows-down-level-devices) will not work. For example, hybrid Azure AD join on Windows Server 2012 R2 in a managed environment requires Seamless SSO and since Seamless SSO will not work, hybrid Azure AD join for such a setup will not work.
- [On-premises Azure AD Password Protection](../authentication/concept-password-ban-bad-on-premises.md) will not work. This affects the ability to perform password changes and password reset events against on-premises Active Directory Domain Services (AD DS) domain controllers using the same global and custom banned password lists that are stored in Azure AD.
The table below provides details on support for these on-premises AD UPNs in Win
> [Configure hybrid Azure Active Directory join for managed environment](hybrid-azuread-join-managed-domains.md) <!--Image references-->
-[1]: ./media/hybrid-azuread-join-plan/12.png
+[1]: ./media/hybrid-azuread-join-plan/12.png
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
For more information about how to better secure your organization by using autom
In December 2020 we have added following 18 new applications in our App gallery with Federation support:
-[AwareGo](../saas-apps/awarego-tutorial.md), [HowNow SSO](https://gethownow.com/), [ZyLAB ONE Legal Hold](https://www.zylab.com/en/product/legal-hold), [Guider](http://www.guider-ai.com/), [Softcrisis](https://www.softcrisis.se/sv/), [Pims 365](http://www.omega365.com/pims), [InformaCast](../saas-apps/informacast-tutorial.md), [RetrieverMediaDatabase](../saas-apps/retrievermediadatabase-tutorial.md), [vonage](../saas-apps/vonage-tutorial.md), [Count Me In - Operations Dashboard](../saas-apps/count-me-in-operations-dashboard-tutorial.md), [ProProfs Knowledge Base](../saas-apps/proprofs-knowledge-base-tutorial.md), [RightCrowd Workforce Management](../saas-apps/rightcrowd-workforce-management-tutorial.md), [JLL TRIRIGA](../saas-apps/jll-tririga-tutorial.md), [Shutterstock](../saas-apps/shutterstock-tutorial.md), [FortiWeb Web Application Firewall](../saas-apps/linkedin-talent-solutions-tutorial.md), [LinkedIn Talent Solutions](../saas-apps/linkedin-talent-solutions-tutorial.md), [Equinix Federation App](../saas-apps/equinix-federation-app-tutorial.md), [KFAdvance](../saas-apps/kfadvance-tutorial.md)
+[AwareGo](../saas-apps/awarego-tutorial.md), [HowNow SSO](https://gethownow.com/), [ZyLAB ONE Legal Hold](https://www.zylab.com/en/product/legal-hold), [Guider](http://www.guider-ai.com/), [Softcrisis](https://www.softcrisis.se/sv/), [Pims 365](https://www.omega365.com/products/omega-pims), [InformaCast](../saas-apps/informacast-tutorial.md), [RetrieverMediaDatabase](../saas-apps/retrievermediadatabase-tutorial.md), [vonage](../saas-apps/vonage-tutorial.md), [Count Me In - Operations Dashboard](../saas-apps/count-me-in-operations-dashboard-tutorial.md), [ProProfs Knowledge Base](../saas-apps/proprofs-knowledge-base-tutorial.md), [RightCrowd Workforce Management](../saas-apps/rightcrowd-workforce-management-tutorial.md), [JLL TRIRIGA](../saas-apps/jll-tririga-tutorial.md), [Shutterstock](../saas-apps/shutterstock-tutorial.md), [FortiWeb Web Application Firewall](../saas-apps/linkedin-talent-solutions-tutorial.md), [LinkedIn Talent Solutions](../saas-apps/linkedin-talent-solutions-tutorial.md), [Equinix Federation App](../saas-apps/equinix-federation-app-tutorial.md), [KFAdvance](../saas-apps/kfadvance-tutorial.md)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial
A hotfix roll-up package (build 4.4.1642.0) is available as of September 25, 201
For more information, see [Hotfix rollup package (build 4.4.1642.0) is available for Identity Manager 2016 Service Pack 1](https://support.microsoft.com/help/4021562). -+
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
Previously updated : 06/03/2021 Last updated : 07/27/2021
If you have been made *eligible* for an administrative role, then you must *acti
This article is for administrators who need to activate their Azure AD role in Privileged Identity Management.
-> [!TIP]
-> You can use the shortcut URL [AKA.MS/PIM](https://aka.ms/PIM) to jump straight to the Azure AD roles selection page.
-
-# [New version](#tab/new)
-
-## Activate a role for new version
+## Activate a role
When you need to assume an Azure AD role, you can request activation by opening **My roles** in Privileged Identity Management.
active-directory Reports Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/reports-faq.md
na
ms.devlang: na Previously updated : 05/12/2020 Last updated : 07/28/2021 -+
This article includes answers to frequently asked questions about Azure Active D
## Getting started
+**Q: How does licensing for reporting work?**
+
+**A:** All Azure AD licenses allow you to see activity logs in the Azure portal.
+
+If your tenant has:
+
+- An Azure AD free license, you can see up to seven days of activity log data in the Azure portal.
+- An Azure AD Premium license, you can see up to 30 days of data in the Azure portal.
+
+You can also export that log data to Azure Monitor, Azure Event Hubs, and Azure Storage, or query your activity data through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. After you upgrade to a premium license, it takes a couple of days for data to show up in the logs, and no activity data from before the upgrade is shown.
++
**Q: I currently use the `https://graph.windows.net/<tenant-name>/reports/` endpoint APIs to pull Azure AD audit and integrated application usage reports into our reporting systems programmatically. What should I switch to?**

**A:** Look up the [API reference](https://developer.microsoft.com/graph/) to see how you can [use the APIs to access activity reports](concept-reporting-api.md). This endpoint has two reports (**Audit** and **Sign-ins**) which provide all the data you got in the old API endpoint. This new endpoint also has a sign-ins report with the Azure AD Premium license that you can use to get app usage, device usage, and user sign-in information.
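As a sketch of the two Microsoft Graph reports mentioned in the answer (shown here against the v1.0 endpoint):

```http
GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits
GET https://graph.microsoft.com/v1.0/auditLogs/signIns
```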
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
na Previously updated : 06/23/2021 Last updated : 07/28/2021 -+ # Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an event hub so I can integrate it with my third party SIEM system.
After data is displayed in the event hub, you can access and read the data in tw
* [Integrate Azure Active Directory logs with ArcSight using Azure Monitor](howto-integrate-activity-logs-with-arcsight.md)
* [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md)
* [Integrate Azure AD logs with SumoLogic by using Azure Monitor](howto-integrate-activity-logs-with-sumologic.md)
+* [Integrate Azure AD logs with Elastic using an event hub](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Bynder Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bynder-tutorial.md
Previously updated : 02/03/2021 Last updated : 07/27/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Identifier** text box, type a URL using the following pattern: For a Default Domain:
- `https://<company name>.getbynder.com`
+ `https://<COMPANY_NAME>.bynder.com`
For a Custom Domain:
- `https://<subdomain>.<domain>.com`
+ `https://<SUBDOMAIN>.<DOMAIN>.com`
b. In the **Reply URL** text box, type a URL using the following pattern: For a Default Domain:
- `https://<company name>.getbynder.com/sso/SAML/authenticate/`
+ `https://<COMPANY_NAME>.bynder.com/sso/SAML/authenticate/`
For a Custom Domain:
- `https://<subdomain>.<domain>.com/sso/SAML/authenticate/`
+ `https://<SUBDOMAIN>.<DOMAIN>.com/sso/SAML/authenticate/`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern: For a Default Domain:
- `https://<company name>.getbynder.com/login/`
+ `https://<COMPANY_NAME>.bynder.com/login/`
For a Custom Domain:
- ` https://<subdomain>.<domain>.com/login/`
+ ` https://<SUBDOMAIN>.<DOMAIN>.com/login/`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Bynder Client support team](https://www.bynder.com/en/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
active-directory Clockwork Recruiting Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/clockwork-recruiting-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Clockwork Recruiting | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Clockwork Recruiting.
++++++++ Last updated : 07/26/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Clockwork Recruiting
+
+In this tutorial, you'll learn how to integrate Clockwork Recruiting with Azure Active Directory (Azure AD). When you integrate Clockwork Recruiting with Azure AD, you can:
+
+* Control in Azure AD who has access to Clockwork Recruiting.
+* Enable your users to be automatically signed-in to Clockwork Recruiting with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Clockwork Recruiting single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Clockwork Recruiting supports **SP** initiated SSO.
+
+## Add Clockwork Recruiting from the gallery
+
+To configure the integration of Clockwork Recruiting into Azure AD, you need to add Clockwork Recruiting from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Clockwork Recruiting** in the search box.
+1. Select **Clockwork Recruiting** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Clockwork Recruiting
+
+Configure and test Azure AD SSO with Clockwork Recruiting using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Clockwork Recruiting.
+
+To configure and test Azure AD SSO with Clockwork Recruiting, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Clockwork Recruiting SSO](#configure-clockwork-recruiting-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Clockwork Recruiting test user](#create-clockwork-recruiting-test-user)** - to have a counterpart of B.Simon in Clockwork Recruiting that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Clockwork Recruiting** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ 1. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.clockworkrecruiting.com/sp`
+
+ 1. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.clockworkrecruiting.com/session/new`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Clockwork Recruiting Client support team](mailto:support@clockworkrecruiting.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Clockwork Recruiting.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Clockwork Recruiting**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Clockwork Recruiting SSO
+
+To configure single sign-on on **Clockwork Recruiting** side, you need to send the **App Federation Metadata Url** to [Clockwork Recruiting support team](mailto:support@clockworkrecruiting.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Clockwork Recruiting test user
+
+In this section, you create a user called Britta Simon in Clockwork Recruiting. Work with [Clockwork Recruiting support team](mailto:support@clockworkrecruiting.com) to add the users in the Clockwork Recruiting platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Select **Test this application** in the Azure portal. You're redirected to the Clockwork Recruiting Sign-on URL where you can initiate the login flow.
+* Go to the Clockwork Recruiting Sign-on URL directly and initiate the login flow from there.
+* You can use Microsoft My Apps. When you select the Clockwork Recruiting tile in My Apps, you're redirected to the Clockwork Recruiting Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+After you configure Clockwork Recruiting you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cloudtamer Io Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cloudtamer-io-tutorial.md
Previously updated : 07/21/2021 Last updated : 07/26/2021
To configure and test Azure AD SSO with cloudtamer.io, perform the following ste
![Screenshot for IDMS create.](./media/cloudtamer-io-tutorial/idms-creation.png)
+1. Select **SAML 2.0** as the IDMS Type.
+
1. Leave this screen open and copy values from this screen into the Azure AD configuration.
## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, paste the **IDENTITY PROVIDER ISSUER (ENTITY ID)** from cloudtamer.io into this box.
+ a. In the **Identifier** text box, paste the **SERVICE PROVIDER ISSUER (ENTITY ID)** from cloudtamer.io into this box.
b. In the **Reply URL** text box, paste the **SERVICE PROVIDER ACS URL** from cloudtamer.io into this box.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<CUSTOMERDOMAIN>.<EXTENSION>/login`
-
- > [!NOTE]
- > The value is not real. Update the value with the actual Sign-on URL. You can refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ In the **Sign-on URL** text box, paste the **SERVICE PROVIDER ACS URL** from cloudtamer.io into this box.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot for IDMS adding.](./media/cloudtamer-io-tutorial/configuration.png)
- a. Select **SAML2.0** as **IDMS TYPE** from the dropdown.
-
- b. In the **IDMS Name** give a name that the users will recognize from the Login screen.
+ a. In the **IDMS Name** field, give a name that users will recognize from the Login screen.
- c. In the **IDENTITY PROVIDER ISSUER (ENTITY ID)** textbox, paste the **Identifier** value which you have copied from the Azure portal.
+ b. In the **IDENTITY PROVIDER ISSUER (ENTITY ID)** textbox, paste the **Identifier** value which you have copied from the Azure portal.
- d. Open the downloaded **Federation Metadata XML** from the Azure portal into Notepad and paste the content into the **IDENTITY PROVIDER METADATA** textbox.
+ c. Open the downloaded **Federation Metadata XML** from the Azure portal into Notepad and paste the content into the **IDENTITY PROVIDER METADATA** textbox.
- e. Copy **SERVICE PROVIDER ISSUER (ENTITY ID)** value, paste this value into the **Identifier** text box in the Basic SAML Configuration section in the Azure portal.
+ d. Copy **SERVICE PROVIDER ISSUER (ENTITY ID)** value, paste this value into the **Identifier** text box in the Basic SAML Configuration section in the Azure portal.
- f. Copy **SERVICE PROVIDER ACS URL** value, paste this value into the **Reply URL** text box in the Basic SAML Configuration section in the Azure portal.
+ e. Copy **SERVICE PROVIDER ACS URL** value, paste this value into the **Reply URL** text box in the Basic SAML Configuration section in the Azure portal.
- g. Under Assertion Mapping, enter the following values:
+ f. Under Assertion Mapping, enter the following values:
| Field | Value |
|--|-|
active-directory Iprova Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iprova-provisioning-tutorial.md
# Tutorial: Configure iProva for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in iProva and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [iProva](https://www.iProva.com/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+The objective of this tutorial is to demonstrate the steps to be performed in iProva and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [iProva](https://www.iProva.com/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). Before you attempt to use this tutorial, be sure that you know and meet all requirements. If you have questions, please contact Infoland.
> [!NOTE]
> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The objective of this tutorial is to demonstrate the steps to be performed in iP
## Capabilities supported
> [!div class="checklist"]
> * Create users in iProva
-> * Remove users in iProva when they do not require access anymore
+> * Remove/disable users in iProva when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and iProva
> * Provision groups and group memberships in iProva
> * [Single sign-on](./iprova-tutorial.md) to iProva (recommended)
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL and Permanent Token** values retrieved earlier in the **Tenant URL** and **Secret Token** fields respectively. Click **Test Connection** to ensure Azure AD can connect to iProva. If the connection fails, ensure your iProva account has Admin permissions and try again.
+5. In the **Admin Credentials** section, enter the **SCIM 2.0 base URL** retrieved earlier in the **Tenant URL** field and append /scim/ to it. Then enter the **Secret Token**; you can generate a secret token in iProva by using the **permanent token** button. Click **Test Connection** to ensure Azure AD can connect to iProva. If the connection fails, ensure your iProva account has Admin permissions and try again.
![Tenant URL + Token](common/provisioning-testconnection-tenanturltoken.png)
This section guides you through the steps to configure the Azure AD provisioning
|||
|active|Boolean|
|displayName|String|
- |title|String|
|emails[type eq "work"].value|String|
|preferredLanguage|String|
|userName|String|
- |addresses[type eq "work"].country|String|
- |addresses[type eq "work"].locality|String|
- |addresses[type eq "work"].postalCode|String|
- |addresses[type eq "work"].formatted|String|
- |addresses[type eq "work"].region|String|
- |addresses[type eq "work"].streetAddress|String|
- |addresses[type eq "other"].formatted|String|
- |name.givenName|String|
- |name.familyName|String|
- |name.formatted|String|
- |phoneNumbers[type eq "fax"].value|String|
- |phoneNumbers[type eq "mobile"].value|String|
|phoneNumbers[type eq "work"].value|String|
|externalId|String|
- |roles[primary eq "True"].display|String|
- |roles[primary eq "True"].type|String|
- |roles[primary eq "True"].value|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to iProva**.
This section guides you through the steps to configure the Azure AD provisioning
|||
|displayName|String|
|members|Reference|
+ |externalID|String|
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory True Office Learning Lio Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/true-office-learning-lio-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with True Office Learning - LIO | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and True Office Learning - LIO.
++++++++ Last updated : 07/26/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with True Office Learning - LIO
+
+In this tutorial, you'll learn how to integrate True Office Learning - LIO with Azure Active Directory (Azure AD). When you integrate True Office Learning - LIO with Azure AD, you can:
+
+* Control in Azure AD who has access to True Office Learning - LIO.
+* Enable your users to be automatically signed-in to True Office Learning - LIO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* True Office Learning - LIO single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* True Office Learning - LIO supports **SP** initiated SSO.
+
+## Add True Office Learning - LIO from the gallery
+
+To configure the integration of True Office Learning - LIO into Azure AD, you need to add True Office Learning - LIO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **True Office Learning - LIO** in the search box.
+1. Select **True Office Learning - LIO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for True Office Learning - LIO
+
+Configure and test Azure AD SSO with True Office Learning - LIO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in True Office Learning - LIO.
+
+To configure and test Azure AD SSO with True Office Learning - LIO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure True Office Learning - LIO SSO](#configure-true-office-learninglio-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create True Office Learning - LIO test user](#create-true-office-learninglio-test-user)** - to have a counterpart of B.Simon in True Office Learning - LIO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **True Office Learning - LIO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ 1. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://learn-sso.trueoffice.com/simplesaml/module.php/saml/sp/metadata.php/<CUSTOMER_NAME>-sp`
+
+ 1. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://learn-sso.trueoffice.com/<CUSTOMER_NAME>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [True Office Learning - LIO Client support team](mailto:service@trueoffice.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up True Office Learning - LIO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to True Office Learning - LIO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **True Office Learning - LIO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure True Office Learning - LIO SSO
+
+Please contact [True Office Learning - LIO support team](mailto:service@trueoffice.com) with configuration questions and/or to request a copy of the metadata.
+In your request please provide the following information:
+* Company Name.
+* CompanyID (if known).
+* Existing or new configuration.
+
+### Create True Office Learning - LIO test user
+
+In this section, you create a user called Britta Simon in True Office Learning - LIO. Work with [True Office Learning - LIO support team](mailto:service@trueoffice.com) to add the users in the True Office Learning - LIO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration by using the following options:
+
+* Select **Test this application** in the Azure portal. You're redirected to the True Office Learning - LIO Sign-on URL where you can initiate the login flow.
+* Go to the True Office Learning - LIO Sign-on URL directly, and initiate the login flow from that site.
+* You can use Microsoft My Apps. When you select the True Office Learning - LIO tile in My Apps, you're redirected to the True Office Learning - LIO Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+After you configure True Office Learning - LIO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Multi Factor Authentication Change Sms Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/multi-factor-authentication-change-sms-phone.md
+
+ Title: Common problems using text message two-step verification - Azure Active Directory | Microsoft Docs
+description: Learn how to set up a different mobile device as your two-factor verification method.
+Last updated: 07/28/2021
+# Common problems with text message two-step verification
+
+Receiving a verification code in a text message is a common method for two-step verification. If you didn't expect to receive a code, or if you received a code on the wrong phone, use the following steps to fix the problem.
+
+> [!Note]
+> If your organization doesn't allow you to receive a text message for verification, you'll need to select another method or contact your administrator for more help.
+
+## If you received the code on the wrong phone
+
+1. Sign in to **My Security Info** to manage your security info.
+
+1. On the **Security info** page, select the phone number that you want to change in your list of registered authentication methods, and then select **Change**.
+
+1. Select your country or region for your new number, and then enter your mobile device phone number.
+
+1. Select **Text me a code** to receive text messages for verification, and then select **Next**.
+
+1. When prompted, type the verification code from the text message sent by Microsoft, and then select **Next**.
+
+1. When notified that your phone was registered successfully, select **Done**.
+
+## If you receive a code unexpectedly
+
+### If you already registered your phone number for two-step verification
+
+Receiving an unexpected text message could mean that someone knows your password and is attempting to take over your account. Change your password immediately and notify your organization's administrator about what happened.
+
+### If you never registered your phone number for two-step verification
+
+You can reply to the text message with `STOP` in the body of the text message. This message prevents the provider from sending messages to your phone number in the future. You might need to reply to similar messages with different codes.
+
+However, if you're already using two-step verification, sending this message prevents you from using this phone number to sign in. If you want to begin receiving text messages again, reply to the initial text message with `START` in the body.
+
+## Next steps
+
+- [Get help with two-step verification](multi-factor-authentication-end-user-troubleshoot.md)
+- [Set up a mobile phone as your two-step verification method](multi-factor-authentication-setup-phone-number.md)
active-directory Multi Factor Authentication Setup Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/multi-factor-authentication-setup-phone-number.md
Title: Set up a mobile device as your two-factor verification method - Azure Active Directory | Microsoft Docs
-description: Learn how to set up a mobile device as your two-factor verification method.
+ Title: Set up a mobile device as your two-step verification method - Azure Active Directory | Microsoft Docs
+description: Learn how to set up a mobile device as your two-step verification method.
Last updated 08/12/2019
-# Set up a mobile device as your two-factor verification method
+# Set up a mobile device as your two-step verification method
-You can set up your mobile device to act as your two-factor verification method. Your mobile phone can either receive a text message with a verification code or a phone call.
+You can set up your mobile device to act as your two-step verification method. Your mobile phone can either receive a text message with a verification code or a phone call.
>[!Note] > If the authentication phone option is greyed out, it's possible that your organization doesn't allow you to use a phone number or text message for verification. In this case, you'll need to select another method or contact your administrator for more help.
You can set up your mobile device to act as your two-factor verification method.
![App passwords area of the Additional security verification page](media/multi-factor-authentication-verification-methods/multi-factor-authentication-app-passwords.png) >[!Note]
- >For information about how to use the app password with your older apps, see [Manage app passwords](multi-factor-authentication-end-user-app-passwords.md). You only need to use app passwords if you're continuing to use older apps that don't support two-factor verification.
+ >For information about how to use the app password with your older apps, see [Manage app passwords](multi-factor-authentication-end-user-app-passwords.md). You only need to use app passwords if you're continuing to use older apps that don't support two-step verification.
5. Select **Done**.
You can set up your mobile device to act as your two-factor verification method.
![App passwords area of the Additional security verification page](media/multi-factor-authentication-verification-methods/multi-factor-authentication-app-passwords.png) >[!Note]
- >For information about how to use the app password with your older apps, see [Manage app passwords](multi-factor-authentication-end-user-app-passwords.md). You only need to use app passwords if you're continuing to use older apps that don't support two-factor verification.
+ >For information about how to use the app password with your older apps, see [Manage app passwords](multi-factor-authentication-end-user-app-passwords.md). You only need to use app passwords if you're continuing to use older apps that don't support two-step verification.
5. Select **Done**. ## Next steps
-After you've set up your two-factor verification method, you can add additional methods, manage your settings and app passwords, sign-in, or get help with some common two-factor verification-related problems.
+After you've set up your two-step verification method, you can add additional methods, manage your settings and app passwords, sign-in, or get help with some common two-step verification-related problems.
-- [Manage your two-factor verification method settings](multi-factor-authentication-end-user-manage-settings.md)
+- [Manage your two-step verification method settings](multi-factor-authentication-end-user-manage-settings.md)
- [Manage app passwords](multi-factor-authentication-end-user-app-passwords.md) -- [Sign-in using two-factor verification](multi-factor-authentication-end-user-signin.md)
+- [Sign-in using two-step verification](multi-factor-authentication-end-user-signin.md)
-- [Get help with two-factor verification](multi-factor-authentication-end-user-troubleshoot.md)
+- [Get help with two-step verification](multi-factor-authentication-end-user-troubleshoot.md)
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-verification-solution.md
You can use information in presented VCs to build a user profile. If you want to
* Define a mechanism to deprovision the user profile from the application. Due to the decentralized nature of the Azure AD Verifiable Credentials system, there is no application user provisioning lifecycle.
- * Do not store personally data claims returned in the VC token.
+ * Do not store personal data claims returned in the VC token.
* Only store claims needed for the logic of the relying party.
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
In this article we will summarize migration details for:
Azure Migrate offers a unified platform to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. For AKS, you can use Azure Migrate for the following tasks: * [Containerize ASP.NET applications and migrate to AKS](../migrate/tutorial-app-containerization-aspnet-kubernetes.md)
-* [Containerize Java web applications and migrate to AKS](/azure/aks/tutorial-app-containerization-java-kubernetes)
+* [Containerize Java web applications and migrate to AKS](/azure/migrate/tutorial-app-containerization-java-kubernetes)
## AKS with Standard Load Balancer and Virtual Machine Scale Sets
In this article, we summarized migration details for:
> * Deployment of your cluster configuration
-[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
+[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/api-server-authorized-ip-ranges.md
CURRENT_IP=$(dig @resolver1.opendns.com ANY myip.opendns.com +short)
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32 ```
->> [!NOTE]
+> [!NOTE]
> The above example appends the API server authorized IP ranges on the cluster. To disable authorized IP ranges, use az aks update and specify an empty range "". Another option is to use the below command on Windows systems to get the public IPv4 address, or you can use the steps in [Find your IP address](https://support.microsoft.com/en-gb/help/4026518/windows-10-find-your-ip-address).
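
For example, the disable option mentioned in the note looks like the following (a sketch that reuses the `$RG` and `$AKSNAME` variables from the command above):

```azurecli-interactive
# Disable authorized IP ranges by passing an empty range
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges ""
```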
For more information, see [Security concepts for applications and clusters in AK
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md [route-tables]: ../virtual-network/manage-route-table.md
-[standard-sku-lb]: load-balancer-standard.md
+[standard-sku-lb]: load-balancer-standard.md
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-disk-csi.md
-# Use the Azure disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS) (preview)
-
+# Use the Azure disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
The Azure disk Container Storage Interface (CSI) driver is a [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure disks. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.
To create an AKS cluster with CSI driver support, see [Enable CSI drivers for Az
A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disks for use by a single pod in an AKS cluster. For static provisioning, see [Manually create and use a volume with Azure disks](azure-disk-volume.md). - For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage]. ## Dynamically create Azure disk PVs by using the built-in storage classes
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-csi.md
-# Use Azure Files Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS) (preview)
+# Use Azure Files Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
The Azure Files Container Storage Interface (CSI) driver is a [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Files shares.
A [persistent volume (PV)](concepts-storage.md#persistent-volumes) represents a
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage]. - ## Dynamically create Azure Files PVs by using the built-in storage classes A storage class is used to define how an Azure Files share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure Files shares. Choose one of the following [Azure storage redundancy SKUs][storage-skus] for *skuName*:
kubectl apply -f private-pvc.yaml
This option is optimized for random access workloads with in-place data updates and provides full POSIX file system support. This section shows you how to use NFS shares with the Azure File CSI driver on an AKS cluster.
-Make sure to check the [Support for Azure Storage features](../storage/files/files-nfs-protocol.md#support-for-azure-storage-features) and [region availability](../storage/files/files-nfs-protocol.md#regional-availability) sections during the preview phase.
- ### Create NFS file share storage class Save a `nfs-sc.yaml` file with the manifest below editing the respective placeholders.
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-storage-drivers.md
-# Enable Container Storage Interface (CSI) drivers for Azure disks and Azure Files on Azure Kubernetes Service (AKS) (preview)
+# Enable Container Storage Interface (CSI) drivers for Azure disks and Azure Files on Azure Kubernetes Service (AKS)
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, Azure Kubernetes Service (AKS) can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.
The CSI storage driver support on AKS allows you to natively use:
- This feature can only be set at cluster creation time. - The minimum Kubernetes minor version that supports CSI drivers is v1.17.-- During the preview, the default storage class will still be the [same in-tree storage class](concepts-storage.md#storage-classes). After this feature is generally available, the default storage class will be the `managed-csi` storage class and in-tree storage classes will be removed.-- During the first preview phase, only Azure CLI is supported.--
-### Register the `EnableAzureDiskFileCSIDriver` preview feature
-
-To create an AKS cluster that can use CSI drivers for Azure disks and Azure Files, you must enable the `EnableAzureDiskFileCSIDriver` feature flag on your subscription.
-
-Register the `EnableAzureDiskFileCSIDriver` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableAzureDiskFileCSIDriver"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAzureDiskFileCSIDriver')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
--
-### Install aks-preview CLI extension
-
-To create an AKS cluster or a node pool that can use the CSI storage drivers, you need the latest *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
+- The default storage class will be the `managed-csi` storage class.
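+
+Once a cluster with the CSI drivers is running, one quick way to confirm which storage class is the default is shown below (a sketch, assuming `kubectl` is already connected to the cluster):
+
+```console
+# The default storage class is flagged with "(default)" next to its name
+kubectl get storageclass
+```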
## Create a new cluster that can use CSI storage drivers
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-basic.md
To create the ingress controller, use Helm to install *nginx-ingress*. For added
The ingress controller also needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node. > [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic*. Specify a namespace for your own environment as needed.
+> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed.
> > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
ACR_URL=<REGISTRY_URL>
helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \ --set controller.image.image=$CONTROLLER_IMAGE \ --set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-internal-ip.md
Now deploy the *nginx-ingress* chart with Helm. To use the manifest file created
The ingress controller also needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node. > [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic*. Specify a namespace for your own environment as needed. If your AKS cluster is not Kubernetes RBAC enabled, add `--set rbac.create=false` to the Helm commands.
+> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. If your AKS cluster is not Kubernetes RBAC enabled, add `--set rbac.create=false` to the Helm commands.
> [!TIP] > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
ACR_URL=<REGISTRY_URL>
helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \ --set controller.image.image=$CONTROLLER_IMAGE \ --set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
aks Ingress Own Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-own-tls.md
To create the ingress controller, use `Helm` to install *nginx-ingress*. For add
The ingress controller also needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node. > [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic*. Specify a namespace for your own environment as needed. If your AKS cluster is not Kubernetes RBAC enabled, add `--set rbac.create=false` to the Helm commands.
+> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. If your AKS cluster is not Kubernetes RBAC enabled, add `--set rbac.create=false` to the Helm commands.
> [!TIP] > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
ACR_URL=<REGISTRY_URL>
helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \ --set controller.image.image=$CONTROLLER_IMAGE \ --set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
You must pass two additional parameters to the Helm release so the ingress contr
The ingress controller also needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node. > [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic*. Specify a namespace for your own environment as needed. If your AKS cluster is not Kubernetes RBAC enabled, add `--set rbac.create=false` to the Helm commands.
+> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. If your AKS cluster is not Kubernetes RBAC enabled, add `--set rbac.create=false` to the Helm commands.
> [!TIP] > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
DNS_LABEL=<DNS_LABEL>
helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \ --set controller.image.image=$CONTROLLER_IMAGE \ --set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \ --version $CERT_MANAGER_TAG \ --set installCRDs=true \
- --set nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set nodeSelector."kubernetes\.io/os"=linux \
--set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \ --set image.tag=$CERT_MANAGER_TAG \ --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
To create the ingress controller, use the `helm` command to install *nginx-ingre
The ingress controller also needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node. > [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic*. Specify a namespace for your own environment as needed.
+> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed.
> [!TIP] > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
ACR_URL=<REGISTRY_URL>
helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-basic \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \ --set controller.image.image=$CONTROLLER_IMAGE \ --set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \ --version $CERT_MANAGER_TAG \ --set installCRDs=true \
- --set nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set nodeSelector."kubernetes\.io/os"=linux \
--set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \ --set image.tag=$CERT_MANAGER_TAG \ --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-helm.md
This article shows you how to configure and use Helm in a Kubernetes cluster on
This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
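+
+If you still need to create such a cluster, a minimal sketch with the Azure CLI might look like the following (the resource group, cluster, and registry names are placeholders):
+
+```azurecli
+# Create an AKS cluster with an attached Azure Container Registry (names are placeholders)
+az aks create \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --attach-acr myContainerRegistry \
+  --generate-ssh-keys
+```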
+ You also need the Helm CLI installed, which is the client that runs on your development system. It allows you to start, stop, and manage applications with Helm. If you use the Azure Cloud Shell, the Helm CLI is already installed. For installation instructions on your local platform, see [Installing Helm][helm-install]. > [!IMPORTANT]
Hang tight while we grab the latest from your chart repositories...
Update Complete. ⎈ Happy Helming!⎈ ```
+## Import the images used by the Helm chart into your ACR
+
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+
+```azurecli
+REGISTRY_NAME=<REGISTRY_NAME>
+CONTROLLER_REGISTRY=k8s.gcr.io
+CONTROLLER_IMAGE=ingress-nginx/controller
+CONTROLLER_TAG=v0.48.1
+PATCH_REGISTRY=docker.io
+PATCH_IMAGE=jettech/kube-webhook-certgen
+PATCH_TAG=v1.5.1
+DEFAULTBACKEND_REGISTRY=k8s.gcr.io
+DEFAULTBACKEND_IMAGE=defaultbackend-amd64
+DEFAULTBACKEND_TAG=1.5
+
+az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
+az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
+az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
+```
+
+> [!NOTE]
+> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
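+
+To confirm that the images were imported, you can list the repositories in your registry (an optional check, using the same `$REGISTRY_NAME` variable as above):
+
+```azurecli
+# List the repositories in the registry to verify the imported images
+az acr repository list --name $REGISTRY_NAME --output table
+```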
+ ### Run Helm charts To install charts with Helm, use the [helm install][helm-install-command] command and specify a release name and the name of the chart to install. To see installing a Helm chart in action, let's install a basic nginx deployment using a Helm chart.
+> [!TIP]
+> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed.
+ ```console
-helm install my-nginx-ingress ingress-nginx/ingress-nginx \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.image.registry=mcr.microsoft.com \
- --set defaultBackend.image.registry=mcr.microsoft.com \
- --set controller.admissionWebhooks.patch.image.registry=mcr.microsoft.com
+ACR_URL=<REGISTRY_URL>
+
+# Create a namespace for your ingress resources
+kubectl create namespace ingress-basic
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx \
+ --namespace ingress-basic \
+ --set controller.replicaCount=2 \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.image.registry=$ACR_URL \
+ --set controller.image.image=$CONTROLLER_IMAGE \
+ --set controller.image.tag=$CONTROLLER_TAG \
+ --set controller.image.digest="" \
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
+ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
+ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
+ --set defaultBackend.image.registry=$ACR_URL \
+ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
+ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG
``` The following condensed example output shows the deployment status of the Kubernetes resources created by the Helm chart: ```console
-NAME: my-nginx-ingress
-LAST DEPLOYED: Fri Nov 22 10:08:06 2019
-NAMESPACE: default
+NAME: nginx-ingress
+LAST DEPLOYED: Wed Jul 28 11:35:29 2021
+NAMESPACE: ingress-basic
STATUS: deployed REVISION: 1 TEST SUITE: None NOTES:
-The nginx-ingress controller has been installed.
+The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
-You can watch the status by running 'kubectl --namespace default get services -o wide -w my-nginx-ingress-ingress-nginx-controller'
+You can watch the status by running 'kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller'
... ``` Use the `kubectl get services` command to get the *EXTERNAL-IP* of your service. ```console
-kubectl --namespace default get services -o wide -w my-nginx-ingress-ingress-nginx-controller
+kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
```
-For example, the below command shows the *EXTERNAL-IP* for the *my-nginx-ingress-ingress-nginx-controller* service:
+For example, the below command shows the *EXTERNAL-IP* for the *nginx-ingress-ingress-nginx-controller* service:
```console
-$ kubectl --namespace default get services -o wide -w my-nginx-ingress-ingress-nginx-controller
+$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
-my-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.2.237 <EXTERNAL-IP> 80:31380/TCP,443:32239/TCP 72s app.kubernetes.io/component=controller,app.kubernetes.io/instance=my-nginx-ingress,app.kubernetes.io/name=ingress-nginx
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
+nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.254.93 <EXTERNAL_IP> 80:30004/TCP,443:30348/TCP 61s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
``` ### List releases
my-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.2.237 <EXTERNA
To see a list of releases installed on your cluster, use the `helm list` command. ```console
-helm list
+helm list --namespace ingress-basic
``` The following example shows the *nginx-ingress* release deployed in the previous step: ```console
-$ helm list
-
-NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
-my-nginx-ingress default 1 2019-11-22 10:08:06.048477 -0600 CST deployed nginx-ingress-1.25.0 0.26.1
+$ helm list --namespace ingress-basic
+NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+nginx-ingress ingress-basic 1 2021-07-28 11:35:29.9623734 -0500 CDT deployed ingress-nginx-3.34.0 0.47.0
``` ### Clean up resources
my-nginx-ingress default 1 2019-11-22 10:08:06.048477 -0600 CST deploye
When you deploy a Helm chart, a number of Kubernetes resources are created. These resources include pods, deployments, and services. To clean up these resources, use the [helm uninstall][helm-cleanup] command and specify your release name, as found in the previous `helm list` command. ```console
-helm uninstall my-nginx-ingress
+helm uninstall --namespace ingress-basic nginx-ingress
``` The following example shows that the *nginx-ingress* release has been uninstalled: ```console
-$ helm uninstall my-nginx-ingress
+$ helm uninstall --namespace ingress-basic nginx-ingress
+
+release "nginx-ingress" uninstalled
+```
-release "my-nginx-ingress" uninstalled
+To delete the entire sample namespace, use the `kubectl delete` command and specify your namespace name. All the resources in the namespace are deleted.
+
+```console
+kubectl delete namespace ingress-basic
``` ## Next steps
For more information about managing Kubernetes application deployments with Helm
[helm-repo-add]: https://helm.sh/docs/intro/quickstart/#initialize-a-helm-chart-repository [helm-search]: https://helm.sh/docs/intro/using_helm/#helm-search-finding-charts [helm-repo-update]: https://helm.sh/docs/intro/using_helm/#helm-repo-working-with-repositories
+[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
<!-- LINKS - internal -->
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
[aks-quickstart-cli]: kubernetes-walkthrough.md [aks-quickstart-portal]: kubernetes-walkthrough-portal.md [taints]: operator-best-practices-advanced-scheduler.md
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-updates-kured.md
helm repo update
kubectl create namespace kured # Install kured in that namespace with Helm 3 (only on Linux nodes, kured is not working on Windows nodes)
-helm install kured kured/kured --namespace kured --set nodeSelector."beta\.kubernetes\.io/os"=linux
+helm install kured kured/kured --namespace kured --set nodeSelector."kubernetes\.io/os"=linux
``` You can also configure additional parameters for `kured`, such as integration with Prometheus or Slack. For more information about additional configuration parameters, see the [kured Helm chart][kured-install].
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/devops-api-development-templates.md
API developers face challenges when working with Resource Manager templates:
* API publishers can validate the pull request and make sure the changes are safe and compliant. For example, they can check if only HTTPS is allowed to communicate with the API. Most validations can be automated as a step in the CI/CD pipeline.
-* Once the changes are approved and merged successfully, API publishers can choose to deploy them to the Production instance either on schedule or on demand. The deployment of the templates can be automated using [GitHub Actions](https://github.com/Azure/apimanagement-devops-samples), [Azure Pipelines](/azure/devops/pipelines), [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or other tools.
+* Once the changes are approved and merged successfully, API publishers can choose to deploy them to the Production instance either on schedule or on demand. The deployment of the templates can be automated using [GitHub Actions](https://docs.github.com/en/actions), [Azure Pipelines](/azure/devops/pipelines), [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or other tools.
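+
+For example, a deployment step driven by the Azure CLI might reduce to a single template deployment like the following sketch (the resource group and file names are placeholders):
+
+```azurecli
+# Deploy an extracted API Management ARM template to the target environment (names are placeholders)
+az deployment group create \
+  --resource-group my-apim-rg \
+  --template-file apim-api-template.json \
+  --parameters @apim-api-template.parameters.json
+```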
With this approach, an organization can automate the deployment of API changes into API Management instances, and it's easy to promote changes from one environment to another. Because different API development teams will be working on different sets of API templates and files, it prevents interference between different teams.
With this approach, an organization can automate the deployment of API changes i
## Next steps -- See the open-source [Azure API Management DevOps Resource Kit](https://github.com/Azure/azure-api-management-devops-resource-kit) for additional information, tools, and sample templates.
+- See the open-source [Azure API Management DevOps Resource Kit](https://github.com/Azure/azure-api-management-devops-resource-kit) for additional information, tools, and sample templates.
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-custom-container.md
After deployment, your app is available at `http://<app-name>.azurewebsites.net`
A **Resource Group** is a named collection of all your application's resources in Azure. For example, a Resource Group can contain a reference to a website, a database, and an Azure Function.
-An **App Service Plan** defines the physical resources that will be used to host your website. This quickstart uses a **Basic** hosting plan on **Linux** infrastructure, which means the site will be hosted on a Linux machine alongside other websites. If you start with the **Basic** plan, you can use the Azure portal to scale up so that yours is the only site running on a machine.
+An **App Service Plan** defines the physical resources that will be used to host your website. This quickstart uses a **Basic** hosting plan on **Linux** infrastructure, which means the site will be hosted on a Linux machine alongside other websites. If you start with the **Basic** plan, you can use the Azure portal to scale up so that yours is the only site running on a machine. For pricing, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux).
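+
+For instance, scaling the plan up later is a single CLI call, sketched below (the plan and resource group names are placeholders):
+
+```azurecli
+# Scale the App Service plan up to a larger, dedicated tier (names are placeholders)
+az appservice plan update --name myAppServicePlan --resource-group myResourceGroup --sku P1V2
+```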
## Browse the website
app-service Webjobs Dotnet Deploy Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-dotnet-deploy-vs.md
You can publish multiple WebJobs to a single web app, provided that each WebJob
With version 3.x of the Azure WebJobs SDK, you can create and publish WebJobs as .NET Core console apps. For step-by-step instructions to create and publish a .NET Core console app to Azure as a WebJob, see [Get started with the Azure WebJobs SDK for event-driven background processing](webjobs-sdk-get-started.md). > [!NOTE]
-> .NET Core WebJobs can't be linked with web projects. If you need to deploy your WebJob with a web app, [create your WebJobs as a .NET Framework console app](#webjobs-as-net-framework-console-apps).
+> .NET Core Web Apps and/or .NET Core WebJobs can't be linked with web projects. If you need to deploy your WebJob with a web app, [create your WebJobs as a .NET Framework console app](#webjobs-as-net-framework-console-apps).
### Deploy to Azure App Service
If you enable **Always on** in Azure, you can use Visual Studio to change the We
## Next steps > [!div class="nextstepaction"]
-> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
+> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
app-service Webjobs Sdk Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-get-started.md
The `QueueTrigger` attribute tells the runtime to call this function when a new
} ```
- When a message is added to a queue named `queue`, the function executes and the `message` string is written to the logs. The queue being monitored is in the default Azure Storage account, which you create next.
+ Mark the *Functions* class as `public static` so that the runtime can access and execute the method. In the preceding code sample, when a message is added to a queue named `queue`, the function executes and the `message` string is written to the logs. The queue being monitored is in the default Azure Storage account, which you create next.
The `message` parameter doesn't have to be a string. You can also bind to a JSON object, a byte array, or a [CloudQueueMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage) object. [See Queue trigger usage](../azure-functions/functions-bindings-storage-queue-trigger.md?tabs=csharp#usage). Each binding type (such as queues, blobs, or tables) has a different set of parameter types that you can bind to.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 06/09/2021 Last updated : 07/27/2021
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly.
+## July 2021
+
+### Preview Support for User Assigned Managed Identities
+
+**Type:** New feature
+
+Azure Automation now supports [User Assigned Managed Identities](automation-secure-asset-encryption.md) for cloud jobs in Azure public, Azure Government, and Azure China regions. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-user-assigned-identities/).
+
+### General Availability of customer-managed keys for Azure Automation
+
+**Type:** New feature
+
+Customers can manage and secure encryption of Azure Automation assets using their own managed keys. With the introduction of customer-managed keys, you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault. This additional encryption should help you meet your organization's regulatory or compliance needs.
+
+For more information, see [Use of customer-managed keys](automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account).
+ ## June 2021 ### Security update for Log Analytics Contributor role
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/plan-azure-arc-data-services.md
Currently, the validated list of Kubernetes services and distributions includes:
- AWS Elastic Kubernetes Service (EKS) - Azure Kubernetes Service (AKS)-- Azure Kubernetes Service Engine (AKS Engine) on Azure Stack - Azure Kubernetes Service on Azure Stack HCI - Azure RedHat OpenShift (ARO) - Google Cloud Kubernetes Engine (GKE)
There are multiple options for creating the Azure Arc data controller:
- [Create a data controller in indirect connected mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md) - [Create a data controller in indirect connected mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md) - [Create a data controller in indirect connected mode with Kubernetes tools such as kubectl or oc](create-data-controller-using-kubernetes-native-tools.md)-- [Create a data controller with Azure Arc Jumpstart for an accelerated experience of a test deployment](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/)
+- [Create a data controller with Azure Arc Jumpstart for an accelerated experience of a test deployment](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/)
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
To upload the Drawing package:
To check the status of the drawing package and retrieve its unique ID (`udid`):
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
You can retrieve metadata from the Drawing package resource. The metadata contai
To retrieve content metadata:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
Now that the Drawing package is uploaded, we'll use the `udid` for the uploaded
To convert a Drawing package:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
After the conversion operation completes, it returns a `conversionId`. We can ac
To check the status of the conversion process and retrieve the `conversionId`:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
A dataset is a collection of map features, such as buildings, levels, and rooms.
To create a dataset:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
To create a dataset:
To check the status of the dataset creation process and retrieve the `datasetId`:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
A tileset is a set of vector tiles that render on the map. Tilesets are created
To create a tileset:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
To create a tileset:
To check the status of the dataset creation process and retrieve the `tilesetId`:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
Datasets can be queried using [WFS API](/rest/api/maps/v2/wfs). You can use the
To query the all collections in your dataset:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
In this section, we'll query [WFS API](/rest/api/maps/v2/wfs) for the `unit` fea
To query the unit collection in your dataset:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
Feature statesets define dynamic properties and values on specific features that
To create a stateset:
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
To create a stateset:
To update the `occupied` state of the unit with feature `id` "UNIT26":
-1. In the Postman app, select **New**..
+1. In the Postman app, select **New**.
2. In the **Create New** window, select **HTTP Request**.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
| Oracle Linux 7 | X | X | | X | | Oracle Linux 6 | | X | | | | Oracle Linux 6.4+ | | X | | X |
-| Red Hat Enterprise Linux Server 8.3 | X <sup>3</sup> | X | X | |
+| Red Hat Enterprise Linux Server 8.2, 8.3, 8.4 | X <sup>3</sup> | X | X | |
| Red Hat Enterprise Linux Server 8 | X <sup>3</sup> | X | X | | | Red Hat Enterprise Linux Server 7 | X | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | X | |
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
Now, whenever you use the release template to deploy a new release, an annotatio
Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment.
-## Classic annotations
+## Release annotations using API keys
Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps.
-### Install the Annotations extension (one time)
+### Install the annotations extension (one time)
To be able to create release annotations, you'll need to install one of the many Azure DevOps extensions available in the Visual Studio Marketplace.
To be able to create release annotations, you'll need to install one of the many
You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization.
-### Configure classic release annotations
+### Configure release annotations using API keys
Create a separate API key for each of your Azure Pipelines release templates.
Create a separate API key for each of your Azure Pipelines release templates.
> [!NOTE] > Limits for API keys are described in the [REST API rate limits documentation](https://dev.applicationinsights.io/documentation/Authorization/Rate-limits).
-### Transition from classic to new release annotation
+### Transition to the new release annotation
To use the new release annotations: 1. [Remove the Release Annotations extension](/azure/devops/marketplace/uninstall-disable-extensions).
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net.md
This section will guide you through manually adding Application Insights to a te
} ```
-7. Update the Web.config file as follows:
+7. If Web.config is already updated, skip this step. Otherwise, update the file as follows:
```xml <?xml version="1.0" encoding="utf-8"?>
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pricing.md
union (AppAvailabilityResults),
Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spends for Azure resources. Adding a filter by resource type (to microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by" select "Meter category" or "Meter". For Application Insights resources on the current pricing plans, most usage will show up as Log Analytics for the Meter category since there is a single logs backend for all Azure Monitor components.
-More understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md#download-usage-in-azure-portal).
+More understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md).
In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a filter on the "Instance ID" column which is "contains microsoft.insights/components". Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there is a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md). ## Managing your data volume
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-get-started.md
If you don't have an Azure subscription, create a [free account](https://azure.m
> [!NOTE] > As of April 2020, PowerShell Gallery has deprecated TLS 1.1 and 1.0. >
-> For additionnal prerequisites that you might need, see [PowerShell Gallery TLS Support](https://devblogs.microsoft.com/powershell/powershell-gallery-tls-support).
+> For additional prerequisites that you might need, see [PowerShell Gallery TLS Support](https://devblogs.microsoft.com/powershell/powershell-gallery-tls-support).
> Run PowerShell as Admin.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force
Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense ```
+> [!NOTE]
+> The `AllowPrerelease` switch in the `Install-Module` cmdlet allows installation of a beta release.
+>
+> For additional information, see [Install-Module](https://docs.microsoft.com/powershell/module/powershellget/install-module?view=powershell-7.1#parameters).
+>
+ ### Enable monitoring ```powershell
Enable-ApplicationInsightsMonitoring -ConnectionString 'xxxxxxxx-xxxx-xxxx-xxxx-
Do more with Application Insights Agent: - Review the [detailed instructions](status-monitor-v2-detailed-instructions.md) for an explanation of the commands found here.-- Use our guide to [troubleshoot](status-monitor-v2-troubleshoot.md) Application Insights Agent.
+- Use our guide to [troubleshoot](status-monitor-v2-troubleshoot.md) Application Insights Agent.
azure-monitor Azure Key Vault Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/azure-key-vault-deprecated.md
The following table shows data collection methods and other details about how da
| Azure | | |&#8226; | | | on arrival | ## Use Azure Key Vault
-After you [install the solution](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.KeyVaultAnalyticsOMS?source=intercept.nl&tab=Overview), view the Key Vault data by clicking the **Key Vault Analytics** tile from the Azure Monitor **Overview** page. Open this page from the **Azure Monitor** menu by clicking **More** under the **Insights** section.
+After you install the solution, view the Key Vault data by clicking the **Key Vault Analytics** tile from the Azure Monitor **Overview** page. Open this page from the **Azure Monitor** menu by clicking **More** under the **Insights** section.
![Screenshot of the Key Vault Analytics tile on the Azure Monitor Overview page showing a graph of key vault operations volume over time.](media/azure-key-vault/log-analytics-keyvault-tile.png)
Data collected before the change is not visible in the new solution. You can con
## Next steps * Use [Log queries in Azure Monitor](../logs/log-query-overview.md) to view detailed Azure Key Vault data.-
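As a starting point for such queries, the following sketch summarizes Key Vault operations from resource logs. It assumes Key Vault diagnostics are being sent to the workspace's `AzureDiagnostics` table; adjust the time range and grouping as needed.

```kusto
// Key Vault operations over the last day, grouped by operation and result.
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| summarize OperationCount = count() by OperationName, ResultSignature
| sort by OperationCount desc
```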
azure-monitor Azure Data Explorer Monitor Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-data-explorer-monitor-proxy.md
union <Azure Data Explorer table>, cluster(CL1).database(<workspace-name>).<tabl
:::image type="content" source="media\azure-data-explorer-monitor-proxy\azure-data-explorer-cross-query-proxy.png" alt-text="Cross service query from the Azure Data Explorer.":::
-Using the [`join` operator](/azure/data-explorer/kusto/query/joinoperator), instead of union, may require a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to run it on an Azure Data Explorer native cluster.
+>[!TIP]
+>* Using the [`join` operator](/azure/data-explorer/kusto/query/joinoperator), instead of union, may require a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to run it on an Azure Data Explorer native cluster.
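For illustration, a cross-service `join` run from the Azure Data Explorer side might be shaped like the following. This is a hypothetical sketch: the table names and the join key are placeholders, and `hint.remote=local` is shown as one way to force the join to execute on the native cluster.

```kusto
// Join a native Azure Data Explorer table with a Log Analytics table exposed
// through the cross-service proxy (CL1), executing the join on the native cluster.
MyAdxTable
| join hint.remote=local (
    cluster("CL1").database("<workspace-name>").MyLogAnalyticsTable
) on $left.DeviceId == $right.DeviceId
```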
### Join data from an Azure Data Explorer cluster in one tenant with an Azure Monitor resource in another
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor prici
If you're not yet running Log Analytics, here is some guidance for estimating data volumes:
-1. **Monitoring VMs:** with typical monitoring eanabled, 1 GB to 3 GB of data month is ingested per monitored VM.
+1. **Monitoring VMs:** with typical monitoring enabled, 1 GB to 3 GB of data per month is ingested per monitored VM.
2. **Monitoring Azure Kubernetes Service (AKS) clusters:** details on expected data volumes for monitoring a typical AKS cluster are available [here](../containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster). Follow these [best practices](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to control your AKS cluster monitoring costs.
-3. **Application monitoring:** the Azure Monitor pricing calculator includes a data volume estimator using on your application's usage and based on a statistcal analysis of Application Insights data volumes. In the Application Insights section of the pricing calculator, toggle the switch next to "Estimate data volume based on application activity" to use this.
+3. **Application monitoring:** the Azure Monitor pricing calculator includes a data volume estimator based on your application's usage and a statistical analysis of Application Insights data volumes. In the Application Insights section of the pricing calculator, toggle the switch next to "Estimate data volume based on application activity" to use this.
## Understand your usage and estimate costs
Log Analytics charges are added to your Azure bill. You can see details of your
Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. For example, you can use the "Cost analysis" functionality to view your Azure resource expenses. To track your Log Analytics expenses, you can add a filter by "Resource type" (to microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters). For **Group by**, select **Meter category** or **Meter**. Other services, like Azure Defender (Security Center) and Azure Sentinel, also bill their usage against Log Analytics workspace resources. To see the mapping to the service name, you can select the Table view instead of a chart.
-To gain more understanding of your usage, you can [download your usage from the Azure portal](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md#download-usage-in-azure-portal).
+To gain more understanding of your usage, you can [download your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md).
In the downloaded spreadsheet, you can see usage per Azure resource (for example, Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insight and Analytics" (used by some of the legacy pricing tiers), and "Azure Monitor" (used by commitment tier pricing tiers), and then adding a filter on the "Instance ID" column that is "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. For more information, see [Review your individual Azure subscription bill](../../cost-management-billing/understand/review-individual-bill.md). ## Changing pricing tier
If you observe high data ingestion reported using the `Usage` records (see the [
### Log Analytics Workspace Insights
-Start understanding your data voumes in the **Usage** tab of the [Log Analytics Workspace Insights workbook](log-analytics-workspace-insights-overview.md). On the **Usage Dashboard**, you can easily see:
+Start understanding your data volumes in the **Usage** tab of the [Log Analytics Workspace Insights workbook](log-analytics-workspace-insights-overview.md). On the **Usage Dashboard**, you can easily see:
- Which data tables are ingesting the most data volume in the main table, - What are the top resources contributing data, and - What is the trend of data ingestion.
You can pivot to the **Additional Queries** to easily execute more queries use
Learn more about the [capabilities of the Usage tab](log-analytics-workspace-insights-overview.md#usage-tab).
-While this workbook can anaswer many of the questions without even needing to run a query, to answer more specific questions or do deeper analyses, the queries in the next two sections will help to get you started.
+While this workbook can answer many of the questions without even needing to run a query, to answer more specific questions or do deeper analyses, the queries in the next two sections will help to get you started.
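For example, the following query sketch, using the standard `Usage` table, shows which data types ingested the most billable data over the last month; the `Quantity` column is in MB, so dividing by 1,000 gives an approximate GB figure.

```kusto
// Billable ingestion over the last 31 days, summarized per data type.
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by DataType
| sort by BillableDataGB desc
```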
## Understanding nodes sending data
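A query like the following sketch, which relies on the built-in `_IsBillable` and `_BilledSize` columns, shows which computers are sending the most billable data:

```kusto
// Billable data volume per computer over the last 24 hours.
find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by Computer
| sort by BillableDataBytes desc
```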
This table lists some suggestions for reducing the volume of logs collected.
| -- | - | | Data Collection Rules | The [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) uses Data Collection Rules to manage the collection of data. You can [limit the collection of data](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) using custom XPath queries. | | Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you require. |
-| Azure Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) that you recently enabled as sources of additional data volume. [Learn more](https://docs.microsoft.com/azure/sentinel/azure-sentinel-billing) about Sentinel costs and billing. |
+| Azure Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) that you recently enabled as sources of additional data volume. [Learn more](../../sentinel/azure-sentinel-billing.md) about Sentinel costs and billing. |
| Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier). <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for: <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)). <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10)). <br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10)). <br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10)). <br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10)). <br> - audit removable storage. | | Performance counters | Change the [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection. <br> - Reduce the number of performance counters. | | Event logs | Change the [event log configuration](../agents/data-sources-windows-events.md) to: <br> - Reduce the number of event logs collected. <br> - Collect only required event levels. For example, do not collect *Information* level events. |
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/usage-estimated-costs.md
This results in a view such as:
From here, you can drill in from this accumulated cost summary to get the finer details in the "Cost by resource" view. In the current pricing tiers, Azure Log data is charged on the same set of meters whether it originates from Log Analytics or Application Insights. To separate costs from your Log Analytics or Application Insights usage, you can add a filter on **Resource type**. To see all Application Insights costs, filter the Resource type to "microsoft.insights/components", and for Log Analytics costs, filter Resource type to "microsoft.operationalinsights/workspaces".
-More detail of your usage is available by [downloading your usage from the Azure portal](../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md#download-usage-in-azure-portal).
+More detail of your usage is available by [downloading your usage from the Azure portal](../cost-management-billing/understand/download-azure-daily-usage.md).
In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a filter on the "Instance ID" column which is "contains microsoft.insights/components". Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there is a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md). > [!NOTE]
azure-monitor Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualizations.md
Title: Visualizing data from Azure Monitor | Microsoft Docs
description: Provides a summary of the available methods to visualize metric and log data stored in Azure Monitor. -- Previously updated : 01/25/2021++ Last updated : 07/28/2021
Here is a video walkthrough on creating dashboards.
- No Azure integration. Can't manage dashboards and models through Azure Resource Manager. - Cost to support additional Grafana infrastructure or additional cost for Grafana Cloud.
+## Azure Monitor partners
+Some [Azure Monitor partners](/azure/azure-monitor/partners) may provide visualization functionality. The previous link lists partners evaluated by Microsoft.
+
+### Advantages
+- May provide out-of-the-box visualizations, saving time
+
+### Limitations
+- May have additional costs
+- May require time to research and evaluate partner offerings
## Build your own custom application You can access log and metric data in Azure Monitor through its APIs using any REST client, which allows you to build your own custom websites and applications.
### Disadvantages - Significant engineering effort required. - ## Azure Monitor Views > [!IMPORTANT]
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na ms.devlang: na Previously updated : 04/22/2021 Last updated : 07/28/2021 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Resource | Default limit | Adjustable via support request | |-||--|
+| [Regional capacity quota per subscription](#regional-capacity-quota) | 25 TiB | Yes |
| Number of NetApp accounts per Azure region per subscription | 10 | Yes | | Number of capacity pools per NetApp account | 25 | Yes | | Number of volumes per subscription | 500 | Yes |
If you have already allocated at least 4 TiB of quota for a volume, you can init
You can increase the maxfiles limit to 500 million if your volume quota is at least 20 TiB. <!-- ANF-11854 -->
+## Regional capacity quota
+
+Azure NetApp Files has a regional limit based on capacity. The standard capacity limit for each subscription is 25 TiB per region, across all service levels.
+
+You can request a capacity increase by submitting a specific **Service and subscription limits (quotas)** support ticket as follows:
+
+1. Go to **Support + Troubleshooting** in the portal to start the Support request process:
+
+ ![Screenshot that shows the Support Troubleshooting menu.](../media/azure-netapp-files/support-troubleshoot-menu.png)
+
+2. Select the **Service and subscription limits (quotas)** issue type and enter all relevant details:
+
+ ![Screenshot that shows the Service and Subscription Limits menu.](../media/azure-netapp-files/service-subscription-limits-menu.png)
+
+3. Click the **Enter details** link in the Details tab, then select the **TiBs per subscription** quota type:
+
+ ![Screenshot that shows the Enter Details link in Details tab.](../media/azure-netapp-files/support-details.png)
+
+ ![Screenshot that shows the Quota Details window.](../media/azure-netapp-files/support-quota-details.png)
+
+4. On the Support Method page, make sure to select **Severity Level B - Moderate impact**:
+
+ ![Screenshot that shows the Support Method window.](../media/azure-netapp-files/support-method-severity.png)
+
+5. Complete the request process to issue the request.
+
+After the ticket is submitted, the request will be sent to the Azure capacity management team for processing. You will typically receive a response within 2 business days. The Azure capacity management team might contact you to handle large requests.
+
+A regional capacity quota increase does not incur a billing increase. Billing will still be based on the provisioned capacity pools.
+ ## Request limit increase <a name="limit_increase"></a>
-You can create an Azure support request to increase the adjustable limits from the table above.
+You can create an Azure support request to increase the adjustable limits from the [Resource Limits](#resource-limits) table.
From Azure portal navigation plane:
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/connect-over-cellular.md
Title: Connecting Azure Percept Over Cellular Networks
-description: This article explains how to connect the Azure Percept DK over cellular networks.
+ Title: Connect Azure Percept Over 5G or LTE Networks
+description: This article explains how to connect the Azure Percept DK over 5G or LTE networks.
-+ Last updated 05/20/2021-+
-# Connect the Azure Percept DK over cellular networks
+# Connect the Azure Percept DK over 5G or LTE networks
-The benefits of connecting Edge AI devices over cellular (LTE and 5G) networks are many. Scenarios where Edge AI is most effective are in places where Wi-Fi and LAN connectivity are limited, such as smart cities, autonomous vehicles, and agriculture. Additionally, cellular networks provide better security than Wi-Fi. Lastly, using IoT devices that run AI at the Edge provides a way to optimize the bandwidth on cellular networks. Where only necessary information is sent to the cloud while most of the data is processed on the device. Today, the Azure Percept DK isn't able to connect directly to cellular networks. However, they can connect to cellular gateways using the built-in Ethernet and Wi-Fi capabilities. This article covers how this works.
+The benefits of connecting Edge AI devices over 5G/LTE networks are many. Scenarios where Edge AI is most effective are in places where Wi-Fi and LAN connectivity are limited, such as smart cities, autonomous vehicles, and agriculture. Additionally, 5G/LTE networks provide better security than Wi-Fi. Lastly, using IoT devices that run AI at the Edge provides a way to optimize the bandwidth on 5G/LTE networks: only necessary information is sent to the cloud, while most of the data is processed on the device. Today, the Azure Percept DK isn't able to connect directly to 5G/LTE networks. However, it can connect to 5G/LTE gateways using its built-in Ethernet and Wi-Fi capabilities. This article covers how this works.
-## Options for connecting the Azure Percept DK over cellular networks
-With additional hardware, you can connect the Azure Percept DK using cellular connectivity like LTE or 5G. There are two primary options supported today:
-- **Cellular Wi-Fi hotspot device** - where the dev kit is connected to the Wi-Fi network that the Wi-Fi hotspot provides. In this case, the dev kit connects to the network like any other Wi-Fi network. For more instructions, follow the [Azure Percept DK Setup Guide](./quickstart-percept-dk-set-up.md) and select the cellular Wi-Fi network broadcasted from the hotspot.-- **Cellular Ethernet gateway device** - here the dev kit is connected to the cellular gateway over Ethernet, which takes advantage of the improved security compared to Wi-Fi connections. The rest of this article goes into more detail on how a network like this is configured.
+## Options for connecting the Azure Percept DK over 5G or LTE networks
+With additional hardware, you can connect the Azure Percept DK using 5G/LTE connectivity. There are two primary options supported today:
+- **5G/LTE Wi-Fi hotspot device** - where the dev kit is connected to the Wi-Fi network that the Wi-Fi hotspot provides. In this case, the dev kit connects to the network like any other Wi-Fi network. For more instructions, follow the [Azure Percept DK Setup Guide](./quickstart-percept-dk-set-up.md) and select the 5G/LTE Wi-Fi network broadcasted from the hotspot.
+- **5G/LTE Ethernet gateway device** - here the dev kit is connected to the 5G/LTE gateway over Ethernet, which takes advantage of the improved security compared to Wi-Fi connections. The rest of this article goes into more detail on how a network like this is configured.
-## Cellular gateway topology
+## 5G/LTE gateway topology
-In the above diagram, you can see how a cellular gateway can be easily paired with the Azure Percept DK.
+In the above diagram, you can see how a 5G/LTE gateway can be easily paired with the Azure Percept DK.
-## Considerations when connecting to a cellular gateway
-Here are some important points to consider when connecting the Azure Percept DK to a cellular gateway.
+## Considerations when connecting to a 5G or LTE gateway
+Here are some important points to consider when connecting the Azure Percept DK to a 5G/LTE gateway.
- Set up the gateway first and then validate that it's receiving a connection via the SIM. It will then be easier to troubleshoot any issues found while connecting the Azure Percept DK. - Ensure both ends of the Ethernet cable are firmly connected to the gateway and Azure Percept DK. - Follow the [default instructions](./how-to-connect-over-ethernet.md) for connecting the Azure Percept DK over Ethernet.-- If your cellular plan has a quota, it's recommended that you optimize how much data your Azure Percept DK models send to the cloud.
+- If your 5G/LTE plan has a quota, it's recommended that you optimize how much data your Azure Percept DK models send to the cloud.
- Ensure you have a [properly configured firewall](./concept-security-configuration.md) that blocks externally originated inbound traffic.
-## SSH over a cellular network
-To SSH into the dev kit via a cellular ethernet gateway, you have these options:
+## SSH over a 5G or LTE network
+To SSH into the dev kit via a 5G/LTE ethernet gateway, you have these options:
- **Using the dev kit's Wi-Fi access point**. If you have Wi-Fi disabled, you can re-enable it by rebooting your dev kit. From there, you can connect to the dev kit's Wi-Fi access point and follow [these SSH procedures](./how-to-ssh-into-percept-dk.md).-- **Using a Ethernet connection to a local network (LAN)**. With this option, you'll unplug your dev kit from the cellular gateway and plug it into LAN router. For more information, see [How to Connect over Ethernet](./how-to-connect-over-ethernet.md). -- **Using the gateway's remote access features**. Many cellular gateways include remote access managers that can be used to connect to devices on the network via SSH. Check with manufacturer of your cellular gateway to see if it has this feature. Here's an example of a remote access manager for [Cradlepoint cellular gateways](https://customer.cradlepoint.com/s/article/NCM-Remote-Connect-LAN-Manager).
+- **Using an Ethernet connection to a local network (LAN)**. With this option, you'll unplug your dev kit from the 5G/LTE gateway and plug it into a LAN router. For more information, see [How to Connect over Ethernet](./how-to-connect-over-ethernet.md).
+- **Using the gateway's remote access features**. Many 5G/LTE gateways include remote access managers that can be used to connect to devices on the network via SSH. Check with the manufacturer of your 5G/LTE gateway to see if it has this feature. Here's an example of a remote access manager for [Cradlepoint 5G/LTE gateways](https://customer.cradlepoint.com/s/article/NCM-Remote-Connect-LAN-Manager).
- **Using the dev kit's serial port**. The Azure Percept DK includes a serial connection port that can be used to connect directly to the device. See [Connect your Azure Percept DK over serial](./how-to-connect-to-percept-dk-over-serial.md) for detailed instructions.
-## Considerations when selecting a cellular gateway device
-Cellular gateways support different technologies that impact the maximum data rate for downloads and uploads. The advertised data rates provide guidance for decision making but are usually never reached. Here is some guidance for selecting the right gateway for your needs.
+## Considerations when selecting a 5G or LTE gateway device
+5G/LTE gateways support different technologies that impact the maximum data rate for downloads and uploads. The advertised data rates provide guidance for decision making but are rarely reached in practice. Here is some guidance for selecting the right gateway for your needs.
- **LTE CAT-1** provides up to 10 Mbps down and 5 Mbps up. It is enough for default Azure Percept Devkit features such as object detection and creating a voice assistant. However, it may not be enough for solutions that require streaming video data up to the cloud. - **LTE CAT-3 and 4** provide up to 100 Mbps down and 50 Mbps up, which is enough for streaming video to the cloud. However, it is not enough to stream full HD quality video.
Cellular gateways support different technologies that impact the maximum data ra
## Next steps
-If you have a cellular gateway and would like to connect your Azure Percept DK to it, follow these next steps.
+If you have a 5G/LTE gateway and would like to connect your Azure Percept DK to it, follow these next steps.
- [How to Connect your Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md)
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-troubleshoot-setup.md
Refer to the table below for workarounds to common issues found during the [Azur
|The host computer shows a security warning about the connection to the Azure Percept DK access point.|It's a known issue that will be fixed in a later update.|It's safe to continue through the setup experience.| |The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) appears in the network list but fails to connect.|It could be because of a temporary corruption of the dev kit's Wi-Fi access point.|Reboot the dev kit and try again.| |Unable to connect to a Wi-Fi network during the setup experience.|The Wi-Fi network must currently have internet connectivity to communicate with Azure. EAP[PEAP/MSCHAP], captive portals, and enterprise EAP-TLS connectivity is currently not supported.|Ensure your Wi-Fi network type is supported and has internet connectivity.|
-|After using the Device Code and signing into Azure, you're presented with an error about policy permissions or compliance issues and will be unable to continue. Here are some of the errors you may see:<br>**BlockedByConditionalAccessOnSecurityPolicy** The tenant admin has configured a security policy that blocks this request. Check the security policies defined at the tenant level to determine if your request meets the policy. <br>**DevicePolicyError** The user tried to sign into a device from a platform that's currently not supported through Conditional Access policy.<br>**DeviceNotCompliant** - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune<br>**BlockedByConditionalAccess** Access has been blocked by Conditional Access policies. The access policy doesn't allow token issuance. |Some Azure tenants may block the usage of "Device Codes" for manipulating Azure resources as a Security precaution. It's usually the result of your organization's IT policies. As a result, the Azure Percept Setup experience can't create any Azure resources for you. |Work with your organization to navigate their IT policies. |
-|You see the following errors when trying to receive the device code while setting up a new device: <br>**In the setup experience UI** - *Unable to get device code. Make sure the device is connected to internet*; <br>**In the browser Web Developer Mode** - *Failed to load resource: the server responded with a status of 503 (Service Unavailable)* <br><br>or <br><br>*Certificate not yet valid*. | There's an issue with your Wi-Fi network or your host computer's date/time is incorrect. | Try plugging in an Ethernet cable to the devkit or connecting to a different Wi-Fi network and try again. Less common causes could be your host computer's date/time are off.|
+|**Device Code Errors** <br><br> If you received the following errors on the device code page: <br><br>**In the setup experience UI** - Unable to get device code. Make sure the device is connected to internet; <br><br> **In the browser's Web Developer Mode** - Failed to load resource: the server responded with a status of 503 (Service Unavailable) <br><br>or <br><br>Certificate not yet valid. | There's an issue with your Wi-Fi network that's blocking the device from completing DNS queries or contacting a NTP time server. | Try plugging in an Ethernet cable to the devkit or connecting to a different Wi-Fi network then try again. <br><br> Less common causes could be that your host computer's date/time are incorrect. |
+|**Issues when using the Device Code**<br><br> After using the Device Code and signing into Azure, you're presented with an Azure error message about policy permissions or compliance issues. You'll be unable to continue the setup experience.<br><br> Here are some of the errors you may see:<br><br>**BlockedByConditionalAccessOnSecurityPolicy** The tenant admin has configured a security policy that blocks this request. Check the security policies defined at the tenant level to determine if your request meets the policy. <br><br>**DevicePolicyError** The user tried to sign into a device from a platform that's currently not supported through Conditional Access policy.<br><br>**DeviceNotCompliant** - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune<br><br>**BlockedByConditionalAccess** Access has been blocked by Conditional Access policies. The access policy doesn't allow token issuance.<br><br>**You cannot access this right now** - Your sign-in was successful but does not meet the criteria to access this resource |Some Azure tenants may block the usage of "Device Codes" for manipulating Azure resources as a Security precaution. It's usually the result of your organization's Conditional Access IT policies. As a result, the Azure Percept Setup experience can't create any Azure resources for you. <br><br>Your Conditional Access policy requires you to be connected to your corporate network or VPN to proceed. |Work with your organization to understand their conditional access IT policies. |
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
To verify if your Azure account is an "owner" or "contributor" within th
:::image type="content" source="./media/quickstart-percept-dk-setup/main-08-copy-code.png" alt-text="Copy device code."::: > [!NOTE]
- > If you receive this error message when trying to receive the Device Code: *Unable to get device code. Please make sure the device is connected to internet*. The most common cause is your on-site network. Try plugging in an Ethernet cable to the dev kit or connecting to a different Wi-Fi network and try again. Less common causes could be your host computer's date/time are off.
+ > If you receive this error: *Unable to get device code. Please make sure the device is connected to internet*. The most common cause is your on-site network. Try plugging in an Ethernet cable to the dev kit or connecting to a different Wi-Fi network and try again. Less common causes could be your host computer's date/time are off.
1. A new browser tab will open with a window that says **Enter code**. Paste the code into the window and select **Next**. Do NOT close the **Welcome** tab with the setup experience.
azure-relay Relay Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-authentication-and-authorization.md
For Azure Relay, the management of namespaces and all related resources through
| Role | Description | | - | -- |
-| Azure Relay Owner | Use this role to grant **full** access to Azure Relay resources. |
-| Azure Relay Listener | Use this role to grant **listen and entity read** access to Azure Relay resources. |
-| Azure Relay Sender | Use this role to grant **send and entity read** access to Azure Relay resources. |
+| [Azure Relay Owner](../role-based-access-control/built-in-roles.md#azure-relay-owner) | Use this role to grant **full** access to Azure Relay resources. |
+| [Azure Relay Listener](../role-based-access-control/built-in-roles.md#azure-relay-listener) | Use this role to grant **listen and entity read** access to Azure Relay resources. |
+| [Azure Relay Sender](../role-based-access-control/built-in-roles.md#azure-relay-sender) | Use this role to grant **send and entity read** access to Azure Relay resources. |
## Shared Access Signature
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
The following limitations apply to tags:
* Each resource, resource group, and subscription can have a maximum of 50 tag name/value pairs. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value. The JSON string can contain many values that are applied to a single tag name. A resource group or subscription can contain many resources that each have 50 tag name/value pairs. * The tag name is limited to 512 characters, and the tag value is limited to 256 characters. For storage accounts, the tag name is limited to 128 characters, and the tag value is limited to 256 characters. * Tags can't be applied to classic resources such as Cloud Services.
+* Azure IP Groups and Azure Firewall Policies do not support the PATCH operation.
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]
azure-signalr Howto Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/howto-service-tags.md
# Use service tags for Azure SignalR Service
-You can use [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) for Azure SignalR Service when configuring [Network Security Group](../virtual-network/network-security-groups-overview.md#network-security-groups). It allows you to define outbound network security rule to Azure SignalR Service endpoints without need to hardcode IP addresses.
+You can use [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) for Azure SignalR Service when configuring a [Network Security Group](../virtual-network/network-security-groups-overview.md#network-security-groups). Service tags let you define inbound/outbound network security rules for Azure SignalR Service endpoints without the need to hardcode IP addresses.
Azure SignalR Service manages these service tags. You can't create your own service tag or modify an existing one. Microsoft manages these address prefixes that match to the service tag and automatically updates the service tag as addresses change.
+> [!Note]
+> Starting from 15 August 2021, Azure SignalR Service supports a bidirectional Service Tag for both inbound and outbound traffic.
+ ## Use service tag on portal
+### Configure outbound traffic
+ You can allow outbound traffic to Azure SignalR Service by adding a new outbound network security rule: 1. Go to the network security group.
You can allow outbound traffic to Azure SignalR Service by adding a new outbound
1. Click **Add**.
+### Configure inbound traffic
+
+If you have upstreams, you can also allow inbound traffic from Azure SignalR Service by adding a new inbound network security rule:
+
+1. Go to the network security group.
+
+1. Click on the settings menu called **Inbound security rules**.
+
+1. Click the button **+ Add** on the top.
+
+1. Choose **Service Tag** under **Source**.
+
+1. Choose **AzureSignalR** under **Source service tag**.
+
+1. Fill in **\*** in **Source port ranges**.
+
+ :::image type="content" alt-text="Create an inbound security rule" source="media/howto-service-tags/portal-add-inbound-security-rule.png" :::
++
+1. Adjust other fields according to your needs.
+
+1. Click **Add**.
## Next steps
azure-sql-edge Create External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/create-external-stream-transact-sql.md
Title: CREATE EXTERNAL STREAM (Transact-SQL) - Azure SQL Edge
-description: Learn about the CREATE EXTERNAL STREAM statement in Azure SQL Edge
-keywords:
+description: Learn about the CREATE EXTERNAL STREAM statement in Azure SQL Edge
+keywords:
Azure SQL Edge currently only supports the following data sources as stream inpu
## Syntax
-```sql
-CREATE EXTERNAL STREAM {external_stream_name}
-( <column_definition> [, <column_definition> ] * ) -- Used for Inputs - optional
+```syntaxsql
+CREATE EXTERNAL STREAM {external_stream_name}
+( <column_definition> [, <column_definition> ] * ) -- Used for Inputs - optional
WITH ( <with_options> ) <column_definition> ::=
WITH ( <with_options> )
[ ( precision [ , scale ] | max ) ] <with_options> ::=
- DATA_SOURCE = data_source_name,
- LOCATION = location_name,
- [FILE_FORMAT = external_file_format_name], --Used for Inputs - optional
+ DATA_SOURCE = data_source_name,
+ LOCATION = location_name,
+ [FILE_FORMAT = external_file_format_name], --Used for Inputs - optional
[<optional_input_options>],
- [<optional_output_options>],
+ [<optional_output_options>],
TAGS = <tag_column_value>
-<optional_input_options> ::=
+<optional_input_options> ::=
INPUT_OPTIONS = '[<Input_options_data>]'
-<Input_option_data> ::=
+<Input_option_data> ::=
<input_option_values> [ , <input_option_values> ] <input_option_values> ::= PARTITIONS: [number_of_partitions]
- | CONSUMER_GROUP: [ consumer_group_name ]
- | TIME_POLICY: [ time_policy ]
- | LATE_EVENT_TOLERANCE: [ late_event_tolerance_value ]
+ | CONSUMER_GROUP: [ consumer_group_name ]
+ | TIME_POLICY: [ time_policy ]
+ | LATE_EVENT_TOLERANCE: [ late_event_tolerance_value ]
| OUT_OF_ORDER_EVENT_TOLERANCE: [ out_of_order_tolerance_value ]
-
-<optional_output_options> ::=
+
+<optional_output_options> ::=
OUTPUT_OPTIONS = '[<output_option_data>]'
-<output_option_data> ::=
+<output_option_data> ::=
<output_option_values> [ , <output_option_values> ] <output_option_values> ::=
- REJECT_POLICY: [ reject_policy ]
- | MINIMUM_ROWS: [ row_value ]
- | MAXIMUM_TIME: [ time_value_minutes]
- | PARTITION_KEY_COLUMN: [ partition_key_column_name ]
- | PROPERTY_COLUMNS: [ ( [ output_col_name ] ) ]
- | SYSTEM_PROPERTY_COLUMNS: [ ( [ output_col_name ] ) ]
- | PARTITION_KEY: [ partition_key_name ]
- | ROW_KEY: [ row_key_name ]
- | BATCH_SIZE: [ batch_size_value ]
- | MAXIMUM_BATCH_COUNT: [ batch_value ]
+ REJECT_POLICY: [ reject_policy ]
+ | MINIMUM_ROWS: [ row_value ]
+ | MAXIMUM_TIME: [ time_value_minutes]
+ | PARTITION_KEY_COLUMN: [ partition_key_column_name ]
+ | PROPERTY_COLUMNS: [ ( [ output_col_name ] ) ]
+ | SYSTEM_PROPERTY_COLUMNS: [ ( [ output_col_name ] ) ]
+ | PARTITION_KEY: [ partition_key_name ]
+ | ROW_KEY: [ row_key_name ]
+ | BATCH_SIZE: [ batch_size_value ]
+ | MAXIMUM_BATCH_COUNT: [ batch_value ]
| STAGING_AREA: [ blob_data_source ]
-
+ <tag_column_value> ::= -- Reserved for Future Usage
-);
+);
``` ## Arguments - [DATA_SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql/) - [FILE_FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql/)-- **LOCATION**: Specifies the name for the actual data or location in the data source.
+- **LOCATION**: Specifies the name for the actual data or location in the data source.
  - For Edge Hub or Kafka stream objects, location specifies the name of the Edge Hub or Kafka topic to read from or write to. - For SQL stream objects (SQL Server, Azure SQL Database or Azure SQL Edge), location specifies the name of the table. If the stream is created in the same database and schema as the destination table, then just the table name suffices. Otherwise you need to fully qualify (<database_name>.<schema_name>.<table_name>) the table name. - For Azure Blob Storage stream objects, location refers to the path pattern to use inside the blob container. For more information on this feature refer to (/articles/stream-analytics/stream-analytics-define-outputs.md#blob-storage-and-azure-data-lake-gen2) - **INPUT_OPTIONS**: Specify options as key-value pairs for services such as Kafka, IoT Edge Hub that are inputs to streaming queries
- - PARTITIONS:
+ - PARTITIONS:
Number of partitions defined for a topic. The maximum number of partitions which can be used is limited to 32. - Applies to Kafka Input Streams - CONSUMER_GROUP: Event and IoT Hubs limit the number of readers within one consumer group (to 5). Leaving this field empty will use the '$Default' consumer group.
- - Reserved for future usage. Does not apply to Azure SQL Edge.
+ - Reserved for future usage. Does not apply to Azure SQL Edge.
- TIME_POLICY: Describes whether to drop events or adjust the event time when late events or out of order events pass their tolerance value. - Reserved for future usage. Does not apply to Azure SQL Edge.
WITH ( <with_options> )
- OUT_OF_ORDER_EVENT_TOLERANCE: Events can arrive out of order after they've made the trip from the input to the streaming query. These events can be accepted as-is, or you can choose to pause for a set period to reorder them. - Reserved for future usage. Does not apply to Azure SQL Edge.
-
-- **OUTPUT_OPTIONS**: Specify options as key-value pairs for supported services that are outputs to streaming queries
- - REJECT_POLICY: DROP | RETRY
- Species the data error handling policies when data conversion errors occur.
- - Applies to all supported outputs
- - MINIMUM_ROWS:
- Minimum rows required per batch written to an output. For Parquet, every batch will create a new file.
- - Applies to all supported outputs
- - MAXIMUM_TIME:
- Maximum wait time in minutes per batch. After this time, the batch will be written to the output even if the minimum rows requirement is not met.
- - Applies to all supported outputs
- - PARTITION_KEY_COLUMN:
- The column that is used for the partition key.
+
+- **OUTPUT_OPTIONS**: Specify options as key-value pairs for supported services that are outputs to streaming queries
+ - REJECT_POLICY: DROP | RETRY
+    Specifies the data error handling policies when data conversion errors occur.
+ - Applies to all supported outputs
+ - MINIMUM_ROWS:
+ Minimum rows required per batch written to an output. For Parquet, every batch will create a new file.
+ - Applies to all supported outputs
+ - MAXIMUM_TIME:
+ Maximum wait time in minutes per batch. After this time, the batch will be written to the output even if the minimum rows requirement is not met.
+ - Applies to all supported outputs
+ - PARTITION_KEY_COLUMN:
+ The column that is used for the partition key.
- Reserved for future usage. Does not apply to Azure SQL Edge.
- - PROPERTY_COLUMNS:
- A comma-separated list of the names of output columns that will be attached to messages as custom properties if provided.
- - Reserved for future usage. Does not apply to Azure SQL Edge.
- - SYSTEM_PROPERTY_COLUMNS:
- A JSON-formatted collection of name/value pairs of System Property names and output columns to be populated on Service Bus messages. e.g. { "MessageId": "column1", "PartitionKey": "column2"}
- - Reserved for future usage. Does not apply to Azure SQL Edge.
- - PARTITION_KEY:
- The name of the output column containing the partition key. The partition key is a unique identifier for the partition within a given table that forms the first part of an entity's primary key. It is a string value that may be up to 1 KB in size.
+ - PROPERTY_COLUMNS:
+ A comma-separated list of the names of output columns that will be attached to messages as custom properties if provided.
- Reserved for future usage. Does not apply to Azure SQL Edge.
- - ROW_KEY:
- The name of the output column containing the row key. The row key is a unique identifier for an entity within a given partition. It forms the second part of an entity's primary key. The row key is a string value that may be up to 1 KB in size.
+ - SYSTEM_PROPERTY_COLUMNS:
+ A JSON-formatted collection of name/value pairs of System Property names and output columns to be populated on Service Bus messages. e.g. { "MessageId": "column1", "PartitionKey": "column2"}
- Reserved for future usage. Does not apply to Azure SQL Edge.
- - BATCH_SIZE:
- This represents the number of transactions for table storage where the maximum can go up to 100 records. For Azure Functions, this represents the batch size in bytes sent to the function per call - default is 256 kB.
- - Reserved for future usage. Does not apply to Azure SQL Edge.
- - MAXIMUM_BATCH_COUNT:
- Maximum number of events sent to the function per call for Azure function - default is 100. For SQL Database, this represents the maximum number of records sent with every bulk insert transaction - default is 10,000.
- - Applies to all SQL based outputs
- - STAGING_AREA: EXTERNAL DATA SOURCE object to Blob Storage
- The staging area for high-throughput data ingestion into Azure Synapse Analytics
+ - PARTITION_KEY:
+ The name of the output column containing the partition key. The partition key is a unique identifier for the partition within a given table that forms the first part of an entity's primary key. It is a string value that may be up to 1 KB in size.
+ - Reserved for future usage. Does not apply to Azure SQL Edge.
+ - ROW_KEY:
+ The name of the output column containing the row key. The row key is a unique identifier for an entity within a given partition. It forms the second part of an entity's primary key. The row key is a string value that may be up to 1 KB in size.
+ - Reserved for future usage. Does not apply to Azure SQL Edge.
+ - BATCH_SIZE:
+ This represents the number of transactions for table storage where the maximum can go up to 100 records. For Azure Functions, this represents the batch size in bytes sent to the function per call - default is 256 kB.
+ - Reserved for future usage. Does not apply to Azure SQL Edge.
+ - MAXIMUM_BATCH_COUNT:
+ Maximum number of events sent to the function per call for Azure function - default is 100. For SQL Database, this represents the maximum number of records sent with every bulk insert transaction - default is 10,000.
+ - Applies to all SQL based outputs
+ - STAGING_AREA: EXTERNAL DATA SOURCE object to Blob Storage
+ The staging area for high-throughput data ingestion into Azure Synapse Analytics
- Reserved for future usage. Does not apply to Azure SQL Edge.
Type: Input or Output<br>
Syntax: ```sql
-CREATE EXTERNAL DATA SOURCE MyEdgeHub
-WITH
-(
- LOCATION = 'edgehub://'      
-);
-
-CREATE EXTERNAL FILE FORMAT myFileFormat
-WITH (
- FORMAT_TYPE = JSON,
-);
-
-CREATE EXTERNAL STREAM Stream_A
-WITH
-(
- DATA_SOURCE = MyEdgeHub,
- FILE_FORMAT = myFileFormat,
- LOCATION = '<mytopicname>',
- OUTPUT_OPTIONS =
+CREATE EXTERNAL DATA SOURCE MyEdgeHub
+WITH
+(
+ LOCATION = 'edgehub://'
+);
+
+CREATE EXTERNAL FILE FORMAT myFileFormat
+WITH (
+ FORMAT_TYPE = JSON
+);
+
+CREATE EXTERNAL STREAM Stream_A
+WITH
+(
+ DATA_SOURCE = MyEdgeHub,
+ FILE_FORMAT = myFileFormat,
+ LOCATION = '<mytopicname>',
+ OUTPUT_OPTIONS =
'REJECT_TYPE: Drop' ); ```
Type: Output<br>
Syntax: ```sql
-CREATE DATABASE SCOPED CREDENTIAL SQLCredName
-WITH IDENTITY = '<user>',
-SECRET = '<password>';
-
Azure SQL Database
-CREATE EXTERNAL DATA SOURCE MyTargetSQLTabl
-WITH
-(
- LOCATION = '<my_server_name>.database.windows.net',
- CREDENTIAL = SQLCredName
-);
-
+CREATE DATABASE SCOPED CREDENTIAL SQLCredName
+WITH IDENTITY = '<user>',
+SECRET = '<password>';
+
+-- Azure SQL Database
+CREATE EXTERNAL DATA SOURCE MyTargetSQLTabl
+WITH
+(
+ LOCATION = '<my_server_name>.database.windows.net',
+ CREDENTIAL = SQLCredName
+);
+ --SQL Server or Azure SQL Edge
-CREATE EXTERNAL DATA SOURCE MyTargetSQLTabl
-WITH
-(
- LOCATION = ' <sqlserver://<ipaddress>,<port>',
- CREDENTIAL = SQLCredName
-);
-
-CREATE EXTERNAL STREAM Stream_A
-WITH
-(
- DATA_SOURCE = MyTargetSQLTable,
+CREATE EXTERNAL DATA SOURCE MyTargetSQLTabl
+WITH
+(
+ LOCATION = ' <sqlserver://<ipaddress>,<port>',
+ CREDENTIAL = SQLCredName
+);
+
+CREATE EXTERNAL STREAM Stream_A
+WITH
+(
+ DATA_SOURCE = MyTargetSQLTable,
LOCATION = '<DatabaseName>.<SchemaName>.<TableName>' ,
- --Note: If table is contained in the database, <TableName> should be sufficient
- OUTPUT_OPTIONS =
+ --Note: If table is contained in the database, <TableName> should be sufficient
+ OUTPUT_OPTIONS =
'REJECT_TYPE: Drop'
-);
+);
``` ### Example 3 - Kafka
Type: Input<br>
Syntax: ```sql
-CREATE EXTERNAL DATA SOURCE MyKafka_tweets
-WITH
-(
- --The location maps to KafkaBootstrapServer
- LOCATION = 'kafka://<kafkaserver>:<ipaddress>',
- CREDENTIAL = kafkaCredName
-);
-
-CREATE EXTERNAL FILE FORMAT myFileFormat
-WITH (
- FORMAT_TYPE = JSON,
+CREATE EXTERNAL DATA SOURCE MyKafka_tweets
+WITH
+(
+ --The location maps to KafkaBootstrapServer
+ LOCATION = 'kafka://<kafkaserver>:<ipaddress>',
+ CREDENTIAL = kafkaCredName
+);
+
+CREATE EXTERNAL FILE FORMAT myFileFormat
+WITH (
+ FORMAT_TYPE = JSON,
DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
-);
-
-CREATE EXTERNAL STREAM Stream_A (user_id VARCHAR, tweet VARCHAR)
-WITH
-(
- DATA_SOURCE = MyKafka_tweets,
- LOCATION = '<KafkaTopicName>',
- FILE_FORMAT = myFileFormat,
- INPUT_OPTIONS =
+);
+
+CREATE EXTERNAL STREAM Stream_A (user_id VARCHAR, tweet VARCHAR)
+WITH
+(
+ DATA_SOURCE = MyKafka_tweets,
+ LOCATION = '<KafkaTopicName>',
+ FILE_FORMAT = myFileFormat,
+ INPUT_OPTIONS =
'PARTITIONS: 5'
-);
+);
``` ## See also
azure-sql-edge Data Retention Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-enable-disable.md
Last updated 09/04/2020
# Enable and disable data retention policies
-This topic describes how to enable and disable data retention policies for a database and a table.
+This topic describes how to enable and disable data retention policies for a database and a table.
## Enable data retention for a database
FROM sys.databases;
Data Retention must be enabled for each table for which you want data to be automatically purged. When Data Retention is enabled on the database and the table, a background system task will periodically scan the table to identify and delete any obsolete (aged) rows. Data Retention can be enabled on a table either during table creation using [Create Table](/sql/t-sql/statements/create-table-transact-sql) or by using [Alter Table](/sql/t-sql/statements/alter-table-transact-sql).
-The following example shows how to enable data retention for a table by using [Create Table](/sql/t-sql/statements/create-table-transact-sql).
+The following example shows how to enable data retention for a table by using [Create Table](/sql/t-sql/statements/create-table-transact-sql).
```sql
-CREATE TABLE [dbo].[data_retention_table]
+CREATE TABLE [dbo].[data_retention_table]
(
-[dbdatetime2] datetime2(7),
-[product_code] int,
-[value] char(10),
+[dbdatetime2] datetime2(7),
+[product_code] int,
+[value] char(10),
CONSTRAINT [pk_current_data_retention_table] PRIMARY KEY CLUSTERED ([product_code]) ) WITH (DATA_DELETION = ON ( FILTER_COLUMN = [dbdatetime2], RETENTION_PERIOD = 1 day ) ) ```
-The `WITH (DATA_DELETION = ON ( FILTER_COLUMN = [dbdatetime2], RETENTION_PERIOD = 1 day ) )` part of the create table command sets the data retention on the table. The command uses the following required parameters
+The `WITH (DATA_DELETION = ON ( FILTER_COLUMN = [dbdatetime2], RETENTION_PERIOD = 1 day ) )` part of the create table command sets the data retention on the table. The command uses the following required parameters
- DATA_DELETION - Indicates whether data retention is ON or OFF.-- FILTER_COLUMN - Name on the column in the table, which will be used to ascertain if the rows are obsolete or not. The filter column can only be a column with the following data types
+- FILTER_COLUMN - Name of the column in the table, which will be used to ascertain if the rows are obsolete or not. The filter column can only be a column with the following data types
- Date - SmallDateTime - DateTime
The `WITH (DATA_DELETION = ON ( FILTER_COLUMN = [dbdatetime2], RETENTION_PERIOD
- DateTimeOffset - RETENTION_PERIOD - An integer value followed by a unit descriptor. The allowed units are DAY, DAYS, WEEK, WEEKS, MONTH, MONTHS, YEAR and YEARS.
-The following example shows how to enable data retention for table by using [Alter Table](/sql/t-sql/statements/alter-table-transact-sql).
+The following example shows how to enable data retention for table by using [Alter Table](/sql/t-sql/statements/alter-table-transact-sql).
```sql Alter Table [dbo].[data_retention_table]
select name, data_retention_period, data_retention_period_unit from sys.tables
A value of data_retention_period = -1 and data_retention_period_unit as INFINITE indicates that data retention is not set on the table.
-The following query can be used to identify the column used as the filter_column for data retention.
+The following query can be used to identify the column used as the filter_column for data retention.
```sql Select name from sys.columns
-where is_data_deletion_filter_column =1
+where is_data_deletion_filter_column =1
and object_id = object_id(N'dbo.data_retention_table', N'U') ``` ## Correlating DB and table data retention settings
-The data retention setting on the database and the table, are used in conjunction to determine if autocleanup for aged rows will run on the tables or not.
+The data retention settings on the database and the table are used in conjunction to determine whether autocleanup of aged rows will run on the tables.
|Database Option | Table Option | Behavior | |-|--|-|
The data retention setting on the database and the table, are used in conjunctio
| ON | OFF | Data Retention policy is enabled at the database level. However since the option is disabled at the table level, there is no retention-based cleanup of aged rows.| | ON | ON | Data Retention policy is enabled for both the database and tables. Automatic cleanup of obsolete records is enabled. |
-## Disable data retention on a table
+## Disable data retention on a table
Data Retention can be disabled on a table by using [Alter Table](/sql/t-sql/statements/alter-table-transact-sql). The following command can be used to disable data retention on a table.
Set (DATA_DELETION = OFF)
Data Retention can be disabled on a table by using [Alter Database](/sql/t-sql/statements/alter-database-transact-sql-set-options). The following command can be used to disable data retention on a database. ```sql
-ALTER DATABASE <DatabaseName> SET DATA_RETENTION OFF;
+ALTER DATABASE [<DatabaseName>] SET DATA_RETENTION OFF;
``` ## Next steps
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/date-bucket-tsql.md
Last updated 09/03/2020
# Date_Bucket (Transact-SQL)
-This function returns the datetime value corresponding to the start of each datetime bucket, from the timestamp defined by the `origin` parameter or the default origin value of `1900-01-01 00:00:00.000` if the origin parameter is not specified.
+This function returns the datetime value corresponding to the start of each datetime bucket, from the timestamp defined by the `origin` parameter or the default origin value of `1900-01-01 00:00:00.000` if the origin parameter is not specified.
See [Date and Time Data Types and Functions &#40;Transact-SQL&#41;](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql/) for an overview of all Transact-SQL date and time data types and functions.
See [Date and Time Data Types and Functions &#40;Transact-SQL&#41;](/sql/t-sql/f
## Syntax
-```sql
+```syntaxsql
DATE_BUCKET (datePart, number, date, origin) ```
The part of *date* that is used with the 'number' parameter. Ex. Year, month
> [!NOTE] > `DATE_BUCKET` does not accept user-defined variable equivalents for the *datePart* arguments.
-
-|*datePart*|Abbreviations|
+
+|*datePart*|Abbreviations|
|||
-|**day**|**dd**, **d**|
-|**week**|**wk**, **ww**|
+|**day**|**dd**, **d**|
+|**week**|**wk**, **ww**|
|**month**|**mm**, **m**|
-|**quarter**|**qq**, **q**|
-|**year**|**yy**, **yyyy**|
-|**hour**|**hh**|
-|**minute**|**mi**, **n**|
-|**second**|**ss**, **s**|
-|**millisecond**|**ms**|
+|**quarter**|**qq**, **q**|
+|**year**|**yy**, **yyyy**|
+|**hour**|**hh**|
+|**minute**|**mi**, **n**|
+|**second**|**ss**, **s**|
+|**millisecond**|**ms**|
*number*
-The integer number that decides the width of the bucket combined with *datePart* argument. This represents the width of the dataPart buckets from the origin time. **`This argument cannot be a negative integer value`**.
+The integer that, combined with the *datePart* argument, decides the width of the bucket. This represents the width of the *datePart* buckets from the origin time. **`This argument cannot be a negative integer value`**.
*date*
An expression that can resolve to one of the following values:
For *date*, `DATE_BUCKET` will accept a column expression, expression, or user-defined variable if they resolve to any of the data types mentioned above.
-**Origin**
+**Origin**
An optional expression that can resolve to one of the following values:
An optional expression that can resolve to one of the following values:
+ **smalldatetime** + **time**
-The data type for `Origin` should match the data type of the `Date` parameter.
+The data type for `Origin` should match the data type of the `Date` parameter.
`DATE_BUCKET` uses a default origin date value of `1900-01-01 00:00:00.000` (that is, 12:00 AM on Monday, January 1, 1900) if no origin value is specified for the function.
The return value data type for this method is dynamic. The return type depends o
### Understanding the output from `DATE_BUCKET`
-`Date_Bucket` returns the latest date or time value, corresponding to the datePart and number parameter. For example, in the expressions below, `Date_Bucket` will return the output value of `2020-04-13 00:00:00.0000000`, as the output is calculated based on one week buckets from the default origin time of `1900-01-01 00:00:00.000`. The value `2020-04-13 00:00:00.0000000` is 6276 weeks from the origin value of `1900-01-01 00:00:00.000`.
+`Date_Bucket` returns the latest date or time value corresponding to the *datePart* and *number* parameters. For example, in the expressions below, `Date_Bucket` will return the output value of `2020-04-13 00:00:00.0000000`, as the output is calculated based on one-week buckets from the default origin time of `1900-01-01 00:00:00.000`. The value `2020-04-13 00:00:00.0000000` is 6276 weeks from the origin value of `1900-01-01 00:00:00.000`.
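As a concrete sketch of that calculation (one-week buckets, default origin; this is an illustrative query, separate from the article's own example below):

```sql
declare @date datetime2 = '2020-04-15 21:22:11';

-- 2020-04-13 starts the one-week bucket (6276 weeks from the default origin 1900-01-01)
-- that contains 2020-04-15, so this returns 2020-04-13 00:00:00.0000000.
select DATE_BUCKET(week, 1, @date);
```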
```sql declare @date datetime2 = '2020-04-15 21:22:11'
Select DATE_BUCKET(wk, 5, @date, @origin)
## datepart Argument **dayofyear**, **day**, and **weekday** return the same value. Each *datepart* and its abbreviations return the same value.
-
+ ## number Argument The *number* argument cannot exceed the range of positive **int** values. In the following statements, the argument for *number* exceeds the range of **int** by 1. The following statement returns the following error message: "`Msg 8115, Level 16, State 2, Line 2. Arithmetic overflow error converting expression to data type int."`
-
+ ```sql declare @date datetime2 = '2020-04-30 00:00:00' Select DATE_BUCKET(dd, 2147483648, @date)
-```
+```
-If a negative value for number is passed to the `Date_Bucket` function, the following error will be returned.
+If a negative value for number is passed to the `Date_Bucket` function, the following error will be returned.
-```sql
+```txt
Msg 9834, Level 16, State 1, Line 1
Invalid bucket width value passed to date_bucket function. Only positive values are allowed.
```
-## date Argument
+## date Argument
-`DATE_BUCKET` return the base value corresponding to the data type of the `date` argument. In the following example, an output value with datetime2 datatype is returned.
+`DATE_BUCKET` returns the base value corresponding to the data type of the `date` argument. In the following example, an output value with the datetime2 data type is returned.
```sql Select DATE_BUCKET(dd, 10, SYSUTCDATETIME()) ```
-## origin Argument
+## origin Argument
The data type of the `origin` and `date` arguments must be the same. If different data types are used, an error will be generated.
Select 'Seconds', DATE_BUCKET(ss, 1, @date)
Here is the result set.
-```sql
+```txt
Week 2020-04-27 00:00:00.0000000 Day 2020-04-30 00:00:00.0000000 Hour 2020-04-30 21:00:00.0000000
Seconds 2020-04-30 21:21:21.0000000
### B. Using expressions as arguments for the number and date parameters These examples use different types of expressions as arguments for the *number* and *date* parameters. These examples are built using the 'AdventureWorksDW2017' Database.
-
-#### Specifying user-defined variables as number and date
+
+#### Specifying user-defined variables as number and date
This example specifies user-defined variables as arguments for *number* and *date*:
-
+ ```sql DECLARE @days int = 365,
- @datetime datetime2 = '2000-01-01 01:01:01.1110000'; /* 2000 was a leap year */;
+ @datetime datetime2 = '2000-01-01 01:01:01.1110000'; /* 2000 was a leap year */;
SELECT Date_Bucket(day, @days, @datetime); ``` Here is the result set.
-```sql
+```txt
1999-12-08 00:00:00.0000000 (1 row affected)
-```
+```
#### Specifying a column as date In the example below, we are calculating the sum of OrderQuantity and sum of UnitPrice grouped over weekly date buckets.
-
+ ```sql SELECT Date_Bucket(week, 1 ,cast(Shipdate as datetime2)) AS ShippedDateBucket
FROM dbo.FactInternetSales FIS
where Shipdate between '2011-01-03 00:00:00.000' and '2011-02-28 00:00:00.000' Group by Date_Bucket(week, 1 ,cast(Shipdate as datetime2)) order by 1
-```
+```
Here is the result set.
-
-```sql
+
+```txt
ShippedDateBucket SumOrderQuantity SumUnitPrice - 2011-01-03 00:00:00.0000000 21 65589.7546
ShippedDateBucket SumOrderQuantity SumUnitPrice
2011-02-14 00:00:00.0000000 32 107804.8964 2011-02-21 00:00:00.0000000 37 119456.3428 2011-02-28 00:00:00.0000000 9 28968.6982
-```
+```
#### Specifying scalar system function as date This example specifies `SYSDATETIME` for *date*. The exact value returned depends on the day and time of statement execution:
-
+ ```sql
-SELECT Date_Bucket(wk, 10, SYSDATETIME());
-```
+SELECT Date_Bucket(wk, 10, SYSDATETIME());
+```
Here is the result set.
-```sql
+```txt
2020-03-02 00:00:00.0000000 (1 row affected)
-```
+```
#### Specifying scalar subqueries and scalar functions as number and date This example uses scalar subqueries, `MAX(OrderDate)`, as arguments for *number* and *date*. `(SELECT top 1 CustomerKey FROM dbo.DimCustomer where GeographyKey > 100)` serves as an artificial argument for the number parameter, to show how to select a *number* argument from a value list.
-
+ ```sql
-SELECT DATE_BUCKET(week,(SELECT top 1 CustomerKey FROM dbo.DimCustomer where GeographyKey > 100),
- (SELECT MAX(OrderDate) FROM dbo.FactInternetSales));
-```
-
+SELECT DATE_BUCKET(week,(SELECT top 1 CustomerKey FROM dbo.DimCustomer where GeographyKey > 100),
+ (SELECT MAX(OrderDate) FROM dbo.FactInternetSales));
+```
+ #### Specifying numeric expressions and scalar system functions as number and date This example uses a numeric expression ((10/2)), and scalar system functions (SYSDATETIME) as arguments for number and date.
-
+ ```sql SELECT Date_Bucket(week,(10/2), SYSDATETIME()); ```
SELECT Date_Bucket(week,(10/2), SYSDATETIME());
#### Specifying an aggregate window function as number This example uses an aggregate window function as an argument for *number*.
-
+ ```sql
-Select
+Select
DISTINCT DATE_BUCKET(day, 30, Cast([shipdate] as datetime2)) as DateBucket, First_Value([SalesOrderNumber]) OVER (Order by DATE_BUCKET(day, 30, Cast([shipdate] as datetime2))) as First_Value_In_Bucket, Last_Value([SalesOrderNumber]) OVER (Order by DATE_BUCKET(day, 30, Cast([shipdate] as datetime2))) as Last_Value_In_Bucket from [dbo].[FactInternetSales] Where ShipDate between '2011-01-03 00:00:00.000' and '2011-02-28 00:00:00.000' order by DateBucket
-GO
-```
-### C. Using a non default origin value
+GO
+```
+### C. Using a non-default origin value
-This example uses a non default orgin value to generate the date buckets.
+This example uses a non-default origin value to generate the date buckets.
```sql declare @date datetime2 = '2020-06-15 21:22:11'
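-- (Illustrative continuation: the origin value and bucket width below are
-- assumptions for this sketch, not taken from the original article.)
declare @origin datetime2 = '2019-01-01 00:00:00'

-- Two-week buckets are measured from @origin instead of the default 1900-01-01.
Select DATE_BUCKET(wk, 2, @date, @origin)
```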
azure-sql-edge Imputing Missing Values https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/imputing-missing-values.md
Last updated 09/22/2020
-# Filling time gaps and imputing missing values
+# Filling time gaps and imputing missing values
When dealing with time series data, it's often possible that the time series data has missing values for the attributes. It's also possible that, because of the nature of the data, or because of interruptions in data collection, there are time *gaps* in the dataset. For example, when collecting energy usage statistics for a smart device, whenever the device isn't operational there will be gaps in the usage statistics. Similarly, in a machine telemetry data collection scenario, it's possible that the different sensors are configured to emit data at different frequencies, resulting in missing values for the sensors. For example, if there are two sensors, voltage and pressure, configured at 100 Hz and 10-Hz frequency respectively, the voltage sensor will emit data every one-hundredth of a second, while the pressure sensor will only emit data every one-tenth of a second.
-The following table describes a machine telemetry dataset, which was collected at a one-second interval.
+The following table describes a machine telemetry dataset, which was collected at a one-second interval.
-```
+```txt
timestamp VoltageReading PressureReading -- - 2020-09-07 06:14:41.000 164.990400 97.223600
timestamp VoltageReading PressureReading
2020-09-07 06:14:52.000 157.019200 NULL 2020-09-07 06:14:54.000 NULL 95.352000 2020-09-07 06:14:56.000 159.183500 100.748200- ```
-There are two important characteristics of the preceding dataset.
+There are two important characteristics of the preceding dataset.
-- The dataset doesn't contain any data points related to several timestamps `2020-09-07 06:14:47.000`, `2020-09-07 06:14:48.000`, `2020-09-07 06:14:50.000`, `2020-09-07 06:14:53.000`, and `2020-09-07 06:14:55.000`. These timestamps are *gaps* in the dataset. -- There are missing values, represented as `null`, for the Voltage and pressure readings.
+- The dataset doesn't contain any data points related to several timestamps `2020-09-07 06:14:47.000`, `2020-09-07 06:14:48.000`, `2020-09-07 06:14:50.000`, `2020-09-07 06:14:53.000`, and `2020-09-07 06:14:55.000`. These timestamps are *gaps* in the dataset.
+- There are missing values, represented as `null`, for the Voltage and pressure readings.
-## Gap filling
+## Gap filling
-Gap filling is a technique that helps create contiguous, ordered set of timestamps to ease the analysis of time series data. In Azure SQL Edge, the easiest way to fill gaps in the time series dataset is to define a temporary table with the desired time distribution and then do a `LEFT OUTER JOIN` or a `RIGHT OUTER JOIN` operation on the dataset table.
+Gap filling is a technique that helps create a contiguous, ordered set of timestamps to ease the analysis of time series data. In Azure SQL Edge, the easiest way to fill gaps in the time series dataset is to define a temporary table with the desired time distribution and then do a `LEFT OUTER JOIN` or a `RIGHT OUTER JOIN` operation on the dataset table.
-Taking the `MachineTelemetry` data represented above as an example, the following query can be used to generate contiguous, ordered set of timestamps for analysis.
+Taking the `MachineTelemetry` data represented above as an example, the following query can be used to generate a contiguous, ordered set of timestamps for analysis.
> [!NOTE]
-> The query below generates the missing rows, with the timestamp values and `null` values for the attributes.
+> The query below generates the missing rows, with the timestamp values and `null` values for the attributes.
```sql Create Table #SeriesGenerate(dt datetime Primary key Clustered)
Insert into #SeriesGenerate values (@startdate)
set @startdate = DATEADD(SECOND, 1, @startdate) END
-Select a.dt as timestamp, b.VoltageReading, b.PressureReading
-From
-#SeriesGenerate a LEFT OUTER JOIN MachineTelemetry b
+Select a.dt as timestamp, b.VoltageReading, b.PressureReading
+From
+#SeriesGenerate a LEFT OUTER JOIN MachineTelemetry b
on a.dt = b.[timestamp] ``` The above query produces the following output containing all *one-second* timestamps in the specified range. Here is the Result Set
-```
-
+```txt
timestamp VoltageReading PressureReading -- -- - 2020-09-07 06:14:41.000 164.990400 97.223600
timestamp VoltageReading PressureReading
## Imputing missing values
-The preceding query generated the missing timestamps for data analysis, however it did not replace any of the missing values (represented as null) for `voltage` and `pressure` readings. In Azure SQL Edge, a new syntax was added to the T-SQL `LAST_VALUE()` and `FIRST_VALUE()` functions, which provide mechanisms to impute missing values, based on the preceding or following values in the dataset.
+The preceding query generated the missing timestamps for data analysis; however, it did not replace any of the missing values (represented as null) for the `voltage` and `pressure` readings. In Azure SQL Edge, new syntax was added to the T-SQL `LAST_VALUE()` and `FIRST_VALUE()` functions, which provides mechanisms to impute missing values, based on the preceding or following values in the dataset.
The new syntax adds the `IGNORE NULLS` and `RESPECT NULLS` clauses to the `LAST_VALUE()` and `FIRST_VALUE()` functions. The following query on the `MachineTelemetry` dataset computes the missing values using the `LAST_VALUE()` function, where missing values are replaced with the last observed value in the dataset. ```sql
-Select
+Select
timestamp, VoltageReading As OriginalVoltageValues,
- LAST_VALUE(VoltageReading) IGNORE NULLS OVER (ORDER BY timestamp) As ImputedUsingLastValue,
+ LAST_VALUE(VoltageReading) IGNORE NULLS OVER (ORDER BY timestamp) As ImputedUsingLastValue,
PressureReading As OriginalPressureValues, LAST_VALUE(PressureReading) IGNORE NULLS OVER (ORDER BY timestamp) As ImputedUsingLastValue
-From
-MachineTelemetry
-order by timestamp
+From
+MachineTelemetry
+order by timestamp
``` Here is the Result Set
-```
-
+```txt
timestamp OrigVoltageVals ImputedVoltage OrigPressureVals ImputedPressure -- - -- -- - 2020-09-07 06:14:41.000 164.990400 164.990400 97.223600 97.223600
timestamp OrigVoltageVals ImputedVoltage OrigPressureVals Impute
2020-09-07 06:14:52.000 157.019200 157.019200 NULL 103.359100 2020-09-07 06:14:54.000 NULL 157.019200 95.352000 95.352000 2020-09-07 06:14:56.000 159.183500 159.183500 100.748200 100.748200- ```
-The following query imputes the missing values using both the `LAST_VALUE()` and the `FIRST_VALUE` function. For, the output column `ImputedVoltage` the missing values are replaced by the last observed value, while for the output column `ImputedPressure` the missing values are replaced by the next observed value in the dataset.
+The following query imputes the missing values using both the `LAST_VALUE()` and the `FIRST_VALUE()` functions. For the output column `ImputedVoltage`, the missing values are replaced by the last observed value, while for the output column `ImputedPressure`, the missing values are replaced by the next observed value in the dataset.
```sql
-Select
- dt as timestamp,
+Select
+ dt as timestamp,
VoltageReading As OrigVoltageVals,
- LAST_VALUE(VoltageReading) IGNORE NULLS OVER (ORDER BY dt) As ImputedVoltage,
+ LAST_VALUE(VoltageReading) IGNORE NULLS OVER (ORDER BY dt) As ImputedVoltage,
PressureReading As OrigPressureVals,
- First_VALUE(PressureReading) IGNORE NULLS OVER (ORDER BY dt ROWS
+ First_VALUE(PressureReading) IGNORE NULLS OVER (ORDER BY dt ROWS
BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) As ImputedPressure
-From
-(Select a.dt, b.VoltageReading,b.PressureReading from
- #SeriesGenerate a
- LEFT OUTER JOIN
- MachineTelemetry b
+From
+(Select a.dt, b.VoltageReading,b.PressureReading from
+ #SeriesGenerate a
+ LEFT OUTER JOIN
+ MachineTelemetry b
on a.dt = b.[timestamp]) A order by timestamp ``` Here is the Result Set
-```
-
+```txt
timestamp OrigVoltageVals ImputedVoltage OrigPressureVals ImputedPressure -- - -- 2020-09-07 06:14:41.000 164.990400 164.990400 97.223600 97.223600
timestamp OrigVoltageVals ImputedVoltage OrigPressureVals Imput
> [!NOTE] > The above query uses the `FIRST_VALUE()` function to replace missing values with the next observed value. The same result can be achieved by using the `LAST_VALUE()` function with an `ORDER BY <ordering_column> DESC` clause.
-## Next steps
+## Next steps
- [FIRST_VALUE (Transact-SQL)](/sql/t-sql/functions/first-value-transact-sql?toc=%2fazure%2fazure-sql-edge%2ftoc.json) - [LAST_VALUE (Transact-SQL)](/sql/t-sql/functions/last-value-transact-sql?toc=%2fazure%2fazure-sql-edge%2ftoc.json)
azure-sql-edge Sys Sp Cleanup Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-sp-cleanup-data-retention.md
Last updated 09/22/2020
**Applies to:** Azure SQL Edge
-Performs cleanup of obsolete records from tables that have data retention policies enabled. For more information, see [Data Retention](data-retention-overview.md).
+Performs cleanup of obsolete records from tables that have data retention policies enabled. For more information, see [Data Retention](data-retention-overview.md).
-## Syntax
+## Syntax
-```sql
-sys.sp_cleanup_data_retention
- { [@schema_name = ] 'schema_name' },
- { [@table_name = ] 'table_name' },
- [ [@rowcount =] rowcount OUTPUT ]
+```syntaxsql
+sys.sp_cleanup_data_retention
+ { [@schema_name = ] 'schema_name' },
+ { [@table_name = ] 'table_name' },
+ [ [@rowcount =] rowcount OUTPUT ]
```
-## Arguments
-`[ @schema_name = ] schema_name`
+## Arguments
+`[ @schema_name = ] schema_name`
Is the name of the owning schema for the table on which cleanup needs to be performed. *schema_name* is a required parameter of type **sysname**.
-
-`[ @table_name = ] 'table_name'`
+
+`[ @table_name = ] 'table_name'`
Is the name of the table on which cleanup operation needs to be performed. *table_name* is a required parameter of type **sysname**.
-## Output parameter
+## Output parameter
-`[ @rowcount = ] rowcount OUTPUT`
+`[ @rowcount = ] rowcount OUTPUT`
rowcount is an optional OUTPUT parameter that represents the number of records cleaned up from the table. *rowcount* is of type **int**.
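Based on the syntax above, a typical call might look like the following sketch (the schema and table names are illustrative):

```sql
declare @rowcount int;

-- Clean up obsolete rows from one table and capture how many were removed.
exec sys.sp_cleanup_data_retention
    @schema_name = 'dbo',
    @table_name = 'data_retention_table',
    @rowcount = @rowcount output;

select @rowcount as cleaned_up_rows;
```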
-
-## Permissions
+
+## Permissions
Requires db_owner permissions. ## Next steps - [Data Retention and Automatic Data Purging](data-retention-overview.md)-- [Manage historical data with retention policy](data-retention-cleanup.md)
+- [Manage historical data with retention policy](data-retention-cleanup.md)
- [Enable and disable data retention](data-retention-enable-disable.md)
azure-sql-edge Tutorial Set Up Iot Edge Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-set-up-iot-edge-modules.md
Title: Set up IoT Edge modules in Azure SQL Edge description: In part two of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll set up IoT Edge modules and connections.
-keywords:
+keywords:
Now, specify the container credentials in the IoT Edge module.
| _Field_ | _Value_ | | - | - | | Name | Registry name |
- | Address | Login server |
- | User Name | Username |
- | Password | Password |
-
+ | Address | Login server |
+ | User Name | Username |
+ | Password | Password |
+ ## Build, push, and deploy the Data Generator Module 1. Clone the [project files](https://github.com/microsoft/sqlsourabh/tree/main/SQLEdgeSamples/IoTEdgeSamples/IronOreSilica) to your machine. 2. Open the file **IronOre_Silica_Predict.sln** using Visual Studio 2019
-3. Update the container registry details in the **deployment.template.json**
+3. Update the container registry details in the **deployment.template.json**
```json "registryCredentials":{ "RegistryName":{
Now, specify the container credentials in the IoT Edge module.
"tag": } ```
-5. Execute the project in either debug or release mode to ensure the project runs without any issues
+5. Execute the project in either debug or release mode to ensure the project runs without any issues
6. Push the project to your container registry by right-clicking the project name and then selecting **Build and Push IoT Edge Modules**.
-7. Deploy the Data Generator module as an IoT Edge module to your Edge device.
+7. Deploy the Data Generator module as an IoT Edge module to your Edge device.
## Deploy the Azure SQL Edge module
-1. Deploy the Azure SQL Edge module by clicking on **+ Add** and then **Marketplace Module**.
+1. Deploy the Azure SQL Edge module by clicking on **+ Add** and then **Marketplace Module**.
-2. On the **IoT Edge Module Marketplace** blade, search for *Azure SQL Edge* and pick *Azure SQL Edge Developer*.
+2. On the **IoT Edge Module Marketplace** blade, search for *Azure SQL Edge* and pick *Azure SQL Edge Developer*.
3. Click on the newly added *Azure SQL Edge* module under **IoT Edge Modules** to configure the Azure SQL Edge module. For more information on the configuration options, see [Deploy Azure SQL Edge](./deploy-portal.md).
Now, specify the container credentials in the IoT Edge module.
7. On the routes pane of the **Set modules on device** page, specify the routes for module to IoT Edge hub communication as described below. Make sure to update the module names in the route definitions below.
- ```
+ ```syntax
FROM /messages/modules/<your_data_generator_module>/outputs/IronOreMeasures INTO BrokeredEndpoint("/modules/<your_azure_sql_edge_module>/inputs/IronOreMeasures") ``` For example:
- ```
+ ```syntax
FROM /messages/modules/ASEDataGenerator/outputs/IronOreMeasures INTO BrokeredEndpoint("/modules/AzureSQLEdge/inputs/IronOreMeasures") ```
Now, specify the container credentials in the IoT Edge module.
4. In the **File** menu tab, open a new notebook or use the keyboard shortcut Ctrl + N.
-5. In the new Query window, execute the script below to create the T-SQL Streaming job. Before executing the script, make sure to change the following variables.
- - *SQL_SA_Password:* The MSSQL_SA_PASSWORD value specified while deploy the Azure SQL Edge Module.
-
+5. In the new Query window, execute the script below to create the T-SQL Streaming job. Before executing the script, make sure to change the following variables.
+ - *SQL_SA_Password:* The MSSQL_SA_PASSWORD value specified while deploying the Azure SQL Edge module.
+ ```sql Use IronOreSilicaPrediction Go Declare @SQL_SA_Password varchar(200) = '<SQL_SA_Password>'
- declare @query varchar(max)
+ declare @query varchar(max)
/* Create Objects Required for Streaming
Now, specify the container credentials in the IoT Edge module.
If NOT Exists (select name from sys.external_file_formats where name = 'JSONFormat') Begin
- CREATE EXTERNAL FILE FORMAT [JSONFormat]
+ CREATE EXTERNAL FILE FORMAT [JSONFormat]
WITH ( FORMAT_TYPE = JSON)
- End
+ End
If NOT Exists (select name from sys.external_data_sources where name = 'EdgeHub') Begin
- Create EXTERNAL DATA SOURCE [EdgeHub]
+ Create EXTERNAL DATA SOURCE [EdgeHub]
With( LOCATION = N'edgehub://' )
- End
+ End
If NOT Exists (select name from sys.external_streams where name = 'IronOreInput') Begin
- CREATE EXTERNAL STREAM IronOreInput WITH
+ CREATE EXTERNAL STREAM IronOreInput WITH
( DATA_SOURCE = EdgeHub, FILE_FORMAT = JSONFormat,
Now, specify the container credentials in the IoT Edge module.
set @query = 'CREATE DATABASE SCOPED CREDENTIAL SQLCredential WITH IDENTITY = ''sa'', SECRET = ''' + @SQL_SA_Password + '''' Execute(@query)
- End
+ End
If NOT Exists (select name from sys.external_data_sources where name = 'LocalSQLOutput') Begin
Now, specify the container credentials in the IoT Edge module.
If NOT Exists (select name from sys.external_streams where name = 'IronOreOutput') Begin
- CREATE EXTERNAL STREAM IronOreOutput WITH
+ CREATE EXTERNAL STREAM IronOreOutput WITH
( DATA_SOURCE = LocalSQLOutput, LOCATION = N'IronOreSilicaPrediction.dbo.IronOreMeasurements'
Now, specify the container credentials in the IoT Edge module.
exec sys.sp_start_streaming_job @name=N'IronOreData' ```
-6. Use the following query to verify that the data from the data generation module is being streamed into the database.
+6. Use the following query to verify that the data from the data generation module is being streamed into the database.
```sql Select Top 10 * from dbo.IronOreMeasurements
Now, specify the container credentials in the IoT Edge module.
```
-In this tutorial, we deployed the data generator module and the SQL Edge module. Then we created a streaming job to stream the data generated by the data generation module to SQL.
+In this tutorial, we deployed the data generator module and the SQL Edge module. Then we created a streaming job to stream the data generated by the data generation module to SQL.
## Next Steps
azure-sql Hyperscale Named Replica Security Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/hyperscale-named-replica-security-configure.md
Previously updated : 3/29/2021 Last updated : 7/27/2021
-# Configure Security to allow isolated access to Azure SQL Database Hyperscale Named Replicas
+# Configure isolated access to a Hyperscale named replica
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This article describes the authentication requirements to configure an Azure SQL Hyperscale [named replica](service-tier-hyperscale-replicas.md) so that a user will be allowed access to specific replicas only. This scenario allows complete isolation of named replica from the primary - as the named replica will be running using its own compute node - and it is useful whenever isolated read only access to an Azure SQL Hyperscale database is needed. Isolated, in this context, means that CPU and memory are not shared between the primary and the named replica, and queries running on the named replica will not use any compute resource of the primary or of any other replica.
+This article describes the procedure to grant access to an Azure SQL Hyperscale [named replica](service-tier-hyperscale-replicas.md) without granting access to the primary replica or other named replicas. This scenario allows resource and security isolation of a named replica - as the named replica will be running using its own compute node - and it is useful whenever isolated read-only access to an Azure SQL Hyperscale database is needed. Isolated, in this context, means that CPU and memory are not shared between the primary and the named replica, queries running on the named replica do not use compute resources of the primary or of any other replicas, and principals accessing the named replica cannot access other replicas, including the primary.
-## Create a new login on the master database
+## Create a login in the master database on the primary server
-In the `master` database on the logical server hosting the primary database, execute the following to create a new login that will be used to manage access to the primary and the named replica:
+In the `master` database on the logical server hosting the *primary* database, execute the following to create a new login. Use your own strong and unique password.
```sql create login [third-party-login] with password = 'Just4STRONG_PAZzW0rd!'; ```
-Now get the SID from the `sys.sql_logins` system view:
+Retrieve the SID hexadecimal value for the created login from the `sys.sql_logins` system view:
```sql
-select [sid] from sys.sql_logins where name = 'third-party-login'
+select sid from sys.sql_logins where name = 'third-party-login';
```
-And as last action disable the login. This will prevent this login from accessing the any database in the server
+Disable the login. This will prevent this login from accessing any database on the server hosting the primary replica.
```sql
-alter login [third-party-login] disable
+alter login [third-party-login] disable;
```
-As an optional step, in case there are concerns about the login getting enabled in any way, you can drop the login from the server via:
+## Create a user in the primary read-write database
+
+Once the login has been created, connect to the primary read-write replica of your database, for example WideWorldImporters (you can find a sample script to restore it here: [Restore Database in Azure SQL](https://github.com/yorek/azure-sql-db-samples/tree/master/samples/01-restore-database)) and create a database user for that login:
```sql
-drop login [third-party-login]
+create user [third-party-user] from login [third-party-login];
```
-## Create database user in the primary replica
-
-Once the login has been created, connect to the primary replica of the database, for example WideWorldImporters (you can find a sample script to restore it here: [Restore Database in Azure SQL](https://github.com/yorek/azure-sql-db-samples/tree/master/samples/01-restore-database)) and create the database user for that login:
+As an optional step, once the database user has been created, you can drop the server login created in the previous step if there are concerns about the login getting re-enabled in any way. Connect to the master database on the logical server hosting the primary database, and execute the following:
```sql
-create user [third-party-user] from login [third-party-login]
+drop login [third-party-login];
```
-## Create a named replica
+## Create a named replica on a different logical server
-Create a new Azure SQL logical server that will be used to isolate access to the database to be shared. Follow the instruction available at [Create and manage servers and single databases in Azure SQL Database](single-database-manage.md) if you need help.
+Create a new Azure SQL logical server that will be used to isolate access to the named replica. Follow the instructions available at [Create and manage servers and single databases in Azure SQL Database](single-database-manage.md). To create a named replica, this server must be in the same Azure region as the server hosting the primary replica.
Using, for example, AZ CLI: ```azurecli
-az sql server create -g MyResourceGroup -n MyPrimaryServer -l MyLocation --admin-user MyAdminUser --admin-password MyStrongADM1NPassw0rd!
+az sql server create -g MyResourceGroup -n MyNamedReplicaServer -l MyLocation --admin-user MyAdminUser --admin-password MyStrongADM1NPassw0rd!
```
-Make sure the region you choose is the same where the primary server also is. Then create a named replica, for example with AZ CLI:
+Then, create a named replica for the primary database on this server. For example, using AZ CLI:
```azurecli
-az sql db replica create -g MyResourceGroup -n WideWorldImporters -s MyPrimaryServer --secondary-type Named --partner-database WideWorldImporters_NR --partner-server MySecondaryServer
+az sql db replica create -g MyResourceGroup -n WideWorldImporters -s MyPrimaryServer --secondary-type Named --partner-database WideWorldImporters_NR --partner-server MyNamedReplicaServer
```
-## Create login in the named replica
+## Create a login in the master database on the named replica server
-Connect to the `master` database on the logical server hosting the named replica. Add the login using the SID retrieved from the primary replica:
+Connect to the `master` database on the logical server hosting the named replica, created in the previous step. Add the login using the SID retrieved from the primary replica:
```sql create login [third-party-login] with password = 'Just4STRONG_PAZzW0rd!', sid = 0x0...1234; ```
-Done. Now the `third-party-login` can connect to the named replica database, but will be denied connecting to the primary replica.
+At this point, users and applications using `third-party-login` can connect to the named replica, but not to the primary replica.
-## Test access
+## Grant object-level permissions within the database
-You can try the security configuration by using any client tool to connect to the primary and the named replica. For example using `sqlcmd`, you can try to connect to the primary replica using the `third-party-login` user:
+Once you have set up login authentication as described, you can use regular `GRANT`, `DENY` and `REVOKE` statements to manage authorization, or object-level permissions within the database. In these statements, reference the name of the user you created in the database, or a database role that includes this user as a member. Remember to execute these commands on the primary replica. The changes will propagate to all secondary replicas, however they will only be effective on the named replica where the server-level login was created.
-```
-sqlcmd -S MyPrimaryServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters
+Remember that by default a newly created user has a minimal set of permissions granted (for example, it cannot access any user tables). If you want to allow `third-party-user` to read data in a table, you need to explicitly grant the `SELECT` permission:
+
+```sql
+grant select on [Application].[Cities] to [third-party-user];
```
-this will result in an error as the user is not allowed to connect to the server:
+As an alternative to granting permissions individually on every table, you can add the user to the `db_datareader` [database role](/sql/relational-databases/security/authentication-access/database-level-roles) to allow read access to all tables, or you can use [schemas](/sql/relational-databases/security/authentication-access/create-a-database-schema) to [allow access](/sql/t-sql/statements/grant-schema-permissions-transact-sql) to all existing and new tables in a schema.
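For example, a minimal statement (run on the primary replica, like the `GRANT` above) to add the user to that role would be:

```sql
alter role db_datareader add member [third-party-user];
```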
-```
-Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user 'third-party-login'. Reason: The account is disabled..
-```
+## Test access
-the same user can connect to the named replica instead:
+You can test this configuration by using any client tool and attempt to connect to the primary and the named replica. For example, using `sqlcmd`, you can try to connect to the primary replica using the `third-party-login` user:
```
-sqlcmd -S MySecondaryServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters_NR
+sqlcmd -S MyPrimaryServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters
```
-and connection will succeed without errors.
--
-## Next steps
+This will result in an error as the user is not allowed to connect to the server:
-Once you have setup security in this way, you can use the regular `grant`, `deny` and `revoke` commands to manage access to resources. Remember to use these commands on the primary replica: their effect will be applied also to all named replicas, allowing you to decide who can access what, as it would happen normally.
+```
+Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user 'third-party-login'. Reason: The account is disabled.
+```
-Remember that by default a newly created user has a very minimal set of permissions granted (for example they cannot access any user table), so if you want to allow `third-party-user` to access a table, you need to explicitly grant this permission:
+The attempt to connect to the named replica succeeds:
-```sql
-grant select on [Application].[Cities] to [third-party-user]
+```
+sqlcmd -S MyNamedReplicaServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters_NR
```
-Or you can add the user to the `db_datareaders` [database role](/sql/relational-databases/security/authentication-access/database-level-roles) to allow access to all tables, or you can use [schemas](/sql/relational-databases/security/authentication-access/create-a-database-schema) to [allow access](/sql/t-sql/statements/grant-schema-permissions-transact-sql) to all tables in a schema.
+No errors are returned, and queries can be executed on the named replica as allowed by granted object-level permissions.
For more information:
azure-sql Service Tier Hyperscale Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale-replicas.md
Previously updated : 6/9/2021 Last updated : 7/27/2021 # Hyperscale secondary replicas [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-As described in [Distributed functions architecture](service-tier-hyperscale.md), Azure SQL Database Hyperscale has two different types of compute nodes, also referred to as "replicas".
+As described in [Distributed functions architecture](service-tier-hyperscale.md), Azure SQL Database Hyperscale has two different types of compute nodes, also referred to as replicas:
+ - Primary: serves read and write operations-- Secondary: provides read scale-out, high availability and geo-replication
+- Secondary: provides read scale-out, high availability, and geo-replication
-A secondary replica can be of three different types:
+Secondary replicas are always read-only, and can be of three different types:
- High Availability replica - Named replica (in Preview)
Each type has a different architecture, feature set, purpose, and cost. Based on
## High Availability replica
-A High Availability (HA) replica uses the same page servers as the primary replica, so no data copy is required to add an HA replica. HA replicas are mainly used to provide High Availability as they act as a hot standby for failover purposes. If the primary replica becomes unavailable, failover to one of the existing HA replicas is automatic. Connection string doesn't need to change; during failover applications may experience minimum downtime due to active connections being dropped. As usual for this scenario, proper connection retry logic is recommended. Several drivers already provide some degree of automatic retry logic.
+A High Availability (HA) replica uses the same page servers as the primary replica, so no data copy is required to add an HA replica. HA replicas are mainly used to increase database availability; they act as hot standbys for failover purposes. If the primary replica becomes unavailable, failover to one of the existing HA replicas is automatic and quick. Connection string doesn't need to change; during failover applications may experience minimal downtime due to active connections being dropped. As usual for this scenario, proper retry logic is recommended. Several drivers already provide some degree of automatic retry logic. If you are using .NET, the [latest Microsoft.Data.SqlClient](https://devblogs.microsoft.com/azure-sql/configurable-retry-logic-for-microsoft-data-sqlclient/) library provides native full support for configurable automatic retry logic.
-If you are using .NET, the [latest Microsoft.Data.SqlClient](https://devblogs.microsoft.com/azure-sql/configurable-retry-logic-for-microsoft-data-sqlclient/) library provides native full support to configurable automatic retry logic.
-HA replicas use the same server and database name of the primary replica. Their Service Level Objective is also always the same as for the primary replica. HA replicas are not manageable as a stand-alone resource from the portal or from any other tool or DMV.
+HA replicas use the same server and database name as the primary replica. Their Service Level Objective is also always the same as for the primary replica. HA replicas are not visible or manageable as a stand-alone resource from the portal or from any API.
-There can be zero to four HA replicas. Their number can be changed during the creation of a database or after the database has been created, via the usual management endpoint and tools (for example: PowerShell, AZ CLI, Portal, REST API). Creating or removing HA replicas does not affect connections running on the primary replica.
+There can be zero to four HA replicas. Their number can be changed during the creation of a database or after the database has been created, via the common management endpoints and tools (for example: PowerShell, AZ CLI, Portal, REST API). Creating or removing HA replicas does not affect active connections on the primary replica.
### Connecting to an HA replica
-In Hyperscale databases, the ApplicationIntent argument in the connection string used by the client dictates whether the connection is routed to the read-write primary replica or to a read-only HA replica. If the ApplicationIntent set to `ReadOnly` and the database doesn't have a secondary replica, connection will be routed to the primary replica and will default to the `ReadWrite` behavior.
+In Hyperscale databases, the `ApplicationIntent` argument in the connection string used by the client dictates whether the connection is routed to the read-write primary replica or to a read-only HA replica. If `ApplicationIntent` is set to `ReadOnly` and the database doesn't have a secondary replica, connection will be routed to the primary replica and will default to the `ReadWrite` behavior.
```csharp -- Connection string with application intent Server=tcp:<myserver>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True; ```
-Given that for a given Hyperscale database all HA replicas are identical in their resource capacity, if more than one secondary replica is present, the read-intent workload is distributed across all available HA secondaries. When there are multiple HA replicas, keep in mind that each one could have different data latency with respect to data changes made on the primary. Each HA replica uses the same data as the primary on the same set of page servers. Local caches on each HA replica reflect the changes made on the primary via the transaction log service, which forwards log records from the primary replica to HA replicas. As a result, depending on the workload being processed by an HA replica, application of log records may happen at different speeds and thus different replicas could have different data latency relative to the primary replica.
+All HA replicas are identical in their resource capacity. If more than one HA replica is present, the read-intent workload is distributed arbitrarily across all available HA replicas. When there are multiple HA replicas, keep in mind that each one could have different data latency with respect to data changes made on the primary. Each HA replica uses the same data as the primary on the same set of page servers. However, local data caches on each HA replica reflect the changes made on the primary via the transaction log service, which forwards log records from the primary replica to HA replicas. As a result, depending on the workload being processed by an HA replica, application of log records may happen at different speeds, and thus different replicas could have different data latency relative to the primary replica.
## Named replica (in Preview) A named replica, just like an HA replica, uses the same page servers as the primary replica. Similar to HA replicas, there is no data copy needed to add a named replica.
-> [!NOTE]
-> For frequently asked questions on Hyperscale named replicas, see [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml).
- The difference from HA replicas is that named replicas: -- appear as regular (read-only) Azure SQL databases in the portal and in API (CLI, PowerShell, T-SQL) calls -- can have database name different from the primary replica, and optionally be located on a different logical server (as long as it is in the same region as the primary replica) -- have their own Service Level Objective that can be set and changed independently from the primary replica-- support for up to 30 named replicas (for each primary replica) -- support different authentication and authorization for each named replica by creating different logins on logical servers hosting named replicas
+- appear as regular (read-only) Azure SQL databases in the portal and in API (AZ CLI, PowerShell, T-SQL) calls;
+- can have database name different from the primary replica, and optionally be located on a different logical server (as long as it is in the same region as the primary replica);
+- have their own Service Level Objective that can be set and changed independently from the primary replica;
+- can number up to 30 for each primary replica;
+- support different authentication for each named replica by creating different logins on logical servers hosting named replicas.
-The main goal of named replicas is to allow massive OLTP read scale-out scenario and to improve Hybrid Transactional and Analytical Processing (HTAP) workloads. Examples of how to create such solutions are available here:
+The main goal of named replicas is to enable massive OLTP read scale-out scenario, and to improve Hybrid Transactional and Analytical Processing (HTAP) workloads. Examples of how to create such solutions are available here:
- [OLTP scale-out sample](https://github.com/Azure-Samples/azure-sql-db-named-replica-oltp-scaleout) - [HTAP scale-out sample](https://github.com/Azure-Samples/azure-sql-db-named-replica-htap) Aside from the main scenarios listed above, named replicas offer flexibility and elasticity to also satisfy many other use cases:-- [Access Isolation](hyperscale-named-replica-security-configure.md): grant a login access to a named replica only and deny it from accessing the primary replica or other named replicas.-- Workload-Dependent Service Objective: as a named replica can have its own service level objective, it is possible to use different named replicas for different workloads and use cases. For example, one named replica could be used to serve Power BI requests, while another can be used to serve data to Apache Spark for Data Science tasks. Each one can have an independent service level objective and scale independently.-- Workload-Dependent Routing: with up to 30 named replicas, it is possible to use named replicas in groups so that an application can be isolated from another. For example, a group of four named replicas could be used to serve requests coming from mobile applications, while another group two named replicas can be used to serve requests coming from a web application. This approach would allow a fine-grained tuning of performance and costs for each group.
+- [Access Isolation](hyperscale-named-replica-security-configure.md): you can grant access to a specific named replica, but not the primary replica or other named replicas.
+- Workload-dependent service level objective: as a named replica can have its own service level objective, it is possible to use different named replicas for different workloads and use cases. For example, one named replica could be used to serve Power BI requests, while another can be used to serve data to Apache Spark for Data Science tasks. Each one can have an independent service level objective and scale independently.
+- Workload-dependent routing: with up to 30 named replicas, it is possible to use named replicas in groups so that an application can be isolated from another. For example, a group of four named replicas could be used to serve requests coming from mobile applications, while another group of two named replicas can be used to serve requests coming from a web application. This approach would allow a fine-grained tuning of performance and costs for each group.
-The following example creates named replica `WideWorldImporters_NR` for database `WideWorldImporters` with service level objective HS_Gen5_4. Both use the same logical server `MyServer`. If you prefer to use REST API directly, this option is also possible: [Databases - Create A Database As Named Replica Secondary](/rest/api/sql/2020-11-01-preview/databases/createorupdate#creates-a-database-as-named-replica-secondary).
+The following example creates a named replica `WideWorldImporters_NR` for database `WideWorldImporters`. The primary replica uses service level objective HS_Gen5_4, while the named replica uses HS_Gen5_2. Both use the same logical server `MyServer`. If you prefer to use REST API directly, this option is also possible: [Databases - Create A Database As Named Replica Secondary](/rest/api/sql/2020-11-01-preview/databases/createorupdate#creates-a-database-as-named-replica-secondary).
# [T-SQL](#tab/tsql) ```sql
WITH (SERVICE_OBJECTIVE = 'HS_Gen5_2', SECONDARY_TYPE = Named, DATABASE_NAME = [
``` # [PowerShell](#tab/azure-powershell) ```azurepowershell
-New-AzSqlDatabaseSecondary -ResourceGroupName "MyResourceGroup" -ServerName "MyServer" -DatabaseName "WideWorldImporters" -PartnerResourceGroupName "MyResourceGroup" -PartnerServerName "MyServer" -PartnerDatabaseName "WideWorldImporters_NR_" -SecondaryServiceObjectiveName HS_Gen5_2
+New-AzSqlDatabaseSecondary -ResourceGroupName "MyResourceGroup" -ServerName "MyServer" -DatabaseName "WideWorldImporters" -PartnerResourceGroupName "MyResourceGroup" -PartnerServerName "MyServer" -PartnerDatabaseName "WideWorldImporters_NR" -SecondaryServiceObjectiveName HS_Gen5_2
``` # [Azure CLI](#tab/azure-cli) ```azurecli
az sql db replica create -g MyResourceGroup -n WideWorldImporters -s MyServer --
-As there is no data movement involved, in most cases a named replica will be created in about a minute. Once the named replica is available, it will be visible from the portal or any command-line tool like AZ CLI or PowerShell. A named replica is usable as a regular database, with the exception that it is read-only.
+As there is no data movement involved, in most cases a named replica will be created in about a minute. Once the named replica is available, it will be visible from the portal or any command-line tool like AZ CLI or PowerShell. A named replica is usable as a regular read-only database.
+
+> [!NOTE]
+> For frequently asked questions on Hyperscale named replicas, see [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml).
### Connecting to a named replica
-To connect to a named replica, you must use the connection string for that named replica. There is no need to specify the option "ApplicationIntent" as named replicas are always read-only. Using it is still possible but will not have any other effect.
-Just like for HA replicas, even though the primary, HA, and named replicas share the same data on the same set of page servers, caches on each named replica are kept in sync with the primary via the transaction log service, which forwards log records from the primary to named replicas. As a result, depending on the workload being processed by a named replica, application of the log records may happen at different speeds and thus different replicas could have different data latency relative to the primary replica.
+To connect to a named replica, you must use the connection string for that named replica, referencing its server and database names. There is no need to specify the option "ApplicationIntent=ReadOnly" as named replicas are always read-only.
+
+Just like for HA replicas, even though the primary, HA, and named replicas share the same data on the same set of page servers, data caches on each named replica are kept in sync with the primary via the transaction log service, which forwards log records from the primary to named replicas. As a result, depending on the workload being processed by a named replica, application of the log records may happen at different speeds, and thus different replicas could have different data latency relative to the primary replica.
### Modifying a named replica
-You can define the service level objective of a named replica when you create it, via the `ALTER DATABASE` command or in any other supported ways (AZ CLI, PowerShell, REST API). If you need to change the service level objective after the named replica has been created, you can do it using the regular `ALTER DATABASE…MODIFY` command on the named replica itself. For example, if `WideWorldImporters_NR` is the named replica of `WideWorldImporters` database, you can do it as shown below.
+You can define the service level objective of a named replica when you create it, via the `ALTER DATABASE` command or in any other supported way (AZ CLI, PowerShell, REST API). If you need to change the service level objective after the named replica has been created, you can do it using the `ALTER DATABASE ... MODIFY` command on the named replica itself. For example, if `WideWorldImporters_NR` is the named replica of `WideWorldImporters` database, you can do it as shown below.
# [T-SQL](#tab/tsql) ```sql
az sql db update -g MyResourceGroup -s MyServer -n WideWorldImporters_NR --servi
### Removing a named replica
-To remove a named replica, you drop it just like you would do with a regular database. Make sure you are connected to the `master` database of the server with the named replica you want to drop, and then use the following command:
+To remove a named replica, you drop it just like you would a regular database. Make sure you are connected to the `master` database of the server with the named replica you want to drop, and then use the following command:
# [T-SQL](#tab/tsql) ```sql
az sql db delete -g MyResourceGroup -s MyServer -n WideWorldImporters_NR
```
-> [!NOTE]
-> Named replicas will also be removed when the primary replica from which they have been created is deleted.
+> [!IMPORTANT]
+> Named replicas will be automatically removed when the primary replica from which they have been created is deleted.
### Known issues #### Partially incorrect data returned from sys.databases
-During Public Preview, row values returned from `sys.databases`, for named replicas, in columns other than `name` and `database_id`, may be inconsistent and incorrect. For example, the `compatibility_level` column for a named replica could be reported as 140 even if the primary database from which the named replica has been created is set to 150. A workaround, when possible, is to get the same data using the system function `databasepropertyex`, that will return the correct data instead.
-
+During Public Preview, row values returned from `sys.databases`, for named replicas, in columns other than `name` and `database_id`, may be inconsistent and incorrect. For example, the `compatibility_level` column for a named replica could be reported as 140 even if the primary database from which the named replica has been created is set to 150. A workaround, when possible, is to get the same data using the `DATABASEPROPERTYEX()` function, which will return correct data.
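As a sketch of that workaround pattern, standard `DATABASEPROPERTYEX()` properties such as `Updateability` can be queried directly on the named replica:

```sql
-- Run on the named replica; a named replica reports READ_ONLY.
select DATABASEPROPERTYEX(DB_NAME(), 'Updateability') as updateability;
```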
## Geo-replica (in Preview)
-With [active geo-replication](active-geo-replication-overview.md), you can create a readable secondary replica of the primary Hyperscale database in the same or in a different region. Geo-replicas must be created on a different logical server. The database name of a geo-replica always matches the database name of the primary.
+With [active geo-replication](active-geo-replication-overview.md), you can create a readable secondary replica of the primary Hyperscale database in the same or in a different Azure region. Geo-replicas must be created on a different logical server. The database name of a geo-replica always matches the database name of the primary.
When creating a geo-replica, all data is copied from the primary to a different set of page servers. A geo-replica does not share page servers with the primary, even if they are in the same region. This architecture provides the necessary redundancy for geo-failovers.
-Geo-replicas are primarily used to maintain a transactionally consistent copy of the database via asynchronous replication in a different geographical region for disaster recovery in case of a disaster or outage in the primary region. Geo-replicas can also be used for geographic read scale-out scenarios.
+Geo-replicas are used to maintain a transactionally consistent copy of the database via asynchronous replication. If a geo-replica is in a different Azure region, it can be used for disaster recovery in case of a disaster or outage in the primary region. Geo-replicas can also be used for geographic read scale-out scenarios.
-With [active geo-replication on Hyperscale](active-geo-replication-overview.md), failover must be initiated manually. After failover, the new primary will have a different connection end point, referencing the logical server name hosting the new primary replica. For more information, see [active geo-replication](active-geo-replication-overview.md).
+In Hyperscale, a geo-failover must be initiated manually. After failover, the new primary will have a different connection end point, referencing the logical server name hosting the new primary replica. For more information, see [active geo-replication](active-geo-replication-overview.md).
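For illustration only, a minimal Azure CLI sketch of creating a geo-replica on another logical server and later promoting it might look like the following; the resource group, server, and database names are placeholders:

```azurecli
# Create a geo-secondary of the Hyperscale database on a different logical server.
az sql db replica create --resource-group MyResourceGroup --server MyPrimaryServer \
    --name WideWorldImporters --partner-resource-group MyDrResourceGroup \
    --partner-server MyDrServer --secondary-type Geo

# Initiate a manual geo-failover by promoting the geo-secondary to primary.
az sql db replica set-primary --resource-group MyDrResourceGroup \
    --server MyDrServer --name WideWorldImporters
```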
Geo-replication for Hyperscale databases is currently in preview, with the following limitations:
- Only one geo-replica can be created (in the same or different region).
- Failover groups are not supported.
- Planned failover is not supported.
-- Point in time restore of the geo-replica is not supported
+- Point in time restore of the geo-replica is not supported.
- Creating a database copy of the geo-replica is not supported.
- Secondary of a secondary (also known as "geo-replica chaining") is not supported.
azure-video-analyzer Create Pipeline Vs Code Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/create-pipeline-vs-code-extension.md
After completing the setup steps, you'll be able to run the simulated live video
* An Azure account that includes an active subscription. [Create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) for free if you don't already have one. * [Visual Studio Code](https://code.visualstudio.com/), with the following extensions:
- * [Video Analyzer](https://go.microsoft.com/fwlink/?linkid=2163332)
+ * [Video Analyzer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.azure-video-analyzer)
* If you didn't complete the [Get started - Azure Video Analyzer](./get-started-detect-motion-emit-events.md) quickstart, be sure to [set up Azure resources](#set-up-azure-resources).
You should now see an entry in the `Pipeline topologies` list on the left labele
1. On the left under `Pipeline topologies`, right click on `MotionDetection` and select `Create live pipeline`. 1. For `Live pipeline name`, put in `mdpipeline1`. 1. In the `Parameters` section:
- For "rtspPassword" put in "testuser".
+ - For "rtspPassword" put in "testpassword".
- For "rtspUrl" put in "rtsp://rtspsim:554/media/camera-300s.mkv".
- For "rtspUserName" put in "testpassword".
+ - For "rtspUserName" put in "testuser".
1. In the top right, click "Save and activate". This gets a starting topology deployed and a live pipeline up and running on your edge device. If you have the Azure IoT Hub extension installed from the Get Started quickstart, you can monitor the built-in event endpoint in the Azure IoT Hub Visual Studio Code extension, as shown in the [Observe Results](./get-started-detect-motion-emit-events.md#observe-results) section.
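If you prefer the command line to the VS Code extension, the same built-in event endpoint can also be watched with the Azure CLI; the hub name below is a placeholder:

```azurecli
# Requires the azure-iot extension; streams events arriving at the hub's built-in event endpoint.
az extension add --name azure-iot
az iot hub monitor-events --hub-name my-iot-hub
```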
azure-video-analyzer Record Event Based Live Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-event-based-live-video.md
Alternatively, you can trigger recording only when an inferencing service detect
The diagram is a pictorial representation of a [pipeline](pipeline.md) and additional modules that accomplish the desired scenario. Four IoT Edge modules are involved: * Video Analyzer on an IoT Edge module.
-* An edge module running an AI model behind an HTTP endpoint. This AI module uses the [YOLOv3](https://github.com/Azure/live-video-analytics/tree/master/utilities/video-analysis/yolov3-onnx) model, which can detect many types of objects.
+* An edge module running an AI model behind an HTTP endpoint. This AI module uses the [YOLOv3](https://github.com/Azure/video-analyzer/tree/main/edge-modules/extensions/yolo/yolov3) model, which can detect many types of objects.
* A custom module to count and filter objects, which is referred to as an Object Counter in the diagram. You'll build an Object Counter and deploy it in this tutorial.
-* An [RTSP simulator module](https://github.com/Azure/live-video-analytics/tree/master/utilities/rtspsim-live555) to simulate an RTSP camera.
+* An [RTSP simulator module](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) to simulate an RTSP camera.
As the diagram shows, you'll use an [RTSP source](pipeline.md#rtsp-source) node in the pipeline to capture the simulated live video of traffic on a highway and send that video to two paths:
In about 30 seconds, refresh Azure IoT Hub in the lower-left section in Visual S
1. Next, under the **livePipelineSet** and **pipelineTopologyDelete** nodes, ensure that the value of **topologyName** matches the value of the **name** property in the above pipeline topology:
- `"pipelineTopologyName" : "EVRtoVideosOnObjDetect"`
+ `"pipelineTopologyName" : "EVRtoVideoSinkOnObjDetect"`
1. Open the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-video-sink/topology.json) in a browser, and look at videoName - it is hard-coded to `sample-evr-video`. This is acceptable for a tutorial. In production, you would take care to ensure that each unique RTSP camera is recorded to a video resource with a unique name. 1. Start a debugging session by selecting F5. You'll see some messages printed in the **TERMINAL** window. 1. The operations.json file starts off with calls to pipelineTopologyList and livePipelineList. If you've cleaned up resources after previous quickstarts or tutorials, this action returns empty lists and then pauses for you to select **Enter**, as shown:
In about 30 seconds, refresh Azure IoT Hub in the lower-left section in Visual S
"@apiVersion": "1.0", "name": "Sample-Pipeline-1", "properties": {
- "topologyName": "EVRtoVideosOnObjDetect",
+ "topologyName": "EVRtoVideoSinkOnObjDetect",
"description": "Sample topology description", "parameters": [ {
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video.md
Title: Record and stream inference metadata with video - Azure Video Analyzer
description: In this tutorial, you'll learn how to use Azure Video Analyzer to record video and inference metadata to the cloud and play back the recording with the visual inference metadata. Previously updated : 05/12/2021 Last updated : 06/01/2021 # Tutorial: Record and stream inference metadata with video
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage
description: Learn about storage capacity, storage policies, fault tolerance, and storage integration in Azure VMware Solution private clouds. Previously updated : 04/26/2021 Last updated : 07/28/2021 # Azure VMware Solution storage concepts
Microsoft provides alerts when capacity consumption exceeds 75%. You can monito
Now that you've covered Azure VMware Solution storage concepts, you may want to learn about: -- [Scale clusters in the private cloud][tutorial-scale-private-cloud]-- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md)-- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md)
+- [Attach disk pools to Azure VMware Solution hosts (Preview)](attach-disk-pools-to-azure-vmware-solution-hosts.md) - You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance.
+- [Scale clusters in the private cloud][tutorial-scale-private-cloud] - You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case-by-case basis.
+- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes.
+- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
<!-- LINKS - external-->
backup Backup Azure Sql Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sql-mabs.md
Title: Back up SQL Server by using Azure Backup Server description: In this article, learn the configuration to back up SQL Server databases by using Microsoft Azure Backup Server (MABS). Previously updated : 03/24/2017 Last updated : 07/28/2021 # Back up SQL Server to Azure by using Azure Backup Server
-This article helps you set up backups of SQL Server databases by using Microsoft Azure Backup Server (MABS).
+Microsoft Azure Backup Server (MABS) provides backup and recovery for SQL Server databases. In addition to backing up SQL Server databases, you can run a system backup or full bare-metal backup of the SQL Server computer. Here's what MABS can protect:
+
+- A standalone SQL Server instance
+- A SQL Server Failover Cluster Instance (FCI)
+
+>[!Note]
+>MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV).
+>
+>Protection of SQL Server FCI with Storage Spaces Direct on Azure, and of SQL Server FCI with Azure shared disks, is supported with this feature. The DPM server must be deployed in an Azure virtual machine to protect a SQL Server FCI instance deployed on Azure VMs.
+>
+>A SQL Server AlwaysOn availability group with these preferences:
+>- Prefer Secondary
+>- Secondary only
+>- Primary
+>- Any Replica
To back up a SQL Server database and recover it from Azure:
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-whats-new-mabs.md
For information about the UR2 issues fixes and the installation instructions, se
### Support for Azure Stack HCI
-With MABS v3 UR2, you can backup Virtual Machines on Azure Stack HCI. [Learn more](/azure-stack/hci).
+With MABS v3 UR2, you can back up virtual machines on Azure Stack HCI. [Learn more](/azure/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines).
### Support for VMware 7.0
-With MABS v3 UR2, you can back up VMware 7.0 VMs. [Learn more](/azure/backup/backup-support-matrix-mabs-dpm).
+With MABS v3 UR2, you can back up VMware 7.0 VMs. [Learn more](/azure/backup/backup-azure-backup-server-vmware).
### Support for SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV)
MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Sh
### Optimized Volume Migration
-MABS v3 UR2 supports optimized volume migration. The optimized volume migration allows you to move data sources to the new volume much faster. The enhanced migration process migrates only the active backup copy (Active Replica) to the new volume. All new recovery points are created on the new volume, while existing recovery points are maintained on the existing volume and are purged based on the retention policy. [Learn more](https://support.microsoft.com/topic/microsoft-azure-backup-server-v3-feb4523f-8da7-da61-2f47-eaa9fca9a3de).
+MABS v3 UR2 supports optimized volume migration. The optimized volume migration allows you to move data sources to the new volume much faster. The enhanced migration process migrates only the active backup copy (Active Replica) to the new volume. All new recovery points are created on the new volume, while existing recovery points are maintained on the existing volume and are purged based on the retention policy. [Learn more](/system-center/dpm/volume-to-volume-migration?view=sc-dpm-2019&preserve-view=true).
### Offline Backup using Azure Data Box
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-overview.md
The Azure Backup service provides simple, secure, and cost-effective solutions t
- **SQL Server in Azure VMs** - [Back up SQL Server databases running on Azure VMs](backup-azure-sql-database.md) - **SAP HANA databases in Azure VMs** - [Backup SAP HANA databases running on Azure VMs](backup-azure-sap-hana-database.md) - **Azure Database for PostgreSQL servers (preview)** - [Back up Azure PostgreSQL databases and retain the backups for up to 10 years](backup-azure-database-postgresql.md)-- **Azure Blobs (preview)** - [Overview of operational backup for Azure Blobs (in preview)](blob-backup-overview.md)
+- **Azure Blobs** - [Overview of operational backup for Azure Blobs](blob-backup-overview.md)
![Azure Backup Overview](./media/backup-overview/azure-backup-overview.png)
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
Before proceeding to configure protection, we strongly recommend you ensure the
>[!IMPORTANT] > Before proceeding to configure protection, you must have **successfully** completed the following steps: >
->1. Created your Backup vault
+>1. Created your Recovery Services vault
>1. Enabled the Recovery Services vault's system-assigned managed identity or assigned a user-assigned managed identity to the vault
->1. Assigned permissions to your Backup Vault (or the user-assigned managed identity) to access encryption keys from your Key Vault
+>1. Assigned permissions to your Recovery Services vault (or the user-assigned managed identity) to access encryption keys from your Key Vault
>1. Enabled soft delete and purge protection for your Key Vault
->1. Assigned a valid encryption key for your Backup vault
+>1. Assigned a valid encryption key for your Recovery Services vault
> >If all the above steps have been confirmed, only then proceed with configuring backup.
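As a minimal Azure CLI sketch of steps 3 and 4 above, assuming the vault's managed identity is already enabled and you have its principal ID (the vault names and ID below are placeholders):

```azurecli
# Step 3: let the vault's managed identity read and wrap/unwrap keys in the Key Vault.
az keyvault set-policy --name MyKeyVault \
    --object-id <vault-managed-identity-principal-id> \
    --key-permissions get list wrapKey unwrapKey

# Step 4: enable purge protection (soft delete is on by default for newly created key vaults).
az keyvault update --name MyKeyVault --enable-purge-protection true
```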
backup Offline Backup Azure Data Box Dpm Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/offline-backup-azure-data-box-dpm-mabs.md
Title: Offline Backup with Azure Data Box for DPM and MABS description: You can use Azure Data Box to seed initial Backup data offline from DPM and MABS. Previously updated : 08/12/2020 Last updated : 07/28/2021
-# Offline seeding using Azure Data Box for DPM and MABS (Preview)
+# Offline seeding using Azure Data Box for DPM and MABS
> [!NOTE]
-> This feature is applicable for Data Protection Manager (DPM) 2019 UR2 and later.<br><br>
-> This feature is currently in preview for Microsoft Azure Backup Server (MABS). If you're interested in using Azure Data Box for offline seeding with MABS, reach out to us at [systemcenterfeedback@microsoft.com](mailto:systemcenterfeedback@microsoft.com).
+> This feature is applicable for Data Protection Manager (DPM) 2019 UR2 (and later) and MABS v3 UR2 (and later).
This article explains how you can use Azure Data Box to seed initial backup data offline from DPM and MABS to an Azure Recovery Services vault.
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-overview.md
Last updated 07/12/2021 + # What is Azure Bastion?
For frequently asked questions, see the Bastion [FAQ](bastion-faq.md).
* [Tutorial: Create an Azure Bastion host and connect to a Windows VM](tutorial-create-host-portal.md). * [Learn module: Introduction to Azure Bastion](/learn/modules/intro-to-azure-bastion/).
-* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
The following steps assume you've prepared the voice talent verbal consent files
1. Navigate to **Text-to-Speech** > **Custom Voice** > **select a project** > **Set up voice talent**.
-2. Click **Add voice talent**.
+2. Select **Add voice talent**.
-3. Next, to define voice characteristics, click **Target scenario** to be used. Then describe your **Voice characteristics**.
+3. Next, to define voice characteristics, select **Target scenario** to be used. Then describe your **Voice characteristics**.
> [!NOTE] > The scenarios you provide must be consistent with what you've applied for in the application form.
The following steps assume you've prepared the voice talent verbal consent files
> [!NOTE] > Make sure the verbal statement is recorded in the same settings as your training data, including the recording environment and speaking style.
-5. Finally, go to **Review and submit**, you can review the settings and click **Submit**.
+5. Finally, go to **Review and create**, where you can review the settings and select **Submit**.
-## Upload your datasets
+## Upload your data
-When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A training set is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. Data readiness checking will be done per each training set. You can import multiple datasets to a training set.
+When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A training set is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. Data readiness checking will be done for each training set. You can import multiple data files to a training set.
You can do the following to create and review your training data.
-1. On the **Prepare training data** tab, click **Add training set** to enter **Name** and **Description** > **Create** to add a new training set.
+1. On the **Prepare training data** tab, select **Add training set** to enter **Name** and **Description** > **Create** to add a new training set.
When the training set is successfully created, you can start to upload your data.
-2. To upload data, click **Upload data** > **Choose data type** > **Upload data** and **Specify the target training set** > Enter **Name** and **Description** for your dataset > review the settings and click **Upload**.
+2. To upload data, select **Upload data** > **Choose data type** > **Upload data** and **Specify the target training set** > Enter **Name** and **Description** for your data > review the settings and select **Submit**.
> [!NOTE]
->- Duplicate audio names will be removed from the training. Make sure the datasets you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicate, they'll be rejected.
->- If you've created datasets in the previous version of Speech Studio, you must specify a training set for your datasets in advance to use them. Or else, an exclamation mark will be appended to the dataset name, and the dataset could not be used.
+>- Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicated, they'll be rejected.
+>- If you've created data files in the previous version of Speech Studio, you must specify a training set for your data in advance to use them. Otherwise, an exclamation mark will be appended to the data name, and the data can't be used.
-Each dataset you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Custom Neural Voice service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md) and make sure your data has been rightly formatted.
+Each data file you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Custom Neural Voice service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md) and make sure your data has been correctly formatted.
> [!NOTE]
-> - Standard subscription (S0) users can upload five datasets simultaneously. If you reach the limit, wait until at least one of your datasets finishes importing. Then try again.
-> - The maximum number of datasets allowed to be imported per subscription is 10 .zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
+> - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
+> - The maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
-Datasets are automatically validated once you hit the **Upload** button. Data validation includes series of checks on the audio files to verify their file format, size, and sampling rate. Fix the errors if any and submit again.
+Data files are automatically validated once you hit the **Submit** button. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. Fix any errors and submit again.
-Once the data is uploaded, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your datasets. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
+Once the data is uploaded, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data files. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 50+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
-Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your dataset.
+Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
On the **Data details**, you can check the data details of the training set. If there are any typical issues with the data, follow the instructions in the message displayed to fix them before training.
If the third type of errors listed in the table below aren't fixed, although the
| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.| | Volume | Start silence issue | The first 100 ms silence isn't clean. Reduce the recording noise floor level and leave the first 100 ms at the start silent.| | Volume| End silence issue| The last 100 ms silence isn't clean. Reduce the recording noise floor level and leave the last 100 ms at the end silent.|
-| Mismatch | Script and audio mismatch|Review the script and the audio content to make sure they match and control the noise floor level. Reduce the length of long silence or split the audio into multiple utterances if it's too long.|
+| Mismatch | Low scored words|Review the script and the audio content to make sure they match and control the noise floor level. Reduce the length of long silence or split the audio into multiple utterances if it's too long.|
| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.| | Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.| | Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
If the third type of errors listed in the table below aren't fixed, although the
## Train your custom neural voice model
-After your dataset has been validated, you can use it to build your custom neural voice model.
+After your data files have been validated, you can use them to build your custom neural voice model.
-1. On the **Train model** tab, click **Train model** to create a voice model with the data you have uploaded.
+1. On the **Train model** tab, select **Train model** to create a voice model with the data you have uploaded.
2. Select the neural training method for your model and target language. By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language (preview) for your voice model. Check the languages supported for custom neural voice and cross-lingual feature: [language for customization](language-support.md#customization).
-3. Next, choose the dataset you want to use for training, and specify a speaker file.
+3. Next, choose the data you want to use for training, and specify a speaker file.
>[!NOTE] >- You need to select at least 300 utterances to create a custom neural voice. >- To train a neural voice, you must specify a voice talent profile with the audio consent file provided of the voice talent acknowledging to use his/her speech data to train a custom voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access here](https://aka.ms/customneural).
->- On this page you can also select to upload your script for testing. The testing script must be a txt file, less than 1Mb. Supported encoding format includes ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. Each paragraph of the utterance will result in a separate audio. If you want to combine all sentences into one audio, make them in one paragraph.
+>- On this page you can also choose to upload your script for testing. The testing script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, and UTF-16-BE. Each paragraph of the utterance will result in a separate audio file. If you want to combine all sentences into one audio file, put them in one paragraph.
4. Then, enter a **Name** and **Description** to help you identify this model.
-Choose a name carefully. The name you enter here will be the name you use to specify the voice in your request for speech synthesis as part of the SSML input. Only letters, numbers, and a few punctuation characters such as -, \_, and (', ') are allowed. Use different names for different neural voice models.
+Choose a name carefully. The name you enter here will be the name you use to specify the voice in your request for speech synthesis as part of the SSML input. Only letters, numbers, and a few punctuation characters such as -, _, and (', ') are allowed. Use different names for different neural voice models.
-A common use of the **Description** field is to record the names of the datasets that were used to create the model.
+A common use of the **Description** field is to record the names of the data that were used to create the model.
-5. Review the settings, then click **Submit** to start training the model.
+5. Review the settings, then select **Submit** to start training the model.
> [!NOTE]
-> Duplicate audio names will be removed from the training. Make sure the datasets you select don't contain the same audio names across multiple .zip files.
+> Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files.
The **Train model** table displays a new entry that corresponds to this newly created model. The table also displays the status: Processing, Succeeded, Failed.
-The status that's shown reflects the process of converting your dataset to a voice model, as shown here.
+The status that's shown reflects the process of converting your data to a voice model, as shown here.
| State | Meaning | | -- | - |
After you've successfully created and tested your voice model, you deploy it in
You can do the following to create a custom neural voice endpoint.
-1. On the **Deploy model** tab, click **Deploy models**.
+1. On the **Deploy model** tab, select **Deploy model**.
2. Next, enter a **Name** and **Description** for your custom endpoint. 3. Then, select a voice model you would like to associate with this endpoint.
-4. Finally, click **Deploy** to create your endpoint.
+4. Finally, select **Deploy** to create your endpoint.
After you've clicked the **Deploy** button, in the endpoint table, you'll see an entry for your new endpoint. It may take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-logging.md
self.speechConfig!.setPropertyTo(logFilePath!.absoluteString, by: SPXPropertyId.
More about iOS File System is available [here](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html).
+## Logging with multiple recognizers
+
+Although a log file output path is specified as a configuration property into a `SpeechRecognizer` or other SDK object, SDK logging is a singleton, *process-wide* facility with no concept of individual instances. You can think of this as the `SpeechRecognizer` constructor (or similar) implicitly calling a static and internal "Configure Global Logging" routine with the property data available in the corresponding `SpeechConfig`.
+
+This means that you cannot, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created will configure the global logging instance to output to the file specified in its configuration properties and all SDK logging will be emitted to that file.
+
+This also means that the lifetime of the object that configured logging is not tied to the duration of logging. Logging will not stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging may be stopped by setting the log file path to an empty string when creating a new object.
+
+To reduce potential confusion when configuring logging for multiple instances, it may be useful to abstract control of logging from objects doing real work. An example pair of helper routines:
+
+```cpp
+// Turns on process-wide SDK logging by creating a short-lived recognizer whose only
+// purpose is to carry the log-file property into the SDK's global logging state.
+void EnableSpeechSdkLogging(const char* relativePath)
+{
+ auto configForLogging = SpeechConfig::FromSubscription("unused_key", "unused_region");
+ configForLogging->SetProperty(PropertyId::Speech_LogFilename, relativePath);
+ // A push-stream audio config avoids opening the default microphone for this throwaway recognizer.
+ auto emptyAudioConfig = AudioConfig::FromStreamInput(AudioInputStream::CreatePushStream());
+ auto temporaryRecognizer = SpeechRecognizer::FromConfig(configForLogging, emptyAudioConfig);
+}
+
+void DisableSpeechSdkLogging()
+{
+ EnableSpeechSdkLogging("");
+}
+```
+ ## Next steps > [!div class="nextstepaction"]
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
You can select your domain-specific scripts from the sentences that your custom
Below are some general guidelines that you can follow to create a good corpus (recorded audio samples) for Custom Neural Voice training. -- Balance your script to cover different sentence types in your domain including statements, questions, exclamations long sentences, and short sentences.
+- Balance your script to cover different sentence types in your domain including statements, questions, exclamations, long sentences, and short sentences.
In general, each sentence should contain 4 words to 30 words. It's required that no duplicate sentences are included in your script.<br>
- Statement sentences are the major part of the script, taking about 70-80% of all.
- Question sentences should take about 10%-20% of your domain script with rising and falling tones covered.<br>
- If exclamations normally result in a different tone in your target language, consider to include 10%-20% of scripts for exclamations in your samples.<br>
- Short word/phrase scripts should also take about 10% cases of the total utterances, with 5 to 7 words per case.
+ For how to balance the different sentence types, refer to the following table.
+
+ | Sentence types | Coverage |
+ | : | : |
+ | Statement sentences | Statement sentences are the major part of the script, accounting for about 70-80% of the total. |
+ | Question sentences | Question sentences should take about 10%-20% of your domain script, including 5%-10% of rising and 5%-10% of falling tones. |
+ | Exclamation sentences| Exclamation sentences should take about 10%-20% of your scripts.|
+ | Short word/phrase| Short word/phrase scripts should also take about 10% cases of the total utterances, with 5 to 7 words per case. |
+
+ > [!NOTE]
+ > For short words/phrases, include single words or phrases separated by commas. The commas help the voice talent pause briefly when reading the scripts.
Best practices include: - Balanced coverage for Part of Speech, like verb, noun, adjective, and so on. - Balanced coverage for pronunciations. Include all letters from A to Z so the TTS engine learns how to pronounce each letter in your defined style. - Readable, understandable, common-sense for speaker to read out. - Avoid too much similar pattern for word/phrase, like "easy" and "easier".
- - Include different format of numbers: address, unit, phone, quantity, date, and so on in all sentence types.
+ - Include different format of numbers: address, unit, phone, quantity, date, and so on, in all sentence types.
- Include spelling sentences if it's something your TTS voice will be used to read. For example, "Spell of Apple is A P P L E". - Don't put multiple sentences into one line/one utterance. Separate each line per utterances. -- Make sure the sentence is mostly clean. In general, don't include too many non-standard words like numbers or abbreviations as they are usually hard to read. Some application may need to read many numbers or acronyms. In this case, you can include these words, but normalize them in their spoken form.
+- Make sure the sentence is mostly clean. In general, don't include too many non-standard words like numbers or abbreviations as they're usually hard to read. Some applications may need to read many numbers or acronyms. In this case, you can include these words, but normalize them in their spoken form.
Below are some best practices for example: - For lines with abbreviations, instead of "BTW", you have "by the way".
Below are some general guidelines that you can follow to create a good corpus (r
- For lines with acronyms, instead of "ABC", you have "A B C" With that, make sure your voice talent pronounces these words in the expected way. Keep your script and recordings match consistently during the training process.
+ > [!NOTE]
+ > The scripts prepared for your voice talent need to follow the native reading conventions, such as 50% and $45, while the scripts used for training need to be normalized to make sure that the scripts match the audio content, such as *fifty percent* and *forty-five dollars*. Check the scripts used for training against the recordings of your voice talent, to make sure they match.
+ - Your script should include many different words and sentences with different kinds of sentence lengths, structures, and moods. - Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your talent, you'll probably catch a few more mistakes.
The script defects generally fall into the following categories:
| Category | Example | | : | : | | Have a meaningless content in a common way. | |
-| Incomplete sentences. |- "This was my last eve" (no subject, no specific meaning) <br>- "He's obviously already funny" (no quote mark in the end, it is not a complete sentence) |
+| Incomplete sentences. |- "This was my last eve" (no subject, no specific meaning) <br>- "He's obviously already funny (no quote mark in the end, it's not a complete sentence) |
| Typo in the sentences. | - Start with a lower case<br>- No ending punctuation if needed<br> - Misspelling <br>- Lack of punctuation: no period in the end (except news title)<br>- End with symbols, except comma, question, exclamation <br>- Wrong format, such as:<br> &emsp;- 45$ (should be $45)<br> &emsp;- No space or excess space between word/punctuation | |Duplication in similar format, one per each pattern is enough. |- "Now is 1pm in New York"<br>- "Now is 2pm in New York"<br>- "Now is 3pm in New York"<br>- "Now is 1pm in Seattle"<br>- "Now is 1pm in Washington D.C." | |Uncommon foreign words: only the commonly used foreign word is acceptable in our script. | |
Discuss your project with the studio's recording engineer and listen to their ad
### Recording requirements
-To achieve high-quality training results, you need to comply with the following requirements during recording or data preparation:
+To achieve high-quality training results, adhere to the following requirements during recording or data preparation:
- Clear and well pronounced
The talent should *not* add distinct pauses between words. The sentence should s
Create a reference recording, or *match file,* of a typical utterance at the beginning of the session. Ask the talent to repeat this line every page or so. Each time, compare the new recording to the reference. This practice helps the talent remain consistent in volume, tempo, pitch, and intonation. Meanwhile, the engineer can use the match file as a reference for levels and overall consistency of sound.
-The match file is especially important when you resume recording after a break or on another day. You'll want to play it a few times for the talent and have them repeat it each time until they are matching well.
+The match file is especially important when you resume recording after a break or on another day. Play it a few times for the talent and have them repeat it each time until they're matching well.
Coach your talent to take a deep breath and pause for a moment before each utterance. Record a couple of seconds of silence between utterances. Words should be pronounced the same way each time they appear, considering context. For example, "record" as a verb is pronounced differently from "record" as a noun.
-Record approximately five seconds of silence before the first recording to capture the "room tone." This practice helps Speech Studio compensate for any remaining noise in the recordings.
+Record approximately five seconds of silence before the first recording to capture the "room tone". This practice helps Speech Studio compensate for noise in the recordings.
> [!TIP] > All you really need to capture is the voice talent, so you can make a monophonic (single-channel) recording of just their lines. However, if you record in stereo, you can use the second channel to record the chatter in the control room to capture discussion of particular lines or takes. Remove this track from the version that's uploaded to Speech Studio.
cognitive-services Text Offsets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/text-offsets.md
Multilingual and emoji support has led to Unicode encodings that use more than o
Because of the different lengths of possible multilingual and emoji encodings, the Text Analytics API may return offsets in the response.
-## Offsets in the API response.
+## Offsets in the API response
-Whenever offsets are returned the API response, such as [Named Entity Recognition](../how-tos/text-analytics-how-to-entity-linking.md) or [Sentiment Analysis](../how-tos/text-analytics-how-to-sentiment-analysis.md), remember:
+Whenever offsets are returned in the API response, such as [Named Entity Recognition](../how-tos/text-analytics-how-to-entity-linking.md) or [Sentiment Analysis](../how-tos/text-analytics-how-to-sentiment-analysis.md), remember:
* Elements in the response may be specific to the endpoint that was called. * HTTP POST/GET payloads are encoded in [UTF-8](https://www.w3schools.com/charsets/ref_html_utf8.asp), which may or may not be the default character encoding on your client-side compiler or operating system.
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/managed.md
For a smaller number of services, systems and protocols, such as Azure Service B
Some managed connectors for Logic Apps belong to multiple sub-categories. For example, the SAP connector is both an [enterprise connector](#enterprise-connectors) and an [on-premises connector](#on-premises-connectors). * [Standard connectors](#standard-connectors) provide access to services such as Azure Blob Storage, Office 365, SharePoint, Salesforce, Power BI, OneDrive, and many more.
+* [Enterprise connectors](#enterprise-connectors) provide access to enterprise systems, such as SAP, IBM MQ, and IBM 3270.
* [On-premises connectors](#on-premises-connectors) provide access to on-premises systems such as SQL Server, SharePoint Server, SAP, Oracle DB, file shares, and others.
-* [Integration account connectors](#integration-account-connectors) help you transform and validate XML, encode and decode flat files, and process business-to-business (B2B) messages using AS2, EDIFACT, and X12 protocols.
+* [Integration account connectors](#integration-account-connectors) help you transform and validate XML, encode and decode flat files, and process business-to-business (B2B) messages using AS2, EDIFACT, and X12 protocols.
+* [Integration service environment connectors](#ise-connectors) are designed to run specifically in an ISE and offer benefits over their non-ISE versions.
## Standard connectors
container-registry Container Registry Access Selected Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-access-selected-networks.md
Title: Configure public registry access description: Configure IP rules to enable access to an Azure container registry from selected public IP addresses or address ranges. Previously updated : 03/08/2021 Last updated : 07/28/2021 # Configure public IP network rules
az acr update --name myContainerRegistry --public-network-enabled true
## Troubleshoot
+### Access behind HTTPS proxy
If a public network rule is set, or public access to the registry is denied, attempts to log in to the registry from a disallowed public network will fail. Client access from behind an HTTPS proxy will also fail if an access rule for the proxy is not set. You will see an error message similar to `Error response from daemon: login attempt failed with status: 403 Forbidden` or `Looks like you don't have access to registry`. These errors can also occur if you use an HTTPS proxy that is allowed by a network access rule, but the proxy isn't properly configured in the client environment. Check that both your Docker client and the Docker daemon are configured for proxy behavior. For details, see [HTTP/HTTPS proxy](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy) in the Docker documentation.
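For reference, a minimal sketch of the proxy configuration on a systemd-based Docker host is shown below; the proxy address and registry name are placeholders, and the linked Docker documentation remains the authoritative guide:

```bash
# Point the Docker daemon at the HTTPS proxy via a systemd drop-in, then restart the daemon.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

# Make sure the Docker client uses the same proxy for the login call.
export HTTPS_PROXY=http://proxy.example.com:3128
docker login myregistry.azurecr.io
```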
+### Access from Azure Pipelines
+
+If you use Azure Pipelines with an Azure container registry that limits access to specific IP addresses, the pipeline may be unable to access the registry, because the outbound IP address from the pipeline is not fixed. By default, the pipeline runs jobs using a Microsoft-hosted [agent](/azure/devops/pipelines/agents/agents) on a virtual machine pool with a changing set of IP addresses.
+
+One workaround is to change the agent used to run the pipeline from Microsoft-hosted to self-hosted. With a self-hosted agent running on a [Windows](/azure/devops/pipelines/agents/v2-windows) or [Linux](/azure/devops/pipelines/agents/v2-linux) machine that you manage, you control the outbound IP address of the pipeline, and you can add this address in a registry IP access rule.
## Next steps
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
The following constraints are applicable on the operational data in Azure Cosmos
### Schema representation
-There are two modes of schema representation in the analytical store. These modes have tradeoffs between the simplicity of a columnar representation, handling the polymorphic schemas, and simplicity of query experience:
+There are two modes of schema representation in the analytical store. These modes define the schema representation method for all containers in the database account, and they involve tradeoffs between the simplicity of the query experience and the convenience of a more inclusive columnar representation for polymorphic schemas.
-* Well-defined schema representation
-* Full fidelity schema representation
+* Well-defined schema representation, default option for SQL (CORE) API accounts.
+* Full fidelity schema representation, default option for Azure Cosmos DB API for MongoDB accounts.
-> [!NOTE]
-> For SQL (Core) API accounts, when analytical store is enabled, the default schema representation in the analytical store is well-defined. Whereas for Azure Cosmos DB API for MongoDB accounts, the default schema representation in the analytical store is a full fidelity schema representation.
+It is possible to use Full Fidelity Schema for SQL (Core) API accounts. Here are the considerations for this option:
-**Well-defined schema representation**
+ * This option is only valid for accounts that don't have Synapse Link enabled.
+ * It is not possible to turn Synapse Link off and back on again to change from well-defined to full fidelity.
+ * It is not possible to change from well-defined to full fidelity using any other process.
+ * Azure Cosmos DB API for MongoDB accounts can't change the schema representation method.
+ * Currently this decision cannot be made through the Azure portal.
+ * The decision on this option should be made at the same time that Synapse Link is enabled on the account:
+
+ With the Azure CLI:
+ ```cli
+ az cosmosdb create --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup --subscription MySubscription --analytical-storage-schema-type "FullFidelity" --enable-analytical-storage true
+ ```
+
+ With PowerShell:
+ ```powershell
+ New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity"
+ ```
+
+#### Well-defined schema representation
The well-defined schema representation creates a simple tabular representation of the schema-agnostic data in the transactional store. The well-defined schema representation has the following considerations:
-* A property always has the same type across multiple items.
-* We only allow 1 type change, from null to any other data type.The first non-null occurrence defines the column data type.
+* The first document defines the base schema, and every property must have the same type across all documents. The only exceptions are:
+ * From null to any other data type. The first non-null occurrence defines the column data type. Any document that doesn't follow the first non-null data type won't be represented in analytical store.
+ * From `float` to `integer`. All documents will be represented in analytical store.
+ * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. After this initial conversion, you can convert it again to a number. See the example below, where the initial value of **num** was an integer and the second value was a float.
+
+```SQL
+SELECT CAST (num as float) as num
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+ CONNECTION = '<your-connection>',
+ OBJECT = 'IntToFloat',
+ SERVER_CREDENTIAL = 'your-credential'
+)
+WITH (num varchar(100)) AS [IntToFloat]
+```
- * For example, `{"a":123} {"a": "str"}` does not have a well-defined schema because `"a"` is sometimes a string and sometimes a number. In this case, the analytical store registers the data type of `"a"` as the data type of `ΓÇ£aΓÇ¥` in the first-occurring item in the lifetime of the container. The document will still be included in analytical store, but items where the data type of `"a"` differs will not.
+ * Properties that don't follow the base schema data type won't be represented in analytical store. For example, consider the two documents below, where the first one defines the analytical store base schema. The second document, where `id` is `2`, doesn't have a well-defined schema because property `"a"` is a string while the first document has `"a"` as a number. In this case, the analytical store registers the data type of `"a"` as `integer` for the lifetime of the container. The second document will still be included in analytical store, but its `"a"` property will not.
- This condition does not apply for null properties. For example, `{"a":123} {"a":null}` is still well defined.
-
-* Array types must contain a single repeated type.
+ * `{"id": "1", "a":123}`
+ * `{"id": "2", "a": "str"}`
+
+ > [!NOTE]
+ > This condition above doesn't apply for null properties. For example, `{"a":123} and {"a":null}` is still well defined.
- * For example, `{"a": ["str",12]}` is not a well-defined schema because the array contains a mix of integer and string types.
+* Array types must contain a single repeated type. For example, `{"a": ["str",12]}` is not a well-defined schema because the array contains a mix of integer and string types.
> [!NOTE] > If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items will not be included in the analytical store.
The well-defined schema representation creates a simple tabular representation o
* SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
-**Full fidelity schema representation**
+#### Full fidelity schema representation
The full fidelity schema representation is designed to handle the full breadth of polymorphic schemas in the schema-agnostic operational data. In this schema representation, no items are dropped from the analytical store even if the well-defined schema constraints (that is no mixed data type fields nor mixed data type arrays) are violated.
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/choose-api.md
Azure Cosmos DB's Gremlin API is based on the [Apache TinkerPop](https://tinkerp
This API stores data in key/value format. If you are currently using Azure Table storage, you may see some limitations in latency, scaling, throughput, global distribution, index management, low query performance. Table API overcomes these limitations and itΓÇÖs recommended to migrate your app if you want to use the benefits of Azure Cosmos DB. Table API only supports OLTP scenarios.
-Applications written for Azure Table storage can migrate to the Table API with little code changes and take advantage of premium capabilities. To learn more, see [Table API](table-introduction.md) article.
+Applications written for Azure Table storage can migrate to the Table API with little code changes and take advantage of premium capabilities. To learn more, see [Table API](introduction.md) article.
## Next steps
cosmos-db Cli Samples Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cli-samples-table.md
- Title: Azure CLI Samples for Azure Cosmos DB Table API
-description: Azure CLI Samples for Azure Cosmos DB Table API
---- Previously updated : 10/13/2020----
-# Azure CLI samples for Azure Cosmos DB Table API
-
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-
-## Common Samples
-
-These samples apply to all Azure Cosmos DB APIs
-
-|Task | Description |
-|||
-| [Add or failover regions](scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-|||
-
-## Table API Samples
-
-|Task | Description |
-|||
-| [Create an Azure Cosmos account and table](scripts/cli/table/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account and table for Table API. |
-| [Create an Azure Cosmos account and table with autoscale](scripts/cli/table/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account and table with autoscale for Table API. |
-| [Throughput operations](scripts/cli/table/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a table.|
-| [Lock resources from deletion](scripts/cli/table/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
-|||
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cli-samples.md
The following table includes links to sample Azure CLI scripts for Azure Cosmos
These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cli-samples-cassandra.md), [CLI Samples for MongoDB API](cli-samples-mongodb.md), [CLI Samples for Gremlin](cli-samples-gremlin.md), [CLI Samples for Table](cli-samples-table.md)
+For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cli-samples-cassandra.md), [CLI Samples for MongoDB API](cli-samples-mongodb.md), [CLI Samples for Gremlin](cli-samples-gremlin.md), [CLI Samples for Table](table/cli-samples.md)
## Common Samples
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
Get started with Azure Cosmos DB with one of our quickstarts:
* [Get started with Azure Cosmos DB's API for MongoDB](create-mongodb-nodejs.md) * [Get started with Azure Cosmos DB Cassandra API](create-cassandra-dotnet.md) * [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
-* [Get started with Azure Cosmos DB Table API](create-table-dotnet.md)
+* [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
> [!div class="nextstepaction"] > [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/)
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
You can also checkout the learn module on how to [configure Azure Synapse Link f
> [!NOTE] > If you want to use customer-managed keys with Azure Synapse Link, you must configure your account's managed identity in your Azure Key Vault access policy before enabling Synapse Link on your account. To learn more, see the [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
->
+
+> [!NOTE]
+> If you want to use Full Fidelity Schema for SQL (CORE) API accounts, you can't use the Azure portal to enable Synapse Link. This option can't be changed after Synapse Link is enabled in your account, and to set it you must use the Azure CLI or PowerShell. For more information, see the [analytical store schema representation documentation](analytical-store-introduction.md#schema-representation).
+ ### Azure portal 1. Sign into the [Azure portal](https://portal.azure.com/).
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-migrationchoices.md
For APIs other than the SQL API, Mongo API and the Cassandra API, there are vari
**Table API**
-* [Data Migration Tool](table-import.md#data-migration-tool)
-* [AzCopy](table-import.md#migrate-data-by-using-azcopy)
+* [Data Migration Tool](table/table-import.md#data-migration-tool)
+* [AzCopy](table/table-import.md#migrate-data-by-using-azcopy)
**Gremlin API**
cosmos-db Find Request Unit Charge Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-cassandra.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
-This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [SQL API](find-request-unit-charge.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [SQL API](find-request-unit-charge.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](table/find-request-unit-charge.md) articles to find the RU/s charge.
When you perform operations against the Azure Cosmos DB Cassandra API, the RU charge is returned in the incoming payload as a field named `RequestCharge`. You have multiple options for retrieving the RU charge.
cosmos-db Find Request Unit Charge Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-gremlin.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
-This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [SQL API](find-request-unit-charge.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [SQL API](find-request-unit-charge.md), and [Table API](table/find-request-unit-charge.md) articles to find the RU/s charge.
Headers returned by the Gremlin API are mapped to custom status attributes, which currently are surfaced by the Gremlin .NET and Java SDK. The request charge is available under the `x-ms-request-charge` key. When you use the Gremlin API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
cosmos-db Find Request Unit Charge Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-mongodb.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
-This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](find-request-unit-charge.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](find-request-unit-charge.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](table/find-request-unit-charge.md) articles to find the RU/s charge.
The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use the Azure Cosmos DB API for MongoDB, you have multiple options for retrieving the RU charge.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
-This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](table/find-request-unit-charge.md) articles to find the RU/s charge.
Currently, you can measure this consumption only by using the Azure portal or by inspecting the response sent back from Azure Cosmos DB through one of the SDKs. If you're using the SQL API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
cosmos-db How To Create Container Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-cassandra.md
This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Gremlin API](how-to-create-container-gremlin.md), [Table API](how-to-create-container-table.md), and [SQL API](how-to-create-container.md) articles to create the container.
+This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Gremlin API](how-to-create-container-gremlin.md), [Table API](table/how-to-create-container.md), and [SQL API](how-to-create-container.md) articles to create the container.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db How To Create Container Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-gremlin.md
This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Cassandra API](how-to-create-container-cassandra.md), [Table API](how-to-create-container-table.md), and [SQL API](how-to-create-container.md) articles to create the container.
+This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Cassandra API](how-to-create-container-cassandra.md), [Table API](table/how-to-create-container.md), and [SQL API](how-to-create-container.md) articles to create the container.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db How To Create Container Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-mongodb.md
This article explains the different ways to create a container in Azure Cosmos DB API for MongoDB. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](how-to-create-container.md), [Cassandra API](how-to-create-container-cassandra.md), [Gremlin API](how-to-create-container-gremlin.md), and [Table API](how-to-create-container-table.md) articles to create the container.
+This article explains the different ways to create a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](how-to-create-container.md), [Cassandra API](how-to-create-container-cassandra.md), [Gremlin API](how-to-create-container-gremlin.md), and [Table API](table/how-to-create-container.md) articles to create the container.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container.md
This article explains the different ways to create a container in Azure Cosmos DB SQL API. It shows how to create a container using the Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Cassandra API](how-to-create-container-cassandra.md), [Gremlin API](how-to-create-container-gremlin.md), and [Table API](how-to-create-container-table.md) articles to create the container.
+This article explains the different ways to create a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Cassandra API](how-to-create-container-cassandra.md), [Gremlin API](how-to-create-container-gremlin.md), and [Table API](table/how-to-create-container.md) articles to create the container.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/import-data.md
This tutorial provides instructions on using the Azure Cosmos DB Data Migration
> The Azure Cosmos DB Data Migration tool is an open source tool designed for small migrations. For larger migrations, view our [guide for ingesting data](cosmosdb-migrationchoices.md). * **[SQL API](./introduction.md)** - You can use any of the source options provided in the Data Migration tool to import data at a small scale. [Learn about migration options for importing data at a large scale](cosmosdb-migrationchoices.md).
-* **[Table API](table-introduction.md)** - You can use the Data Migration tool or [AzCopy](table-import.md#migrate-data-by-using-azcopy) to import data. For more information, see [Import data for use with the Azure Cosmos DB Table API](table-import.md).
+* **[Table API](table/introduction.md)** - You can use the Data Migration tool or [AzCopy](table/table-import.md#migrate-data-by-using-azcopy) to import data. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
* **[Azure Cosmos DB's API for MongoDB](mongodb-introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to a Cosmos database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB SQL API collections for use with the SQL API. * **[Cassandra API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Cassandra API accounts. [Learn about migration options for importing data into Cassandra API](cosmosdb-migrationchoices.md#azure-cosmos-db-cassandra-api) * **[Gremlin API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Gremlin API accounts at this time. [Learn about migration options for importing data into Gremlin API](cosmosdb-migrationchoices.md#other-apis)
dt.exe /s:CsvFile /s.Files:.\Employees.csv /t:DocumentDBBulk /t.ConnectionString
The Azure Table storage source importer option allows you to import from an individual Azure Table storage table. Optionally, you can filter the table entities to be imported.
-You may output data that was imported from Azure Table Storage to Azure Cosmos DB tables and entities for use with the Table API. Imported data can also be output to collections and documents for use with the SQL API. However, Table API is only available as a target in the command-line utility. You can't export to Table API by using the Data Migration tool user interface. For more information, see [Import data for use with the Azure Cosmos DB Table API](table-import.md).
+You may output data that was imported from Azure Table Storage to Azure Cosmos DB tables and entities for use with the Table API. Imported data can also be output to collections and documents for use with the SQL API. However, Table API is only available as a target in the command-line utility. You can't export to Table API by using the Data Migration tool user interface. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
:::image type="content" source="./media/import-data/azuretablesource.png" alt-text="Screenshot of Azure Table storage source options":::
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/introduction.md
Get started with Azure Cosmos DB with one of our quickstarts:
- [Get started with Azure Cosmos DB's API for MongoDB](create-mongodb-nodejs.md) - [Get started with Azure Cosmos DB Cassandra API](create-cassandra-dotnet.md) - [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)-- [Get started with Azure Cosmos DB Table API](create-table-dotnet.md)
+- [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
- [A whitepaper on next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) > [!div class="nextstepaction"]
cosmos-db Local Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator.md
mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mG
### Table API
-Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB Table API SDK](./tutorial-develop-table-dotnet.md) to interact with the emulator. Start the emulator from [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableTableEndpoint". Next run the following code to connect to the table API account:
+Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB Table API SDK](./table/tutorial-develop-table-dotnet.md) to interact with the emulator. Start the emulator from a [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableTableEndpoint". Next, run the following code to connect to the Table API account:
```csharp
using Microsoft.WindowsAzure.Storage;
```
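
As a rough sketch of what that connection code might look like (the `TableEndpoint` port and the `demo` table name below are assumptions, so confirm the exact endpoint in the emulator documentation; the account key is the well-known emulator key):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class EmulatorTableSketch
{
    static void Main()
    {
        // Assumed emulator connection string: the TableEndpoint port (8902) is illustrative,
        // and the AccountKey is the well-known emulator key.
        const string connectionString =
            "DefaultEndpointsProtocol=http;AccountName=localhost;" +
            "AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==;" +
            "TableEndpoint=http://localhost:8902/;";

        // Parse the connection string and create a table client against the emulator.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudTableClient tableClient = account.CreateCloudTableClient();

        // Create a table and confirm the round trip worked.
        CloudTable table = tableClient.GetTableReference("demo");
        table.CreateIfNotExists();
        System.Console.WriteLine($"Table '{table.Name}' is ready on the emulator.");
    }
}
```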
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-templates.md
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
-This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](templates-samples-cassandra.md), [Gremlin](templates-samples-gremlin.md), [MongoDB](templates-samples-mongodb.md), and [Table](templates-samples-table.md) APIs.
+This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](templates-samples-cassandra.md), [Gremlin](templates-samples-gremlin.md), [MongoDB](templates-samples-mongodb.md), and [Table](table/resource-manager-templates.md) APIs.
> [!IMPORTANT] >
cosmos-db Powershell Samples Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-table.md
- Title: Azure PowerShell samples for Azure Cosmos DB Table API
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Table API
---- Previously updated : 01/20/2021---
-# Azure PowerShell samples for Azure Cosmos DB Table API
-
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
-
-## Common Samples
-
-|Task | Description |
-|||
-|[Update an account](scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
-|[Account keys or connection strings](scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
-|||
-
-## Table API Samples
-
-|Task | Description |
-|||
-|[Create an account and table](scripts/powershell/table/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table. |
-|[Create an account and table with autoscale](scripts/powershell/table/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table autoscale. |
-|[List or get tables](scripts/powershell/table/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get tables. |
-|[Throughput operations](scripts/powershell/table/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a table including get, update and migrate between autoscale and standard throughput. |
-|[Lock resources from deletion](scripts/powershell/table/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
-|||
cosmos-db Query Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/query-cheat-sheet.md
For more help writing queries, see the following articles:
* For SQL API queries, see [Query using the SQL API](tutorial-query-sql-api.md), [SQL queries for Azure Cosmos DB](./sql-query-getting-started.md), and [SQL syntax reference](./sql-query-getting-started.md) * For MongoDB queries, see [Query using Azure Cosmos DB's API for MongoDB](tutorial-query-mongodb.md) and [Azure Cosmos DB's API for MongoDB feature support and syntax](mongodb-feature-support.md) * For Gremlin API queries, see [Query using the Gremlin API](tutorial-query-graph.md) and [Azure Cosmos DB Gremlin graph support](gremlin-support.md)
-* For Table API queries, see [Query using the Table API](tutorial-query-table.md)
+* For Table API queries, see [Query using the Table API](table/tutorial-query-table.md)
cosmos-db Rate Limiting Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/rate-limiting-requests.md
The method to determine the cost of a request is different for each API:
* [Cassandra API](find-request-unit-charge-cassandra.md) * [Gremlin API](find-request-unit-charge-gremlin.md) * [Mongo DB API](find-request-unit-charge-mongodb.md)
-* [Table API](find-request-unit-charge-table.md)
+* [Table API](table/find-request-unit-charge.md)
## Write requests
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/cli-samples.md
+
+ Title: Azure CLI Samples for Azure Cosmos DB Table API
+description: Azure CLI Samples for Azure Cosmos DB Table API
++++ Last updated : 10/13/2020++++
+# Azure CLI samples for Azure Cosmos DB Table API
+
+The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Common Samples
+
+These samples apply to all Azure Cosmos DB APIs.
+
+|Task | Description |
+|||
+| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
+| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+|||
+
+## Table API Samples
+
+|Task | Description |
+|||
+| [Create an Azure Cosmos account and table](../scripts/cli/table/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account and table for Table API. |
+| [Create an Azure Cosmos account and table with autoscale](../scripts/cli/table/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account and table with autoscale for Table API. |
+| [Throughput operations](../scripts/cli/table/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a table.|
+| [Lock resources from deletion](../scripts/cli/table/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+|||
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/create-table-dotnet.md
+
+ Title: 'Quickstart: Table API with .NET - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and .NET
+++
+ms.devlang: dotnet
+ Last updated : 05/28/2020+++++
+# Quickstart: Build a Table API app with .NET SDK and Azure Cosmos DB
+
+> [!div class="op_single_selector"]
+> * [.NET](create-table-dotnet.md)
+> * [Java](create-table-java.md)
+> * [Node.js](create-table-nodejs.md)
+> * [Python](how-to-use-python.md)
+>
+
+This quickstart shows how to use .NET and the Azure Cosmos DB [Table API](introduction.md) to build an app by cloning an example from GitHub. This quickstart also shows you how to create an Azure Cosmos DB account and how to use Data Explorer to create tables and entities in the web-based Azure portal.
+
+## Prerequisites
+
+If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
++
+## Create a database account
++
+## Add a table
++
+## Add sample data
++
+## Clone the sample application
+
+Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started.git
+ ```
+
+> [!TIP]
+> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](tutorial-develop-table-dotnet.md) article.
+
+## Open the sample application in Visual Studio
+
+1. In Visual Studio, from the **File** menu, choose **Open**, then choose **Project/Solution**.
+
+ :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-open-solution.png" alt-text="Open the solution":::
+
+2. Navigate to the folder where you cloned the sample application and open the TableStorage.sln file.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets; a consolidated sketch of the same operations follows them. Otherwise, you can skip ahead to the [Update your connection string](#update-your-connection-string) section of this doc.
+
+* The following code shows how to create a table within the Azure Storage:
+
+ :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/Common.cs" id="CreateTable":::
+
+* The following code shows how to insert data into the table:
+
+ :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/SamplesUtils.cs" id="InsertItem":::
+
+* The following code shows how to query data from the table:
+
+ :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/SamplesUtils.cs" id="QueryData":::
+
+* The following code shows how to delete data from the table:
+
+ :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/SamplesUtils.cs" id="DeleteItem":::
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+
+1. In the [Azure portal](https://portal.azure.com/), click **Connection String**. Use the copy button on the right side of the window to copy the **PRIMARY CONNECTION STRING**.
+
+ :::image type="content" source="./media/create-table-dotnet/connection-string.png" alt-text="View and copy the PRIMARY CONNECTION STRING in the Connection String pane":::
+
+2. In Visual Studio, open the **Settings.json** file.
+
+3. Paste the **PRIMARY CONNECTION STRING** from the portal into the StorageConnectionString value. Paste the string inside the quotes.
+
+    ```json
+ {
+ "StorageConnectionString": "<Primary connection string from Azure portal>"
+ }
+ ```
+
+4. Press CTRL+S to save the **Settings.json** file.
+
+You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+## Build and deploy the app
+
+1. In Visual Studio, right-click on the **CosmosTableSamples** project in **Solution Explorer** and then click **Manage NuGet Packages**.
+
+ :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-manage-nuget.png" alt-text="Manage NuGet Packages":::
+
+2. In the NuGet **Browse** box, type Microsoft.Azure.Cosmos.Table. This will find the Cosmos DB Table API client library. Note that this library is currently available for .NET Framework and .NET Standard.
+
+ :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-nuget-browse.png" alt-text="NuGet Browse tab":::
+
+3. Click **Install** to install the **Microsoft.Azure.Cosmos.Table** library. This installs the Azure Cosmos DB Table API package and all dependencies.
+
+4. When you run the entire app, sample data is inserted into the table entity and deleted at the end, so you won't see any data inserted if you run the whole sample. However, you can insert some breakpoints to view the data. Open the BasicSamples.cs file and right-click on line 52, select **Breakpoint**, then select **Insert Breakpoint**. Insert another breakpoint on line 55.
+
+ :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-breakpoint.png" alt-text="Add a breakpoint":::
+
+5. Press F5 to run the application. The console window displays the name of the new table database (in this case, demoa13b1) in Azure Cosmos DB.
+
+ :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-console.png" alt-text="Console output":::
+
+ When you hit the first breakpoint, go back to Data Explorer in the Azure portal. Click the **Refresh** button, expand the demo* table, and click **Entities**. The **Entities** tab on the right shows the new entity that was added for Walter Harp. Note that the phone number for the new entity is 425-555-0101.
+
+ :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-entity.png" alt-text="New entity":::
+
+    If you receive an error that says the Settings.json file can't be found when running the project, you can resolve it by adding the following XML entry to the project settings. Right-click on CosmosTableSamples, select **Edit CosmosTableSamples.csproj**, and add the following ItemGroup:
+
+    ```xml
+ <ItemGroup>
+ <None Update="Settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
+
+6. Close the **Entities** tab in Data Explorer.
+
+7. Press F5 to run the app to the next breakpoint.
+
+ When you hit the breakpoint, switch back to the Azure portal, click **Entities** again to open the **Entities** tab, and note that the phone number has been updated to 425-555-0105.
+
+8. Press F5 to run the app.
+
+    The app adds entities for use in an advanced sample app that the Table API doesn't currently support. The app then deletes the table that the sample app created.
+
+9. In the console window, press Enter to end the execution of the app.
+
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API.
+
+> [!div class="nextstepaction"]
+> [Import table data to the Table API](table-import.md)
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/create-table-java.md
+
+ Title: Use the Table API and Java to build an app - Azure Cosmos DB
+description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and Java
+++
+ms.devlang: java
+ Last updated : 05/28/2020++++
+# Quickstart: Build a Java app to manage Azure Cosmos DB Table API data
+
+> [!div class="op_single_selector"]
+> * [.NET](create-table-dotnet.md)
+> * [Java](create-table-java.md)
+> * [Node.js](create-table-nodejs.md)
+> * [Python](how-to-use-python.md)
+>
+
+In this quickstart, you create an Azure Cosmos DB Table API account, and use Data Explorer and a Java app cloned from GitHub to create tables and entities. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi).
+- [Git](https://www.git-scm.com/downloads).
+
+## Create a database account
+
+> [!IMPORTANT]
+> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs.
+>
++
+## Add a table
++
+## Add sample data
++
+## Clone the sample application
+
+Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/storage-table-java-getting-started.git
+ ```
+
+> [!TIP]
+> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](how-to-use-java.md) article.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to the [Update your connection string](#update-your-connection-string) section of this doc.
+
+* The following code shows how to create a table within the Azure Storage:
+
+ ```java
+ private static CloudTable createTable(CloudTableClient tableClient, String tableName) throws StorageException, RuntimeException, IOException, InvalidKeyException, IllegalArgumentException, URISyntaxException, IllegalStateException {
+
+ // Create a new table
+ CloudTable table = tableClient.getTableReference(tableName);
+ try {
+ if (table.createIfNotExists() == false) {
+ throw new IllegalStateException(String.format("Table with name \"%s\" already exists.", tableName));
+ }
+ }
+ catch (StorageException s) {
+ if (s.getCause() instanceof java.net.ConnectException) {
+ System.out.println("Caught connection exception from the client. If running with the default configuration please make sure you have started the storage emulator.");
+ }
+ throw s;
+ }
+
+ return table;
+ }
+ ```
+
+* The following code shows how to insert data into the table:
+
+    ```java
+ private static void batchInsertOfCustomerEntities(CloudTable table) throws StorageException {
+
+ // Create the batch operation
+ TableBatchOperation batchOperation1 = new TableBatchOperation();
+ for (int i = 1; i <= 50; i++) {
+ CustomerEntity entity = new CustomerEntity("Smith", String.format("%04d", i));
+ entity.setEmail(String.format("smith%04d@contoso.com", i));
+ entity.setHomePhoneNumber(String.format("425-555-%04d", i));
+ entity.setWorkPhoneNumber(String.format("425-556-%04d", i));
+ batchOperation1.insertOrMerge(entity);
+ }
+
+ // Execute the batch operation
+ table.execute(batchOperation1);
+ }
+ ```
+
+* The following code shows how to query data from the table:
+
+ ```java
+ private static void partitionScan(CloudTable table, String partitionKey) throws StorageException {
+
+ // Create the partition scan query
+ TableQuery<CustomerEntity> partitionScanQuery = TableQuery.from(CustomerEntity.class).where(
+ (TableQuery.generateFilterCondition("PartitionKey", QueryComparisons.EQUAL, partitionKey)));
+
+ // Iterate through the results
+ for (CustomerEntity entity : table.execute(partitionScanQuery)) {
+            System.out.println(String.format("\tCustomer: %s,%s\t%s\t%s\t%s", entity.getPartitionKey(), entity.getRowKey(), entity.getEmail(), entity.getHomePhoneNumber(), entity.getWorkPhoneNumber()));
+ }
+ }
+ ```
+
+* The following code shows how to delete data from the table:
+
+ ```java
+
+ System.out.print("\nDelete any tables that were created.");
+
+ if (table1 != null && table1.deleteIfExists() == true) {
+ System.out.println(String.format("\tSuccessfully deleted the table: %s", table1.getName()));
+ }
+
+ if (table2 != null && table2.deleteIfExists() == true) {
+ System.out.println(String.format("\tSuccessfully deleted the table: %s", table2.getName()));
+ }
+ ```
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+
+ :::image type="content" source="./media/create-table-java/cosmos-db-quickstart-connection-string.png" alt-text="View the connection string information in the Connection String pane":::
+
+2. Copy the PRIMARY CONNECTION STRING using the copy button on the right.
+
+3. Open *config.properties* from the *C:\git-samples\storage-table-java-getting-started\src\main\resources* folder.
+
+5. Comment out line one and uncomment line two. The first two lines should now look like this.
+
+    ```properties
+ #StorageConnectionString = UseDevelopmentStorage=true
+ StorageConnectionString = DefaultEndpointsProtocol=https;AccountName=[ACCOUNTNAME];AccountKey=[ACCOUNTKEY]
+ ```
+
+6. Paste your PRIMARY CONNECTION STRING from the portal into the StorageConnectionString value in line 2.
+
+ > [!IMPORTANT]
+ > If your Endpoint uses documents.azure.com, that means you have a preview account, and you need to create a [new Table API account](#create-a-database-account) to work with the generally available Table API SDK.
+ >
+
+7. Save the *config.properties* file.
+
+You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+## Run the app
+
+1. In the git terminal window, `cd` to the storage-table-java-getting-started folder.
+
+    ```bash
+ cd "C:\git-samples\storage-table-java-getting-started"
+ ```
+
+2. In the git terminal window, run the following commands to run the Java application.
+
+    ```bash
+ mvn compile exec:java
+ ```
+
+ The console window displays the table data being added to the new table database in Azure Cosmos DB.
+
+ You can now go back to Data Explorer and see, query, modify, and work with this new data.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Java app to add table data. Now you can query your data using the Table API.
+
+> [!div class="nextstepaction"]
+> [Import table data to the Table API](table-import.md)
cosmos-db Create Table Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/create-table-nodejs.md
+
+ Title: 'Quickstart: Table API with Node.js - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and Node.js
+++
+ms.devlang: nodejs
+ Last updated : 05/28/2020+++
+# Quickstart: Build a Table API app with Node.js and Azure Cosmos DB
+
+> [!div class="op_single_selector"]
+> * [.NET](create-table-dotnet.md)
+> * [Java](create-table-java.md)
+> * [Node.js](create-table-nodejs.md)
+> * [Python](how-to-use-python.md)
+>
+
+In this quickstart, you create an Azure Cosmos DB Table API account, and use Data Explorer and a Node.js app cloned from GitHub to create tables and entities. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Node.js 0.10.29+](https://nodejs.org/) .
+- [Git](https://git-scm.com/downloads).
+
+## Create a database account
+
+> [!IMPORTANT]
+> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs.
+>
++
+## Add a table
++
+## Add sample data
++
+## Clone the sample application
+
+Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/storage-table-node-getting-started.git
+ ```
+
+> [!TIP]
+> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](how-to-use-nodejs.md) article.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to the [Update your connection string](#update-your-connection-string) section of this doc.
+
+* The following code shows how to create a table within the Azure Storage:
+
+ ```javascript
+    storageClient.createTableIfNotExists(tableName, function (error, createResult) {
+      if (error) return callback(error);
+
+      if (createResult.isSuccessful) {
+        console.log("1. Create Table operation executed successfully for: ", tableName);
+      }
+    });
+
+ ```
+
+* The following code shows how to insert data into the table:
+
+ ```javascript
+ var customer = createCustomerEntityDescriptor("Harp", "Walter", "Walter@contoso.com", "425-555-0101");
+
+    storageClient.insertOrMergeEntity(tableName, customer, function (error, result, response) {
+      if (error) return callback(error);
+
+      console.log(" insertOrMergeEntity succeeded.");
+    });
+ ```
+
+* The following code shows how to query data from the table:
+
+ ```javascript
+ console.log("6. Retrieving entities with surname of Smith and first names > 1 and <= 75");
+
+ var storageTableQuery = storage.TableQuery;
+ var segmentSize = 10;
+
+ // Demonstrate a partition range query whereby we are searching within a partition for a set of entities that are within a specific range.
+ var tableQuery = new storageTableQuery()
+ .top(segmentSize)
+ .where('PartitionKey eq ?', lastName)
+ .and('RowKey gt ?', "0001").and('RowKey le ?', "0075");
+
+ runPageQuery(tableQuery, null, function (error, result) {
+
+ if (error) return callback(error);
+
+ ```
+
+* The following code shows how to delete data from the table:
+
+ ```javascript
+    storageClient.deleteEntity(tableName, customer, function entitiesQueried(error, result) {
+      if (error) return callback(error);
+
+      console.log(" deleteEntity succeeded.");
+    });
+ ```
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+
+ :::image type="content" source="./media/create-table-nodejs/connection-string.png" alt-text="View and copy the required connection string information from the in the Connection String pane":::
+
+2. Copy the PRIMARY CONNECTION STRING using the copy button on the right side.
+
+3. Open the *app.config* file, and paste the value into the connectionString on line three.
+
+ > [!IMPORTANT]
+ > If your Endpoint uses documents.azure.com, that means you have a preview account, and you need to create a [new Table API account](#create-a-database-account) to work with the generally available Table API SDK.
+ >
+
+4. Save the *app.config* file.
+
+You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+## Run the app
+
+1. In the git terminal window, `cd` to the storage-table-node-getting-started folder.
+
+ ```
+ cd "C:\git-samples\storage-table-node-getting-started"
+ ```
+
+2. Run the following command to install the azure-storage, node-uuid, nconf, and async modules locally and to save an entry for them in the *package.json* file.
+
+ ```
+ npm install azure-storage node-uuid async nconf --save
+ ```
+
+3. In the git terminal window, run the following command to run the Node.js application.
+
+ ```
+ node ./tableSample.js
+ ```
+
+ The console window displays the table data being added to the new table database in Azure Cosmos DB.
+
+ You can now go back to Data Explorer and see, query, modify, and work with this new data.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Node.js app to add table data. Now you can query your data using the Table API.
+
+> [!div class="nextstepaction"]
+> [Import table data to the Table API](table-import.md)
cosmos-db Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/dotnet-sdk.md
+
+ Title: Azure Cosmos DB Table API .NET SDK & Resources
+description: Learn all about the Azure Cosmos DB Table API for .NET including release dates, retirement dates, and changes made between each version.
++++
+ms.devlang: dotnet
+ Last updated : 08/17/2018+++
+# Azure Cosmos DB Table .NET API: Download and release notes
+
+> [!div class="op_single_selector"]
+> * [.NET](dotnet-sdk.md)
+> * [.NET Standard](dotnet-standard-sdk.md)
+> * [Java](java-sdk.md)
+> * [Node.js](nodejs-sdk.md)
+> * [Python](python-sdk.md)
+
+| | Links|
+|||
+|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table)|
+|**Quickstart**|[Azure Cosmos DB: Build an app with .NET and the Table API](create-table-dotnet.md)|
+|**Tutorial**|[Azure Cosmos DB: Develop with the Table API in .NET](tutorial-develop-table-dotnet.md)|
+|**Current supported framework**|[Microsoft .NET Framework 4.5.1](https://www.microsoft.com/en-us/download/details.aspx?id=40779)|
+
+> [!IMPORTANT]
+> The .NET Framework SDK [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table) is in maintenance mode and will be deprecated soon. Please upgrade to the new .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) to continue to get the latest features supported by the Table API.
+
+> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
+>
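+
+For orientation only, here is a minimal sketch of what the upgrade typically amounts to in code: the client types keep the same names but come from the `Microsoft.Azure.Cosmos.Table` namespace (the connection string placeholder and the `people` table name below are illustrative):
+
+```csharp
+// Before: using Microsoft.Azure.CosmosDB.Table;
+// After the package swap, the same types come from the new namespace.
+using Microsoft.Azure.Cosmos.Table;
+
+CloudStorageAccount account = CloudStorageAccount.Parse("<your-table-api-connection-string>");
+CloudTableClient client = account.CreateCloudTableClient(new TableClientConfiguration());
+CloudTable table = client.GetTableReference("people");
+table.CreateIfNotExists();
+```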
+
+## Release notes
+
+### <a name="2.1.2"></a>2.1.2
+
+* Bug fixes
+
+### <a name="2.1.0"></a>2.1.0
+
+* Bug fixes
+
+### <a name="2.0.0"></a>2.0.0
+
+* Added Multi-region write support
+* Fixed NuGet package dependencies on Microsoft.Azure.DocumentDB, Microsoft.OData.Core, Microsoft.OData.Edm, Microsoft.Spatial
+
+### <a name="1.1.3"></a>1.1.3
+
+* Fixed NuGet package dependencies on Microsoft.Azure.Storage.Common and Microsoft.Azure.DocumentDB.
+* Bug fixes on table serialization when JsonConvert.DefaultSettings are configured.
+
+### <a name="1.1.1"></a>1.1.1
+
+* Added validation for malformed ETAGs in Direct Mode.
+* Fixed LINQ query bug in Gateway Mode.
+* Synchronous APIs now run on the thread pool with SynchronizationContext.
+
+### <a name="1.1.0"></a>1.1.0
+
+* Add TableQueryMaxItemCount, TableQueryEnableScan, TableQueryMaxDegreeOfParallelism, and TableQueryContinuationTokenLimitInKb to TableRequestOptions
+* Bug Fixes
+
+### <a name="1.0.0"></a>1.0.0
+
+* General availability release
+
+### <a name="0.1.0-preview"></a>0.9.0-preview
+
+* Initial preview release
+
+## Release and Retirement dates
+
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
+
+The `Microsoft.Azure.CosmosDB.Table` library is currently available for .NET Framework only; it is in maintenance mode and will be deprecated soon. New features, functionality, and optimizations are added only to the .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table), so it is recommended that you upgrade to [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table).
+
+The [WindowsAzure.Storage-PremiumTable](https://www.nuget.org/packages/WindowsAzure.Storage-PremiumTable/0.1.0-preview) preview package has been deprecated. The WindowsAzure.Storage-PremiumTable SDK will be retired on November 15, 2018, at which time requests to the retired SDK will not be permitted.
+
+| Version | Release Date | Retirement Date |
+| | | |
+| [2.1.2](#2.1.2) |September 16, 2019| |
+| [2.1.0](#2.1.0) |January 22, 2019|April 01, 2020 |
+| [2.0.0](#2.0.0) |September 26, 2018|March 01, 2020 |
+| [1.1.3](#1.1.3) |July 17, 2018|December 01, 2019 |
+| [1.1.1](#1.1.1) |March 26, 2018|December 01, 2019 |
+| [1.1.0](#1.1.0) |February 21, 2018|December 01, 2019 |
+| [1.0.0](#1.0.0) |November 15, 2017|November 15, 2019 |
+| 0.9.0-preview |November 11, 2017 |November 11, 2019 |
+
+## Troubleshooting
+
+If you get the error
+
+```
+Unable to resolve dependency 'Microsoft.Azure.Storage.Common'. Source(s) used: 'nuget.org',
+'CliFallbackFolder', 'Microsoft Visual Studio Offline Packages', 'Microsoft Azure Service Fabric SDK'
+```
+
+when attempting to use the Microsoft.Azure.CosmosDB.Table NuGet package, you have two options to fix the issue:
+
+* Use the Package Manager Console to install the Microsoft.Azure.CosmosDB.Table package and its dependencies. To do this, enter the following command in the Package Manager Console for your solution:
+
+ ```powershell
+ Install-Package Microsoft.Azure.CosmosDB.Table -IncludePrerelease
+ ```
+
+
+* Using your preferred NuGet package management tool, install the Microsoft.Azure.Storage.Common NuGet package before installing Microsoft.Azure.CosmosDB.Table.
+
+## FAQ
++
+## See also
+
+To learn more about the Azure Cosmos DB Table API, see [Introduction to Azure Cosmos DB Table API](introduction.md).
cosmos-db Dotnet Standard Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/dotnet-standard-sdk.md
+
+ Title: Azure Cosmos DB Table API .NET Standard SDK & Resources
+description: Learn all about the Azure Cosmos DB Table API and the .NET Standard SDK including release dates, retirement dates, and changes made between each version.
++++
+ms.devlang: dotnet
+ Last updated : 03/18/2019+++
+# Azure Cosmos DB Table .NET Standard API: Download and release notes
+> [!div class="op_single_selector"]
+>
+> * [.NET](dotnet-sdk.md)
+> * [.NET Standard](dotnet-standard-sdk.md)
+> * [Java](java-sdk.md)
+> * [Node.js](nodejs-sdk.md)
+> * [Python](python-sdk.md)
+
+| | Links |
+|||
+|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table)|
+|**Sample**|[Cosmos DB Table API .NET Sample](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started)|
+|**Quickstart**|[Quickstart](create-table-dotnet.md)|
+|**Tutorial**|[Tutorial](tutorial-develop-table-dotnet.md)|
+|**Current supported framework**|[Microsoft .NET Standard 2.0](https://www.nuget.org/packages/NETStandard.Library)|
+|**Report Issue**|[Report Issue](https://github.com/Azure/azure-cosmos-table-dotnet/issues)|
+
+## Release notes for 2.0.0 series
+The 2.0.0 series takes a dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to the Cosmos DB endpoint.
+
+### <a name="2.0.0-preview"></a>2.0.0-preview
+* Initial preview of the 2.0.0 Table SDK, which takes a dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) and brings performance improvements and namespace consolidation to the Cosmos DB endpoint. The public API remains the same.
+
+## Release notes for 1.0.0 series
+The 1.0.0 series takes a dependency on [Microsoft.Azure.DocumentDB.Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/).
+
+### <a name="1.0.8"></a>1.0.8
+* Added support for setting the TTL property when the target is a Cosmos DB endpoint
+* The retry policy is now honored upon timeout and task-canceled exceptions
+* Fixed an intermittent task-canceled exception seen in ASP.NET applications
+* Fixed Azure Table storage retrieval when the location mode is set to secondary endpoint only
+* Updated the `Microsoft.Azure.DocumentDB.Core` dependency to version 2.11.2, which fixes an intermittent null reference exception
+* Updated the `Odata.Core` dependency to version 7.6.4, which fixes a compatibility conflict with Azure Shell
+
+### <a name="1.0.7"></a>1.0.7
+* Performance improvement: the Table SDK default trace level is now set to `SourceLevels.Off`; tracing can be opted back in via app.config
+
+### <a name="1.0.5"></a>1.0.5
+* Introduced a new configuration option under TableClientConfiguration to use the REST executor to communicate with the Cosmos DB Table API
+
+### <a name="1.0.5-preview"></a>1.0.5-preview
+* Bug fixes
+
+### <a name="1.0.4"></a>1.0.4
+* Bug fixes
+* Provide HttpClientTimeout option for RestExecutorConfiguration.
+
+### <a name="1.0.4-preview"></a>1.0.4-preview
+* Bug fixes
+* Provide HttpClientTimeout option for RestExecutorConfiguration.
+
+### <a name="1.0.1"></a>1.0.1
+* Bug fixes
+
+### <a name="1.0.0"></a>1.0.0
+* General availability release
+
+### <a name="0.11.0-preview"></a>0.11.0-preview
+* Changes were made to how CloudTableClient can be configured. It now takes a TableClientConfiguration object during construction (see the sketch after these notes). TableClientConfiguration provides different properties to configure the client behavior depending on whether the target endpoint is Cosmos DB Table API or Azure Storage Table API.
+* Added support to TableQuery to return results in sorted order on a custom column. This feature is only supported on Cosmos DB Table endpoints.
+* Added support to expose RequestCharges on various result types. This feature is only supported on Cosmos DB Table endpoints.
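+
+A minimal sketch, not taken from these release notes, of the construction pattern described above. The connection string and table name are placeholders; a default `TableClientConfiguration` is used here, and the exact set of configurable properties may differ between versions.
+
+```csharp
+using Microsoft.Azure.Cosmos.Table;   // .NET Standard Table SDK
+
+class ClientConfigurationSample
+{
+    static void Main()
+    {
+        CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>");
+
+        // Client behavior is driven by a TableClientConfiguration object
+        // supplied when the client is constructed (0.11.0-preview and later).
+        TableClientConfiguration configuration = new TableClientConfiguration();
+
+        CloudTableClient client = account.CreateCloudTableClient(configuration);
+        CloudTable table = client.GetTableReference("people");
+        table.CreateIfNotExists();
+    }
+}
+```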
+
+### <a name="0.10.1-preview"></a>0.10.1-preview
+* Add support for SAS token, operations of TablePermissions, ServiceProperties, and ServiceStats against Azure Storage Table endpoints.
+ > [!NOTE]
+ > Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption.
+
+### <a name="0.10.0-preview"></a>0.10.0-preview
+* Add support for core CRUD, batch, and query operations against Azure Storage Table endpoints.
+ > [!NOTE]
+ > Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption.
+
+### <a name="0.9.1-preview"></a>0.9.1-preview
+* Azure Cosmos DB Table .NET Standard SDK is a cross-platform .NET library that provides efficient access to the Table data model on Cosmos DB. This initial release supports the full set of Table and Entity CRUD + Query functionalities with similar APIs as the [Cosmos DB Table SDK For .NET Framework](dotnet-sdk.md).
+ > [!NOTE]
+ > Azure Storage Table endpoints are not yet supported in the 0.9.1-preview version.
+
+## Release and Retirement dates
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
+
+This cross-platform .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) will replace the .NET Framework library [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table).
+
+### 2.0.0 series
+| Version | Release Date | Retirement Date |
+| | | |
+| [2.0.0-preview](#2.0.0-preview) |August 22, 2019 | |
+
+### 1.0.0 series
+| Version | Release Date | Retirement Date |
+| | | |
+| [1.0.5](#1.0.5) |September 13, 2019 | |
+| [1.0.5-preview](#1.0.5-preview) |August 20, 2019 | |
+| [1.0.4](#1.0.4) |August 12, 2019 | |
+| [1.0.4-preview](#1.0.4-preview) |July 26, 2019 | |
+| 1.0.2-preview |May 2, 2019 | |
+| [1.0.1](#1.0.1) |April 19, 2019 | |
+| [1.0.0](#1.0.0) |March 13, 2019 | |
+| [0.11.0-preview](#0.11.0-preview) |March 5, 2019 | |
+| [0.10.1-preview](#0.10.1-preview) |January 22, 2019 | |
+| [0.10.0-preview](#0.10.0-preview) |December 18, 2018 | |
+| [0.9.1-preview](#0.9.1-preview) |October 18, 2018 | |
++
+## FAQ
++
+## See also
+To learn more about the Azure Cosmos DB Table API, see [Introduction to Azure Cosmos DB Table API](introduction.md).
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/find-request-unit-charge.md
+
+ Title: Find request unit (RU) charge for Table API queries in Azure Cosmos DB
+description: Learn how to find the request unit (RU) charge for Table API queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge.
++++ Last updated : 10/14/2020+++
+# Find the request unit charge for operations executed in Azure Cosmos DB Table API
+
+Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
+
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). The request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](../request-units.md) article.
+
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Table API. If you are using a different API, see [API for MongoDB](../find-request-unit-charge-mongodb.md), [Cassandra API](../find-request-unit-charge-cassandra.md), [Gremlin API](../find-request-unit-charge-gremlin.md), and [SQL API](../find-request-unit-charge.md) articles to find the RU/s charge.
+
+## Use the .NET SDK
+
+Currently, the only SDK that returns the RU charge for table operations is the [.NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB Table API:
+
+```csharp
+CloudTable tableReference = client.GetTableReference("table");
+TableResult tableResult = tableReference.Execute(TableOperation.Insert(new DynamicTableEntity("partitionKey", "rowKey")));
+if (tableResult.RequestCharge.HasValue) // would be false when using Azure Storage Tables
+{
+ double requestCharge = tableResult.RequestCharge.Value;
+}
+```
+
+For more information, see [Quickstart: Build a Table API app by using the .NET SDK and Azure Cosmos DB](create-table-dotnet.md).
+
+## Next steps
+
+To learn about optimizing your RU consumption, see these articles:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-create-container.md
+
+ Title: Create a container in Azure Cosmos DB Table API
+description: Learn how to create a container in Azure Cosmos DB Table API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs.
++++ Last updated : 10/16/2020++++
+# Create a container in Azure Cosmos DB Table API
+
+This article explains the different ways to create a container in Azure Cosmos DB Table API. It shows how to create a container by using the Azure portal, Azure CLI, PowerShell, or supported SDKs, how to specify the partition key, and how to provision throughput.
+
+If you are using a different API, see the [API for MongoDB](../how-to-create-container-mongodb.md), [Cassandra API](../how-to-create-container-cassandra.md), [Gremlin API](../how-to-create-container-gremlin.md), or [SQL API](../how-to-create-container.md) articles to create the container.
+
+> [!NOTE]
+> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+
+## <a id="portal-table"></a>Create using Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos account](create-table-dotnet.md#create-a-database-account), or select an existing account.
+
+1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
+
+ * Enter a Table ID.
+ * Enter a throughput to be provisioned (for example, 1000 RUs).
+ * Select **OK**.
+
+ :::image type="content" source="../media/how-to-create-container/partitioned-collection-create-table.png" alt-text="Screenshot of Table API, Add Table dialog box":::
+
+> [!Note]
+> For Table API, the partition key is specified each time you add a new row, as the sketch below illustrates.
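+
+For example, with the .NET Standard SDK the partition key is supplied on each entity when you insert it, not on the table itself. The following sketch is not part of this article and uses placeholder names.
+
+```csharp
+using Microsoft.Azure.Cosmos.Table;   // .NET Standard Table SDK
+
+class InsertRowSample
+{
+    static void Main()
+    {
+        CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>");
+        CloudTableClient client = account.CreateCloudTableClient();
+        CloudTable table = client.GetTableReference("people");
+        table.CreateIfNotExists();
+
+        // The partition key ("Smith") and row key ("Jeff") travel with the entity.
+        DynamicTableEntity entity = new DynamicTableEntity("Smith", "Jeff");
+        entity.Properties["Email"] = new EntityProperty("Jeff@contoso.com");
+
+        table.Execute(TableOperation.InsertOrReplace(entity));
+    }
+}
+```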
+
+## <a id="cli-mongodb"></a>Create using Azure CLI
+
+[Create a Table API table with Azure CLI](../scripts/cli/table/create.md). For a listing of all Azure CLI samples across all Azure Cosmos DB APIs, see [Azure CLI samples for Azure Cosmos DB](cli-samples.md).
+
+## Create using PowerShell
+
+[Create a Table API table with PowerShell](../scripts/powershell/table/create.md). For a listing of all PowerShell samples across all Azure Cosmos DB APIs, see [PowerShell samples](powershell-samples.md).
+
+## Next steps
+
+* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Work with Azure Cosmos account](../account-databases-containers-items.md)
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-c-plus.md
+
+ Title: Use Azure Table Storage and Azure Cosmos DB Table API with C++
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API by using C++.
++
+ms.devlang: cpp
+ Last updated : 10/07/2019+++
+# How to use Azure Table storage and Azure Cosmos DB Table API with C++
++
+This guide shows you common scenarios using the Azure Table storage service or the Azure Cosmos DB Table API. The samples are written in C++ and use the [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/blob/master/README.md). This article covers the following scenarios:
+
+* Create and delete a table
+* Work with table entities
+
+> [!NOTE]
+> This guide targets the Azure Storage Client Library for C++ version 1.0.0 and above. The recommended version is Storage Client Library 2.2.0, which is available by using [NuGet](https://www.nuget.org/packages/wastorage) or [GitHub](https://github.com/Azure/azure-storage-cpp/).
+>
+
+## Create accounts
+
+### Create an Azure service account
++
+### Create an Azure storage account
++
+### Create an Azure Cosmos DB Table API account
++
+## Create a C++ application
+
+In this guide, you use storage features from a C++ application. To do so, install the Azure Storage Client Library for C++.
+
+To install the Azure Storage Client Library for C++, use the following methods:
+
+* **Linux:** Follow the instructions given in the [Azure Storage Client Library for C++ README: Getting Started on Linux](https://github.com/Azure/azure-storage-cpp#getting-started-on-linux) page.
+* **Windows:** On Windows, use [vcpkg](https://github.com/microsoft/vcpkg) as the dependency manager. Follow the [quick-start](https://github.com/microsoft/vcpkg#quick-start) to initialize vcpkg. Then, use the following command to install the library:
+
+```powershell
+.\vcpkg.exe install azure-storage-cpp
+```
+
+You can find a guide for how to build the source code and export to NuGet in the [README](https://github.com/Azure/azure-storage-cpp#download--install) file.
+
+### Configure access to the Table client library
+
+To use the Azure storage APIs to access tables, add the following `include` statements to the top of the C++ file:
+
+```cpp
+#include <was/storage_account.h>
+#include <was/table.h>
+```
+
+An Azure Storage client or Cosmos DB client uses a connection string to store endpoints and credentials to access data management services. When you run a client application, you must provide the storage connection string or Azure Cosmos DB connection string in the appropriate format.
+
+### Set up an Azure Storage connection string
+
+This example shows how to declare a static field to hold the Azure Storage connection string:
+
+```cpp
+// Define the Storage connection string with your values.
+const utility::string_t storage_connection_string(U("DefaultEndpointsProtocol=https;AccountName=<your_storage_account>;AccountKey=<your_storage_account_key>"));
+```
+
+Use the name of your Storage account for `<your_storage_account>`. For `<your_storage_account_key>`, use the access key for the Storage account listed in the [Azure portal](https://portal.azure.com). For information on Storage accounts and access keys, see [Create a storage account](../../storage/common/storage-account-create.md).
+
+### Set up an Azure Cosmos DB connection string
+
+This example shows how to declare a static field to hold the Azure Cosmos DB connection string:
+
+```cpp
+// Define the Azure Cosmos DB connection string with your values.
+const utility::string_t storage_connection_string(U("DefaultEndpointsProtocol=https;AccountName=<your_cosmos_db_account>;AccountKey=<your_cosmos_db_account_key>;TableEndpoint=<your_cosmos_db_endpoint>"));
+```
+
+Use the name of your Azure Cosmos DB account for `<your_cosmos_db_account>`. Enter your primary key for `<your_cosmos_db_account_key>`. Enter the endpoint listed in the [Azure portal](https://portal.azure.com) for `<your_cosmos_db_endpoint>`.
+
+To test your application in your local Windows-based computer, you can use the Azure Storage Emulator that is installed with the [Azure SDK](https://azure.microsoft.com/downloads/). The Storage Emulator is a utility that simulates the Azure Blob, Queue, and Table services available on your local development machine. The following example shows how to declare a static field to hold the connection string to your local storage emulator:
+
+```cpp
+// Define the connection string with Azure Storage Emulator.
+const utility::string_t storage_connection_string(U("UseDevelopmentStorage=true;"));
+```
+
+To start the Azure Storage Emulator, from your Windows desktop, select the **Start** button or the Windows key. Enter and run *Microsoft Azure Storage Emulator*. For more information, see [Use the Azure Storage Emulator for development and testing](../../storage/common/storage-use-emulator.md).
+
+### Retrieve your connection string
+
+You can use the `cloud_storage_account` class to represent your storage account information. To retrieve your storage account information from the storage connection string, use the `parse` method.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+```
+
+Next, get a reference to a `cloud_table_client` class. This class lets you get reference objects for tables and entities stored within the Table storage service. The following code creates a `cloud_table_client` object by using the storage account object you retrieved previously:
+
+```cpp
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+```
+
+## Create and add entities to a table
+
+### Create a table
+
+A `cloud_table_client` object lets you get reference objects for tables and entities. The following code creates a `cloud_table_client` object and uses it to create a new table.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Retrieve a reference to a table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Create the table if it doesn't exist.
+table.create_if_not_exists();
+```
+
+### Add an entity to a table
+
+To add an entity to a table, create a new `table_entity` object and pass it to `table_operation::insert_entity`. The following code uses the customer's first name as the row key and last name as the partition key. Together, an entity's partition and row key uniquely identify the entity in the table. Entities with the same partition key can be queried faster than entities with different partition keys. Using diverse partition keys allows for greater parallel operation scalability. For more information, see [Microsoft Azure storage performance and scalability checklist](../../storage/blobs/storage-performance-checklist.md).
+
+The following code creates a new instance of `table_entity` with some customer data to store. The code next calls `table_operation::insert_entity` to create a `table_operation` object to insert an entity into a table, and associates the new table entity with it. Finally, the code calls the `execute` method on the `cloud_table` object. The new `table_operation` sends a request to the Table service to insert the new customer entity into the `people` table.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Retrieve a reference to a table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Create the table if it doesn't exist.
+table.create_if_not_exists();
+
+// Create a new customer entity.
+azure::storage::table_entity customer1(U("Harp"), U("Walter"));
+
+azure::storage::table_entity::properties_type& properties = customer1.properties();
+properties.reserve(2);
+properties[U("Email")] = azure::storage::entity_property(U("Walter@contoso.com"));
+
+properties[U("Phone")] = azure::storage::entity_property(U("425-555-0101"));
+
+// Create the table operation that inserts the customer entity.
+azure::storage::table_operation insert_operation = azure::storage::table_operation::insert_entity(customer1);
+
+// Execute the insert operation.
+azure::storage::table_result insert_result = table.execute(insert_operation);
+```
+
+### Insert a batch of entities
+
+You can insert a batch of entities to the Table service in one write operation. The following code creates a `table_batch_operation` object, and then adds three insert operations to it. Each insert operation is added by creating a new entity object, setting its values, and then calling the `insert` method on the `table_batch_operation` object to associate the entity with a new insert operation. Then, the code calls `cloud_table.execute` to run the operation.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Define a batch operation.
+azure::storage::table_batch_operation batch_operation;
+
+// Create a customer entity and add it to the table.
+azure::storage::table_entity customer1(U("Smith"), U("Jeff"));
+
+azure::storage::table_entity::properties_type& properties1 = customer1.properties();
+properties1.reserve(2);
+properties1[U("Email")] = azure::storage::entity_property(U("Jeff@contoso.com"));
+properties1[U("Phone")] = azure::storage::entity_property(U("425-555-0104"));
+
+// Create another customer entity and add it to the table.
+azure::storage::table_entity customer2(U("Smith"), U("Ben"));
+
+azure::storage::table_entity::properties_type& properties2 = customer2.properties();
+properties2.reserve(2);
+properties2[U("Email")] = azure::storage::entity_property(U("Ben@contoso.com"));
+properties2[U("Phone")] = azure::storage::entity_property(U("425-555-0102"));
+
+// Create a third customer entity to add to the table.
+azure::storage::table_entity customer3(U("Smith"), U("Denise"));
+
+azure::storage::table_entity::properties_type& properties3 = customer3.properties();
+properties3.reserve(2);
+properties3[U("Email")] = azure::storage::entity_property(U("Denise@contoso.com"));
+properties3[U("Phone")] = azure::storage::entity_property(U("425-555-0103"));
+
+// Add customer entities to the batch insert operation.
+batch_operation.insert_or_replace_entity(customer1);
+batch_operation.insert_or_replace_entity(customer2);
+batch_operation.insert_or_replace_entity(customer3);
+
+// Execute the batch operation.
+std::vector<azure::storage::table_result> results = table.execute_batch(batch_operation);
+```
+
+Some things to note on batch operations:
+
+* You can do up to 100 `insert`, `delete`, `merge`, `replace`, `insert-or-merge`, and `insert-or-replace` operations in any combination in a single batch.
+* A batch operation can have a retrieve operation, if it's the only operation in the batch.
+* All entities in a single batch operation must have the same partition key.
+* A batch operation is limited to a 4-MB data payload.
+
+## Query and modify entities
+
+### Retrieve all entities in a partition
+
+To query a table for all entities in a partition, use a `table_query` object. The following code example specifies a filter for entities where `Smith` is the partition key. This example prints the fields of each entity in the query results to the console.
+
+> [!NOTE]
+> These methods are not currently supported for C++ in Azure Cosmos DB.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Construct the query operation for all customer entities where PartitionKey="Smith".
+azure::storage::table_query query;
+
+query.set_filter_string(azure::storage::table_query::generate_filter_condition(U("PartitionKey"), azure::storage::query_comparison_operator::equal, U("Smith")));
+
+// Execute the query.
+azure::storage::table_query_iterator it = table.execute_query(query);
+
+// Print the fields for each customer.
+azure::storage::table_query_iterator end_of_results;
+for (; it != end_of_results; ++it)
+{
+ const azure::storage::table_entity::properties_type& properties = it->properties();
+
+ std::wcout << U("PartitionKey: ") << it->partition_key() << U(", RowKey: ") << it->row_key()
+ << U(", Property1: ") << properties.at(U("Email")).string_value()
+ << U(", Property2: ") << properties.at(U("Phone")).string_value() << std::endl;
+}
+```
+
+The query in this example returns all the entities that match the filter criteria. If you have large tables and need to download the table entities often, we recommend that you store your data in Azure storage blobs instead.
+
+### Retrieve a range of entities in a partition
+
+If you don't want to query all the entities in a partition, you can specify a range. Combine the partition key filter with a row key filter. The following code example uses two filters to get all entities in partition `Smith` where the row key (first name) starts with a letter earlier than `E` in the alphabet, and then prints the query results.
+
+> [!NOTE]
+> These methods are not currently supported for C++ in Azure Cosmos DB.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Create the table query.
+azure::storage::table_query query;
+
+query.set_filter_string(azure::storage::table_query::combine_filter_conditions(
+ azure::storage::table_query::generate_filter_condition(U("PartitionKey"),
+ azure::storage::query_comparison_operator::equal, U("Smith")),
+ azure::storage::query_logical_operator::op_and,
+ azure::storage::table_query::generate_filter_condition(U("RowKey"), azure::storage::query_comparison_operator::less_than, U("E"))));
+
+// Execute the query.
+azure::storage::table_query_iterator it = table.execute_query(query);
+
+// Loop through the results, displaying information about the entity.
+azure::storage::table_query_iterator end_of_results;
+for (; it != end_of_results; ++it)
+{
+ const azure::storage::table_entity::properties_type& properties = it->properties();
+
+ std::wcout << U("PartitionKey: ") << it->partition_key() << U(", RowKey: ") << it->row_key()
+ << U(", Property1: ") << properties.at(U("Email")).string_value()
+ << U(", Property2: ") << properties.at(U("Phone")).string_value() << std::endl;
+}
+```
+
+### Retrieve a single entity
+
+You can write a query to retrieve a single, specific entity. The following code uses `table_operation::retrieve_entity` to specify the customer `Jeff Smith`. This method returns just one entity, rather than a collection, and the returned value is in `table_result`. Specifying both partition and row keys in a query is the fastest way to retrieve a single entity from the Table service.
+
+```cpp
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Retrieve the entity with partition key of "Smith" and row key of "Jeff".
+azure::storage::table_operation retrieve_operation = azure::storage::table_operation::retrieve_entity(U("Smith"), U("Jeff"));
+azure::storage::table_result retrieve_result = table.execute(retrieve_operation);
+
+// Output the entity.
+azure::storage::table_entity entity = retrieve_result.entity();
+const azure::storage::table_entity::properties_type& properties = entity.properties();
+
+std::wcout << U("PartitionKey: ") << entity.partition_key() << U(", RowKey: ") << entity.row_key()
+ << U(", Property1: ") << properties.at(U("Email")).string_value()
+ << U(", Property2: ") << properties.at(U("Phone")).string_value() << std::endl;
+```
+
+### Replace an entity
+
+To replace an entity, retrieve it from the Table service, modify the entity object, and then save the changes back to the Table service. The following code changes an existing customer's phone number and email address. Instead of calling `table_operation::insert_entity`, this code uses `table_operation::replace_entity`. This approach causes the entity to be fully replaced on the server, unless the entity on the server has changed since it was retrieved. If it has been changed, the operation fails. This failure prevents your application from overwriting a change made between the retrieval and update by another component. The proper handling of this failure is to retrieve the entity again, make your changes, if still valid, and then do another `table_operation::replace_entity` operation.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Replace an entity.
+azure::storage::table_entity entity_to_replace(U("Smith"), U("Jeff"));
+azure::storage::table_entity::properties_type& properties_to_replace = entity_to_replace.properties();
+properties_to_replace.reserve(2);
+
+// Specify a new phone number.
+properties_to_replace[U("Phone")] = azure::storage::entity_property(U("425-555-0106"));
+
+// Specify a new email address.
+properties_to_replace[U("Email")] = azure::storage::entity_property(U("JeffS@contoso.com"));
+
+// Create an operation to replace the entity.
+azure::storage::table_operation replace_operation = azure::storage::table_operation::replace_entity(entity_to_replace);
+
+// Submit the operation to the Table service.
+azure::storage::table_result replace_result = table.execute(replace_operation);
+```
+
+### Insert or replace an entity
+
+`table_operation::replace_entity` operations fail if the entity has been changed since it was retrieved from the server. Furthermore, you must retrieve the entity from the server first in order for `table_operation::replace_entity` to be successful. Sometimes, you don't know if the entity exists on the server. The current values stored in it are irrelevant, because your update should overwrite them all. To accomplish this result, use a `table_operation::insert_or_replace_entity` operation. This operation inserts the entity if it doesn't exist. The operation replaces the entity if it exists. In the following code example, the customer entity for `Jeff Smith` is still retrieved, but it's then saved back to the server by using `table_operation::insert_or_replace_entity`. Any updates made to the entity between the retrieval and update operation will be overwritten.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Insert or replace an entity.
+azure::storage::table_entity entity_to_insert_or_replace(U("Smith"), U("Jeff"));
+azure::storage::table_entity::properties_type& properties_to_insert_or_replace = entity_to_insert_or_replace.properties();
+
+properties_to_insert_or_replace.reserve(2);
+
+// Specify a phone number.
+properties_to_insert_or_replace[U("Phone")] = azure::storage::entity_property(U("425-555-0107"));
+
+// Specify an email address.
+properties_to_insert_or_replace[U("Email")] = azure::storage::entity_property(U("Jeffsm@contoso.com"));
+
+// Create an operation to insert or replace the entity.
+azure::storage::table_operation insert_or_replace_operation = azure::storage::table_operation::insert_or_replace_entity(entity_to_insert_or_replace);
+
+// Submit the operation to the Table service.
+azure::storage::table_result insert_or_replace_result = table.execute(insert_or_replace_operation);
+```
+
+### Query a subset of entity properties
+
+A query to a table can retrieve just a few properties from an entity. The query in the following code uses the `table_query::set_select_columns` method to return only the email addresses of entities in the table.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Define the query, and select only the Email property.
+azure::storage::table_query query;
+std::vector<utility::string_t> columns;
+
+columns.push_back(U("Email"));
+query.set_select_columns(columns);
+
+// Execute the query.
+azure::storage::table_query_iterator it = table.execute_query(query);
+
+// Display the results.
+azure::storage::table_query_iterator end_of_results;
+for (; it != end_of_results; ++it)
+{
+ std::wcout << U("PartitionKey: ") << it->partition_key() << U(", RowKey: ") << it->row_key();
+
+ const azure::storage::table_entity::properties_type& properties = it->properties();
+ for (auto prop_it = properties.begin(); prop_it != properties.end(); ++prop_it)
+ {
+ std::wcout << ", " << prop_it->first << ": " << prop_it->second.str();
+ }
+
+ std::wcout << std::endl;
+}
+```
+
+> [!NOTE]
+> Querying a few properties from an entity is a more efficient operation than retrieving all properties.
+>
+
+## Delete content
+
+### Delete an entity
+
+You can delete an entity after you retrieve it. After you retrieve an entity, call `table_operation::delete_entity` with the entity to delete. Then call the `cloud_table.execute` method. The following code retrieves and deletes an entity with a partition key of `Smith` and a row key of `Jeff`.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Create an operation to retrieve the entity with partition key of "Smith" and row key of "Jeff".
+azure::storage::table_operation retrieve_operation = azure::storage::table_operation::retrieve_entity(U("Smith"), U("Jeff"));
+azure::storage::table_result retrieve_result = table.execute(retrieve_operation);
+
+// Create an operation to delete the entity.
+azure::storage::table_operation delete_operation = azure::storage::table_operation::delete_entity(retrieve_result.entity());
+
+// Submit the delete operation to the Table service.
+azure::storage::table_result delete_result = table.execute(delete_operation);
+```
+
+### Delete a table
+
+Finally, the following code example deletes a table from a storage account. A table that has been deleted is unavailable to be re-created for some time following the deletion.
+
+```cpp
+// Retrieve the storage account from the connection string.
+azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+// Create the table client.
+azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client();
+
+// Create a cloud table object for the table.
+azure::storage::cloud_table table = table_client.get_table_reference(U("people"));
+
+// Delete the table if it exists
+if (table.delete_table_if_exists())
+{
+ std::cout << "Table deleted!";
+}
+else
+{
+ std::cout << "Table didn't exist";
+}
+```
+
+## Troubleshooting
+
+For Visual Studio Community Edition, if your project gets build errors because of the include files *storage_account.h* and *table.h*, remove the **/permissive-** compiler switch:
+
+1. In **Solution Explorer**, right-click your project and select **Properties**.
+1. In the **Property Pages** dialog box, expand **Configuration Properties**, expand **C/C++**, and select **Language**.
+1. Set **Conformance mode** to **No**.
+
+## Next steps
+
+[Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+
+Follow these links to learn more about Azure Storage and the Table API in Azure Cosmos DB:
+
+* [Introduction to the Table API](introduction.md)
+* [List Azure Storage resources in C++](../../storage/common/storage-c-plus-plus-enumeration.md)
+* [Storage Client Library for C++ reference](https://azure.github.io/azure-storage-cpp)
+* [Azure Storage documentation](https://azure.microsoft.com/documentation/services/storage/)
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-java.md
+
+ Title: Use Azure Table storage or the Azure Cosmos DB Table API from Java
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API from Java.
++
+ms.devlang: Java
+ Last updated : 12/10/2020+++++
+# How to use Azure Table storage or Azure Cosmos DB Table API from Java
+++
+This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB Table API. The samples are written in Java and use the [Azure Storage SDK v8 for Java][Azure Storage SDK for Java]. The scenarios covered include **creating**, **listing**, and **deleting** tables, as well as **inserting**, **querying**, **modifying**, and **deleting** entities in a table. For more information on tables, see the [Next steps](#next-steps) section.
+
+> [!IMPORTANT]
+> The last version of the Azure Storage SDK supporting Table Storage is [v8][Azure Storage SDK for Java]. A new version of the Table Storage SDK for Java will be coming soon.
+
+> [!NOTE]
+> An SDK is available for developers who are using Azure Storage on Android devices. For more information, see the [Azure Storage SDK for Android][Azure Storage SDK for Android].
+>
+
+## Create an Azure service account
++
+**Create an Azure storage account**
++
+**Create an Azure Cosmos DB account**
++
+## Create a Java application
+
+In this guide, you will use storage features that you can run in a Java application locally, or in code running in a web role or worker role in Azure.
+
+To use the samples in this article, install the Java Development Kit (JDK), then create an Azure storage account or Azure Cosmos DB account in your Azure subscription. Once you have done so, verify that your development system meets the minimum requirements and dependencies that are listed in the [Azure Storage SDK for Java][Azure Storage SDK for Java] repository on GitHub. If your system meets those requirements, you can follow the instructions to download and install the Azure Storage Libraries for Java on your system from that repository. After you complete those tasks, you can create a Java application that uses the examples in this article.
+
+## Configure your application to access table storage
+
+Add the following import statements to the top of the Java file where you want to use Azure storage APIs or the Azure Cosmos DB Table API to access tables:
+
+```java
+// Include the following imports to use table APIs
+import com.microsoft.azure.storage.*;
+import com.microsoft.azure.storage.table.*;
+import com.microsoft.azure.storage.table.TableQuery.*;
+```
+
+## Add your connection string
+
+You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+
+### Add an Azure storage connection string
+
+An Azure storage client uses a storage connection string to store endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the Primary access key for the storage account listed in the [Azure portal](https://portal.azure.com) for the **AccountName** and **AccountKey** values.
+
+This example shows how you can declare a static field to hold the connection string:
+
+```java
+// Define the connection-string with your values.
+public static final String storageConnectionString =
+ "DefaultEndpointsProtocol=http;" +
+ "AccountName=your_storage_account;" +
+ "AccountKey=your_storage_account_key";
+```
+
+### Add an Azure Cosmos DB Table API connection string
+
+An Azure Cosmos DB account uses a connection string to store the table endpoint and your credentials. When running in a client application, you must provide the Azure Cosmos DB connection string in the following format, using the name of your Azure Cosmos DB account and the primary access key for the account listed in the [Azure portal](https://portal.azure.com) for the **AccountName** and **AccountKey** values.
+
+This example shows how you can declare a static field to hold the Azure Cosmos DB connection string:
+
+```java
+public static final String storageConnectionString =
+ "DefaultEndpointsProtocol=https;" +
+ "AccountName=your_cosmosdb_account;" +
+ "AccountKey=your_account_key;" +
+ "TableEndpoint=https://your_endpoint;" ;
+```
+
+In an application running within a role in Azure, you can store this string in the service configuration file, *ServiceConfiguration.cscfg*, and you can access it with a call to the **RoleEnvironment.getConfigurationSettings** method. Here's an example of getting the connection string from a **Setting** element named *StorageConnectionString* in the service configuration file:
+
+```java
+// Retrieve storage account from connection-string.
+String storageConnectionString =
+ RoleEnvironment.getConfigurationSettings().get("StorageConnectionString");
+```
+
+You can also store your connection string in your project's config.properties file:
+
+```java
+StorageConnectionString = DefaultEndpointsProtocol=https;AccountName=your_account;AccountKey=your_account_key;TableEndpoint=https://your_table_endpoint/
+```
+
+The following samples assume that you have used one of these methods to get the storage connection string.
+
+## Create a table
+
+A `CloudTableClient` object lets you get reference objects for tables
+and entities. The following code creates a `CloudTableClient` object
+and uses it to create a new `CloudTable` object, which represents a table named "people".
+
+> [!NOTE]
+> There are other ways to create `CloudStorageAccount` objects; for more information, see `CloudStorageAccount` in the [Azure Storage Client SDK Reference].
+>
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create the table if it doesn't exist.
+ String tableName = "people";
+ CloudTable cloudTable = tableClient.getTableReference(tableName);
+ cloudTable.createIfNotExists();
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## List the tables
+
+To get a list of tables, call the **CloudTableClient.listTables()** method to retrieve an iterable list of table names.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Loop through the collection of table names.
+ for (String table : tableClient.listTables())
+ {
+ // Output each table name.
+ System.out.println(table);
+ }
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Add an entity to a table
+
+Entities map to Java objects using a custom class implementing `TableEntity`. For convenience, the `TableServiceEntity` class implements `TableEntity` and uses reflection to map properties to getter and setter methods named for the properties. To add an entity to a table, first create a class that defines the properties of your entity. The following code defines an entity class that uses the customer's first name as the row key, and last name as the partition key. Together, an entity's partition and row key uniquely identify the entity in the table. Entities with the same partition key can be queried faster than those with different partition keys.
+
+```java
+public class CustomerEntity extends TableServiceEntity {
+ public CustomerEntity(String lastName, String firstName) {
+ this.partitionKey = lastName;
+ this.rowKey = firstName;
+ }
+
+ public CustomerEntity() { }
+
+ String email;
+ String phoneNumber;
+
+ public String getEmail() {
+ return this.email;
+ }
+
+ public void setEmail(String email) {
+ this.email = email;
+ }
+
+ public String getPhoneNumber() {
+ return this.phoneNumber;
+ }
+
+ public void setPhoneNumber(String phoneNumber) {
+ this.phoneNumber = phoneNumber;
+ }
+}
+```
+
+Table operations involving entities require a `TableOperation` object. This object defines the operation to be performed on an entity, which can be executed with a `CloudTable` object. The following code creates a new instance of the `CustomerEntity` class with some customer data to be stored. The code next calls `TableOperation.insertOrReplace` to create a `TableOperation` object to insert an entity into a table, and associates the new `CustomerEntity` with it. Finally, the code calls the `execute` method on the `CloudTable` object, specifying the "people" table and the new `TableOperation`, which then sends a request to the storage service to insert the new customer entity into the "people" table, or replace the entity if it already exists.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Create a new customer entity.
+ CustomerEntity customer1 = new CustomerEntity("Harp", "Walter");
+ customer1.setEmail("Walter@contoso.com");
+ customer1.setPhoneNumber("425-555-0101");
+
+ // Create an operation to add the new customer to the people table.
+ TableOperation insertCustomer1 = TableOperation.insertOrReplace(customer1);
+
+ // Submit the operation to the table service.
+ cloudTable.execute(insertCustomer1);
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Insert a batch of entities
+
+You can insert a batch of entities to the table service in one write operation. The following code creates a `TableBatchOperation` object, then adds three insert operations to it. Each insert operation is added by creating a new entity object, setting its values, and then calling the `insert` method on the `TableBatchOperation` object to associate the entity with a new insert operation. Then the code calls `execute` on the `CloudTable` object, specifying the "people" table and the `TableBatchOperation` object, which sends the batch of table operations to the storage service in a single request.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Define a batch operation.
+ TableBatchOperation batchOperation = new TableBatchOperation();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Create a customer entity to add to the table.
+ CustomerEntity customer = new CustomerEntity("Smith", "Jeff");
+ customer.setEmail("Jeff@contoso.com");
+ customer.setPhoneNumber("425-555-0104");
+ batchOperation.insertOrReplace(customer);
+
+ // Create another customer entity to add to the table.
+ CustomerEntity customer2 = new CustomerEntity("Smith", "Ben");
+ customer2.setEmail("Ben@contoso.com");
+ customer2.setPhoneNumber("425-555-0102");
+ batchOperation.insertOrReplace(customer2);
+
+ // Create a third customer entity to add to the table.
+ CustomerEntity customer3 = new CustomerEntity("Smith", "Denise");
+ customer3.setEmail("Denise@contoso.com");
+ customer3.setPhoneNumber("425-555-0103");
+ batchOperation.insertOrReplace(customer3);
+
+ // Execute the batch of operations on the "people" table.
+ cloudTable.execute(batchOperation);
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+Some things to note on batch operations:
+
+* You can perform up to 100 insert, delete, merge, replace, insert or merge, and insert or replace operations in any combination in a single batch.
+* A batch operation can have a retrieve operation, if it is the only operation in the batch.
+* All entities in a single batch operation must have the same partition key.
+* A batch operation is limited to a 4-MB data payload.
+
+## Retrieve all entities in a partition
+
+To query a table for entities in a partition, you can use a `TableQuery`. Call `TableQuery.from` to create a query on a particular table that returns a specified result type. The following code specifies a filter for entities where 'Smith' is the partition key. `TableQuery.generateFilterCondition` is a helper method to create filters for queries. Call `where` on the reference returned by the `TableQuery.from` method to apply the filter to the query. When the query is executed with a call to `execute` on the `CloudTable` object, it returns an `Iterator` with the `CustomerEntity` result type specified. You can then use the `Iterator` returned in a "ForEach" loop to consume the results. This code prints the fields of each entity in the query results to the console.
+
+```java
+try
+{
+ // Define constants for filters.
+ final String PARTITION_KEY = "PartitionKey";
+ final String ROW_KEY = "RowKey";
+ final String TIMESTAMP = "Timestamp";
+
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Create a filter condition where the partition key is "Smith".
+ String partitionFilter = TableQuery.generateFilterCondition(
+ PARTITION_KEY,
+ QueryComparisons.EQUAL,
+ "Smith");
+
+ // Specify a partition query, using "Smith" as the partition key filter.
+ TableQuery<CustomerEntity> partitionQuery =
+ TableQuery.from(CustomerEntity.class)
+ .where(partitionFilter);
+
+ // Loop through the results, displaying information about the entity.
+ for (CustomerEntity entity : cloudTable.execute(partitionQuery)) {
+ System.out.println(entity.getPartitionKey() +
+ " " + entity.getRowKey() +
+ "\t" + entity.getEmail() +
+ "\t" + entity.getPhoneNumber());
+ }
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Retrieve a range of entities in a partition
+
+If you don't want to query all the entities in a partition, you can specify a range by using comparison operators in a filter. The following code combines two filters to get all entities in partition "Smith" where the row key (first name) starts with a letter up to 'E' in the alphabet. Then it prints the query results. If you use the entities added to the table in the batch insert section of this guide, only two entities are returned this time (Ben and Denise Smith); Jeff Smith is not included.
+
+```java
+try
+{
+ // Define constants for filters.
+ final String PARTITION_KEY = "PartitionKey";
+ final String ROW_KEY = "RowKey";
+ final String TIMESTAMP = "Timestamp";
+
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Create a filter condition where the partition key is "Smith".
+ String partitionFilter = TableQuery.generateFilterCondition(
+ PARTITION_KEY,
+ QueryComparisons.EQUAL,
+ "Smith");
+
+ // Create a filter condition where the row key is less than the letter "E".
+ String rowFilter = TableQuery.generateFilterCondition(
+ ROW_KEY,
+ QueryComparisons.LESS_THAN,
+ "E");
+
+ // Combine the two conditions into a filter expression.
+ String combinedFilter = TableQuery.combineFilters(partitionFilter,
+ Operators.AND, rowFilter);
+
+ // Specify a range query, using "Smith" as the partition key,
+ // with the row key being up to the letter "E".
+ TableQuery<CustomerEntity> rangeQuery =
+ TableQuery.from(CustomerEntity.class)
+ .where(combinedFilter);
+
+ // Loop through the results, displaying information about the entity
+ for (CustomerEntity entity : cloudTable.execute(rangeQuery)) {
+ System.out.println(entity.getPartitionKey() +
+ " " + entity.getRowKey() +
+ "\t" + entity.getEmail() +
+ "\t" + entity.getPhoneNumber());
+ }
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Retrieve a single entity
+
+You can write a query to retrieve a single, specific entity. The following code calls `TableOperation.retrieve` with partition key and row key parameters to specify the customer "Jeff Smith", instead of creating a `TableQuery` and using filters to do the same thing. When executed, the retrieve operation returns just one entity, rather than a collection. The `getResultAsType` method casts the result to the type of the assignment target, a `CustomerEntity` object. If this type is not compatible with the type specified for the query, an exception is thrown. A null value is returned if no entity has an exact partition and row key match. Specifying both partition and row keys in a query is the fastest way to retrieve a single entity from the Table service.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Retrieve the entity with partition key of "Smith" and row key of "Jeff"
+ TableOperation retrieveSmithJeff =
+ TableOperation.retrieve("Smith", "Jeff", CustomerEntity.class);
+
+ // Submit the operation to the table service and get the specific entity.
+ CustomerEntity specificEntity =
+ cloudTable.execute(retrieveSmithJeff).getResultAsType();
+
+ // Output the entity.
+ if (specificEntity != null)
+ {
+ System.out.println(specificEntity.getPartitionKey() +
+ " " + specificEntity.getRowKey() +
+ "\t" + specificEntity.getEmail() +
+ "\t" + specificEntity.getPhoneNumber());
+ }
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Modify an entity
+
+To modify an entity, retrieve it from the table service, make changes to the entity object, and save the changes back to the table service with a replace or merge operation. The following code changes an existing customer's phone number. Instead of calling **TableOperation.insert** as we did to insert, this code calls **TableOperation.replace**. The **CloudTable.execute** method calls the table service, and the entity is replaced, unless another application changed it in the time since this application retrieved it. When that happens, an exception is thrown, and the entity must be retrieved, modified, and saved again. This optimistic concurrency retry pattern is common in a distributed storage system.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Retrieve the entity with partition key of "Smith" and row key of "Jeff".
+ TableOperation retrieveSmithJeff =
+ TableOperation.retrieve("Smith", "Jeff", CustomerEntity.class);
+
+ // Submit the operation to the table service and get the specific entity.
+ CustomerEntity specificEntity =
+ cloudTable.execute(retrieveSmithJeff).getResultAsType();
+
+ // Specify a new phone number.
+ specificEntity.setPhoneNumber("425-555-0105");
+
+ // Create an operation to replace the entity.
+ TableOperation replaceEntity = TableOperation.replace(specificEntity);
+
+ // Submit the operation to the table service.
+ cloudTable.execute(replaceEntity);
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Query a subset of entity properties
+
+A query to a table can retrieve just a few properties from an entity. This technique, called projection, reduces bandwidth and can improve query performance, especially for large entities. The query in the following code uses the `select` method to return only the email addresses of entities in the table. The results are projected into a collection of `String` with the help of an `EntityResolver`, which does the type conversion on the entities returned from the server. You can learn more about projection in [Azure Tables: Introducing Upsert and Query Projection][Azure Tables: Introducing Upsert and Query Projection]. Projection is not supported by the local storage emulator, so this code runs only when you use an account in the Table service.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Define a projection query that retrieves only the Email property
+ TableQuery<CustomerEntity> projectionQuery =
+ TableQuery.from(CustomerEntity.class)
+ .select(new String[] {"Email"});
+
+ // Define an Entity resolver to project the entity to the Email value.
+ EntityResolver<String> emailResolver = new EntityResolver<String>() {
+ @Override
+ public String resolve(String PartitionKey, String RowKey, Date timeStamp, HashMap<String, EntityProperty> properties, String etag) {
+ return properties.get("Email").getValueAsString();
+ }
+ };
+
+ // Loop through the results, displaying the Email values.
+ for (String projectedString :
+ cloudTable.execute(projectionQuery, emailResolver)) {
+ System.out.println(projectedString);
+ }
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Insert or replace an entity
+
+Often you want to add an entity to a table without knowing whether it already exists in the table. An insert-or-replace operation makes a single request that inserts the entity if it does not exist or replaces the existing one if it does. Building on prior examples, the following code inserts or replaces the entity for "Walter Harp". After creating a new entity, this code calls the **TableOperation.insertOrReplace** method. This code then calls **execute** on the **CloudTable** object with the insert-or-replace table operation as the parameter. To update only part of an entity, you can use the **TableOperation.insertOrMerge** method instead. Insert-or-replace is not supported on the local storage emulator, so this code runs only when you use an account in the Table service. You can learn more about insert-or-replace and insert-or-merge in [Azure Tables: Introducing Upsert and Query Projection][Azure Tables: Introducing Upsert and Query Projection].
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Create a new customer entity.
+ CustomerEntity customer5 = new CustomerEntity("Harp", "Walter");
+ customer5.setEmail("Walter@contoso.com");
+ customer5.setPhoneNumber("425-555-0106");
+
+ // Create an operation to add the new customer to the people table.
+ TableOperation insertCustomer5 = TableOperation.insertOrReplace(customer5);
+
+ // Submit the operation to the table service.
+ cloudTable.execute(insertCustomer5);
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
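+
+The paragraph above mentions **TableOperation.insertOrMerge** as the partial-update alternative. Swapping it in is a small change; the following sketch reuses the `customer5` entity and `cloudTable` reference from the previous block:
+
+```java
+// Create an operation that merges the new property values into the stored
+// entity (or inserts the entity if it doesn't exist yet) instead of replacing it.
+TableOperation insertOrMergeCustomer5 = TableOperation.insertOrMerge(customer5);
+
+// Submit the operation to the table service.
+cloudTable.execute(insertOrMergeCustomer5);
+```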
+
+## Delete an entity
+
+You can easily delete an entity after you have retrieved it. After the entity is retrieved, call `TableOperation.delete` with the entity to delete. Then call `execute` on the `CloudTable` object. The following code retrieves and deletes a customer entity.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Create a cloud table object for the table.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+
+ // Create an operation to retrieve the entity with partition key of "Smith" and row key of "Jeff".
+ TableOperation retrieveSmithJeff = TableOperation.retrieve("Smith", "Jeff", CustomerEntity.class);
+
+ // Retrieve the entity with partition key of "Smith" and row key of "Jeff".
+ CustomerEntity entitySmithJeff =
+ cloudTable.execute(retrieveSmithJeff).getResultAsType();
+
+ // Create an operation to delete the entity.
+ TableOperation deleteSmithJeff = TableOperation.delete(entitySmithJeff);
+
+ // Submit the delete operation to the table service.
+ cloudTable.execute(deleteSmithJeff);
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
+
+## Delete a table
+
+Finally, the following code deletes a table from a storage account. After you delete a table, you can't re-create it for approximately 40 seconds.
+
+```java
+try
+{
+ // Retrieve storage account from connection-string.
+ CloudStorageAccount storageAccount =
+ CloudStorageAccount.parse(storageConnectionString);
+
+ // Create the table client.
+ CloudTableClient tableClient = storageAccount.createCloudTableClient();
+
+ // Delete the table and all its data if it exists.
+ CloudTable cloudTable = tableClient.getTableReference("people");
+ cloudTable.deleteIfExists();
+}
+catch (Exception e)
+{
+ // Output the stack trace.
+ e.printStackTrace();
+}
+```
++
+## Next steps
+
+* [Getting Started with Azure Table Service in Java](https://github.com/Azure-Samples/storage-table-java-getting-started)
+* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+* [Azure Storage SDK for Java][Azure Storage SDK for Java]
+* [Azure Storage Client SDK Reference][Azure Storage Client SDK Reference]
+* [Azure Storage REST API][Azure Storage REST API]
+* [Azure Storage Team Blog][Azure Storage Team Blog]
+
+For more information, visit [Azure for Java developers](/java/azure).
+
+[Azure SDK for Java]: https://go.microsoft.com/fwlink/?LinkID=525671
+[Azure Storage SDK for Java]: https://github.com/Azure/azure-storage-java/tree/v8.6.5
+[Azure Storage SDK for Android]: https://github.com/azure/azure-storage-android
+[Azure Storage Client SDK Reference]: https://azure.github.io/azure-storage-java/
+[Azure Storage REST API]: /rest/api/storageservices/
+[Azure Storage Team Blog]: https://blogs.msdn.microsoft.com/windowsazurestorage/
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-nodejs.md
+
+ Title: Use Azure Table storage or Azure Cosmos DB Table API from Node.js
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API from Node.js.
++
+ms.devlang: nodejs
+ Last updated : 07/23/2020++++
+# How to use Azure Table storage or the Azure Cosmos DB Table API from Node.js
++
+This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB Table API. The samples are written in Node.js.
+
+## Create an Azure service account
++
+**Create an Azure storage account**
++
+**Create an Azure Cosmos DB Table API account**
++
+## Configure your application to access Azure Storage or the Azure Cosmos DB Table API
+
+To use Azure Storage or Azure Cosmos DB, you need the Azure Storage SDK for Node.js, which includes a set of convenience libraries that
+communicate with the Storage REST services.
+
+### Use Node Package Manager (NPM) to install the package
+
+1. Use a command-line interface such as **PowerShell** (Windows), **Terminal** (Mac), or **Bash** (Unix), and navigate to the folder where you created your application.
+2. Type **npm install azure-storage** in the command window. Output from the command is similar to the following example.
+
+ ```bash
+ azure-storage@0.5.0 node_modules\azure-storage
+ +-- extend@1.2.1
+ +-- xmlbuilder@0.4.3
+ +-- mime@1.2.11
+ +-- node-uuid@1.4.3
+ +-- validator@3.22.2
+ +-- underscore@1.4.4
+ +-- readable-stream@1.0.33 (string_decoder@0.10.31, isarray@0.0.1, inherits@2.0.1, core-util-is@1.0.1)
+ +-- xml2js@0.2.7 (sax@0.5.2)
+ +-- request@2.57.0 (caseless@0.10.0, aws-sign2@0.5.0, forever-agent@0.6.1, stringstream@0.0.4, oauth-sign@0.8.0, tunnel-agent@0.4.1, isstream@0.1.2, json-stringify-safe@5.0.1, bl@0.9.4, combined-stream@1.0.5, qs@3.1.0, mime-types@2.0.14, form-data@0.2.0, http-signature@0.11.0, tough-cookie@2.0.0, hawk@2.3.1, har-validator@1.8.0)
+ ```
+
+3. You can manually run the **ls** command to verify that a **node_modules** folder was created. Inside that folder you will find the **azure-storage** package, which contains the libraries you need to access storage.
+
+### Import the package
+
+Add the following code to the top of the **server.js** file in your application:
+
+```javascript
+var azure = require('azure-storage');
+```
+
+## Add your connection string
+
+You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+
+### Add an Azure Storage connection
+
+The Azure module reads the environment variables AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY, or AZURE_STORAGE_CONNECTION_STRING for information required to connect to your Azure Storage account. If these environment variables are not set, you must specify the account information when calling `TableService`. For example, the following code creates a `TableService` object:
+
+```javascript
+var tableSvc = azure.createTableService('myaccount', 'myaccesskey');
+```
+
+### Add an Azure Cosmos DB connection
+
+To add an Azure Cosmos DB connection, create a `TableService` object and specify your account name, primary key, and endpoint. You can copy these values from **Settings** > **Connection String** in the Azure portal for your Cosmos DB account. For example:
+
+```javascript
+var tableSvc = azure.createTableService('myaccount', 'myprimarykey', 'myendpoint');
+```
+
+## Create a table
+
+The following code creates a `TableService` object and uses it to create a new table.
+
+```javascript
+var tableSvc = azure.createTableService();
+```
+
+The call to `createTableIfNotExists` creates a new table with the specified name if it does not already exist. The following example creates a new table named 'mytable':
+
+```javascript
+tableSvc.createTableIfNotExists('mytable', function(error, result, response){
+ if(!error){
+ // Table exists or created
+ }
+});
+```
+
+`result.created` is `true` if a new table was created, and `false` if the table already exists. `response` contains information about the request.
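+
+For example, a small sketch that logs which case occurred:
+
+```javascript
+tableSvc.createTableIfNotExists('mytable', function(error, result, response){
+  if(!error){
+    console.log(result.created ? 'Table created.' : 'Table already existed.');
+  }
+});
+```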
+
+### Apply filters
+
+You can apply optional filtering to operations performed using `TableService`. Filtering operations can include logging, automatic retries, etc. Filters are objects that implement a method with the signature:
+
+```javascript
+function handle (requestOptions, next)
+```
+
+After doing its preprocessing on the request options, the method must call **next**, passing a callback with the following signature:
+
+```javascript
+function (returnObject, finalCallback, next)
+```
+
+In this callback, after processing the `returnObject` (the response from the request to the server), the callback must either invoke `next`, if it exists, to continue processing other filters, or invoke `finalCallback` to end the service invocation.
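+
+For example, a pass-through filter that only logs before and after each request might look like the following sketch. It relies only on the `handle`/`next` contract described above; the third callback parameter is renamed `nextPostCallback` here to avoid shadowing the outer `next`.
+
+```javascript
+var loggingFilter = {
+  handle: function (requestOptions, next) {
+    console.log('Starting a table service request.');
+    next(requestOptions, function (returnObject, finalCallback, nextPostCallback) {
+      console.log('Finished a table service request.');
+      // Continue with the next post-processing filter if one exists,
+      // otherwise end the service invocation.
+      if (nextPostCallback) {
+        nextPostCallback(returnObject);
+      } else if (finalCallback) {
+        finalCallback(returnObject);
+      }
+    });
+  }
+};
+
+var tableSvc = azure.createTableService().withFilter(loggingFilter);
+```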
+
+Two filters that implement retry logic are included with the Azure SDK for Node.js, `ExponentialRetryPolicyFilter` and `LinearRetryPolicyFilter`. The following creates a `TableService` object that uses the `ExponentialRetryPolicyFilter`:
+
+```javascript
+var retryOperations = new azure.ExponentialRetryPolicyFilter();
+var tableSvc = azure.createTableService().withFilter(retryOperations);
+```
+
+## Add an entity to a table
+
+To add an entity, first create an object that defines your entity properties. All entities must contain a **PartitionKey** and **RowKey**, which are unique identifiers for the entity.
+
+* **PartitionKey** - Determines the partition in which the entity is stored.
+* **RowKey** - Uniquely identifies the entity within the partition.
+
+Both **PartitionKey** and **RowKey** must be string values. For more information, see [Understanding the Table Service Data Model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model).
+
+The following is an example of defining an entity. The **dueDate** is defined as a type of `Edm.DateTime`. Specifying the type is optional, and types are inferred if not specified.
+
+```javascript
+var task = {
+ PartitionKey: {'_':'hometasks'},
+ RowKey: {'_': '1'},
+ description: {'_':'take out the trash'},
+ dueDate: {'_':new Date(2015, 6, 20), '$':'Edm.DateTime'}
+};
+```
+
+> [!NOTE]
+> There is also a `Timestamp` field for each record, which is set by Azure when an entity is inserted or updated.
+
+You can also use the `entityGenerator` to create entities. The following example creates the same task entity using the `entityGenerator`.
+
+```javascript
+var entGen = azure.TableUtilities.entityGenerator;
+var task = {
+ PartitionKey: entGen.String('hometasks'),
+ RowKey: entGen.String('1'),
+ description: entGen.String('take out the trash'),
+ dueDate: entGen.DateTime(new Date(Date.UTC(2015, 6, 20))),
+};
+```
+
+To add an entity to your table, pass the entity object to the `insertEntity` method.
+
+```javascript
+tableSvc.insertEntity('mytable',task, function (error, result, response) {
+ if(!error){
+ // Entity inserted
+ }
+});
+```
+
+If the operation is successful, `result` contains the [ETag](https://en.wikipedia.org/wiki/HTTP_ETag) of the inserted record and `response` contains information about the operation.
+
+Example response:
+
+```javascript
+{ '.metadata': { etag: 'W/"datetime\'2015-02-25T01%3A22%3A22.5Z\'"' } }
+```
+
+> [!NOTE]
+> By default, `insertEntity` does not return the inserted entity as part of the `response` information. If you plan on performing other operations on this entity, or want to cache the information, it can be useful to have it returned as part of the `result`. You can do this by enabling `echoContent` as follows:
+>
+> `tableSvc.insertEntity('mytable', task, {echoContent: true}, function (error, result, response) {...}`
+
+## Update an entity
+
+There are multiple methods available to update an existing entity:
+
+* `replaceEntity` - Updates an existing entity by replacing it.
+* `mergeEntity` - Updates an existing entity by merging new property values into the existing entity.
+* `insertOrReplaceEntity` - Updates an existing entity by replacing it. If no entity exists, a new one will be inserted.
+* `insertOrMergeEntity` - Updates an existing entity by merging new property values into the existing. If no entity exists, a new one will be inserted.
+
+The following example demonstrates updating an entity using `replaceEntity`:
+
+```javascript
+tableSvc.replaceEntity('mytable', updatedTask, function(error, result, response){
+ if(!error) {
+ // Entity updated
+ }
+});
+```
+
+> [!NOTE]
+> By default, updating an entity does not check to see if the data being updated has previously been modified by another process. To support concurrent updates:
+>
+> 1. Get the ETag of the object being updated. This is returned as part of the `response` for any entity-related operation and can be retrieved through `response['.metadata'].etag`.
+> 2. When performing an update operation on an entity, add the ETag information previously retrieved to the new entity. For example:
+>
+> entity2['.metadata'].etag = currentEtag;
+> 3. Perform the update operation. If the entity has been modified since you retrieved the ETag value, for example by another instance of your application, an `error` is returned stating that the update condition specified in the request was not satisfied.
+>
+>
+
+With `replaceEntity` and `mergeEntity`, if the entity that is being updated doesn't exist, then the update operation fails; therefore, if you want to store an entity regardless of whether it already exists, use `insertOrReplaceEntity` or `insertOrMergeEntity`.
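+
+For example, an upsert-style call is identical to the earlier `insertEntity` call except for the method name. A minimal sketch, reusing the `task` object defined earlier:
+
+```javascript
+tableSvc.insertOrMergeEntity('mytable', task, function(error, result, response){
+  if(!error){
+    // Entity was inserted, or merged into the existing entity
+  }
+});
+```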
+
+The `result` for successful update operations contains the **ETag** of the updated entity.
+
+## Work with groups of entities
+
+Sometimes it makes sense to submit multiple operations together in a batch to ensure atomic processing by the server. To accomplish that, use the `TableBatch` class to create a batch, and then use the `executeBatch` method of `TableService` to perform the batched operations.
+
+ The following example demonstrates submitting two entities in a batch:
+
+```javascript
+var task1 = {
+ PartitionKey: {'_':'hometasks'},
+ RowKey: {'_': '1'},
+ description: {'_':'Take out the trash'},
+ dueDate: {'_':new Date(2015, 6, 20)}
+};
+var task2 = {
+ PartitionKey: {'_':'hometasks'},
+ RowKey: {'_': '2'},
+ description: {'_':'Wash the dishes'},
+ dueDate: {'_':new Date(2015, 6, 20)}
+};
+
+var batch = new azure.TableBatch();
+
+batch.insertEntity(task1, {echoContent: true});
+batch.insertEntity(task2, {echoContent: true});
+
+tableSvc.executeBatch('mytable', batch, function (error, result, response) {
+ if(!error) {
+ // Batch completed
+ }
+});
+```
+
+For successful batch operations, `result` contains information for each operation in the batch.
+
+### Work with batched operations
+
+You can inspect operations added to a batch by viewing the `operations` property. You can also use the following methods to work with operations:
+
+* **clear** - Clears all operations from a batch.
+* **getOperations** - Gets an operation from the batch.
+* **hasOperations** - Returns true if the batch contains operations.
+* **removeOperations** - Removes an operation.
+* **size** - Returns the number of operations in the batch.
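+
+For example, a quick sanity check before executing a batch, using `hasOperations`, `size`, and `clear` from the list above (a minimal sketch building on the `batch` object created earlier):
+
+```javascript
+if (batch.hasOperations()) {
+  console.log('Submitting ' + batch.size() + ' operations.');
+  tableSvc.executeBatch('mytable', batch, function (error, result, response) {
+    if(!error) {
+      // Reuse the batch object for the next group of operations.
+      batch.clear();
+    }
+  });
+}
+```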
+
+## Retrieve an entity by key
+
+To return a specific entity based on the **PartitionKey** and **RowKey**, use the **retrieveEntity** method.
+
+```javascript
+tableSvc.retrieveEntity('mytable', 'hometasks', '1', function(error, result, response){
+ if(!error){
+ // result contains the entity
+ }
+});
+```
+
+After this operation is complete, `result` contains the entity.
+
+## Query a set of entities
+
+To query a table, use the **TableQuery** object to build up a query expression using the following clauses:
+
+* **select** - The fields to be returned from the query.
+* **where** - The where clause.
+
+ * **and** - An `and` where condition.
+ * **or** - An `or` where condition.
+* **top** - The number of items to fetch.
+
+The following example builds a query that returns the top five items with a PartitionKey of 'hometasks'.
+
+```javascript
+var query = new azure.TableQuery()
+ .top(5)
+ .where('PartitionKey eq ?', 'hometasks');
+```
+
+Because **select** is not used, all fields are returned. To perform the query against a table, use **queryEntities**. The following example uses this query to return entities from 'mytable'.
+
+```javascript
+tableSvc.queryEntities('mytable',query, null, function(error, result, response) {
+ if(!error) {
+ // query was successful
+ }
+});
+```
+
+If successful, `result.entries` contains an array of entities that match the query. If the query was unable to return all entities, `result.continuationToken` is non-*null* and can be used as the third parameter of **queryEntities** to retrieve more results. For the initial query, use *null* for the third parameter.
+
+### Query a subset of entity properties
+
+A query to a table can retrieve just a few fields from an entity.
+This reduces bandwidth and can improve query performance, especially for large entities. Use the **select** clause and pass the names of the fields to return. For example, the following query returns only the **description** and **dueDate** fields.
+
+```javascript
+var query = new azure.TableQuery()
+ .select(['description', 'dueDate'])
+ .top(5)
+ .where('PartitionKey eq ?', 'hometasks');
+```
+
+## Delete an entity
+
+You can delete an entity using its partition and row keys. In this example, the **task1** object contains the **RowKey** and **PartitionKey** values of the entity to delete. Then the object is passed to the **deleteEntity** method.
+
+```javascript
+var task = {
+ PartitionKey: {'_':'hometasks'},
+ RowKey: {'_': '1'}
+};
+
+tableSvc.deleteEntity('mytable', task, function(error, response){
+ if(!error) {
+ // Entity deleted
+ }
+});
+```
+
+> [!NOTE]
+> Consider using ETags when deleting items, to ensure that the item hasn't been modified by another process. See [Update an entity](#update-an-entity) for information on using ETags.
+>
+>
+
+## Delete a table
+
+The following code deletes a table from a storage account.
+
+```javascript
+tableSvc.deleteTable('mytable', function(error, response){
+ if(!error){
+ // Table deleted
+ }
+});
+```
+
+If you are uncertain whether the table exists, use **deleteTableIfExists**.
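+
+A minimal sketch of that call, mirroring the callback style above (only the error argument is inspected here):
+
+```javascript
+tableSvc.deleteTableIfExists('mytable', function(error){
+  if(!error){
+    // The table is gone, whether or not it existed beforehand
+  }
+});
+```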
+
+## Use continuation tokens
+
+When you query tables for large numbers of results, always check for continuation tokens. A query can have more data available than it returns in a single response; if your application doesn't recognize and handle continuation tokens, it silently misses that data.
+
+The **results** object returned when you query entities sets a `continuationToken` property when such a token is present. You can then pass this token into a subsequent query to continue moving across the partition and table entities.
+
+When querying, you can provide a `continuationToken` parameter between the query object instance and the callback function:
+
+```javascript
+var nextContinuationToken = null;
+dc.table.queryEntities(tableName,
+ query,
+ nextContinuationToken,
+ function (error, results) {
+ if (error) throw error;
+
+ // iterate through results.entries with results
+
+ if (results.continuationToken) {
+ nextContinuationToken = results.continuationToken;
+ }
+
+ });
+```
+
+If you inspect the `continuationToken` object, you will find properties such as `nextPartitionKey`, `nextRowKey` and `targetLocation`, which can be used to iterate through all the results.
+
+You can also use `top` along with `continuationToken` to set the page size.
+
+## Work with shared access signatures
+
+Shared access signatures (SAS) are a secure way to provide granular access to tables without providing your Storage account name or keys. A SAS is often used to provide limited access to your data, such as allowing a mobile app to query records.
+
+A trusted application such as a cloud-based service generates a SAS by using the **generateSharedAccessSignature** method of the **TableService**, and provides it to an untrusted or semi-trusted application such as a mobile app. The SAS is generated by using a policy, which describes the start and end dates during which the SAS is valid, as well as the access level granted to the SAS holder.
+
+The following example generates a new shared access policy that will allow the SAS holder to query ('r') the table, and expires 100 minutes after the time it is created.
+
+```javascript
+var startDate = new Date();
+var expiryDate = new Date(startDate);
+expiryDate.setMinutes(startDate.getMinutes() + 100);
+startDate.setMinutes(startDate.getMinutes() - 100);
+
+var sharedAccessPolicy = {
+ AccessPolicy: {
+ Permissions: azure.TableUtilities.SharedAccessPermissions.QUERY,
+ Start: startDate,
+ Expiry: expiryDate
+ },
+};
+
+var tableSAS = tableSvc.generateSharedAccessSignature('mytable', sharedAccessPolicy);
+var host = tableSvc.host;
+```
+
+Note that you must also provide the host information, as it is required when the SAS holder attempts to access the table.
+
+The client application then uses the SAS with **TableServiceWithSAS** to perform operations against the table. The following example connects to the table and performs a query. See [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) for the format of tableSAS.
+
+```javascript
+// Note in the following command, host is in the format: `https://<your_storage_account_name>.table.core.windows.net` and the tableSAS is in the format: `sv=2018-03-28&si=saspolicy&tn=mytable&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D`;
+
+var sharedTableService = azure.createTableServiceWithSas(host, tableSAS);
+var query = new azure.TableQuery()
+    .where('PartitionKey eq ?', 'hometasks');
+
+sharedTableService.queryEntities('mytable', query, null, function(error, result, response) {
+ if(!error) {
+ // result contains the entities
+ }
+});
+```
+
+Because the SAS was generated with only query access, an error is returned if you attempt to insert, update, or delete entities.
+
+### Access Control Lists
+
+You can also use an Access Control List (ACL) to set the access policy for a SAS. This is useful if you want to allow multiple clients to access the table, but provide different access policies for each client.
+
+An ACL is implemented using an array of access policies, with an ID associated with each policy. The following example defines two policies, one for 'user1' and one for 'user2':
+
+```javascript
+var sharedAccessPolicy = {
+ user1: {
+ Permissions: azure.TableUtilities.SharedAccessPermissions.QUERY,
+ Start: startDate,
+ Expiry: expiryDate
+ },
+ user2: {
+ Permissions: azure.TableUtilities.SharedAccessPermissions.ADD,
+ Start: startDate,
+ Expiry: expiryDate
+ }
+};
+```
+
+The following example gets the current ACL for the **hometasks** table, and then adds the new policies by using **setTableAcl**. This approach preserves any existing access policies while adding the new ones:
+
+```javascript
+var extend = require('extend');
+tableSvc.getTableAcl('hometasks', function(error, result, response) {
+  if(!error){
+ var newSignedIdentifiers = extend(true, result.signedIdentifiers, sharedAccessPolicy);
+ tableSvc.setTableAcl('hometasks', newSignedIdentifiers, function(error, result, response){
+ if(!error){
+ // ACL set
+ }
+ });
+ }
+});
+```
+
+After the ACL has been set, you can then create a SAS based on the ID for a policy. The following example creates a new SAS for 'user2':
+
+```javascript
+tableSAS = tableSvc.generateSharedAccessSignature('hometasks', { Id: 'user2' });
+```
+
+## Next steps
+
+For more information, see the following resources.
+
+* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+* [Azure Storage SDK for Node.js](https://github.com/Azure/azure-storage-node) repository on GitHub.
+* [Azure for Node.js Developers](/azure/developer/javascript/)
+* [Create a Node.js web app in Azure](../../app-service/quickstart-nodejs.md)
+* [Build and deploy a Node.js application to an Azure Cloud Service](../../cloud-services/cloud-services-nodejs-develop-deploy-app.md) (using Windows PowerShell)
cosmos-db How To Use Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-php.md
+
+ Title: Use Azure Storage Table service or Azure Cosmos DB Table API from PHP
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API from PHP.
++++
+ms.devlang: php
+ Last updated : 07/23/2020+
+# How to use Azure Storage Table service or the Azure Cosmos DB Table API from PHP
++
+This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB Table API. The samples are written in PHP and use the [Azure Storage Table PHP Client Library][download]. The scenarios covered include **creating and deleting a table**, and **inserting, deleting, and querying entities in a table**. For more information on the Azure Table service, see the [Next steps](#next-steps) section.
+
+## Create an Azure service account
++
+**Create an Azure storage account**
++
+**Create an Azure Cosmos DB Table API account**
++
+## Create a PHP application
+
+The only requirement to create a PHP application to access the Storage Table service or Azure Cosmos DB Table API is to reference classes in the azure-storage-table SDK for PHP from within your code. You can use any development tools to create your application, including Notepad.
+
+In this guide, you use Storage Table service or Azure Cosmos DB features that can be called from within a PHP application locally, or in code running within an Azure web role, worker role, or website.
+
+## Get the client library
+
+1. Create a file named composer.json in the root of your project and add the following code to it:
+ ```json
+ {
+ "require": {
+ "microsoft/azure-storage-table": "*"
+ }
+ }
+ ```
+2. Download [composer.phar](https://getcomposer.org/composer.phar) in your root.
+3. Open a command prompt and execute the following command in your project root:
+ ```
+ php composer.phar install
+ ```
+ Alternatively, go to the [Azure Storage Table PHP Client Library](https://github.com/Azure/azure-storage-php/tree/master/azure-storage-table) on GitHub to clone the source code.
+
+## Add required references
+
+To use the Storage Table service or Azure Cosmos DB APIs, you must:
+
+* Reference the autoloader file using the [require_once][require_once] statement, and
+* Reference any classes you use.
+
+The following example shows how to include the autoloader file and reference the **TableRestProxy** class.
+
+```php
+require_once 'vendor/autoload.php';
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+```
+
+In the examples below, the `require_once` statement is always shown, but only the classes necessary for the example to execute are referenced.
+
+## Add your connection string
+
+You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+
+### Add a Storage Table service connection
+
+To instantiate a Storage Table service client, you must first have a valid connection string. The format for the Storage Table service connection string is:
+
+```php
+$connectionString = "DefaultEndpointsProtocol=[http|https];AccountName=[yourAccount];AccountKey=[yourKey]"
+```
+
+### Add a Storage Emulator connection
+
+To access the emulator storage:
+
+```php
+UseDevelopmentStorage=true
+```
+
+### Add an Azure Cosmos DB connection
+
+To instantiate an Azure Cosmos DB Table client, you must first have a valid connection string. The format for the Azure Cosmos DB connection string is:
+
+```php
+$connectionString = "DefaultEndpointsProtocol=[https];AccountName=[myaccount];AccountKey=[myaccountkey];TableEndpoint=[https://myendpoint/]";
+```
+
+To create an Azure Table service client or Azure Cosmos DB client, you need to use the **TableRestProxy** class. You can:
+
+* Pass the connection string directly to it or
+* Use the **CloudConfigurationManager (CCM)** to check multiple external sources for the connection string:
+ * By default, it comes with support for one external source - environmental variables.
+ * You can add new sources by extending the `ConnectionStringSource` class.
+
+For the examples outlined here, the connection string is passed directly.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+
+$tableClient = TableRestProxy::createTableService($connectionString);
+```
+
+## Create a table
+
+A **TableRestProxy** object lets you create a table with the **createTable** method. When creating a table, you can set the Table service timeout. (For more information about the Table service timeout, see [Setting Timeouts for Table Service Operations][table-service-timeouts].)
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create Table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+try {
+ // Create table.
+ $tableClient->createTable("mytable");
+}
+catch(ServiceException $e){
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ // Handle exception based on error codes and messages.
+ // Error codes and messages can be found here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+}
+```
+
+For information about restrictions on table names, see [Understanding the Table Service Data Model][table-data-model].
+
+## Add an entity to a table
+
+To add an entity to a table, create a new **Entity** object and pass it to **TableRestProxy->insertEntity**. Note that when you create an entity, you must specify a `PartitionKey` and `RowKey`. These are the unique identifiers for an entity and are values that can be queried much faster than other entity properties. The system uses `PartitionKey` to automatically distribute the table's entities over many Storage nodes. Entities with the same `PartitionKey` are stored on the same node. (Operations on multiple entities stored on the same node perform better than on entities stored across different nodes.) The `RowKey` is the unique ID of an entity within a partition.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+use MicrosoftAzure\Storage\Table\Models\Entity;
+use MicrosoftAzure\Storage\Table\Models\EdmType;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+$entity = new Entity();
+$entity->setPartitionKey("tasksSeattle");
+$entity->setRowKey("1");
+$entity->addProperty("Description", null, "Take out the trash.");
+$entity->addProperty("DueDate",
+ EdmType::DATETIME,
+ new DateTime("2012-11-05T08:15:00-08:00"));
+$entity->addProperty("Location", EdmType::STRING, "Home");
+
+try{
+ $tableClient->insertEntity("mytable", $entity);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+}
+```
+
+For information about Table properties and types, see [Understanding the Table Service Data Model][table-data-model].
+
+The **TableRestProxy** class offers two alternative methods for inserting entities: **insertOrMergeEntity** and **insertOrReplaceEntity**. To use these methods, create a new **Entity** and pass it as a parameter to either method. Each method will insert the entity if it does not exist. If the entity already exists, **insertOrMergeEntity** updates property values if the properties already exist and adds new properties if they do not exist, while **insertOrReplaceEntity** completely replaces an existing entity. The following example shows how to use **insertOrMergeEntity**. If the entity with `PartitionKey` "tasksSeattle" and `RowKey` "1" does not already exist, it will be inserted. However, if it has previously been inserted (as shown in the example above), the `DueDate` property is updated, and the `Status` property is added. The `Description` and `Location` properties are also updated, but with values that effectively leave them unchanged. If these latter two properties were not added as shown in the example, but existed on the target entity, their existing values would remain unchanged.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+use MicrosoftAzure\Storage\Table\Models\Entity;
+use MicrosoftAzure\Storage\Table\Models\EdmType;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+//Create new entity.
+$entity = new Entity();
+
+// PartitionKey and RowKey are required.
+$entity->setPartitionKey("tasksSeattle");
+$entity->setRowKey("1");
+
+// If entity exists, existing properties are updated with new values and
+// new properties are added. Missing properties are unchanged.
+$entity->addProperty("Description", null, "Take out the trash.");
+$entity->addProperty("DueDate", EdmType::DATETIME, new DateTime()); // Modified the DueDate field.
+$entity->addProperty("Location", EdmType::STRING, "Home");
+$entity->addProperty("Status", EdmType::STRING, "Complete"); // Added Status field.
+
+try {
+ // Calling insertOrReplaceEntity, instead of insertOrMergeEntity as shown,
+ // would simply replace the entity with PartitionKey "tasksSeattle" and RowKey "1".
+ $tableClient->insertOrMergeEntity("mytable", $entity);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+```
+
+## Retrieve a single entity
+
+The **TableRestProxy->getEntity** method allows you to retrieve a single entity by querying for its `PartitionKey` and `RowKey`. In the example below, the partition key `tasksSeattle` and row key `1` are passed to the **getEntity** method.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+try {
+ $result = $tableClient->getEntity("mytable", "tasksSeattle", 1);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+
+$entity = $result->getEntity();
+
+echo $entity->getPartitionKey().":".$entity->getRowKey();
+```
+
+## Retrieve all entities in a partition
+
+Entity queries are constructed using filters (for more information, see [Querying Tables and Entities][filters]). To retrieve all entities in a partition, use the filter "PartitionKey eq *partition_name*". The following example shows how to retrieve all entities in the `tasksSeattle` partition by passing a filter to the **queryEntities** method.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+$filter = "PartitionKey eq 'tasksSeattle'";
+
+try {
+ $result = $tableClient->queryEntities("mytable", $filter);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+
+$entities = $result->getEntities();
+
+foreach($entities as $entity){
+ echo $entity->getPartitionKey().":".$entity->getRowKey()."<br />";
+}
+```
+
+## Retrieve a subset of entities in a partition
+
+The same pattern used in the previous example can be used to retrieve any subset of entities in a partition. The subset of entities you retrieve is determined by the filter you use (for more information, see [Querying Tables and Entities][filters]). The following example shows how to use a filter to retrieve all entities with a specific `Location` and a `DueDate` less than a specified date.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+$filter = "Location eq 'Office' and DueDate lt '2012-11-5'";
+
+try {
+ $result = $tableClient->queryEntities("mytable", $filter);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+
+$entities = $result->getEntities();
+
+foreach($entities as $entity){
+ echo $entity->getPartitionKey().":".$entity->getRowKey()."<br />";
+}
+```
+
+## Retrieve a subset of entity properties
+
+A query can retrieve a subset of entity properties. This technique, called *projection*, reduces bandwidth and can improve query performance, especially for large entities. To specify a property to retrieve, pass the name of the property to the **Query->addSelectField** method. You can call this method multiple times to add more properties. After executing **TableRestProxy->queryEntities**, the returned entities will only have the selected properties. (If you want to return a subset of Table entities, use a filter as shown in the queries above.)
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+use MicrosoftAzure\Storage\Table\Models\QueryEntitiesOptions;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+$options = new QueryEntitiesOptions();
+$options->addSelectField("Description");
+
+try {
+ $result = $tableClient->queryEntities("mytable", $options);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+
+// All entities in the table are returned, regardless of whether
+// they have the Description field.
+// To limit the results returned, use a filter.
+$entities = $result->getEntities();
+
+foreach($entities as $entity){
+ $description = $entity->getProperty("Description")->getValue();
+ echo $description."<br />";
+}
+```
+
+## Update an entity
+
+You can update an existing entity by using the **Entity->setPropertyValue** and **Entity->addProperty** methods on the entity, and then calling **TableRestProxy->updateEntity**. The following example retrieves an entity, modifies one property, removes another property, and adds a new property. Note that you can remove a property by setting its value to **null**.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+use MicrosoftAzure\Storage\Table\Models\Entity;
+use MicrosoftAzure\Storage\Table\Models\EdmType;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+$result = $tableClient->getEntity("mytable", "tasksSeattle", 1);
+
+$entity = $result->getEntity();
+$entity->setPropertyValue("DueDate", new DateTime()); //Modified DueDate.
+$entity->setPropertyValue("Location", null); //Removed Location.
+$entity->addProperty("Status", EdmType::STRING, "In progress"); //Added Status.
+
+try {
+ $tableClient->updateEntity("mytable", $entity);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+```
+
+## Delete an entity
+
+To delete an entity, pass the table name, and the entity's `PartitionKey` and `RowKey` to the **TableRestProxy->deleteEntity** method.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+try {
+ // Delete entity.
+ $tableClient->deleteEntity("mytable", "tasksSeattle", "2");
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+```
+
+For concurrency checks, you can set the ETag for an entity to be deleted by using the **DeleteEntityOptions->setEtag** method and passing the **DeleteEntityOptions** object to **deleteEntity** as a fourth parameter.
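+
+A sketch of that pattern follows. It assumes the **DeleteEntityOptions** class lives in the same `Models` namespace as the other option classes used in this article, and that `$etag` was captured from an earlier retrieval (for example, from a **getEntity** result); adjust those details to match the SDK reference.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Table\Models\DeleteEntityOptions;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+// Only delete the entity if it still carries the ETag captured when it was read.
+$options = new DeleteEntityOptions();
+$options->setEtag($etag); // $etag captured earlier, e.g. from a previous getEntity() call.
+
+try {
+    // Delete entity with a conditional (If-Match) check.
+    $tableClient->deleteEntity("mytable", "tasksSeattle", "2", $options);
+}
+catch(ServiceException $e){
+    // A precondition-failed error indicates the entity changed after the ETag was read.
+    $code = $e->getCode();
+    $error_message = $e->getMessage();
+    echo $code.": ".$error_message."<br />";
+}
+```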
+
+## Batch table operations
+
+The **TableRestProxy->batch** method allows you to execute multiple operations in a single request. The pattern here involves adding operations to a **BatchOperations** object and then passing that object to the **TableRestProxy->batch** method. To add an operation to a **BatchOperations** object, you can call any of the following methods multiple times:
+
+* **addInsertEntity** (adds an insertEntity operation)
+* **addUpdateEntity** (adds an updateEntity operation)
+* **addMergeEntity** (adds a mergeEntity operation)
+* **addInsertOrReplaceEntity** (adds an insertOrReplaceEntity operation)
+* **addInsertOrMergeEntity** (adds an insertOrMergeEntity operation)
+* **addDeleteEntity** (adds a deleteEntity operation)
+
+The following example shows how to execute **insertEntity** and **deleteEntity** operations in a single request.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+use MicrosoftAzure\Storage\Table\Models\Entity;
+use MicrosoftAzure\Storage\Table\Models\EdmType;
+use MicrosoftAzure\Storage\Table\Models\BatchOperations;
+
+// Configure a connection string for Storage Table service.
+$connectionString = "DefaultEndpointsProtocol=[http|https];AccountName=[yourAccount];AccountKey=[yourKey]"
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+// Create list of batch operation.
+$operations = new BatchOperations();
+
+$entity1 = new Entity();
+$entity1->setPartitionKey("tasksSeattle");
+$entity1->setRowKey("2");
+$entity1->addProperty("Description", null, "Clean roof gutters.");
+$entity1->addProperty("DueDate",
+ EdmType::DATETIME,
+ new DateTime("2012-11-05T08:15:00-08:00"));
+$entity1->addProperty("Location", EdmType::STRING, "Home");
+
+// Add operation to list of batch operations.
+$operations->addInsertEntity("mytable", $entity1);
+
+// Add operation to list of batch operations.
+$operations->addDeleteEntity("mytable", "tasksSeattle", "1");
+
+try {
+ $tableClient->batch($operations);
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+```
+
+For more information about batching Table operations, see [Performing Entity Group Transactions][entity-group-transactions].
+
+## Delete a table
+
+Finally, to delete a table, pass the table name to the **TableRestProxy->deleteTable** method.
+
+```php
+require_once 'vendor/autoload.php';
+
+use MicrosoftAzure\Storage\Table\TableRestProxy;
+use MicrosoftAzure\Storage\Common\Exceptions\ServiceException;
+
+// Create table REST proxy.
+$tableClient = TableRestProxy::createTableService($connectionString);
+
+try {
+ // Delete table.
+ $tableClient->deleteTable("mytable");
+}
+catch(ServiceException $e){
+ // Handle exception based on error codes and messages.
+ // Error codes and messages are here:
+ // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ $code = $e->getCode();
+ $error_message = $e->getMessage();
+ echo $code.": ".$error_message."<br />";
+}
+```
+
+## Next steps
+
+Now that you've learned the basics of the Azure Table service and Azure Cosmos DB, follow these links to learn more.
+
+* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+
+* [PHP Developer Center](https://azure.microsoft.com/develop/php/).
+
+[download]: https://packagist.org/packages/microsoft/azure-storage-table
+[require_once]: https://php.net/require_once
+[table-service-timeouts]: /rest/api/storageservices/setting-timeouts-for-table-service-operations
+
+[table-data-model]: /rest/api/storageservices/Understanding-the-Table-Service-Data-Model
+[filters]: /rest/api/storageservices/Querying-Tables-and-Entities
+[entity-group-transactions]: /rest/api/storageservices/Performing-Entity-Group-Transactions
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-python.md
+
+ Title: Use Azure Cosmos DB Table API and Azure Table storage using Python
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API by using Python.
++
+ms.devlang: python
+ Last updated : 03/23/2021+++++
+# Get started with Azure Table storage and the Azure Cosmos DB Table API using Python
++
+Azure Table storage and Azure Cosmos DB are services that store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because Table storage and Azure Cosmos DB are schemaless, it's easy to adapt your data as the needs of your application evolve. Access to Table storage and Table API data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
+
+You can use Table storage or Azure Cosmos DB to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
+
+### About this sample
+
+This sample shows you how to use the [Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/) in common Azure Table storage scenarios. The name of the SDK indicates it is for use with Azure Cosmos DB, but it works with both Azure Cosmos DB and Azure Table storage; each service just has a unique endpoint. These scenarios are explored using Python examples that illustrate how to:
+
+* Create and delete tables
+* Insert and query entities
+* Modify entities
+
+While working through the scenarios in this sample, you may want to refer to the [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb).
+
+## Prerequisites
+
+You need the following to complete this sample successfully:
+
+* [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
+* [Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/). This SDK connects with both Azure Table storage and the Azure Cosmos DB Table API.
+* [Azure Storage account](../../storage/common/storage-account-create.md) or [Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/).
+
+## Create an Azure service account
++
+**Create an Azure storage account**
++
+**Create an Azure Cosmos DB Table API account**
++
+## Install the Azure Cosmos DB Table SDK for Python
+
+After you've created a Storage account, your next step is to install the [Microsoft Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/). For details on installing the SDK, refer to the [README.rst](https://github.com/Azure/azure-cosmosdb-python/blob/master/azure-cosmosdb-table/README.rst) file in the Cosmos DB Table SDK for Python repository on GitHub.
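+
+In most environments, you can install the package from the Python Package Index with pip; the exact command can vary depending on your Python setup:
+
+```bash
+pip install azure-cosmosdb-table
+```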
+
+## Import the TableService and Entity classes
+
+To work with entities in the Azure Table service in Python, you use the [TableService][py_TableService] and [Entity][py_Entity] classes. Add this code near the top of your Python file to import both:
+
+```python
+from azure.cosmosdb.table.tableservice import TableService
+from azure.cosmosdb.table.models import Entity
+```
+
+## Connect to Azure Table service
+
+To connect to the Azure Storage Table service, create a [TableService][py_TableService] object, and pass in your storage account name and account key. Replace `myaccount` and `mykey` with your account name and key.
+
+```python
+table_service = TableService(account_name='myaccount', account_key='mykey')
+```
+
+## Connect to Azure Cosmos DB
+
+To connect to Azure Cosmos DB, copy your primary connection string from the Azure portal, and create a [TableService][py_TableService] object using your copied connection string:
+
+```python
+table_service = TableService(connection_string='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;TableEndpoint=myendpoint;')
+```
+
+## Create a table
+
+Call [create_table][py_create_table] to create the table.
+
+```python
+table_service.create_table('tasktable')
+```
+
+## Add an entity to a table
+
+To add an entity, you first create an object that represents your entity, then pass the object to the [TableService.insert_entity method][py_TableService]. The entity object can be a dictionary or an object of type [Entity][py_Entity], and defines your entity's property names and values. Every entity must include the required [PartitionKey and RowKey](#partitionkey-and-rowkey) properties, in addition to any other properties you define for the entity.
+
+This example creates a dictionary object representing an entity, then passes it to the [insert_entity][py_insert_entity] method to add it to the table:
+
+```python
+task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001',
+ 'description': 'Take out the trash', 'priority': 200}
+table_service.insert_entity('tasktable', task)
+```
+
+This example creates an [Entity][py_Entity] object, then passes it to the [insert_entity][py_insert_entity] method to add it to the table:
+
+```python
+task = Entity()
+task.PartitionKey = 'tasksSeattle'
+task.RowKey = '002'
+task.description = 'Wash the car'
+task.priority = 100
+table_service.insert_entity('tasktable', task)
+```
+
+### PartitionKey and RowKey
+
+You must specify both a **PartitionKey** and a **RowKey** property for every entity. These are the unique identifiers of your entities, as together they form the primary key of an entity. You can query using these values much faster than you can query any other entity properties because only these properties are indexed.
+
+The Table service uses **PartitionKey** to intelligently distribute table entities across storage nodes. Entities that have the same **PartitionKey** are stored on the same node. **RowKey** is the unique ID of the entity within the partition it belongs to.
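+
+As an illustration (a minimal sketch reusing the `tasktable` entities created above), a point read on **PartitionKey** and **RowKey** is resolved directly through the index, while filtering on a non-key property such as `priority` requires the service to scan candidate entities:
+
+```python
+# Point read: resolved directly through the PartitionKey/RowKey index.
+task = table_service.get_entity('tasktable', 'tasksSeattle', '001')
+
+# Filter on a non-key property: the service must scan entities to evaluate it.
+high_priority = table_service.query_entities(
+    'tasktable', filter="priority ge 200")
+for task in high_priority:
+    print(task.description)
+```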
+
+## Update an entity
+
+To update all of an entity's property values, call the [update_entity][py_update_entity] method. This example shows how to replace an existing entity with an updated version:
+
+```python
+task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001',
+ 'description': 'Take out the garbage', 'priority': 250}
+table_service.update_entity('tasktable', task)
+```
+
+If the entity that is being updated doesn't already exist, then the update operation will fail. If you want to store an entity whether it exists or not, use [insert_or_replace_entity][py_insert_or_replace_entity]. In the following example, the first call will replace the existing entity. The second call will insert a new entity, since no entity with the specified PartitionKey and RowKey exists in the table.
+
+```python
+# Replace the entity created earlier
+task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001',
+ 'description': 'Take out the garbage again', 'priority': 250}
+table_service.insert_or_replace_entity('tasktable', task)
+
+# Insert a new entity
+task = {'PartitionKey': 'tasksSeattle', 'RowKey': '003',
+ 'description': 'Buy detergent', 'priority': 300}
+table_service.insert_or_replace_entity('tasktable', task)
+```
+
+> [!TIP]
+> The [update_entity][py_update_entity] method replaces all properties and values of an existing entity, so you can also use it to remove properties from an existing entity. You can use the [merge_entity][py_merge_entity] method to update an existing entity with new or modified property values without completely replacing the entity.
+
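+For example, the following minimal sketch (reusing the entity created earlier) merges a new property into the existing entity without touching its other properties; the `status` property name is purely illustrative:
+
+```python
+# merge_entity adds or updates only the listed properties and leaves the rest intact.
+task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001', 'status': 'in progress'}
+table_service.merge_entity('tasktable', task)
+```
+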
+## Modify multiple entities
+
+To ensure the atomic processing of a request by the Table service, you can submit multiple operations together in a batch. First, use the [TableBatch][py_TableBatch] class to add multiple operations to a single batch. Next, call [TableService][py_TableService].[commit_batch][py_commit_batch] to submit the operations atomically. All entities to be modified in the batch must be in the same partition.
+
+This example adds two entities together in a batch:
+
+```python
+from azure.cosmosdb.table.tablebatch import TableBatch
+batch = TableBatch()
+task004 = {'PartitionKey': 'tasksSeattle', 'RowKey': '004',
+ 'description': 'Go grocery shopping', 'priority': 400}
+task005 = {'PartitionKey': 'tasksSeattle', 'RowKey': '005',
+ 'description': 'Clean the bathroom', 'priority': 100}
+batch.insert_entity(task004)
+batch.insert_entity(task005)
+table_service.commit_batch('tasktable', batch)
+```
+
+Batches can also be used with the context manager syntax:
+
+```python
+task006 = {'PartitionKey': 'tasksSeattle', 'RowKey': '006',
+ 'description': 'Go grocery shopping', 'priority': 400}
+task007 = {'PartitionKey': 'tasksSeattle', 'RowKey': '007',
+ 'description': 'Clean the bathroom', 'priority': 100}
+
+with table_service.batch('tasktable') as batch:
+ batch.insert_entity(task006)
+ batch.insert_entity(task007)
+```
+
+## Query for an entity
+
+To query for an entity in a table, pass its PartitionKey and RowKey to the [TableService][py_TableService].[get_entity][py_get_entity] method.
+
+```python
+task = table_service.get_entity('tasktable', 'tasksSeattle', '001')
+print(task.description)
+print(task.priority)
+```
+
+## Query a set of entities
+
+You can query for a set of entities by supplying a filter string with the **filter** parameter. This example finds all tasks in Seattle by applying a filter on PartitionKey:
+
+```python
+tasks = table_service.query_entities(
+ 'tasktable', filter="PartitionKey eq 'tasksSeattle'")
+for task in tasks:
+ print(task.description)
+ print(task.priority)
+```
+
+## Query a subset of entity properties
+
+You can also restrict which properties are returned for each entity in a query. This technique, called *projection*, reduces bandwidth and can improve query performance, especially for large entities or result sets. Use the **select** parameter and pass the names of the properties you want returned to the client.
+
+The query in the following code returns only the descriptions of entities in the table.
+
+> [!NOTE]
+> The following snippet works only against Azure Storage. It isn't supported by the Storage Emulator.
+
+```python
+tasks = table_service.query_entities(
+ 'tasktable', filter="PartitionKey eq 'tasksSeattle'", select='description')
+for task in tasks:
+ print(task.description)
+```
+
+## Query for an entity without partition and row keys
+
+You can also query for entities within a table without using the partition and row keys. Use the `table_service.query_entities` method without the `filter` and `select` parameters, as shown in the following example:
+
+```python
+print("Get the first item from the table")
+tasks = table_service.query_entities(
+ 'tasktable')
+lst = list(tasks)
+print(lst[0])
+```
+
+## Delete an entity
+
+Delete an entity by passing its **PartitionKey** and **RowKey** to the [delete_entity][py_delete_entity] method.
+
+```python
+table_service.delete_entity('tasktable', 'tasksSeattle', '001')
+```
+
+## Delete a table
+
+If you no longer need a table or any of the entities within it, call the [delete_table][py_delete_table] method to permanently delete the table from Azure Storage.
+
+```python
+table_service.delete_table('tasktable')
+```
+
+## Next steps
+
+* [FAQ - Develop with the Table API](table-api-faq.yml)
+* [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb)
+* [Python Developer Center](https://azure.microsoft.com/develop/python/)
+* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md): A free, cross-platform application for working visually with Azure Storage data on Windows, macOS, and Linux.
+* [Working with Python in Visual Studio (Windows)](/visualstudio/python/overview-of-python-tools-for-visual-studio)
+++
+[py_commit_batch]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_create_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_delete_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_get_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_insert_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_insert_or_replace_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_Entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.models.entity
+[py_merge_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_update_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_delete_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_TableService]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_TableBatch]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
cosmos-db How To Use Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-ruby.md
+
+ Title: Use Azure Cosmos DB Table API and Azure Table Storage with Ruby
+description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API.
++
+ms.devlang: ruby
+ Last updated : 07/23/2020++++
+# How to use Azure Table Storage and the Azure Cosmos DB Table API with Ruby
++
+This article shows you how to create tables, store your data, and perform CRUD operations on the data. Choose either the Azure Table service or the Azure Cosmos DB Table API. The samples described in this article are written in Ruby and use the [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). The scenarios covered include creating a table, deleting a table, inserting entities, and querying entities from the table.
+
+## Create an Azure service account
++
+**Create an Azure storage account**
++
+**Create an Azure Cosmos DB account**
++
+## Add access to Azure storage or Azure Cosmos DB
+
+To use Azure Storage or Azure Cosmos DB, you must download and use the Ruby Azure package that includes a set of convenience libraries that communicate with the Table REST services.
+
+### Use RubyGems to obtain the package
+
+1. Use a command-line interface such as **PowerShell** (Windows), **Terminal** (Mac), or **Bash** (Unix).
+2. Type **gem install azure-storage-table** in the command window to install the gem and dependencies.
+
+### Import the package
+
+Using your favorite text editor, add the following to the top of the Ruby file where you intend to use Azure Storage:
+
+```ruby
+require "azure/storage/table"
+```
+
+## Add your connection string
+
+You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
+
+### Add an Azure Storage connection
+
+The Azure Storage module reads the environment variables **AZURE_STORAGE_ACCOUNT** and **AZURE_STORAGE_ACCESS_KEY** for information required to connect to your Azure Storage account. If these environment variables are not set, you must specify the account information before using **Azure::Storage::Table::TableService** with the following code:
+
+```ruby
+Azure.config.storage_account_name = "<your Azure Storage account>"
+Azure.config.storage_access_key = "<your Azure Storage access key>"
+```
+
+To obtain these values from a classic or Resource Manager storage account in the Azure portal:
+
+1. Log in to the [Azure portal](https://portal.azure.com).
+2. Navigate to the Storage account you want to use.
+3. In the Settings blade on the right, click **Access Keys**.
+4. In the Access keys blade that appears, you'll see the access key 1 and access key 2. You can use either of these.
+5. Click the copy icon to copy the key to the clipboard.
+
+### Add an Azure Cosmos DB connection
+
+To connect to Azure Cosmos DB, copy your account name, primary key, and Table API endpoint from the Azure portal, and use them to create a **Client** object. You can then pass the **Client** object when you create a **TableService** object:
+
+```ruby
+common_client = Azure::Storage::Common::Client.create(storage_account_name:'myaccount', storage_access_key:'mykey', storage_table_host:'mycosmosdb_endpoint')
+table_client = Azure::Storage::Table::TableService.new(client: common_client)
+```
+
+## Create a table
+
+The **Azure::Storage::Table::TableService** object lets you work with tables and entities. To create a table, use the **create_table()** method. The following example creates a table, or prints the error if one occurs.
+
+```ruby
+azure_table_service = Azure::Storage::Table::TableService.new
+begin
+ azure_table_service.create_table("testtable")
+rescue
+ puts $!
+end
+```
+
+## Add an entity to a table
+
+To add an entity, first create a hash object that defines your entity properties. Note that for every entity you must specify a **PartitionKey** and **RowKey**. These are the unique identifiers of your entities, and are values that can be queried much faster than your other properties. Azure Storage uses **PartitionKey** to automatically distribute the table's entities over many storage nodes. Entities with the same **PartitionKey** are stored on the same node. The **RowKey** is the unique ID of the entity within the partition it belongs to.
+
+```ruby
+entity = { "content" => "test entity",
+ :PartitionKey => "test-partition-key", :RowKey => "1" }
+azure_table_service.insert_entity("testtable", entity)
+```
+
+## Update an entity
+
+There are multiple methods available to update an existing entity:
+
+* **update_entity():** Updates an existing entity by replacing it.
+* **merge_entity():** Updates an existing entity by merging new property values into the existing entity.
+* **insert_or_merge_entity():** Updates an existing entity by merging new property values into the existing entity. If no entity exists, a new one is inserted.
+* **insert_or_replace_entity():** Updates an existing entity by replacing it. If no entity exists, a new one is inserted.
+
+The following example demonstrates updating an entity using **update_entity()**:
+
+```ruby
+entity = { "content" => "test entity with updated content",
+ :PartitionKey => "test-partition-key", :RowKey => "1" }
+azure_table_service.update_entity("testtable", entity)
+```
+
+With **update_entity()** and **merge_entity()**, if the entity that you are updating doesn't exist then the update operation will fail. Therefore, if you want to store an entity regardless of whether it already exists, you should instead use **insert_or_replace_entity()** or **insert_or_merge_entity()**.
+
+## Work with groups of entities
+
+Sometimes it makes sense to submit multiple operations together in a batch to ensure atomic processing by the server. To accomplish that, you first create a **Batch** object and then use the **execute_batch()** method on **TableService**. The following example demonstrates submitting two entities with RowKey 2 and 3 in a batch. Notice that it only works for entities with the same PartitionKey.
+
+```ruby
+azure_table_service = Azure::Storage::Table::TableService.new
+batch = Azure::Storage::Table::Batch.new("testtable",
+ "test-partition-key") do
+ insert "2", { "content" => "new content 2" }
+ insert "3", { "content" => "new content 3" }
+end
+results = azure_table_service.execute_batch(batch)
+```
+
+## Query for an entity
+
+To query for an entity in a table, use the **get_entity()** method by passing the table name, **PartitionKey**, and **RowKey**.
+
+```ruby
+result = azure_table_service.get_entity("testtable", "test-partition-key",
+ "1")
+```
+
+## Query a set of entities
+
+To query a set of entities in a table, create a query hash object and use the **query_entities()** method. The following example demonstrates getting all the entities with the same **PartitionKey**:
+
+```ruby
+query = { :filter => "PartitionKey eq 'test-partition-key'" }
+result, token = azure_table_service.query_entities("testtable", query)
+```
+
+> [!NOTE]
+> If the result set is too large for a single query to return, a continuation token is returned that you can use to retrieve subsequent pages.
++
+## Query a subset of entity properties
+
+A query to a table can retrieve just a few properties from an entity. This technique, called "projection," reduces bandwidth and can improve query performance, especially for large entities. Use the select clause and pass the names of the properties you would like to bring over to the client.
+
+```ruby
+query = { :filter => "PartitionKey eq 'test-partition-key'",
+ :select => ["content"] }
+result, token = azure_table_service.query_entities("testtable", query)
+```
+
+## Delete an entity
+
+To delete an entity, use the **delete_entity()** method. Pass in the name of the table that contains the entity, the PartitionKey, and the RowKey of the entity.
+
+```ruby
+azure_table_service.delete_entity("testtable", "test-partition-key", "1")
+```
+
+## Delete a table
+
+To delete a table, use the **delete_table()** method and pass in the name of the table you want to delete.
+
+```ruby
+azure_table_service.delete_table("testtable")
+```
+
+## Next steps
+
+* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
+* [Ruby Developer Center](https://azure.microsoft.com/develop/ruby/)
+* [Microsoft Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/introduction.md
+
+ Title: Introduction to the Azure Cosmos DB Table API
+description: Learn how you can use Azure Cosmos DB to store and query massive volumes of key-value data with low latency by using the Azure Tables API.
++++ Last updated : 01/08/2021+++
+# Introduction to Azure Cosmos DB: Table API
+
+[Azure Cosmos DB](introduction.md) provides the Table API for applications that are written for Azure Table storage and that need premium capabilities like:
+
+* [Turnkey global distribution](../distribute-data-globally.md).
+* [Dedicated throughput](../partitioning-overview.md) worldwide (when using provisioned throughput).
+* Single-digit millisecond latencies at the 99th percentile.
+* Guaranteed high availability.
+* Automatic secondary indexing.
+
+Applications written for Azure Table storage can migrate to Azure Cosmos DB by using the Table API with no code changes and take advantage of premium capabilities. The Table API has client SDKs available for .NET, Java, Python, and Node.js.
+
+> [!NOTE]
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Table API.
+
+> [!IMPORTANT]
+> The .NET Framework SDK [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table) is in maintenance mode and it will be deprecated soon. Please upgrade to the new .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) to continue to get the latest features supported by the Table API.
+
+## Table offerings
+If you currently use Azure Table Storage, you gain the following benefits by moving to the Azure Cosmos DB Table API:
+
+| Feature | Azure Table storage | Azure Cosmos DB Table API |
+| | | |
+| Latency | Fast, but no upper bounds on latency. | Single-digit millisecond latency for reads and writes, backed with <10 ms latency for reads and writes at the 99th percentile, at any scale, anywhere in the world. |
+| Throughput | Variable throughput model. Tables have a scalability limit of 20,000 operations/s. | Highly scalable with [dedicated reserved throughput per table](../request-units.md) that's backed by SLAs. Accounts have no upper limit on throughput and support >10 million operations/s per table. |
+| Global distribution | Single region with one optional readable secondary read region for high availability. | [Turnkey global distribution](../distribute-data-globally.md) from one to any number of regions. Support for [automatic and manual failovers](../high-availability.md) at any time, anywhere in the world. Multiple write regions to let any region accept write operations. |
+| Indexing | Only primary index on PartitionKey and RowKey. No secondary indexes. | Automatic and complete indexing on all properties by default, with no index management. |
+| Query | Query execution uses index for primary key, and scans otherwise. | Queries can take advantage of automatic indexing on properties for fast query times. |
+| Consistency | Strong within primary region. Eventual within secondary region. | [Five well-defined consistency levels](../consistency-levels.md) to trade off availability, latency, throughput, and consistency based on your application needs. |
+| Pricing | Consumption-based. | Available in both [consumption-based](../serverless.md) and [provisioned capacity](../set-throughput.md) modes. |
+| SLAs | 99.9% to 99.99% availability, depending on the replication strategy. | 99.999% read availability, 99.99% write availability on a single-region account and 99.999% write availability on multi-region accounts. [Comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/) covering availability, latency, throughput and consistency. |
+
+## Get started
+
+Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com). Then get started with our [Quick Start for Table API by using .NET](create-table-dotnet.md).
+
+> [!IMPORTANT]
+> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
+>
+
+## Next steps
+
+Here are a few pointers to get you started:
+* [Build a .NET application by using the Table API](create-table-dotnet.md)
+* [Develop with the Table API in .NET](tutorial-develop-table-dotnet.md)
+* [Query table data by using the Table API](tutorial-query-table.md)
+* [Learn how to set up Azure Cosmos DB global distribution by using the Table API](tutorial-global-distribution-table.md)
+* [Azure Cosmos DB Table .NET Standard SDK](dotnet-standard-sdk.md)
+* [Azure Cosmos DB Table .NET SDK](dotnet-sdk.md)
+* [Azure Cosmos DB Table Java SDK](java-sdk.md)
+* [Azure Cosmos DB Table Node.js SDK](nodejs-sdk.md)
+* [Azure Cosmos DB Table SDK for Python](python-sdk.md)
cosmos-db Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/java-sdk.md
+
+ Title: Azure Cosmos DB Table API for Java
+description: Learn all about the Azure Cosmos DB Table API for Java including release dates, retirement dates, and changes made between each version.
++
+ms.devlang: java
+ Last updated : 11/20/2018+++++
+# Azure Cosmos DB Table API for Java: Release notes and resources
+
+> [!div class="op_single_selector"]
+> * [.NET](dotnet-sdk.md)
+> * [.NET Standard](dotnet-standard-sdk.md)
+> * [Java](java-sdk.md)
+> * [Node.js](nodejs-sdk.md)
+> * [Python](python-sdk.md)
+
+
+| | Links |
+|||
+|**SDK download**|[Download Options](https://github.com/azure/azure-storage-java#download)|
+|**API documentation**|[Java API reference documentation](https://azure.github.io/azure-storage-java/)|
+|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-storage-java#contribute-code-or-provide-feedback)|
+
+> [!IMPORTANT]
+> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
+>
+
+## Release notes
+
+### <a name="1.0.0"></a>1.0.0
+* General availability release
+
+## Release and retirement dates
+Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
+
+New features, functionality, and optimizations are added only to the current SDK. As such, it's recommended that you always upgrade to the latest SDK version as early as possible.
+
+| Version | Release Date | Retirement Date |
+| | | |
+| [1.0.0](#1.0.0) |November 15, 2017 | |
+
+## FAQ
+
+## See also
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
+
cosmos-db Nodejs Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/nodejs-sdk.md
+
+ Title: Azure Cosmos DB Table API for Node.js
+description: Learn all about the Azure Cosmos DB Table API for Node.js including release dates, retirement dates, and changes made between each version.
++
+ms.devlang: nodejs
+ Last updated : 11/20/2018++++
+# Azure Cosmos DB Table API for Node.js: Release notes and resources
+
+> [!div class="op_single_selector"]
+> * [.NET](dotnet-sdk.md)
+> * [.NET Standard](dotnet-standard-sdk.md)
+> * [Java](java-sdk.md)
+> * [Node.js](nodejs-sdk.md)
+> * [Python](python-sdk.md)
+
+
+| | Links |
+|||
+|**SDK download**|[NPM](https://www.npmjs.com/package/azure-storage)|
+|**API documentation**|[Node.js API reference documentation](https://azure.github.io/azure-storage-node/)|
+|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-storage-node#contribute)|
+
+> [!IMPORTANT]
+> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
+>
+
+## Release notes
+
+### <a name="1.0.0"></a>1.0.0
+* General availability release
+
+## Release and retirement dates
+Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
+
+New features, functionality, and optimizations are added only to the current SDK. As such, it's recommended that you always upgrade to the latest SDK version as early as possible.
+
+| Version | Release Date | Retirement Date |
+| | | |
+| [1.0.0](#1.0.0) |November 15, 2017 | |
+
+## FAQ
+
+## See also
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
+
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/powershell-samples.md
+
+ Title: Azure PowerShell samples for Azure Cosmos DB Table API
+description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Table API
++++ Last updated : 01/20/2021+++
+# Azure PowerShell samples for Azure Cosmos DB Table API
+
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+
+## Common Samples
+
+|Task | Description |
+|||
+|[Update an account](../scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
+|[Update an account's regions](../scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
+|[Change failover priority or trigger failover](../scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
+|[Account keys or connection strings](../scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
+|[Create a Cosmos Account with IP Firewall](../scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
+|||
+
+## Table API Samples
+
+|Task | Description |
+|||
+|[Create an account and table](../scripts/powershell/table/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table. |
+|[Create an account and table with autoscale](../scripts/powershell/table/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Creates an Azure Cosmos account and table with autoscale throughput. |
+|[List or get tables](../scripts/powershell/table/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get tables. |
+|[Throughput operations](../scripts/powershell/table/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a table including get, update and migrate between autoscale and standard throughput. |
+|[Lock resources from deletion](../scripts/powershell/table/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
+|||
cosmos-db Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/python-sdk.md
+
+ Title: Azure Cosmos DB Table API for Python
+description: Learn all about the Azure Cosmos DB Table API including release dates, retirement dates, and changes made between each version.
++
+ms.devlang: python
+ Last updated : 11/20/2018++++
+# Azure Cosmos DB Table API SDK for Python: Release notes and resources
+
+> [!div class="op_single_selector"]
+> * [.NET](dotnet-sdk.md)
+> * [.NET Standard](dotnet-standard-sdk.md)
+> * [Java](java-sdk.md)
+> * [Node.js](nodejs-sdk.md)
+> * [Python](python-sdk.md)
+
+
+| | Links |
+|||
+|**SDK download**|[PyPI](https://pypi.python.org/pypi/azure-cosmosdb-table/)|
+|**API documentation**|[Python API reference documentation](/python/api/overview/azure/cosmosdb)|
+|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)|
+|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)|
+|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) or [Python 3.6+](https://www.python.org/downloads/)|
+
+> [!IMPORTANT]
+> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
+>
+
+## Release notes
+
+### <a name="1.0.0"></a>1.0.0
+* General availability release
+
+### <a name="0.37.1"></a>0.37.1
+* Pre-release SDK
+
+## Release and retirement dates
+Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
+
+New features, functionality, and optimizations are added only to the current SDK. As such, it's recommended that you always upgrade to the latest SDK version as early as possible.
+
+<br/>
+
+| Version | Release Date | Retirement Date |
+| | | |
+| [1.0.0](#1.0.0) |November 15, 2017 | |
+| [0.37.1](#0.37.1) |October 05, 2017 | |
++
+## FAQ
+
+## See also
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/resource-manager-templates.md
+
+ Title: Resource Manager templates for Azure Cosmos DB Table API
+description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Table API.
++++ Last updated : 05/19/2020+++
+# Manage Azure Cosmos DB Table API resources using Azure Resource Manager templates
+
+In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
+
+This article has examples for Table API accounts only. To find examples for other API account types, see the articles on using Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](../templates-samples-cassandra.md), [Gremlin](../templates-samples-gremlin.md), [MongoDB](../templates-samples-mongodb.md), and [SQL](../manage-with-templates.md).
+
+> [!IMPORTANT]
+>
+> * Account names are limited to 44 characters, all lowercase.
+> * To change the throughput values, redeploy the template with updated RU/s.
+> * When you add or remove locations to an Azure Cosmos account, you can't simultaneously modify other properties. These operations must be done separately.
+
+To create any of the Azure Cosmos DB resources below, copy the following example template into a new JSON file. You can optionally create a parameters JSON file to use when deploying multiple instances of the same resource with different names and values. There are many ways to deploy Azure Resource Manager templates, including the [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), and [GitHub](../../azure-resource-manager/templates/deploy-to-azure-button.md).
+
+> [!TIP]
+> To enable shared throughput when using Table API, enable account-level throughput in the Azure portal.
+
+<a id="create-autoscale"></a>
+
+## Azure Cosmos account for Table with autoscale throughput
+
+This template will create an Azure Cosmos account for the Table API with one table that has autoscale throughput. This template is also available for one-click deploy from the Azure Quickstart Templates gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-table-autoscale%2Fazuredeploy.json)
++
+<a id="create-manual"></a>
+
+## Azure Cosmos account for Table with standard provisioned throughput
+
+This template will create an Azure Cosmos account for the Table API with one table that has standard provisioned throughput. This template is also available for one-click deploy from the Azure Quickstart Templates gallery.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-table%2Fazuredeploy.json)
++
+## Next steps
+
+Here are some additional resources:
+
+* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml)
+* [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions)
+* [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.DocumentDB&pageNumber=1&sort=Popular)
+* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/table-import.md
+
+ Title: Migrate existing data to a Table API account in Azure Cosmos DB
+description: Learn how to migrate or import on-premises or cloud data to an Azure Table API account in Azure Cosmos DB.
++++ Last updated : 12/07/2017++++
+# Migrate your data to an Azure Cosmos DB Table API account
+
+This tutorial provides instructions on importing data for use with the Azure Cosmos DB [Table API](introduction.md). If you have data stored in Azure Table Storage, you can use either the data migration tool or AzCopy to import your data to the Azure Cosmos DB Table API.
+
+This tutorial covers the following tasks:
+
+> [!div class="checklist"]
+> * Importing data with the data migration tool
+> * Importing data with AzCopy
+
+## Prerequisites
+
+* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual container or a set of containers. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs.
+
+* **Create Azure Cosmos DB resources:** Before you start migrating the data, create all your tables from the Azure portal. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the Azure Cosmos DB tables.
+
+## Data migration tool
+
+You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to import your existing Azure Table Storage data to a Table API account.
+
+To migrate table data:
+
+1. Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool).
+2. Run `dt.exe` by using the command-line arguments for your scenario. `dt.exe` takes a command in the following format:
+
+ ```bash
+ dt.exe [/<option>:<value>] /s:<source-name> [/s.<source-option>:<value>] /t:<target-name> [/t.<target-option>:<value>]
+ ```
+
+The supported options for this command are:
+
+* **/ErrorLog:** Optional. Name of the CSV file to redirect data transfer failures.
+* **/OverwriteErrorLog:** Optional. Overwrite the error log file.
+* **/ProgressUpdateInterval:** Optional, default is `00:00:01`. The time interval to refresh on-screen data transfer progress.
+* **/ErrorDetails:** Optional, default is `None`. Specifies that detailed error information should be displayed for the following errors: `None`, `Critical`, or `All`.
+* **/EnableCosmosTableLog:** Optional. Direct the log to an Azure Cosmos DB table account. If set, this defaults to the destination account connection string unless `/CosmosTableLogConnectionString` is also provided. This is useful if multiple instances of the tool are being run simultaneously.
+* **/CosmosTableLogConnectionString:** Optional. The connection string to direct the log to a remote Azure Cosmos DB table account.
+
+### Command-line source settings
+
+Use the following source options when you define Azure Table Storage as the source of the migration.
+
+* **/s:AzureTable:** Reads data from Table Storage.
+* **/s.ConnectionString:** Connection string for the table endpoint. You can retrieve this from the Azure portal.
+* **/s.LocationMode:** Optional, default is `PrimaryOnly`. Specifies which location mode to use when connecting to Table Storage: `PrimaryOnly`, `PrimaryThenSecondary`, `SecondaryOnly`, `SecondaryThenPrimary`.
+* **/s.Table:** Name of the Azure table.
+* **/s.InternalFields:** Set to `All` for table migration, because `RowKey` and `PartitionKey` are required for import.
+* **/s.Filter:** Optional. Filter string to apply.
+* **/s.Projection:** Optional. List of columns to select.
+
+To retrieve the source connection string when you import from Table Storage, open the Azure portal. Select **Storage accounts** > **Account** > **Access keys**, and copy the **Connection string**.
++
+### Command-line target settings
+
+Use the following target options when you define the Azure Cosmos DB Table API as the target of the migration.
+
+* **/t:TableAPIBulk:** Uploads data into the Azure Cosmos DB Table API in batches.
+* **/t.ConnectionString:** The connection string for the table endpoint.
+* **/t.TableName:** Specifies the name of the table to write to.
+* **/t.Overwrite:** Optional, default is `false`. Specifies if existing values should be overwritten.
+* **/t.MaxInputBufferSize:** Optional, default is `1GB`. Approximate estimate of input bytes to buffer before flushing data to sink.
+* **/t.Throughput:** Optional, service defaults if not specified. Specifies throughput to configure for table.
+* **/t.MaxBatchSize:** Optional, default is `2MB`. Specify the batch size in bytes.
+
+### Sample command: Source is Table Storage
+
+Here's a command-line sample showing how to import from Table Storage to the Table API:
+
+```bash
+dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Table storage account name>;AccountKey=<Account Key>;EndpointSuffix=core.windows.net /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmosdb.azure.com:443 /t.TableName:<Table name> /t.Overwrite
+```
+
+## Migrate data by using AzCopy
+
+You can also use the AzCopy command-line utility to migrate data from Table Storage to the Azure Cosmos DB Table API. To use AzCopy, you first export your data as described in [Export data from Table Storage](/previous-versions/azure/storage/storage-use-azcopy#export-data-from-table-storage). Then, you import the data to Azure Cosmos DB as described in [Azure Cosmos DB Table API](/previous-versions/azure/storage/storage-use-azcopy#import-data-into-table-storage).
+
+Refer to the following sample when you're importing into Azure Cosmos DB. Note that the `/Dest` value uses `cosmosdb`, not `core`.
+
+Example import command:
+
+```bash
+AzCopy /Source:C:\myfolder\ /Dest:https://myaccount.table.cosmosdb.windows.net/mytable1/ /DestKey:key /Manifest:"myaccount_mytable_20140103T112020.manifest" /EntityOperation:InsertOrReplace
+```
+
+## Next steps
+
+Learn how to query data by using the Azure Cosmos DB Table API.
+
+> [!div class="nextstepaction"]
+>[How to query data?](tutorial-query-table.md)
++++
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/table-support.md
+
+ Title: Azure Table Storage support in Azure Cosmos DB
+description: Learn how Azure Cosmos DB Table API and Azure Storage Tables work together by sharing the same table data model and operations
+++ Last updated : 01/08/2021+++++
+# Developing with Azure Cosmos DB Table API and Azure Table storage
+
+Azure Cosmos DB Table API and Azure Table storage share the same table data model and expose the same create, delete, update, and query operations through their SDKs.
+
+> [!NOTE]
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Table API.
++
+## Developing with the Azure Cosmos DB Table API
+
+At this time, the [Azure Cosmos DB Table API](introduction.md) has four SDKs available for development:
+
+* [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table): .NET SDK. This library targets .NET Standard and has the same classes and method signatures as the public [Windows Azure Storage SDK](https://www.nuget.org/packages/WindowsAzure.Storage), but also has the ability to connect to Azure Cosmos DB accounts using the Table API. Users of the .NET Framework library [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table/) should upgrade to [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table), because the .NET Framework library is in maintenance mode and will be deprecated soon.
+
+* [Python SDK](python-sdk.md): The new Azure Cosmos DB Python SDK is the only SDK that supports Azure Table storage in Python. This SDK connects with both Azure Table storage and Azure Cosmos DB Table API.
+
+* [Java SDK](java-sdk.md): This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
+
+* [Node.js SDK](nodejs-sdk.md): This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
++
+Additional information about working with the Table API is available in the [FAQ: Develop with the Table API](table-api-faq.yml) article.
+
+## Developing with Azure Table storage
+
+Azure Table storage has these SDKs available for development:
+
+- The [Microsoft.Azure.Storage.Blob](https://www.nuget.org/packages/Microsoft.Azure.Storage.Blob/), [Microsoft.Azure.Storage.File](https://www.nuget.org/packages/Microsoft.Azure.Storage.File/), [Microsoft.Azure.Storage.Queue](https://www.nuget.org/packages/Microsoft.Azure.Storage.Queue/), and [Microsoft.Azure.Storage.Common](https://www.nuget.org/packages/Microsoft.Azure.Storage.Common/) libraries allow you to work with the Azure Table storage service. If you are using the Table API in Azure Cosmos DB, you can instead use the [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table/) library.
+- [Python SDK](https://github.com/Azure/azure-cosmos-table-python). The Azure Cosmos DB Table SDK for Python supports the Table storage service. Because Azure Table storage and the Cosmos DB Table API share the same features and functionality, and to consolidate SDK development efforts, we recommend using this SDK.
+- [Azure Storage SDK for Java](https://github.com/azure/azure-storage-java). This Azure Storage SDK provides a client library in Java to consume Azure Table storage.
+- [Node.js SDK](https://github.com/Azure/azure-storage-node). This SDK provides a Node.js package and a browser-compatible JavaScript client library to consume the storage Table service.
+- [AzureRmStorageTable PowerShell module](https://www.powershellgallery.com/packages/AzureRmStorageTable). This PowerShell module has cmdlets to work with storage Tables.
+- [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/). This library enables you to build applications against Azure Storage.
+- [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). This project provides a Ruby package that makes it easy to access Azure storage Table services.
+- [Azure Storage Table PHP Client Library](https://github.com/Azure/azure-storage-php/tree/master/azure-storage-table). This project provides a PHP client library that makes it easy to access Azure storage Table services.
++
+
+++++
cosmos-db Tutorial Develop Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/tutorial-develop-table-dotnet.md
+
+ Title: Azure Cosmos DB Table API using .NET Standard SDK
+description: Learn how to store and query the structured data in Azure Cosmos DB Table API account
++++
+ms.devlang: dotnet
+ Last updated : 12/03/2019++
+# Get started with Azure Cosmos DB Table API and Azure Table storage using the .NET SDK
+++
+You can use the Azure Cosmos DB Table API or Azure Table storage to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because the Azure Cosmos DB Table API and Table storage are schemaless, it's easy to adapt your data as the needs of your application evolve. You can use the Azure Cosmos DB Table API or Table storage to store flexible datasets such as user data for web applications, address books, device information, or other types of metadata your service requires.
+
+This tutorial describes a sample that shows you how to use the [Microsoft Azure Cosmos DB Table Library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) with Azure Cosmos DB Table API and Azure Table storage scenarios. You must use the connection specific to the Azure service. These scenarios are explored using C# examples that illustrate how to create tables, insert or update data, query data, and delete tables.
+
+## Prerequisites
+
+You need the following to complete this sample successfully:
+
+* [Microsoft Visual Studio](https://www.visualstudio.com/downloads/)
+
+* [Microsoft Azure CosmosDB Table Library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) - This library is currently available for .NET Standard and .NET framework.
+
+* [Azure Cosmos DB Table API account](create-table-dotnet.md#create-a-database-account).
+
+## Create an Azure Cosmos DB Table API account
++
+## Create a .NET console project
+
+In Visual Studio, create a new .NET console application. The following steps show you how to create a console application in Visual Studio 2019. You can use the Azure Cosmos DB Table Library in any type of .NET application, including an Azure cloud service or web app, and desktop and mobile applications. In this guide, we use a console application for simplicity.
+
+1. Select **File** > **New** > **Project**.
+
+1. Choose **Console App (.NET Core)**, and then select **Next**.
+
+1. In the **Project name** field, enter a name for your application, such as **CosmosTableSamples**. (You can provide a different name as needed.)
+
+1. Select **Create**.
+
+All code examples in this sample can be added to the Main() method of your console application's **Program.cs** file.
+
+## Install the required NuGet package
+
+To obtain the NuGet package, follow these steps:
+
+1. Right-click your project in **Solution Explorer** and choose **Manage NuGet Packages**.
+
+1. Search online for [`Microsoft.Azure.Cosmos.Table`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table), [`Microsoft.Extensions.Configuration`](https://www.nuget.org/packages/Microsoft.Extensions.Configuration), [`Microsoft.Extensions.Configuration.Json`](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Json), [`Microsoft.Extensions.Configuration.Binder`](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Binder) and select **Install** to install the Microsoft Azure Cosmos DB Table Library.
+
+## Configure your storage connection string
+
+1. From the [Azure portal](https://portal.azure.com/), navigate to your Azure Cosmos account or the Table Storage account.
+
+1. Open the **Connection String** or **Access keys** pane. Use the copy button on the right side of the window to copy the **PRIMARY CONNECTION STRING**.
+
+ :::image type="content" source="./media/create-table-dotnet/connection-string.png" alt-text="View and copy the PRIMARY CONNECTION STRING in the Connection String pane":::
+
+1. To configure your connection string, in Visual Studio, right-click your project **CosmosTableSamples**.
+
+1. Select **Add** and then **New Item**. Create a new file **Settings.json** with file type as **TypeScript JSON Configuration** File.
+
+1. Replace the code in Settings.json file with the following code and assign your primary connection string:
+
+    ```json
+    {
+      "StorageConnectionString": "<Primary connection string of your Azure Cosmos DB account>"
+    }
+    ```
+
+1. Right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **AppSettings.cs**.
+
+1. Add the following code to the AppSettings.cs file. This file reads the connection string from Settings.json file and assigns it to the configuration parameter:
+
+ :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/AppSettings.cs":::
+
+## Parse and validate the connection details
+
+1. Right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **Common.cs**. You will write code to validate the connection details and create a table within this class.
+
+1. Define a method `CreateStorageAccountFromConnectionString` as shown below. This method will parse the connection string details and validate that the account name and account key details provided in the "Settings.json" file are valid.
+
+ :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/Common.cs" id="createStorageAccount":::
+
+## Create a Table
+
+The [CloudTableClient](/dotnet/api/microsoft.azure.cosmos.table.cloudtableclient) class enables you to retrieve tables and entities stored in Table storage. Because we don't have any tables in the Cosmos DB Table API account, let's add the `CreateTableAsync` method to the **Common.cs** class to create a table:
++
+If you get a "503 service unavailable exception" error, it's possible that the required ports for the connectivity mode are blocked by a firewall. To fix this issue, either open the required ports or use the gateway mode connectivity as shown in the following code:
+
+```csharp
+tableClient.TableClientConfiguration.UseRestExecutorForCosmosEndpoint = true;
+```
+
+## Define the entity
+
+Entities map to C# objects by using a custom class derived from [TableEntity](/dotnet/api/microsoft.azure.cosmos.table.tableentity). To add an entity to a table, create a class that defines the properties of your entity.
+
+Right click on your project **CosmosTableSamples**. Select **Add**, **New Folder** and name it as **Model**. Within the Model folder add a class named **CustomerEntity.cs** and add the following code to it.
++
+This code defines an entity class that uses the customer's first name as the row key and last name as the partition key. Together, an entity's partition and row key uniquely identify it in the table. Entities with the same partition key can be queried faster than entities with different partition keys but using diverse partition keys allows for greater scalability of parallel operations. Entities to be stored in tables must be of a supported type, for example derived from the [TableEntity](/dotnet/api/microsoft.azure.cosmos.table.tableentity) class. Entity properties you'd like to store in a table must be public properties of the type, and support both getting and setting of values. Also, your entity type must expose a parameter-less constructor.
+
+## Insert or merge an entity
+
+The following code example creates an entity object and adds it to the table. The InsertOrMerge method within the [TableOperation](/dotnet/api/microsoft.azure.cosmos.table.tableoperation) class is used to insert or merge an entity. The [CloudTable.ExecuteAsync](/dotnet/api/microsoft.azure.cosmos.table.cloudtable.executeasync) method is called to execute the operation.
+
+Right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **SamplesUtils.cs**. This class stores all the code required to perform CRUD operations on the entities.
++
+## Get an entity from a partition
+
+You can get an entity from a partition by using the Retrieve method under the [TableOperation](/dotnet/api/microsoft.azure.cosmos.table.tableoperation) class. The following code example gets the partition key, row key, email, and phone number of a customer entity. This example also prints out the request units consumed to query for the entity. To query for an entity, append the following code to the **SamplesUtils.cs** file:
++
+## Delete an entity
+
+You can easily delete an entity after you have retrieved it by using the same pattern shown for updating an entity. The following code retrieves and deletes a customer entity. To delete an entity, append the following code to **SamplesUtils.cs** file:
++
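+
+A minimal sketch of the delete helper (again continuing the illustrative `SamplesUtils` partial class) might look like this:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos.Table;
+
+public static partial class SamplesUtils
+{
+    public static async Task DeleteEntityAsync(CloudTable table, CustomerEntity deleteEntity)
+    {
+        if (deleteEntity == null)
+        {
+            throw new ArgumentNullException(nameof(deleteEntity));
+        }
+
+        // Delete expects an entity that was previously retrieved, because it carries the ETag.
+        TableOperation deleteOperation = TableOperation.Delete(deleteEntity);
+        TableResult result = await table.ExecuteAsync(deleteOperation);
+
+        if (result.RequestCharge.HasValue)
+        {
+            Console.WriteLine("Request charge of delete operation: " + result.RequestCharge);
+        }
+    }
+}
+```
+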
+## Execute the CRUD operations on sample data
+
+After you define the methods to create a table and insert or merge entities, run them on the sample data. To do so, right-click your project **CosmosTableSamples**. Select **Add**, **New Item**, add a class named **BasicSamples.cs**, and add the following code to it. This code creates a table and adds entities to it.
+
+If you don't want to delete the entity and table at the end of the project, comment out the `await table.DeleteIfExistsAsync()` and `SamplesUtils.DeleteEntityAsync(table, customer)` calls in the following code. It's best to comment out these calls and validate the data before you delete the table.
++
+The previous code creates a table whose name starts with "demo" and has the generated GUID appended to it. It then adds a customer entity with the last and first name "Harp Walter" and later updates the phone number of this user.
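+
+As a rough sketch of that flow (using the illustrative helpers from the earlier sketches rather than the repository's exact **BasicSamples.cs**), the class might look like this:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos.Table;
+
+public class BasicSamples
+{
+    public async Task RunSamples(string connectionString)
+    {
+        // Table name starts with "demo" with a GUID appended ("N" format keeps it alphanumeric).
+        string tableName = "demo" + Guid.NewGuid().ToString("N");
+        CloudTable table = await TableHelper.CreateTableAsync(connectionString, tableName);
+
+        // Add a customer entity and then update its phone number.
+        CustomerEntity customer = new CustomerEntity("Harp", "Walter")
+        {
+            Email = "Walter@contoso.com",
+            PhoneNumber = "425-555-0101"
+        };
+        customer = await SamplesUtils.InsertOrMergeEntityAsync(table, customer);
+
+        customer.PhoneNumber = "425-555-0105";
+        await SamplesUtils.InsertOrMergeEntityAsync(table, customer);
+
+        // Read the entity back, then clean up.
+        CustomerEntity retrieved = await SamplesUtils.RetrieveEntityUsingPointQueryAsync(table, "Harp", "Walter");
+
+        // Comment out the next two lines if you want to inspect the data in the portal first.
+        await SamplesUtils.DeleteEntityAsync(table, retrieved);
+        await table.DeleteIfExistsAsync();
+    }
+}
+```
+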
+
+In this tutorial, you built code to perform basic CRUD operations on the data stored in a Table API account. You can also perform advanced operations such as batch-inserting data, querying all the data within a partition, querying a range of data within a partition, and listing the tables in the account whose names begin with a specified prefix. You can download the complete sample from the [azure-cosmos-table-dotnet-core-getting-started](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started) GitHub repository. The [AdvancedSamples.cs](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started/blob/main/CosmosTableSamples/AdvancedSamples.cs) class has more operations that you can perform on the data.
+
+## Run the project
+
+In your project **CosmosTableSamples**, open the class named **Program.cs** and add the following code to it to call **BasicSamples** when the project runs.
++
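+
+A minimal sketch of that entry point (assuming a `BasicSamples` class with a `RunSamples` method as in the earlier sketch; the repository's **Program.cs** reads the connection string from **Settings.json** instead of a placeholder) might look like this:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+
+class Program
+{
+    static async Task Main(string[] args)
+    {
+        Console.WriteLine("Azure Cosmos DB Table API - basic samples");
+
+        // Placeholder: the real sample loads this value from Settings.json.
+        string connectionString = "<your-table-api-connection-string>";
+
+        BasicSamples basicSamples = new BasicSamples();
+        await basicSamples.RunSamples(connectionString);
+
+        Console.WriteLine("Press any key to exit");
+        Console.ReadKey();
+    }
+}
+```
+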
+Now build the solution and press F5 to run the project. When the project is run, you will see the following output in the command prompt:
++
+If you receive an error that says the Settings.json file can't be found when running the project, you can resolve it by adding the following XML entry to the project settings. Right-click **CosmosTableSamples**, select **Edit CosmosTableSamples.csproj**, and add the following ItemGroup:
+
+```xml
+ <ItemGroup>
+ <None Update="Settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+```
+Now you can sign in to the Azure portal and verify that the data exists in the table.
++
+## Next steps
+
+You can now proceed to the next tutorial and learn how to migrate data to an Azure Cosmos DB Table API account.
+
+> [!div class="nextstepaction"]
+>[Migrate data to Azure Cosmos DB Table API](table-import.md)
cosmos-db Tutorial Global Distribution Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/tutorial-global-distribution-table.md
+
+ Title: Azure Cosmos DB global distribution tutorial for Table API
+description: Learn how global distribution works in Azure Cosmos DB Table API accounts and how to configure the preferred list of regions
+++++ Last updated : 01/30/2020++
+# Set up Azure Cosmos DB global distribution using the Table API
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the [Table API](introduction.md)
+++
+## Connecting to a preferred region using the Table API
+
+To take advantage of [global distribution](../distribute-data-globally.md), client applications should specify the current location where the application is running. This is done by setting the `CosmosExecutorConfiguration.CurrentRegion` property. The `CurrentRegion` property should contain a single location. Each client instance can specify its own region for low-latency reads. The region must be specified by using its [display name](/previous-versions/azure/reference/gg441293(v=azure.100)), such as "West US".
+
+The Azure Cosmos DB Table API SDK automatically picks the best endpoint to communicate with based on the account configuration and current regional availability. It prioritizes the closest region to provide better latency to clients. After you set the `CurrentRegion` property, read and write requests are directed as follows:
+
+* **Read requests:** All read requests are sent to the configured `CurrentRegion`. Based on the proximity, the SDK automatically selects a fallback geo-replicated region for high availability.
+
+* **Write requests:** The SDK automatically sends all write requests to the current write region. In an account with multi-region writes, the current region serves write requests as well. Based on the proximity, the SDK automatically selects a fallback geo-replicated region for high availability.
+
+If you don't specify the `CurrentRegion` property, the SDK uses the current write region for all operations.
+
+For example, suppose an Azure Cosmos account is in the "West US" and "East US" regions, "West US" is the write region, and the application is present in "East US". If the `CurrentRegion` property is not configured, all the read and write requests are always directed to the "West US" region. If the `CurrentRegion` property is configured, all the read requests are served from the "East US" region.
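+
+As a rough sketch of how the preferred region might be set (assuming the `Microsoft.Azure.Cosmos.Table` SDK exposes `CosmosExecutorConfiguration` on `TableClientConfiguration` as described above; verify the exact property names against the SDK version you use):
+
+```csharp
+using Microsoft.Azure.Cosmos.Table;
+
+public static class PreferredRegionExample
+{
+    public static CloudTableClient CreateClient(string connectionString)
+    {
+        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
+
+        // Tell the SDK where this client instance runs so reads go to the nearest region.
+        TableClientConfiguration config = new TableClientConfiguration
+        {
+            CosmosExecutorConfiguration = new CosmosExecutorConfiguration
+            {
+                CurrentRegion = "East US" // use the region's display name
+            }
+        };
+
+        return account.CreateCloudTableClient(config);
+    }
+}
+```
+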
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Configure global distribution using the Azure portal
+> * Configure global distribution using the Azure Cosmos DB Table APIs
cosmos-db Tutorial Query Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/tutorial-query-table.md
+
+ Title: How to query table data in Azure Cosmos DB?
+description: Learn how to query data stored in the Azure Cosmos DB Table API account by using OData filters and LINQ queries
+++++ Last updated : 06/05/2020++++
+# Tutorial: Query Azure Cosmos DB by using the Table API
+
+The Azure Cosmos DB [Table API](introduction.md) supports OData and [LINQ](/rest/api/storageservices/fileservices/writing-linq-queries-against-the-table-service) queries against key/value (table) data.
+
+This article covers the following tasks:
+
+> [!div class="checklist"]
+> * Querying data with the Table API
+
+The queries in this article use the following sample `People` table:
+
+| PartitionKey | RowKey | Email | PhoneNumber |
+| --- | --- | --- | --- |
+| Harp | Walter | Walter@contoso.com| 425-555-0101 |
+| Smith | Ben | Ben@contoso.com| 425-555-0102 |
+| Smith | Jeff | Jeff@contoso.com| 425-555-0104 |
+
+See [Querying Tables and Entities](/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the Table API.
+
+For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB Table API](introduction.md) and [Develop with the Table API in .NET](tutorial-develop-table-dotnet.md).
+
+## Prerequisites
+
+For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](create-table-dotnet.md) or the [developer tutorial](tutorial-develop-table-dotnet.md) to create an account and populate your database.
+
+## Query on PartitionKey and RowKey
+
+Because the PartitionKey and RowKey properties form an entity's primary key, you can use the following special syntax to identify the entity:
+
+**Query**
+
+```
+https://<mytableendpoint>/People(PartitionKey='Harp',RowKey='Walter')
+```
+
+**Results**
+
+| PartitionKey | RowKey | Email | PhoneNumber |
+| --- | --- | --- | --- |
+| Harp | Walter | Walter@contoso.com| 425-555-0101 |
+
+Alternatively, you can specify these properties as part of the `$filter` option, as shown in the following section. Note that the key property names and constant values are case-sensitive. Both the PartitionKey and RowKey properties are of type String.
+
+## Query by using an OData filter
+
+When you're constructing a filter string, keep these rules in mind:
+
+* Use the logical operators defined by the OData Protocol Specification to compare a property to a value. Note that you can't compare a property to a dynamic value. One side of the expression must be a constant.
+* The property name, operator, and constant value must be separated by URL-encoded spaces. A space is URL-encoded as `%20`.
+* All parts of the filter string are case-sensitive.
+* The constant value must be of the same data type as the property in order for the filter to return valid results. For more information about supported property types, see [Understanding the Table Service Data Model](/rest/api/storageservices/understanding-the-table-service-data-model).
+
+Here's an example query that shows how to filter by the PartitionKey and Email properties by using an OData `$filter`.
+
+**Query**
+
+```
+https://<mytableapi-endpoint>/People()?$filter=PartitionKey%20eq%20'Smith'%20and%20Email%20eq%20'Ben@contoso.com'
+```
+
+For more information on how to construct filter expressions for various data types, see [Querying Tables and Entities](/rest/api/storageservices/querying-tables-and-entities).
+
+**Results**
+
+| PartitionKey | RowKey | Email | PhoneNumber |
+| --- | --- | --- | --- |
+| Smith |Ben | Ben@contoso.com| 425-555-0102 |
+
+Queries on datetime properties don't return any data when executed in Azure Cosmos DB's Table API. While Azure Table storage stores date values with a time granularity of ticks, the Table API in Azure Cosmos DB uses the `_ts` property, which has second-level granularity and isn't available as an OData filter. So, queries on timestamp properties are blocked by Azure Cosmos DB. As a workaround, you can define a custom datetime or long data type property and set the date value from the client, as shown in the following sketch.
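+
+For example, a hypothetical entity could carry its own client-assigned timestamp (the `OrderEntity` class and its property names below are illustrative, not part of the sample project):
+
+```csharp
+using System;
+using Microsoft.Azure.Cosmos.Table;
+
+public class OrderEntity : TableEntity
+{
+    public OrderEntity() { }
+
+    // Client-assigned creation time, stored both as a DateTime and as ticks (long),
+    // so filters can be written against either property instead of the service-managed timestamp.
+    public DateTime CreatedDate { get; set; }
+
+    public long CreatedTicks { get; set; }
+}
+
+// Setting the values from the client before inserting the entity:
+// order.CreatedDate = DateTime.UtcNow;
+// order.CreatedTicks = order.CreatedDate.Ticks;
+```
+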
+
+## Query by using LINQ
+You can also query by using LINQ, which translates to the corresponding OData query expressions. Here's an example of how to build queries by using the .NET SDK:
+
+```csharp
+IQueryable<CustomerEntity> linqQuery = table.CreateQuery<CustomerEntity>()
+ .Where(x => x.PartitionKey == "4")
+ .Select(x => new CustomerEntity() { PartitionKey = x.PartitionKey, RowKey = x.RowKey, Email = x.Email });
+```
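+
+The query doesn't run until you enumerate it. As a minimal follow-up sketch showing synchronous enumeration of the `linqQuery` built above:
+
+```csharp
+foreach (CustomerEntity customer in linqQuery)
+{
+    Console.WriteLine("{0}\t{1}\t{2}", customer.PartitionKey, customer.RowKey, customer.Email);
+}
+```
+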
+
+## Next steps
+
+In this tutorial, you've done the following:
+
+> [!div class="checklist"]
+> * Learned how to query by using the Table API
+
+You can now proceed to the next tutorial to learn how to distribute your data globally.
+
+> [!div class="nextstepaction"]
+> [Distribute your data globally](tutorial-global-distribution-table.md)
cosmos-db Templates Samples Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/templates-samples-cassandra.md
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, keyspaces, and tables.
-This article has examples for Cassandra API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [SQL](templates-samples-sql.md), [Gremlin](templates-samples-gremlin.md), [MongoDB](templates-samples-mongodb.md), [Table](templates-samples-table.md) articles.
+This article has examples for Cassandra API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [SQL](templates-samples-sql.md), [Gremlin](templates-samples-gremlin.md), [MongoDB](templates-samples-mongodb.md), [Table](table/resource-manager-templates.md) articles.
> [!IMPORTANT] >
cosmos-db Templates Samples Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/templates-samples-gremlin.md
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and graphs.
-This article has examples for Gremlin API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](templates-samples-cassandra.md), [SQL](templates-samples-sql.md), [MongoDB](templates-samples-mongodb.md), [Table](templates-samples-table.md) articles.
+This article has examples for Gremlin API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](templates-samples-cassandra.md), [SQL](templates-samples-sql.md), [MongoDB](templates-samples-mongodb.md), [Table](table/resource-manager-templates.md) articles.
> [!IMPORTANT] >
cosmos-db Templates Samples Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/templates-samples-mongodb.md
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts for MongoDB API, databases, and collections.
-This article has examples for Azure Cosmos DB's API for MongoDB only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](templates-samples-cassandra.md), [Gremlin](templates-samples-gremlin.md), [SQL](templates-samples-sql.md), [Table](templates-samples-table.md) articles.
+This article has examples for Azure Cosmos DB's API for MongoDB only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [Cassandra](templates-samples-cassandra.md), [Gremlin](templates-samples-gremlin.md), [SQL](templates-samples-sql.md), [Table](table/resource-manager-templates.md) articles.
> [!IMPORTANT] >
cosmos-db Templates Samples Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/templates-samples-sql.md
# Azure Resource Manager templates for Azure Cosmos DB [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](templates-samples-cassandra.md), [Gremlin](templates-samples-gremlin.md), [MongoDB](templates-samples-mongodb.md), and [Table](templates-samples-table.md) APIs.
+This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](templates-samples-cassandra.md), [Gremlin](templates-samples-gremlin.md), [MongoDB](templates-samples-mongodb.md), and [Table](table/resource-manager-templates.md) APIs.
## Core (SQL) API
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-cases.md
After reading this article, you'll be able to answer the following questions:
[Azure Cosmos DB](../cosmos-db/introduction.md) is Microsoft's fast NoSQL database with open APIs for any scale. The service is designed to allow customers to elastically (and independently) scale throughput and storage across any number of geographical regions. Azure Cosmos DB is the first globally distributed database service in the market today to offer comprehensive [service level agreements](https://azure.microsoft.com/support/legal/sla/cosmos-db/) encompassing throughput, latency, availability, and consistency.
-Azure Cosmos DB is a global distributed, multi-model database that is used in a wide range of applications and use cases. It is a good choice for any [serverless](https://azure.com/serverless) application that needs low order-of-millisecond response times, and needs to scale rapidly and globally. It supports multiple data models (key-value, documents, graphs and columnar) and many APIs for data access including [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md), [SQL API](./introduction.md), [Gremlin API](graph-introduction.md), and [Tables API](table-introduction.md) natively, and in an extensible manner.
+Azure Cosmos DB is a globally distributed, multi-model database that is used in a wide range of applications and use cases. It is a good choice for any [serverless](https://azure.com/serverless) application that needs low order-of-millisecond response times, and needs to scale rapidly and globally. It supports multiple data models (key-value, documents, graphs and columnar) and many APIs for data access including [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md), [SQL API](./introduction.md), [Gremlin API](graph-introduction.md), and [Tables API](table/introduction.md) natively, and in an extensible manner.
The following are some attributes of Azure Cosmos DB that make it well-suited for high-performance applications with global ambition.
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/get-started-partners.md
The retail rates used to compute costs shown in the view are the same prices sho
## Analyze costs in cost analysis
-Partners with access to billing scopes in the partner tenant can explore and analyze invoiced costs in cost analysis across customers for a specific customer or for an invoice. In the [cost analysis](quick-acm-cost-analysis.md) view, you can also [save views](quick-acm-cost-analysis.md#saving-and-sharing-customized-views) and export data to [CSV and PNG files](quick-acm-cost-analysis.md#download-usage-data).
+Partners with access to billing scopes in the partner tenant can explore and analyze invoiced costs in cost analysis across customers for a specific customer or for an invoice. In the [cost analysis](quick-acm-cost-analysis.md) view, you can also [save views](quick-acm-cost-analysis.md#saving-and-sharing-customized-views).
Azure RBAC users with access to the subscription in the customer tenant can also analyze retail costs for subscriptions in the customer tenant, save views, and export data to CSV and PNG files.
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 03/10/2021 Last updated : 07/28/2021 -+ # Quickstart: Explore and analyze costs with cost analysis
To pin cost analysis, select the pin icon in the upper-right corner or just afte
To share a link to cost analysis, select **Share** at the top of the window. A custom URL will show, which opens this specific view for this specific scope. If you don't have cost access and get this URL, you'll see an "access denied" message.
-## Download usage data
-
-### [Portal](#tab/azure-portal)
-
-There are times when you need to download the data for further analysis, merge it with your own data, or integrate it into your own systems. Cost Management offers a few different options. As a starting point, if you need a quick high-level summary, like what you get within cost analysis, build the view you need. Then download it by selecting **Export** and selecting **Download data to CSV** or **Download data to Excel**. The Excel download provides more context on the view you used to generate the download, like scope, query configuration, total, and date generated.
-
-If you need the full, unaggregated dataset, download it from the billing account. Then, from the list of services in the portal's left navigation pane, go to **Cost Management + Billing**. Select your billing account, if applicable. Go to **Usage + charges**, and then select the **Download** icon for a billing period.
-
-### [Azure CLI](#tab/azure-cli)
-
-Start by preparing your environment for the Azure CLI:
--
-After you sign in, use the [az costmanagement query](/cli/azure/costmanagement#az_costmanagement_query) command to query month-to-date usage information for your subscription:
-
-```azurecli
-az costmanagement query --timeframe MonthToDate --type Usage \
- --scope "subscriptions/00000000-0000-0000-0000-000000000000"
-```
-
-You can also narrow the query by using the **--dataset-filter** parameter or other parameters:
-
-```azurecli
-az costmanagement query --timeframe MonthToDate --type Usage \
- --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
- --dataset-filter "{\"and\":[{\"or\":[{\"dimension\":{\"name\":\"ResourceLocation\",\"operator\":\"In\",\"values\":[\"East US\",\"West Europe\"]}},{\"tag\":{\"name\":\"Environment\",\"operator\":\"In\",\"values\":[\"UAT\",\"Prod\"]}}]},{\"dimension\":{\"name\":\"ResourceGroup\",\"operator\":\"In\",\"values\":[\"API\"]}}]}"
-```
-
-The **--dataset-filter** parameter takes a JSON string or `@json-file`.
-
-You also have the option of using the [az costmanagement export](/cli/azure/costmanagement/export) commands to export usage data to an Azure storage account. You can download the data from there.
-
-1. Create a resource group or use an existing resource group. To create a resource group, run the [az group create](/cli/azure/group#az_group_create) command:
-
- ```azurecli
- az group create --name TreyNetwork --location "East US"
- ```
-
-1. Create a storage account to receive the exports or use an existing storage account. To create an account, use the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command:
-
- ```azurecli
- az storage account create --resource-group TreyNetwork --name cmdemo
- ```
-
-1. Run the [az costmanagement export create](/cli/azure/costmanagement/export#az_costmanagement_export_create) command to create the export:
-
- ```azurecli
- az costmanagement export create --name DemoExport --type Usage \
- --scope "subscriptions/00000000-0000-0000-0000-000000000000" --storage-account-id cmdemo \
- --storage-container democontainer --timeframe MonthToDate --storage-directory demodirectory
- ```
--- ## Clean up resources - If you pinned a customized view for cost analysis and you no longer need it, go to the dashboard where you pinned it and delete the pinned view.
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/change-credit-card.md
tags: billing
Previously updated : 11/20/2020 Last updated : 07/27/2021
This document applies to customers who signed up for Azure online with a credit card.
-In the Azure portal, you can change your default payment method to a new credit card and update your credit card details. You must be an [Account Administrator](../understand/subscription-transfer.md#whoisaa) or you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes.
+In the Azure portal, you can change your default payment method to a new credit card and update your credit card details. You must be an [Account Administrator](../understand/subscription-transfer.md#whoisaa) or you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes. You can also replace your current credit card for all subscriptions.
If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
The supported payment methods for Microsoft Azure are credit cards and check/wir
With a Microsoft Customer Agreement, your payment methods are associated with billing profiles. Learn how to [check access to a Microsoft Customer Agreement](#check-the-type-of-your-account). If you have an MCA, skip to [manage credit cards for a Microsoft Customer Agreement](#manage-credit-cards-for-a-microsoft-customer-agreement).
+>[!NOTE]
+> When you create a new subscription, you can specify a new credit card. When you do so, no other subscriptions get associated with the new credit card. However, if you later make any of the following changes, *all subscriptions* will use the payment method you select.
+ >- Make a payment method active with the **Set active** option
+ >- Use the **Replace** payment option for any subscription
+ >- Change the default payment method
+ <a id="addcard"></a> ## Manage credit cards for an Azure subscription The following sections apply to customers who have a Microsoft Online Services Program billing account. Learn how to [check your billing account type](#check-the-type-of-your-account). If your billing account type is Microsoft Online Services Program, payment methods are associated with individual Azure subscriptions. If you get an error after you add the credit card, see [Credit card declined at Azure sign-up](./troubleshoot-declined-card.md).
-### Change credit card for a subscription by adding a new credit card
+### Change credit card for all subscriptions by adding a new credit card
+
+You can change the default credit card of your Azure subscription to a new credit card or previously saved credit card in the Azure portal. You must be the Account Administrator to change the credit card.
-You can change the default credit of your Azure subscription to a new credit card or previously saved credit card in the Azure portal. You must be the Account Administrator to change the credit card. If multiple subscriptions have the same active payment method, then changing the active payment method on any of the subscriptions also updates the active payment method on the others.
+If multiple subscriptions have the same active payment method, then changing the default payment method on any of the subscriptions also updates the active payment method for the others.
You can change your subscription's default credit card to a new one by following these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) as the Account Administrator. 1. Search for **Cost Management + Billing**.
- ![Screenshot that shows search](./media/change-credit-card/search.png)
+ :::image type="content" source="./media/change-credit-card/search.png" alt-text="Screenshot showing Search." lightbox="./media/change-credit-card/search.png" :::
1. Select the subscription you'd like to add the credit card to. 1. Select **Payment methods**.
- ![Screenshot that shows Manage payment methods option selected](./media/change-credit-card/payment-methods-blade-x.png)
-1. In the top-left corner, select "+" to add a card. A credit card form will appear on the right.
+ :::image type="content" source="./media/change-credit-card/payment-methods-blade-x.png" alt-text="Screenshot showing Manage payment methods option selected." lightbox="./media/change-credit-card/payment-methods-blade-x.png" :::
+1. In the top-left corner, select **+ Add** to add a card. A credit card form appears on the right.
1. Enter credit card details.
- ![Screenshot that shows adding a new card](./media/change-credit-card/sub-add-new-x.png)
-1. To make this card your active payment method, check the box next to **Make this my active payment method** above the form. This card will become the active payment instrument for all subscriptions using the same card as the selected subscription.
+ :::image type="content" source="./media/change-credit-card/sub-add-new-default.png" alt-text="Screenshot showing adding a new card." lightbox="./media/change-credit-card/sub-add-new-default.png" :::
+1. To make this card your default payment method, select **Make this my default payment method** above the form. This card becomes the active payment instrument for all subscriptions using the same card as the selected subscription.
1. Select **Next**.
-### Change credit card for a subscription to a previously saved credit card
+### Replace credit card for a subscription to a previously saved credit card
-You can also change your subscription's default credit card to a one that is already saved to your account by following these steps:
+You can also replace a subscription's default credit card with one that is already saved to your account by following these steps. This procedure changes the credit card for all other subscriptions.
1. Sign in to the [Azure portal](https://portal.azure.com) as the Account Administrator. 1. Search for **Cost Management + Billing**.
- ![Screenshot that shows search](./media/change-credit-card/search.png)
+ :::image type="content" source="./media/change-credit-card/search.png" alt-text="Screenshot showing Search for Cost Management + Billing." lightbox="./media/change-credit-card/search.png" :::
1. Select the subscription you'd like to add the credit card to. 1. Select **Payment methods**.
- ![Screenshot that shows Manage payment methods option selected](./media/change-credit-card/payment-methods-blade-x.png)
-1. Select the box next to the card you'd like to make the active payment method.
-1. Select **Set active**.
- ![Screenshot that shows credit card selected and set active](./media/change-credit-card/sub-change-active-x.png)
+ :::image type="content" source="./media/change-credit-card/payment-methods-blade-x.png" alt-text="Screenshot showing Manage payment methods option." lightbox="./media/change-credit-card/payment-methods-blade-x.png" :::
+1. Select **Replace** to change the current credit card to one you select.
+ :::image type="content" source="./media/change-credit-card/replace-credit-card.png" alt-text="Screenshot showing the Replace option." lightbox="./media/change-credit-card/replace-credit-card.png" :::
+1. In the **Replace default payment method** area, select another credit card to replace the default credit card and then select **Next**.
+ :::image type="content" source="./media/change-credit-card/replace-default-payment-method.png" alt-text="Screenshot showing the Replace default payment method box." lightbox="./media/change-credit-card/replace-default-payment-method.png" :::
+1. After a few moments, you'll see confirmation that your payment method was changed.
### Edit credit card details
If your credit card gets renewed and the number stays the same, update the exist
1. Sign in to the [Azure portal](https://portal.azure.com) as the Account Administrator. 1. Search for **Cost Management + Billing**.
- ![Screenshot that shows search](./media/change-credit-card/search.png)
+ :::image type="content" source="./media/change-credit-card/search.png" alt-text="Screenshot of Search." lightbox="./media/change-credit-card/search.png" :::
1. Select **Payment methods**.
- ![Screenshot that shows Manage payment methods option selected](./media/change-credit-card/payment-methods-blade-x.png)
+ :::image type="content" source="./media/change-credit-card/payment-methods-blade-x.png" alt-text="Screenshot showing Manage payment methods" lightbox="./media/change-credit-card/payment-methods-blade-x.png" :::
1. Select the credit card that you'd like to edit. A credit card form will appear on the right.
- ![Screenshot that shows credit card selected](./media/change-credit-card/edit-card-x.png)
+ :::image type="content" source="./media/change-credit-card/edit-card-x.png" alt-text="Screenshot showing Edit payment method." lightbox="./media/change-credit-card/edit-card-x.png" :::
1. Update the credit card details.
-1. Select **Save**.
+1. Select **Next**.
## Manage credit cards for a Microsoft Customer Agreement
To change your credit card, follow these steps:
1. In the menu on the left, select **Billing profiles**. 1. Select a billing profile. 1. In the menu on the left, select **Payment methods**.
- ![Screenshot that shows payment methods in menu](./media/change-credit-card/payment-methods-tab-mca.png)
+ :::image type="content" source="./media/change-credit-card/payment-methods-tab-mca.png" alt-text="Screenshot showing payment methods in menu." lightbox="./media/change-credit-card/payment-methods-tab-mca.png" :::
1. In the **Default payment method** section, select **Replace**.
- :::image type="content" source="./media/change-credit-card/change-payment-method-mca.png" alt-text="Screenshot that shows the replace option" :::
+ :::image type="content" source="./media/change-credit-card/change-payment-method-mca.png" alt-text="Screenshot showing Replace." lightbox="./media/change-credit-card/change-payment-method-mca.png" :::
1. In the new area on the right, either select an existing card from the drop-down or add a new one by selecting the blue **Add new payment method** link. ### Edit a credit card
To edit a credit card, follow these steps:
1. In the menu on the left, select **Billing profiles**. 1. Select a billing profile. 1. In the menu on the left, select **Payment methods**.
- ![Screenshot that shows payment methods in menu](./media/change-credit-card/payment-methods-tab-mca.png)
+ :::image type="content" source="./media/change-credit-card/payment-methods-tab-mca.png" alt-text="Screenshot showing the payment methods in menu." lightbox="./media/change-credit-card/payment-methods-tab-mca.png" :::
1. In the **Your credit cards** section, find the credit card you want to edit. 1. Select the ellipsis (`...`) at the end of the row.
- :::image type="content" source="./media/change-credit-card/edit-delete-credit-card-mca.png" alt-text="Screenshot that shows the ellipsis" :::
+ :::image type="content" source="./media/change-credit-card/edit-delete-credit-card-mca.png" alt-text="Screenshot showing the ellipsis." lightbox="./media/change-credit-card/edit-delete-credit-card-mca.png" :::
1. To edit your credit card details, select **Edit** from the context menu. ## Troubleshooting
The following sections answer commonly asked questions about changing your credi
If you keep getting this error message even if you've already logged out and back in, try again with a private browsing session.
-### How do I use a different card for each subscription I have?
+### How do I use a different card for each subscription?
+
+As noted previously, when you create a new subscription, you can specify a new credit card. When you do so, no other subscriptions get associated with the new credit card. You can add multiple new subscriptions, each with a unique credit card. However, if you later make any of the following changes, *all subscriptions* will use the payment method you select.
-Unfortunately, if your subscriptions are already using the same card, it's not possible to separate them to use different cards. However, when you sign up for a new subscription, you can choose to use a new payment method for that subscription.
+- Make a payment method active with the **Set active** option
+- Use the **Replace** payment option for any subscription
+- Change the default payment method
### How do I make payments?
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
Title: Download Azure billing invoice and daily usage data
-description: Describes how to download or view your Azure billing invoice and daily usage data.
-keywords: billing invoice,invoice download,azure invoice,azure usage
+ Title: Download Azure billing invoice
+description: Describes how to download or view your Azure billing invoice.
+keywords: billing invoice,invoice download,azure invoice
tags: billing Previously updated : 05/13/2021 Last updated : 07/28/2021
-# Download or view your Azure billing invoice and daily usage data
+# Download or view your Azure billing invoice
For most subscriptions, you can download your invoice from the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), you can't download your organization's invoices. Invoices are sent to whoever is set up to receive invoices for the enrollment.
-If you're an EA customer or have a [Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement), you can download usage in the [Azure portal](https://portal.azure.com/).
+Only certain roles have permission to get a billing invoice, like the Account Administrator or Enterprise Administrator. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
-Only certain roles have permission to get billing invoice and usage information, like the Account Administrator or Enterprise Administrator. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
-
-If you have a Microsoft Customer Agreement, you must be a billing profile Owner, Contributor, Reader, or Invoice manager to view billing and usage information. To learn more about billing roles for Microsoft Customer Agreements, see [Billing profile roles and tasks](understand-mca-roles.md#billing-profile-roles-and-tasks).
+If you have a Microsoft Customer Agreement, you must be a billing profile Owner, Contributor, Reader, or Invoice manager to view billing information. To learn more about billing roles for Microsoft Customer Agreements, see [Billing profile roles and tasks](understand-mca-roles.md#billing-profile-roles-and-tasks).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
If you have a Microsoft Customer Agreement, you can opt in to get your invoice i
You can opt out of getting your invoice by email by following the steps above and clicking **Opt out**. All Owners, Contributors, Readers, and Invoice managers will be opted out of getting the invoice by email, too. If you are a Reader, you cannot change the email invoice preference.
-## Download usage in Azure portal
-
- For most subscriptions, follow these steps to find your daily usage:
-
-1. Select your subscription from the [Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal as [a user with access to invoices](manage-billing-access.md).
-
-2. Select **Invoices**.
-
- ![Screenshot that shows the Billing & usage option](./media/download-azure-invoice-daily-usage-date/billingandusage.png)
-
-3. Click the download button of a invoice period that you want to check.
-
-4. Download a daily breakdown of consumed quantities and estimated charges by clicking **Download csv**. This may take a few minutes to prepare the csv file.
-
-### Download usage for EA customers
-
-To view and download usage data as a EA customer, you must be an Enterprise Administrator, Account Owner, or Department Admin with the view charges policy enabled.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for *Cost Management + Billing*.
-1. If you have access to multiple billing accounts, select the billing scope for your EA billing account.
-1. Select **Usage + charges**.
-1. For the month you want to download, select **Download**.
-
-### Download usage for your Microsoft Customer Agreement
-
-To view and download usage data for a billing profile, you must be a billing profile Owner, Contributor, Reader, or Invoice manager.
-
-#### Download usage for billed charges
-
-1. Search for **Cost Management + Billing**.
-2. Select a billing profile.
-3. Select **Invoices**.
-4. In the invoice grid, find the row of the invoice corresponding to the usage you want to download.
-5. Click on the ellipsis (`...`) at the end of the row.
-6. In the download context menu, select **Azure usage and charges**.
-
-#### Download usage for open charges
-
-You can also download month-to-date usage for the current billing period, meaning the charges have not been billed yet.
-
-1. Search for **Cost Management + Billing**.
-2. Select a billing profile.
-3. In the **Overview** blade, click **Download Azure usage and charges**.
-
-## Check access to a Microsoft Customer Agreement
- ## Next steps To learn more about your invoice and charges, see:
cost-management-billing Troubleshoot Customer Agreement Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
This article helps you troubleshoot Microsoft Customer Agreement (MCA) billing i
By using the information from your usage files, you can get a better understanding of usage issues and diagnose them. Usage files are generated in comma delimited (CSV) format. Because the usage files might be large CSV files, they're easier to manipulate and view as pivot tables in a spreadsheet application like Excel. Examples in this article use Excel, but you can use any spreadsheet application that you want.
-Only Billing profile owners, Contributors, Readers, or Invoice Managers have access to download usage files. For more information, see [Download usage for your Microsoft Customer Agreement](./download-azure-invoice-daily-usage-date.md#download-usage-for-your-microsoft-customer-agreement).
+Only Billing profile owners, Contributors, Readers, or Invoice Managers have access to download usage files. For more information, see [Download usage for your Microsoft Customer Agreement](../understand/download-azure-daily-usage.md).
## Get the data and format it Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Use the following steps to format the data as a table.
-1. Download the usage file using the instructions at [Download usage in Azure portal](./download-azure-invoice-daily-usage-date.md#download-usage-in-azure-portal).
+1. Download the usage file using the instructions at [Download usage in Azure portal](../understand/download-azure-daily-usage.md).
1. Open the file in Excel. 1. The unformatted data resembles the following example. :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/raw-csv-data-mca.png" alt-text="Example showing unformatted data" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/raw-csv-data-mca.png" :::
cost-management-billing Troubleshoot Ea Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-ea-billing-issues-usage-file-pivot-tables.md
Only EA admins, Account Owners, and Department Admins have access to download us
Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Use the following steps to format the data as table.
-1. Download the Usage Details Version 2 with All Charges (usage and purchases) file using the instructions at [Download usage for EA customers](./download-azure-invoice-daily-usage-date.md#download-usage-for-ea-customers).
+1. Download the Usage Details Version 2 with All Charges (usage and purchases) file using the instructions at [Download usage for EA customers](../understand/download-azure-daily-usage.md).
1. Open the file in Excel. 1. The unformatted data resembles the following example. :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/raw-csv-data-ea.png" alt-text="Example showing unformatted data in Excel" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/raw-csv-data-ea.png" :::
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/download-azure-daily-usage.md
Title: View and Download Azure usage and charges
+ Title: View and download Azure usage and charges
description: Learn how to download or view your Azure daily usage and charges, and see additional available resources. keywords: billing usage, usage charges, usage download, view usage, azure invoice, azure usage
tags: billing
Previously updated : 08/20/2020+ Last updated : 07/28/2020 # View and download your Azure usage and charges
-You can download a daily breakdown of your Azure usage and charges in the Azure portal. Only certain roles have permission to get Azure usage information, like the Account Administrator or Enterprise Administrator. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](../manage/manage-billing-access.md).
+You can download a daily breakdown of your Azure usage and charges in the Azure portal. You can also get your usage data using Azure CLI. Only certain roles have permission to get Azure usage information, like the Account Administrator or Enterprise Administrator. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](../manage/manage-billing-access.md).
-If you have a Microsoft Customer Agreement (MCA), you must be a billing profile Owner, Contributor, Reader, or Invoice manager to view your Azure usage and charges. If you have a Microsoft Partner Agreement (MPA), only the Global Admin and Admin Agent role in the partner organization Microsoft can view and download Azure usage and charges. [Check your billing account type in the Azure portal](#check-your-billing-account-type).
+If you have a Microsoft Customer Agreement (MCA), you must be a billing profile Owner, Contributor, Reader, or Invoice manager to view your Azure usage and charges. If you have a Microsoft Partner Agreement (MPA), only the Global Admin and Admin Agent roles in the partner organization can view and download Azure usage and charges.
Based on the type of subscription that you use, options to download your usage and charges vary.
To view and download usage data as a EA customer, you must be an Enterprise Admi
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for *Cost Management + Billing*. ![Screenshot shows Azure portal search.](./media/download-azure-daily-usage/portal-cm-billing-search.png)
+1. If you have access to multiple billing accounts, select the billing scope for your EA billing account.
1. Select **Usage + charges**. 1. For the month you want to download, select **Download**. ![Screenshot shows Cost Management + Billing Invoices page for E A customers.](./media/download-azure-daily-usage/download-usage-ea.png)
-## Download usage for pending charges
+## Download usage for your Microsoft Customer Agreement
+
+To view and download usage data for a billing profile, you must be a billing profile Owner, Contributor, Reader, or Invoice manager.
+
+### Download usage for billed charges
+
+1. Search for **Cost Management + Billing**.
+2. Select a billing profile.
+3. Select **Invoices**.
+4. In the invoice grid, find the row of the invoice corresponding to the usage you want to download.
+5. Click on the ellipsis (`...`) at the end of the row.
+6. In the download context menu, select **Azure usage and charges**.
+
+### Download usage for open charges
+
+You can also download month-to-date usage for the current billing period, meaning the charges have not been billed yet.
+
+1. Search for **Cost Management + Billing**.
+2. Select a billing profile.
+3. In the **Overview** blade, click **Download Azure usage and charges**.
+
+### Download usage for pending charges
+If you have a Microsoft Customer Agreement, you can download month-to-date usage for the current billing period. These are usage charges that haven't been billed yet.
If you have a Microsoft Customer Agreement, you can download month-to-date usage
2. Search for *Cost Management + Billing*. 3. Select a billing profile. Depending on your access, you might need to select a billing account first. 4. In the **Overview** area, find the download links beneath the recent charges.
-5. Select **Download usage and prices**.
- :::image type="content" source="./media/download-azure-daily-usage/open-usage01.png" alt-text="Screenshot that shows download from Overview" lightbox="./media/download-azure-daily-usage/open-usage01.png" :::
+5. Select **Download usage and prices**.
+
+## Get usage data with Azure CLI
+
+Start by preparing your environment for the Azure CLI:
++
+After you sign in, use the [az costmanagement query](/cli/azure/costmanagement#az_costmanagement_query) command to query month-to-date usage information for your subscription:
+
+```azurecli
+az costmanagement query --timeframe MonthToDate --type Usage \
+ --scope "subscriptions/00000000-0000-0000-0000-000000000000"
+```
+
+You can also narrow the query by using the **--dataset-filter** parameter or other parameters:
+
+```azurecli
+az costmanagement query --timeframe MonthToDate --type Usage \
+ --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
+ --dataset-filter "{\"and\":[{\"or\":[{\"dimension\":{\"name\":\"ResourceLocation\",\"operator\":\"In\",\"values\":[\"East US\",\"West Europe\"]}},{\"tag\":{\"name\":\"Environment\",\"operator\":\"In\",\"values\":[\"UAT\",\"Prod\"]}}]},{\"dimension\":{\"name\":\"ResourceGroup\",\"operator\":\"In\",\"values\":[\"API\"]}}]}"
+```
+
+The **--dataset-filter** parameter takes a JSON string or `@json-file`.
+
+You also have the option of using the [az costmanagement export](/cli/azure/costmanagement/export) commands to export usage data to an Azure storage account. You can download the data from there.
+
+1. Create a resource group or use an existing resource group. To create a resource group, run the [az group create](/cli/azure/group#az_group_create) command:
+
+ ```azurecli
+ az group create --name TreyNetwork --location "East US"
+ ```
+
+1. Create a storage account to receive the exports or use an existing storage account. To create an account, use the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command:
+
+ ```azurecli
+ az storage account create --resource-group TreyNetwork --name cmdemo
+ ```
+
+1. Run the [az costmanagement export create](/cli/azure/costmanagement/export#az_costmanagement_export_create) command to create the export:
-## Check your billing account type
+ ```azurecli
+ az costmanagement export create --name DemoExport --type Usage \
+ --scope "subscriptions/00000000-0000-0000-0000-000000000000" --storage-account-id cmdemo \
+ --storage-container democontainer --timeframe MonthToDate --storage-directory demodirectory
+ ```
## Need help? Contact us.
data-factory How To Clean Up Ssisdb Logs With Elastic Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md
Title: How to clean up SSISDB logs automatically
description: This article describes how to clean up SSIS project deployment and package execution logs stored in SSISDB by invoking the relevant SSISDB stored procedure automatically via Azure Data Factory, Azure SQL Managed Instance Agent, or Elastic Database Jobs. Previously updated : 07/18/2021 Last updated : 07/28/2021
Once you provision an Azure-SQL Server Integration Services (SSIS) integration r
- SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model) - file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model)
-In the Project Deployment Model, your Azure-SSIS IR will deploy SSIS projects into SSISDB, fetch SSIS packages to run from SSISDB, and write package execution logs back into SSISDB. To manage the accumulated logs, we've provided relevant SSISDB properties and stored procedure that can be invoked automatically via ADF, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
+In the Project Deployment Model, your Azure-SSIS IR will deploy SSIS projects into SSISDB, fetch SSIS packages to run from SSISDB, and write package execution logs back into SSISDB. SSISDB is also used to store SSIS job and IR operation logs. To manage the accumulated logs, we've provided relevant SSISDB properties and stored procedures that can be invoked automatically on schedule via ADF, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
-## SSISDB log clean-up properties and stored procedure
-To configure SSISDB log clean-up properties, you can connect to SSISDB hosted by your Azure SQL Database server/Managed Instance using SQL Server Management Studio (SSMS), see [Connecting to SSISDB](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial?view=sql-server-ver15&preserve-view=true#connect-to-the-ssisdb-database). Once connected, on the **Object Explorer** window of SSMS, you can expand the **Integration Services Catalogs** node, right-click on the **SSISDB** subnode, and select the **Properties** menu item to open **Catalog Properties** dialog box. On the **Catalog Properties** dialog box, you can find the following SSISDB log clean-up properties:
+## SSISDB log clean-up properties and stored procedures
+To manage SSIS package execution logs, you can configure SSISDB log clean-up properties by connecting to SSISDB hosted by your Azure SQL Database server/Managed Instance using SQL Server Management Studio (SSMS), see [Connecting to SSISDB](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial?view=sql-server-ver15&preserve-view=true#connect-to-the-ssisdb-database). Once connected, on the **Object Explorer** window of SSMS, you can expand the **Integration Services Catalogs** node, right-click on the **SSISDB** subnode, and select the **Properties** menu item to open **Catalog Properties** dialog box. On the **Catalog Properties** dialog box, you can find the following SSISDB log clean-up properties:
-- **Clean Logs Periodically**: Enables automatic clean-up of package execution logs, by default set to *True*.-- **Retention Period (days)**: Specifies the maximum age of retained logs (in days), by default set to *365* and older logs are deleted by automatic clean-up.-- **Periodically Remove Old Versions**: Enables automatic clean-up of stored project versions, by default set to *True*.-- **Maximum Number of Versions per Project**: Specifies the maximum number of stored project versions, by default set to *10* and older versions are deleted by automatic clean-up.
+- **Clean Logs Periodically**: Enables the clean-up of package execution logs, by default set to *True*.
+- **Retention Period (days)**: Specifies the maximum age of retained logs (in days), by default set to *365* and older logs are deleted when the relevant SSISDB stored procedure is invoked.
+- **Periodically Remove Old Versions**: Enables the clean-up of stored project versions, by default set to *True*.
+- **Maximum Number of Versions per Project**: Specifies the maximum number of stored project versions, by default set to *10* and older versions are deleted when the relevant SSISDB stored procedure is invoked.
![SSISDB log clean-up properties](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/clean-up-logs-ssms-ssisdb-properties.png)
-Once SSISDB log clean-up properties are configured, you can invoke the relevant SSISDB stored procedure, `[internal].[cleanup_server_retention_window_exclusive]`, to clean up logs automatically via ADF, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
+Once SSISDB log clean-up properties are configured, you can invoke the relevant SSISDB stored procedure, `[internal].[cleanup_server_retention_window_exclusive]`, to clean up SSIS package execution logs.
+
+To clean up SSIS job logs, you can invoke the relevant SSISDB stored procedure, `[internal].[cleanup_completed_jobs_exclusive]`. The retention period is by default set to 60 minutes and older logs are deleted when the stored procedure is invoked.
+
+To clean up SSIS IR operation logs, you can invoke the relevant SSISDB stored procedure, `[internal].[cleanup_expired_worker]`. The retention period is by default set to 168 hours and older logs are deleted when the stored procedure is invoked.
+
+These SSISDB stored procedures clean up different SSISDB tables:
+
+| SSISDB stored procedures | SSISDB tables to clean up |
+|--|--|
+| `[internal].[cleanup_server_retention_window_exclusive]` | `[internal].[operations]`<br/><br/>`[internal].[operation_messages_scaleout]`<br/><br/>`[internal].[event_messages_scaleout]`<br/><br/>`[internal].[event_message_context_scaleout]` |
+| `[internal].[cleanup_completed_jobs_exclusive]` | `[internal].[jobs]`<br/><br/>`[internal].[tasks]`<br/><br/>`[internal].[job_worker_agents]` |
+| `[internal].[cleanup_expired_worker]` | `[internal].[worker_agents]` |
+
+These SSISDB stored procedures can also be invoked automatically on schedule via ADF, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
## Clean up SSISDB logs automatically via ADF
-Regardless whether you use Azure SQL database server/Managed Instance to host SSISDB, you can always use ADF to clean up SSISDB logs automatically. To do so, you can prepare an Execute SSIS Package activity in ADF pipeline with an embedded package containing a single Execute SQL Task that invokes the relevant SSISDB stored procedure. See example 4) in our blog: [Run Any SQL Anywhere in 3 Easy Steps with SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244).
+Regardless of whether you use Azure SQL Database server/Managed Instance to host SSISDB, you can always use ADF to clean up SSISDB logs automatically on schedule. To do so, you can prepare an Execute SSIS Package activity in ADF pipeline with an embedded package containing a single Execute SQL Task that invokes the relevant SSISDB stored procedures. See example 4) in our blog: [Run Any SQL Anywhere in 3 Easy Steps with SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244).
![SSISDB log clean-up via ADF](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/run-sql-ssis-activity-ssis-parameters-ssisdb-clean-up.png)
+For the **SQLStatementSource** parameter, you can enter `EXEC internal.cleanup_server_retention_window_exclusive` to clean up SSIS package execution logs.
+
+To clean up SSIS job logs, you can add `EXEC internal.cleanup_completed_jobs_exclusive [@minutesToKeep='Number of minutes to set as retention period']`.
+
+To clean up SSIS IR operation logs, you can add `EXEC internal.cleanup_expired_worker [@hoursToKeep='Number of hours to set as retention period']`.
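If you want a single **SQLStatementSource** value that covers all three kinds of logs, you can combine the calls. The following is a hedged sketch; the retention values passed to the optional parameters are illustrative, and you can omit them to use the defaults described earlier.

```sql
-- Combined clean-up statement for the Execute SQL Task (SQLStatementSource).
EXEC internal.cleanup_server_retention_window_exclusive;             -- package execution logs
EXEC internal.cleanup_completed_jobs_exclusive @minutesToKeep = 120; -- SSIS job logs (illustrative value)
EXEC internal.cleanup_expired_worker @hoursToKeep = 240;             -- SSIS IR operation logs (illustrative value)
```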
+ Once your ADF pipeline is prepared, you can attach a schedule trigger to run it periodically. For more information, see [How to trigger ADF pipeline on a schedule](quickstart-create-data-factory-portal.md#trigger-the-pipeline-on-a-schedule).
## Clean up SSISDB logs automatically via Azure SQL Managed Instance Agent
-If you use Azure SQL Managed Instance to host SSISDB, you can also use its built-in job orchestrator/scheduler, Azure SQL Managed Instance Agent, to clean up SSISDB logs automatically. If SSISDB is recently created in your Azure SQL Managed Instance, we've also created a T-SQL job called **SSIS Server Maintenance Job** under Azure SQL Managed Instance Agent for this purpose. It's by default disabled and configured with a schedule to run daily. If you want to enable it and or reconfigure its schedule, you can do so by connecting to your Azure SQL Managed Instance using SSMS. Once connected, on the **Object Explorer** window of SSMS, you can expand the **SQL Server Agent** node, expand the **Jobs** subnode, and double click on the **SSIS Server Maintenance Job** to enable/reconfigure it.
+If you use Azure SQL Managed Instance to host SSISDB, you can also use its built-in job orchestrator/scheduler, Azure SQL Managed Instance Agent, to clean up SSISDB logs automatically on schedule. If SSISDB was recently created in your Azure SQL Managed Instance, we've also created a T-SQL job called **SSIS Server Maintenance Job** under Azure SQL Managed Instance Agent specifically to clean up SSIS package execution logs. It's disabled by default and configured with a schedule to run daily. If you want to enable it and/or reconfigure its schedule, you can do so by connecting to your Azure SQL Managed Instance using SSMS. Once connected, in the **Object Explorer** window of SSMS, expand the **SQL Server Agent** node, expand the **Jobs** subnode, and double-click **SSIS Server Maintenance Job** to enable or reconfigure it.
![SSISDB log clean-up via Azure SQL Managed Instance Agent](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/clean-up-logs-ssms-maintenance-job.png)
EXEC dbo.sp_add_job
@job_name = N'SSIS Server Maintenance Job', @enabled = 0, @owner_login_name = '##MS_SSISServerCleanupJobLogin##',
- @description = N'Runs every day. The job removes operation records from the database that are outside the retention window and maintains a maximum number of versions per project.'
+ @description = N'Runs every day. The job removes operation records from the database that are outside the retention period and maintains a maximum number of versions per project.'
DECLARE @IS_server_name NVARCHAR(30) SELECT @IS_server_name = CONVERT(NVARCHAR, SERVERPROPERTY('ServerName'))
EXEC sp_add_jobschedule
@active_end_time = 120000 ```
+You can also configure the existing **SSIS Server Maintenance Job** or modify the above T-SQL script to additionally clean up SSIS job/IR operation logs by invoking the relevant SSISDB stored procedures.
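For example, the following hedged sketch extends the existing job with an extra step using the standard SQL Server Agent stored procedures; the step name is illustrative, and you should verify the job name in your Managed Instance before running it.

```sql
-- Add a step to the SSIS Server Maintenance Job that also cleans up
-- SSIS job logs and SSIS IR operation logs.
USE msdb;
GO
EXEC dbo.sp_add_jobstep
    @job_name      = N'SSIS Server Maintenance Job',
    @step_name     = N'Clean up SSIS job and IR operation logs',
    @subsystem     = N'TSQL',
    @database_name = N'SSISDB',
    @command       = N'EXEC internal.cleanup_completed_jobs_exclusive;
EXEC internal.cleanup_expired_worker;';
GO
```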
+ ## Clean up SSISDB logs automatically via Elastic Database Jobs
-If you use Azure SQL Database server to host SSISDB, it doesn't have a built-in job orchestrator/scheduler, so you must use an external component, e.g. ADF (see above) or Elastic Database Jobs (see the rest of this section), to clean up SSISDB logs automatically.
+Azure SQL Database server doesn't have a built-in job orchestrator/scheduler, so if you use it to host SSISDB, you must use an external component, such as ADF (see above) or Elastic Database Jobs (see the rest of this section), to clean up SSISDB logs automatically on schedule.
-Elastic Database Jobs is an Azure service that can automate and run jobs against a database or group of databases. You can schedule, run, and monitor these jobs by using Azure portal, Azure PowerShell, T-SQL, or REST APIs. Use Elastic Database Jobs to invoke the relevant SSISDB stored procedure for log clean-up one time or on a schedule. You can choose the schedule interval based on SSISDB resource usage to avoid heavy database load.
+Elastic Database Jobs is an Azure service that can automate and run jobs against a database or group of databases. You can schedule, run, and monitor these jobs by using Azure portal, Azure PowerShell, T-SQL, or REST APIs. Use Elastic Database Jobs to invoke the relevant SSISDB stored procedures for log clean-up one time or on a schedule. You can choose the schedule interval based on SSISDB resource usage to avoid heavy database load.
For more info, see [Manage groups of databases with Elastic Database Jobs](../azure-sql/database/elastic-jobs-overview.md).
-The following sections describe how to invoke the relevant SSISDB stored procedure, `[internal].[cleanup_server_retention_window_exclusive]`, which removes SSISDB logs that are outside the configured retention window.
+The following sections describe how to invoke the relevant SSISDB stored procedures, `[internal].[cleanup_server_retention_window_exclusive]`/`[internal].[cleanup_completed_jobs_exclusive]`/`[internal].[cleanup_expired_worker]`, which remove SSISDB logs that are outside their specific retention periods.
### Configure Elastic Database Jobs using Azure PowerShell [!INCLUDE [requires-azurerm](../../includes/requires-azurerm.md)]
-The following Azure PowerShell scripts create a new Elastic Job that invokes SSISDB log clean-up stored procedure. For more info, see [Create an Elastic Job agent using PowerShell](../azure-sql/database/elastic-jobs-powershell-create.md).
+The following Azure PowerShell scripts create a new Elastic Job that invokes your selected SSISDB log clean-up stored procedure. For more info, see [Create an Elastic Job agent using PowerShell](../azure-sql/database/elastic-jobs-powershell-create.md).
#### Create parameters
param(
$ResourceGroupName = $(Read-Host "Please enter an existing resource group name"), $AgentServerName = $(Read-Host "Please enter the name of an existing Azure SQL Database server, for example myjobserver, to hold your job database"), $SSISDBLogCleanupJobDB = $(Read-Host "Please enter a name for your job database to be created in the given Azure SQL Database server"),
+$StoredProcName = $(Read-Host "Please enter the name of SSISDB log clean-up stored procedure to be invoked by your job (internal.cleanup_server_retention_window_exclusive/internal.cleanup_completed_jobs_exclusive/internal.cleanup_expired_worker)"),
# Your job database should be a clean, empty S0 or higher service tier. We set S0 as default. $PricingTier = "S0",
$SSISDBServerAdminPassword = $(Read-Host "Please enter the target server admin p
$SSISDBName = "SSISDB", # Parameters needed to set the job schedule for invoking SSISDB log clean-up stored procedure
-$RunJobOrNot = $(Read-Host "Please indicate whether you want to run the job to clean up SSISDB logs outside the retention window immediately (Y/N). Make sure the retention window is set properly before running the following scripts as deleted logs cannot be recovered."),
+$RunJobOrNot = $(Read-Host "Please indicate whether you want to run your job that cleans up SSISDB logs outside their retention period immediately (Y/N). Make sure the specific retention period is set properly before running the following scripts as deleted logs cannot be recovered."),
$IntervalType = $(Read-Host "Please enter the interval type for SSISDB log clean-up schedule: Year, Month, Day, Hour, Minute, Second are supported."), $IntervalCount = $(Read-Host "Please enter the count of interval type for SSISDB log clean-up schedule."),
Invoke-SqlCmd @Params
Write-Output "Grant appropriate permissions on SSISDB..." $TargetDatabase = $SSISDBName $CreateJobUser = "CREATE USER SSISDBLogCleanupUser FROM LOGIN SSISDBLogCleanupUser"
-$GrantStoredProcedureExecution = "GRANT EXECUTE ON internal.cleanup_server_retention_window_exclusive TO SSISDBLogCleanupUser"
+$GrantStoredProcedureExecution = "GRANT EXECUTE ON " + $StoredProcName + " TO SSISDBLogCleanupUser"
$TargetDatabase | ForEach-Object -Process { $Params.Database = $_
$JobName = "CleanupSSISDBLog"
$Job = $JobAgent | New-AzureRmSqlElasticJob -Name $JobName -RunOnce $Job
-# Add your job step to invoke internal.cleanup_server_retention_window_exclusive
+# Add your job step to invoke SSISDB log clean-up stored procedure
Write-Output "Adding your job step to invoke SSISDB log clean-up stored procedure..."
-$SqlText = "EXEC internal.cleanup_server_retention_window_exclusive"
+$SqlText = "EXEC " + $StoredProcName
$Job | Add-AzureRmSqlElasticJobStep -Name "Step to invoke SSISDB log clean-up stored procedure" -TargetGroupName $SSISDBTargetGroup.TargetGroupName -CredentialName $JobCred.CredentialName -CommandText $SqlText # Run your job to immediately invoke SSISDB log clean-up stored procedure once
$JobExecution = $Job | Start-AzureRmSqlElasticJob
$JobExecution }
-# Schedule your job to invoke SSISDB log clean-up stored procedure periodically, deleting SSISDB logs outside the retention window
+# Schedule your job to invoke SSISDB log clean-up stored procedure periodically, deleting SSISDB logs outside their retention period
Write-Output "Starting your schedule to invoke SSISDB log clean-up stored procedure periodically..." $Job | Set-AzureRmSqlElasticJob -IntervalType $IntervalType -IntervalCount $IntervalCount -StartTime $StartTime -Enable ``` ### Configure Elastic Database Jobs using T-SQL
-The following T-SQL scripts create a new Elastic Job that invokes SSISDB log clean-up stored procedure. For more info, see [Use T-SQL to create and manage Elastic Database Jobs](../azure-sql/database/elastic-jobs-tsql-create-manage.md).
+The following T-SQL scripts create a new Elastic Job that invokes your selected SSISDB log clean-up stored procedure. For more info, see [Use T-SQL to create and manage Elastic Database Jobs](../azure-sql/database/elastic-jobs-tsql-create-manage.md).
1. Identify an empty Azure SQL Database at the S0 or higher service tier, or create a new one, for your job database. Then create an Elastic Job Agent in the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.SQLElasticJobAgent). 2. In your job database, create credentials for connecting to SSISDB in your target server.
- ```sql
- -- Connect to the job database specified when creating your job agent.
- -- Create a database master key if one doesn't already exist, using your own password.
- CREATE MASTER KEY ENCRYPTION BY PASSWORD= '<EnterStrongPasswordHere>';
+ ```sql
+ -- Connect to the job database specified when creating your job agent.
+ -- Create a database master key if one doesn't already exist, using your own password.
+ CREATE MASTER KEY ENCRYPTION BY PASSWORD= '<EnterStrongPasswordHere>';
- -- Create credentials for SSISDB log clean-up.
- CREATE DATABASE SCOPED CREDENTIAL SSISDBLogCleanupCred WITH IDENTITY = 'SSISDBLogCleanupUser', SECRET = '<EnterStrongPasswordHere>';
- ```
+ -- Create credentials for SSISDB log clean-up.
+ CREATE DATABASE SCOPED CREDENTIAL SSISDBLogCleanupCred WITH IDENTITY = 'SSISDBLogCleanupUser', SECRET = '<EnterStrongPasswordHere>';
+ ```
3. Define your target group that includes only SSISDB to clean up.
- ```sql
- -- Connect to your job database.
- -- Add your target group.
- EXEC jobs.sp_add_target_group 'SSISDBTargetGroup'
-
- -- Add SSISDB to your target group
- EXEC jobs.sp_add_target_group_member 'SSISDBTargetGroup',
- @target_type = 'SqlDatabase',
- @server_name = '<EnterSSISDBTargetServerName>',
- @database_name = 'SSISDB'
-
- -- View your recently created target group and its members.
- SELECT * FROM jobs.target_groups WHERE target_group_name = 'SSISDBTargetGroup';
- SELECT * FROM jobs.target_group_members WHERE target_group_name = 'SSISDBTargetGroup';
- ```
+ ```sql
+ -- Connect to your job database.
+ -- Add your target group.
+ EXEC jobs.sp_add_target_group 'SSISDBTargetGroup'
+
+ -- Add SSISDB to your target group
+ EXEC jobs.sp_add_target_group_member 'SSISDBTargetGroup',
+ @target_type = 'SqlDatabase',
+ @server_name = '<EnterSSISDBTargetServerName>',
+ @database_name = 'SSISDB'
+
+ -- View your recently created target group and its members.
+ SELECT * FROM jobs.target_groups WHERE target_group_name = 'SSISDBTargetGroup';
+ SELECT * FROM jobs.target_group_members WHERE target_group_name = 'SSISDBTargetGroup';
+ ```
+ 4. Create an SSISDB log clean-up user from a login in SSISDB and grant it permission to invoke the relevant SSISDB log clean-up stored procedure. For detailed guidance, see [Manage logins](../azure-sql/database/logins-create-manage.md).
- ```sql
- -- Connect to the master database of target server that hosts SSISDB
- CREATE LOGIN SSISDBLogCleanupUser WITH PASSWORD = '<strong_password>';
+ ```sql
+ -- Connect to the master database of target server that hosts SSISDB
+ CREATE LOGIN SSISDBLogCleanupUser WITH PASSWORD = '<strong_password>';
+
+ -- Connect to SSISDB
+ CREATE USER SSISDBLogCleanupUser FROM LOGIN SSISDBLogCleanupUser;
+ GRANT EXECUTE ON <internal.cleanup_server_retention_window_exclusive/internal.cleanup_completed_jobs_exclusive/internal.cleanup_expired_worker> TO SSISDBLogCleanupUser
+ ```
- -- Connect to SSISDB
- CREATE USER SSISDBLogCleanupUser FROM LOGIN SSISDBLogCleanupUser;
- GRANT EXECUTE ON internal.cleanup_server_retention_window_exclusive TO SSISDBLogCleanupUser
- ```
5. Create your job and add your job step to invoke SSISDB log clean-up stored procedure.
- ```sql
- -- Connect to your job database.
- -- Add your job to invoke SSISDB log clean-up stored procedure.
- EXEC jobs.sp_add_job @job_name='CleanupSSISDBLog', @description='Remove SSISDB logs outside the configured retention window'
-
- -- Add your job step to invoke internal.cleanup_server_retention_window_exclusive
- EXEC jobs.sp_add_jobstep @job_name='CleanupSSISDBLog',
- @command=N'EXEC internal.cleanup_server_retention_window_exclusive',
- @credential_name='SSISDBLogCleanupCred',
- @target_group_name='SSISDBTargetGroup'
- ```
-6. Before continuing, make sure you set the retention window properly. SSISDB logs outside this window will be deleted and can't be recovered. You can then run your job immediately to start SSISDB log clean-up.
-
- ```sql
- -- Connect to your job database.
- -- Run your job immediately to invoke SSISDB log clean-up stored procedure.
- declare @je uniqueidentifier
- exec jobs.sp_start_job 'CleanupSSISDBLog', @job_execution_id = @je output
-
- -- Watch SSISDB log clean-up results
- select @je
- select * from jobs.job_executions where job_execution_id = @je
- ```
-7. Optionally, you can delete SSISDB logs outside the retention window on a schedule. Configure your job parameters as follows.
-
- ```sql
- -- Connect to your job database.
- EXEC jobs.sp_update_job
- @job_name='CleanupSSISDBLog',
- @enabled=1,
- @schedule_interval_type='<EnterIntervalType(Month,Day,Hour,Minute,Second)>',
- @schedule_interval_count='<EnterDetailedIntervalValue>',
- @schedule_start_time='<EnterProperStartTimeForSchedule>',
- @schedule_end_time='<EnterProperEndTimeForSchedule>'
- ```
+ ```sql
+ -- Connect to your job database.
+ -- Add your job to invoke the relevant SSISDB log clean-up stored procedure.
+ EXEC jobs.sp_add_job @job_name='CleanupSSISDBLog', @description='Remove SSISDB logs outside their specific retention period'
+
+ -- Add your job step to invoke the relevant SSISDB log clean-up stored procedure
+ EXEC jobs.sp_add_jobstep @job_name='CleanupSSISDBLog',
+ @command=N'<EXEC internal.cleanup_server_retention_window_exclusive/EXEC internal.cleanup_completed_jobs_exclusive/EXEC internal.cleanup_expired_worker>',
+ @credential_name='SSISDBLogCleanupCred',
+ @target_group_name='SSISDBTargetGroup'
+ ```
+
+6. Before continuing, make sure you set the specific retention period properly. SSISDB logs outside this period will be deleted and can't be recovered. You can then run your job immediately to start SSISDB log clean-up.
+
+ ```sql
+ -- Connect to your job database.
+ -- Run your job immediately to invoke SSISDB log clean-up stored procedure.
+ declare @je uniqueidentifier
+ exec jobs.sp_start_job 'CleanupSSISDBLog', @job_execution_id = @je output
+
+ -- Watch SSISDB log clean-up results
+ select @je
+ select * from jobs.job_executions where job_execution_id = @je
+ ```
+
+7. Optionally, you can delete SSISDB logs outside their retention period on a schedule. Configure your job parameters as follows.
+
+ ```sql
+ -- Connect to your job database.
+ EXEC jobs.sp_update_job
+ @job_name='CleanupSSISDBLog',
+ @enabled=1,
+ @schedule_interval_type='<EnterIntervalType(Month,Day,Hour,Minute,Second)>',
+ @schedule_interval_count='<EnterDetailedIntervalValue>',
+ @schedule_start_time='<EnterProperStartTimeForSchedule>',
+ @schedule_end_time='<EnterProperEndTimeForSchedule>'
+ ```
### Monitor SSISDB log clean-up job using Azure portal
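Besides the Azure portal, you can also check recent executions of the clean-up job with T-SQL against your job database, using the same Elastic Database Jobs views referenced earlier. A hedged sketch:

```sql
-- Review recent executions of the clean-up job from the job database.
-- Column availability may vary slightly across Elastic Database Jobs versions.
SELECT *
FROM jobs.job_executions
WHERE job_name = 'CleanupSSISDBLog'
ORDER BY start_time DESC;
```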
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
You are running ADF in debug mode.
**Resolution**
-Please run pipeline in trigger mode.
+Execute the pipeline in trigger mode.
### Cannot publish because account is locked **Cause**
-You made changes in collaboration branch to remove storage event trigger. You are trying to publish and encounter "Trigger deactivation error" message. This is due to the storage account, used for the event trigger, is being locked.
+You made changes in the collaboration branch to remove the storage event trigger. You're trying to publish and encounter a `Trigger deactivation error` message.
+
+**Resolution**
+
+This error occurs because the storage account used for the event trigger is locked. Unlock the storage account.
### Expression builder fails to load
You made changes in collaboration branch to remove storage event trigger. You ar
The expression builder can fail to load due to network or cache problems with the web browser. - **Resolution** Upgrade the web browser to the latest version of a supported browser, clear cookies for the site, and refresh the page.
You have chained many activities.
You can split your pipelines into sub-pipelines, and stitch them together with the **ExecutePipeline** activity.
+### How to optimize a pipeline with mapping data flows to avoid internal server errors, concurrency errors, and so on, during execution
+
+**Cause**
+
+You haven't optimized the mapping data flow.
+
+**Resolution**
+* Use memory-optimized compute when dealing with large amounts of data and transformations.
+* Reduce the batch size when using a ForEach activity.
+* Scale up your databases and warehouses to match the performance of your ADF.
+* Use a separate integration runtime (IR) for activities running in parallel.
+* Adjust the partitions at the source and sink accordingly.
+* Review [Data Flow Optimizations](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance)
## Next steps
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-and-access-control-troubleshoot-guide.md
description: Learn how to troubleshoot security and access control issues in Azu
Previously updated : 05/31/2021 Last updated : 07/28/2021
You might notice other data factories (on different tenants) as you're attemptin
The self-hosted IR can't be shared across tenants.
+## Internal Server error while trying to delete ADF with Customer Managed Key (CMK) and User Assigned Managed Identity (UA-MI)
+
+### Deleting resources and ADF in the wrong order causes the error
+
+#### Symptoms
+`{\"error\":{\"code\":\"InternalError\",\"message\":\"Internal error has occurred.\"}}`
+
+#### Cause
+
+If you are doing any operation related to CMK, do all ADF-related operations first, and then the external operations (like managed identity or key vault operations). For example, if you want to delete all resources, delete the factory first, and then delete the key vault. If you do it in a different order, the ADF call will fail because it can't read the related objects anymore, and it won't be able to validate whether deletion is possible.
+
+#### Solution
+
+There are three possible scenarios and ways to solve the issue:
+
+* You revoked ADF's access to the key vault where the CMK is stored.
+You can reassign the data factory access with the following permissions: **Get, Unwrap Key, and Wrap Key**. These permissions are required to enable customer-managed keys in Data Factory. For details, see [Grant access to ADF](https://docs.microsoft.com/azure/data-factory/enable-customer-managed-key#grant-data-factory-access-to-azure-key-vault).
+ Once the permissions are granted, you should be able to delete ADF.
+
+* You deleted the key vault or CMK before deleting ADF.
+ The CMK used by ADF should have **Soft Delete** and **Purge Protection** enabled, which have a default retention policy of 90 days, so you can restore the deleted key.
+See [Recover deleted key](https://docs.microsoft.com/azure/key-vault/general/key-vault-recovery?tabs=azure-portal#list-recover-or-purge-soft-deleted-secrets-keys-and-certificates) and [Recover deleted key vault](https://docs.microsoft.com/azure/key-vault/general/key-vault-recovery?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault).
+
+* The user-assigned managed identity (UA-MI) was deleted before ADF.
+You can recover from this by using REST API calls, made from an HTTP client of your choice in any programming language. If you don't already have anything set up for REST API calls with Azure authentication, the easiest way is to use Postman or Fiddler. Follow these steps:
+
+ 1. Make a GET call to the factory using a URL like `https://management.azure.com/subscriptions/YourSubscription/resourcegroups/YourResourceGroup/providers/Microsoft.DataFactory/factories/YourFactoryName?api-version=2018-06-01`.
+
+ 2. Create a new user-assigned managed identity with a different name (the same name may work, but to be safe, use a different name than the one in the GET response).
+
+ 3. Modify the `encryption.identity` property and `identity.userAssignedIdentities` to point to the newly created managed identity. Remove the `clientId` and `principalId` from the `userAssignedIdentity` object.
+
+ 4. Make a PUT call to the same factory URL, passing the new body. It's important to pass exactly what you got in the GET response and modify only the identity; otherwise you would unintentionally override other settings.
+
+ 5. After the call succeeds, you will be able to see the entities again and retry deleting.
+ ## Next steps For more help with troubleshooting, try the following resources:
For more help with troubleshooting, try the following resources:
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A page](/answers/topics/azure-data-factory.html) * [Stack overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
$DataProxyIntegrationRuntimeName = "" # OPTIONAL to configure a proxy for on-pre
$DataProxyStagingLinkedServiceName = "" # OPTIONAL to configure a proxy for on-premises data access $DataProxyStagingPath = "" # OPTIONAL to configure a proxy for on-premises data access
-# Add self-hosted integration runtime parameters if you configure a proxy for on-premises data accesss
+# Add self-hosted integration runtime parameters if you configure a proxy for on-premises data access
if(![string]::IsNullOrEmpty($DataProxyIntegrationRuntimeName) -and ![string]::IsNullOrEmpty($DataProxyStagingLinkedServiceName)) { Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
If you need to use strong cryptography/more secure network protocol (TLS 1.2) an
## Next steps
-After you've configured your self-hosted IR as a proxy for your Azure-SSIS IR, you can deploy and run your packages to access data on-premises as Execute SSIS Package activities in Data Factory pipelines. To learn how, see [Run SSIS packages as Execute SSIS Package activities in Data Factory pipelines](./how-to-invoke-ssis-package-ssis-activity.md).
+After you've configured your self-hosted IR as a proxy for your Azure-SSIS IR, you can deploy and run your packages to access data on-premises as Execute SSIS Package activities in Data Factory pipelines. To learn how, see [Run SSIS packages as Execute SSIS Package activities in Data Factory pipelines](./how-to-invoke-ssis-package-ssis-activity.md).
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
This model describes a Home, with one **property** for an ID. The Home model als
This section goes into more detail about **properties** and **telemetry** in DTDL models.
+For a comprehensive list of the fields that may appear as part of a property, please see [Property in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). For a comprehensive list of the fields that may appear as part of telemetry, please see [Telemetry in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#telemetry).
+ ### Difference between properties and telemetry Here's some additional guidance on conceptually distinguishing between DTDL **property** and **telemetry** in Azure Digital Twins.
The following example shows a Sensor model with a semantic-type telemetry for Te
This section goes into more detail about **relationships** in DTDL models.
+For a comprehensive list of the fields that may appear as part of a relationship, please see [Relationship in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#relationship).
+ ### Basic relationship example Here is a basic example of a relationship on a DTDL model. This example shows a relationship on a Home model that allows it to connect to a Floor model. :::code language="json" source="~/digital-twins-docs-samples-getting-started/models/basic-home-example/IHome.json" highlight="12-18":::
+>[!NOTE]
+>For relationships, `@id` is an optional field. If no `@id` is provided, the digital twin interface processor will assign one.
+ ### Targeted and non-targeted relationships Relationships can be defined with or without a **target**. A target specifies which types of twin the relationship can reach. For example, you might include a target to specify that a Home model can only have a *rel_has_floors* relationship with twins that are Floor twins.
The following example shows another version of the Home model, where the `rel_ha
This section goes into more detail about **components** in DTDL models.
+For a comprehensive list of the fields that may appear as part of a component, please see [Component in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#component).
+ ### Basic component example Here is a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat model as a component.
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
First, complete the setup steps in [Set up an instance and authentication](how-t
To proceed, you will need a client app project in which you write your code. If you don't already have a client app project set up, create a basic project in your language of choice to use with this tutorial.
-## Common authentication methods with Azure.Identity
+## Authenticate using Azure.Identity library
`Azure.Identity` is a client library that provides several credential-obtaining methods that you can use to get a bearer token and authenticate with your SDK. Although this article gives examples in C#, you can view `Azure.Identity` for several languages, including...
Three common credential-obtaining methods in `Azure.Identity` are:
* [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential?view=azure-dotnet&preserve-view=true) works great in cases where you need [managed identities (MSI)](../active-directory/managed-identities-azure-resources/overview.md), and is a good candidate for working with Azure Functions and deploying to Azure services. * [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) is intended for interactive applications, and can be used to create an authenticated SDK client
-The following example shows how to use each of these with the .NET (C#) SDK.
+The rest of this article shows how to use these with the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-## Authentication examples: .NET (C#) SDK
+### Add Azure.Identity to your .NET project
-This section shows an example in C# for using the provided .NET SDK to write authentication code.
+To set up your .NET project to authenticate with `Azure.Identity`, complete the following steps:
-First, include the SDK package `Azure.DigitalTwins.Core` and the `Azure.Identity` package in your project. Depending on your tools of choice, you can include the packages using the Visual Studio package manager or the `dotnet` command line tool.
+1. Include the SDK package `Azure.DigitalTwins.Core` and the `Azure.Identity` package in your project. Depending on your tools of choice, you can include the packages using the Visual Studio package manager or the `dotnet` command line tool.
-You'll also need to add the following using statements to your project code:
+1. Add the following using statements to your project code:
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/authentication.cs" id="Azure_Digital_Twins_dependencies":::
-Then, add code to obtain credentials using one of the methods in `Azure.Identity`.
+Next, add code to obtain credentials using one of the methods in `Azure.Identity`. The following sections give more detail about using each one.
### DefaultAzureCredential method
Here is an example of the code to create an authenticated SDK client using `Inte
>[!NOTE] > While you can place the client ID, tenant ID and instance URL directly into the code as shown above, it's a good idea to have your code get these values from a configuration file or environment variable instead.
-#### Other notes about authenticating Azure Functions
+## Authenticate Azure Functions
See [Set up an Azure function for processing data](how-to-create-azure-function.md) for a more complete example that explains some of the important configuration choices in the context of functions.
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
Here's a sample relationship-based query. This code snippet selects all digital
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByRelationship1":::
+The type of the relationship (`contains` in the example above) is indicated using the relationship's **name** field from its [DTDL definition](concepts-models.md#basic-relationship-example).
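For example, a query of the following shape returns the Floor twins that a given Home twin reaches through its `contains` relationships. This is a hedged sketch; the twin ID and collection names are illustrative.

```sql
-- 'contains' is the relationship's DTDL name; 'Home-01' is an illustrative twin ID.
SELECT Floor
FROM DIGITALTWINS Home
JOIN Floor RELATED Home.contains
WHERE Home.$dtId = 'Home-01'
```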
+ > [!NOTE] > The developer does not need to correlate this `JOIN` with a key value in the `WHERE` clause (or specify a key value inline with the `JOIN` definition). This correlation is computed automatically by the system, as the relationship properties themselves identify the target entity.
digital-twins Reference Query Clause Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-clause-join.md
The `JOIN` clause is used in the Azure Digital Twins query language as part of t
This clause is optional while querying. ## Core syntax: JOIN ... RELATED
-Because relationships in Azure Digital Twins are part of digital twins, not independent entities, the `RELATED` keyword is used in `JOIN` queries to reference the set of relationships of a certain type from the twin collection. The set of relationships can be assigned a collection name.
+Because relationships in Azure Digital Twins are part of digital twins, not independent entities, the `RELATED` keyword is used in `JOIN` queries to reference the set of relationships of a certain type from the twin collection (the type is specified using the relationship's **name** field from its [DTDL definition](concepts-models.md#basic-relationship-example)). The set of relationships can be assigned a collection name within the query.
The query must then use the `WHERE` clause to specify which twin or twins are being used to support the relationship query, by filtering on either the source or target twin's `$dtId` value.
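Putting these pieces together, the clause typically looks like the following hedged sketch, where `rel` is the collection name assigned to the set of `contains` relationships and the `WHERE` clause anchors the query on the source twin's `$dtId`; all names are illustrative.

```sql
-- JOIN ... RELATED with a named relationship collection.
SELECT Floor, rel
FROM DIGITALTWINS Home
JOIN Floor RELATED Home.contains rel
WHERE Home.$dtId = 'Home-01'
```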
event-grid Custom Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-topics.md
Title: Custom topics in Azure Event Grid description: Describes custom topics in Azure Event Grid. Previously updated : 07/07/2020 Last updated : 07/27/2021 # Custom topics in Azure Event Grid
The following sections provide links to tutorials to create custom topics using
| [Resource Manager template: custom topic and WebHook endpoint](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventgrid/event-grid) | A Resource Manager template that creates a custom topic and subscription for that custom topic. It sends events to a WebHook. | | [Resource Manager template: custom topic and Event Hubs endpoint](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventgrid/event-grid-event-hubs-handler)| A Resource Manager template that creates a subscription for a custom topic. It sends events to an Azure Event Hubs. |
+> [!NOTE]
+> Azure Digital Twins can route event notifications to custom topics that you create with Event Grid. For more information, see [Manage endpoints and routes in Azure Digital Twins](../digital-twins/how-to-manage-routes.md).
+ ## Next steps See the following articles:
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-and-retry.md
Title: Azure Event Grid delivery and retry description: Describes how Azure Event Grid delivers events and how it handles undelivered messages. Previously updated : 10/29/2020 Last updated : 07/27/2021 # Event Grid message delivery and retry-
-This article describes how Azure Event Grid handles events when delivery isn't acknowledged.
-
-Event Grid provides durable delivery. It delivers each message **at least once** for each subscription. Events are sent to the registered endpoint of each subscription immediately. If an endpoint doesn't acknowledge receipt of an event, Event Grid retries delivery of the event.
+Event Grid provides durable delivery. It tries to deliver each message **at least once** for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there is a failure, Event Grid retries delivery based on a fixed [retry schedule](#retry-schedule) and [retry policy](#retry-policy). By default, Event Grid delivers one event at a time to the subscriber. However, the payload is an array with a single event.
> [!NOTE] > Event Grid doesn't guarantee order for event delivery, so subscribers may receive them out of order.
-## Batched event delivery
-
-Event Grid defaults to sending each event individually to subscribers. The subscriber receives an array with a single event. You can configure Event Grid to batch events for delivery for improved HTTP performance in high-throughput scenarios.
-
-Batched delivery has two settings:
-
-* **Max events per batch** - Maximum number of events Event Grid will deliver per batch. This number will never be exceeded, however fewer events may be delivered if no other events are available at the time of publish. Event Grid doesn't delay events to create a batch if fewer events are available. Must be between 1 and 5,000.
-* **Preferred batch size in kilobytes** - Target ceiling for batch size in kilobytes. Similar to max events, the batch size may be smaller if more events aren't available at the time of publish. It's possible that a batch is larger than the preferred batch size *if* a single event is larger than the preferred size. For example, if the preferred size is 4 KB and a 10-KB event is pushed to Event Grid, the 10-KB event will still be delivered in its own batch rather than being dropped.
-
-Batched delivery in configured on a per-event subscription basis via the portal, CLI, PowerShell, or SDKs.
-
-### Azure portal:
-![Batch delivery settings](./media/delivery-and-retry/batch-settings.png)
-
-### Azure CLI
-When creating an event subscription, use the following parameters:
--- **max-events-per-batch** - Maximum number of events in a batch. Must be a number between 1 and 5000.-- **preferred-batch-size-in-kilobytes** - Preferred batch size in kilobytes. Must be a number between 1 and 1024.-
-```azurecli
-storageid=$(az storage account show --name <storage_account_name> --resource-group <resource_group_name> --query id --output tsv)
-endpoint=https://$sitename.azurewebsites.net/api/updates
-
-az eventgrid event-subscription create \
- --resource-id $storageid \
- --name <event_subscription_name> \
- --endpoint $endpoint \
- --max-events-per-batch 1000 \
- --preferred-batch-size-in-kilobytes 512
-```
-
-For more information on using Azure CLI with Event Grid, see [Route storage events to web endpoint with Azure CLI](../storage/blobs/storage-blob-event-quickstart.md).
-
-## Retry schedule and duration
-
+## Retry schedule
When Event Grid receives an error for an event delivery attempt, Event Grid decides whether it should retry the delivery, dead-letter the event, or drop the event based on the type of the error. If the error returned by the subscribed endpoint is a configuration-related error that can't be fixed with retries (for example, if the endpoint is deleted), Event Grid will either perform dead-lettering on the event or drop the event if dead-letter isn't configured.
If the endpoint responds within 3 minutes, Event Grid will attempt to remove the
Event Grid adds a small randomization to all retry steps and may opportunistically skip certain retries if an endpoint is consistently unhealthy, down for a long period, or appears to be overwhelmed.
-For deterministic behavior, set the event time-to-live and max delivery attempts in the [subscription retry policies](manage-event-delivery.md).
+## Retry policy
+You can customize the retry policy when creating an event subscription by using the following two configurations. An event will be dropped if either of the limits of the retry policy is reached.
-By default, Event Grid expires all events that aren't delivered within 24 hours. You can [customize the retry policy](manage-event-delivery.md) when creating an event subscription. You provide the maximum number of delivery attempts (default is 30) and the event time-to-live (default is 1440 minutes).
+- **Maximum number of attempts** - The value must be an integer between 1 and 30. The default value is 30.
+- **Event time-to-live (TTL)** - The value must be an integer between 1 and 1440. The default value is 1440 minutes.
-## Delayed Delivery
+For sample CLI and PowerShell command to configure these settings, see [Set retry policy](manage-event-delivery.md#set-retry-policy).
+
+## Output batching
+Event Grid defaults to sending each event individually to subscribers. The subscriber receives an array with a single event. You can configure Event Grid to batch events for delivery for improved HTTP performance in high-throughput scenarios. Batching is turned off by default and can be turned on per-subscription.
+
+### Batching policy
+Batched delivery has two settings:
+* **Max events per batch** - Maximum number of events Event Grid will deliver per batch. This number will never be exceeded, however fewer events may be delivered if no other events are available at the time of publish. Event Grid doesn't delay events to create a batch if fewer events are available. Must be between 1 and 5,000.
+* **Preferred batch size in kilobytes** - Target ceiling for batch size in kilobytes. Similar to max events, the batch size may be smaller if more events aren't available at the time of publish. It's possible that a batch is larger than the preferred batch size *if* a single event is larger than the preferred size. For example, if the preferred size is 4 KB and a 10-KB event is pushed to Event Grid, the 10-KB event will still be delivered in its own batch rather than being dropped.
+
+Batched delivery is configured on a per-event subscription basis via the portal, CLI, PowerShell, or SDKs.
+
+### Batching behavior
+
+* All or none
+
+ Event Grid operates with all-or-none semantics. It doesn't support partial success of a batch delivery. Subscribers should be careful to only ask for as many events per batch as they can reasonably handle in 60 seconds.
+
+* Optimistic batching
+
+ The batching policy settings aren't strict bounds on the batching behavior, and are respected on a best-effort basis. At low event rates, you'll often observe the batch size being less than the requested maximum events per batch.
+
+* Default is set to OFF
+
+ By default, Event Grid only adds one event to each delivery request. To turn on batching, set either of the settings mentioned earlier in the article in the event subscription JSON.
+
+* Default values
+
+ It isn't necessary to specify both settings (maximum events per batch and approximate batch size in kilobytes) when creating an event subscription. If only one setting is set, Event Grid uses (configurable) default values. See the following sections for the default values, and how to override them.
+
+### Azure portal:
+![Batch delivery settings](./media/delivery-and-retry/batch-settings.png)
+
+### Azure CLI
+When creating an event subscription, use the following parameters:
+
+- **max-events-per-batch** - Maximum number of events in a batch. Must be a number between 1 and 5000.
+- **preferred-batch-size-in-kilobytes** - Preferred batch size in kilobytes. Must be a number between 1 and 1024.
+
+```azurecli
+storageid=$(az storage account show --name <storage_account_name> --resource-group <resource_group_name> --query id --output tsv)
+endpoint=https://$sitename.azurewebsites.net/api/updates
+
+az eventgrid event-subscription create \
+ --resource-id $storageid \
+ --name <event_subscription_name> \
+ --endpoint $endpoint \
+ --max-events-per-batch 1000 \
+ --preferred-batch-size-in-kilobytes 512
+```
+
+For more information on using Azure CLI with Event Grid, see [Route storage events to web endpoint with Azure CLI](../storage/blobs/storage-blob-event-quickstart.md).
++
+## Delayed Delivery
As an endpoint experiences delivery failures, Event Grid will begin to delay the delivery and retry of events to that endpoint. For example, if the first 10 events published to an endpoint fail, Event Grid will assume that the endpoint is experiencing issues and will delay all subsequent retries *and new* deliveries for some time - in some cases up to several hours. The functional purpose of delayed delivery is to protect unhealthy endpoints and the Event Grid system. Without back-off and delay of delivery to unhealthy endpoints, Event Grid's retry policy and volume capabilities can easily overwhelm a system.
event-grid Event Schema Communication Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-communication-services.md
Azure Communication Services emits the following event types:
| Microsoft.Communication.ChatThreadPropertiesUpdated| Published when a chat thread's properties like topic are updated.| | Microsoft.Communication.ChatMessageEditedInThread | Published when a message is edited in a chat thread | | Microsoft.Communication.ChatMessageDeletedInThread | Published when a message is deleted in a chat thread |
+| Microsoft.Communication.RecordingFileStatusUpdated | Published when recording file is available |
You can use the Azure portal or Azure CLI to subscribe to events emitted by your Communication Services resource. Get started with handling events by looking at [How to handle SMS Events in Communication Services](../communication-services/quickstarts/telephony-sms/handle-sms-events.md)
This section contains an example of what that data would look like for each even
} ] ```
+> [!IMPORTANT]
+> The Call Recording feature is still in Public Preview.
+### Microsoft.Communication.RecordingFileStatusUpdated
+
+```json
+[
+ {
+ "id": "7283825e-f8f1-4c61-a9ea-752c56890500",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/}{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "/recording/call/{call-id}/recordingId/{recording-id}",
+ "data": {
+ "recordingStorageInfo": {
+ "recordingChunks": [
+ {
+ "documentId": "0-eus-d12-801b3f3fc462fe8a01e6810cbff729b8",
+ "index": 0,
+ "endReason": "SessionEnded",
+ "contentLocation": "https://storage.asm.skype.com/v1/objects/0-eus-d12-801b3f3fc462fe8a01e6810cbff729b8/content/video",
+ "metadataLocation": "https://storage.asm.skype.com/v1/objects/0-eus-d12-801b3f3fc462fe8a01e6810cbff729b8/content/acsmetadata"
+ }
+ ]
+ },
+ "recordingStartTime": "2021-07-27T15:20:23.6089755Z",
+ "recordingDurationMs": 6620,
+ "sessionEndReason": "CallEnded"
+ },
+ "eventType": "Microsoft.Communication.RecordingFileStatusUpdated",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-07-27T15:20:34.2199328Z"
+ }
+]
+```
## Quickstarts and how-tos
event-grid Manage Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/manage-event-delivery.md
Title: Dead letter and retry policies - Azure Event Grid description: Describes how to customize event delivery options for Event Grid. Set a dead-letter destination, and specify how long to retry delivery. Previously updated : 07/20/2020 Last updated : 07/27/2021
To turn off dead-lettering, rerun the command to create the event subscription b
When creating an Event Grid subscription, you can set values for how long Event Grid should try to deliver the event. By default, Event Grid tries for 24 hours (1440 minutes), or 30 times. You can set either of these values for your event grid subscription. The value for event time-to-live must be an integer from 1 to 1440. The value for max retries must be an integer from 1 to 30.
-You can't configure the [retry schedule](delivery-and-retry.md#retry-schedule-and-duration).
+You can't configure the [retry schedule](delivery-and-retry.md#retry-schedule).
### Azure CLI
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/overview.md
Title: What is Azure Event Grid? description: Send event data from a source to handlers with Azure Event Grid. Build event-based applications, and integrate with Azure services. Previously updated : 01/28/2021 Last updated : 07/27/2021 # What is Azure Event Grid?
event-grid Receive Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/receive-events.md
Title: Receive events from Azure Event Grid to an HTTP endpoint description: Describes how to validate an HTTP endpoint, then receive and deserialize Events from Azure Event Grid Previously updated : 11/19/2020 Last updated : 07/16/2021
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
The Event Hubs Geo-disaster recovery feature is designed to make it easier to re
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups and settings) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated. > [!IMPORTANT]
-> The feature enables instantaneous continuity of operations with the same configuration, but **does not replicate the event data**. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary Event Hub after failover will be recoverable and the historic events can be obtained from there once access is restored. For replicating event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](event-hubs-federation-overview.md).
+> - The feature enables instantaneous continuity of operations with the same configuration, but **does not replicate the event data**. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary Event Hub after failover will be recoverable and the historic events can be obtained from there once access is restored. For replicating event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](event-hubs-federation-overview.md).
+> - Azure Active Directory (Azure AD) role-based access control (RBAC) assignments to entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them.
## Outages and disasters
Advantage of this approach is that failover can happen at the application layer
> [!NOTE] > For guidance on geo-disaster recovery of a virtual network, see [Virtual Network - Business Continuity](../virtual-network/virtual-network-disaster-recovery-guidance.md).+
+## Role-based access control
+Azure Active Directory (Azure AD) role-based access control (RBAC) assignments to entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them.
## Next steps Review the following samples or reference documentation.
expressroute Designing For High Availability With Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/designing-for-high-availability-with-expressroute.md
Microsoft peering is designed for communication between public end-points. So co
[![3]][3]
-In the option 1, NAT is applied after splitting the traffic between the primary and secondary connections of the ExpressRoute. To meet the stateful requirements of NAT, independent NAT pools are used between the primary and the secondary devices so that the return traffic would arrive to the same edge device through which the flow egressed.
+#### Option 1:
-In the option 2, a common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute. It's important to make the distinction that the common NAT pool before splitting the traffic does not mean introducing single-point of failure thereby compromising high-availability.
+NAT gets applied after splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. To meet the stateful requirements of NAT, independent NAT pools are used for the primary and the secondary devices. The return traffic will arrive on the same edge device through which the flow egressed.
-With the option 1, following an ExpressRoute connection failure, ability to reach the corresponding NAT pool is broken. Therefore, all the broken flows have to be re-established either by TCP or application layer following the corresponding window timeout. If either of the NAT pools are used to frontend any of the on-premises servers and if the corresponding connectivity were to fail, the on-premises servers cannot be reached from Azure until the connectivity is fixed.
+If the ExpressRoute connection fails, the ability to reach the corresponding NAT pool is then broken. That's why all broken network flows have to be re-established either by TCP or by the application layer following the corresponding window timeout. During the failure, Azure can't reach the on-premises servers using the corresponding NAT until connectivity has been restored for either the primary or secondary connections of the ExpressRoute circuit.
-Whereas with the option 2, the NAT is reachable even after a primary or secondary connection failure. Therefore, the network layer itself can reroute the packets and help faster recovery following the failure.
+#### Option 2:
+
+A common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. It's important to note that having a common NAT pool before splitting the traffic doesn't introduce a single point of failure, so it doesn't compromise high availability.
+
+The NAT pool is reachable even after the primary or secondary connection fails. That's why the network layer itself can reroute the packets and help recover faster following a failure.
> [!NOTE]
-> If you use NAT option 1 (independent NAT pools for primary and secondary ExpressRoute connections) and map a port of an IP address from one of the NAT pool to an on-premises server, the server will not be reachable via the ExpressRoute circuit when the corresponding connection fails.
->
+> * If you use NAT option 1 (independent NAT pools for primary and secondary ExpressRoute connections) and map a port of an IP address from one of the NAT pool to an on-premises server, the server will not be reachable via the ExpressRoute circuit when the corresponding connection fails.
+> * Terminating ExpressRoute BGP connections on stateful devices can cause issues with failover during planned or unplanned maintenance by Microsoft or your ExpressRoute provider. You should test your setup to ensure your traffic fails over properly, and when possible, terminate BGP sessions on stateless devices.
## Fine-tuning features for private peering
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-virtual-network-gateways.md
Each virtual network can have only one virtual network gateway per gateway type.
## <a name="gwsku"></a>Gateway SKUs [!INCLUDE [expressroute-gwsku-include](../../includes/expressroute-gwsku-include.md)]
-If you want to upgrade your gateway to a more powerful gateway SKU, in most cases you can use the 'Resize-AzVirtualNetworkGateway' PowerShell cmdlet. This will work for upgrades to Standard and HighPerformance SKUs. However, to upgrade to the UltraPerformance SKU, you will need to recreate the gateway. Recreating a gateway incurs downtime.
+If you want to upgrade your gateway to a more powerful gateway SKU, in most cases you can use the `Resize-AzVirtualNetworkGateway` PowerShell cmdlet. This will work for upgrades to Standard and HighPerformance SKUs. However, to upgrade a non-Availability Zone (AZ) gateway to the UltraPerformance SKU, you will need to recreate the gateway. Recreating a gateway incurs downtime. You don't need to delete and recreate the gateway to upgrade an AZ-enabled SKU.
### <a name="gatewayfeaturesupport"></a>Feature support by gateway SKU The following table shows the features supported across each gateway type.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/overview.md
lose ownership of the subscription. If you're directly assigned to the Owner rol
subscription (not inherited from the management group), you can move it to any management group where you're a contributor.
+> [!IMPORTANT]
+> Azure Resource Manager caches management group hierarchy details for up to 30 minutes.
+> As a result, moving a management group may not immediately be reflected in the Azure portal.
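For example, here's a hedged Azure CLI sketch of a related operation, moving a subscription into a management group; the names are placeholders, and because of the hierarchy caching described above, the change may take up to 30 minutes to appear in the portal.

```azurecli
# Hypothetical names; move a subscription under a management group.
az account management-group subscription add \
  --name ContosoManagementGroup \
  --subscription "My Subscription"
```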
+ ## Audit management groups using activity logs Management groups are supported within
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
Title: Overview of Azure Policy description: Azure Policy is a service in Azure that you use to create, assign, and manage policy definitions in your Azure environment. Previously updated : 05/01/2021 Last updated : 07/27/2021 # What is Azure Policy?
There are a few key differences between Azure Policy and Azure role-based access
RBAC). Azure Policy evaluates state by examining properties on resources that are represented in Resource Manager and properties of some Resource Providers. Azure Policy doesn't restrict actions (also called _operations_). Azure Policy ensures that resource state is compliant to your business
-rules without concern for who made the change or who has permission to make a change.
+rules without concern for who made the change or who has permission to make a change. Some Azure
+Policy resources, such as [policy definitions](#policy-definition),
+[initiative definitions](#initiative-definition), and [assignments](#assignments), are visible to
+all users. This design enables transparency to all users and services for what policy rules are set
+in their environment.
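As a hedged illustration of that visibility, any signed-in user with read access at the scope can list these resources from the Azure CLI:

```azurecli
# List the policy definitions, initiative definitions, and assignments
# visible at the current subscription scope.
az policy definition list --output table
az policy set-definition list --output table
az policy assignment list --output table
```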
Azure RBAC focuses on managing user [actions](../../role-based-access-control/resource-provider-operations.md) at different scopes. If
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-azure-rbac.md
You can choose between:
* FHIR Data Exporter: Can read and export (`$export` operator) data. * FHIR Data Contributor: Can perform all data plane operations.
-If these roles are not sufficient for your need, you can also [create custom roles](../../role-based-access-control/tutorial-custom-role-powershell.md).
- In the **Select** box, search for a user, service principal, or group that you wish to assign the role to.
+>[!Note]
+>Make sure that the client application registration is completed. See details on [application registration](register-confidential-azure-ad-client-app.md).
+>If OAuth 2.0 authorization code grant type is used, grant the same FHIR application role to the user. If OAuth 2.0 client credentials grant type is used, this step is not required.
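If you prefer the command line over the portal's **Select** box, the following hedged Azure CLI sketch shows the same data plane role assignment; the object ID and resource path are placeholders.

```azurecli
# Hypothetical IDs; assign the FHIR Data Contributor data plane role
# to a user, group, or service principal on the Azure API for FHIR instance.
az role assignment create \
  --role "FHIR Data Contributor" \
  --assignee <object-id-or-sign-in-name> \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/services/<fhir-service-name>"
```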
+ ## Caching behavior The Azure API for FHIR will cache decisions for up to 5 minutes. If you grant a user access to the FHIR server by adding them to the list of allowed object IDs, or you remove them from the list, you should expect it to take up to five minutes for changes in permissions to propagate.
The Azure API for FHIR will cache decisions for up to 5 minutes. If you grant a
In this article, you learned how to assign Azure roles for the FHIR data plane. To learn about additional settings for the Azure API for FHIR: >[!div class="nextstepaction"]
->[Additional settings for Azure API for FHIR](azure-api-for-fhir-additional-settings.md)
+>[Additional settings for Azure API for FHIR](azure-api-for-fhir-additional-settings.md)
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/disaster-recovery.md
+
+ Title: Disaster recovery for Azure API for FHIR
+description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.
++++ Last updated : 07/28/2021+++
+# Disaster recovery for Azure API for FHIR
+
+The Azure API for FHIR® is a fully managed service based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements, you can use the disaster recovery (DR) feature for Azure API for FHIR.
+
+The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes.
+
+## How to enable DR
+
+To enable the DR feature, create a one-time support ticket. You can choose an Azure paired region or another region where the Azure API for FHIR is supported. The Microsoft support team will enable the DR feature based on the support priority.
+
+## How the DR process works
+
+The DR process involves the following steps:
+* Data replication
+* Automatic failover
+* Affected region recovery
+* Manual failback
+
+### Data replication in the secondary region
+
+By default, the Azure API for FHIR offers data protection through backup and restore. When the disaster recovery feature is enabled, data replication begins. A data replica is automatically created and synchronized in the secondary Azure region. The initial data replication can take a few minutes to a few hours, or longer, depending on the amount of data. The secondary data replica is a replication of the primary data. It's used directly to recover the service, and it helps speed up the recovery process.
+
+It's worth noting that the throughput (RU/s) must be set to the same value in the primary and secondary regions.
+
+[ ![Azure Traffic Manager.](media/disaster-recovery/azure-traffic-manager.png) ](media/disaster-recovery/azure-traffic-manager.png#lightbox)
+
+### Automatic failover
+
+During a primary region outage, the Azure API for FHIR automatically fails over to the secondary region and the same service endpoint is used. The service is expected to resume in one hour or less, and potential data loss is up to 15 minutes' worth of data. Other configuration changes may be required. For more information, see [Configuration changes in DR](#configuration-changes-in-dr).
+
+[ ![Failover in disaster recovery.](media/disaster-recovery/failover-in-disaster-recovery.png) ](media/disaster-recovery/failover-in-disaster-recovery.png#lightbox)
+
+### Affected region recovery
+
+After the affected region recovers, it's automatically available as a secondary region and data replication restarts. You can start the data recovery process or wait until the failback step is completed.
+
+[ ![Replication in disaster recovery.](media/disaster-recovery/replication-in-disaster-recovery.png) ](media/disaster-recovery/replication-in-disaster-recovery.png#lightbox)
+
+When the compute has failed back to the recovered region but the data hasn't, you may see increased network latency. The main reason is that the compute and the data are in two different regions. The latency should disappear automatically as soon as the data fails back to the recovered region through a manual trigger.
+
+[ ![Network latency.](media/disaster-recovery/network-latency.png) ](media/disaster-recovery/network-latency.png#lightbox)
++
+### Manual failback
+
+The compute fails back automatically to the recovered region. The data is switched back to the recovered region manually by the Microsoft support team using the script.
+
+[ ![Failback in disaster recovery.](media/disaster-recovery/failback-in-disaster-recovery.png) ](media/disaster-recovery/failback-in-disaster-recovery.png#lightbox)
+
+## Configuration changes in DR
+
+Other configuration changes may be required when Private Link, Customer Managed Key (CMK), IoMT FHIR Connector (Internet of Medical Things), and $export are used.
+
+### Private link
+
+You can enable the private link feature before or after the Azure API for FHIR has been provisioned. You can also provision private link before or after the DR feature has been enabled. Refer to the list below when you're ready to configure Private Link for DR.
+
+* Configure Azure Private Link in the primary region. This step isn't required in the secondary region. For more information, see [Configure private link](https://docs.microsoft.com/azure/healthcare-apis/fhir/configure-private-link)
+
+* Create one Azure VNet in the primary region and another VNet in the secondary region. For information, see [Create a virtual network using the Azure portal](https://docs.microsoft.com/azure/virtual-network/quick-create-portal).
+
+* From the VNet in the primary region, create a VNet peering to the VNet in the secondary region, as shown in the sketch after this list. For more information, see [Virtual network peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview).
+
+* Use the default settings, or tailor the configuration as needed. What matters is that traffic can flow between the two virtual networks.
+
+* When the private DNS is set up, the VNet in the secondary region needs to be added manually as a virtual network link. The primary VNet should have already been added as part of the Private Link endpoint creation flow. For more information, see [Virtual network links](https://docs.microsoft.com/azure/dns/private-dns-virtual-network-links).
+
+* Optionally, set up one VM in the primary region VNet and one in the secondary region VNet. You can access the Azure API for FHIR from both VMs.
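The following hedged Azure CLI sketch covers the VNet and peering steps in the list above; all names, regions, and address spaces are placeholders.

```azurecli
# Hypothetical names, regions, and address spaces; create a VNet in each
# region and peer them in both directions so traffic can flow between them.
az network vnet create --resource-group MyRg --name vnet-primary \
  --location eastus2 --address-prefixes 10.1.0.0/16 --subnet-name default
az network vnet create --resource-group MyRg --name vnet-secondary \
  --location centralus --address-prefixes 10.2.0.0/16 --subnet-name default

az network vnet peering create --resource-group MyRg --name primary-to-secondary \
  --vnet-name vnet-primary --remote-vnet vnet-secondary --allow-vnet-access
az network vnet peering create --resource-group MyRg --name secondary-to-primary \
  --vnet-name vnet-secondary --remote-vnet vnet-primary --allow-vnet-access
```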
+
+The private link feature should continue to work during a regional outage and after the failback has completed. For more information, see [Configure private link](https://docs.microsoft.com/azure/healthcare-apis/fhir/configure-private-link).
+
+> [!NOTE]
+> Configuring virtual networks and VNet peering does not affect data replication.
+
+### CMK
+
+Your access to the Azure API for FHIR will be maintained if the key vault hosting the managed key in your subscription is accessible. There's a possible temporary downtime as Key Vault can take up to 20 minutes to re-establish its connection. For more information, see [Azure Key Vault availability and redundancy](https://docs.microsoft.com/azure/key-vault/general/disaster-recovery-guidance).
+
+### $export
+
+The export job will be picked up from another region after 10 minutes without an update to the job status. Follow the guidance for Azure storage for recovering your storage account in the event of a regional outage. For more information, see [Disaster recovery and storage account failover](https://docs.microsoft.com/azure/storage/common/storage-disaster-recovery-guidance).
+
+Ensure that you grant the same permissions to the system identity of the Azure API for FHIR. Also, if the storage account is configured with selected networks, see [How to export FHIR data](https://docs.microsoft.com/azure/healthcare-apis/fhir/export-data).
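Here's a hedged Azure CLI sketch of granting that permission to the service's system-assigned identity on the export storage account; the IDs are placeholders, and Storage Blob Data Contributor is the role commonly used for $export.

```azurecli
# Hypothetical IDs; grant the Azure API for FHIR system-assigned identity
# access to the storage account used for $export.
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee <fhir-service-principal-id> \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```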
+
+### IoMT FHIR Connector
+
+Any existing connection won't function until the failed region is restored. You can create a new connection once the failover has completed and your FHIR server is accessible. This new connection will continue to function when failback occurs.
+
+> [!NOTE]
+> IoMT Connector is a preview feature and does not provide support for disaster recovery.
+
+## How to test DR
+
+While not required, you can test the DR feature in a non-production environment. The DR test includes only the data; the compute isn't included.
+
+Consider the following steps for a DR test.
+
+* Prepare a test environment with test data. It's recommended that you use a service instance with a small amount of data to reduce the time to replicate the data.
+
+* Create a support ticket and provide your Azure subscription and the service name for the Azure API for FHIR for your test environment.
+
+* Come up with a test plan, as you would with any DR test.
+
+* The Microsoft support team enables the DR feature and confirms that the failover has taken place.
+
+* Conduct your DR test and record the test results, which should include any data loss and network latency issues.
+
+* For failback, notify the Microsoft support team to complete the failback step.
+
+* (Optional) Share any feedback with the Microsoft support team.
++
+> [!NOTE]
+> The DR test will double the cost of your test environment during the test. No extra cost is incurred after the DR test is completed and the DR feature is disabled.
+
+## Cost of disaster recovery
+
+The disaster recovery feature incurs extra costs because the compute and data replicas run in the secondary region environment. For more pricing details, refer to the [Azure API for FHIR pricing](https://azure.microsoft.com/pricing/details/azure-api-for-fhir) web page.
+
+> [!NOTE]
+> The DR offering is subject to the [SLA for Azure API for FHIR](https://azure.microsoft.com/support/legal/sla/azure-api-for-fhir/v1_0/), 1.0.
++
+## Next steps
+
+In this article, you've learned how DR for Azure API for FHIR works and how to enable it. To learn about Azure API for FHIR's other supported features, see:
+
+>[!div class="nextstepaction"]
+>[FHIR supported features](fhir-features-supported.md)
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/about-iot-sdks.md
Azure IoT also offers service SDKs that enable you to build solution-side applic
The IoT Hub service SDKs allow you to build applications that easily interact with your IoT Hub to manage devices and security. You can use these SDKs to send cloud-to-device messages, invoke direct methods on your devices, update device properties, and more.
-[**Learn more about IoT Hub**](https://azure.microsoft.com/services/iot-hub/) | [**Try controlling a device**](../iot-hub/quickstart-control-device-python.md)
+[**Learn more about IoT Hub**](https://azure.microsoft.com/services/iot-hub/) | [**Try controlling a device**](../iot-hub/quickstart-control-device.md)
**C# IoT Hub Service SDK**: [GitHub Repository](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/iothub/service) | [Package](https://www.nuget.org/packages/Microsoft.Azure.Devices/) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/iothub/service/samples) | [Reference Documentation](/dotnet/api/microsoft.azure.devices)
iot-develop Quickstart Send Telemetry Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-iot-hub.md
In this quickstart, you learned a basic Azure IoT application workflow for secur
As a next step, explore the following articles to learn more about building device solutions with Azure IoT. > [!div class="nextstepaction"]
-> [Control a device connected to an IoT hub](../iot-hub/quickstart-control-device-dotnet.md)
+> [Control a device connected to an IoT hub](../iot-hub/quickstart-control-device.md)
> [!div class="nextstepaction"] > [Send telemetry to IoT Central](quickstart-send-telemetry-central.md) > [!div class="nextstepaction"]
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-collect-and-transport-metrics.md
All configuration for the metrics-collector is done using environment variables.
| `UploadTarget` | Controls whether metrics are sent directly to Azure Monitor over HTTPS or to IoT Hub as D2C messages. For more information, see [upload target](#upload-target). <br><br>Can be either **AzureMonitor** or **IoTMessage** <br><br> **Not required** <br><br> Default value: *AzureMonitor* | | `LogAnalyticsWorkspaceId` | [Log Analytics workspace ID](../azure-monitor/agents/log-analytics-agent.md#workspace-id-and-key). <br><br>**Required** only if *UploadTarget* is *AzureMonitor* <br><br>Default value: *none* | | `LogAnalyticsSharedKey` | [Log Analytics workspace key](../azure-monitor/agents/log-analytics-agent.md#workspace-id-and-key). <br><br>**Required** only if *UploadTarget* is *AzureMonitor* <br><br> Default value: *none* |
-| `ScrapeFrequencySecs` | Recurring time interval in seconds at which to collect and transport metrics.<br><br> Example: *600* <br><br> **Not required** <br><br> Default value: *300* |
+| `ScrapeFrequencyInSecs` | Recurring time interval in seconds at which to collect and transport metrics.<br><br> Example: *600* <br><br> **Not required** <br><br> Default value: *300* |
| `MetricsEndpointsCSV` | Comma-separated list of endpoints to collect Prometheus metrics from. All module endpoints to collect metrics from must appear in this list.<br><br> Example: *http://edgeAgent:9600/metrics, http://edgeHub:9600/metrics, http://MetricsSpewer:9417/metrics* <br><br> **Not required** <br><br> Default value: *http://edgeHub:9600/metrics, http://edgeAgent:9600/metrics* | | `AllowedMetrics` | List of metrics to collect, all other metrics will be ignored. Set to an empty string to disable. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br>Example: *metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]*<br><br> **Not required** <br><br> Default value: *empty* | | `BlockedMetrics` | List of metrics to ignore. Overrides *AllowedMetrics*, so a metric will not be reported if it is included in both lists. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br> Example: *metricToIgnore{quantile=0.5}[endpoint=http://VeryNoisyModule:9001/metrics], docker_container_disk_write_bytes*<br><br> **Not required** <br><br>Default value: *empty* |
iot-hub Iot Hub Configure File Upload Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-configure-file-upload-cli.md
Previously updated : 08/08/2017 Last updated : 07/20/2021
[!INCLUDE [iot-hub-file-upload-selector](../../includes/iot-hub-file-upload-selector.md)]
-To [upload files from a device](iot-hub-devguide-file-upload.md), you must first associate an Azure Storage account with your IoT hub. You can use an existing storage account or create a new one.
+This article shows you how to configure file uploads on your IoT hub using the Azure CLI.
-To complete this tutorial, you need the following:
+To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.md), you must first associate an Azure storage account and blob container with your IoT hub. IoT Hub automatically generates SAS URIs with write permissions to this blob container for devices to use when they upload files. In addition to the storage account and blob container, you can set the time-to-live for the SAS URI and the type of authentication that IoT Hub uses with Azure storage. You can also configure settings for the optional file upload notifications that IoT Hub can deliver to backend services.
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
+## Prerequisites
-* [Azure CLI](/cli/azure/install-azure-cli).
+* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
* An Azure IoT hub. If you don't have an IoT hub, you can use the [`az iot hub create` command](/cli/azure/iot/hub#az_iot_hub_create) to create one or [Create an IoT hub using the portal](iot-hub-create-through-portal.md). * An Azure Storage account. If you don't have an Azure Storage account, you can use the Azure CLI to create one. For more information, see [Create a storage account](../storage/common/storage-account-create.md). ++ ## Sign in and set your Azure account
-Sign in to your Azure account and select your subscription.
+Sign in to your Azure account and select your subscription. If you're using Azure Cloud Shell, you should be signed in already; however, you still might need to select your Azure subscription if you have multiple subscriptions.
1. At the command prompt, run the [login command](/cli/azure/get-started-with-azure-cli):
Sign in to your Azure account and select your subscription.
The following steps assume that you created your storage account using the **Resource Manager** deployment model, and not the **Classic** deployment model.
-To configure file uploads from your devices, you need the connection string for an Azure storage account. The storage account must be in the same subscription as your IoT hub. You also need the name of a blob container in the storage account. Use the following command to retrieve your storage account keys:
+To configure file uploads from your devices, you need the connection string for an Azure Storage account. The storage account must be in the same subscription as your IoT hub. You also need the name of a blob container in the storage account. Use the following command to retrieve your storage account keys:
```azurecli az storage account show-connection-string --name {your storage account name} \ --resource-group {your storage account resource group} ```
+The connection string will be similar to the following output:
-Make a note of the **connectionString** value. You need it in the following steps.
+```json
+{
+ "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName={your storage account name};AccountKey={your storage account key}"
+}
+```
+
+Make a note of the `connectionString` value. You need it in the following steps.
You can either use an existing blob container for your file uploads or create a new one:
You can either use an existing blob container for your file uploads or create a
--connection-string "{your storage account connection string}" ```
-## File upload
+## Configure your IoT hub
You can now configure your IoT hub to enable the ability to [upload files to the IoT hub](iot-hub-devguide-file-upload.md) using your storage account details.
The configuration requires the following values:
* **SAS TTL**: This setting is the time-to-live of the SAS URIs returned to the device by IoT Hub. Set to one hour by default.
-* **File notification settings default TTL**: The time-to-live of a file upload notification before it is expired. Set to one day by default.
+* **File notification settings default TTL**: The time-to-live of a file upload notification before it expires. Set to one day by default.
* **File notification maximum delivery count**: The number of times the IoT Hub attempts to deliver a file upload notification. Set to 10 by default.
-Use the following Azure CLI commands to configure the file upload settings on your IoT hub:
+* **Authentication type**: The type of authentication for IoT Hub to use with Azure Storage. This setting determines how your IoT hub authenticates and authorizes with Azure Storage. The default is key-based authentication; however, system-assigned and user-assigned managed identities can also be used. Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. To learn how to configure managed identities on your IoT hub and Azure Storage account, see [IoT Hub support for managed identities](./iot-hub-managed-identity.md). Once configured, you can set one of your managed identities to use for authentication with Azure storage.
+
+ > [!NOTE]
+ > The authentication type setting configures how your IoT hub authenticates with your Azure Storage account. Devices always authenticate with Azure Storage using the SAS URI that they get from the IoT hub.
+
-<!--Robinsh this is out of date, add cloud powershell -->
+The following commands show how to configure the file upload settings on your IoT hub. These commands are shown separately for clarity, but, typically, you would issue a single command with all the required parameters for your scenario. Include quotes where they appear in the command line. Don't include the braces. More detail about each parameter can be found in the Azure CLI documentation for the [az iot hub update](/cli/azure/iot/hub#az_iot_hub_update) command.
-In a bash shell, use:
+The following command configures the storage account and blob container.
```azurecli az iot hub update --name {your iot hub name} \
- --set properties.storageEndpoints.'$default'.connectionString="{your storage account connection string}"
+ --fileupload-storage-connectionstring "{your storage account connection string}" \
+ --fileupload-storage-container-name "{your container name}"
+```
+The following command sets the SAS URI time to live to the default (one hour).
+
+```azurecli
az iot hub update --name {your iot hub name} \
- --set properties.storageEndpoints.'$default'.containerName="{your storage container name}"
+ --fileupload-sas-ttl 1
+```
+The following command enables file notifications and sets the file notification properties to their default values. (The file upload notification time to live is set to one hour.)
+
+```azurecli
az iot hub update --name {your iot hub name} \
- --set properties.storageEndpoints.'$default'.sasTtlAsIso8601=PT1H0M0S
+ --fileupload-notifications true \
+ --fileupload-notification-max-delivery-count 10 \
+ --fileupload-notification-ttl 1 \
+ --set properties.messagingEndpoints.fileNotifications.lockDurationAsIso8601=PT0H1M0S
+```
+> [!NOTE]
+> The lock duration can only be set by using the `--set` parameter. There is not currently a named parameter available.
+The following command configures key-based authentication:
+
+```azurecli
az iot hub update --name {your iot hub name} \
- --set properties.enableFileUploadNotifications=true
+ --fileupload-storage-auth-type keyBased
+```
+The following command configures authentication using the IoT hub's system-assigned managed identity. Before you can run this command, you need to enable the system-assigned managed identity for your IoT hub and grant it the correct RBAC role on your Azure Storage account. To learn how, see [IoT Hub support for managed identities](./iot-hub-managed-identity.md).
+
+```azurecli
az iot hub update --name {your iot hub name} \
- --set properties.messagingEndpoints.fileNotifications.maxDeliveryCount=10
+ --fileupload-storage-auth-type identityBased \
+ --fileupload-storage-identity [system]
+```
+
+The following commands retrieve the user-assigned managed identities configured on your IoT hub and configure authentication with one of them. Before you can use a user-assigned managed identity to authenticate, it must be configured on your IoT hub and granted an appropriate RBAC role on your Azure Storage account. For more detail and steps, see [IoT Hub support for managed identities](./iot-hub-managed-identity.md).
+
+To query for user-assigned managed identities on your IoT hub, use the [az iot hub identity show](/cli/azure/iot/hub/identity#az_iot_hub_identity_show) command.
+
+```azurecli
+az iot hub identity show --name {your iot hub name} --query userAssignedIdentities
+```
+The command returns a collection of the user-assigned managed identities configured on your IoT hub. The following output shows a collection that contains a single user-assigned managed identity.
+
+```json
+{
+ "/subscriptions/{your subscription ID}/resourcegroups/{your resource group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{your user-assigned managed identity name}":
+ {
+ "clientId": "<client ID GUID>",
+ "principalId": "<principal ID GUID>"
+ }
+}
+```
+
+The following command configures authentication to use the user-assigned identity above.
+
+```azurecli
az iot hub update --name {your iot hub name} \
- --set properties.messagingEndpoints.fileNotifications.ttlAsIso8601=PT1H0M0S
+ --fileupload-storage-auth-type identityBased \
+ --fileupload-storage-identity "/subscriptions/{your subscription ID}/resourcegroups/{your resource group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{your user-assigned managed identity name}"
```
-You can review the file upload configuration on your IoT hub using the following command:
+You can review the settings on your IoT hub using the following command:
```azurecli az iot hub show --name {your iot hub name} ```
-## Next steps
+To review only the file upload settings, use the following command:
+
+```azurecli
+az iot hub show --name {your iot hub name}
+ --query '[properties.storageEndpoints, properties.enableFileUploadNotifications, properties.messagingEndpoints.fileNotifications]'
+```
+
+For most situations, using the named parameters in the Azure CLI commands is easiest; however, you can also configure file upload settings with the `--set` parameter. The following commands can help you understand how.
+
+```azurecli
+az iot hub update --name {your iot hub name} \
+ --set properties.storageEndpoints.'$default'.connectionString="{your storage account connection string}"
-For more information about the file upload capabilities of IoT Hub, see [Upload files from a device](iot-hub-devguide-file-upload.md).
+az iot hub update --name {your iot hub name} \
+ --set properties.storageEndpoints.'$default'.containerName="{your storage container name}"
-Follow these links to learn more about managing Azure IoT Hub:
+az iot hub update --name {your iot hub name} \
+ --set properties.storageEndpoints.'$default'.sasTtlAsIso8601=PT1H0M0S
-* [Bulk manage IoT devices](iot-hub-bulk-identity-mgmt.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
+az iot hub update --name {your iot hub name} \
+ --set properties.enableFileUploadNotifications=true
-To further explore the capabilities of IoT Hub, see:
+az iot hub update --name {your iot hub name} \
+ --set properties.messagingEndpoints.fileNotifications.maxDeliveryCount=10
+
+az iot hub update --name {your iot hub name} \
+ --set properties.messagingEndpoints.fileNotifications.ttlAsIso8601=PT1H0M0S
+```
+
+## Next steps
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
-* [Secure your IoT solution from the ground up](../iot-fundamentals/iot-security-ground-up.md)
+* [Upload files from a device overview](iot-hub-devguide-file-upload.md)
+* [IoT Hub support for managed identities](./iot-hub-managed-identity.md)
+* [File upload how-to guides](./iot-hub-csharp-csharp-file-upload.md)
+* Azure CLI [az iot hub update](/cli/azure/iot/hub#az_iot_hub_update), [az iot hub identity show](/cli/azure/iot/hub/identity#az_iot_hub_identity_show), and [az iot hub create](/cli/azure/iot/hub#az_iot_hub_create) commands
iot-hub Iot Hub Configure File Upload Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-configure-file-upload-powershell.md
Previously updated : 08/08/2017 Last updated : 07/20/2021
[!INCLUDE [iot-hub-file-upload-selector](../../includes/iot-hub-file-upload-selector.md)]
-To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.md), you must first associate an Azure storage account with your IoT hub. You can use an existing storage account or create a new one.
+This article shows you how to configure file uploads on your IoT hub using PowerShell.
+
+To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.md), you must first associate an Azure storage account and blob container with your IoT hub. IoT Hub automatically generates SAS URIs with write permissions to this blob container for devices to use when they upload files. In addition to the storage account and blob container, you can set the time-to-live for the SAS URI and configure settings for the optional file upload notifications that IoT Hub can deliver to backend services.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To complete this tutorial, you need the following:
+## Prerequisites
* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
-* [Azure PowerShell cmdlets](/powershell/azure/install-Az-ps).
- * An Azure IoT hub. If you don't have an IoT hub, you can use the [New-AzIoTHub cmdlet](/powershell/module/az.iothub/new-aziothub) to create one or use the portal to [Create an IoT hub](iot-hub-create-through-portal.md). * An Azure storage account. If you don't have an Azure storage account, you can use the [Azure Storage PowerShell cmdlets](/powershell/module/az.storage/) to create one or use the portal to [Create a storage account](../storage/common/storage-account-create.md)
+* Use the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart-powershell).
+
+ [![Launch Cloud Shell in a new window](./media/iot-hub-configure-file-upload-powershell/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+* If you prefer, [install](/powershell/scripting/install/installing-powershell) PowerShell locally.
+
+ * [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps). (The module is installed by default in the Azure Cloud Shell PowerShell environment.)
+ * Sign in to PowerShell by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command. To finish the authentication process, follow the steps displayed in your terminal. For additional sign-in options, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
++ ## Sign in and set your Azure account
-Sign in to your Azure account and select your subscription.
+Sign in to your Azure account and select your subscription. If you're using Azure Cloud Shell, you should be signed in already; however, you still might need to select your Azure subscription if you have multiple subscriptions.
1. At the PowerShell prompt, run the **Connect-AzAccount** cmdlet:
Sign in to your Azure account and select your subscription.
Connect-AzAccount ```
-2. If you have multiple Azure subscriptions, signing in to Azure grants you access to all the Azure subscriptions associated with your credentials. Use the following command to list the Azure subscriptions available for you to use:
+2. If you have multiple Azure subscriptions, signing in to Azure grants you access to all the Azure subscriptions associated with your credentials. Use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) command to list the Azure subscriptions available for you to use:
```powershell Get-AzSubscription
Sign in to your Azure account and select your subscription.
```powershell Select-AzSubscription `
- -SubscriptionName "{your subscription name}"
+ -Name "{your subscription name}"
```
+ > [!NOTE]
+ > The **Select-AzSubscription** command is an alias of the [Select-AzContext](/powershell/module/az.accounts/select-azcontext) cmdlet that allows you to use the subscription name (**Name**) or subscription ID (**Id**) returned by the **Get-AzSubscription** command rather than the more complex context name required for the **Select-AzContext** command.
+ ## Retrieve your storage account details The following steps assume that you created your storage account using the **Resource Manager** deployment model, and not the **Classic** deployment model.
-To configure file uploads from your devices, you need the connection string for an Azure storage account. The storage account must be in the same subscription as your IoT hub. You also need the name of a blob container in the storage account. Use the following command to retrieve your storage account keys:
+To configure file uploads from your devices, you need the connection string for an Azure storage account. The storage account must be in the same subscription as your IoT hub. You also need the name of a blob container in the storage account. Use the [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) command to retrieve your storage account keys:
```powershell Get-AzStorageAccountKey `
Make a note of the **key1** storage account key value. You need it in the follow
You can either use an existing blob container for your file uploads or create a new one:
-* To list the existing blob containers in your storage account, use the following commands:
+* To list the existing blob containers in your storage account, use the [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext) and [Get-AzStorageContainer](/powershell/module/az.storage/get-azstoragecontainer) commands:
```powershell $ctx = New-AzStorageContext `
You can either use an existing blob container for your file uploads or create ne
Get-AzStorageContainer -Context $ctx ```
-* To create a blob container in your storage account, use the following commands:
+* To create a blob container in your storage account, use the [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext) and [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) commands:
```powershell $ctx = New-AzStorageContext `
The configuration requires the following values:
* **SAS TTL**: This setting is the time-to-live of the SAS URIs returned to the device by IoT Hub. Set to one hour by default.
-* **File notification settings default TTL**: The time-to-live of a file upload notification before it is expired. Set to one day by default.
+* **File notification settings default TTL**: The time-to-live of a file upload notification before it expires. Set to one day by default.
* **File notification maximum delivery count**: The number of times the IoT Hub attempts to deliver a file upload notification. Set to 10 by default.
-Use the following PowerShell cmdlet to configure the file upload settings on your IoT hub:
+Use the [Set-AzIotHub](/powershell/module/az.iothub/set-aziothub) command to configure the file upload settings on your IoT hub:
```powershell Set-AzIotHub `
Set-AzIotHub `
-FileUploadNotificationMaxDeliveryCount 10 ```
-## Next steps
-
-For more information about the file upload capabilities of IoT Hub, see [Upload files from a device](iot-hub-devguide-file-upload.md).
+> [!NOTE]
+> By default, IoT Hub authenticates with Azure Storage using the account key in the connection string. Authentication using either system-assigned or user-assigned managed identities is also available. Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. To learn more, see [IoT Hub support for managed identities](./iot-hub-managed-identity.md). Currently, there are no parameters on the **Set-AzIotHub** command to set the authentication type. Instead, you can use either the [Azure portal](./iot-hub-configure-file-upload.md) or [Azure CLI](./iot-hub-configure-file-upload-cli.md).
-Follow these links to learn more about managing Azure IoT Hub:
-
-* [Bulk manage IoT devices](iot-hub-bulk-identity-mgmt.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
-
-To further explore the capabilities of IoT Hub, see:
+## Next steps
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
-* [Secure your IoT solution from the ground up](../iot-fundamentals/iot-security-ground-up.md)
+* [Upload files from a device overview](iot-hub-devguide-file-upload.md)
+* [IoT Hub support for managed identities](./iot-hub-managed-identity.md)
+* [File upload how-to guides](./iot-hub-csharp-csharp-file-upload.md)
iot-hub Iot Hub Configure File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-configure-file-upload.md
Previously updated : 07/03/2017 Last updated : 07/20/2021
[!INCLUDE [iot-hub-file-upload-selector](../../includes/iot-hub-file-upload-selector.md)]
-## File upload
+This article shows you how to configure file uploads on your IoT hub using the Azure portal.
-To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.md), you must first associate an Azure Storage account with your hub. Select **File upload** to display a list of file upload properties for the IoT hub that is being modified.
+To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.md), you must first associate an Azure storage account and blob container with your IoT hub. IoT Hub automatically generates SAS URIs with write permissions to this blob container for devices to use when they upload files. In addition to the storage account and blob container, you can set the time-to-live for the SAS URI and the type of authentication that IoT Hub uses with Azure storage. You can also configure settings for the optional file upload notifications that IoT Hub can deliver to backend services.
-![View IoT Hub file upload settings in the portal](./media/iot-hub-configure-file-upload/file-upload-settings.png)
+## Prerequisites
-* **Storage container**: Use the Azure portal to select a blob container in an Azure Storage account in your current Azure subscription to associate with your IoT Hub. If necessary, you can create an Azure Storage account on the **Storage accounts** blade and blob container on the **Containers** blade. IoT Hub automatically generates SAS URIs with write permissions to this blob container for devices to use when they upload files.
+* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- ![View storage containers for file upload in the portal](./media/iot-hub-configure-file-upload/file-upload-container-selection.png)
+* An Azure IoT hub. If you don't have an IoT hub, see [Create an IoT hub using the portal](iot-hub-create-through-portal.md).
-* **Receive notifications for uploaded files**: Enable or disable file upload notifications via the toggle.
+## Configure your IoT hub
-* **SAS TTL**: This setting is the time-to-live of the SAS URIs returned to the device by IoT Hub. Set to one hour by default but can be customized to other values using the slider.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub and select **File upload** to display the file upload properties. Then select **Azure Storage Container** under **Storage container settings**.
-* **File notification settings default TTL**: The time-to-live of a file upload notification before it is expired. Set to one day by default but can be customized to other values using the slider.
+ :::image type="content" source="./media/iot-hub-configure-file-upload/file-upload-settings.png" alt-text="View IoT Hub file upload settings in the portal":::
-* **File notification maximum delivery count**: The number of times the IoT Hub attempts to deliver a file upload notification. Set to 10 by default but can be customized to other values using the slider.
+1. Select an Azure Storage account and blob container in your current subscription to associate with your IoT hub. If necessary, you can create an Azure Storage account on the **Storage accounts** pane and create a blob container on the **Containers** pane.
- ![Configure IoT Hub file upload in the portal](./media/iot-hub-configure-file-upload/file-upload-selected-container.png)
+ :::image type="content" source="./media/iot-hub-configure-file-upload/file-upload-container-selection.png" alt-text="View storage containers for file upload in the portal":::
-## Next steps
+1. After you've selected an Azure Storage account and blob container, configure the rest of the file upload properties.
+
+ :::image type="content" source="./media/iot-hub-configure-file-upload/file-upload-selected-container.png" alt-text="Configure IoT Hub file upload in the portal":::
+
+ * **Receive notifications for uploaded files**: Enable or disable file upload notifications via the toggle.
+
+ * **SAS TTL**: This setting is the time-to-live of the SAS URIs returned to the device by IoT Hub. Set to one hour by default but can be customized to other values using the slider.
-For more information about the file upload capabilities of IoT Hub, see [Upload files from a device](iot-hub-devguide-file-upload.md) in the IoT Hub developer guide.
+ * **File notification settings default TTL**: The time-to-live of a file upload notification before it expires. Set to one day by default but can be customized to other values using the slider.
-Follow these links to learn more about managing Azure IoT Hub:
+ * **File notification maximum delivery count**: The number of times the IoT Hub attempts to deliver a file upload notification. Set to 10 by default but can be customized to other values using the slider.
-* [Bulk manage IoT devices](iot-hub-bulk-identity-mgmt.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
+ * **Authentication type**: By default, Azure IoT Hub uses key-based authentication to connect and authorize with Azure Storage. You can also configure user-assigned or system-assigned managed identities to authenticate Azure IoT Hub with Azure Storage. Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. To learn how to configure managed identities, see [IoT Hub support for managed identities](./iot-hub-managed-identity.md). After you've configured one or more managed identities on your Azure Storage account and IoT hub, you can select one for authentication with Azure Storage by using the **System-assigned** or **User-assigned** buttons.
-To further explore the capabilities of IoT Hub, see:
+ > [!NOTE]
+ > The authentication type setting configures how your IoT hub authenticates with your Azure Storage account. Devices always authenticate with Azure Storage using the SAS URI that they get from the IoT hub.
+
+1. Select **Save** to save your settings. Be sure to check the confirmation for successful completion. Some selections, like **Authentication type**, are validated only after you save your settings.
+
+## Next steps
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
-* [Secure your IoT solution from the ground up](../iot-fundamentals/iot-security-ground-up.md)
+* [Upload files from a device overview](iot-hub-devguide-file-upload.md)
+* [IoT Hub support for managed identities](./iot-hub-managed-identity.md)
+* [File upload how-to guides](./iot-hub-csharp-csharp-file-upload.md)
iot-hub Iot Hub Csharp Csharp Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-schedule-jobs.md
To learn more about each of these capabilities, see:
* Device twin and properties: [Get started with device twins](iot-hub-csharp-csharp-twin-getstarted.md) and [Tutorial: How to use device twin properties](tutorial-device-twins.md)
-* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Tutorial: Use direct methods](quickstart-control-device-dotnet.md)
+* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-csharp)
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
iot-hub Iot Hub Csharp Csharp Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-twin-getstarted.md
You can learn more from the following resources:
* To learn how to configure devices using device twin's desired properties, see the [Use desired properties to configure devices](tutorial-device-twins.md) tutorial.
-* To learn how to control devices interactively, such as turning on a fan from a user-controlled app, see the [Use direct methods](quickstart-control-device-dotnet.md) tutorial.
+* To learn how to control devices interactively, such as turning on a fan from a user-controlled app, see the [Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-csharp) quickstart.
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Here is a detailed comparison of the various cloud-to-device communication optio
Learn how to use direct methods, desired properties, and cloud-to-device messages in the following tutorials:
-* [Use direct methods](quickstart-control-device-node.md)
+* [Use direct methods](quickstart-control-device.md)
* [Use desired properties to configure devices](tutorial-device-twins.md) * [Send cloud-to-device messages](iot-hub-node-node-c2d.md)
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-direct-methods.md
Now you have learned how to use direct methods, you may be interested in the fol
If you would like to try out some of the concepts described in this article, you may be interested in the following IoT Hub tutorial:
-* [Use direct methods](quickstart-control-device-node.md)
+* [Use direct methods](quickstart-control-device.md)
* [Device management with Azure IoT Tools for VS Code](iot-hub-device-management-iot-toolkit.md)
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-protocols.md
Last updated 01/29/2018
-# Reference - choose a communication protocol
+# Choose a device communication protocol
IoT Hub allows devices to use the following protocols for device-side communications:
iot-hub Iot Hub Java Java Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-schedule-jobs.md
To learn more about each of these capabilities, see:
* Device twin and properties: [Get started with device twins](iot-hub-java-java-twin-getstarted.md)
-* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Tutorial: Use direct methods](quickstart-control-device-java.md)
+* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-java)
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
Use the following resources to learn how to:
* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) tutorial.
-* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](quickstart-control-device-java.md) tutorial.s
+* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-java) quickstart.
iot-hub Iot Hub Java Java Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-twin-getstarted.md
Use the following resources to learn how to:
* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) tutorial.
-* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](quickstart-control-device-java.md) tutorial.
+* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-java) quickstart.
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-managed-identity.md
az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resou
## Egress connectivity from IoT Hub to other Azure resources In IoT Hub, managed identities can be used for egress connectivity from IoT Hub to other Azure services for [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). You can choose which managed identity to use for each IoT Hub egress connectivity to customer-owned endpoints including storage accounts, event hubs, and service bus endpoints.
-### Message routing
+## Configure message routing with managed identities
In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md) to event hub custom endpoint as an example. The same thing applies to other routing custom endpoints. 1. First, go to your event hub in the Azure portal to assign the managed identity the right access (for a command-line alternative, see the sketch after these steps). In your event hub, navigate to the **Access control (IAM)** tab and click **Add** then **Add a role assignment**.
In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md)
10. Choose the new authentication type to be updated for this endpoint, click **Save**.
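As a command-line alternative to the portal steps above, the following hedged Azure CLI sketch grants the hub's system-assigned identity send rights on the target event hub; the names and resource path are placeholders, and it assumes the system-assigned identity is already enabled on the hub.

```azurecli
# Hypothetical names; look up the IoT hub's system-assigned principal ID and
# grant it send permissions on the routing endpoint's event hub.
principalId=$(az iot hub identity show --name MyIotHub --query principalId --output tsv)

az role assignment create \
  --role "Azure Event Hubs Data Sender" \
  --assignee "$principalId" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<eventhub>"
```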
-### File Upload
+## Configure file upload with managed identities
IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices to upload files to a customer-owned storage account. To allow the file upload to function, IoT Hub needs to have connectivity to the storage account. Similar to message routing, you can pick the preferred authentication type and managed identity for IoT Hub egress connectivity to your Azure Storage account. 1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices
> [!NOTE] > In the file upload scenario, both the hub and your device need to connect with your storage account. The steps above are for connecting your IoT hub to your storage account with the desired authentication type. You still need to connect your device to storage using the SAS URI. Today the SAS URI is generated using the connection string. We'll add support to generate the SAS URI with a managed identity soon. Please follow the steps in [file upload](iot-hub-devguide-file-upload.md).
-### Bulk device import/export
+## Configure bulk device import/export with managed identities
IoT Hub supports the functionality to [import/export devices](iot-hub-bulk-identity-mgmt.md)' information in bulk from/to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account.
iot-hub Iot Hub Node Node File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-file-upload.md
ms.devlang: nodejs Previously updated : 07/18/2021 Last updated : 07/27/2021
The tutorial shows you how to:
* Use the IoT Hub file upload notifications to trigger processing the file in your app back end.
-The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart demonstrates the basic device-to-cloud messaging functionality of IoT Hub. However, in some scenarios you cannot easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
+The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart demonstrates the basic device-to-cloud messaging functionality of IoT Hub. However, in some scenarios you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
* Large files that contain images * Videos
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-
These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub.
-At the end of this tutorial you run two Node.js console apps:
+At the end of this article, you run two Node.js console apps:
* **FileUpload.js**, which uploads a file to storage using a SAS URI provided by your IoT hub.
-* **ReadFileUploadNotification.js**, which receives file upload notifications from your IoT hub.
+* **FileUploadNotification.js**, which receives file upload notifications from your IoT hub.
> [!NOTE]
-> IoT Hub supports many device platforms and languages (including C, .NET, Javascript, Python, and Java) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center] for step-by-step instructions on how to connect your device to Azure IoT Hub.
+> IoT Hub supports many device platforms and languages, including C, Java, Python, and JavaScript, through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) for step-by-step instructions on how to connect your device to Azure IoT Hub.
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)] ## Prerequisites
-* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/master/doc/node-devbox-setup.md) describes how to install Node.js for this tutorial on either Windows or Linux.
+* Node.js version 10.0.x or later. The LTS version is recommended. You can download Node.js from [nodejs.org](https://nodejs.org).
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
At the end of this tutorial you run two Node.js console apps:
## Upload a file from a device app
-In this section, you copy the device app from GitHub to upload a file to IoT hub.
+In this section, you create a device app to upload a file to IoT hub. The code is based on code available in the [upload_to_blob_advanced.js](https://github.com/Azure/azure-iot-sdk-node/blob/master/device/samples/upload_to_blob_advanced.js) sample in the [Azure IoT node.js SDK](https://github.com/Azure/azure-iot-sdk-node) device samples.
-1. There are two file upload samples available on GitHub for Node.js, a basic sample and one that is more advanced. Copy the basic sample from the repository [here](https://github.com/Azure/azure-iot-sdk-node/blob/master/device/samples/upload_to_blob.js). The advanced sample is located [here](https://github.com/Azure/azure-iot-sdk-node/blob/master/device/samples/upload_to_blob_advanced.js).
-
-2. Create an empty folder called ```fileupload```. In the ```fileupload``` folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
+1. Create an empty folder called `fileupload`. In the `fileupload` folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
```cmd/sh
npm init
```
-3. At your command prompt in the ```fileupload``` folder, run the following command to install the **azure-iot-device** Device SDK package and **azure-iot-device-mqtt** package:
+1. At your command prompt in the `fileupload` folder, run the following command to install the **azure-iot-device** Device SDK, the **azure-iot-device-mqtt**, and the **@azure/storage-blob** packages:
```cmd/sh
- npm install azure-iot-device azure-iot-device-mqtt --save
+ npm install azure-iot-device azure-iot-device-mqtt @azure/storage-blob --save
```
-4. Using a text editor, create a **FileUpload.js** file in the ```fileupload``` folder, and copy the basic sample into it.
+1. Using a text editor, create a **FileUpload.js** file in the `fileupload` folder, and copy the following code into it.
```javascript
- // Copyright (c) Microsoft. All rights reserved.
- // Licensed under the MIT license. See LICENSE file in the project root for full license information.
- 'use strict';
- var Protocol = require('azure-iot-device-mqtt').Mqtt;
- var Client = require('azure-iot-device').Client;
- var fs = require('fs');
+ const Client = require('azure-iot-device').Client;
+ const Protocol = require('azure-iot-device-mqtt').Mqtt;
+ const errors = require('azure-iot-common').errors;
+ const path = require('path');
+
+ const {
+ AnonymousCredential,
+ BlockBlobClient,
+ newPipeline
+ } = require('@azure/storage-blob');
+
+ // make sure you set these environment variables prior to running the sample.
+ const deviceConnectionString = process.env.DEVICE_CONNECTION_STRING;
+ const localFilePath = process.env.PATH_TO_FILE;
+ const storageBlobName = path.basename(localFilePath);
+
+ async function uploadToBlob(localFilePath, client) {
+ const blobInfo = await client.getBlobSharedAccessSignature(storageBlobName);
+ if (!blobInfo) {
+ throw new errors.ArgumentError('Invalid upload parameters');
+ }
+
+ const pipeline = newPipeline(new AnonymousCredential(), {
+ retryOptions: { maxTries: 4 },
+ telemetry: { value: 'HighLevelSample V1.0.0' }, // Customized telemetry string
+ keepAliveOptions: { enable: false }
+ });
+
+ // Construct the blob URL to construct the blob client for file uploads
+ const { hostName, containerName, blobName, sasToken } = blobInfo;
+ const blobUrl = `https://${hostName}/${containerName}/${blobName}${sasToken}`;
+
+ // Create the BlockBlobClient for file upload to the Blob Storage Blob
+ const blobClient = new BlockBlobClient(blobUrl, pipeline);
- var deviceConnectionString = process.env.ConnectionString;
- var filePath = process.env.FilePath;
+ // Setup blank status notification arguments to be filled in on success/failure
+ let isSuccess;
+ let statusCode;
+ let statusDescription;
- var client = Client.fromConnectionString(deviceConnectionString, Protocol);
- fs.stat(filePath, function (err, fileStats) {
- var fileStream = fs.createReadStream(filePath);
+ try {
+ const uploadStatus = await blobClient.uploadFile(localFilePath);
+ console.log('uploadStreamToBlockBlob success');
- client.uploadToBlob('testblob.txt', fileStream, fileStats.size, function (err, result) {
- if (err) {
- console.error('error uploading file: ' + err.constructor.name + ': ' + err.message);
- } else {
- console.log('Upload successful - ' + result);
- }
- fileStream.destroy();
+ // Save successful status notification arguments
+ isSuccess = true;
+ statusCode = uploadStatus._response.status;
+ statusDescription = uploadStatus._response.bodyAsText;
+
+ // Notify IoT Hub of upload to blob status (success)
+ console.log('notifyBlobUploadStatus success');
+ }
+ catch (err) {
+ isSuccess = false;
+ statusCode = err.code;
+ statusDescription = err.message;
+
+ console.log('notifyBlobUploadStatus failed');
+ console.log(err);
+ }
+
+ await client.notifyBlobUploadStatus(blobInfo.correlationId, isSuccess, statusCode, statusDescription);
+ }
+
+ // Create a client device from the connection string and upload the local file to blob storage.
+ const deviceClient = Client.fromConnectionString(deviceConnectionString, Protocol);
+ uploadToBlob(localFilePath, deviceClient)
+ .catch((err) => {
+ console.log(err);
+ })
+ .finally(() => {
+ process.exit();
});
- });
```
-5. Add environment variables for your device connection string and the path to the file that you want to upload.
+1. Save and close the **FileUpload.js** file.
+
+1. Copy an image file to the `fileupload` folder and give it a name such as `myimage.png`.
-6. Save and close the **FileUpload.js** file.
+1. Add environment variables for your device connection string and the path to the file that you want to upload. You got the device connection string when you [registered the device with your IoT hub](#register-a-new-device-in-the-iot-hub).
+
+ - For Windows:
-7. Copy an image file to the `fileupload` folder and give it a name such as `myimage.png`. Place the path to the file in the `FilePath` environment variable.
+ ```cmd
+ set DEVICE_CONNECTION_STRING={your device connection string}
+ set PATH_TO_FILE={your image filepath}
+ ```
+
+ - For Linux/Bash:
+
+ ```bash
+ export DEVICE_CONNECTION_STRING="{your device connection string}"
+ export PATH_TO_FILE="{your image filepath}"
+ ```
## Get the IoT hub connection string
-In this article you create a backend service to receive file upload notification messages from the IoT hub you created. To receive file upload notification messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
+In this article, you create a backend service to receive file upload notification messages from the IoT hub you created. To receive file upload notification messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
[!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)]
In this article you create a backend service to receive file upload notification
In this section, you create a Node.js console app that receives file upload notification messages from IoT Hub.
-You can use the **iothubowner** connection string from your IoT Hub to complete this section. You will find the connection string in the [Azure portal](https://portal.azure.com/) on the **Shared access policy** blade.
-
-1. Create an empty folder called ```fileuploadnotification```. In the ```fileuploadnotification``` folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
+1. Create an empty folder called `fileuploadnotification`. In the `fileuploadnotification` folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
```cmd/sh
npm init
```
-2. At your command prompt in the ```fileuploadnotification``` folder, run the following command to install the **azure-iothub** SDK package:
+1. At your command prompt in the `fileuploadnotification` folder, run the following command to install the **azure-iothub** SDK package:
```cmd/sh
npm install azure-iothub --save
```
-3. Using a text editor, create a **FileUploadNotification.js** file in the `fileuploadnotification` folder.
+1. Using a text editor, create a **FileUploadNotification.js** file in the `fileuploadnotification` folder.
-4. Add the following `require` statements at the start of the **FileUploadNotification.js** file:
+1. Add the following `require` statements at the start of the **FileUploadNotification.js** file:
```javascript
'use strict';
- var Client = require('azure-iothub').Client;
+ const Client = require('azure-iothub').Client;
```
-5. Add a `iothubconnectionstring` variable and use it to create a **Client** instance. Replace the `{iothubconnectionstring}` placeholder value with the IoT hub connection string that you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string):
+1. Read the connection string for your IoT hub from the environment:
```javascript
- var connectionString = '{iothubconnectionstring}';
+ const connectionString = process.env.IOT_HUB_CONNECTION_STRING;
```
- > [!NOTE]
- > For the sake of simplicity the connection string is included in the code: this is not a recommended practice, and depending on your use-case and architecture you may want to consider more secure ways of storing this secret.
-
-6. Add the following code to connect the client:
+1. Add the following code to create a service client from the connection string:
```javascript
- var serviceClient = Client.fromConnectionString(connectionString);
+ const serviceClient = Client.fromConnectionString(connectionString);
```
-7. Open the client and use the **getFileNotificationReceiver** function to receive status updates.
+1. Open the client and use the **getFileNotificationReceiver** function to receive status updates.
```javascript
serviceClient.open(function (err) {
});
```
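
As a hedged reference, a minimal sketch of the full receiver wiring, based on the service SDK's `getFileNotificationReceiver` function, might look like the following (log messages are illustrative):

```javascript
serviceClient.open(function (err) {
  if (err) {
    console.error('Could not connect: ' + err.message);
  } else {
    console.log('Service client connected');
    serviceClient.getFileNotificationReceiver(function (err, receiver) {
      if (err) {
        console.error('Error getting the file notification receiver: ' + err.message);
      } else {
        receiver.on('message', function (msg) {
          console.log('File upload from device:');
          console.log(msg.getData().toString('utf-8'));
          // Complete the notification so IoT Hub doesn't redeliver it.
          receiver.complete(msg, function (err) {
            if (err) {
              console.error('Could not complete the notification: ' + err.message);
            }
          });
        });
      }
    });
  }
});
```
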
-8. Save and close the **FileUploadNotification.js** file.
+1. Save and close the **FileUploadNotification.js** file.
+
+1. Add an environment variable for your IoT Hub connection string. You copied this string previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
+
+ - For Windows:
+
+ ```cmd
+ set IOT_HUB_CONNECTION_STRING={your iot hub connection string}
+ ```
+
+ - For Linux/Bash:
+
+ ```bash
+ export IOT_HUB_CONNECTION_STRING="{your iot hub connection string}"
+ ```
## Run the applications
-Now you are ready to run the applications.
+Now you're ready to run the applications.
At a command prompt in the `fileuploadnotification` folder, run the following command:
At a command prompt in the `fileupload` folder, run the following command:
```cmd/sh
node FileUpload.js
```
-The following screenshot shows the output from the **FileUpload** app:
+The following output is from the **FileUpload** app after the upload has completed:
+
+```output
+uploadStreamToBlockBlob success
+notifyBlobUploadStatus success
+```
-![Output from simulated-device app](./media/iot-hub-node-node-file-upload/simulated-device.png)
+The following sample output is from the **FileUploadNotification** app after the upload has completed:
-The following screenshot shows the output from the **FileUploadNotification** app:
+```output
+Service client connected
+File upload from device:
+{"deviceId":"myDeviceId","blobUri":"https://{your storage account name}.blob.core.windows.net/device-upload-container/myDeviceId/image.png","blobName":"myDeviceId/image.png","lastUpdatedTime":"2021-07-23T23:27:06+00:00","blobSizeInBytes":26214,"enqueuedTimeUtc":"2021-07-23T23:27:07.2580791Z"}
+```
-![Output from read-file-upload-notification app](./media/iot-hub-node-node-file-upload/read-file-upload-notification.png)
+## Verify the file upload
You can use the portal to view the uploaded file in the storage container you configured:
-![Uploaded file](./media/iot-hub-node-node-file-upload/uploaded-file.png)
+1. Navigate to your storage account in Azure portal.
+1. On the left pane of your storage account, select **Containers**.
+1. Select the container you uploaded the file to.
+1. Select the folder named after your device.
+1. Select the blob that you uploaded your file to. In this article, it's the blob with the same name as your file.
+
+ :::image type="content" source="./media/iot-hub-node-node-file-upload/view-uploaded-file.png" alt-text="Screenshot of viewing the uploaded file in the Azure portal.":::
+
+1. View the blob properties on the page that opens. You can select **Download** to download the file and view its contents locally.
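
As an alternative to the portal steps above, a minimal sketch like the following could list the uploaded blobs with the same `@azure/storage-blob` package used by the device app; the environment variable names are illustrative assumptions.

```javascript
// Minimal sketch (assumptions): STORAGE_CONNECTION_STRING points at the storage
// account linked to your IoT hub, and UPLOAD_CONTAINER_NAME is the container
// configured for file uploads.
const { BlobServiceClient } = require('@azure/storage-blob');

const blobServiceClient = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONNECTION_STRING);
const containerClient = blobServiceClient.getContainerClient(process.env.UPLOAD_CONTAINER_NAME);

async function listUploadedBlobs() {
  // Uploaded blobs appear under a virtual folder named after the device ID.
  for await (const blob of containerClient.listBlobsFlat()) {
    console.log(`${blob.name}\t${blob.properties.lastModified}`);
  }
}

listUploadedBlobs().catch(console.error);
```
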
## Next steps
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-schedule-jobs.md
Learn more about each of these capabilities in these articles:
* Device twin and properties: [Get started with device twins](iot-hub-node-node-twin-getstarted.md) and [Tutorial: How to use device twin properties](tutorial-device-twins.md)
-* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Tutorial: direct methods](quickstart-control-device-node.md)
+* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-nodejs)
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-twin-getstarted.md
Use the following resources to learn how to:
* configure devices using device twin's desired properties with the [Use desired properties to configure devices](tutorial-device-twins.md) tutorial,
-* control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](quickstart-control-device-node.md) tutorial.
+* control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-nodejs) quickstart.
iot-hub Iot Hub Python Python Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-python-python-schedule-jobs.md
Learn more about each of these capabilities in these articles:
* Device twin and properties: [Get started with device twins](iot-hub-python-twin-getstarted.md) and [Tutorial: How to use device twin properties](tutorial-device-twins.md)
-* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Tutorial: direct methods](quickstart-control-device-python.md)
+* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-python)
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
iot-hub Iot Hub Python Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-python-twin-getstarted.md
Use the following resources to learn how to:
* Configure devices using device twin's desired properties with the [Use desired properties to configure devices](tutorial-device-twins.md) tutorial.
-* Control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](quickstart-control-device-python.md) tutorial.
+* Control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](/azure/iot-hub/quickstart-control-device?pivots=programming-language-python) quickstart.
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-android.md
Title: Quickstart - Control a device from Azure IoT Hub quickstart (Android) | Microsoft Docs
+ Title: Control a device from Azure IoT Hub (Android) | Microsoft Docs
description: In this quickstart, you run two sample Java applications. One application is a service application that can remotely control devices connected to your hub. The other application runs on a physical or simulated device connected to your hub that can be controlled remotely.
#Customer intent: As a developer new to IoT Hub, I need to use a service application written for Android to control devices connected to the hub.
-# Quickstart: Control a device connected to an IoT hub (Android)
-
+# Control a device connected to an IoT hub (Android)
In this quickstart, you use a direct method to control a simulated device connected to Azure IoT Hub. IoT Hub is an Azure service that enables you to manage your IoT devices from the cloud and ingest high volumes of device telemetry to the cloud for storage or processing. You can use direct methods to remotely change the behavior of a device connected to your IoT hub. This quickstart uses two applications: a simulated device application that responds to direct methods called from a back-end service application and a service application that calls the direct method on the Android device.
iot-hub Quickstart Control Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device.md
+
+ Title: Quickstart - Control a device from Azure IoT Hub | Microsoft Docs
+description: In this quickstart, you run two sample applications. One application is a service application that can remotely control devices connected to your hub. The other application simulates a device connected to your hub that can be controlled remotely.
++++++ Last updated : 07/26/2021
+zone_pivot_groups: iot-hub-set1
+#Customer intent: As a developer new to IoT Hub, I need to see how to use a service application to control a device connected to the hub.
++
+# Quickstart: Control a device connected to an IoT hub
+
+In this quickstart, you use a direct method to control a simulated device connected to your IoT hub. IoT Hub is an Azure service that lets you manage your IoT devices from the cloud and ingest high volumes of device telemetry to the cloud for storage or processing. You can use direct methods to remotely change the behavior of devices connected to your IoT hub.
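
As a hedged illustration only, invoking a direct method from a Node.js service application with the `azure-iothub` client's `invokeDeviceMethod` function might look like the following sketch; the device ID, method name, and payload are placeholder assumptions.

```javascript
const Client = require('azure-iothub').Client;

// Assumptions: IOT_HUB_CONNECTION_STRING is set, and a device named 'MyNodeDevice'
// implements a direct method called 'SetTelemetryInterval'.
const client = Client.fromConnectionString(process.env.IOT_HUB_CONNECTION_STRING);

const methodParams = {
  methodName: 'SetTelemetryInterval',
  payload: 10, // illustrative payload: telemetry interval in seconds
  responseTimeoutInSeconds: 30
};

client.invokeDeviceMethod('MyNodeDevice', methodParams, (err, result) => {
  if (err) {
    console.error('Failed to invoke direct method: ' + err.message);
  } else {
    console.log('Device responded: ' + JSON.stringify(result, null, 2));
  }
});
```
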
+++++++++++++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you called a direct method on a device from a service application, and responded to the direct method call in a simulated device application.
+
+To learn how to route device-to-cloud messages to different destinations in the cloud, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Route telemetry to different endpoints for processing](tutorial-routing.md)
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-cli.md
If you are a device developer, the suggested next step is to see the telemetry q
To learn how to control your simulated device from a back-end application, continue to the next quickstart.

> [!div class="nextstepaction"]
-> [Quickstart: Control a device connected to an IoT hub](quickstart-control-device-dotnet.md)
+> [Quickstart: Control a device connected to an IoT hub](quickstart-control-device.md)
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
A port is reused for an unlimited number of connections. The port is only reused
Each public IP assigned as a frontend IP of your load balancer is given 64,000 SNAT ports for its backend pool members. Ports can't be shared with backend pool members. A range of SNAT ports can only be used by a single backend instance to ensure return packets are routed correctly.
-Should you use the automatic allocation of outbound SNAT through a load-balancing rule, the allocation table will define your port allocation.
+Should you use the automatic allocation of outbound SNAT through a load-balancing rule, the allocation table will define your port allocation for each IP.
The following <a name="snatporttable"></a>table shows the SNAT port preallocations for tiers of backend pool sizes:
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
In a logic app, each workflow always starts with a single [trigger](#trigger). A
The following screenshot shows part of an example enterprise workflow. This workflow uses conditions and switches to determine the next action. Let's say you have an order system, and your workflow processes incoming orders. You want to review orders above a certain cost manually. Your workflow already has previous steps that determine how much an incoming order costs. So, you create an initial condition based on that cost value. For example:

-- If the order is above a certain amount, the condition is false. So, the workflow processes the order.
+- If the order is below a certain amount, the condition is false. So, the workflow processes the order.
- If the condition is true, the workflow sends an email for manual review. A switch determines the next step.
- If the reviewer approves, the workflow continues to process the order.
- If the reviewer escalates, the workflow sends an escalation email to get more information about the order.
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-azure-machine-learning-architecture.md
Previously updated : 08/20/2020 Last updated : 07/27/2021 #Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
Previously updated : 10/02/2020 Last updated : 07/27/2021 #Customer intent: As a data scientist, I want to know what a compute instance is and how to use it for Azure Machine Learning.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
Previously updated : 06/18/2021 Last updated : 07/27/2021 #Customer intent: As a data scientist, I want to understand what a compute target is and why I need it.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
Previously updated : 09/22/2020 Last updated : 07/27/2021 #Customer intent: As a data scientist, I want to understand the purpose of a workspace for Azure Machine Learning.
machine-learning Dsvm Tools Deep Learning Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md
--++ Previously updated : 05/12/2021 Last updated : 07/27/2021

# Deep learning and AI frameworks for the Azure Data Science VM
Deep learning frameworks on the DSVM are listed below.
| Category | Value |
|--|--|
-| Version(s) supported | 1.8.1 (Ubuntu 18.04, Windows 2019) |
+| Version(s) supported | 1.9.0 (Ubuntu 18.04, Windows 2019) |
| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 |
-| How is it configured / installed on the DSVM? | Installed in Python, conda environment 'py38_pytorch' |
+| How is it configured / installed on the DSVM? | Installed in Python, conda environments 'py38_default', 'py38_pytorch' |
| How to run it | Terminal: Activate the correct environment, and then run Python.<br/>* [JupyterHub](dsvm-ubuntu-intro.md#how-to-access-the-ubuntu-data-science-virtual-machine): Connect, and then open the PyTorch directory for samples. |

## [TensorFlow](https://www.tensorflow.org/)
Deep learning frameworks on the DSVM are listed below.
|--|--|
| Version(s) supported | 2.5 |
| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 |
-| How is it configured / installed on the DSVM? | Installed in Python, conda environment 'py38_tensorflow' |
+| How is it configured / installed on the DSVM? | Installed in Python, conda environments 'py38_default', 'py38_tensorflow' |
| How to run it | Terminal: Activate the correct environment, and then run Python. <br/> * Jupyter: Connect to [Jupyter](provision-vm.md) or [JupyterHub](dsvm-ubuntu-intro.md#how-to-access-the-ubuntu-data-science-virtual-machine), and then open the TensorFlow directory for samples. |
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-assign-roles.md
The following table is a summary of Azure Machine Learning activities and the pe
| Publishing pipelines and endpoints | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/endpoints/pipelines/*", "/workspaces/pipelinedrafts/*", "/workspaces/modules/*"` |
| Deploying a registered model on an AKS/ACI resource | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/write", "/workspaces/services/aci/write"` |
| Scoring against a deployed AKS endpoint | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/score/action", "/workspaces/services/aks/listkeys/action"` (when you are not using Azure Active Directory auth) OR `"/workspaces/read"` (when you are using token auth) |
-| Accessing storage using interactive notebooks | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/computes/read", "/workspaces/notebooks/samples/read", "/workspaces/notebooks/storage/*", "/workspaces/listKeys/action"` |
+| Accessing storage using interactive notebooks | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/computes/read", "/workspaces/notebooks/samples/read", "/workspaces/notebooks/storage/*", "/workspaces/listStorageAccountKeys/action"` |
| Create new custom role | Owner, contributor, or custom role allowing `Microsoft.Authorization/roleDefinitions/write` | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` |

> [!TIP]
Here are a few things to be aware of while you use Azure role-based access contr
- [Enterprise security overview](concept-enterprise-security.md)
- [Virtual network isolation and privacy overview](how-to-network-security-overview.md)
- [Tutorial: Train models](tutorial-train-models-with-aml.md)
-- [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftmachinelearningservices)
+- [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftmachinelearningservices)
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
The following steps describe how this topology works:
2. **Create private endpoint with private DNS integration targeting Private DNS Zone linked to DNS Server Virtual Network**:
- The next step is to create a Private Endpoint to the Azure Machine Learning workspace. A private endpoint ensures Private DNS integration is enabled. The private endpoint targets both Private DNS Zones created in step 1. This ensures all communication with the workspace is done via the Private Endpoint in the Azure Machine Learning Virtual Network.
+ The next step is to create a Private Endpoint to the Azure Machine Learning workspace. The private endpoint targets both Private DNS Zones created in step 1. This ensures all communication with the workspace is done via the Private Endpoint in the Azure Machine Learning Virtual Network.
+
+ > [!IMPORTANT]
+ > The private endpoint must have Private DNS integration enabled for this example to function correctly.
3. **Create conditional forwarder in DNS Server to forward to Azure DNS**:
The following steps describe how this topology works:
The zones to conditionally forward are listed below. The Azure DNS Virtual Server IP address is 168.63.129.16: **Azure Public regions**:
- - ``` privatelink.api.azureml.ms```
- - ``` privatelink.notebooks.azure.net```
+ - ```api.azureml.ms```
+ - ```notebooks.azure.net```
**Azure China regions**:
- - ```privatelink.api.ml.azure.cn```
- - ```privatelink.notebooks.chinacloudapi.cn```
+ - ```api.ml.azure.cn```
+ - ```notebooks.chinacloudapi.cn```
**Azure US Government regions**:
- - ```privatelink.api.ml.azure.us```
- - ```privatelink.notebooks.usgovcloudapi.net```
+ - ```api.ml.azure.us```
+ - ```notebooks.usgovcloudapi.net```
> [!IMPORTANT]
> Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us```
- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>. notebooks.usgovcloudapi.net```
-5. **Public DNS responds with CNAME**:
+5. **Azure DNS recursively resolves workspace domain to CNAME**:
- DNS Server will proceed to resolve the FQDNs from step 4 from the Public DNS. The Public DNS will respond with one of the domains listed in the informational section in step 1.
+ The DNS Server will resolve the FQDNs from step 4 from Azure DNS. Azure DNS will respond with one of the domains listed in step 1.
6. **DNS Server recursively resolves workspace domain CNAME record from Azure DNS**:
The following steps describe how this topology works:
2. **Create private endpoint with private DNS integration targeting Private DNS Zone linked to DNS Server Virtual Network**:
- The next step is to create a Private Endpoint to the Azure Machine Learning workspace. A private endpoint ensures Private DNS integration is enabled. The private endpoint targets both Private DNS Zones created in step 1. This ensures all communication with the workspace is done via the Private Endpoint in the Azure Machine Learning Virtual Network.
+ The next step is to create a Private Endpoint to the Azure Machine Learning workspace. The private endpoint targets both Private DNS Zones created in step 1. This ensures all communication with the workspace is done via the Private Endpoint in the Azure Machine Learning Virtual Network.
+
+ > [!IMPORTANT]
+ > The private endpoint must have Private DNS integration enabled for this example to function correctly.
3. **Create conditional forwarder in DNS Server to forward to Azure DNS**:
The following steps describe how this topology works:
The zones to conditionally forward are listed below. The Azure DNS Virtual Server IP address is 168.63.129.16. **Azure Public regions**:
- - ``` privatelink.api.azureml.ms```
- - ``` privatelink.notebooks.azure.net```
+ - ```api.azureml.ms```
+ - ```notebooks.azure.net```
**Azure China regions**:
- - ```privatelink.api.ml.azure.cn```
- - ```privatelink.notebooks.chinacloudapi.cn```
+ - ```api.ml.azure.cn```
+ - ```notebooks.chinacloudapi.cn```
**Azure US Government regions**:
- - ```privatelink.api.ml.azure.us```
- - ```privatelink.notebooks.usgovcloudapi.net```
+ - ```api.ml.azure.us```
+ - ```notebooks.usgovcloudapi.net```
> [!IMPORTANT]
> Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
The zones to conditionally forward are listed below. The IP addresses to forward to are the IP addresses of your DNS Servers:

**Azure Public regions**:
- - ``` privatelink.api.azureml.ms```
- - ``` privatelink.notebooks.azure.net```
+ - ```api.azureml.ms```
+ - ```notebooks.azure.net```
**Azure China regions**:
- - ```privatelink.api.ml.azure.cn```
- - ```privatelink.notebooks.chinacloudapi.cn```
+ - ```api.ml.azure.cn```
+ - ```notebooks.chinacloudapi.cn```
**Azure US Government regions**:
- - ```privatelink.api.ml.azure.us```
- - ```privatelink.notebooks.usgovcloudapi.net```
+ - ```api.ml.azure.us```
+ - ```notebooks.usgovcloudapi.net```
> [!IMPORTANT]
> Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us```
- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>. notebooks.usgovcloudapi.net```
-6. **Public DNS responds with CNAME**:
+6. **On-premises DNS server recursively resolves workspace domain**:
- DNS Server will proceed to resolve the FQDNs from step 4 from the Public DNS. The Public DNS will respond with one of the domains listed in the informational section in step 1.
+ The on-premises DNS Server will resolve the FQDNs from step 5 from the DNS Server. Because there is a conditional forwarder (step 4), the on-premises DNS Server will sen