Updates from: 04/13/2023 01:05:25
Service Microsoft Docs article Related commit history on GitHub Change details
azure-arc Uninstall Azure Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/uninstall-azure-arc-data-controller.md
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-$mynam
Optionally, also delete the namespace as follows:

```
-kubectl delete --namespace <name of namespace>
+kubectl delete namespace <name of namespace>
## Example:
-kubectl delete --namespace arc
+kubectl delete namespace arc
```

## Verify all objects are deleted
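A quick way to confirm the cleanup worked (a sketch, assuming the namespace was named `arc` as in the example above):

```bash
# Returns a "NotFound" error once the namespace and all its objects are gone
kubectl get namespace arc
```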
azure-arc Managed Identity Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/managed-identity-authentication.md
The following response is an example that is returned:
For an Azure Arc-enabled Linux server, using Bash, you invoke the web request to get the token from the local host on the specified port. Specify the following request using the IP address or the environment variable **IDENTITY_ENDPOINT**. To complete this step, you need an SSH client.

```bash
-ChallengeTokenPath=$(curl -s -D - -H Metadata:true "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com" | grep Www-Authenticate | cut -d "=" -f 2 | tr -d "[:cntrl:]")
-ChallengeToken=$(cat $ChallengeTokenPath)
+CHALLENGE_TOKEN_PATH=$(curl -s -D - -H Metadata:true "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com" | grep Www-Authenticate | cut -d "=" -f 2 | tr -d "[:cntrl:]")
+CHALLENGE_TOKEN=$(cat $CHALLENGE_TOKEN_PATH)
if [ $? -ne 0 ]; then
    echo "Could not retrieve challenge token, double check that this command is run with root privileges."
else
- curl -s -H Metadata:true -H "Authorization: Basic $ChallengeToken" "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com"
+ curl -s -H Metadata:true -H "Authorization: Basic $CHALLENGE_TOKEN" "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com"
fi
```
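The token response is JSON. As a sketch (assuming `jq` is installed on the server), you could extract just the bearer token from the final response like this:

```bash
# Pull the access_token field out of the JSON response body
curl -s -H Metadata:true -H "Authorization: Basic $CHALLENGE_TOKEN" "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fmanagement.azure.com" | jq -r '.access_token'
```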
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
When Azure Arc-enabled servers is configured on the VM, you see two representati
For Linux, run the following commands:

```bash
- current_hostname=$(hostname)
+ CURRENT_HOSTNAME=$(hostname)
sudo service walinuxagent stop
sudo waagent -deprovision -force
sudo rm -rf /var/lib/waagent
- sudo hostnamectl set-hostname $current_hostname
+ sudo hostnamectl set-hostname $CURRENT_HOSTNAME
```

3. Block access to the Azure IMDS endpoint.
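One way to block the IMDS endpoint on a typical Linux VM is a firewall rule that drops outbound traffic to the well-known IMDS address. This is a sketch using `iptables`, not necessarily the exact rule the article prescribes:

```bash
# Drop all outbound traffic to the Azure IMDS address (169.254.169.254)
sudo iptables -I OUTPUT 1 -d 169.254.169.254 -j DROP
```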
azure-maps Extend Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/extend-geojson.md
Title: Extended GeoJSON geometries | Microsoft Azure Maps
-description: Learn how Azure Maps extends the GeoJSON spec to include additional geometric shapes. View examples that set up circles and rectangles for use in maps.
+description: Learn how Azure Maps extends the GeoJSON spec to include more geometric shapes. View examples that set up circles and rectangles for use in maps.
Last updated 05/17/2018
# Extended GeoJSON geometries
-Azure Maps provides a list of powerful APIs to search inside and along geographical features. These APIs adhere to the standard [GeoJSON spec][1] of representing geographical features.
+Azure Maps provides a list of powerful APIs to search inside and along geographical features. These APIs adhere to the standard [GeoJSON spec] of representing geographical features.
-The [GeoJSON spec][1] supports only the following geometries:
+The [GeoJSON spec] supports only the following geometries:
* GeometryCollection
* LineString
The [GeoJSON spec][1] supports only the following geometries:
* Point
* Polygon
-Some Azure Maps APIs accept geometries that aren't part of the [GeoJSON spec][1]. For instance, the [Search Inside Geometry](/rest/api/maps/search/postsearchinsidegeometry) API accepts Circle and Polygons.
+Some Azure Maps APIs accept geometries that aren't part of the [GeoJSON spec]. For instance, the [Search Inside Geometry] API accepts Circle and Polygon geometries.
-This article provides a detailed explanation on how Azure Maps extends the [GeoJSON spec][1] to represent certain geometries.
+This article provides a detailed explanation on how Azure Maps extends the [GeoJSON spec] to represent certain geometries.
## Circle
-The `Circle` geometry is not supported by the [GeoJSON spec][1]. We use a `GeoJSON Point Feature` object to represent a circle.
+The [GeoJSON spec] doesn't support the `Circle` geometry. The `GeoJSON Point Feature` object is used to represent a circle.
A `Circle` geometry represented using the `GeoJSON Feature` object __must__ contain the following coordinates and properties:
-- Center
+| Coordinate | Property |
+| -- | -- |
+| Center | The circle's center is represented using a `GeoJSON Point` object. |
+| Radius | The circle's `radius` is represented using `GeoJSON Feature`'s properties. The radius value is in _meters_ and must be of the type `double`. |
+| SubType | The circle geometry must also contain the `subType` property. This property must be a part of the `GeoJSON Feature`'s properties, and its value should be _Circle_. |
- The circle's center is represented using a `GeoJSON Point` object.
+### Circle example
-- Radius
+Here's how you represent a circle using a `GeoJSON Feature` object. Let's center the circle at latitude: 47.639754 and longitude: -122.126986, and assign it a radius equal to 100 meters:
- The circle's `radius` is represented using `GeoJSON Feature`'s properties. The radius value is in _meters_ and must be of the type `double`.
--- SubType-
- The circle geometry must also contain the `subType` property. This property must be a part of the `GeoJSON Feature`'s properties and its value should be _Circle_
-
-#### Example
-
-Here's how you'll represent a circle using a `GeoJSON Feature` object. Let's center the circle at latitude: 47.639754 and longitude: -122.126986, and assign it a radius equal to 100 meters:
-
-```json
+```json
{
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-122.126986, 47.639754]
    },
    "properties": {
        "subType": "Circle",
        "radius": 100
    }
}
```
Here's how you'll represent a circle using a `GeoJSON Feature` object. Let's cen
## Rectangle
-The `Rectangle` geometry is not supported by the [GeoJSON spec][1]. We use a `GeoJSON Polygon Feature` object to represent a rectangle. The rectangle extension is primarily used by the Web SDK's drawing tools module.
+The [GeoJSON spec] doesn't support the `Rectangle` geometry. The `GeoJSON Polygon Feature` object is used to represent a rectangle. The rectangle extension is primarily used by the Web SDK's drawing tools module.
A `Rectangle` geometry represented using the `GeoJSON Polygon Feature` object __must__ contain the following coordinates and properties:
-- Corners-
- The rectangle's corners are represented using the coordinates of a `GeoJSON Polygon` object. There should be five coordinates, one for each corner. And, a fifth coordinate that is the same as the first coordinate, to close the polygon ring. It will be assumed that these coordinates align, and that the developer may rotate them as wanted.
--- SubType
+| Coordinate | Property |
+| -- | -- |
+| Corners | The rectangle's corners are represented using the coordinates of a `GeoJSON Polygon` object. There should be five coordinates: one for each of the four corners, and a fifth that is the same as the first coordinate, to close the polygon ring. It's assumed that these coordinates align, and that the developer may rotate them as wanted. |
+| SubType | The rectangle geometry must also contain the `subType` property. This property must be a part of the `GeoJSON Feature`'s properties, and its value should be _Rectangle_. |
- The rectangle geometry must also contain the `subType` property. This property must be a part of the `GeoJSON Feature`'s properties, and its value should be _Rectangle_
-
-### Example
+### Rectangle example
```json
{
A `Rectangle` geometry represented using the `GeoJSON Polygon Feature` object __
}
```

## Next steps

Learn more about GeoJSON data in Azure Maps:

> [!div class="nextstepaction"]
-> [Geofence GeoJSON format](geofence-geojson.md)
+> [Geofence GeoJSON format]
Review the glossary of common technical terms associated with Azure Maps and location intelligence applications:

> [!div class="nextstepaction"]
-> [Azure Maps glossary](glossary.md)
+> [Azure Maps glossary]
-[1]: https://tools.ietf.org/html/rfc7946
+[GeoJSON spec]: https://tools.ietf.org/html/rfc7946
+[Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Geofence GeoJSON format]: geofence-geojson.md
+[Azure Maps glossary]: glossary.md
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
# Azure Maps service geographic scope
-Azure Maps is a global service that supports specifying a geographic scope, which allows you to limit data residency to the European (EU) or United States (US) geographic areas (geos). All requests (including input data) are stored exclusively in the specified geographic area. For more information on Azure regions and geographies, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies).
+Azure Maps is a global service that supports specifying a geographic scope, which allows you to limit data residency to the European (EU) or United States (US) geographic areas (geos). All requests (including input data) are stored exclusively in the specified geographic area. For more information on Azure regions and geographies, see [Azure geographies].
## Data locations
-For disaster recovery and high availability, Microsoft may replicate customer data to other regions within the same geographic area. For example, if you use the Azure Maps Europe API geographic endpoint, your requests (including input data) are kept in an Azure datacenter in Europe. This only impacts where request data is saved, it doesn't limit the locations from which the customers, or their end users, may access customer data via Azure Maps API.
+For disaster recovery and high availability, Microsoft may replicate customer data to other regions within the same geographic area. For example, if you use the Azure Maps Europe API geographic endpoint, your requests (including input data) are kept in an Azure datacenter in Europe. The only impact is where request data is saved; it doesn't limit the locations from which customers, or their end users, may access customer data via the Azure Maps API.
## Geographic API endpoint mapping
-The table below describes the mapping between geography and supported Azure geographic API endpoint. For example, if you want all Azure Maps Search Address requests to be processed and stored within the European Azure geography, use the `eu.atlas.microsoft.com` endpoint.
+The following table describes the mapping between geography and supported Azure geographic API endpoint. For example, if you want all Azure Maps Search Address requests to be processed and stored within the European Azure geography, use the `eu.atlas.microsoft.com` endpoint.
| Azure Geographic areas (geos) | API geographic endpoint |
|-|-|
The table below describes the mapping between geography and supported Azure geog
| United States | `us.atlas.microsoft.com` |

> [!TIP]
-> When using the Azure Government cloud, use the `atlas.azure.us` endpoint. For more information, see [Azure Government cloud support](how-to-use-map-control.md#azure-government-cloud-support).
+> When using the Azure Government cloud, use the `atlas.azure.us` endpoint. For more information, see [Azure Government cloud support].
### URL example for geographic mapping
-The following is the [Search - Get Search Address](/rest/api/maps/search/get-search-address) request:
+The following code snippet is an example of the [Search - Get Search Address] request:
```http
GET https://{geography}.atlas.microsoft.com/search/address/{format}?api-version=1.0&query={query}
GET https://eu.atlas.microsoft.com/search/address/{format}?api-version=1.0&query
## Additional information
-- For information on limiting what regions a SAS token is allowed to be used in see [Authentication with Azure Maps](azure-maps-authentication.md#create-sas-tokens)
-- [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies)
-- [Azure Government cloud support](how-to-use-map-control.md#azure-government-cloud-support)
+For information on limiting the regions in which a SAS token can be used, see [Authentication with Azure Maps].
+
+- [Azure geographies]
+- [Azure Government cloud support]
+
+[Authentication with Azure Maps]: azure-maps-authentication.md#create-sas-tokens
+[Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies
+[Azure Government cloud support]: how-to-use-map-control.md#azure-government-cloud-support
+[Search - Get Search Address]: /rest/api/maps/search/get-search-address
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
The following list describes common words used with the Azure Maps services.
<a name="advanced-routing"></a> **Advanced routing**: A collection of services that perform advance operations using road routing data; such as, calculating reachable ranges (isochrones), distance matrices, and batch route requests.
-<a name="aerial-imagery"></a> **Aerial imagery**: See [Satellite imagery](#satellite-imagery).
+<a name="aerial-imagery"></a> **Aerial imagery**: See [Satellite imagery](#satellite-imagery).
<a name="along-a-route-search"></a> **Along a route search**: A spatial query that looks for data within a specified detour time or distance from a route path.
The following list describes common words used with the Azure Maps services.
<a name="annotation"></a> **Annotation**: Text or graphics displayed on the map to provide information to the user. Annotation may identify or describe a specific map entity, provide general information about an area on the map, or supply information about the map itself.
-<a name="antimeridian"></a> **Antimeridian**: Also known as the 180<sup>th</sup> Meridian. This is the point where -180 degrees and 180 degrees of longitude meet. Which is the opposite of the prime meridian on the globe.
+<a name="antimeridian"></a> **Antimeridian**: Or _180<sup>th</sup> Meridian_. The point where -180 degrees and 180 degrees of longitude meet, the opposite of the prime meridian on the globe.
<a name="application-programming-interface-api"></a> **Application Programming Interface (API)**: A specification that allows developers to create applications.
The following list describes common words used with the Azure Maps services.
<a name="envelope"></a> **Envelope**: See [Bounding box](#bounding-box).
-<a name="extended-postal-code"></a> **Extended postal code**: A postal code that may include additional information. For example, in the USA, zip codes have five digits. But, an extended zip code, known as zip+4, includes four additional digits. These additional digits are used to identify a geographic segment within the five-digit delivery area, such as a city block, a group of apartments, or a post office box. Knowing the geographic segment aids in efficient mail sorting and delivery.
+<a name="extended-postal-code"></a> **Extended postal code**: A postal code that may include more information. For example, in the USA, zip codes have five digits. But, an extended zip code, known as zip+4, includes four more digits. These digits are used to identify a geographic segment within the five-digit delivery area, such as a city block, a group of apartments, or a post office box. Knowing the geographic segment aids in efficient mail sorting and delivery.
<a name="extent"></a> **Extent**: See [Bounding box](#bounding-box).
The following list describes common words used with the Azure Maps services.
<a name="federated-authentication"></a> **Federated authentication**: An authentication method that allows a single logon/authentication mechanism to be used across multiple web and mobile apps.
-<a name="feature"></a> **Feature**: An object that combines a geometry with an additional metadata information.
+<a name="feature"></a> **Feature**: An object that combines a geometry with metadata information.
<a name="feature-collection"></a> **Feature collection**: A collection of feature objects.
The following list describes common words used with the Azure Maps services.
<a name="free-form-address"></a> **Free form address**: A full address that is represented as a single line of text.
-<a name="fuzzy-search"></a> **Fuzzy search**: A search that takes in a free form string of text that may be an address or point of interest.
+<a name="fuzzy-search"></a> **Fuzzy search**: A search that takes in a free form string of text that may be an address or point of interest.
## G
-<a name="geocode"></a> **Geocode**: An address or location that has been converted into a coordinate that can be used to display that location on a map.
+<a name="geocode"></a> **Geocode**: An address or location that has been converted into a coordinate that can be used to display that location on a map.
-<a name="geocoding"></a> **Geocoding**: Also known as forward geocoding, is the process of converting address of location data into coordinates.
+<a name="geocoding"></a> **Geocoding**: Or _forward geocoding_, is the process of converting address of location data into coordinates.
<a name="geodesic-path"></a> **Geodesic path**: The shortest path between two points on a curved surface. When rendered on Azure Maps this path appears as a curved line due to the Mercator projection. <a name="geofence"></a> **Geofence**: A defined geographical region that can be used to trigger events when a device enters or exists the region.
-<a name="geojson"></a> **GeoJSON**: Is a common JSON-based file format used for storing geographical vector data such as points, lines, and polygons. **Note**: Azure Maps uses an extended version of GeoJSON as [documented here](extend-geojson.md).
+<a name="geojson"></a> **GeoJSON**: Is a common JSON-based file format used for storing geographical vector data such as points, lines, and polygons. For more information Azure Maps use of an extended version of GeoJSON, see [Extended geojson](extend-geojson.md).
<a name="geometry"></a> **Geometry**: Represents a spatial object such as a point, line, or polygon.
The following list describes common words used with the Azure Maps services.
<a name="gis"></a> **GIS**: An acronym for "Geographic Information System". A common term used to describe the mapping industry.
-<a name="gml"></a> **GML**: Also known as Geography Markup Language. An XML file extension for storing spatial data.
+<a name="gml"></a> **GML** (Geography Markup Language): An XML file extension for storing spatial data.
-<a name="gps"></a> **GPS**: Also known as Global Positioning System, is a system of satellites used for determining a devices position on the earth. The orbiting satellites transmit signals that allow a GPS receiver anywhere on earth to calculate its own location through trilateration.
+<a name="gps"></a> **GPS** (Global Positioning System): A system of satellites used for determining a devices position on the earth. The orbiting satellites transmit signals that allow a GPS receiver anywhere on earth to calculate its own location through trilateration.
-<a name="gpx"></a> **GPX**: Also known as GPS eXchange format, is an XML file format commonly created from GPS devices.
+<a name="gpx"></a> **GPX** (GPS eXchange format): An XML file format commonly created from GPS devices.
<a name="great-circle-distance"></a> **Great-circle distance**: The shortest distance between two points on the surface of a sphere. <a name="greenwich-mean-time-gmt"></a> **Greenwich Mean Time (GMT)**: The time at the prime meridian, which runs through the Royal Observatory in Greenwich, England.
-<a name="guid"></a> **GUID**: A globally unique identifier. A string used to uniquely identify an interface, class, type library, component category, or record.
+<a name="guid"></a> **GUID** (globally unique identifier): A string used to uniquely identify an interface, class, type library, component category, or record.
## H

<a name="haversine-formula"></a> **Haversine formula**: A common equation used for calculating the great-circle distance between two points on a sphere.
-<a name="hd-maps"></a> **HD maps**: Also known as High Definition Maps, consists of high fidelity road network information such as lane markings, signage, and direction lights required for autonomous driving.
+<a name="hd-maps"></a> **HD maps** (High Definition Maps): consists of high fidelity road network information such as lane markings, signage, and direction lights required for autonomous driving.
<a name="heading"></a> **Heading**: The direction something is pointing or facing. See also [Bearing](#heading).
The following list describes common words used with the Azure Maps services.
<a name="linear-interpolation"></a> **Linear interpolation**: The estimation of an unknown value using the linear distance between known values.
-<a name="linestring"></a> **LineString**: A geometry used to represent a line. Also known as a polyline.
+<a name="linestring"></a> **LineString**: A geometry used to represent a line. Also known as a polyline.
<a name="localization"></a> **Localization**: Support for different languages and cultures.
The following list describes common words used with the Azure Maps services.
<a name="map-tile"></a> **Map Tile**: A rectangular image that represents a partition of a map canvas. For more information, see the [Zoom levels and tile grid documentation](zoom-levels-and-tile-grid.md).
-<a name="marker"></a> **Marker**: Also known as a pin or pushpin, is an icon that represents a point location on a map.
+<a name="marker"></a> **Marker**: Also called a pin or pushpin, is an icon that represents a point location on a map.
<a name="mercator-projection"></a> **Mercator projection**: A cylindrical map projection that became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines, as straight segments that conserve the angles with the meridians. All flat map projections distort the shapes or sizes of the map when compared to the true layout of the Earth's surface. The Mercator projection exaggerates areas far from the equator, such that smaller areas appear larger on the map as you approach the poles.
-<a name="multilinestring"></a> **MultiLineString**: A geometry that represents a collection of LineString objects.
+<a name="multilinestring"></a> **MultiLineString**: A geometry that represents a collection of LineString objects.
<a name="multipoint"></a> **MultiPoint**: A geometry that represents a collection of Point objects. <a name="multipolygon"></a> **MultiPolygon**: A geometry that represents a collection of Polygon objects. For example, to show the boundary of Hawaii, each island would be outlined with a polygon. Thus, the boundary of Hawaii would thus be a MultiPolygon.
-<a name="municipality"></a> **Municipality**: A city or town.
+<a name="municipality"></a> **Municipality**: A city or town.
<a name="municipality-subdivision"></a> **Municipality subdivision**: A subdivision of a municipality, such as a neighborhood or local area name such as "downtown".
The following list describes common words used with the Azure Maps services.
<a name="pitch"></a> **Pitch**: The amount of tilt the map has relative to the vertical where 0 is looking straight down at the map.
-<a name="point"></a> **Point**: A geometry that represents a single position on the map.
+<a name="point"></a> **Point**: A geometry that represents a single position on the map.
<a name="points-of-interest-poi"></a> **Points of interest (POI)**: A business, landmark, or common place of interest.
-<a name="polygon"></a> **Polygon**: A solid geometry that represents an area on a map.
+<a name="polygon"></a> **Polygon**: A solid geometry that represents an area on a map.
-<a name="polyline"></a> **Polyline**: A geometry used to represent a line. Also known as a LineString.
+<a name="polyline"></a> **Polyline**: A geometry used to represent a line. Also known as a LineString.
<a name="position"></a> **Position**: The longitude, latitude, and altitude (x,y,z coordinates) of a point.
The following list describes common words used with the Azure Maps services.
<a name="primary-key"></a> **Primary key**: The first of two subscription keys provided for Azure Maps shared key authentication. See [Shared key authentication](#shared-key-authentication).
-<a name="prime-meridian"></a> **Prime meridian**: A line of longitude that represents 0-degrees longitude. Generally, longitude values decrease when traveling in a westerly direction until 180 degrees and increase when traveling in easterly directions to -180-degrees.
+<a name="prime-meridian"></a> **Prime meridian**: A line of longitude that represents 0-degrees longitude. Generally, longitude values decrease when traveling in a westerly direction until 180 degrees and increase when traveling in easterly directions to -180-degrees.
-<a name="prj"></a> **PRJ**: A text file which often accompanies a Shapefile file that contains information about the projected coordinate system the data set is in.
+<a name="prj"></a> **PRJ**: A text file often accompanying a `Shapefile` file that contains information about the projected coordinate system the data set is in.
-<a name="projection"></a> **Projection**: A projected coordinate system based on a map projection such as transverse Mercator, Albers equal area, and Robinson. These provide the ability to project maps of the earth's spherical surface onto a two-dimensional Cartesian coordinate plane. Projected coordinate systems are sometimes referred to as map projections.
+<a name="projection"></a> **Projection**: A projected coordinate system based on a map projection such as transverse Mercator, Albers equal area, and Robinson. Projection enables the earth's spherical surface to be represented on a two-dimensional Cartesian coordinate plane. Projected coordinate systems are sometimes referred to as map projections.
## Q
-<a name="quadkey"></a> **Quadkey**: A base-4 address index for a tile within a quadtree tiling system. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) documentation for more information.
+<a name="quadkey"></a> **`Quadkey`**: A base-4 address index for a tile within a `quadtree` tiling system. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-<a name="quadtree"></a> **Quadtree**: A data structure in which each node has exactly four children. The tiling system used in Azure Maps uses a quadtree structure such that as a user zooms in one level, each map tile breaks up into four subtiles. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) documentation for more information.
+<a name="quadtree"></a> **`Quadtree`**: A data structure in which each node has exactly four children. The tiling system used in Azure Maps uses a 'quadtree' structure such that as a user zooms in one level, each map tile breaks up into four subtiles. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-<a name="queries-per-second-qps"></a> **Queries Per Second (QPS)**: The number of queries or requests that can be made to a service or platform within one second.
+<a name="queries-per-second-qps"></a> **Queries Per Second (QPS)**: The number of queries or requests that can be made to a service or platform within one second.
## R
-<a name="radial-search"></a> **Radial search**: A spatial query that searches a fixed straight-line distance (as the crow flies) from a point.
+<a name="radial-search"></a> **Radial search**: A spatial query that searches a fixed straight-line distance (as the crow flies) from a point.
<a name="raster-data"></a> **Raster data**: A matrix of cells (or pixels) organized into rows and columns (or a grid) where each cell contains a value representing information, such as temperature. Raster's include digital aerial photographs, imagery from satellites, digital pictures, and scanned maps.
The following list describes common words used with the Azure Maps services.
<a name="rest-service"></a> **REST service**: The acronym REST stands for Representational State Transfer. A REST service is a URL-based web service that relies on basic web technology to communicate, the most common methods being HTTP GET and POST requests. These types of services tend to me much quicker and smaller than traditional SOAP-based services.
-<a name="reverse-geocode"></a> **Reverse geocode**: The process of taking a coordinate and determining the address in which is represents on a map.
+<a name="reverse-geocode"></a> **Reverse geocode**: The process of taking a coordinate and determining the address it represents on a map.
<a name="reproject"></a> **Reproject**: See [Transformation](#transformation).
The following list describes common words used with the Azure Maps services.
<a name="route"></a> **Route**: A path between two or more locations, which may also include additional information such as instructions for waypoints along the route.
-<a name="requests-per-second-rps"></a> **Requests Per Second (RPS)**: See [Queries Per Second (QPS)](#queries-per-second-qps).
+<a name="requests-per-second-rps"></a> **Requests Per Second (RPS)**: See [Queries Per Second (QPS)](#queries-per-second-qps).
<a name="rss"></a> **RSS**: Acronym for Really Simple Syndication, Resource Description Framework (RDF) Site Summary, or Rich Site Summary, depending on the source. A simple, structured XML format for sharing content among different Web sites. RSS documents include key metadata elements such as author, date, title, a brief description, and a hypertext link. This information helps a user (or an RSS publisher service) decide what materials are worth further investigation. ## S
-<a name="satellite-imagery"></a> **Satellite imagery**: Imagery that has been captured by planes and satellites pointing straight down.
+<a name="satellite-imagery"></a> **Satellite imagery**: Imagery captured by planes and satellites pointing straight down.
<a name="secondary-key"></a> **Secondary key**: The second of two subscriptions keys provided for Azure Maps shared key authentication. See [Shared key authentication](#shared-key-authentication).
-<a name="shapefile-shp"></a> **Shapefile (SHP)**: Also known as an ESRI Shapefile, is a vector data storage format for storing the location, shape, and attributes of geographic features. A shapefile is stored in a set of related files.
+<a name="shapefile-shp"></a> **Shapefile (SHP)**: Or *ESRI Shapefile*, is a vector data storage format for storing the location, shape, and attributes of geographic features. A shapefile is stored in a set of related files.
-<a name="shared-key-authentication"></a> **Shared key authentication**: Shared Key authentication relies on passing Azure Maps account generated keys with each request to Azure Maps. These keys are often referred to as subscription keys. It is recommended that keys are regularly regenerated for security. Two keys are provided so that you can maintain connections using one key while regenerating the other. When you regenerate your keys, you must update any applications that access this account to use the new keys. To learn more about Azure Maps authentication, see [Azure Maps and Azure AD](azure-maps-authentication.md) and [Manage authentication in Azure Maps](how-to-manage-authentication.md).
+<a name="shared-key-authentication"></a> **Shared key authentication**: Shared Key authentication relies on passing Azure Maps account generated keys with each request to Azure Maps. These keys are often referred to as subscription keys. It's recommended that keys are regularly regenerated for security. Two keys are provided so that you can maintain connections using one key while regenerating the other. When you regenerate your keys, you must update any applications that access this account to use the new keys. To learn more about Azure Maps authentication, see [Azure Maps and Azure AD](azure-maps-authentication.md) and [Manage authentication in Azure Maps](how-to-manage-authentication.md).
<a name="software-development-kit-sdk"></a> **Software development kit (SDK)**: A collection of documentation, sample code, and sample apps to help a developer use an API to build apps.
The following list describes common words used with the Azure Maps services.
<a name="spatial-reference"></a> **Spatial reference**: A coordinate-based local, regional, or global system used to precisely locate geographical entities. It defines the coordinate system used to relate map coordinates to locations in the real world. Spatial references ensure spatial data from different layers, or sources, can be integrated for accurate viewing or analysis. Azure Maps uses the [EPSG:3857](https://epsg.io/3857) coordinate reference system and WGS 84 for input geometry data.
-<a name="sql-spatial"></a> **SQL spatial**: Refers to the spatial functionality built into SQL Azure and SQL Server 2008 and above. This spatial functionality is also available as a .NET library that can be used independently of SQL Server. For more information, see the [Spatial Data (SQL Server) documentation](/sql/relational-databases/spatial/spatial-data-sql-server) for more information.
+<a name="sql-spatial"></a> **SQL spatial**: Refers to the spatial functionality built into SQL Azure and SQL Server 2008 and above. This spatial functionality is also available as a .NET library that can be used independently of SQL Server. For more information, see [Spatial Data (SQL Server)](/sql/relational-databases/spatial/spatial-data-sql-server).
<a name="subscription-key"></a> **Subscription key**: See [Shared key authentication](#shared-key-authentication).
-<a name="synchronous-request"></a> **Synchronous request**: An HTTP request opens a connection and waits for a response. Browsers limit the number of concurrent HTTP requests that can be made from a page. If multiple long running synchronous requests are made at the same time, then this limit can be reached. Requests will be delayed until one of the other requests has completed.
+<a name="synchronous-request"></a> **Synchronous request**: An HTTP request opens a connection and waits for a response. Browsers limit the number of concurrent HTTP requests that can be made from a page. If multiple long running synchronous requests are made at the same time, then this limit can be reached. Requests are delayed until one of the other requests has completed.
## T
-<a name="telematics"></a> **Telematics**: Sending, receiving, and storing information via telecommunication devices in conjunction with effecting control on remote objects.
+<a name="telematics"></a> **Telematics**: Sending, receiving, and storing information via telecommunication devices along with effecting control on remote objects.
<a name="temporal-data"></a> **Temporal data**: Data that specifically refers to times or dates. Temporal data may refer to discrete events, such as lightning strikes; moving objects, such as trains; or repeated observations, such as counts from traffic sensors.
The following list describes common words used with the Azure Maps services.
- One transaction is created for every 15 map or traffic tiles requested.
- One transaction is created for each API call to one of the services in Azure Maps. Searching and routing are examples of Azure Maps services.
-<a name="transformation"></a> **Transformation**: The process of converting data between different geographic coordinate systems. You may, for example, have some data that was captured in the United Kingdom and based on the OSGB 1936 geographic coordinate system. Azure Maps uses the [EPSG:3857](https://epsg.io/3857) coordinate reference system variant of WGS84. As such to display the data correctly, it will need to have its coordinates transformed from one system to another.
+<a name="transformation"></a> **Transformation**: The process of converting data between different geographic coordinate systems. You may, for example, have some data that was captured in the United Kingdom and based on the OSGB 1936 geographic coordinate system. Azure Maps uses the [EPSG:3857](https://epsg.io/3857) coordinate reference system variant of WGS84. As such to display the data correctly, it needs to have its coordinates transformed from one system to another.
<a name="traveling-salesmen-problem-tsp"></a> **Traveling Salesmen Problem (TSP)**: A Hamiltonian circuit problem in which a salesperson must find the most efficient way to visit a series of stops, then return to the starting location.
The following list describes common words used with the Azure Maps services.
<a name="vehicle-routing-problem-vrp"></a> **Vehicle Routing Problem (VRP)**: A class of problems, in which a set of ordered routes for a fleet of vehicles is calculated while taking into consideration as set of constraints. These constraints may include delivery time windows, multiple route capacities, and travel duration constraints.
-<a name="voronoi-diagram"></a> **Voronoi diagram**: A partition of space into areas, or cells, that surround a set of geometric objects, usually point features. These cells, or polygons, must satisfy the criteria for Delaunay triangles. All locations within an area are closer to the object it surrounds than to any other object in the set. Voronoi diagrams are often used to delineate areas of influence around geographic features.
+<a name="voronoi-diagram"></a> **Voronoi diagram**: A partition of space into areas, or cells, that surrounds a set of geometric objects, usually point features. These cells, or polygons, must satisfy the criteria for Delaunay triangles. All locations within an area are closer to the object it surrounds than to any other object in the set. Voronoi diagrams are often used to delineate areas of influence around geographic features.
## W
The following list describes common words used with the Azure Maps services.
<a name="waypoint-optimization"></a> **Waypoint optimization**: The process of reordering a set of waypoints to minimize the travel time or distance required to pass through all provided waypoints. Depending on the complexity of the optimization, this optimization is often referred to as the [Traveling Salesmen Problem](#traveling-salesmen-problem-tsp) or [Vehicle Routing Problem](#vehicle-routing-problem-vrp).
-<a name="web-map-service-wms"></a> **Web Map Service (WMS)**: WMS is an Open Geographic Consortium (OGC) standard that defines image-based map services. WMS services provide map images for specific areas within a map on demand. Images include pre-rendered symbology and may be rendered in one of several named styles if defined by the service.
+<a name="web-map-service-wms"></a> **Web Map Service (WMS)**: WMS is an Open Geographic Consortium (OGC) standard that defines image-based map services. WMS services provide map images for specific areas within a map on demand. Images include prerendered symbology and may be rendered in one of several named styles if defined by the service.
<a name="web-mercator"></a> **Web Mercator**: Also known as Spherical Mercator projection. It's a slight variant of the Mercator projection, one used primarily in Web-based mapping programs. It uses the same formulas as the standard Mercator projection as used for small-scale maps. However, the Web Mercator uses the spherical formulas at all scales, but large-scale Mercator maps normally use the ellipsoidal form of the projection. The discrepancy is imperceptible at the global scale, but it causes maps of local areas to deviate slightly from true ellipsoidal Mercator maps, at the same scale.
The following list describes common words used with the Azure Maps services.
## Z
-<a name="z-coordinate"></a> **Z-coordinate**: See [Altitude](#altitude).
+<a name="z-coordinate"></a> **Z-coordinate**: See [Altitude](#altitude).
<a name="zip-code"></a> **Zip code**: See [Postal code](#postal-code).
-<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map will often be visible. But, the map will show limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map will display an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see the [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) documentation.
+<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map is often visible. But, the map shows limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map displays an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see the [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) documentation.
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
Once you've created an Azure storage account with files uploaded into one or mor
:::image type="content" source="./media/data-registry/add-datastore.png" lightbox="./media/data-registry/add-datastore.png" alt-text="A screenshot showing the add datastore screen.":::
-The new datastore will now appear in the list of datastores.
+The new datastore now appears in the list of datastores.
### Assign roles to managed identities and add them to the datastore
-Once your managed identities and datastore are created, you can add the managed identities to the datastore and simultaneously assign them the **Contributor** and **Storage Blob Data Reader** roles. While it's possible to add roles to your managed identities directly in your managed identities or storage account, you can easily do this while simultaneously associating them with your Azure Maps datastore directly in the datastore pane.
+Once your managed identities and datastore are created, you can add the managed identities to the datastore and simultaneously assign them the **Contributor** and **Storage Blob Data Reader** roles. While it's possible to add roles to your managed identities directly in your managed identities or storage account, you can easily do both at once in the datastore pane by associating the identities with your Azure Maps datastore.
> [!NOTE]
> Each managed identity associated with the datastore will need the **Contributor** and **Storage Blob Data Reader** roles granted to them. If you do not have the required permissions to grant roles to managed identities, consult your Azure administrator.
To assign roles to your managed identities and associate them with a datastore:
With a datastore created in your Azure Maps account, you're ready to gather the properties required to create the data registry.
-There are the AzureBlob properties that you'll pass in the body of the HTTP request, and [The user data ID](#the-user-data-id) passed in the URL.
+The AzureBlob properties are passed in the body of the HTTP request, and [the user data ID](#the-user-data-id) is passed in the URL.
### The AzureBlob
The `AzureBlob` is a JSON object that defines properties required to create the
|`linkedResource`| The ID of the datastore registered in the Azure Maps account.<BR>The datastore contains a link to the file being registered. |
| `blobUrl` | A URL pointing to the location of the AzureBlob, the file imported into your container. |
-The following two sections will provide you with details how to get the values to use for the [msiClientId](#the-msiclientid-property), [blobUrl](#the-bloburl-property) properties.
+The following two sections provide details on how to get the values to use for the [msiClientId](#the-msiclientid-property) and [blobUrl](#the-bloburl-property) properties.
#### The msiClientId property
The `msiClientId` property is the ID of the managed identity used to create the
# [system-assigned](#tab/System-assigned)
-When using System-assigned managed identities, you don't need to provide a value for the `msiClientId` property. The data registry service will automatically use the system assigned identity of the Azure Maps account when `msiClientId` is null.
+When using System-assigned managed identities, you don't need to provide a value for the `msiClientId` property. The data registry service automatically uses the system assigned identity of the Azure Maps account when `msiClientId` is null.
# [user-assigned](#tab/User-assigned)
The `blobUrl` property is the path to the file being registered. You can get thi
[data registry]

1. Select your **storage account** in the **Azure portal**.
1. Select **Containers** from the left menu.
-1. A list of containers will appear. Select the container that contains the file you wish to register.
+1. A list of containers appears. Select the container that contains the file you wish to register.
1. The container opens, showing a list of the files previously uploaded.
1. Select the desired file, then copy the URL.
The user data ID (`udid`) of the data registry is a user-defined GUID that must
## Create a data registry
-Now that you have your storage account with the desired files linked to your Azure Maps account through the datastore and have gathered all required properties, you're ready to use the [data registry] API to register those files. If you have multiple files in your Azure storage account that you want to register, you'll need to run the register request for each file (`udid`).
+Now that you have your storage account with the desired files linked to your Azure Maps account through the datastore and have gathered all required properties, you're ready to use the [data registry] API to register those files. If you have multiple files in your Azure storage account that you want to register, you need to run the register request for each file (`udid`).
> [!NOTE]
> The maximum size of a file that can be registered with an Azure Maps datastore is one gigabyte.
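A register call, as a sketch, is a PUT against the `udid` with the `AzureBlob` object in the body. Placeholder values are shown in braces; the field names follow the `AzureBlob` properties described earlier, with `dataFormat` included here as an assumption, and `msiClientId` omitted, which the service treats as using the system-assigned identity:

```http
PUT https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}

{
  "kind": "AzureBlob",
  "azureBlob": {
    "dataFormat": "geojson",
    "linkedResource": "{datastore-ID}",
    "blobUrl": "https://{storage-account}.blob.core.windows.net/{container}/{file}.geojson"
  }
}
```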
https://us.atlas.microsoft.com/dataRegistries/operations/{udid}?api-version=2022
## Get a list of all files in the data registry
-To get a list of all files registered in an Azure Maps account using the [List][list] request:
+Use the [List][list] request to get a list of all files registered in an Azure Maps account:
```http
https://us.atlas.microsoft.com/dataRegistries?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key}
```
-The following is a sample response showing three possible statuses, completed, running and failed:
+The following sample response demonstrates three possible statuses: completed, running, and failed:
```json
{
Use the `udid` to get the content of a file registered in an Azure Maps account:
https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
```
-The contents of the file will appear in the body of the response. For example, a text based GeoJSON file will appear similar to the following example:
+The contents of the file appear in the body of the response. For example, a text based GeoJSON file appears similar to the following example:
```json
{
If you need to replace a previously registered file with another file, rerun the
## Data validation
-When you register a file in Azure Maps using the data registry API, an MD5 hash is created from the contents of the file, encoding it into a 128-bit fingerprint and saving it in the `AzureBlob` as the `contentMD5` property. The MD5 hash stored in the `contentMD5` property is used to ensure the data integrity of the file. Since the MD5 hash algorithm always produces the same output given the same input, the data validation process can compare the `contentMD5` property of the file when it was registered against a hash of the file in the Azure storage account to check that it's intact and unmodified. If the hash isn't the same, the validation fails. If the file in the underlying storage account changes, the validation will fail. If you need to modify the contents of a file that has been registered in Azure Maps, you'll need to register it again.
-
-[data registry]: /rest/api/maps/data-registry
-[list]: /rest/api/maps/data-registry/list
-[Register]: /rest/api/maps/data-registry/register-or-replace
-[Get operation]: /rest/api/maps/data-registry/get-operation
+When you register a file in Azure Maps using the data registry API, an MD5 hash is created from the contents of the file, encoding it into a 128-bit fingerprint and saving it in the `AzureBlob` as the `contentMD5` property. The MD5 hash stored in the `contentMD5` property is used to ensure the data integrity of the file. Since the MD5 hash algorithm always produces the same output given the same input, the data validation process can compare the `contentMD5` property of the file when it was registered against a hash of the file in the Azure storage account to check that it's intact and unmodified. If the hash isn't the same, the validation fails. If the file in the underlying storage account changes, the validation fails. If you need to modify the contents of a file that has been registered in Azure Maps, you need to register it again.
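To compare a local copy against the stored hash yourself, a minimal sketch on Linux (assuming the registered file is `mydata.geojson`; the service may store the hash Base64-encoded rather than as hex):

```bash
# Hex MD5 digest of the local file
md5sum mydata.geojson
# Base64-encoded form of the same digest
openssl md5 -binary mydata.geojson | base64
```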
+<!-- end-style links -->
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[storage account overview]: /azure/storage/common/storage-account-overview
+[Azure portal]: https://portal.azure.com/
[create storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
+[geographic scope]: geographic-scope.md
[managed identity]: /azure/active-directory/managed-identities-azure-resources/overview
+[storage account overview]: /azure/storage/common/storage-account-overview
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Azure portal]: https://portal.azure.com/
[Visual Studio]: https://visualstudio.microsoft.com/downloads/
-[geographic scope]: geographic-scope.md
+<!-- REST API Links -->
+[data registry]: /rest/api/maps/data-registry
+[Get operation]: /rest/api/maps/data-registry/get-operation
+[list]: /rest/api/maps/data-registry/list
+[Register]: /rest/api/maps/data-registry/register-or-replace
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
# Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)
-The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses the Azure portal to walk through configuration of a new table and a sample application to send log data to Azure Monitor. The sample application collects entries from a text file and
+The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses the Azure portal to walk through configuration of a new table and a sample application to send log data to Azure Monitor. The sample application collects entries from a text file and either converts the plain log to JSON format, generating a resulting .json file, or sends the content to the data collection endpoint.
> [!NOTE]
> This tutorial uses the Azure portal to configure the components to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components and that has sample code for client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).
Before you can send data to the workspace, you need to create the custom table w
:::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot that shows the custom log table name.":::

## Parse and filter sample data
-Instead of directly configuring the schema of the table, you can upload a file with a sample JSON array of data through the portal, and Azure Monitor will set the schema automatically. The sample JSON file must contain one or more log records structured as an array, in the same way they data is sent in the body of an HTTP request of the logs ingestion API call.
+Instead of directly configuring the schema of the table, you can upload a file with a sample JSON array of data through the portal, and Azure Monitor will set the schema automatically. The sample JSON file must contain one or more log records structured as an array, in the same way the data is sent in the body of an HTTP request of the logs ingestion API call.
+
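As a sketch of the expected shape (the field names match the tutorial's generated sample, with one record per array element; the values shown here are illustrative):

```json
[
  {
    "Time": "2023-04-13T01:05:25Z",
    "RawData": "127.0.0.1 - - [13/Apr/2023:01:05:25 +0000] \"GET /index.html HTTP/1.1\" 200 339"
  }
]
```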
+1. Follow the instructions in [generate sample data](#generate-sample-data) to create the *data_sample.json* file.
1. Select **Browse for files** and locate the *data_sample.json* file that you previously created.
Instead of directly configuring the schema of the table, you can upload a file w
' ' * ' ' * ' [' * '] "' RequestType:string
- " " Resource:string
- " " *
+ ' ' Resource:string
+ ' ' *
'" ' ResponseCode:int
- " " *
+ ' ' *
```

1. Select **Run** to view the results. This action extracts the contents of `RawData` into the separate columns `ClientIP`, `RequestType`, `Resource`, and `ResponseCode`.
Instead of directly configuring the schema of the table, you can upload a file w
```kusto
source
| extend TimeGenerated = todatetime(Time)
- | parse kind = regex RawData with *
- ':"'
+ | parse RawData with
ClientIP:string
- " - -" * '"'
- RequestType:string
- ' '
- Resource:string
- " " *
+ ' ' *
+ ' ' *
+ ' [' * '] "' RequestType:string
+ ' ' Resource:string
+ ' ' *
'" ' ResponseCode:int
- " " *
+ ' ' *
+ | project-away Time, RawData
+ | where ResponseCode != 200
```

1. Select **Run** to view the results.
The following PowerShell script generates sample data to configure the custom ta
        $payload += $log_entry
    }
    # Write resulting payload to file
- New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json) -Force
+ New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json -AsArray) -Force
}
else {
    ############
You can use the following sample data for the tutorial. Alternatively, you can u
## Next steps
-- [Complete a similar tutorial by using the Azure portal](tutorial-logs-ingestion-api.md)
+- [Complete a similar tutorial by using ARM templates](tutorial-logs-ingestion-api.md)
- [Read more about custom logs](logs-ingestion-api-overview.md)
- [Learn more about writing transformation queries](../essentials/data-collection-transformations.md)
backup Backup Sql Server Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md
This file should be placed before you trigger the restore operation.
## Next steps
-For more information about Azure Backup for SQL Server VMs (public preview), see [Azure Backup for SQL VMs](/azure/azure-sql/virtual-machines/windows/backup-restore#azbackup).
+For more information, see [Azure Backup for SQL VMs](/azure/azure-sql/virtual-machines/windows/backup-restore#azbackup).
cloud-shell Msi Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/msi-authorization.md
If you want to authenticate with different credentials, you can do so using `az
### Acquire token

Execute the following commands to set your user access token as an environment variable,
-`access_token`.
+`ACCESS_TOKEN`.
```bash
-response=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
-access_token=$(echo $response | python -c 'import sys, json; print (json.load(sys.stdin)["access_token"])')
-echo The access token is $access_token
+RESPONSE=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
+ACCESS_TOKEN=$(echo $RESPONSE | python -c 'import sys, json; print (json.load(sys.stdin)["access_token"])')
+echo The access token is $ACCESS_TOKEN
``` ### Use token
Execute the following command to get a list of all Virtual Machines in your acco
you acquired in the previous step. ```bash
-curl https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines?api-version=2021-07-01 -H "Authorization: Bearer $access_token" -H "x-ms-version: 2019-02-02"
+curl https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines?api-version=2021-07-01 -H "Authorization: Bearer $ACCESS_TOKEN" -H "x-ms-version: 2019-02-02"
``` ## Handling token expiration
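As a minimal sketch of one way to handle expiration, assuming the endpoint's JSON response carries the standard `expires_on` epoch field, re-acquire the token once it has lapsed:
```bash
# Sketch: compare the token's expires_on timestamp (assumed to be seconds
# since the epoch) with the current time, and refresh when it has passed.
EXPIRES_ON=$(echo $RESPONSE | python -c 'import sys, json; print (json.load(sys.stdin)["expires_on"])')
if [ "$(date +%s)" -ge "$EXPIRES_ON" ]; then
    RESPONSE=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s)
    ACCESS_TOKEN=$(echo $RESPONSE | python -c 'import sys, json; print (json.load(sys.stdin)["access_token"])')
fi
```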
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
again.
Bash: ```bash
- token=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
- curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token"
+ TOKEN=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
+ curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $TOKEN"
``` PowerShell:
cognitive-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md
Here's a subset of the basic structure and syntax of an SSML document:
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="string"> <mstts:backgroundaudio src="string" volume="string" fadein="string" fadeout="string"/> <voice name="string" effect="string">
- <audio src="string"/></audio>
+ <audio src="string"></audio>
<bookmark mark="string"/> <break strength="string" time="string" /> <emphasis level="value"></emphasis>
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
To use the identity in the following steps, use the [az identity show](/cli/azur
```azurecli-interactive # Get service principal ID of the user-assigned identity
-spID=$(az identity show \
+SP_ID=$(az identity show \
--resource-group myResourceGroup \ --name myACIId \ --query principalId --output tsv) # Get resource ID of the user-assigned identity
-resourceID=$(az identity show \
+RESOURCE_ID=$(az identity show \
--resource-group myResourceGroup \ --name myACIId \ --query id --output tsv)
Run the following [az keyvault set-policy](/cli/azure/keyvault) command to set a
az keyvault set-policy \ --name mykeyvault \ --resource-group myResourceGroup \
- --object-id $spID \
+ --object-id $SP_ID \
--secret-permissions get ```
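To confirm the policy landed, a quick sketch reusing the same names:
```bash
# Sketch: list the vault's access policies and check that the identity's
# object ID appears with "get" permission on secrets.
az keyvault show \
    --name mykeyvault \
    --resource-group myResourceGroup \
    --query "properties.accessPolicies"
```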
az container create \
--resource-group myResourceGroup \ --name mycontainer \ --image mcr.microsoft.com/azure-cli \
- --assign-identity $resourceID \
+ --assign-identity $RESOURCE_ID \
--command-line "tail -f " ```
Output:
To store the access token in a variable to use in subsequent commands to authenticate, run the following command: ```bash
-token=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true | jq -r '.access_token')
+TOKEN=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true | jq -r '.access_token')
``` Now use the access token to authenticate to key vault and read a secret. Be sure to substitute the name of your key vault in the URL (*https:\//mykeyvault.vault.azure.net/...*): ```bash
-curl https://mykeyvault.vault.azure.net/secrets/SampleSecret/?api-version=2016-10-01 -H "Authorization: Bearer $token"
+curl https://mykeyvault.vault.azure.net/secrets/SampleSecret/?api-version=2016-10-01 -H "Authorization: Bearer $TOKEN"
``` The response looks similar to the following, showing the secret. In your code, you would parse this output to obtain the secret. Then, use the secret in a subsequent operation to access another Azure resource.
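For instance, a minimal sketch that pulls just the secret's value out of that response with `jq` (already used above to parse the token):
```bash
# Sketch: extract the "value" field of the Key Vault secret response.
SECRET=$(curl -s "https://mykeyvault.vault.azure.net/secrets/SampleSecret/?api-version=2016-10-01" \
    -H "Authorization: Bearer $TOKEN" | jq -r '.value')
echo $SECRET
```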
The `--assign-identity` parameter with no additional value enables a system-assi
```azurecli-interactive # Get the resource ID of the resource group
-rgID=$(az group show --name myResourceGroup --query id --output tsv)
+RG_ID=$(az group show --name myResourceGroup --query id --output tsv)
# Create container group with system-managed identity az container create \ --resource-group myResourceGroup \ --name mycontainer \ --image mcr.microsoft.com/azure-cli \
- --assign-identity --scope $rgID \
+ --assign-identity --scope $RG_ID \
--command-line "tail -f " ```
The `identity` section in the output looks similar to the following, showing tha
Set a variable to the value of `principalId` (the service principal ID) of the identity, to use in later steps. ```azurecli-interactive
-spID=$(az container show \
+SP_ID=$(az container show \
--resource-group myResourceGroup \ --name mycontainer \ --query identity.principalId --out tsv)
Run the following [az keyvault set-policy](/cli/azure/keyvault) command to set a
az keyvault set-policy \ --name mykeyvault \ --resource-group myResourceGroup \
- --object-id $spID \
+ --object-id $SP_ID \
--secret-permissions get ```
container-instances Using Azure Container Registry Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/using-azure-container-registry-mi.md
In order to properly configure the identity in future steps, use [az identity sh
```azurecli-interactive # Get resource ID of the user-assigned identity
-userID=$(az identity show --resource-group myResourceGroup --name myACRId --query id --output tsv)
+USERID=$(az identity show --resource-group myResourceGroup --name myACRId --query id --output tsv)
# Get service principal ID of the user-assigned identity
-spID=$(az identity show --resource-group myResourceGroup --name myACRId --query principalId --output tsv)
+SPID=$(az identity show --resource-group myResourceGroup --name myACRId --query principalId --output tsv)
``` You'll need the identity's resource ID to sign in to the CLI from your virtual machine. To show the value: ```bash
-echo $userID
+echo $USERID
``` The resource ID is of the form:
The resource ID is of the form:
You'll also need the service principal ID to grant the managed identity access to your container registry. To show the value: ```bash
-echo $spID
+echo $SPID
``` The service principal ID is of the form:
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
In order for your identity to access your container registry, you must grant it a role assignment. Use the following command to grant the `acrpull` role to the identity you've just created, making sure to provide your registry's ID and the service principal we obtained earlier: ```azurecli-interactive
-az role assignment create --assignee $spID --scope <registry-id> --role acrpull
+az role assignment create --assignee $SPID --scope <registry-id> --role acrpull
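# Sketch (not part of the original article): one way to obtain the <registry-id>
# value above, with the registry name as a placeholder:
#   REGISTRY_ID=$(az acr show --name <registry-name> --query id --output tsv)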
``` ## Deploy using an Azure Resource Manager (ARM) template
az deployment group create --resource-group myResourceGroup --template-file azur
To deploy a container group using managed identity to authenticate image pulls via the Azure CLI, use the following command, making sure that your `<dns-label>` is globally unique: ```azurecli-interactive
-az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $userID --assign-identity $userID --ports 80 --dns-name-label <dns-label>
+az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $USERID --assign-identity $USERID --ports 80 --dns-name-label <dns-label>
``` ## Deploy in a virtual network using the Azure CLI
az container create --name my-containergroup --resource-group myResourceGroup --
To deploy a container group to a virtual network using managed identity to authenticate image pulls from an ACR that runs behind a private endpoint via the Azure CLI, use the following command: ```azurecli-interactive
-az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $userID --assign-identity $userID --vnet "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/myVNetResourceGroup/providers/ --subnet mySubnetName
+az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $USERID --assign-identity $USERID --vnet "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/myVNetResourceGroup/providers/Microsoft.Network/virtualNetworks/<vnet-name>" --subnet mySubnetName
``` For more info on how to deploy to a virtual network see [Deploy container instances into an Azure virtual network](./container-instances-vnet.md).
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
Azure Machine Learning supports a Table type (`mltable`). This allows for the creation of a *blueprint* that defines how to load data files into memory as a Pandas or Spark data frame. In this article you learn: > [!div class="checklist"]
-> - When to use Tables instead of Files or Folders.
-> - How to install the MLTable SDK.
-> - How to define a data loading blueprint using an `MLTable` file.
-> - Examples that show use of Tables in Azure ML.
-> - How to use Tables during interactive development (for example, in a notebook).
+> - When to use Azure ML Tables instead of Files or Folders.
+> - How to install the `mltable` SDK.
+> - How to define a data loading blueprint using an `mltable` file.
+> - Examples that show how `mltable` is used in Azure ML.
+> - How to use the `mltable` during interactive development (for example, in a notebook).
## Prerequisites
git clone --depth 1 https://github.com/Azure/azureml-examples
> [!TIP] > Use `--depth 1` to clone only the latest commit to the repository. This reduces the time needed to complete the operation.
-The examples relevant to Azure Machine Learning Tables can be found in the following folder of the clone repo:
+The examples relevant to Azure Machine Learning Tables can be found in the following folder of the cloned repo:
```bash cd azureml-examples/sdk/python/using-mltable
cd azureml-examples/sdk/python/using-mltable
Azure Machine Learning Tables (`mltable`) allow you to define how you want to *load* your data files into memory, as a Pandas and/or Spark data frame. Tables have two key features:
-1. **An `MLTable` file.** A YAML-based file that defines the data loading *blueprint*. In the MLTable file, you can specify:
+1. **An MLTable file.** A YAML-based file that defines the data loading *blueprint*. In the MLTable file, you can specify:
- The storage location(s) of the data - local, in the cloud, or on a public http(s) server. - *Globbing* patterns over cloud storage. These locations can specify sets of filenames, with wildcard characters (`*`). - *read transformation* - for example, the file format type (delimited text, Parquet, Delta, json), delimiters, headers, etc.
Azure Machine Learning Tables are useful in the following scenarios:
- You want to train ML models using Azure Machine Learning AutoML. > [!TIP]
-> Azure ML *doesn't* require use of Azure ML Tables (`mltable`) for your tabular data. You can use Azure ML File (`uri_file`) and Folder (`uri_folder`) types, and your own parsing logic loads the data into a Pandas or Spark data frame.
+> Azure ML *doesn't require* use of Azure ML Tables (`mltable`) for your tabular data. You can use Azure ML File (`uri_file`) and Folder (`uri_folder`) types, and your own parsing logic loads the data into a Pandas or Spark data frame.
>
-> If you have a simple CSV file or Parquet folder, it's **easier** to use Azure ML Files/Folders instead of than Tables.
+> If you have a simple CSV file or Parquet folder, it's **easier** to use Azure ML Files/Folders instead of Tables.
## Azure Machine Learning Tables Quickstart
With this data, you want to load into a Pandas data frame:
Pandas code handles this. However, achieving *reproducibility* would become difficult because you must either: -- share code, which means that if the schema changes (for example, a column name change) then all users must update their code, or-- write an ETL pipeline, which has heavy overhead.
+- Share code, which means that if the schema changes (for example, a column name change) then all users must update their code, or
+- Write an ETL pipeline, which has heavy overhead.
Azure Machine Learning Tables provide a light-weight mechanism to serialize (save) the data loading steps in an `MLTable` file, so that you and members of your team can *reproduce* the Pandas data frame. If the schema changes, you only update the `MLTable` file, instead of updates in many places that involve Python data loading code.
Azure Machine Learning Tables support reading from:
The `mltable` flexibility allows for data loading into a single dataframe from a combination of: -- local and cloud storage-- different cloud storage locations (for example: different blob containers)-- files, folder and glob patterns.
+- Local and cloud storage
+- Different cloud storage locations (for example: different blob containers)
+- Files, folder and glob patterns.
For example:
You can create a table containing the paths on cloud storage. This example has s
1.jpeg ```
-MLTable can construct a table that contains the storage paths of these images and their folder names (labels), which can be used to stream the images. The following code shows how to create the MLTable:
+The `mltable` can construct a table that contains the storage paths of these images and their folder names (labels), which can be used to stream the images. The following code shows how to create the MLTable:
```python import mltable
print(df.head())
tbl.save("./pets") ```
-The following code shows how to open the storage location in the Pandas data frame, and plot the images in a grid:
+The following code shows how to open the storage location in the Pandas data frame, and plot the images:
```python # plot images on a grid. Note this takes ~1min to execute.
for i in range(1, columns*rows +1):
#### Create a data asset to aid sharing and reproducibility
-You have your MLTable file currently saved on disk, which makes it hard to share with Team members. When you create a data asset in Azure Machine Learning, your MLTable is uploaded to cloud storage and "bookmarked", which allows your Team members to access the MLTable using a friendly name. Also, the data asset is versioned.
+You have your `mltable` file currently saved on disk, which makes it hard to share with Team members. When you create a data asset in Azure Machine Learning, the `mltable` is uploaded to cloud storage and "bookmarked", which allows your Team members to access the `mltable` using a friendly name. Also, the data asset is versioned.
```python import time
my_data = Data(
ml_client.data.create_or_update(my_data) ```
-Now that you have your MLTable stored in the cloud, you and Team members can access it with a friendly name in an interactive session (for example, a notebook):
+Now that the `mltable` is stored in the cloud, you and your Team members can access it with a friendly name in an interactive session (for example, a notebook):
```python import mltable
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
For more information about pricing for Azure Storage blob inventory, see [Azure
[!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
-## Known issues
+## Known issues and limitations
This section describes limitations and known issues of the Azure Storage blob inventory feature.
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Lifecycle management supports tiering and deletion of current versions, previous
|--|--|--|--| | tierToCool | Supported for `blockBlob` | Supported | Supported | | enableAutoTierToHotFromCool<sup>1</sup> | Supported for `blockBlob` | Not supported | Not supported |
-| tierToArchive | Supported for `blockBlob` | Supported | Supported |
+| tierToArchive<sup>4</sup> | Supported for `blockBlob` | Supported | Supported |
| delete<sup>2,3</sup> | Supported for `blockBlob` and `appendBlob` | Supported | Supported | <sup>1</sup> The `enableAutoTierToHotFromCool` action is available only when used with the `daysAfterLastAccessTimeGreaterThan` run condition. That condition is described in the next table.
Lifecycle management supports tiering and deletion of current versions, previous
<sup>3</sup> A lifecycle management policy will not delete the current version of a blob until any previous versions or snapshots associated with that blob have been deleted. If blobs in your storage account have previous versions or snapshots, then you must include previous versions and snapshots when you specify a delete action as part of the policy.
+<sup>4</sup> Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the archive tier. The archive tier isn't supported for ZRS, GZRS, or RA-GZRS accounts. This action gets listed based on the redundancy configured for the account.
+ > [!NOTE] > If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, action `delete` is cheaper than action `tierToArchive`. Action `tierToArchive` is cheaper than action `tierToCool`.
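As an illustrative sketch (the rule name and 90-day threshold are hypothetical), a minimal policy that applies `tierToArchive` to block blobs could be created like this:
```bash
# Sketch: define a single lifecycle rule and apply it with the Azure CLI.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-after-90d",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "tierToArchive": { "daysAfterModificationGreaterThan": 90 } }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF
az storage account management-policy create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --policy @policy.json
```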
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
Update-AzStorageFileServiceProperty `
To get the status of SMB Multichannel, use the `az storage account file-service-properties show` command. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these Bash commands. ```bash
-resourceGroupName="<resource-group>"
-storageAccountName="<storage-account>"
+RESOURCE_GROUP_NAME="<resource-group>"
+STORAGE_ACCOUNT_NAME="<storage-account>"
# If you've never enabled or disabled SMB Multichannel, the value for the SMB Multichannel # property returned by Azure Files will be null. Null returned values should be interpreted
storageAccountName="<storage-account>"
# These commands replace null values with the human-readable default values. ## Search strings
-replaceSmbMultichannel="\"smbMultichannelEnabled\": null"
+REPLACESMBMULTICHANNEL="\"smbMultichannelEnabled\": null"
# Replacement values for null parameters.
-defaultSmbMultichannelEnabled="\"smbMultichannelEnabled\": false"
+DEFAULTSMBMULTICHANNELENABLED="\"smbMultichannelEnabled\": false"
# Build JMESPath query string
-query="{"
-query="${query}smbMultichannelEnabled: protocolSettings.smb.multichannel.enabled"
-query="${query}}"
+QUERY="{"
+QUERY="${QUERY}smbMultichannelEnabled: protocolSettings.smb.multichannel.enabled"
+QUERY="${QUERY}}"
# Get protocol settings from the Azure Files FileService object protocolSettings=$(az storage account file-service-properties show \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
- --query "${query}")
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
+ --query "${QUERY}")
# Replace returned values if null with default values
-protocolSettings="${protocolSettings/$replaceSmbMultichannel/$defaultSmbMultichannelEnabled}"
+PROTOCOL_SETTINGS="${protocolSettings/$REPLACESMBMULTICHANNEL/$DEFAULTSMBMULTICHANNELENABLED}"
# Print returned settings
-echo $protocolSettings
+echo $PROTOCOL_SETTINGS
``` To enable/disable SMB Multichannel, use the `az storage account file-service-properties update` command. ```azurecli az storage account file-service-properties update \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--enable-smb-multichannel "true" ```
Update-AzStorageFileServiceProperty `
To get the status of the SMB security settings, use the `az storage account file-service-properties show` command. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these Bash commands. ```bash
-resourceGroupName="<resource-group>"
-storageAccountName="<storage-account>"
+RESOURCE_GROUP_NAME="<resource-group>"
+STORAGE_ACCOUNT_NAME="<storage-account>"
# If you've never changed any SMB security settings, the values for the SMB security # settings returned by Azure Files will be null. Null returned values should be interpreted
storageAccountName="<storage-account>"
# These commands replace null values with the human-readable default values. # Values to be replaced
-replaceSmbProtocolVersion="\"smbProtocolVersions\": null"
-replaceSmbChannelEncryption="\"smbChannelEncryption\": null"
-replaceSmbAuthenticationMethods="\"smbAuthenticationMethods\": null"
-replaceSmbKerberosTicketEncryption="\"smbKerberosTicketEncryption\": null"
+REPLACESMBPROTOCOLVERSION="\"smbProtocolVersions\": null"
+REPLACESMBCHANNELENCRYPTION="\"smbChannelEncryption\": null"
+REPLACESMBAUTHENTICATIONMETHODS="\"smbAuthenticationMethods\": null"
+REPLACESMBKERBEROSTICKETENCRYPTION="\"smbKerberosTicketEncryption\": null"
# Replacement values for null parameters. If you copy this into your own # scripts, you will need to ensure that you keep these variables up-to-date with any new # options we may add to these parameters in the future.
-defaultSmbProtocolVersions="\"smbProtocolVersions\": \"SMB2.1;SMB3.0;SMB3.1.1\""
-defaultSmbChannelEncryption="\"smbChannelEncryption\": \"AES-128-CCM;AES-128-GCM;AES-256-GCM\""
-defaultSmbAuthenticationMethods="\"smbAuthenticationMethods\": \"NTLMv2;Kerberos\""
-defaultSmbKerberosTicketEncryption="\"smbKerberosTicketEncryption\": \"RC4-HMAC;AES-256\""
+DEFAULTSMBPROTOCOLVERSIONS="\"smbProtocolVersions\": \"SMB2.1;SMB3.0;SMB3.1.1\""
+DEFAULTSMBCHANNELENCRYPTION="\"smbChannelEncryption\": \"AES-128-CCM;AES-128-GCM;AES-256-GCM\""
+DEFAULTSMBAUTHENTICATIONMETHODS="\"smbAuthenticationMethods\": \"NTLMv2;Kerberos\""
+DEFAULTSMBKERBEROSTICKETENCRYPTION="\"smbKerberosTicketEncryption\": \"RC4-HMAC;AES-256\""
# Build JMESPath query string
-query="{"
-query="${query}smbProtocolVersions: protocolSettings.smb.versions,"
-query="${query}smbChannelEncryption: protocolSettings.smb.channelEncryption,"
-query="${query}smbAuthenticationMethods: protocolSettings.smb.authenticationMethods,"
-query="${query}smbKerberosTicketEncryption: protocolSettings.smb.kerberosTicketEncryption"
-query="${query}}"
+QUERY="{"
+QUERY="${QUERY}smbProtocolVersions: protocolSettings.smb.versions,"
+QUERY="${QUERY}smbChannelEncryption: protocolSettings.smb.channelEncryption,"
+QUERY="${QUERY}smbAuthenticationMethods: protocolSettings.smb.authenticationMethods,"
+QUERY="${QUERY}smbKerberosTicketEncryption: protocolSettings.smb.kerberosTicketEncryption"
+QUERY="${QUERY}}"
# Get protocol settings from the Azure Files FileService object
-protocolSettings=$(az storage account file-service-properties show \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
- --query "${query}")
+PROTOCOLSETTINGS=$(az storage account file-service-properties show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
+ --query "${QUERY}")
# Replace returned values if null with default values
-protocolSettings="${protocolSettings/$replaceSmbProtocolVersion/$defaultSmbProtocolVersions}"
-protocolSettings="${protocolSettings/$replaceSmbChannelEncryption/$defaultSmbChannelEncryption}"
-protocolSettings="${protocolSettings/$replaceSmbAuthenticationMethods/$defaultSmbAuthenticationMethods}"
-protocolSettings="${protocolSettings/$replaceSmbKerberosTicketEncryption/$defaultSmbKerberosTicketEncryption}"
+PROTOCOLSETTINGS="${PROTOCOLSETTINGS/$REPLACESMBPROTOCOLVERSION/$DEFAULTSMBPROTOCOLVERSIONS}"
+PROTOCOLSETTINGS="${PROTOCOLSETTINGS/$REPLACESMBCHANNELENCRYPTION/$DEFAULTSMBCHANNELENCRYPTION}"
+PROTOCOLSETTINGS="${PROTOCOLSETTINGS/$REPLACESMBAUTHENTICATIONMETHODS/$DEFAULTSMBAUTHENTICATIONMETHODS}"
+PROTOCOLSETTINGS="${PROTOCOLSETTINGS/$REPLACESMBKERBEROSTICKETENCRYPTION/$DEFAULTSMBKERBEROSTICKETENCRYPTION}"
# Print returned settings
-echo $protocolSettings
+echo $PROTOCOLSETTINGS
``` Depending on your organization's security, performance, and compatibility requirements, you may wish to modify the SMB protocol settings. The following Azure CLI command restricts your SMB file shares to only the most secure options.
Depending on your organizations security, performance, and compatibility require
```azurecli az storage account file-service-properties update \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--versions "SMB3.1.1" \ --channel-encryption "AES-256-GCM" \ --auth-methods "Kerberos" \
storage Files Troubleshoot Smb Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-authentication.md
New-AzStorageAccountKey `
The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you'll need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment. ```bash
-resourceGroupName="<resource-group>"
-storageAccountName="<storage-account>"
+RESOURCE_GROUP_NAME="<resource-group>"
+STORAGE_ACCOUNT_NAME="<storage-account>"
# Rotate primary key (key 1). You should switch to key 2 before rotating key 1. az storage account keys renew \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--key "primary" # Rotate secondary key (key 2). You should switch to the new key 1 before rotating key 2. az storage account keys renew \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--key "secondary" ```
If you still need help, [contact support](https://portal.azure.com/?#blade/Micro
- [Troubleshoot Azure Files connectivity (SMB)](files-troubleshoot-smb-connectivity.md) - [Troubleshoot Azure Files general SMB issues on Linux](files-troubleshoot-linux-smb.md) - [Troubleshoot Azure Files general NFS issues on Linux](files-troubleshoot-linux-nfs.md)-- [Troubleshoot Azure File Sync issues](../file-sync/file-sync-troubleshoot.md)
+- [Troubleshoot Azure File Sync issues](../file-sync/file-sync-troubleshoot.md)
storage Files Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot.md
IP4Address : 10.0.0.5
# Different storage accounts, especially in different Azure environments, # may have different suffixes than file.core.windows.net, so be sure to use the correct # suffix for your storage account.
-hostName="mystorageaccount.file.core.windows.net"
+HOSTNAME="mystorageaccount.file.core.windows.net"
# Do the name resolution.
-nslookup $hostName
+nslookup $HOSTNAME
``` The output returned by `nslookup` may be different depending on your environment and desired networking configuration. For example, if you are trying to access the public endpoint of the storage account that does not have a private endpoint configured, you would see a result that looks like the following, where `x.x.x.x` is the IP address of the cluster `file.phx10prdstf01a.store.core.windows.net` of the Azure storage platform that serves your storage account:
TcpTestSucceeded : True
# Different storage accounts, especially in different Azure environments, # may have different suffixes than file.core.windows.net, so be sure to use the correct # suffix for your storage account.
-hostName="mystorageaccount.file.core.windows.net"
+HOSTNAME="mystorageaccount.file.core.windows.net"
# Do the TCP connection test - see the above protocol/port table to figure out which # port to use for your test. This test uses port 445, the port used by SMB.
-nc -zvw3 $hostName 445
+nc -zvw3 $HOSTNAME 445
``` If the connection was successfully established, you should expect to see the following result:
storage Storage Files Configure P2s Vpn Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-linux.md
The Azure virtual network gateway can provide VPN connections using several VPN
sudo apt update sudo apt install strongswan strongswan-pki libstrongswan-extra-plugins curl libxml2-utils cifs-utils unzip
-installDir="/etc/"
+INSTALL_DIR="/etc/"
``` If the installation fails or you get an error such as **EAP_IDENTITY not supported, sending EAP_NAK**, you might need to install extra plugins:
The following script will create an Azure virtual network with three subnets: on
Remember to replace `<region>`, `<resource-group>`, and `<desired-vnet-name>` with the appropriate values for your environment. ```bash
-region="<region>"
-resourceGroupName="<resource-group>"
-virtualNetworkName="<desired-vnet-name>"
-
-virtualNetwork=$(az network vnet create \
- --resource-group $resourceGroupName \
- --name $virtualNetworkName \
- --location $region \
+REGION="<region>"
+RESOURCE_GROUP_NAME="<resource-group>"
+VIRTUAL_NETWORK_NAME="<desired-vnet-name>"
+
+VIRTUAL_NETWORK=$(az network vnet create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $VIRTUAL_NETWORK_NAME \
+ --location $REGION \
--address-prefixes "192.168.0.0/16" \ --query "newVNet.id" | tr -d '"')
-serviceEndpointSubnet=$(az network vnet subnet create \
- --resource-group $resourceGroupName \
- --vnet-name $virtualNetworkName \
+SERVICE_ENDPOINT_SUBNET=$(az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
--name "ServiceEndpointSubnet" \ --address-prefixes "192.168.0.0/24" \ --service-endpoints "Microsoft.Storage" \ --query "id" | tr -d '"')
-privateEndpointSubnet=$(az network vnet subnet create \
- --resource-group $resourceGroupName \
- --vnet-name $virtualNetworkName \
+PRIVATE_ENDPOINT_SUBNET=$(az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
--name "PrivateEndpointSubnet" \ --address-prefixes "192.168.1.0/24" \ --query "id" | tr -d '"')
-gatewaySubnet=$(az network vnet subnet create \
- --resource-group $resourceGroupName \
- --vnet-name $virtualNetworkName \
+GATEWAY_SUBNET=$(az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
--name "GatewaySubnet" \ --address-prefixes "192.168.2.0/24" \ --query "id" | tr -d '"')
gatewaySubnet=$(az network vnet subnet create \
In order for VPN connections from your on-premises Linux machines to be authenticated to access your virtual network, you must create two certificates: a root certificate, which will be provided to the virtual machine gateway, and a client certificate, which will be signed with the root certificate. The following script creates the required certificates. ```bash
-rootCertName="P2SRootCert"
-username="client"
-password="1234"
+ROOT_CERT_NAME="P2SRootCert"
+USERNAME="client"
+PASSWORD="1234"
mkdir temp cd temp sudo ipsec pki --gen --outform pem > rootKey.pem
-sudo ipsec pki --self --in rootKey.pem --dn "CN=$rootCertName" --ca --outform pem > rootCert.pem
+sudo ipsec pki --self --in rootKey.pem --dn "CN=$ROOT_CERT_NAME" --ca --outform pem > rootCert.pem
-rootCertificate=$(openssl x509 -in rootCert.pem -outform der | base64 -w0 ; echo)
+ROOT_CERTIFICATE=$(openssl x509 -in rootCert.pem -outform der | base64 -w0 ; echo)
sudo ipsec pki --gen --size 4096 --outform pem > "clientKey.pem" sudo ipsec pki --pub --in "clientKey.pem" | \
sudo ipsec pki --pub --in "clientKey.pem" | \
--issue \ --cacert rootCert.pem \ --cakey rootKey.pem \
- --dn "CN=$username" \
- --san $username \
+ --dn "CN=$USERNAME" \
+ --san $USERNAME \
--flag clientAuth \ --outform pem > "clientCert.pem"
-openssl pkcs12 -in "clientCert.pem" -inkey "clientKey.pem" -certfile rootCert.pem -export -out "client.p12" -password "pass:$password"
+openssl pkcs12 -in "clientCert.pem" -inkey "clientKey.pem" -certfile rootCert.pem -export -out "client.p12" -password "pass:$PASSWORD"
``` ## Deploy virtual network gateway
Remember to replace `<desired-vpn-name-here>` with the name you would like for t
> P2S IKEv2/OpenVPN connections are not supported with the **Basic** SKU. Accordingly, this script uses the **VpnGw1** SKU for the virtual network gateway. ```azurecli
-vpnName="<desired-vpn-name-here>"
-publicIpAddressName="$vpnName-PublicIP"
+VPN_NAME="<desired-vpn-name-here>"
+PUBLIC_IP_ADDR_NAME="$VPN_NAME-PublicIP"
-publicIpAddress=$(az network public-ip create \
- --resource-group $resourceGroupName \
- --name $publicIpAddressName \
- --location $region \
+PUBLIC_IP_ADDR=$(az network public-ip create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $PUBLIC_IP_ADDR_NAME \
+ --location $REGION \
--sku "Basic" \ --allocation-method "Dynamic" \ --query "publicIp.id" | tr -d '"') az network vnet-gateway create \
- --resource-group $resourceGroupName \
- --name $vpnName \
- --vnet $virtualNetworkName \
- --public-ip-addresses $publicIpAddress \
- --location $region \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $VPN_NAME \
+ --vnet $VIRTUAL_NETWORK_NAME \
+ --public-ip-addresses $PUBLIC_IP_ADDR \
+ --location $REGION \
--sku "VpnGw1" \ --gateway-typ "Vpn" \ --vpn-type "RouteBased" \
az network vnet-gateway create \
--client-protocol "IkeV2" > /dev/null az network vnet-gateway root-cert create \
- --resource-group $resourceGroupName \
- --gateway-name $vpnName \
- --name $rootCertName \
- --public-cert-data $rootCertificate \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --gateway-name $VPN_NAME \
+ --name $ROOT_CERT_NAME \
+ --public-cert-data $ROOT_CERTIFICATE \
--output none ```
az network vnet-gateway root-cert create \
The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Linux machine. The following script will place the certificates you created in the correct spot and configure the `ipsec.conf` file with the correct values from the configuration file in the downloadable package. ```azurecli
-vpnClient=$(az network vnet-gateway vpn-client generate \
- --resource-group $resourceGroupName \
- --name $vpnName \
+VPN_CLIENT=$(az network vnet-gateway vpn-client generate \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $VPN_NAME \
--authentication-method EAPTLS | tr -d '"')
-curl $vpnClient --output vpnClient.zip
+curl $VPN_CLIENT --output vpnClient.zip
unzip vpnClient.zip
-vpnServer=$(xmllint --xpath "string(/VpnProfile/VpnServer)" Generic/VpnSettings.xml)
-vpnType=$(xmllint --xpath "string(/VpnProfile/VpnType)" Generic/VpnSettings.xml | tr '[:upper:]' '[:lower:]')
-routes=$(xmllint --xpath "string(/VpnProfile/Routes)" Generic/VpnSettings.xml)
+VPN_SERVER=$(xmllint --xpath "string(/VpnProfile/VpnServer)" Generic/VpnSettings.xml)
+VPN_TYPE=$(xmllint --xpath "string(/VpnProfile/VpnType)" Generic/VpnSettings.xml | tr '[:upper:]' '[:lower:]')
+ROUTES=$(xmllint --xpath "string(/VpnProfile/Routes)" Generic/VpnSettings.xml)
-sudo cp "${installDir}ipsec.conf" "${installDir}ipsec.conf.backup"
-sudo cp "Generic/VpnServerRoot.cer_0" "${installDir}ipsec.d/cacerts"
-sudo cp "${username}.p12" "${installDir}ipsec.d/private"
+sudo cp "${INSTALL_DIR}ipsec.conf" "${INSTALL_DIR}ipsec.conf.backup"
+sudo cp "Generic/VpnServerRoot.cer_0" "${INSTALL_DIR}ipsec.d/cacerts"
+sudo cp "${USERNAME}.p12" "${INSTALL_DIR}ipsec.d/private"
sudo tee -a "${INSTALL_DIR}ipsec.conf" <<EOF
-conn $virtualNetworkName
- keyexchange=$vpnType
+conn $VIRTUAL_NETWORK_NAME
+ keyexchange=$VPN_TYPE
type=tunnel leftfirewall=yes left=%any
conn $virtualNetworkName
auto=add EOF
-echo ": P12 client.p12 '$password'" | sudo tee -a "${installDir}ipsec.secrets" >
+echo ": P12 client.p12 '$PASSWORD'" | sudo tee -a "${INSTALL_DIR}ipsec.secrets" >
sudo ipsec restart
-sudo ipsec up $virtualNetworkName
+sudo ipsec up $VIRTUAL_NETWORK_NAME
``` ## Mount Azure file share
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
On other distributions, use the appropriate package manager or [compile from sou
* **Ensure port 445 is open**: SMB communicates over TCP port 445 - make sure your firewall or ISP isn't blocking TCP port 445 from the client machine. Replace `<your-resource-group>` and `<your-storage-account>` and then run the following script: ```bash
- resourceGroupName="<your-resource-group>"
- storageAccountName="<your-storage-account>"
+ RESOURCE_GROUP_NAME="<your-resource-group>"
+ STORAGE_ACCOUNT_NAME="<your-storage-account>"
# This command assumes you have logged in with az login
- httpEndpoint=$(az storage account show \
- --resource-group $resourceGroupName \
- --name $storageAccountName \
+ HTTP_ENDPOINT=$(az storage account show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $STORAGE_ACCOUNT_NAME \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
- smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})
- fileHost=$(echo $smbPath | tr -d "/")
+ SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-${#HTTP_ENDPOINT})
+ FILE_HOST=$(echo $SMB_PATH | tr -d "/")
- nc -zvw3 $fileHost 445
+ nc -zvw3 $FILE_HOST 445
``` If the connection was successful, you should see something similar to the following output:
On other distributions, use the appropriate package manager or [compile from sou
If you're unable to open up port 445 on your corporate network or are blocked from doing so by an ISP, you may use a VPN connection or ExpressRoute to work around port 445. For more information, see [Networking considerations for direct Azure file share access](storage-files-networking-overview.md). ## Mount the Azure file share on-demand with mount
-When you mount a file share on a Linux OS, your remote file share is represented as a folder in your local file system. You can mount file shares to anywhere on your system. The following example mounts under the `/mount` path. You can change this to your preferred path you want by modifying the `$mntRoot` variable.
+When you mount a file share on a Linux OS, your remote file share is represented as a folder in your local file system. You can mount file shares to anywhere on your system. The following example mounts under the `/mount` path. You can change this to your preferred path by modifying the `$MNT_ROOT` variable.
Replace `<resource-group-name>`, `<storage-account-name>`, and `<file-share-name>` with the appropriate information for your environment: ```bash
-resourceGroupName="<resource-group-name>"
-storageAccountName="<storage-account-name>"
-fileShareName="<file-share-name>"
+RESOURCE_GROUP_NAME="<resource-group-name>"
+STORAGE_ACCOUNT_NAME="<storage-account-name>"
+FILE_SHARE_NAME="<file-share-name>"
-mntRoot="/mount"
-mntPath="$mntRoot/$storageAccountName/$fileShareName"
+MNT_ROOT="/mount"
+MNT_PATH="$MNT_ROOT/$STORAGE_ACCOUNT_NAME/$FILE_SHARE_NAME"
-sudo mkdir -p $mntPath
+sudo mkdir -p $MNT_PATH
```
-Next, mount the file share using the `mount` command. In the following example, the `$smbPath` command is populated using the fully qualified domain name for the storage account's file endpoint and `$storageAccountKey` is populated with the storage account key.
+Next, mount the file share using the `mount` command. In the following example, the `$SMB_PATH` variable is populated using the fully qualified domain name for the storage account's file endpoint and `$STORAGE_ACCOUNT_KEY` is populated with the storage account key.
# [SMB 3.1.1](#tab/smb311) > [!Note]
Next, mount the file share using the `mount` command. In the following example,
```azurecli # This command assumes you have logged in with az login
-httpEndpoint=$(az storage account show \
- --resource-group $resourceGroupName \
- --name $storageAccountName \
+HTTP_ENDPOINT=$(az storage account show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $STORAGE_ACCOUNT_NAME \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
+SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-${#HTTP_ENDPOINT})$FILE_SHARE_NAME
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+STORAGE_ACCOUNT_KEY=$(az storage account keys list \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--query "[0].value" --output tsv | tr -d '"')
-sudo mount -t cifs $smbPath $mntPath -o username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30,mfsymlinks
+sudo mount -t cifs $SMB_PATH $MNT_PATH -o username=$STORAGE_ACCOUNT_NAME,password=$STORAGE_ACCOUNT_KEY,serverino,nosharesock,actimeo=30,mfsymlinks
``` # [SMB 3.0](#tab/smb30) ```azurecli # This command assumes you have logged in with az login
-httpEndpoint=$(az storage account show \
- --resource-group $resourceGroupName \
- --name $storageAccountName \
+HTTP_ENDPOINT=$(az storage account show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $STORAGE_ACCOUNT_NAME \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
+SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-${#HTTP_ENDPOINT})$FILE_SHARE_NAME
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+STORAGE_ACCOUNT_KEY=$(az storage account keys list \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--query "[0].value" --output tsv | tr -d '"')
-sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30,mfsymlinks
+sudo mount -t cifs $SMB_PATH $MNT_PATH -o vers=3.0,username=$STORAGE_ACCOUNT_NAME,password=$STORAGE_ACCOUNT_KEY,serverino,nosharesock,actimeo=30,mfsymlinks
``` # [SMB 2.1](#tab/smb21) ```azurecli # This command assumes you have logged in with az login
-httpEndpoint=$(az storage account show \
- --resource-group $resourceGroupName \
- --name $storageAccountName \
+HTTP_ENDPOINT=$(az storage account show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $STORAGE_ACCOUNT_NAME \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
+SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-${#HTTP_ENDPOINT})$FILE_SHARE_NAME
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+STORAGE_ACCOUNT_KEY=$(az storage account keys list \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--query "[0].value" --output tsv | tr -d '"')
-sudo mount -t cifs $smbPath $mntPath -o vers=2.1,username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30,mfsymlinks
+sudo mount -t cifs $SMB_PATH $MNT_PATH -o vers=2.1,username=$STORAGE_ACCOUNT_NAME,password=$STORAGE_ACCOUNT_KEY,serverino,nosharesock,actimeo=30,mfsymlinks
```
You can use `uid`/`gid` or `dir_mode` and `file_mode` in the mount options for t
You can also mount the same Azure file share to multiple mount points if desired. When you're done using the Azure file share, use `sudo umount $MNT_PATH` to unmount the share. ## Automatically mount file shares
-When you mount a file share on a Linux OS, your remote file share is represented as a folder in your local file system. You can mount file shares to anywhere on your system. The following example mounts under the `/mount` path. You can change this to your preferred path you want by modifying the `$mntRoot` variable.
+When you mount a file share on a Linux OS, your remote file share is represented as a folder in your local file system. You can mount file shares to anywhere on your system. The following example mounts under the `/mount` path. You can change this to your preferred path by modifying the `$MNT_ROOT` variable.
```bash
-mntRoot="/mount"
-sudo mkdir -p $mntRoot
+MNT_ROOT="/mount"
+sudo mkdir -p $MNT_ROOT
``` To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the storage account key as the password. Because the storage account credentials may change over time, you should store the credentials for the storage account separately from the mount configuration.
To mount an Azure file share on Linux, use the storage account name as the usern
The following example shows how to create a file to store the credentials. Remember to replace `<resource-group-name>` and `<storage-account-name>` with the appropriate information for your environment. ```bash
-resourceGroupName="<resource-group-name>"
-storageAccountName="<storage-account-name>"
+RESOURCE_GROUP_NAME="<resource-group-name>"
+STORAGE_ACCOUNT_NAME="<storage-account-name>"
# Create a folder to store the credentials for this storage account and # any other that you might set up.
-credentialRoot="/etc/smbcredentials"
+CREDENTIAL_ROOT="/etc/smbcredentials"
sudo mkdir -p "/etc/smbcredentials" # Get the storage account key for the indicated storage account. # You must be logged in with az login and your user identity must have # permissions to list the storage account keys for this command to work.
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroupName \
- --account-name $storageAccountName \
+STORAGE_ACCOUNT_KEY=$(az storage account keys list \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --account-name $STORAGE_ACCOUNT_NAME \
--query "[0].value" --output tsv | tr -d '"') # Create the credential file for this individual storage account
-smbCredentialFile="$credentialRoot/$storageAccountName.cred"
-if [ ! -f $smbCredentialFile ]; then
- echo "username=$storageAccountName" | sudo tee $smbCredentialFile > /dev/null
- echo "password=$storageAccountKey" | sudo tee -a $smbCredentialFile > /dev/null
+SMB_CREDENTIAL_FILE="$CREDENTIAL_ROOT/$STORAGE_ACCOUNT_NAME.cred"
+if [ ! -f $SMB_CREDENTIAL_FILE ]; then
+ echo "username=$STORAGE_ACCOUNT_NAME" | sudo tee $SMB_CREDENTIAL_FILE > /dev/null
+ echo "password=$STORAGE_ACCOUNT_KEY" | sudo tee -a $SMB_CREDENTIAL_FILE > /dev/null
else
- echo "The credential file $smbCredentialFile already exists, and was not modified."
+ echo "The credential file $SMB_CREDENTIAL_FILE already exists, and was not modified."
fi # Change permissions on the credential file so only root can read or modify the password file.
-sudo chmod 600 $smbCredentialFile
+sudo chmod 600 $SMB_CREDENTIAL_FILE
``` To automatically mount a file share, you have a choice between using a static mount via the `/etc/fstab` utility or using a dynamic mount via the `autofs` utility.
To automatically mount a file share, you have a choice between using a static mo
Using the earlier environment, create a folder for your storage account/file share under your mount folder. Replace `<file-share-name>` with the appropriate name of your Azure file share. ```bash
-fileShareName="<file-share-name>"
+FILE_SHARE_NAME="<file-share-name>"
-mntPath="$mntRoot/$storageAccountName/$fileShareName"
-sudo mkdir -p $mntPath
+MNT_PATH="$MNT_ROOT/$STORAGE_ACCOUNT_NAME/$FILE_SHARE_NAME"
+sudo mkdir -p $MNT_PATH
``` Finally, create a record in the `/etc/fstab` file for your Azure file share. In the command below, the default 0755 Linux file and folder permissions are used, which means read, write, and execute for the owner (based on the file/directory Linux owner), read and execute for users in owner group, and read and execute for others on the system. You may wish to set alternate `uid` and `gid` or `dir_mode` and `file_mode` permissions on mount as desired. For more information on how to set permissions, see [UNIX numeric notation](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation) on Wikipedia.
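For instance (a sketch with a hypothetical uid/gid of 1000), the record created below could carry explicit ownership and permission options:
```bash
# Hypothetical variant of the fstab options shown below: pin ownership to
# uid/gid 1000, with 0755 for directories and 0644 for files.
# $SMB_PATH $MNT_PATH cifs nofail,credentials=$SMB_CREDENTIAL_FILE,uid=1000,gid=1000,dir_mode=0755,file_mode=0644,serverino,nosharesock,actimeo=30
```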
Finally, create a record in the `/etc/fstab` file for your Azure file share. In
> If you want Docker containers running .NET Core applications to be able to write to the Azure file share, include **nobrl** in the SMB mount options to avoid sending byte range lock requests to the server. ```bash
-httpEndpoint=$(az storage account show \
- --resource-group $resourceGroupName \
- --name $storageAccountName \
+HTTP_ENDPOINT=$(az storage account show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $STORAGE_ACCOUNT_NAME \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
+SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-${#HTTP_ENDPOINT})$FILE_SHARE_NAME
-if [ -z "$(grep $smbPath\ $mntPath /etc/fstab)" ]; then
- echo "$smbPath $mntPath cifs nofail,credentials=$smbCredentialFile,serverino,nosharesock,actimeo=30" | sudo tee -a /etc/fstab > /dev/null
+if [ -z "$(grep $SMB_PATH\ $MNT_PATH /etc/fstab)" ]; then
+ echo "$SMB_PATH $MNT_PATH cifs nofail,credentials=$SMB_CREDENTIAL_FILE,serverino,nosharesock,actimeo=30" | sudo tee -a /etc/fstab > /dev/null
else echo "/etc/fstab was not modified to avoid conflicting entries as this Azure file share was already present. You may want to double check /etc/fstab to ensure the configuration is as desired." fi
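To verify the new record without rebooting, a quick sketch:
```bash
# Sketch: process all fstab entries, then confirm the share mounted.
sudo mount -a
df -h | grep "$MNT_PATH"
```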
sudo zypper install autofs
Next, update the `autofs` configuration files. ```bash
-fileShareName="<file-share-name>"
+FILE_SHARE_NAME="<file-share-name>"
-httpEndpoint=$(az storage account show \
- --resource-group $resourceGroupName \
- --name $storageAccountName \
+HTTP_ENDPOINT=$(az storage account show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $STORAGE_ACCOUNT_NAME \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
+SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-$(expr length $HTTP_ENDPOINT))$FILE_SHARE_NAME
-echo "$fileShareName -fstype=cifs,credentials=$smbCredentialFile :$smbPath" > /etc/auto.fileshares
+echo "$FILE_SHARE_NAME -fstype=cifs,credentials=$SMB_CREDENTIAL_FILE :$SMB_PATH" > /etc/auto.fileshares
echo "/fileshares /etc/auto.fileshares --timeout=60" > /etc/auto.master ```
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
-# PowerShell & REST APIs for for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
+# PowerShell & REST APIs for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
Many dedicated SQL pool administrative tasks can be managed using either Azure PowerShell cmdlets or REST APIs. Below are some examples of how to use PowerShell commands to automate common tasks in your dedicated SQL pool (formerly SQL DW). For some good REST examples, see the article [Manage scalability with REST](sql-data-warehouse-manage-compute-rest-api.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md
To see how your autoscale rules are applied, select **Run history** across the t
## Next steps
-In this article, you learned how to use autoscale rules to scale horizontally and increase or decrease the *number* of VM instances in your scale set. You can also scale vertically to increase or decrease the VM instance *size*. For more information, see [Vertical autoscale with Virtual Machine Scale Sets](virtual-machine-scale-sets-vertical-scale-reprovision.md).
-
+In this article, you learned how to use autoscale rules to scale horizontally and increase or decrease the *number* of VM instances in your scale set.
For information on how to manage your VM instances, see [Manage Virtual Machine Scale Sets with Azure PowerShell](./virtual-machine-scale-sets-manage-powershell.md). To learn how to generate alerts when your autoscale rules trigger, see [Use autoscale actions to send email and webhook alert notifications in Azure Monitor](../azure-monitor/autoscale/autoscale-webhook-email.md). You can also [Use audit logs to send email and webhook alert notifications in Azure Monitor](../azure-monitor/alerts/alerts-log-webhook.md).