Azure

This document provides an overview of setting up a local development environment for Python developers to build applications on Azure. It recommends creating an Azure account, either through your company or for free individually. It also recommends installing tools like the Azure CLI to manage Azure resources from your local machine. Finally, it suggests using the Azure portal for common management tasks as you get started with Azure and cloud development.

Uploaded by

505 Dedeepya
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
412 views

Azure

This document provides an overview of setting up a local development environment for Python developers to build applications on Azure. It recommends creating an Azure account, either through your company or for free individually. It also recommends installing tools like the Azure CLI to manage Azure resources from your local machine. Finally, it suggests using the Azure portal for common management tasks as you get started with Azure and cloud development.

Uploaded by

505 Dedeepya
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 297

Contents

Azure for Python developers


Cloud development with Azure
What is Azure?
Set up your dev environment
Create, access, and manage resources
The Azure development flow
Web
Web app solutions
Django/Flask web app
Django/Flask web app + database
Django/Flask web app + database + auth
1. Overview
2. Run the web app locally
3. Create an App Service
4. Create a storage account
5. Create a PostgreSQL database
6. Deploy the web app to Azure
7. Clean up resources
Static websites
Hosting in Azure Storage
Deploy from VS Code
Configure web apps
Set up web dev environment
Configure a startup file
Set up CI/CD
Add sign in
Store and retrieve secrets
Set up Azure monitor
Serverless solutions
Create a function
Using Visual Studio Code
Using the Azure CLI
Connect to Azure Storage
Using Visual Studio Code
Using the Azure CLI
Data
Data engineering solutions
Serverless, Cloud ETL
1. Overview
2. Create solution resources
3. Securely ingest relational data
4. Process relational data for analytics
5. Load processed data to Data Lake
More data and storage solutions
Data science solutions
Deploy machine learning models
Containers
Python containers overview
Deploy to App Service
1. Overview
2. Build and test container locally
3. Build container in Azure
4. Deploy container to App Service
5. Clean up resources
Deploy to Container Apps
Overview
Build and deploy a container app
Configure CI/CD
Deploy a Kubernetes cluster
Azure SDK for Python
Overview
Library usage patterns
Authentication
Overview
Auth during local development
Auth using developer service principals
Auth using developer accounts
Auth from Azure-hosted apps
Auth from on-premises apps
Additional auth methods
Walkthrough tutorial
1. Introduction and background
2. Authentication requirements
3. Third-party API implementation
4. Main app implementation
5. Dependencies and environment variables
6. Main app startup code
7. Main app endpoint
Install packages
Package index
Library API reference
Examples
Create a resource group
List groups and resources
Create Azure storage
Use Azure storage
Create & deploy a web app
Create & query a database
Create a virtual machine
Use Azure Managed Disks with a VM
Configure logging
Configure proxies
Use a sovereign domain
Library API reference
Explore services supporting Python
App hosting
Data solutions
Identity and security
Machine learning
AI (Cognitive Services)
Messaging & IoT
Other services
Samples
Web apps & serverless
Databases
Storage
Azure Storage
Redis Cache
Identity & Security
IoT
Data science
Machine Learning Service
Databricks
HDInsight
AI with Cognitive Services
Search
All Samples
Cloud development on Azure
10/28/2022 • 4 minutes to read

You're a Python developer, and you're ready to develop cloud applications for Microsoft Azure. To help you
prepare for a long and productive career, this series of three articles orients you to the basic landscape of cloud
development on Azure.

What is Azure? Data centers, services, and resources


Microsoft's CEO, Satya Nadella, often refers to Azure as "the world's computer." A computer, as you well know, is
a collection of hardware that's managed by an operating system, which provides a platform upon which you can
build software that helps people apply the system's computing power to any number of tasks. (That's why we
use the word "application" to describe such software.)
In the case of Azure, the computer's hardware is not a single machine but an enormous pool of virtualized
server computers contained in dozens of massive data centers around the world. The Azure "operating system"
is then composed of services that dynamically allocate and de-allocate different parts of that resource pool as
applications need them. Those dynamic allocations allow applications to respond quickly to any number of
changing conditions, such as customer demand.
Each allocation is called a resource, and each resource is assigned both a unique object identifier (a GUID) and a
unique URL. Types of resources include virtual machines (CPU cores and memory), storage, databases, virtual
networks, container registries, container orchestrators, web hosts, AI and analytics engines, and so on.

Resources are the building blocks of a cloud application. The cloud development process thus begins with
creating the appropriate environment into which you can deploy the different parts of the application. Put
simply, you cannot deploy any code or data to Azure until you've allocated and configured—that is
provisioned—the suitable target resources.
The process of creating the environment for your application, then, involves identifying the relevant services and
resource types involved, and then provisioning those resources. The provisioning process is essentially how you
construct the computing system to which you deploy your application. Provisioning is also the point at which
you begin renting those resources from Azure.
There are hundreds of different types of resources at your disposal, from basic "infrastructure" resources like
virtual machines, where you retain full control and responsibility for the software you deploy, to higher-level
"platform" services that provide a more managed environment where you concern yourself with only data and
application code.
Finding the right services for your application, and balancing their relative costs, can be challenging, but is also
part of the creative fun of cloud development. To understand the many choices, review the Azure developer's
guide. Here, let's next discuss how you actually work with all of these services and resources.

NOTE
You've probably seen and perhaps have grown weary of the terms IaaS (infrastructure-as-a-service), PaaS (platform-as-a-
service), and so on. The as-a-service part reflects the reality that you generally don't have physical access to the data
centers themselves. Instead, you use tools like the Azure portal, the Azure CLI, or Azure's REST API to provision
infrastructure resources, platform resources, and so on. As a service, Azure is always standing by waiting to receive your
requests.
On this developer center, we spare you the IaaS, PaaS, etc. jargon because "as-a-service" is just inherent to the cloud to
begin with!

NOTE
A hybrid cloud refers to the combination of private computers and data centers with cloud resources like Azure, and has
its own considerations beyond what's covered in the previous discussion. Furthermore, this discussion assumes new
application development; scenarios that involve rearchitecting and migrating existing on-premises applications are not
covered here.

NOTE
You might hear the terms cloud native and cloud enabled applications, which are often discussed as the same thing. There
are differences, however. A cloud enabled application is often one that is migrated, as a whole, from an on-premises data
center to cloud-based servers. Oftentimes, such applications retain their original structure and are simply deployed to
virtual machines in the cloud (and therefore across geographic regions). Such a migration allows the application to scale
to meet global demand without having to provision new hardware in your own data center. However, scaling must be
done at the virtual machine (or infrastructure) level, even if only one part of the application needs increased performance.
A cloud native application, on the other hand, is written from the outset to take advantage of the many different,
independently scalable services available in a cloud such as Azure. Cloud native applications are more loosely structured
(using micro-service architectures, for example), which allows you to more precisely configure deployment and scaling for
each part. Such a structure simplifies maintenance and often dramatically reduces costs because you need pay for
premium services only where necessary.
For more information, see Build cloud-native applications in Azure and Architecting Cloud Native .NET Applications for
Azure, the principles of which apply to applications written in any language.

Next step
Provisioning, accessing, and managing resources >>>
Configure your local Python dev environment for
Azure
10/28/2022 • 6 minutes to read

To develop Python applications using Azure, you first want to configure your local development environment.
Configuration includes creating an Azure account, installing tools for Azure development, and connecting those
tools to your Azure account.
Developing on Azure requires Python 3.7 or higher. To verify the version of Python on your workstation, in a
console window type the command python3 --version for macOS/Linux or py --version for Windows.
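The same check can be scripted from Python itself; here's a minimal sketch (the function name is illustrative, not part of any Azure tooling):

```python
import sys

# Azure development requires Python 3.7 or higher; this mirrors the
# python3 --version / py --version checks described above.
MIN_VERSION = (3, 7)

def meets_azure_minimum(version_info=sys.version_info) -> bool:
    """Return True when the interpreter is new enough for Azure work."""
    return tuple(version_info[:2]) >= MIN_VERSION

if __name__ == "__main__":
    print("OK" if meets_azure_minimum() else "Upgrade to Python 3.7+")
```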

Create an Azure Account


To develop Python applications with Azure, you need an Azure account. Your Azure account provides the
credentials you use to sign in to Azure and to create Azure resources.
If you're using Azure at work, talk to your company's cloud administrator to get the credentials you use to sign
in to Azure.
Otherwise, you can create an Azure account for free and receive 12 months of popular services for free and a
$200 credit to explore Azure for 30 days.
Create an Azure account for free

Use the Azure portal


Once you have your credentials, you can sign in to the Azure portal at https://portal.azure.com. The Azure portal
is typically the easiest way to get started with Azure, especially if you're new to Azure and cloud development. In
the Azure portal, you can do various management tasks such as creating and deleting resources.
If you're already experienced with Azure and cloud development, you'll probably also start off using tools such
as Visual Studio Code and the Azure CLI. Articles in the Python developer center show how to work with the
Azure portal, Visual Studio Code, and the Azure CLI.
Sign in to the Azure portal

Use Visual Studio Code


You can use any editor or IDE to write Python code when developing for Azure. However, you may want to
consider using Visual Studio Code for Azure and Python development. Visual Studio Code provides many
extensions and customizations for Azure and Python, which make your development cycle and the deployment
from a local environment to Azure easier.
For Python development using Visual Studio Code, install:
Python extension. This extension includes IntelliSense (Pylance), Linting, Debugging (multi-threaded,
remote), Jupyter Notebooks, code formatting, refactoring, unit tests, and more.
Azure Tools extension pack. The extension pack contains extensions for working with Azure App Service,
Azure Functions, Azure Storage, Azure Cosmos DB, and Azure Virtual Machines in one convenient
package. The Azure extensions make it easy to discover and interact with Azure.
To install extensions from Visual Studio Code:
1. Press Ctrl+Shift+X to open the Extensions window.
2. Search for the Azure Tools extension.
3. Select the Install button.

To learn more about installing extensions in Visual Studio Code, refer to the Extension Marketplace document on
the Visual Studio Code website.
After installing the Azure Tools extension, sign in with your Azure account. On the left-hand panel, you'll see an
Azure icon. Select this icon, and a control panel for Azure services will appear. Choose Sign in to Azure... to
complete the authentication process.
NOTE
If you see the error "Cannot find subscription with name [subscription ID]", this may be because you are behind a
proxy and unable to reach the Azure API. Configure HTTP_PROXY and HTTPS_PROXY environment variables with your
proxy information in your terminal:

# Windows
set HTTPS_PROXY=https://username:password@proxy:8080
set HTTP_PROXY=http://username:password@proxy:8080

# macOS/Linux
export HTTPS_PROXY=https://username:password@proxy:8080
export HTTP_PROXY=http://username:password@proxy:8080

Use the Azure CLI


In addition to the Azure portal and Visual Studio Code, Azure also offers the Azure CLI command-line tool to
create and manage Azure resources. The Azure CLI offers the benefits of efficiency, repeatability, and the ability
to script recurring tasks. In practice, most developers use both the Azure portal and the Azure CLI.

Install on macOS
Install on Linux
Install on Windows

The Azure CLI is installed through Homebrew on macOS. If you don't have Homebrew available on your system,
install Homebrew before continuing.

brew update && brew install azure-cli

This command will first update your brew repository information and then install the Azure CLI.
After installing, sign in to your Azure account from the Azure CLI by typing the command az login in a
terminal window on your workstation.

az login

The Azure CLI will open your default browser to complete the sign-in process.

Configure Python virtual environment


When creating Python applications for Azure, it's recommended to create a virtual environment for each
application. A virtual environment is a self-contained directory for a particular version of Python plus the other
packages needed for that application.
To create a virtual environment, follow these steps.
1. Open a terminal or command prompt.
2. Create a folder for your project.
3. Create the virtual environment:

# Windows (py -3 uses the globally installed Python interpreter)
py -3 -m venv .venv

# macOS/Linux
python3 -m venv .venv

This command runs the Python venv module and creates a virtual environment in a folder named
".venv". Typically, .gitignore files include a ".venv" entry so that the virtual environment isn't checked
in with your code.
4. Activate the virtual environment:

# Windows
.venv\Scripts\activate

# macOS/Linux
source .venv/bin/activate

Once you activate that environment (which Visual Studio Code does automatically), running pip install
installs a library into that environment only. Python code running in a virtual environment uses the specific
package versions installed into that virtual environment. Using different virtual environments allows different
applications to use different versions of a package, which is sometimes required. To learn more about virtual
environments, see Virtual Environments and Packages in the Python docs.
For example, if your requirements are in a requirements.txt file, then inside the activated virtual environment,
you can install them with:

pip install -r requirements.txt

Next step
Provisioning, accessing, and managing resources >>>
Provisioning, accessing, and managing resources on
Azure
10/28/2022 • 6 minutes to read

Previous article: overview


As described in the previous article of this series, an essential part of developing a cloud application is
provisioning the necessary resources within Azure to which you can then deploy your code and data. That is,
building a cloud application begins with building what is essentially the target cloud computer to which you
deploy that code and data. (To review the types of available resources, see the Azure developer's guide.)
How is this provisioning done, exactly? How do you ask Azure to allocate resources for your application, and
how do you then configure and otherwise access those resources? In short, how do you talk to Azure itself to
get all these resources in place?

Means of communicating with Azure


As with most operating systems, you can communicate with Azure through three routes: a user interface, a
command-line interface, and an API.

You can use any or all of these complementary methods to create, configure, and manage whatever Azure
resources you need. In fact, you typically use all three in the course of a development project, and it's worth your
time to become familiar with each of them.
Within this developer center, we primarily show how to provision resources using both the Azure CLI and
Python code that uses the Azure libraries. Using the portal is well covered in the documentation for each
individual service.
NOTE
The Azure libraries for Python are sometimes referred to as the Azure SDK for Python. However, there are no SDK
components other than the libraries, which you acquire through the Python package manager, pip.

Azure portal
The Azure portal is Azure's fully customizable, browser-based user interface through which you can provision
and manage resources with all Azure services. To access the portal, you must first sign in using a Microsoft
Account, and then create a free Azure account with a subscription.
Pros: The user interface makes it easy to explore services and all their various configuration options. Setting
configuration values is secure because no information is stored on the local workstation.
Cons: Working with the portal is a manual process and can't be easily automated. To remember what you did to
change a configuration, for example, you generally record your steps in a separate document.

Azure CLI
The Azure CLI is Azure's open source command-line interface. Once you're signed in to the Azure CLI (using the
az login command), you can perform the same tasks that you can through the portal.

Pros: Easily automated through scripts and processing of output. Provides higher-level commands that
provision multiple resources together for common tasks, such as deploying a web app. Scripts can be managed
in source control.
Cons: Steeper learning curve than using the portal, and commands are subject to bugs. Error messages aren't
always helpful.
You can also use the Azure PowerShell module in place of the Azure CLI, although the Azure CLI's Linux-style
commands are typically more familiar to Python developers.
In place of the local CLI or PowerShell, you can use the same commands in the Azure Cloud Shell,
https://shell.azure.com/. The Cloud Shell is convenient because it's automatically authenticated with Azure once
it opens and has the same capabilities you would have through the Azure portal. The Cloud Shell also comes
preconfigured with many different tools that would be inconvenient to install locally, especially if you need to run
only one or two commands.
Because Cloud Shell isn't a local environment, it's more suitable for singular operations like you'd do through
the portal rather than scripted automation. Nevertheless, you can clone source repositories (for example, GitHub
repositories) in the Cloud Shell. As a result, you can develop automation scripts locally, store them in a
repository, clone the repository in Cloud Shell, and then run them there.

Azure REST API and Azure libraries


The Azure REST API is Azure's programmatic interface, provided via secure REST over HTTP because Azure's data
centers are all inherently connected to the Internet. Every resource is assigned a unique URL that supports a
resource-specific API, subject to stringent authentication protocols and access policies. (The Azure portal and the
Azure CLI, in fact, ultimately do their work through the REST API.)
For developers, the Azure libraries (sometimes referred to as the Azure SDKs) provide language-specific libraries
that translate the capabilities of the REST API into much more convenient programming paradigms such as
classes and objects. For Python, you always install individual libraries with pip install rather than installing a
standalone SDK as a whole. (For other languages, see Azure SDK downloads.)
Pros: Precise control over all operations, including a much more direct means of using output from one
operation as input to another as compared to the Azure CLI. For Python developers, the libraries let you work
within familiar language paradigms rather than using the CLI. They can also be used from application code to
automate detailed management scenarios.
Cons: Operations that can be done with one CLI command typically require multiple lines of code, all of which
are subject to bugs. The libraries don't provide higher-level operations like the Azure CLI does.
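To illustrate the library route, here's a hedged sketch of creating a resource group with the azure-identity and azure-mgmt-resource packages. The wrapper function is of this article's own making, not an SDK API; it assumes both packages are installed and that you've signed in, for example with az login:

```python
def create_resource_group(subscription_id: str, name: str, region: str):
    """Sketch: provision (or update) a resource group with the Azure
    libraries. Imports are inside the function so the sketch reads
    without the Azure packages installed."""
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    # DefaultAzureCredential tries environment variables, a managed
    # identity, and your Azure CLI sign-in, among other sources.
    credential = DefaultAzureCredential()
    client = ResourceManagementClient(credential, subscription_id)
    return client.resource_groups.create_or_update(
        name, {"location": region}
    )
```

Compare this with the single CLI command az group create --name example-rg --location eastus, which illustrates the trade-off described above.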

Automatic on-demand provisioning


Many Azure services allow you to configure scaling characteristics to meet variable demand, in which case
Azure can automatically provision extra resources when needed and de-allocate them as appropriate. Such
automatic scaling is one of the key advantages of a cloud platform that's backed by the resources of multiple
data centers. Instead of designing your environment for peak demand, paying for capacity you wouldn't typically
be utilizing, you can design the environment for baseline or average usage and pay for extra capability only
when necessary.
For more information, see Autoscaling in the Azure Architecture Center.

Subscriptions, resource groups, and regions


Within Azure's resource model, you can imagine that, over time, you'll be provisioning many different resources
across many Azure services for different applications. There are three levels of hierarchy that you can use to
organize these resources:
1. Subscriptions: each Azure subscription has its own billing account and often represents a distinct team
or department within an organization. In general, you provision all the resources you need for any given
application within the same subscription so they can benefit from features like shared authentication.
However, because all resources can be accessed through public URLs and the necessary authorization
tokens, it's possible to spread resources across multiple subscriptions.
2. Resource groups: within a subscription, resource groups are containers for other resources, which you
can then manage as a group. (For this reason, a resource group typically relates to a specific project.)
Whenever you provision a resource, in fact, you must specify the group to which it belongs. Your first step
with a new project is usually to create an appropriate resource group. And by deleting the resource group
you de-allocate all of its contained resources rather than having to delete each resource individually. Trust
us when we say that neglecting to organize your resource groups can lead to many headaches later on
when you don't remember which resource belongs to which project!
3. Resource naming: within a resource group, you can then use whatever naming strategies you like to
express commonalities or relationships between resources. Because the name is often used in the
resource's URL, there may be limitations on the characters you can use. (Some names, for example, allow
only letters and numbers, whereas others allow hyphens and underscores.)
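Because naming rules vary by service, it can help to validate names before provisioning. For example, storage account names allow only lowercase letters and digits, 3 to 24 characters long; the helper function below is illustrative only:

```python
import re

# Example rule: Azure storage account names are 3-24 characters of
# lowercase letters and digits only. Other resource types have their
# own rules, so check each service's documentation.
STORAGE_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Return True when the name satisfies the storage-account rule."""
    return STORAGE_NAME_RE.match(name) is not None
```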
As you work with Azure, you'll develop your own preferences for organizing your resources and your own
conventions for naming subscriptions, resource groups, and individual resources.
Regions and geographies
A key characteristic of a resource group is that it's always associated with a specific Azure region, which is the
location of the specific data center. All the resources in the same group are co-located in that data center, and can
thus interact much more efficiently than if they were in different regions. Developers often choose regions that
are closest to their customers, thus optimizing an application's responsiveness. Azure also offers geo-replication
features to synchronize copies of your application and databases across multiple regions so you can better
serve a global customer base.
Due to local laws and regulations, which are determined by the geography in which you create a subscription,
you might have access to only certain regions and those regions may not support all Azure services. For details,
see Azure global infrastructure.

Next step
The Azure development flow >>>
The Azure development flow: provision, code, test,
deploy, and manage
10/28/2022 • 7 minutes to read

Previous article: provisioning, accessing, and managing resources


Now that you understand Azure's model of services and resources, you can understand the overall flow of
developing cloud applications with Azure: provision, code, test, deploy, and manage.

Provision
  Primary tools: Azure CLI, Azure portal, Cloud Shell, Python scripts using the Azure management libraries
  Activities: Provision resource groups; provision specific resources in those groups; configure resources to be ready for use from app code and/or ready to receive Python code in deployments.

Code
  Primary tools: Code editor (such as Visual Studio Code or PyCharm), Azure libraries, reference documentation
  Activities: Write Python code using the Azure client libraries to interact with provisioned resources.

Test
  Primary tools: Python runtime, debugger
  Activities: Run Python code locally against active cloud resources (typically dev or test resources rather than production resources). The code itself isn't yet hosted on Azure, which helps you debug and iterate quickly.

Deploy
  Primary tools: Azure CLI, GitHub, DevOps
  Activities: Once code has been tested locally, deploy it to an appropriate Azure hosting service where the code itself can run in the cloud. Deployed code typically runs against staging or production resources.

Manage
  Primary tools: Azure CLI, Azure portal, Python scripts, Azure Monitor
  Activities: Monitor app performance and responsiveness, make adjustments in the production environment, migrate improvements back to the dev environment for the next round of provisioning and development.

Step 1: Provision and configure resources


As described in the previous article of this series, the first step in developing any application is to provision and
configure the resources that make up the target environment for your application.
Provisioning begins by creating a resource group in a suitable Azure region. You can create a resource group
through the Azure portal, through the Azure CLI, or with a custom script that uses the Azure libraries (or REST
API).
Within that resource group, you then provision and configure the individual resources you need, again using the
portal, the CLI, or the Azure libraries. (Again, review the Azure developer's guide for an overview of available
resource types.)
Configuration includes setting access policies that control what identities (service principals and/or application
IDs) are able to access those resources. Access policies are generally managed through Azure Role-Based Access
Control (RBAC); some services have more specific access controls as well. As a cloud developer working with
Azure, make sure to familiarize yourself with Azure RBAC because you use it with just about any resource that
has security concerns.
For most application scenarios, you typically create provisioning scripts with the Azure CLI and/or Python code
using the Azure libraries. Such scripts describe the totality of your application's resource needs (essentially
defining the custom cloud computer to which you're deploying the application). A script enables you to easily
recreate the same set of resources within different development, test, staging, and production environments,
rather than manually performing many repeated steps in the Azure portal. Such scripts also make it easy to
provision an environment in a different region, or to use different resource groups. If you maintain these
scripts in source control repositories, you also have full auditing and change history.

Step 2: Write your app code to use resources


Once you've provisioned the resources you need for your application, you write the application code to work
with the run time aspects of those resources.
For example, in the provisioning step you might have created an Azure storage account, created a blob container
within that account, and set access policies for the application on that container. This provisioning process is
demonstrated in Example - Provision Azure Storage. From your code, you can then authenticate with that
storage account and then create, update, or delete blobs within that container. This run time process is
demonstrated in Example - Use Azure Storage. Similarly, you might have provisioned a database with a schema
and appropriate permissions (as demonstrated in Example - Provision a database), so that your application code
can connect to the database and perform the usual create-read-update-delete queries.
App code typically uses environment variables to identify the names and URLs of the resources to use.
Environment variables allow you to easily switch between cloud environments (dev, test, staging, and
production) without any changes to the code. The various Azure services that host application code provide a
means to define the necessary variables. For example, in Azure App Service (to host web apps) and Azure
Functions (serverless compute for Azure), you define application settings through the Azure portal or Azure CLI,
which then appear to your code as environment variables.
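In application code, reading those settings typically looks like the following sketch. The setting names here are hypothetical; use whichever names you define in your app settings:

```python
import os

# Hypothetical setting names -- match them to the application settings
# you define in App Service or Azure Functions.
storage_url = os.environ.get("STORAGE_BLOB_URL", "")
db_connection = os.environ.get("DATABASE_CONNECTION_STRING", "")

def get_required_setting(name: str) -> str:
    """Fail fast with a clear error when a required setting is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing required setting: {name}")
    return value
```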
As a Python developer, you'll likely write your application code in Python using the Azure libraries for Python.
That said, any independent part of a cloud application can be written in any supported language. If you're
working in a team with a variety of language expertise, for instance, it's entirely possible that some parts of the
application are written in Python, some in JavaScript, some in Java, and others in C#.
Note that application code can use the Azure libraries to perform provisioning and management operations as
needed. Provisioning scripts, similarly, can use the libraries to initialize resources with specific data, or perform
housekeeping tasks on cloud resources even when those scripts are run locally.

Step 3: Test and debug your app code locally


Developers typically like to test app code on their local workstations before deploying that code to the cloud.
Testing app code locally means that you're typically accessing other resources that you've already provisioned in
the cloud, such as storage, databases, and so forth. The difference is that you're not yet running the app code
itself within a cloud service.
By running the code locally, you can also take full advantage of debugging features offered by tools such as
Visual Studio Code and manage your code in a source control repository.
You don't need to modify your code at all for local testing: Azure fully supports local development and
debugging using the same code you deploy to the cloud. Environment variables are again the key: on the cloud,
your code can access the hosting resource's settings as environment variables. By creating those same
environment variables locally, the same code runs without modification. This pattern works for authentication
credentials, resource URLs, connection strings, and any number of other settings, making it easy to use
resources in a development environment when running code locally and production resources once the code is
deployed to the cloud.
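One common way to define those variables locally is a .env file that never leaves your machine. The sketch below is a deliberately minimal loader; for real projects, a package such as python-dotenv is a more robust choice:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal loader for KEY=value lines in a local .env file.
    Existing environment variables are not overwritten, so settings
    provided by the cloud host always take precedence."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without a KEY=value shape.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```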

Step 4: Deploy your app code to Azure


Once you've tested your code locally, you're ready to deploy the code to the Azure resource that you've
provisioned to host it. For example, if you're writing a Django web app, you either deploy that code to a virtual
machine (where you provide your own web server) or to Azure App Service (which provides the web server for
you). Once deployed, that code is running on the server rather than on your local machine, and can access all
the Azure resources for which it's authorized.
As noted in the previous section, in typical development processes you first deploy your code to the resources
you've provisioned in a development environment. After a round of testing, you deploy your code to resources
in a staging environment, making the application available to your test team and perhaps preview customers.
Once you're satisfied with the application's performance, you can deploy the code to your production
environment. All of these deployments can also be automated through continuous integration and continuous
deployment using Azure DevOps.
However you do it, once the code is deployed to the cloud, it truly becomes a cloud application, running entirely
on the server computers in Azure data centers.

Step 5: Manage, monitor, and revise


After deployment, you want to make sure the application is performing as it should, responding to customer
requests and using resources efficiently (and at the lowest cost). You can manage how Azure automatically
scales your deployment as needed, and you can collect and monitor performance data through the Azure portal,
the Azure CLI, or custom scripts written with the Azure libraries. You can then make real-time adjustments to
your provisioned resources to optimize performance, again using any of the same tools.
Monitoring gives you insight about how you might restructure your cloud application. For example, you may
find that certain portions of a web app (such as a group of API endpoints) are used only occasionally in
comparison to the primary parts. You could then choose to deploy those APIs separately as serverless Azure
Functions, where they have their own backing compute resources that don't compete with the main application
but cost only pennies per month. Your main application then becomes more responsive to more customers
without having to scale up to a higher-cost tier.

Next steps
You're now familiar with the basic structure of Azure and the overall development flow: provision resources,
write and test code, deploy the code to Azure, and then monitor and manage those resources.
The next step is to get familiar with the Azure libraries for Python, which you'll be using in many parts of the
flow.
Learn to use the Azure libraries for Python >>>
Overview: Deploy a Python web app to Azure with
managed identity
10/28/2022

In this tutorial, you'll create and deploy a Python web app (Django or Flask) running in Azure App Service. The web app uses managed identity to access Azure Storage and Azure Database for PostgreSQL resources.
Each article in the tutorial covers a part or service shown in the service diagram below. The left side of the
diagram shows the local or development environment with a Python app using a local PostgreSQL instance and
a local storage emulator. The right side of the diagram shows the Python app deployed in Azure with Azure App
Service, Azure Database for PostgreSQL, and Azure Storage Service.

How managed identity is used


Managed identity provides an identity for your app so that it can connect to Azure resources without the need to
use a secret key or other application secret. Internally, Azure knows the identity of your app and what resources
it's allowed to connect to. Managed identity is the recommended approach to authenticate an app in Azure when
using the Azure SDK for Python as is shown in this tutorial. For more information about authentication in Azure
with Python, see How to authenticate Python apps to Azure services using the Azure SDK for Python.
The sample Python app code doesn't change between the local development and Azure-hosted environments.
Using the same code is possible because the DefaultAzureCredential is used, which handles both authentication
scenarios as shown in the following diagram.
Prerequisites for the tutorial
To complete this tutorial, you'll need:
An Azure account with an active subscription. If you don't have an Azure account, you can create one for free.
Knowledge of Python with Flask development or Django development.
Python 3.9 installed locally.
Azure Identity client library for Python and Azure Blob Storage client library for Python.
Optionally, PostgreSQL installed locally.
Optionally, Azurite storage emulator installed locally.
This tutorial shows three different tools for accomplishing the steps to go from local Python code to deployed
web app. The three tools are the Azure portal, Visual Studio Code and extensions, and the Azure CLI. You'll be
prompted at the start of instructions to download any other tools needed to complete the task. You can mix and
match the tools, for example, completing one step in the portal and another step with the Azure CLI.

Set up the sample app


Sample Python applications using the Flask and Django frameworks are available to help you follow along with
this tutorial. Download or clone one of the sample applications to your local workstation.

NOTE
If you are following this tutorial with your own app, look at the requirements.txt file description in each project's
README.md file (Flask, Django) to see what packages you'll need and how DefaultAzureCredential is implemented.

Clone the sample app:


Flask
Django

git clone https://github.com/Azure-Samples/msdocs-flask-web-app-managed-identity.git

Navigate to the application folder:

Flask
Django

cd msdocs-flask-web-app-managed-identity

Create a virtual environment for the app:


Windows
macOS/Linux

py -3.9 -m venv .venv


.venv\scripts\activate

Install the dependencies:


pip install -r requirements.txt

For now, you're done setting up the sample app. In later steps, you'll optionally configure the app for use in a
local development environment or as a deployed web app in Azure.

What the sample app does


The sample Python code when run locally or deployed to Azure creates a restaurant review application. Users
can create restaurants and add reviews to restaurants. Reviews can have text and images.
When deployed, restaurants and review data are stored in Azure Database for PostgreSQL server. Review
images are stored in Azure Blob storage. Here's an example screenshot:

Next step
Run the web app locally >>>
Configure and run the Python app locally with a
PostgreSQL instance and a storage emulator
10/28/2022

This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll learn how to run the Python app locally.
This optional step requires a local PostgreSQL instance, a local storage emulator, and other setup steps. If you
skip this step now, you can return to it after you've completed the rest of the tutorial.

To run the app locally, you'll need:


A virtual environment with the requirements installed, as shown in the previous article. You'll add two more packages to the environment: django-sslserver (Django only) and python-certifi-win32.
PostgreSQL installed locally to which the Python app can connect.
Azurite local storage emulator installed and running.
Azure Storage Explorer installed to connect to local storage and create a container.
A way to create a trusted development certificate, such as with mkcert.
The steps shown in this article apply to both Django and Flask frameworks except where noted.

TIP
Instead of using local storage emulation, you could use Azure Storage and authenticate locally with a developer account or Azure AD group. For more information, see Authenticate Python apps to Azure services during local development using developer accounts. The rest of this article shows local emulation of storage with Azurite.

1. Create a database in local PostgreSQL


In a local PostgreSQL instance, create a database for the sample app. For example, using the PostgreSQL
interactive terminal psql , connect to the PostgreSQL database server and create the restaurant database.
psql --host=<LOCAL_SERVER_NAME> \
--port=5432 \
--username=<LOCAL_ADMIN_USERNAME> \
--dbname=postgres

postgres=> CREATE DATABASE restaurant;


postgres=> \c restaurant
restaurant=>

Type \? to show help or \q to quit.


Alternatively, you can use a tool like Azure Data Studio to connect to your local PostgreSQL instance and run the
commands above.

2. Create a development certificate


Use mkcert to create a locally trusted development certificate. Run the following commands in the root of the
Python app's project folder:

mkcert -install
mkcert -cert-file cert.pem -key-file key.pem localhost 127.0.0.1

The last command creates a cert.pem and key.pem file. mkcert creates certificates signed by your own private
CA that your machine is automatically configured to trust when you run mkcert -install .

3. Configure SSL-enabled dev environment (Django)


To add TLS (SSL) capabilities to the local development environment for Django, install the django-sslserver
package. If you're using Flask, go to the next step.

pip install django-sslserver

With this package, you can run the app locally using the certificate and key you created as shown in a later step.

4. Use machine CA certificates


In a virtual environment, the default Certificate Authority (CA) certificates come with the environment and are
stored in the cacert.pem file. You can verify the location of the certificates for an environment by running
python -m certifi . However, in this tutorial you'll use the machine CA certificates that contain the locally trusted
certificate created above.
Solutions for using machine CA certificates are described in Fixing your SSL Verify Errors in Python. For
example, to use the Windows certificate install the python-certifi-win32 package, while on macOS/Linux you can
specify an environment variable.

Windows
macOS/Linux

pip install python-certifi-win32


NOTE
The package python-certifi-win32 was tested on Python 3.9. If your environment is a different version and you run
into problems, create a virtual environment with Python 3.9. For example, py -3.9 -m venv .venv39 .
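For the macOS/Linux case, here's a sketch of the environment-variable approach. The path shown is a typical mkcert CA location; run `mkcert -CAROOT` to find the actual path on your machine. REQUESTS_CA_BUNDLE is honored by the requests library and SSL_CERT_FILE by Python's ssl module:

```shell
# Point Python's ssl module and the requests library at the trust store
# that contains the mkcert root CA. The default below is a common mkcert
# location; `mkcert -CAROOT` prints the real one for your machine.
CA_ROOT="${CA_ROOT:-$HOME/.local/share/mkcert}"
export REQUESTS_CA_BUNDLE="$CA_ROOT/rootCA.pem"
export SSL_CERT_FILE="$CA_ROOT/rootCA.pem"
```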

5. Start Azurite and create a container


In your local setup, start Azurite from the command line to emulate blob storage that can be used by the Python
app.

bash
PowerShell terminal

azurite-blob \
--location "<folder-path>" \
--debug "<folder-path>\debug.log" \
--oauth basic \
--cert "<project-root>\cert.pem" \
--key "<project-root>\key.pem"

The command creates a service listening on https://127.0.0.1:10000 .


In the command above, replace:
<folder-path> with a location where Azurite will store data and write a debug log.
<project-root> with the directory of the Python project where you ran mkcert to create the certificate and
key files.
Finally, create a container in Azurite and configure it with Azure Storage Explorer. Use Storage Explorer to
connect to Azurite using HTTPS. To connect using HTTPS, import the certificate you created with mkcert .

TIP
One way of getting the correct certificates into Azure Storage Explorer is to export them from your browser. First, make sure the Python app is running locally with TLS (SSL). (See the next step for details.) Then, select the lock icon next to the URL in the browser. Export all certificates in the certification path to .cer files. If you followed the steps above with mkcert, there should be two items in the path. Import these .cer files into Storage Explorer.

Connecting Azure Storage Explorer to Azurite is covered in the article Use the Azurite emulator for local Azure
Storage development. If you encounter errors connecting, refer to the SSL certificate issues section of the
Storage Explorer Troubleshooting guide.


1. Open Azure Storage Explorer and connect to Azurite.


2. Create a container named photos in the local storage
account.
1. Right-click the photos container and select Set Public Access Level....
2. Select Public read access for blobs only and select Apply.

6. Configure and test the app


If you started with one of the sample apps, copy the .env.sample file to .env. If you didn't start with one of the
sample apps, create an .env file and make sure you have the dependencies in the requirements.txt. Add other
packages as needed such as django-sslserver or python-certifi-win32.
The .env file is only used in local development and should look like the example below. It contains info about connecting to your local PostgreSQL and Azurite instances:

# Local PostgreSQL connection info


DBNAME=<local-database-name>
DBHOST=<local-database-hostname>
DBUSER=<local-db-user-name>
DBPASS=<local-db-password>

# Emulator storage connection info


STORAGE_URL=https://127.0.0.1:10000/devstoreaccount1
STORAGE_CONTAINER_NAME=photos

The sample app uses the python-dotenv package to read environment variables from the .env file.
Next, create the restaurant and review database tables:

Flask
Django

flask db init
flask db migrate -m "initial migration"
flask db upgrade

Run the app with HTTPS using the certificate and key files you created:

Flask
Django

flask run --cert=cert.pem --key=key.pem

The sample Flask and Django apps use the azure.identity package, which contains the DefaultAzureCredential.
The DefaultAzureCredential can be used with Azurite and the Azure Python SDK.
To test your Python app locally, go to https://127.0.0.1:8000 (Django) or https://127.0.0.1:5000 (Flask). Your Python app is running locally with a local PostgreSQL instance and the Azurite storage emulator.
If you run into DefaultAzureCredential issues, make sure you're signed in to Azure. For example, in the Azure
CLI, you can use az login , in Visual Studio Code use the command palette (Ctrl+Shift+P) to run the Azure:
Sign In command, and in Azure PowerShell use Connect-AzAccount .
Here's an example screenshot of the sample app:

Next step
Create an App Service to host the Python app >>>
Create a Python web app in App Service and
enable managed identity
10/28/2022

This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll create an Azure App Service to host a
Python web app and create a system assigned managed identity for the web app. The managed identity is
authenticated with Azure AD, so you don’t have to store credentials in code when accessing other Azure
resources.

1. Create the App Service


Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to create your Azure resource.


In the Azure portal:


1. Enter app services in the search bar at the top of the Azure portal.
2. Select App Services under the Services heading on the menu that appears below the search bar.

On the App Services page, select + Create.



On the Create Web App page, fill out the form as follows:
1. Resource Group → Select Create new and use the name msdocs-web-app-rg.
2. Name → Use msdocs-web-app-<unique-id>. The name must be unique across Azure (it's part of the web app's URL: https://<app-service-name>.azurewebsites.net).
3. Runtime stack → Python 3.9
4. Region → Any Azure region near you.
5. App Service Plan → Select Create new under Linux Plan and use the name msdocs-web-app.
6. App Service Plan → Select Change size under Sku and size to select a different App Service plan.

In the Spec Picker section, select an App Service plan. The App Service plan controls how many resources (CPU/memory) are available to your app and the cost of those resources.
1. Select Dev/Test.
2. Select the B1 (Basic) plan. The B1 Basic plan will incur a small charge against your Azure account but is recommended for better performance over the F1 (Free) plan.
3. Select Apply.

Back on the Create Web App page, select the Review + create button at the bottom of the screen. This will take you to a page to review the configuration. Select Create to create your App Service.
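If you prefer the Azure CLI for this step, the portal form above maps roughly onto two commands. This is a sketch with hypothetical example names; note that the --runtime value format varies by CLI version ("PYTHON|3.9" in older versions, "PYTHON:3.9" in newer ones). The commands are built as strings and echoed as a dry run so you can review them before executing:

```shell
# Hypothetical example values -- substitute your own.
APP_NAME="msdocs-web-app-123"
RESOURCE_GROUP="msdocs-web-app-rg"
PLAN_NAME="msdocs-web-app"

# Create the B1 Linux App Service plan, then the Python 3.9 web app in it.
# Echoed as a dry run; run the printed commands against your subscription.
CMD_PLAN="az appservice plan create --name $PLAN_NAME --resource-group $RESOURCE_GROUP --sku B1 --is-linux"
CMD_APP="az webapp create --name $APP_NAME --resource-group $RESOURCE_GROUP --plan $PLAN_NAME --runtime PYTHON:3.9"
echo "$CMD_PLAN"
echo "$CMD_APP"
```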

2. Enable managed identity


In this step, you create a system assigned managed identity for the App Service. The managed identity is
authenticated with Azure AD, so you don’t have to store any credentials in code. For more information, see What
are managed identities for Azure resources.
You can enable managed identity using either the Azure portal or the Azure CLI.
Azure portal
Azure CLI

Go to the App Service you just created.

To get to your resource, you can type the name of your


resource in the search box at the top of the page and
navigate to it from there.

Select Identity in the resource menu of the App Service.

In the Identity resource under System assigned :


1. Change the Status slider to On .
2. Select Save to save the changes.
In the Enable system assigned managed identity
dialog, select Yes .

Note the System assigned managed identity Object


(principal) ID . You'll need this value in a later step when
you assign roles to this identity.
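The same step can be sketched with the Azure CLI (hypothetical app name; the command is built as a string and echoed as a dry run so you can review it before running):

```shell
# Hypothetical example values -- substitute your own.
APP_NAME="msdocs-web-app-123"
RESOURCE_GROUP="msdocs-web-app-rg"

# Enable the system-assigned managed identity and print its principal ID,
# which you'll need later when assigning roles. Echoed as a dry run.
CMD="az webapp identity assign --name $APP_NAME --resource-group $RESOURCE_GROUP --query principalId --output tsv"
echo "$CMD"
```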

Next step
Create a storage account >>>
Create an Azure storage account and configure a
role for managed identity
10/28/2022

This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll create an Azure Blob Storage account to
store images saved by the sample app.

1. Create a storage account


Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to create an Azure Storage account.


In the Azure portal:


1. In the search bar at the top of the Azure portal, enter
"storage account"
2. On the menu that appears below the search bar,
under Services, select the item labeled Storage
accounts

On the Storage Accounts page select +Create



On the Create a storage account page, fill out the form


as follows.
1. Create a new resource group or use an existing one for the storage account, for example msdocs-web-app-rg. To create a new resource group, select the Create new link under Resource group.
2. Give your storage account a name of
msdocswebapp<unique-id> where <unique-id> are
any three random digits. Storage account names
must be between 3 and 24 characters long and
contain only lower case letters and numbers.
3. Select the region for your storage account.
4. Select Standard performance.
5. Select Locally-redundant storage for this example
under redundancy.
6. Select the Review + create button at the bottom of
the screen and then select Create on the summary
screen to create your storage account.

Upon creation of your Azure storage account, you will see a


page indicating that the deployment is complete. Select the
Go to resource button on the page to view your Storage
account.

Create a photos container.


1. In the left resource menu for the storage account,
select Containers .
2. Select + Container .
3. For Name of the container use photos.
4. For Public access level select Blob (anonymous
read access for blobs).
5. Select Create .
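As a CLI alternative to the portal steps above, here's a sketch with hypothetical names (commands are echoed as a dry run so you can review them before executing):

```shell
# Hypothetical example values -- substitute your own.
STORAGE_ACCOUNT="msdocswebapp123"   # 3-24 chars, lowercase letters and digits
RESOURCE_GROUP="msdocs-web-app-rg"
LOCATION="eastus"

# Create a standard-performance, locally-redundant storage account, then
# the photos container with anonymous read access for blobs. Dry run.
CMD_ACCOUNT="az storage account create --name $STORAGE_ACCOUNT --resource-group $RESOURCE_GROUP --location $LOCATION --sku Standard_LRS"
CMD_CONTAINER="az storage container create --account-name $STORAGE_ACCOUNT --name photos --public-access blob"
echo "$CMD_ACCOUNT"
echo "$CMD_CONTAINER"
```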

2. Assign data contributor role


In this step, you'll assign a role to a managed identity. A role is a collection of permissions for a scope or set of
resources. Specifically, you assign the Storage Blob Data Contributor role to the app's managed identity so that
the web app can access the storage account.
Grouping Azure resources into a single resource group is commonly done when developing applications that
use Azure resources. Up to this point in the tutorial, the App Service and Storage Account you created should be
in the same resource group. Therefore, you'll assign the storage role at the resource group level.
Azure portal
Azure CLI

Locate your resource group by searching for the name in the


search box at the top of the Azure portal.

Navigate to your resource group by selecting its name under


the Resource Groups heading in the dialog box.

On the left resource menu for the resource group, select


Access control (IAM) .

On the Access control (IAM) page:


1. Select the Role assignments tab.
2. Select + Add from the top menu and then Add role
assignment from the resulting drop-down menu.

The Add role assignment page lists all of the roles that
can be assigned for the resource group.
1. Use the search box to find the role Storage Blob Data
Contributor.
2. In the Storage Blob Data Contributor row of the
role table, select View .
3. In the BuiltInRole page, select Select role .
4. Back on the Add role assignment page, select
Next .

The next Add role assignment page allows you to specify


what security principal to assign the role to.
1. Under Assign access to , select Managed
identity .
2. Under Members , select + Select members .

In the Select managed identities dialog:


1. Use the Managed identity and Select filters to
find the managed identities in your subscription and
select the App Service created previously.
2. Select the App service in the results.
3. Select Select at the bottom of the dialog.

The managed identity will now show as selected on the Add


role assignment page.

Select Review + assign to go to the final page and then


Review + assign again to complete the process.
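The same role assignment can be sketched with the Azure CLI (the IDs below are hypothetical placeholders; the command is echoed as a dry run):

```shell
# Hypothetical example values -- substitute your own.
PRINCIPAL_ID="00000000-0000-0000-0000-000000000000"   # managed identity's Object (principal) ID
SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111"
RESOURCE_GROUP="msdocs-web-app-rg"

# Assign Storage Blob Data Contributor at resource-group scope so the web
# app's managed identity can access the storage account. Dry run.
CMD="az role assignment create --assignee $PRINCIPAL_ID --role \"Storage Blob Data Contributor\" --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP"
echo "$CMD"
```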

Next step
Create an Azure database for PostgreSQL >>>
Create an Azure Database for PostgreSQL and
configure managed identity
10/28/2022

This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll create an Azure Database for PostgreSQL
Service.

1. Create an Azure PostgreSQL server


You can create an Azure Database for PostgreSQL server using the Azure portal, Visual Studio Code, or the
Azure CLI.

NOTE
Managed identity is currently only supported in PostgreSQL Single Server.

Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to create your Azure Database for PostgreSQL resource.


In the portal:
1. Enter postgres in the search bar at the top of the
Azure portal.
2. Select Azure Database for PostgreSQL servers under the Services heading on the menu that appears below the search bar.

On the Azure Database for PostgreSQL servers page, select + Create.

On the next page:


Select Create under Single server.
In any dialogs that follow, select options to continue
creating a Single server that is required in this
tutorial.

On the Single server page, fill out the form as follows:
1. Resource group → Select and use a name of msdocs-web-app-rg.
2. Server name → Enter a name such as msdocs-web-app-postgres-database-<unique-id>. The name must be unique across Azure (it's part of the database server's URL: https://<server-name>.postgres.database.azure.com). Allowed characters are A-Z, 0-9, and -.
3. Data source → None
4. Region → Same Azure region used for the App Service.
5. Version → Keep the default (which is the latest version).
6. Compute + storage → Select Configure server to select a different Compute + storage plan, which is discussed below.
7. Admin username → Enter an admin username following the portal suggestions for naming.
8. Password → Enter the admin password.
9. Confirm password → Re-enter the admin password.

To change the Compute + storage options, select the Configure server link to go to the Configure page.
After you are done on the Configure page, select OK to return to the Single server page.

Select Review + Create to continue to the review page. On the review page, select Create to create your Azure Database for PostgreSQL server.

When the database is created, you can go to the resource by selecting the Go to resource link.
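The same Single Server creation can be sketched with the Azure CLI. The names are hypothetical and GP_Gen5_2 is an example general-purpose SKU; the command is echoed as a dry run:

```shell
# Hypothetical example values -- substitute your own (don't commit passwords).
SERVER_NAME="msdocs-web-app-postgres-database-123"
RESOURCE_GROUP="msdocs-web-app-rg"
LOCATION="eastus"
ADMIN_USER="demoadmin"
ADMIN_PASSWORD="<your-admin-password>"

# Create an Azure Database for PostgreSQL Single Server instance. Dry run.
CMD="az postgres server create --name $SERVER_NAME --resource-group $RESOURCE_GROUP --location $LOCATION --admin-user $ADMIN_USER --admin-password $ADMIN_PASSWORD --sku-name GP_Gen5_2"
echo "$CMD"
```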
2. Add database firewall rules
In this step, you'll add firewall rules that allow:
The web app to access to the database server. This access is enabled with a database firewall rule that
accepts connections from all Azure resources. In a production system, you should turn off this rule and
use an Azure Virtual Network (VNet). This firewall rule can also be useful during database configuration
when you might use an Azure Cloud Shell (an Azure resource) with psql to access the database.
Your local environment to access the database server. This access is useful for subsequent configuration
steps especially but should be turned off after configuration and deployment is completed.

Azure portal
VS Code
Azure CLI


Add a rule to allow your web app to access the PostgreSQL server.
1. In the left resource menu for the server, select Networking.
2. Select Yes next to Allow public access to Azure services.
3. Select + Add current client IP address if you haven't already and you'll connect to the database from your local environment.
4. Select Save to save the changes.
To secure communication between production web apps and
database servers, use an Azure Virtual Network (VNet).

3. Create a database
psql
VS Code

In your local environment, or anywhere you can use the PostgreSQL interactive terminal psql such as the Azure
Cloud Shell, connect to the PostgreSQL database server to create the restaurant database.
Start psql:

psql --host=<server-name>.postgres.database.azure.com \
--port=5432 \
--username=<admin-user>@<server-name> \
--dbname=postgres

The values of <server-name> and <admin-user> are the values from a previous step, used in the creation of the
PostgreSQL database service. The command above will prompt you for the admin password. If you have trouble
connecting, restart the database and try again. If you're connecting from your local environment, your IP
address must be added to the firewall rule list for the database service.
At the postgres=> prompt, create the database:
CREATE DATABASE restaurant;

The semicolon (";") at the end of the command is necessary. To verify that the restaurant database was successfully created, use the command \c restaurant to change the prompt from postgres=> (the default) to restaurant=>. Type \? to show help or \q to quit.

You can also create a database using Azure Data Studio or any other IDE, and Visual Studio Code with the Azure
Tools extension pack installed.

4. Configure managed identity for PostgreSQL


When you configure managed identity for PostgreSQL, you enable the web app to securely connect to the
database without a password. Instead, the App Service authenticates to PostgreSQL with a managed identity. For
more information, see Authenticating Azure-hosted apps to Azure resources with the Azure SDK for Python.
The configuration of managed identity for PostgreSQL can be broken into two steps:
Set an Active Directory admin for the PostgreSQL database.
Create a role for the managed identity in the PostgreSQL database.
Set an Active Directory admin for the PostgreSQL database
In this step, you'll create an Azure Active Directory user as the administrator for the Azure Database for
PostgreSQL server. For more information, see Use Azure Active Directory for authentication with PostgreSQL.

Azure portal
Azure CLI


In the Azure portal for the PostgreSQL database:
1. In the resource menu, select Active Directory admin.
2. Select Set admin to add a user.

On the Active Directory admin page:
1. Search for a user account and select it.
2. Select Select to add the user as Active Directory administrator.

After selecting a user, be sure to select Save to apply the


change.
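The same admin assignment can be sketched with the Azure CLI (hypothetical values; the object ID is the Azure AD user's object ID, and the command is echoed as a dry run):

```shell
# Hypothetical example values -- substitute your own.
SERVER_NAME="msdocs-web-app-postgres-database-123"
RESOURCE_GROUP="msdocs-web-app-rg"
AAD_USER="[email protected]"
AAD_OBJECT_ID="00000000-0000-0000-0000-000000000000"

# Set an Azure AD administrator on the PostgreSQL Single Server. Dry run.
CMD="az postgres server ad-admin create --server-name $SERVER_NAME --resource-group $RESOURCE_GROUP --display-name $AAD_USER --object-id $AAD_OBJECT_ID"
echo "$CMD"
```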

Create a role for the managed identity in the PostgreSQL database


The role you'll create is the role used by the web app (App Service) to connect to the PostgreSQL server. Specify
the role user name webappuser and a password that is equal to the application ID of the managed identity for
the web app.
Before you can create the role, you need to get the application ID that was created when you configured the system-assigned managed identity in a previous step in this tutorial. The application ID is different from the Object (principal) ID created when you configured managed identity for the App Service.
Azure portal
Azure CLI


In the Azure portal:
1. Go to the Azure Active Directory resource.
2. In the Search your tenant field, search for the name of the web app for which you configured managed identity.

On the Enterprise Application page:
1. Select Overview for the returned Enterprise Application.
2. In the properties of the enterprise application, copy the Application ID for use later.
Save the application ID for the next step.

Next, you need to grant the identity permission to access the database. This grant is done by creating a new role
that identifies the managed identity as one that can access the database. If you are already in the Azure portal,
you can use the Azure Cloud Shell to complete this task.

TIP
Alternatively, you can connect to the database with a local instance of PostgreSQL or Azure Data Studio. For the
PostgreSQL interactive terminal psql used locally, you still need to generate a token with az account get-access-token.
Azure Data Studio is integrated with Azure Active Directory such that the token is generated automatically. Regardless of
how you connect, make sure you specify the user name as <azure-ad-user-name>@<server-name>.

If you sign into the Cloud Shell with an account other than the one that was set as admin for PostgreSQL, then
change accounts with az login .
In a Cloud Shell, you can choose between Bash and PowerShell.

bash
PowerShell terminal
# Sign into Azure as the Azure AD user that was set as Active Directory admin
# az login

# Get an access token for PostgreSQL with the Azure AD user


token=$(az account get-access-token \
--resource-type oss-rdbms \
--output tsv \
--query accessToken)

# View token to confirm


echo $token

# Sign into the Postgres server


psql "host=<server-name>.postgres.database.azure.com \
port=5432 \
dbname=restaurant \
user=<aad-user-name>@<server-name> \
password=$token \
sslmode=require"

In the PostgreSQL database, run the following commands to create a role that the web app will use to access the
database.

SET aad_validate_oids_in_tenant = off;


CREATE ROLE webappuser
WITH LOGIN PASSWORD '<application-id-of-system-assigned-managed-identity>'
IN ROLE azure_ad_user;

You'll use the user name webappuser as an App Service configuration setting in the next step.

Next step
Deploy the Python app to Azure >>>
Deploy and configure a Python web app in Azure
with managed identity
10/28/2022

This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll configure the App Service and then deploy
the Python app to it.

1. Configure the web app in Azure


With the web app, storage account, and PostgreSQL database resources created, the next step is to tell the web
app how to connect to the Azure Storage account and Azure Database for PostgreSQL service.
The Python sample code expects environment variables named DBHOST , DBNAME , DBUSER , STORAGE_ACCOUNT_NAME
, and STORAGE_CONTAINER_NAME to connect to the storage and database resources. You don't specify an access key
for storage or a password for the database because authentication is handled by managed identity.
Azure portal
VS Code
Azure CLI


Go to the App Service page for the web app.


1. Select Configuration under Settings on the
left resource menu.
2. Select Application settings at the top of the
page.

Create application settings:


1. Select + New application setting to create
the following settings:
DBHOST → Use the server name you used
earlier when you created the database, for
example, msdocs-web-app-postgres-database-<unique id>.
The sample code appends
.postgres.database.azure.com to create the
fully qualified PostgreSQL server URL.
DBNAME → Enter restaurant, the name of the
application database.
DBUSER → Enter webappuser, the user you
created for the managed identity in the
previous article. The sample code constructs
the correct Postgres username from DBUSER
and DBHOST , so don't include the @server
portion.
STORAGE_ACCOUNT_NAME → The name of
the storage account, which the sample code
combines with blob.core.windows.net to
create the storage URL endpoint.
STORAGE_CONTAINER_NAME → The name of
the container in the storage account, where
photos are stored. For example, photos.
2. Confirm you have five settings with the correct
values.
3. Select Save to apply the settings.
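If you prefer the Azure CLI to the portal steps above, the same five settings can be applied in one az webapp config appsettings set command; the angle-bracket values are placeholders for your own resource names:

```shell
az webapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <app-service-name> \
    --settings DBHOST=<server-name> \
               DBNAME=restaurant \
               DBUSER=webappuser \
               STORAGE_ACCOUNT_NAME=<storage-account-name> \
               STORAGE_CONTAINER_NAME=photos
```

The command prints the resulting settings as JSON, which you can use to confirm all five values.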

2. Deploy the Python web app to Azure


Azure App Service supports multiple ways to deploy your application code to Azure including support for
GitHub Actions and all major CI/CD tools. This article focuses on how to deploy your code from your local
workstation to Azure.

Deploy using VS Code


Deploy using Local Git
Deploy using a ZIP file

To deploy a web app from VS Code, you must have the Azure Tools extension pack installed and be signed into
Azure from VS Code.


Locate the Azure icon in the left-hand toolbar and select it to
bring up the Azure Tools for VS Code extension.

In the App Services section of the Azure Tools extension:


1. Locate your web app and right-click it to bring up
the context menu. (Make sure you're viewing
resources by Group by Resource Type.)
2. Select Deploy to Web App... from the menu.
In the Visual Studio Code prompt that appears, select your
web app as the web app to deploy.

Select Deploy in the dialog box.

Select Yes to update your build configuration and improve
deployment performance.

When the deployment is complete, a dialog box will appear
in the lower right corner of the screen with an option to
browse to the website. If you use this link, the web page will
report an error because the web app isn't ready until you do
the migration in the next step. You may see another dialog
box warning of this problem.

3. Create the database schema


With the code deployed and the database in place, the app is almost ready to use. As a final step, you need to
establish the necessary schema in the database. You create the schema by "migrating" the data models stored
with the app code to the PostgreSQL database.
Step 1. Create an SSH session and connect to the web app server.

Azure portal
VS Code
Azure CLI

Navigate to the page for the App Service instance in the Azure portal.
1. Select SSH, under Development Tools on the left resource menu.
2. Then select Go to open an SSH console on the web app server. It may take a minute to connect the first time.
If you can't connect with SSH, see Troubleshooting tips.
Step 2. In the SSH session, run commands to migrate the models into the database:

Flask
Django

When you deploy the Flask sample app to Azure App Service, the database tables are automatically created in
Azure Database for PostgreSQL server. If you try to run flask db init you'll receive the message "Directory
migrations already exists and is not empty."
If you can't migrate the models, see Troubleshooting tips.
TIP
In an SSH session, for Django you can also create users with the python manage.py createsuperuser command like
you would with a typical Django app. For more information, see the Django documentation for django-admin and
manage.py. Use the superuser account to access the /admin portion of the web site. For Flask, use an extension such as
Flask-Admin to provide the same functionality.

4. Test the Python web app in Azure


The sample Python app uses the azure.identity package and its DefaultAzureCredential class. The
DefaultAzureCredential automatically detects that a managed identity exists for the App Service and uses it to
access other Azure resources (storage and Postgres in this case). There's no need to provide storage keys,
certificates, or credentials to the App Service to access these resources.
Browse to the deployed application at the URL http://<app-name>.azurewebsites.net . It can take a minute or two
for the app to start. If you see a default app page that isn't the sample app page, wait a minute and
refresh the browser.
To test the functionality of the sample app, add a restaurant and then add some reviews with photos for the
restaurant. The restaurant and review information is stored in Azure Database for PostgreSQL and the photos
are stored in Azure Storage. Here's an example screenshot:

5. Troubleshooting tips
Here are a few tips for troubleshooting your deployment:
When you deploy Python code to App Service, a built-in Linux container is created to run the web app. If a
deployment isn't successful, in the Azure portal check the Deployment Center | Logs generated during
the build of the container to confirm the deployment failed. If there was a failure, go to the Diagnose
and solve problems resource of the App Service to check the diagnostic logging. The Application
logging logs are the most useful for troubleshooting failed deployments. Be sure to check the timestamp
of the logging entries to make sure they correspond to the deployment you're troubleshooting. There
may be a delay in writing the logs and you might need to wait to see the logging information for the
deployment.
If you encounter errors related to connecting to the database while doing the migration, check the values
of the application settings of the App Service, specifically DBHOST , DBNAME , and DBUSER . Without these
settings, the web app can't communicate with the database.
If you have the database connection information correctly specified, confirm that you set up managed
identity for the database correctly.
If you can't open an SSH session to connect to your Azure App Service, then the app might have failed to
start. Check the diagnostic logs for details, and in particular, the application logs. Errors can occur for
many reasons. For example, if you haven't created the necessary app settings in the previous section, the
logs will indicate KeyError: 'DBNAME' .
Check that there's an App Service configuration setting SCM_DO_BUILD_DURING_DEPLOYMENT set to true or
1 . For more information and background on how Azure App Service runs Python apps, see Configure a
Linux Python app for Azure App Service.
If you're deploying to App Service using local Git and you specified the wrong credentials, they might get
cached, and you'll need to clear them. For more information about Git credentials, see Git Tools -
Credential Storage. On Windows, you can open Credential Manager / Windows Credentials, find the
credentials, and remove them.
If deployment is successful and the web app is running, print statements in the code write to the log
stream. In the Azure portal, go to the App Service and open the Log Stream resource. For more
information, see Enable diagnostics logging for apps in Azure App Service - Stream logs.

Next step
Clean up resources >>>
Clean up and next steps of managed identity
tutorial
10/28/2022 • 2 minutes to read

This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll clean up resources used in Azure so you
don't incur other charges and help keep your Azure subscription uncluttered. You can leave the Azure resources
running if you want to use them for further development work.

1. Clean up resources
In this tutorial, all the Azure resources were created in the same resource group. Removing the resource group
removes all resources in the resource group and is the fastest way to remove all Azure resources used for your
app.
Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to delete a resource group.


Navigate to the resource group in the Azure portal.


1. Enter the name of the resource group in the search
bar at the top of the page.
2. Under the Resource Groups heading, select the
name of the resource group to navigate to it.

Select the Delete resource group button at the top of the
page.

In the confirmation dialog, enter the name of the resource
group to confirm deletion. Select Delete to delete the
resource group.

2. Next steps
After completing this tutorial, here are some next steps you can take to build upon what you learned and move
the tutorial code and deployment closer to production ready:
Secure communication to your Azure Database for PostgreSQL server, see Use Virtual Network service
endpoints and rules for Azure Database for PostgreSQL - Single Server.
Map a custom DNS name to your app, see Tutorial: Map custom DNS name to your app.
Monitor App Service for availability, performance, and operation, see Monitoring App Service and Set up
Azure Monitor for your Python application.
Enable continuous deployment to Azure App Service, see Continuous deployment to Azure App Service,
Use CI/CD to deploy a Python web app to Azure App Service on Linux, and Design a CI/CD pipeline using
Azure DevOps.
For more details on how App Service runs a Python app, see Configure Python app.
To review PostgreSQL best practices, see Best practices for building an application with Azure Database for
PostgreSQL.
Learn more about security for Blob storage, see Security recommendations for Blob storage.

3. Related Learn modules


The following are some Learn modules that explore the technologies and themes covered in this tutorial:
Introduction to Python
Get started with Django
Create views and templates in Django
Create data-driven websites by using the Python framework Django
Deploy a Django application to Azure by using PostgreSQL
Azure Database for PostgreSQL
Create and connect to an Azure Database for PostgreSQL
Explore Azure Blob storage
Configure your local environment for deploying
Python web apps on Azure
10/28/2022 • 7 minutes to read

This article walks you through setting up your local environment to develop Python web apps and deploy them
to Azure. Your web app can be pure Python or use one of the common Python-based web frameworks like
Django, Flask, or FastAPI.
Python web apps developed locally can be deployed to services such as Azure App Service, Azure Container
Apps, or Azure Static Web Apps. There are many deployment options. For example, for App Service you can
choose to deploy from code, from a Docker container, or as a Static Web App. If you deploy from code, you can
deploy with Visual Studio Code, with the Azure CLI, from a local Git repository, or with GitHub Actions. If you
deploy in a Docker container, you can do so from Azure Container Registry, Docker Hub, or any private registry.
Before continuing with this article, we suggest you review Set up your dev environment for guidance on
setting up your dev environment for Python and Azure. Below, we'll discuss setup and configuration specific to
Python web app development.
After you get your local environment set up for Python web app development, you'll be ready to tackle these
articles:
Quickstart: Create a Python (Django or Flask) web app in Azure App Service.
Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
Tutorial: Deploy a Python web app to Azure with managed identity

Working with Visual Studio Code


The Visual Studio Code integrated development environment (IDE) is an easy way to develop Python web apps
and work with Azure resources that web apps use.

TIP
Make sure you have the Python extension installed. For an overview of working with Python in VS Code, see Getting Started
with Python in VS Code.

In VS Code, you work with Azure resources through VS Code extensions. You can install extensions from the
Extensions view or with the key combination Ctrl+Shift+X. For Python web apps, you'll likely be working with one
or more of the following extensions:
The Azure App Service extension enables you to interact with Azure App Service from within Visual
Studio Code. App Service provides fully managed hosting for web applications including websites and
web APIs.
The Azure Static Web Apps extension enables you to create Azure Static Web Apps directly from VS Code.
Static Web Apps is serverless and a good choice for static content hosting.
If you plan on working with containers, then install:
The Docker extension to build and work with containers locally. For example, you can run a
containerized Python web app on Azure App Service using Web Apps for Containers.
The Azure Container Apps extension to create and deploy containerized apps directly from Visual
Studio Code.
There are other extensions such as the Azure Storage, Azure Databases, and Azure Resources extensions.
You can always add these and other extensions as needed.
Extensions in Visual Studio Code are accessible as you would expect in a typical IDE interface and with rich
keyword support using the VS Code command palette. To access the command palette, use the key combination
Ctrl+Shift+P. The command palette is a good way to see all the possible actions you can take on an Azure
resource. The screenshot below shows some of the actions for App Service.

Working with other IDEs


If you're working in another IDE that doesn't have explicit support for Azure, then you can use the Azure CLI to
manage Azure resources. In the screenshot below, a simple Flask web app is open in the PyCharm IDE. The web
app can be deployed to an Azure App Service using the az webapp up command. In the screenshot, the CLI
command runs within the PyCharm embedded terminal emulator. If your IDE doesn't have an embedded
emulator, you can use any terminal and the same command. The Azure CLI must be installed on your computer
and be accessible in either case.

Azure CLI commands


When working locally with web apps using the Azure CLI commands, you'll typically work with the following
commands:

COMMAND           DESCRIPTION

az webapp         Manages web apps. Includes the subcommands create to
                  create a web app and up to create and deploy from a
                  local workspace.

az containerapp   Manages Azure Container Apps.

az staticwebapp   Manages Azure Static Web Apps.

az group          Manages resource groups and template deployments. Use
                  the subcommand create to create a resource group to put
                  your Azure resources in.

az appservice     Manages App Service plans.

az config         Manages Azure CLI configuration. To save keystrokes, you
                  can define a default location or resource group that other
                  commands use automatically.
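As a concrete example of the az config entry above, you can set a default resource group and location once so later commands can omit --resource-group and --location; the names here are placeholders:

```shell
# Store defaults that subsequent az commands pick up automatically
az config set defaults.group=rg-demo defaults.location=eastus
```

After this, a command like az webapp up no longer needs those two arguments.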

Here's an example Azure CLI command to create a web app and associated resources, and deploy it to Azure in
one command using az webapp up. Run the command in the root directory of your web app.

bash
PowerShell terminal

az webapp up \
--runtime PYTHON:3.9 \
--sku B1 \
--logs

For more about this example, see Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service.
Keep in mind that for some of your Azure workflow you can also use the Azure CLI from an Azure Cloud Shell.
Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources.

Azure SDK key packages


In your Python web apps, you can refer programmatically to Azure services using the Azure SDK for Python.
This SDK is discussed extensively in the section Use the Azure libraries (SDK) for Python. In this section, we'll
briefly mention some key packages of the SDK that you'll use in web development. And, we'll show an example
around the best-practice for authenticating your code with Azure resources.
Below are some of the packages commonly used in web app development. You can install packages in your
virtual environment directly with pip , or put the Python Package Index (PyPI) package name in your
requirements.txt file.

SDK DOCS                  INSTALL                              PYTHON PACKAGE INDEX

Azure Identity            pip install azure-identity           azure-identity

Azure Storage Blobs       pip install azure-storage-blob       azure-storage-blob

Azure Cosmos DB           pip install azure-cosmos             azure-cosmos

Azure Key Vault Secrets   pip install azure-keyvault-secrets   azure-keyvault-secrets

The azure-identity package allows your web app to authenticate with Azure Active Directory (Azure AD). For
authentication in your web app code, it's recommended that you use the DefaultAzureCredential in the
azure-identity package. Here's an example of how to access Azure Storage. The pattern is similar for other
Azure resources.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

azure_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url=account_url,
    credential=azure_credential)

The DefaultAzureCredential will look in predefined locations for account information, for example, in
environment variables, in the VS Code Account extension, or from the Azure CLI sign-in. For in-depth
information on the DefaultAzureCredential logic, see Authenticate Python apps to Azure services by using the
Azure SDK for Python.

Python-based web frameworks


In Python web app development, you often work with Python-based web frameworks. These frameworks
provide functionality such as page templates, session management, database access, and easy access to HTTP
request and response objects. Frameworks let you avoid reinventing the wheel for common
functionality.
Three common Python web frameworks are Django, Flask, and FastAPI. These and other web frameworks can be
used with Azure.
Below is an example of how you might get started quickly with these frameworks locally. Running these
commands, you'll end up with an application, albeit a simple one, that could be deployed to Azure. Run these
commands inside a virtual environment.
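If you haven't already created one, a virtual environment can be set up like this (bash; on Windows, activate with .venv\Scripts\activate instead):

```shell
# Create a virtual environment in the project folder and activate it
python3 -m venv .venv
source .venv/bin/activate
```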
Step 1: Download the frameworks with pip.

Django
Flask
FastAPI

pip install Django

Step 2: Create a hello world app.


Django
Flask
FastAPI

Create a sample project using the django-admin startproject command. The project includes a manage.py file
that is the entry point for running the app.

django-admin startproject hello_world

Step 3: Run the code locally.

Django
Flask
FastAPI

Django uses WSGI to run the app.

python hello_world\manage.py runserver

Step 4: Browse the hello world app.


Django
Flask
FastAPI

http://127.0.0.1:8000/

At this point, add a requirements.txt file and then you can deploy the web app to Azure or containerize it with
Docker and then deploy it.
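One way to produce that requirements.txt is to snapshot the active virtual environment with pip freeze; review the result and prune anything your app doesn't actually need:

```shell
# Capture the packages installed in the current environment
python3 -m pip freeze > requirements.txt
```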

Next steps
Quickstart: Create a Python (Django or Flask) web app in Azure App Service.
Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
Tutorial: Deploy a Python web app to Azure with managed identity
Configure a custom startup file for Python apps on
Azure App Service
10/28/2022 • 4 minutes to read

In this article, you learn about configuring a custom startup file, if needed, for a Python web app hosted on
Azure App Service. You don't need a startup file for running locally. However, when you deploy a web app to
Azure App Service, your code runs in a Docker container that uses startup commands if they're present.
You need a custom startup file in the following cases:
You want to start the Gunicorn default web server with extra arguments beyond the defaults, which are
--bind=0.0.0.0 --timeout 600 .

Your app is built with a framework other than Flask or Django, or you want to use a different web server
besides Gunicorn.
You have a Flask app whose main code file is named something other than app.py or application.py, or
the app object is named something other than app .
In other words, unless you have an app.py or application.py in the root folder of your project and the
Flask app object is named app , you need a custom startup command.
For more information, see Configure Python Apps - Container startup process.

Create a startup file


When you need a custom startup file, use the following steps:
1. Create a file in your project named startup.txt, startup.sh, or another name of your choice that contains
your startup command(s). See the later sections in this article for specifics on Django, Flask, and other
frameworks.
A startup file can include multiple commands if needed.
2. Commit the file to your code repository so it can be deployed with the rest of the app.
3. In Visual Studio Code, select the Azure icon in the Activity Bar, expand RESOURCES , find and expand
your subscription, expand App Ser vices , and right-click the App Service, and select Open in Por tal .
4. In the Azure portal, on the Configuration page for the App Service, select General settings , enter the
name of your startup file (like startup.txt or startup.sh) under Stack settings > Star tup Command ,
then select Save .
NOTE
Instead of using a startup command file, you can put the startup command itself directly in the Star tup
Command field on the Azure portal. Using a command file is preferable, however, because this part of your
configuration is then in your repository where you can audit changes and redeploy to a different App Service
instance altogether.

5. The App Service restarts when you save the changes.


If you haven't deployed your app code, however, visiting the site at this point shows "Application Error."
This message indicates that the Gunicorn server started but failed to find the app, and therefore nothing
is responding to HTTP requests.

Django startup commands


By default, App Service automatically locates the folder that contains your wsgi.py file and starts Gunicorn with
the following command:

# <module> is the folder that contains wsgi.py. If you need to use a subfolder,
# specify the parent of <module> using --chdir.
gunicorn --bind=0.0.0.0 --timeout 600 <module>.wsgi

If you want to change any of the Gunicorn arguments, such as using --timeout 1200 , then create a command
file with those modifications. For more information, see Container startup process - Django app.

Flask startup commands


By default, the App Service on Linux container assumes that a Flask app's WSGI callable is named app and is
contained in a file named application.py or app.py and resides in the app's root folder.
If you use any of the following variations, then your custom startup command must identify the app object's
location in the format file:app_object:
Different file name and/or app object name : for example, if the app's main code file is hello.py and
the app object is named myapp , the startup command is as follows:

gunicorn --bind=0.0.0.0 --timeout 600 hello:myapp

Star tup file is in a subfolder : for example, if the startup file is myapp/website.py and the app object is
app , then use Gunicorn's --chdir argument to specify the folder and then name the startup file and app
object as usual:

gunicorn --bind=0.0.0.0 --timeout 600 --chdir myapp website:app

Star tup file is within a module : in the python-sample-vscode-flask-tutorial code, the webapp.py
startup file is contained within the folder hello_app, which is itself a module with an __init__.py file. The
app object is named app and is defined in __init__.py and webapp.py uses a relative import.
Because of this arrangement, pointing Gunicorn to webapp:app produces the error, "Attempted relative
import in non-package," and the app fails to start.
In this situation, create a shim file that imports the app object from the module, and then have Gunicorn
launch the app using the shim. The python-sample-vscode-flask-tutorial code, for example, contains
startup.py with the following contents:

from hello_app.webapp import app

The startup command is then:

gunicorn --bind=0.0.0.0 --workers=4 startup:app

For more information, see Container startup process - Flask app.

Other frameworks and web servers


The App Service container that runs Python apps has Django and Flask installed by default, along with the
Gunicorn web server.
To use a framework other than Django or Flask (such as Falcon, FastAPI, etc.), or to use a different web server:
Include the framework and/or web server in your requirements.txt file.
In your startup command, identify the WSGI callable as described in the previous section for Flask.
To launch a web server other than Gunicorn, use a python -m command instead of invoking the server
directly. For example, the following command starts the uvicorn server, assuming that the app object
is named app and is found in application.py:

python -m uvicorn application:app --host 0.0.0.0

You use python -m because web servers installed via requirements.txt aren't added to the Python global
environment and therefore can't be invoked directly. The python -m command invokes the server from
within the current virtual environment.
Overview: Cloud-based, serverless ETL using
Python on Azure
10/28/2022 • 2 minutes to read

This series shows you one way to create a serverless, cloud-based Extract, Transform, and Load Python solution
using an Azure Function App.

The Azure Function App securely ingests data from Azure Blob Storage. Then, the data is processed using Pandas
and loaded into an Azure Data Lake Store. Finally, the source data file is archived using the cool access tier in
Azure Blob Storage.
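The transform stage of this pipeline can be prototyped locally before wiring it into a Function App. Here's a minimal Pandas sketch; the column names and cleanup step are illustrative, not the series' actual dataset:

```python
import io

import pandas as pd

def transform(csv_bytes: bytes) -> bytes:
    """Toy transform: parse raw CSV bytes, clean a column, return CSV bytes.

    In the real pipeline, the input would come from a Storage Blob and the
    output would be written to Azure Data Lake Storage.
    """
    df = pd.read_csv(io.BytesIO(csv_bytes))
    # Example cleanup: strip whitespace and normalize capitalization
    df["name"] = df["name"].str.strip().str.title()
    return df.to_csv(index=False).encode("utf-8")
```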

Next Step
Next: Get started >>>
Create resources for a cloud-based, serverless ETL
solution using Python on Azure
10/28/2022 • 12 minutes to read

This article shows you how to use Azure CLI to deploy and configure the Azure resources used for our cloud-
based, serverless ETL.

IMPORTANT
To complete each part of this series, you must create all of these resources in advance. Create each of the resources in a
single resource group for organization and ease of resource clean-up.

Prerequisites
Before you can begin the steps in this article, complete the tasks below:
An Azure subscription; if you don't have one, create one for free
Python 3.7 or later is installed.

python --version

Azure CLI (2.0.46 or later); the CLI commands can be run in the Azure Cloud Shell or you can install Azure
CLI locally.

az --version

Visual Studio Code on one of the supported platforms is installed

code --version

Install the latest version of Azure Functions Core Tools, version 4 or later.
func --version

Install Visual Studio Code extensions:


Visual Studio Code Python extension
Visual Studio Code Azure CLI Tools extension
Visual Studio Code Azure Functions extension

1. Set up your dev environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Step 1: Run az login to sign into Azure.

az login

Step 2: When using the Azure CLI, you can turn on the param-persist option that automatically stores
parameters for continued use. To learn more, see Azure CLI persisted parameter. [optional]

az config param-persist on

IMPORTANT
Be sure to create and activate a local virtual environment for this project.

2. Create an Azure Resource Group


Create an Azure Resource Group to organize the Azure services used in this series logically.
Azure Resource Groups can also provide more insights through resource monitoring and cost management.
Step 1: Run az group create to create a resource group for this series.

service_location='eastus'
resource_group_name='rg-cloudetl-demo'

# Create an Azure Resource Group to organize the Azure services used in this series logically
az group create \
--location $service_location \
--name $resource_group_name

NOTE
You can not host Linux and Windows apps in the same resource group. Suppose you have an existing resource group
named rg-cloudetl-demo with a Windows function app or web app. In that case, you must use a different resource group.

3. Configure Azure Blob Storage


Azure Blob Storage is a general-purpose, object storage solution. In this series, blob storage acts as a landing
zone for 'source' system data and is a common data engineering scenario.
Create an Azure Storage Account
An Azure Storage Account is a namespace in Azure to store data. The blob storage URL combines the storage
account name and the base Azure Storage Blob endpoint address, so the storage account name must be
unique.
The below instructions create the Azure Storage Account programmatically. However, you can also create a
storage account using the Azure portal.
Step 1: Run az storage account create to create a Storage Account with Kind StorageV2, and assign an
Azure Identity.

storage_acct_name='stcloudetldemodata'

# Create a general-purpose storage account in your resource group and assign it an identity
az storage account create \
--name $storage_acct_name \
--resource-group $resource_group_name \
--location $service_location \
--sku Standard_LRS \
--assign-identity

Step 2: Run az role assignment create to add the 'Storage Blob Data Contributor' role to your user
account.

user_email='[email protected]'

# Assign the 'Storage Blob Data Contributor' role to your user
az role assignment create \
    --assignee $user_email \
    --role 'Storage Blob Data Contributor' \
    --resource-group $resource_group_name

IMPORTANT
Role assignment creation could take a minute to apply in Azure. It is recommended to wait a moment before running the
next command in this article.

Create a Container in the Storage Account


Containers organize blob data, similar to directories in a file system. A container can store an unlimited number
of blobs, and a storage account can have multiple containers.
The below instructions create the containers programmatically. However, you can also create a
container using the Azure portal.
Step 1: Run az storage container create to create two new containers in your Storage Account, one for
the source data and the other for archiving processed files.
abs_container_name='demo-cloudetl-data'
abs_archive_container_name='demo-cloudetl-archive'

# Create storage containers in the storage account
az storage container create \
    --name $abs_container_name \
    --account-name $storage_acct_name \
    --auth-mode login

az storage container create \
    --name $abs_archive_container_name \
    --account-name $storage_acct_name \
    --auth-mode login

Step 2: Run az storage account show to capture the storage account ID.

storage_acct_id=$(az storage account show \
    --name $storage_acct_name \
    --resource-group $resource_group_name \
    --query 'id' \
    --output tsv)

Step 3: Run az storage account keys list to capture one of the storage account access keys for the next
section.

# Capture storage account access key1
storage_acct_key1=$(az storage account keys list \
    --resource-group $resource_group_name \
    --account-name $storage_acct_name \
    --query [0].value \
    --output tsv)

4. Configure Azure Data Lake Gen2


Azure Data Lake Storage Gen 2 (ADLS) is built upon the Azure Blob File System (ABFS) over TLS/SSL for
encryption. An optimized driver for big data workloads was also added to ADLS Gen 2. This feature, along with
the cost savings, available storage tiers, and high-availability & disaster recovery options of blob storage, make
ADLS Gen 2 the ideal storage solution for big data analytics.
Create Azure Data Lake Storage Account
A storage account is created the same way for ADLS Gen 2 as for Azure Blob Storage. The only difference is that
the hierarchical namespace (HNS) property must be enabled. The hierarchical namespace is a fundamental part
of Data Lake Storage Gen2. This functionality enables the organization of objects/files into a hierarchy of
directories for efficient data access.
Step 1: Run az storage account create to create an Azure Data Lake Gen 2 Storage Account with Kind
StorageV2, HNS enabled, and assign an Azure Identity.
adls_acct_name='dlscloudetldemo'
fsys_name='processed-data-demo'
dir_name='finance_data'

# Create an ADLS Gen2 account
az storage account create \
--name $adls_acct_name \
--resource-group $resource_group_name \
--kind StorageV2 \
--hns \
--location $service_location \
--assign-identity

Step 2: Run az storage account keys list to capture one of the ADLS storage account access keys for the
next section.

adls_acct_key1=$(az storage account keys list \
--resource-group $resource_group_name \
--account-name $adls_acct_name \
--query [0].value \
--output tsv)

NOTE
It is easy to turn a data lake into a data swamp, so it is important to govern the data that resides in your data lake.
Azure Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud,
and software-as-a-service (SaaS) data. Easily create a holistic, up-to-date map of your data landscape with automated
data discovery, sensitive data classification, and end-to-end data lineage.

Configure Data Lake Storage structure


When loading data into a data lake, plan for security, efficient processing, and partitioning. Azure Data
Lake Storage Gen 2 uses directories instead of the virtual folders in blob storage. Directories allow for more
precise security and access control, and for directory-level filesystem operations.
Step 1: Run az storage fs create to create a file system in ADLS Gen 2. A file system contains files and
folders, similar to how a container in Azure Blob Storage contains blobs.

# Create a file system in ADLS Gen2
az storage fs create \
--name $fsys_name \
--account-name $adls_acct_name \
--auth-mode login

Step 2: Run az storage fs directory create to create the directory (folder) in the newly created file system
to land our processed data.

# Create a directory in ADLS Gen2 file system
az storage fs directory create \
--name $dir_name \
--file-system $fsys_name \
--account-name $adls_acct_name \
--auth-mode login

5. Set up Azure Key Vault


In the past, it was common practice to keep sensitive information out of the application code by storing it in a
'config.json' file. However, the sensitive information was still stored in plain text. Additionally, in Azure, the
developer also had to manually copy the values from the local app settings file to the Azure app configuration
settings.
A better approach is to use Azure Key Vault. Azure Key Vault is a centralized cloud solution for storing and
managing sensitive information, such as passwords, certificates, and keys. Using Azure Key Vault also provides
better access monitoring and logs to see who accesses a secret, when, and how.
Configure Azure Key Vault and secrets
Create a new Azure Key Vault within your resource group.
Step 1: Run az keyvault create to create an Azure Key Vault.

key_vault_name='kv-cloudetl-demo'

# Provision new Azure Key Vault in our resource group
az keyvault create \
--location $service_location \
--name $key_vault_name \
--resource-group $resource_group_name

Step 2: Set a 'secret' in Azure Key Vault to store the Blob Storage Account access key. Run az keyvault
secret set to create and set a secret in Azure Key Vault.

abs_secret_name='abs-access-key1'
adls_secret_name='adls-access-key1'

# Create Secret for Azure Blob Storage Account
az keyvault secret set \
--vault-name $key_vault_name \
--name $abs_secret_name \
--value $storage_acct_key1

# Create Secret for Azure Data Lake Storage Account
az keyvault secret set \
--vault-name $key_vault_name \
--name $adls_secret_name \
--value $adls_acct_key1

IMPORTANT
If your secret value contains special characters, you need to 'escape' each special character by wrapping it in
double quotes, and wrap the entire string in single quotes. Otherwise, the secret value is not set correctly.
Will not work: "This is my secret value & it has a special character."
Will not work: "This is my secret value '&' it has a special character."
Will work: 'this is my secret value "&" it has a special character'
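If you script secret creation from Python, the standard library can produce a safely quoted value for you. This is an illustrative sketch (not one of the tutorial's required steps); shlex.quote wraps the value so the shell treats it as a single literal argument:

```python
import shlex

# Assumed example value; wrap it once with shlex.quote instead of
# hand-placing quotes around each special character.
secret_value = 'this is my secret value & it has a special character'
quoted = shlex.quote(secret_value)
command = f"az keyvault secret set --vault-name kv-cloudetl-demo --name demo-secret --value {quoted}"
print(command)
```

Because the value contains no single quotes, shlex.quote simply wraps the whole string in single quotes, which is exactly the 'Will work' form above.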

Set environment variables


This application uses the key vault name as an environment variable called KEY_VAULT_NAME.

export KEY_VAULT_NAME=$key_vault_name
export ABS_SECRET_NAME=$abs_secret_name
export ADLS_SECRET_NAME=$adls_secret_name
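For reference, the function code later in this series reads these variables with os.environ. A minimal sketch of that pattern, using a hypothetical helper and a simulated value rather than a real deployment:

```python
import os

def read_required_setting(name: str) -> str:
    """Return an environment variable's value or fail with a clear error.
    (Hypothetical helper for illustration; not part of the tutorial code.)"""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required setting: {name}")
    return value

# Simulate the export above so this example is self-contained.
os.environ["KEY_VAULT_NAME"] = "kv-cloudetl-demo"
vault_name = read_required_setting("KEY_VAULT_NAME")
vault_uri = f"https://{vault_name}.vault.azure.net"
print(vault_uri)
```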

6. Create a serverless function


A serverless architecture builds and runs services without infrastructure management, such as provisioning,
scaling, and maintaining the resources required to run the Function App. Azure handles these management
tasks in the background, allowing developers to focus on building the app.
Create a local Python Function project
A local Python Function project is needed to build and execute our function during development. Create a
function project using the Azure Functions Core Tools by following the steps below.
Step 1: Run the func init command to create a functions project in a folder named
CloudETLDemo_Local:

func init CloudETLDemo_Local --python

Step 2: Navigate into the project folder:

cd CloudETLDemo_Local

Step 3: Add functions to your project by using the following command, where the --name argument is
the unique name of your function and the --template argument specifies the function's trigger (HTTP).

func new --name demo_relational_data_cloudetl --template "HTTP trigger" --authlevel "anonymous"

Troubleshooting: If you get the error "Functions version 2 is not supported for runtime python with
version 3.7 and os linux. Supported functions versions are ['4', '3'].", update your
azure-functions-core-tools using the command
npm i -g azure-functions-core-tools@4 --unsafe-perm true. To make sure all references to the
previous version are removed, the best practice is to remove all files and folders created by func init
and rerun the steps in this section.
Step 4: Check that the function was correctly created by running the function locally. Start the local Azure
Functions runtime host from the CloudETLDemo_Local folder:

func start

Step 5: Grab the localhost URL at the bottom and append '?name=Functions' to the query string.

http://localhost:7071/api/demo_relational_data_cloudetl?name=Functions

Step 6: When finished, press Ctrl+C and choose y to stop the functions host.
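The name value appended in Step 5 is an ordinary query-string parameter. A small standard-library sketch of how such a URL is parsed, mirroring what the trigger does when it reads req.params.get('name'):

```python
from urllib.parse import urlsplit, parse_qs

# The local invoke URL from Step 5.
url = "http://localhost:7071/api/demo_relational_data_cloudetl?name=Functions"
query = parse_qs(urlsplit(url).query)
# parse_qs maps each parameter to a list of values; take the first.
name = query.get("name", ["anonymous"])[0]
print(name)
```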
Initialize a Python Function App in Azure
An Azure Function App must be created to host our data ingestion function. This Function App is where we
deploy our local development function once it's complete.
Step 1: Run az functionapp create to create the function app in Azure.
funcapp_name='CloudETLFunc'

# Create a serverless function app in the resource group.
az functionapp create \
--name $funcapp_name \
--storage-account $storage_acct_name \
--consumption-plan-location $service_location \
--resource-group $resource_group_name \
--os-type Linux \
--runtime python \
--runtime-version 3.7 \
--functions-version 2

NOTE
The function app name is also used as the default DNS domain for the function app.

Step 2: Run az functionapp config appsettings set to store the Azure Key Vault name and the Key Vault
secret names in the function app's application settings.

# Update function app's settings to include the Azure Key Vault environment variable.
az functionapp config appsettings set --name $funcapp_name --resource-group $resource_group_name \
--settings "KEY_VAULT_NAME=$key_vault_name"

# Update function app's settings to include the Azure Blob Storage access key's Key Vault secret name.
az functionapp config appsettings set --name $funcapp_name --resource-group $resource_group_name \
--settings "ABS_SECRET_NAME=$abs_secret_name"

# Update function app's settings to include the Azure Data Lake Storage Gen 2 access key's Key Vault secret name.
az functionapp config appsettings set --name $funcapp_name --resource-group $resource_group_name \
--settings "ADLS_SECRET_NAME=$adls_secret_name"

7. Assign access policies and roles


A Key Vault access policy determines whether a given security principal (a user, application, or user group)
can perform different operations on secrets, keys, and certificates.
Step 1: Create an access policy in Azure Key Vault for the Azure Function App.
The below instructions assign access policies programmatically. However, you can also assign a Key Vault
access policy using the Azure portal.
# Generate managed service identity for function app
az functionapp identity assign \
--resource-group $resource_group_name \
--name $funcapp_name

# Capture function app managed identity id
func_principal_id=$(az resource list \
--name $funcapp_name \
--query [*].identity.principalId \
--output tsv)

# Capture key vault object/resource id
kv_scope=$(az resource list \
--name $key_vault_name \
--query [*].id \
--output tsv)

# Set permissions policy for function app to key vault - get, list, and set
az keyvault set-policy \
--name $key_vault_name \
--resource-group $resource_group_name \
--object-id $func_principal_id \
--secret-permissions get list set

Step 2: Run az role assignment create to assign 'Key Vault Secrets User' built-in role to Azure Function
App.

# Create a 'Key Vault Secrets User' role assignment for function app managed identity
az role assignment create \
--assignee $func_principal_id \
--role 'Key Vault Secrets User' \
--scope $kv_scope

# Assign the 'Storage Blob Data Contributor' role to the function app managed identity
az role assignment create \
--assignee $func_principal_id \
--role 'Storage Blob Data Contributor' \
--resource-group $resource_group_name

# Assign the 'Storage Queue Data Contributor' role to the function app managed identity
az role assignment create \
--assignee $func_principal_id \
--role 'Storage Queue Data Contributor' \
--resource-group $resource_group_name

8. Upload a CSV Blob to the Container


To ingest relational data later in this series, upload a data file (blob) to an Azure Storage container.

NOTE
If you already have your data (blob) uploaded, you can skip to the next article in this series.

Sample Data

SEGMENT      COUNTRY   PRODUCT     UNITS SOLD   MANUFACTURING PRICE   SALE PRICE   GROSS SALES   DATE
Government   Canada    Carretera   1618.5       $3.00                 $20.00       $32,370.00    1/1/2014
Government   Germany   Carretera   1321         $3.00                 $20.00       $26,420.00    1/1/2014
Midmarket    France    Carretera   2178         $3.00                 $15.00       $32,670.00    6/1/2014
Midmarket    Germany   Carretera   888          $3.00                 $15.00       $13,320.00    6/1/2014
Midmarket    Mexico    Carretera   2470         $3.00                 $15.00       $37,050.00    6/1/2014
Step 1: Create a file named 'financial_sample.csv' locally that contains this data by copying the below
data into the file:

Segment,Country,Product,Units Sold,Manufacturing Price,Sale Price,Gross Sales,Date


Government,Canada,Carretera,1618.5,$3.00,$20.00,"$32,370.00",1/1/2014
Government,Germany,Carretera,1321,$3.00,$20.00,"$26,420.00",1/1/2014
Midmarket,France,Carretera,2178,$3.00,$15.00,"$32,670.00",6/1/2014
Midmarket,Germany,Carretera,888,$3.00,$15.00,"$13,320.00",6/1/2014
Midmarket,Mexico,Carretera,2470,$3.00,$15.00,"$37,050.00",6/1/2014
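The Gross Sales values are quoted because they contain commas; Python's csv module honors those quotes and keeps each value as a single field. A quick sketch using a shortened copy of the sample data:

```python
import csv
from io import StringIO

# Shortened copy of financial_sample.csv; the quoted Gross Sales value
# contains a comma but must remain a single field.
sample = (
    "Segment,Country,Product,Units Sold,Manufacturing Price,Sale Price,Gross Sales,Date\n"
    'Government,Canada,Carretera,1618.5,$3.00,$20.00,"$32,370.00",1/1/2014\n'
)
rows = list(csv.reader(StringIO(sample)))
header, first = rows[0], rows[1]
print(len(first), first[6])
```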

Step 2: Upload your data (blob) to your storage container by running az storage blob upload.

az storage blob upload \
--account-name $storage_acct_name \
--container-name $abs_container_name \
--name 'financial_sample.csv' \
--file 'financial_sample.csv' \
--auth-mode login

Next Step
Next: Securely ingest relational data >>>
Ingest data from Azure Blob Storage using a Python
Azure Function and Azure Key Vault
10/28/2022 • 7 minutes to read

In this article, you'll learn how to retrieve a secret from a Key Vault to securely access Azure Storage Blob data
using a serverless Python Function.

The data needed for analytics is typically gathered from various disparate data sources. Data ingestion is the
process of extracting data from these data sources into a data store, and it is the first step of an Extract,
Transform, and Load (ETL) solution. There are two types of data ingestion: batch processing and streaming.
Batch processing is when a large amount of data is processed at once, with subprocesses executing in
sequential order. This article focuses on batch processing, using a serverless Python Function to retrieve data
securely from Azure Blob Storage with Azure Key Vault.
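The sequential nature of batch processing can be sketched in a few lines of Python; the batch size and records here are purely illustrative:

```python
def batches(items, size):
    """Yield fixed-size batches in order, mirroring sequential batch processing."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Illustrative records and batch size: each batch is processed
# (summed here) before the next one begins.
records = list(range(7))
processed = [sum(batch) for batch in batches(records, 3)]
print(processed)
```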

Prerequisites
This article assumes you have set up your environment as described in the previous articles:
Configure your local Python dev environment for Azure
Create resources

TIP
Capture the below information from the previous article to use later in this article:
Azure Blob Storage Account name
Azure Blob Container name
Azure Key Vault name
Sample Data Filename
1. Install required Python Azure SDK libraries
Open the requirements.txt file created in the previous article and complete the following steps.
Step 1: Create and activate Python virtual environment.

# Create Python virtual environment
# [NOTE] On Windows, use py -3 -m venv .venv
python3 -m venv .venv

# Activate Python virtual environment
source .venv/bin/activate

Step 2: Review the file contents and ensure the following Python Azure SDK libraries are listed:

azure-identity
azure-storage-blob
azure-keyvault-secrets
azure-functions
pandas

Step 3: In a terminal, with a virtual environment activated, run the 'pip install' command to install the
required libraries.

pip install -r requirements.txt

2. Retrieve Key Vault secret in the Function


Storing secrets in an Azure Key Vault, rather than storing sensitive data in plain text, improves the security of
your sensitive information.
The Python Azure SDK key vault secret client library provides secret management. This code creates the client
object and retrieves the secret value for the Azure Blob Storage Account.
Step 1: Open the 'local.settings.json' file and add the environment variable values for local development.
Run the printenv command if you need to retrieve the Environment Variable values.

printenv ABS_SECRET_NAME
printenv ADLS_SECRET_NAME
printenv KEY_VAULT_NAME

Step 2: Open the '__init__.py' file of the demo_relational_data_cloudetl function and add the below
code.
import logging
import os
import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    abs_acct_name='stcloudetldemodata'
    abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
    abs_container_name='demo-cloudetl-data'

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

    except Exception as e:
        logging.info(e)
        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )

    return func.HttpResponse("This HTTP triggered function executed successfully.")

NOTE
In this example, the logged-in user is used to authenticate to Key Vault, which is the preferred method for local
development. A managed identity must be assigned to an App Service or Virtual Machine for applications deployed to
Azure. For more information, see Managed Identity Overview.

3. Ingest data from Azure Blob Storage with a serverless Function


Extract, Transform, and Load (ETL) is a popular approach used in data processing solutions. In ETL solutions,
data is extracted from one or more source systems, transformed in a 'staging' area, and then loaded into a
target data store, such as a data warehouse or data lake, where it can be consumed by analytic tools.
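The extract, transform, and load stages can be sketched as three small functions composed in order; the stage bodies here are hypothetical stand-ins for the blob reads and writes built later in this series:

```python
def extract():
    # Stand-in for reading raw rows from a source system.
    return [" Alice ", " Bob "]

def transform(rows):
    # Stand-in for cleansing in a staging area.
    return [row.strip().lower() for row in rows]

def load(rows, store):
    # Stand-in for writing to the target data store.
    store.extend(rows)

data_store = []
load(transform(extract()), data_store)
print(data_store)
```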
Step 1: Modify the code of your existing '__init__.py' file to begin the ETL process. This function will
securely extract raw data from blob storage into your serverless Azure Function.
import logging
import os
import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    abs_acct_name='stcloudetldemodata'
    abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
    abs_container_name='demo-cloudetl-data'

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

    except Exception as e:
        logging.info(e)
        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )

    return func.HttpResponse("This HTTP triggered function executed successfully.")

Step 2: Open the '__init__.py' file of the demo_relational_data_cloudetl function. Then add the below
code to gather a list of blobs.
import logging
import os
from io import StringIO
import pandas as pd
from datetime import datetime, timedelta

import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

def return_blob_files(container_client, arg_date, std_date_format):
    start_date = datetime.strptime(arg_date, std_date_format).date() - timedelta(days=1)

    blob_files = [blob for blob in container_client.list_blobs() if blob.creation_time.date() >= start_date]

    return blob_files

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    arg_date = '2014-07-01'
    std_date_format = '%Y-%m-%d'

    abs_acct_name='stcloudetldemodata'
    abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
    abs_container_name='demo-cloudetl-data'

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

        # Initialize Azure Service SDK Clients
        abs_service_client = BlobServiceClient(
            account_url = abs_acct_url,
            credential = az_credential
        )

        abs_container_client = abs_service_client.get_container_client(container=abs_container_name)

        # Run ETL Application
        process_file_list = return_blob_files(
            container_client = abs_container_client,
            arg_date = arg_date,
            std_date_format = std_date_format
        )

    except Exception as e:
        logging.info(e)

        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )

    return func.HttpResponse("This HTTP triggered function executed successfully.")
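The date filter inside return_blob_files can be exercised in isolation. In this sketch, SimpleNamespace objects stand in for the blob metadata returned by list_blobs, and the dates are illustrative:

```python
from datetime import datetime, timedelta
from types import SimpleNamespace

def filter_by_creation_date(blobs, arg_date, std_date_format='%Y-%m-%d'):
    # Keep items created on or after the day before arg_date,
    # the same comparison return_blob_files performs.
    start_date = datetime.strptime(arg_date, std_date_format).date() - timedelta(days=1)
    return [b for b in blobs if b.creation_time.date() >= start_date]

# SimpleNamespace objects stand in for blob metadata from list_blobs().
blobs = [
    SimpleNamespace(name='old.csv', creation_time=datetime(2014, 6, 1)),
    SimpleNamespace(name='new.csv', creation_time=datetime(2014, 7, 2)),
]
kept = filter_by_creation_date(blobs, '2014-07-01')
print([b.name for b in kept])
```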


Step 3: Open the '__init__.py' file of the demo_relational_data_cloudetl function and add the below code
to ingest data into a Pandas DataFrame.

import logging
import os
from io import StringIO
import pandas as pd
from datetime import datetime, timedelta

import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

def return_blob_files(container_client, arg_date, std_date_format):
    start_date = datetime.strptime(arg_date, std_date_format).date() - timedelta(days=1)

    blob_files = [blob for blob in container_client.list_blobs() if blob.creation_time.date() >= start_date]

    return blob_files

def read_csv_to_dataframe(container_client, filename, file_delimiter=','):
    blob_client = container_client.get_blob_client(blob=filename)

    # Retrieve extract blob file
    blob_download = blob_client.download_blob()

    # Read blob file into DataFrame
    blob_data = StringIO(blob_download.content_as_text())
    df = pd.read_csv(blob_data, delimiter=file_delimiter)
    return df

def ingest_relational_data(container_client, blob_file_list):
    df = pd.concat([read_csv_to_dataframe(container_client=container_client, filename=blob_name.name) for blob_name in blob_file_list], ignore_index=True)

    return df

def run_cloud_etl(source_container_client, blob_file_list):
    df = ingest_relational_data(source_container_client, blob_file_list)

    # Check the blob file data
    logging.info(df.head(5))

    return True

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    arg_date = '2014-07-01'
    std_date_format = '%Y-%m-%d'

    abs_acct_name='stcloudetldemodata'
    abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
    abs_container_name='demo-cloudetl-data'

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

        # Initialize Azure Service SDK Clients
        abs_service_client = BlobServiceClient(
            account_url = abs_acct_url,
            credential = az_credential
        )

        abs_container_client = abs_service_client.get_container_client(container=abs_container_name)

        # Run ETL Application
        process_file_list = return_blob_files(
            container_client = abs_container_client,
            arg_date = arg_date,
            std_date_format = std_date_format
        )

        run_cloud_etl(
            source_container_client = abs_container_client,
            blob_file_list = process_file_list
        )

    except Exception as e:
        logging.info(e)

        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )

    return func.HttpResponse("This HTTP triggered function executed successfully.")

Step 4: Execute the function locally and review the execution log to ensure the output is correct.

   Segment     Country    Product  Units Sold  Manufacturing Price  Sale Price   Gross Sales      Date
0  Government  Canada     Carretera    1618.5        $3.00             $20.00   "$32,370.00"  1/1/2014
1  Government  Germany    Carretera      1321        $3.00             $20.00   "$26,420.00"  1/1/2014
2  Midmarket   France     Carretera      2178        $3.00             $15.00   "$32,670.00"  6/1/2014
3  Midmarket   Germany    Carretera       888        $3.00             $15.00   "$13,320.00"  6/1/2014
4  Midmarket   Mexico     Carretera      2470        $3.00             $15.00   "$37,050.00"  6/1/2014

4. Deploy ingest Function to Azure


Now that the code is complete for this article, deploy the local function project to the Azure Function App created
earlier in this series.
Step 1: Use the Azure Functions Core Tools again to deploy your local functions project to Azure by
running func azure functionapp publish.

func azure functionapp publish $funcapp_name

Step 2: Add the environment variables to the function app's application settings in Azure.
# Update function app's settings to include the Azure Key Vault environment variable.
az functionapp config appsettings set --name $funcapp_name --resource-group $resource_group_name \
--settings "KEY_VAULT_NAME=$key_vault_name"

# Update function app's settings to include the Azure Blob Storage access key's Key Vault secret name.
az functionapp config appsettings set --name $funcapp_name --resource-group $resource_group_name \
--settings "ABS_SECRET_NAME=$abs_secret_name"

Step 3: To invoke the HTTP Trigger function in Azure, make an HTTP request using the function URL in a
browser or with a tool like 'curl'.
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter ?name=Functions. The browser should display output similar to
what you saw when you ran the function locally.

https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?name=Functions

or
Run 'curl' with the Invoke URL, appending the parameter ?name=Functions. The output of the command
should be the text, "This HTTP triggered function executed successfully."

curl -s "https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?
name=Functions"

Next Step
Next: Process relational data for analytics >>>
Transform relational data with Pandas and Azure
Function Apps
10/28/2022 • 8 minutes to read

In this article, you'll use the Pandas Python library in a serverless function to prepare relational data and start to
build out a data lake.

The 'Transform' stage handles the data cleansing, validation, and business-logic implementation required for
later analysis.
Some essential tasks are to compile, convert, reformat, validate, and cleanse the data in a 'staging' or 'data
landing zone' before loading it into the targeted analytic data store.
Source data is often captured in a format that is not ideal for data analytics, so it must be cleansed and
manipulated to address any data issues. By taking this step, you increase the integrity of your data, leading to
higher-quality insights.
There are different kinds of data problems that can occur in any data processing pipeline. This article addresses
a few common problems and provides solutions using the Python Pandas library.

Prerequisites
If you haven't already, follow all the instructions and complete the following articles to set up your local and
Azure dev environment:
Configure your local Python dev environment for Azure
Create resources
Ingest relational data

1. Install required Python Azure SDK libraries


Open and review the requirements.txt file contents and make sure the following Python Azure SDK libraries
exist:
azure-identity
azure-storage-blob
azure-keyvault-secrets
azure-functions
pandas

In a terminal or command prompt with a virtual environment activated, run the 'pip install' command to install
the required libraries.

pip install -r requirements.txt

IMPORTANT
Be sure to capture the following information for this article:
Azure Resource Group Name
Azure Blob Storage Account Name
Azure Key Vault URL
Also, activate the local virtual environment created in previous articles for this project.

2. Cleaning relational data with Python


Cleansing a dataset can include jobs to sort, filter, deduplicate, rename, and map data. The Pandas library
helps simplify the repetitive, time-consuming tasks associated with working with data.
Step 1: Open the '__init__.py' file of the demo_relational_data_cloudetl function and add the below code
to reformat the column names.

def process_relational_data(df):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '')
    processed_df.columns = processed_df.columns.str.replace(')', '')

    return processed_df
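The chained Pandas column operations amount to a simple string transformation. A plain-Python equivalent, useful for checking the naming convention in isolation ('Price (USD)' is a hypothetical column for illustration):

```python
def clean_column_name(name):
    """Mirror the chained Pandas operations with plain string methods:
    strip whitespace, lowercase, replace spaces, and drop parentheses."""
    cleaned = name.strip().lower().replace(' ', '_')
    return cleaned.replace('(', '').replace(')', '')

print([clean_column_name(c) for c in [' Units Sold ', 'Gross Sales', 'Price (USD)']])
```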

Step 2: Add the below code to filter out the unneeded columns from the DataFrame.
def process_relational_data(df, columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '')
    processed_df.columns = processed_df.columns.str.replace(')', '')

    return processed_df

Step 3: Add the below code to clean the column values in the DataFrame.

def process_relational_data(df, columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '')
    processed_df.columns = processed_df.columns.str.replace(')', '')

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Remove leading and trailing whitespace for all string values in df
    df_obj_cols = processed_df.select_dtypes(['object'])
    processed_df[df_obj_cols.columns] = df_obj_cols.apply(lambda x: x.str.strip())

    return processed_df

3. Standardize the data structure


The DataFrame schema must align with the schema of the target data store. If there is a misalignment, the
data must be standardized or reformatted. For instance, currency and dates are two common fields in datasets
that don't align with a target schema.
Step 1: Add the below code to handle inconsistent date formatting.
def process_relational_data(df, columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '')
    processed_df.columns = processed_df.columns.str.replace(')', '')

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Remove leading and trailing whitespace for all string values in df
    df_obj_cols = processed_df.select_dtypes(['object'])
    processed_df[df_obj_cols.columns] = df_obj_cols.apply(lambda x: x.str.strip())

    # Convert column to datetime: attempt to infer date format, return NA where conversion fails.
    processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True, errors='coerce')

    return processed_df
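The errors='coerce' behavior (return a missing value instead of raising) can be mirrored with the standard library; the list of candidate date formats below is an assumption for illustration:

```python
from datetime import datetime

def coerce_date(value, formats=('%m/%d/%Y', '%Y-%m-%d')):
    # Try each candidate format; return None on failure,
    # analogous to errors='coerce' in pd.to_datetime.
    for fmt in formats:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None

print(coerce_date('1/1/2014'), coerce_date('not-a-date'))
```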

Step 2: Add the below code to standardize the currency columns with special characters in the
DataFrame.

def process_relational_data(df, columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '', regex=False)
    processed_df.columns = processed_df.columns.str.replace(')', '', regex=False)

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Remove leading and trailing whitespace for all string values in df
    df_obj_cols = processed_df.select_dtypes(['object'])
    processed_df[df_obj_cols.columns] = df_obj_cols.apply(lambda x: x.str.strip())

    # Convert column to datetime: attempt to infer date format, return NA where conversion fails.
    processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True,
                                          errors='coerce')

    # Convert object/string to numeric and handle special characters for each currency column
    processed_df['gross_sales'] = processed_df['gross_sales'].replace({r'\$': '', ',': ''},
                                                                      regex=True).astype(float)

    return processed_df
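The regex cleanup can be verified in isolation on a couple of sample currency strings (the values below are made up for illustration):

```python
import pandas as pd

# Strip '$' and thousands separators, then cast to float
s = pd.Series(['$1,234.50', '$99'])
cleaned = s.replace({r'\$': '', ',': ''}, regex=True).astype(float)
```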

4. Convert data to meet business requirements and logic


The data and DataFrame are now standardized and cleansed. Now, convert the data according to business requirements and logic.
Step 1: Add year and month columns to the DataFrame for later analytic use.

def process_relational_data(df, columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '', regex=False)
    processed_df.columns = processed_df.columns.str.replace(')', '', regex=False)

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Remove leading and trailing whitespace for all string values in df
    df_obj_cols = processed_df.select_dtypes(['object'])
    processed_df[df_obj_cols.columns] = df_obj_cols.apply(lambda x: x.str.strip())

    # Convert column to datetime: attempt to infer date format, return NA where conversion fails.
    processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True,
                                          errors='coerce')

    # Convert object/string to numeric and handle special characters for each currency column
    processed_df['gross_sales'] = processed_df['gross_sales'].replace({r'\$': '', ',': ''},
                                                                      regex=True).astype(float)

    # Capture dateparts (year and month) in new DataFrame columns
    processed_df['sale_year'] = pd.DatetimeIndex(processed_df['date']).year
    processed_df['sale_month'] = pd.DatetimeIndex(processed_df['date']).month

    return processed_df
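The datepart extraction can be checked on a tiny DataFrame; pd.DatetimeIndex exposes .year and .month attributes for a datetime column.

```python
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2014-07-01', '2015-01-15'])})

# Capture dateparts in new columns, as in the function above
df['sale_year'] = pd.DatetimeIndex(df['date']).year
df['sale_month'] = pd.DatetimeIndex(df['date']).month
```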

Step 2: Add the below code to the demo_relational_data_cloudetl function to aggregate the DataFrame
based on the business requirements.
def process_relational_data(df, columns, groupby_columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '', regex=False)
    processed_df.columns = processed_df.columns.str.replace(')', '', regex=False)

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Remove leading and trailing whitespace for all string values in df
    df_obj_cols = processed_df.select_dtypes(['object'])
    processed_df[df_obj_cols.columns] = df_obj_cols.apply(lambda x: x.str.strip())

    # Convert column to datetime: attempt to infer date format, return NA where conversion fails.
    processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True,
                                          errors='coerce')

    # Convert object/string to numeric and handle special characters for each currency column
    processed_df['gross_sales'] = processed_df['gross_sales'].replace({r'\$': '', ',': ''},
                                                                      regex=True).astype(float)

    # Capture dateparts (year and month) in new DataFrame columns
    processed_df['sale_year'] = pd.DatetimeIndex(processed_df['date']).year
    processed_df['sale_month'] = pd.DatetimeIndex(processed_df['date']).month

    # Get Gross Sales per Segment, Country, Sale Year, and Sale Month
    processed_df = processed_df.sort_values(by=['sale_year', 'sale_month']).groupby(
        groupby_columns, as_index=False).agg(
        total_units_sold=('units_sold', 'sum'), total_gross_sales=('gross_sales', 'sum'))

    return processed_df
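The named-aggregation pattern in the groupby step can be verified on a toy DataFrame (the rows below are made-up sample data; passing the string 'sum' is equivalent to passing the builtin sum function):

```python
import pandas as pd

df = pd.DataFrame({'segment': ['Gov', 'Gov', 'Retail'],
                   'units_sold': [10, 5, 2],
                   'gross_sales': [100.0, 50.0, 20.0]})

# Named aggregation: output column name = (input column, aggregation function)
agg = df.groupby('segment', as_index=False).agg(
    total_units_sold=('units_sold', 'sum'),
    total_gross_sales=('gross_sales', 'sum'))
```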

5. Add data processing to the solution


Now add this new functionality to the overall solution by modifying the 'main' and 'run_cloud_etl' functions.
Step 1: Add the below code to integrate the data processing functionality into the overall Cloud ETL
solution.

def run_cloud_etl(service_client, storage_account_url, source_container, archive_container,
                  source_container_client, blob_file_list, columns, groupby_columns):
    df = ingest_relational_data(source_container_client, blob_file_list)
    df = process_relational_data(df, columns, groupby_columns)

    return True

Step 2: Add the below code to the demo_relational_data_cloudetl function to integrate data processing into the overall Cloud ETL solution.
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    arg_date = '2014-07-01'
    std_date_format = '%Y-%m-%d'

    # List of columns relevant for analysis
    cols = ['segment', 'country', 'units_sold', 'gross_sales', 'date']

    # List of columns to aggregate
    groupby_cols = ['segment', 'country', 'sale_year', 'sale_month']

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        abs_acct_name = 'stcloudetldemodata'
        abs_acct_url = f'https://{abs_acct_name}.blob.core.windows.net/'
        abs_container_name = 'demo-cloudetl-data'
        archive_container_name = 'demo-cloudetl-archive'

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

        # Initialize Azure Service SDK Clients
        abs_service_client = BlobServiceClient(
            account_url=abs_acct_url,
            credential=az_credential
        )

        abs_container_client = abs_service_client.get_container_client(container=abs_container_name)

        # Run ETL Application
        process_file_list = return_blob_files(
            container_client=abs_container_client,
            arg_date=arg_date,
            std_date_format=std_date_format
        )

        run_cloud_etl(
            source_container_client=abs_container_client,
            blob_file_list=process_file_list,
            columns=cols,
            groupby_columns=groupby_cols,
            service_client=abs_service_client,
            storage_account_url=abs_acct_url,
            source_container=abs_container_name,
            archive_container=archive_container_name
        )

    except Exception as e:
        logging.info(e)

        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )

    return func.HttpResponse("This HTTP triggered function executed successfully.")


6. Deploy Azure Function App
Now that the code is complete for this article, deploy the local function project to the Azure Function App
created earlier.
Step 1: Use the Azure Functions Core Tools again to deploy your local functions project to Azure by
running func azure functionapp publish.

func azure functionapp publish <APP_NAME>

Step 2: To invoke the HTTP Trigger function in Azure, make an HTTP request using the function URL in a
browser or with a tool like 'curl'.
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter ?name=Functions . The browser should display a similar outcome as
when you ran the function locally.

https://msdocs-azurefunctions.azurewebsites.net/api/ingest_relational_data?name=Functions

or
Run 'curl' with the Invoke URL , appending the parameter ?name=Functions . The output of the command
should be the text, "This HTTP triggered function executed successfully."

curl -s "https://msdocs-azurefunctions.azurewebsites.net/api/ingest_relational_data?name=Functions"

Next Step
Next: Load and archive processed relational data >>>
Load relational data into Azure Data Lake Storage
with Azure Functions
10/28/2022 • 7 minutes to read

This article loads processed data into Azure Data Lake Storage Gen 2 using a serverless Python function. The
data is then archived using Azure Blob Storage access tiers.

The final step of our solution loads the now processed data into the target data store. The data can be loaded
using a row by row approach, or ideally a bulk insert/load process.

TIP
Use bulk loading/bulk insert functions to load well-transformed data.
Use manual/individual inserts for questionable datasets.

Prerequisites
An Azure subscription. If you don't have one, create one for free before you begin.
The Azure Functions Core Tools version 3.x
Visual Studio Code on one of the supported platforms.
The PowerShell extension for Visual Studio Code
The Azure Functions extension for Visual Studio Code
Python 3.7 or later installed

1. Configure your dev environment


If you haven't already, follow all the instructions and complete the following articles to set up your local and
Azure dev environment:
Configure your local Python dev environment for Azure
Create resources
Ingest relational data
Transform relational data

2. Install required Python Azure SDK libraries


Open and review the requirements.txt file contents and make sure the following Python Azure SDK
libraries are included:

azure-storage-file-datalake
azure-identity
azure-storage-blob
azure-keyvault-secrets
azure-functions
azure-mgmt-storage
pandas
pyarrow
fastparquet

In a terminal or command prompt with a virtual environment activated, run the 'pip install' command to
install the required libraries.

pip install -r requirements.txt

3. Load processed relational data into Azure Data Lake Storage Gen 2
Once the data is transformed into a format ideal for analysis, load the data into an analytical data store. The data
store can be a database system, data warehouse, data lake, or Hadoop. Each destination has a different
approach for loading data reliably and with optimized performance. The data can then be used for analysis and
business intelligence.
This article loads the transformed data into Azure Data Lake Storage (ADLS) Gen 2. As previously discussed,
ADLS is the recommended data storage solution for analytic workloads. Various compute and analytic Azure
services can easily connect to Azure Data Lake Storage Gen 2.
Step 1: Open the '__init__.py' file of the demo_relational_data_cloudetl function and add the below
helper function to load a DataFrame to ADLS Gen 2.

def write_dataframe_to_datalake(df, datalake_service_client, filesystem_name, dir_name, filename):
    file_path = f'{dir_name}/{filename}'

    file_client = datalake_service_client.get_file_client(filesystem_name, file_path)

    processed_df = df.to_parquet(index=False)

    file_client.upload_data(data=processed_df, overwrite=True, length=len(processed_df))

    file_client.flush_data(len(processed_df))

    return True

Step 2: Add the below code to create a function to hold any code relevant to loading relational data in
our solution.
def load_relational_data(processed_df, datalake_service_client, filesystem_name, dir_name,
                         file_format, file_prefix):
    now = datetime.today().strftime("%Y%m%d_%H%M%S")
    processed_filename = f'{file_prefix}_{now}.{file_format}'
    write_dataframe_to_datalake(processed_df, datalake_service_client, filesystem_name, dir_name,
                                processed_filename)
    return True
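With the prefix and format used later in this series ('financial_demo' and 'parquet'), the filename logic produces a timestamped name. A deterministic check using a fixed timestamp:

```python
from datetime import datetime

# Fixed timestamp so the result is predictable
now = datetime(2022, 10, 28, 9, 30, 0).strftime("%Y%m%d_%H%M%S")
processed_filename = f"financial_demo_{now}.parquet"
```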

4. Move processed source data file to Cool Tier Blob Storage


After loading data into the data lake, the source file is archived in Azure Blob Storage. Data archiving applies to
data that is identified as no longer active but still requires retention.
Azure Blob Storage access tiers are a go-to solution for data archiving because of their ease of use
and cost savings. There are three tiers: Hot, Cool, and Archive. This solution uses the Cool tier;
however, based on your organization's needs, the Archive tier could be a better fit.
Data moved to a cooler tier can be restored and accessed at any time. However, depending on the access tier
chosen, the data rehydration time can vary.
For more information about Access Tiers to help with your decision, see the Hot, cool, archive access tiers for
blob data article.
Step 1: Add the below helper function to the demo_relational_data_cloudetl function to archive the
processed source file.

def archive_cooltier_blob_file(blob_service_client, storage_account_url, source_container,
                               archive_container, blob_list):
    for blob in blob_list:
        blob_name = blob.name
        source_blob_url = f'{storage_account_url}{source_container}/{blob_name}'

        # Copy source blob file to archive container and change blob access tier to 'Cool'
        archive_blob_client = blob_service_client.get_blob_client(archive_container, blob_name)
        archive_blob_client.start_copy_from_url(source_url=source_blob_url,
                                                standard_blob_tier=StandardBlobTier.Cool)
        (blob_service_client.get_blob_client(source_container,
                                             blob_name)).delete_blob(delete_snapshots='include')

    return True
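The source blob URL is built by simple concatenation, since storage_account_url already ends with a slash. A sketch using the account and container names from this series (the blob name here is hypothetical):

```python
storage_account_url = 'https://stcloudetldemodata.blob.core.windows.net/'
source_container = 'demo-cloudetl-data'
blob_name = 'financial_sample.csv'  # hypothetical blob name

source_blob_url = f'{storage_account_url}{source_container}/{blob_name}'
```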

Step 2: Add the below code to the demo_relational_data_cloudetl function to integrate data archiving into
the overall Cloud ETL run.

def run_cloud_etl(service_client, storage_account_url, source_container, archive_container,
                  source_container_client, blob_file_list, columns, groupby_columns,
                  datalake_service_client, filesystem_name, dir_name, file_format, file_prefix):
    df = ingest_relational_data(source_container_client, blob_file_list)
    df = process_relational_data(df, columns, groupby_columns)
    result = load_relational_data(df, datalake_service_client, filesystem_name, dir_name,
                                  file_format, file_prefix)
    result = archive_cooltier_blob_file(service_client, storage_account_url, source_container,
                                        archive_container, blob_file_list)

    return result

5. Final Serverless, Cloud ETL Solution


Congratulations, you've reached the end of this series! Below is the complete Azure Function App Python code
for your reference.

import logging
import os
import pandas as pd
import pyarrow
import fastparquet
from io import StringIO
from datetime import datetime, timedelta

import azure.functions as func


from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, StandardBlobTier
from azure.storage.filedatalake import DataLakeServiceClient

def return_blob_files(container_client, arg_date, std_date_format):
    start_date = datetime.strptime(arg_date, std_date_format).date() - timedelta(days=1)

    blob_files = [blob for blob in container_client.list_blobs() if blob.creation_time.date() >= start_date]

    return blob_files

def read_csv_to_dataframe(container_client, filename, file_delimiter=','):
    blob_client = container_client.get_blob_client(blob=filename)

    # Retrieve extract blob file
    blob_download = blob_client.download_blob()

    # Read blob file into DataFrame
    blob_data = StringIO(blob_download.content_as_text())
    df = pd.read_csv(blob_data, delimiter=file_delimiter)
    return df

def write_dataframe_to_datalake(df, datalake_service_client, filesystem_name, dir_name, filename):
    file_path = f'{dir_name}/{filename}'

    file_client = datalake_service_client.get_file_client(filesystem_name, file_path)

    processed_df = df.to_parquet(index=False)

    file_client.upload_data(data=processed_df, overwrite=True, length=len(processed_df))

    file_client.flush_data(len(processed_df))

    return True

def archive_cooltier_blob_file(blob_service_client, storage_account_url, source_container,
                               archive_container, blob_list):
    for blob in blob_list:
        blob_name = blob.name
        source_blob_url = f'{storage_account_url}{source_container}/{blob_name}'

        # Copy source blob file to archive container and change blob access tier to 'Cool'
        archive_blob_client = blob_service_client.get_blob_client(archive_container, blob_name)

        archive_blob_client.start_copy_from_url(source_url=source_blob_url,
                                                standard_blob_tier=StandardBlobTier.Cool)

        (blob_service_client.get_blob_client(source_container,
                                             blob_name)).delete_blob(delete_snapshots='include')

    return True

def ingest_relational_data(container_client, blob_file_list):
    df = pd.concat([read_csv_to_dataframe(container_client=container_client, filename=blob_name.name)
                    for blob_name in blob_file_list], ignore_index=True)

    return df

def process_relational_data(df, columns, groupby_columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Clean column names for easy consumption
    processed_df.columns = processed_df.columns.str.strip()
    processed_df.columns = processed_df.columns.str.lower()
    processed_df.columns = processed_df.columns.str.replace(' ', '_')
    processed_df.columns = processed_df.columns.str.replace('(', '', regex=False)
    processed_df.columns = processed_df.columns.str.replace(')', '', regex=False)

    # Filter DataFrame (df) columns
    processed_df = processed_df.loc[:, columns]

    # Filter out all empty rows, if they exist.
    processed_df.dropna(inplace=True)

    # Remove leading and trailing whitespace for all string values in df
    df_obj_cols = processed_df.select_dtypes(['object'])
    processed_df[df_obj_cols.columns] = df_obj_cols.apply(lambda x: x.str.strip())

    # Convert column to datetime: attempt to infer date format, return NA where conversion fails.
    processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True,
                                          errors='coerce')

    # Convert object/string to numeric and handle special characters for each currency column
    processed_df['gross_sales'] = processed_df['gross_sales'].replace({r'\$': '', ',': ''},
                                                                      regex=True).astype(float)

    # Capture dateparts (year and month) in new DataFrame columns
    processed_df['sale_year'] = pd.DatetimeIndex(processed_df['date']).year
    processed_df['sale_month'] = pd.DatetimeIndex(processed_df['date']).month

    # Get Gross Sales per Segment, Country, Sale Year, and Sale Month
    processed_df = processed_df.sort_values(by=['sale_year', 'sale_month']).groupby(
        groupby_columns, as_index=False).agg(
        total_units_sold=('units_sold', 'sum'), total_gross_sales=('gross_sales', 'sum'))

    return processed_df

def load_relational_data(processed_df, datalake_service_client, filesystem_name, dir_name,
                         file_format, file_prefix):
    now = datetime.today().strftime("%Y%m%d_%H%M%S")
    processed_filename = f'{file_prefix}_{now}.{file_format}'
    write_dataframe_to_datalake(processed_df, datalake_service_client, filesystem_name, dir_name,
                                processed_filename)
    return True

def run_cloud_etl(service_client, storage_account_url, source_container, archive_container,
                  source_container_client, blob_file_list, columns, groupby_columns,
                  datalake_service_client, filesystem_name, dir_name, file_format, file_prefix):
    df = ingest_relational_data(source_container_client, blob_file_list)
    df = process_relational_data(df, columns, groupby_columns)
    result = load_relational_data(df, datalake_service_client, filesystem_name, dir_name,
                                  file_format, file_prefix)
    result = archive_cooltier_blob_file(service_client, storage_account_url, source_container,
                                        archive_container, blob_file_list)

    return result

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    arg_date = '2014-07-01'
    std_date_format = '%Y-%m-%d'
    processed_file_format = 'parquet'
    processed_file_prefix = 'financial_demo'

    # List of columns relevant for analysis
    cols = ['segment', 'country', 'units_sold', 'gross_sales', 'date']

    # List of columns to aggregate
    groupby_cols = ['segment', 'country', 'sale_year', 'sale_month']

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        abs_acct_name = 'stcloudetldemodata'
        abs_acct_url = f'https://{abs_acct_name}.blob.core.windows.net/'
        abs_container_name = 'demo-cloudetl-data'
        archive_container_name = 'demo-cloudetl-archive'

        adls_acct_name = 'dlscloudetldemo'
        adls_acct_url = f'https://{adls_acct_name}.dfs.core.windows.net/'
        adls_fsys_name = 'processed-data-demo'
        adls_dir_name = 'finance_data'
        adls_secret_name = 'adls-access-key1'

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

        # Initialize Azure Service SDK Clients
        abs_service_client = BlobServiceClient(
            account_url=abs_acct_url,
            credential=az_credential
        )

        abs_container_client = abs_service_client.get_container_client(container=abs_container_name)

        adls_service_client = DataLakeServiceClient(
            account_url=adls_acct_url,
            credential=az_credential
        )

        # Run ETL Application
        process_file_list = return_blob_files(
            container_client=abs_container_client,
            arg_date=arg_date,
            std_date_format=std_date_format
        )

        run_cloud_etl(
            source_container_client=abs_container_client,
            blob_file_list=process_file_list,
            columns=cols,
            groupby_columns=groupby_cols,
            datalake_service_client=adls_service_client,
            filesystem_name=adls_fsys_name,
            dir_name=adls_dir_name,
            file_format=processed_file_format,
            file_prefix=processed_file_prefix,
            service_client=abs_service_client,
            storage_account_url=abs_acct_url,
            source_container=abs_container_name,
            archive_container=archive_container_name
        )
    except Exception as e:
        logging.info(e)

        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )

    return func.HttpResponse("This HTTP triggered function executed successfully.")

6. Deploy solution to Azure


Now that the code is complete for this series, deploy the local function project to the Azure Function App created
earlier in this article.
Step 1: Use the Azure Functions Core Tools again to deploy your local functions project to Azure by
running func azure functionapp publish.

func azure functionapp publish CloudETLDemo

Step 2: To invoke the HTTP Trigger function in Azure, make an HTTP request using the function URL in a
browser or with a tool like 'curl'.
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter ?name=Functions . The browser should display similar output as
when you ran the function locally.

https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?name=Functions

or
Run 'curl' with the Invoke URL , appending the parameter ?name=Functions . The output of the command
should be the text, "This HTTP triggered function executed successfully."

curl -s "https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?
name=Functions"

7. Clean up resources
When no longer needed, remove the resource group, and all related resources:
Run az group delete to delete the Azure Resource Group.

az group delete --name 'rg-cloudetl-demo'


Data solutions for Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various data solutions on Azure.

SQL databases
PostgreSQL :
Use Python to connect and query data in Azure Database for PostgreSQL
Run a Python (Django or Flask) web app with PostgreSQL in Azure App Service
MySQL :
Use Python to connect and query data with Azure Database for MySQL
Azure SQL :
Use Python to query an Azure SQL database
MariaDB :
How to connect applications to Azure Database for MariaDB

Tables, blobs, files, NoSQL


Tables and NoSQL :
Build an Azure Cosmos DB for Table app with Python
Build a Python application using an Azure Cosmos DB for NoSQL account
Build a Cassandra app with Python SDK and Azure Cosmos DB
Create a graph database in Azure Cosmos DB using Python and the Azure portal
Build a Python app using Azure Cosmos DB for MongoDB
Blob and file storage :
Manage Azure Storage blobs with Python
Develop for Azure Files with Python
Redis Cache :
Create a Python app that uses Azure Cache for Redis

Big data and analytics


Big data analytics (Azure Data Lake analytics) :
Manage Azure Data Lake Analytics using Python
Develop U-SQL with Python for Azure Data Lake Analytics
Big data orchestration (Azure Data Factory) :
Create a data factory and pipeline using Python
Transform data by running a Python activity in Azure Databricks
Big data streaming and event ingestion (Azure Event Hubs) :
Send events to or receive events from event hubs by using Python
Event Hubs Capture walkthrough: Python
Capture Event Hubs data in Azure Storage and read it by using Python
Hadoop (Azure HDInsights) :
Use Spark & Hive Tools for Visual Studio Code
Spark-based analytics (Azure Databricks) :
Connect to Azure Databricks from Excel, Python, or R
Run a Spark job on Azure Databricks using the Azure portal
Tutorial: Azure Data Lake Storage Gen2, Azure Databricks & Spark
Machine learning for Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various machine learning options on Azure:
Get started creating your first ML experiment with the Python SDK
Train your first ML model
Train image classification models with MNIST data and scikit-learn using Azure Machine Learning
Auto-train an ML model
Access datasets with Python using the Azure Machine Learning Python client library
Configure automated ML experiments in Python
Deploy a data pipeline with Azure DevOps
Create and run machine learning pipelines with Azure Machine Learning SDK
Overview of Python Container Apps in Azure
10/28/2022 • 15 minutes to read

This article describes how to go from Python project code (for example, a web app) to a deployed Docker
container in Azure. Discussed are the general process of containerization, deployment options for containers in
Azure, and Python-specific configuration of containers in Azure.
The nature of Docker containers is that creating a Python Docker image from code and deploying that image to
a container in Azure is similar across programming languages. The language-specific considerations - Python in
this case - are in the configuration during the containerization process in Azure, in particular the Dockerfile
structure and configuration supporting Python web frameworks such as Django, Flask, and FastAPI.

Container workflow scenarios


For Python container development, some typical workflows for moving from code to container are:

Scenario: Dev
Description: Build Python Docker images in your dev environment.
Workflow:
Code: git clone code to dev environment (with Docker installed).
Build: Use Docker CLI, VS Code (with extensions), or PyCharm (with plugin). Described in the section Working with Python Docker images and containers.
Run: In the dev environment in a Docker container.
Push: To a registry like Azure Container Registry, Docker Hub, or a private registry.
Deploy: To an Azure service from the registry.

Scenario: Hybrid
Description: From your dev environment, build Python Docker images in Azure.
Workflow:
Code: git clone code to dev environment (Docker doesn't need to be installed).
Build: VS Code (with extensions) or the Azure CLI.
Push: To Azure Container Registry.
Deploy: To an Azure service from the registry.

Scenario: Azure
Description: All in the cloud; use Azure Cloud Shell to build Python Docker images from code in a GitHub repo.
Workflow:
Code: git clone the GitHub repo to Azure Cloud Shell.
Build: In Azure Cloud Shell, use the Azure CLI or Docker CLI.
Push: To a registry like Azure Container Registry, Docker Hub, or a private registry.
Deploy: To an Azure service from the registry.

The end goal of these workflows is to have a container running in one of the Azure resources supporting Docker
containers as listed in the next section.
A dev environment can be your local workstation with Visual Studio Code or PyCharm, Codespaces (a
development environment that's hosted in the cloud), or Visual Studio Dev Containers (a container as a
development environment).

Deployment container options in Azure


Python container apps are supported in the following services.
Web App for Containers provides an easy on-ramp for developers who want to take advantage of the fully
managed Azure App Service platform but also want a single deployable artifact containing an app and all of its
dependencies. Containerized web apps on Azure App Service can scale as needed and use streamlined CI/CD
workflows with Docker Hub, Azure Container Registry, and GitHub. For an example, see Containerized Python
web app on Azure App Service.
Azure Container Apps (ACA) is a fully managed serverless container service. Container Apps
provides many application-specific concepts on top of containers, including certificates, revisions, scale, and
environments. Container Apps is a good fit for web applications, including web sites and web APIs. For an
example, see …
Azure Container Instances (ACI) is a serverless offering, billed on consumption rather than provisioned
resources. Concepts like scale, load balancing, and certificates aren't provided with ACI containers, and ACI is a
lower-level "building block" option compared to ACA. For an example, see the tutorial Create a container image
for deployment to Azure Container Instances. The tutorial isn't Python-specific, but the concepts shown apply to
all languages.
Azure Kubernetes Service (AKS) is a managed Kubernetes service; Kubernetes is an open-source container and
cluster management system, often referred to as an orchestration system. For an example, see the tutorial, Deploy
an Azure Kubernetes Service cluster using the Azure CLI.
Azure Functions is an event-driven, serverless functions-as-a-service solution, optimized for running event-
driven applications using the functions programming model. Azure Functions shares many characteristics with
Azure Container Apps around scale and integration with events, but is optimized for ephemeral functions
deployed as either code or containers. For an example, see Create a function on Linux using a custom container.
Other container solutions are shown in the comparison article, Comparing Container Apps with other Azure
container options.

Virtual environments and containers


When you're running a Python project in a dev environment, using a virtual environment is a common way of
managing dependencies and ensuring reproducibility of your project setup. A virtual environment has a Python
interpreter, libraries, and scripts installed that are required by the project code running in that environment.
Dependencies for Python projects are managed through the requirements.txt file.

TIP
With containers, virtual environments aren't needed unless you're using them for testing or other reasons. If you use
virtual environments, don't copy them into the Docker image. Use the .dockerignore file to exclude them.
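A minimal .dockerignore sketch along these lines (the entries are typical examples, not requirements):

```
.venv/
.env
__pycache__/
*.pyc
.git/
```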

You can think of Docker containers as providing capabilities similar to virtual environments, but with further
improvements in reproducibility and portability. A Docker container can run anywhere containers are supported,
regardless of OS.
A Docker container contains your Python project code and everything that code needs to run. To get to that
point, you need to build your Python project code into a Docker image, and then create a container, a runnable
instance of that image.
For containerizing Python projects, the key files are:

requirements.txt
  Used during the building of the Docker image to get the correct dependencies into the image.

Dockerfile
  Used to specify how to build the Python Docker image. For more information, see the section Dockerfile instructions for Python.

.dockerignore
  Files and directories in .dockerignore aren't copied to the Docker image with the COPY command in the Dockerfile. The .dockerignore file supports exclusion patterns similar to .gitignore files. For more information, see .dockerignore file. Excluding files helps image build performance, but the file should also be used to avoid adding sensitive information to the image, where it can be inspected. For example, the .dockerignore file should contain lines to ignore .env and .venv (virtual environments).
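As an illustrative starting point (the entries are assumptions to adapt, not requirements), a minimal .dockerignore that keeps virtual environments, .env files, and other local clutter out of the image might look like:

```
.venv/
venv/
.env
__pycache__/
*.pyc
.git/
.vscode/
```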

Container settings for web frameworks


Web frameworks have default ports on which they listen for web requests. When working with some Azure container solutions, you need to specify the port your container listens on so that it can receive traffic.

WEB FRAMEWORK        PORT
Django               8000
Flask                5000 or 5002
FastAPI (uvicorn)    8000
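To let one image run unchanged across services with different port expectations, app startup code can read the port from an environment variable. A hedged sketch follows; the PORT variable name is an illustrative convention, not something Azure mandates:

```python
import os

def get_port(default: int = 8000) -> int:
    """Read the listening port from the PORT environment variable,
    falling back to the framework's default (for example, 8000 for Django)."""
    try:
        return int(os.environ.get("PORT", default))
    except ValueError:
        return default

# A server command can then bind to this port, for example:
# gunicorn --bind 0.0.0.0:{get_port()} wsgi:app
```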

The following table shows how to set the port for different Azure container solutions.

Web App for Containers
  By default, App Service assumes your custom container is listening on either port 80 or port 8080. If your container listens on a different port, set the WEBSITES_PORT app setting in your App Service app. For more information, see Configure a custom container for Azure App Service.

Azure Container Apps
  Azure Container Apps allows you to expose your container app to the public web, to your VNET, or to other container apps within your environment by enabling ingress. Set the ingress targetPort to the port your container listens on for incoming requests. The application ingress endpoint is always exposed on port 443. For more information, see Set up HTTPS or TCP ingress in Azure Container Apps.

Azure Container Instances, Azure Kubernetes Service
  Set the port during creation of a container. You need to ensure your solution has a web framework, an application server (for example, gunicorn or uvicorn), and a web server (for example, nginx). For example, you can create two containers: one with a web framework and application server, and another with a web server. The two containers communicate on one port, and the web server container exposes 80/443 for external requests.
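The two-container pattern described above can be sketched as a Docker Compose file. This is an assumption-laden illustration: the build context, file names, and an nginx.conf that proxies to the web service on port 8000 are all placeholders for your own setup:

```yaml
services:
  web:
    build: .                # framework + gunicorn application server
    expose:
      - "8000"              # internal port only; not published to the host
  nginx:
    image: nginx:alpine     # web server in front of the app container
    ports:
      - "80:80"             # external requests enter here (add 443 with TLS)
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - web
```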

Python Dockerfile
A Dockerfile is a text file that contains instructions for building a Docker image. The first line states the base image to begin with. This line is followed by instructions to install required programs, copy files, and otherwise create a working environment. The following table shows Python-specific examples of key Dockerfile instructions.

FROM
  Sets the base image for subsequent instructions.
  Example: FROM python:3.8-slim

EXPOSE
  Tells Docker that the container listens on the specified network ports at runtime.
  Example: EXPOSE 5000

COPY
  Copies files or directories from the specified source and adds them to the filesystem of the container at the specified destination path.
  Example: COPY . /app

RUN
  Runs a command inside the Docker image, for example, to pull in dependencies. The command runs once at build time.
  Example: RUN python -m pip install -r requirements.txt

CMD
  Provides the default command for executing a container. There can be only one CMD instruction.
  Example: CMD ["gunicorn", "--bind", "0.0.0.0:5000", "wsgi:app"]

The Docker build command builds Docker images from a Dockerfile and a context. A build’s context is the set of
files located in the specified path or URL. Typically, you'll build an image from the root of your Python project
and the path for the build command is "." as shown in the following example.

docker build --rm --pull --file "Dockerfile" --tag "mywebapp:latest" .

The build process can refer to any of the files in the context. For example, your build can use a COPY instruction
to reference a file in the context. Here's an example of a Dockerfile for a Python project using the Flask
framework:

FROM python:3.8-slim

EXPOSE 5000

# Keeps Python from generating .pyc files in the container.
ENV PYTHONDONTWRITEBYTECODE=1

# Turns off buffering for easier container logging.
ENV PYTHONUNBUFFERED=1

# Install pip requirements.
COPY requirements.txt .
RUN python -m pip install -r requirements.txt

WORKDIR /app
COPY . /app

# Creates a non-root user with an explicit UID and adds permission to access the /app folder.
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser

# Provides defaults for an executing container; can be overridden with Docker CLI.
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "wsgi:app"]
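The CMD instruction above assumes a wsgi.py module that exposes an app callable; Django and Flask projects provide this through their own tooling. Purely for illustration, a minimal hand-written WSGI callable has this shape:

```python
# wsgi.py - minimal WSGI application; gunicorn would load it as "wsgi:app".
def app(environ, start_response):
    """Respond 200 OK with a plain-text body to every request."""
    body = b"Hello from a container\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

A real project would return the framework's application object here instead of a hand-rolled callable.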

You can create a Dockerfile by hand or create it automatically with VS Code and the Docker extension. For more
information, see Generating Docker files.
The Docker build command is part of the Docker CLI. When you use IDEs like VS Code or PyCharm, the UI
commands for working with Docker images call the build command for you and automate specifying options.

Working with Python Docker images and containers


VS Code and PyCharm
Working in an integrated development environment (IDE) for Python container development isn't necessary but
can simplify many container-related tasks. Here are some of the things you can do with VS Code and PyCharm.
Download and build Docker images.
Build images in your dev environment.
Build Docker images in Azure without Docker installed in dev environment. (For PyCharm, use the
Azure CLI to build images in Azure.)
Create and run Docker containers from an existing image, a pulled image, or directly from a Dockerfile.
Run multicontainer applications with Docker Compose.
Connect and work with container registries like Docker Hub, GitLab, JetBrains Space, Docker V2, and
other self-hosted Docker registries.
(VS Code only) Add a Dockerfile and Docker compose files that are tailored for your Python project.
To set up VS Code and PyCharm to run Docker containers in your dev environment, use the following steps.
VS Code
PyCharm

If you haven't already, install Azure Tools for VS Code.

Step 1: Use SHIFT + ALT + A to open the Azure extension and confirm you're connected to Azure. You can also select the Azure icon on the VS Code extensions bar. If you aren't signed in, select Sign in to Azure and follow the prompts. If you have trouble accessing your Azure subscription, it may be because you're behind a proxy. To resolve connection issues, see Network Connections in Visual Studio Code.

Step 2: Use CTRL + SHIFT + X to open Extensions, search for the Docker extension, and install it. You can also select the Extensions icon on the VS Code extensions bar.

Step 3: Select the Docker icon in the extension bar, expand images, and right-click an image to run it as a container.

Step 4: Monitor the Docker run output in the Terminal window.

Azure CLI and Docker CLI


You can also work with Python Docker images and containers using the Azure CLI and Docker CLI. Both VS Code
and PyCharm have terminals where you can run these CLIs.
Use a CLI when you want finer control over build and run arguments, and for automation. For example, the
following command shows how to use the Azure CLI az acr build to specify the Docker image name.

az acr build --registry <registry-name> \
  --resource-group <resource-group> \
  --image pythoncontainerwebapp:latest .
As another example, consider the following command that shows how to use the Docker CLI run command. The
example shows how to run a Docker container that communicates to a MongoDB instance in your dev
environment, outside the container. The different values to complete the command are easier to automate when
specified in a command line.

docker run --rm -it \
  --publish <port>:<port> --publish 27017:27017 \
  --add-host mongoservice:<your-server-IP-address> \
  --env CONNECTION_STRING=mongodb://mongoservice:27017 \
  --env DB_NAME=<database-name> \
  --env COLLECTION_NAME=<collection-name> \
  containermongo:latest

For more information about this scenario, see Build and test a containerized Python web app locally.
Environment variables in containers
Python projects often make use of environment variables to pass data to code. For example, you might specify
database connection information in an environment variable so that it can be easily changed during testing. Or,
when deploying the project to production, the database connection can be changed to refer to a production
database instance.
Packages like python-dotenv are often used to read key-value pairs from an .env file and set them as
environment variables. An .env file is useful when running in a virtual environment but isn't recommended
when working with containers. Don't copy the .env file into the Docker image, especially if it contains
sensitive information and the container will be made public. Use the .dockerignore file to exclude files
from being copied into the Docker image. For more information, see the section Virtual environments and
containers in this article.
You can pass environment variables to containers in a few ways:
1. Defined in the Dockerfile as ENV instructions.
2. Passed in as --build-arg arguments with the Docker build command.
3. Passed in as --secret arguments with the Docker build command and BuildKit backend.
4. Passed in as --env or --env-file arguments with the Docker run command.

The first two options have the same drawback as noted above with .env files, namely that you're hardcoding
potentially sensitive information into a Docker image. You can inspect a Docker image and see the environment
variables, for example, with the command docker image inspect.
The third option, using BuildKit, allows you to pass secret information to the Dockerfile when building Docker images, in a safe way that isn't stored in the final image.
The fourth option of passing in environment variables with the Docker run command means the Docker image doesn't contain the variables. However, the variables are still visible when inspecting the container instance (for example, with docker container inspect). This option may be acceptable when access to the container instance is controlled, or in testing or dev scenarios.
Here's an example of passing environment variables using the Docker CLI run command and using the --env
argument.
# PORT=8000 for Django and 5000 for Flask
export PORT=<port-number>

docker run --rm -it \
  --publish $PORT:$PORT \
  --env CONNECTION_STRING=<connection-info> \
  --env DB_NAME=<database-name> \
  <dockerimagename:tag>

If you're using VS Code or PyCharm, the UI options for working with images and containers ultimately use
Docker CLI commands like the one shown above.
Finally, specifying environment variables when deploying a container in Azure is different than using
environment variables in your dev environment. For example:
For Web App for Containers, you configure application settings during configuration of App Service.
These settings are available to your app code as environment variables and accessed using the standard
os.environ pattern. You can change values after initial deployment when needed. For more information,
see Access app settings as environment variables.
For Azure Container Apps, you configure environment variables during initial configuration of the
container app. Subsequent modification of environment variables creates a revision of the container. In
addition, Azure Container Apps allows you to define secrets at the application level and then reference
them in environment variables. For more information, see Manage secrets in Azure Container Apps.
As another option, you can use Service Connector to help you connect Azure compute services to other backing services. This service configures the network settings and connection information (for example, generating environment variables) between compute services and target backing services in the management plane.
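However the values are set, app code reads them the same way through os.environ. Here's a small sketch (the variable names follow this article's examples) that fails fast with a clear error when a required setting is missing:

```python
import os

def load_settings() -> dict:
    """Read required settings from environment variables, failing fast
    with a clear message if any are missing."""
    required = ["CONNECTION_STRING", "DB_NAME"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

Failing at startup with an explicit message is usually easier to diagnose in container logs than a connection error deep in request handling.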

Viewing container logs


View container instance logs to see diagnostic messages output from code and to troubleshoot issues in your container's code. Here are several ways you can view logs when running a container in your dev environment:
  When you run a container with VS Code or PyCharm, as shown in the section VS Code and PyCharm, you can see logs in the terminal windows opened when Docker run executes.
  If you're using the Docker CLI run command with the interactive flag -it, you'll see output following the command.
  In Docker Desktop, you can also view logs for a running container.
When you deploy a container in Azure, you also have access to container logs. Here are several Azure services and how to access container logs in the Azure portal.

Web App for Containers
  Go to the Diagnose and solve problems resource to view logs. Diagnostics is an intelligent and interactive experience to help you troubleshoot your app with no configuration required. For a real-time view of logs, go to Monitoring - Log stream. For more detailed log queries and configuration, see the other resources under Monitoring.

Azure Container Apps
  Go to the environment resource Diagnose and solve problems to troubleshoot environment problems. More often, you'll want to see container logs. In the container resource, under Application - Revision management, select the revision, and from there you can view system and console logs. For more detailed log queries and configuration, see the resources under Monitoring.

Azure Container Instances
  Go to the Containers resource and select Logs.

For the same services listed above, here are the Azure CLI commands to access logs.

Web App for Containers: az webapp log
Azure Container Apps: az containerapp logs
Azure Container Instances: az container logs

There's also support for viewing logs in VS Code. You must have Azure Tools for VS Code installed. Below is an
example of viewing Web Apps for Containers (App Service) logs in VS Code.

Next steps
Deploy a containerized Python web app in Azure App Service
Deploy a containerized Python web app in Azure Container Apps
Overview: Containerized Python web app on Azure
10/28/2022 • 3 minutes to read

This tutorial shows you how to containerize a Python web app and deploy it to Azure. The single-container web app is hosted in Azure App Service and uses Azure Cosmos DB for MongoDB to store data. App Service Web App for Containers allows you to focus on composing your containers without worrying about managing and maintaining an underlying container orchestrator. When building web apps, Azure App Service is a good option for taking your first steps with containers. For more information about using containers in Azure, see Comparing Azure container options.
In this tutorial you will:
Build and run a Docker container locally. This step is optional.
Build a Docker container image directly in Azure.
Configure an App Service to create a web app based on the Docker container image.
Following this tutorial, you'll have the basis for Continuous Integration (CI) and Continuous Deployment (CD) of
a Python web app to Azure.

Service overview
The service diagram supporting this tutorial shows two environments (developer environment and Azure) and
the different Azure services used in the tutorial.

The components supporting this tutorial and shown in the diagram above are:
Azure App Service
The underlying App Service functionality that enables containerization is Web App for Containers.
Azure App Service uses the Docker container technology to host both built-in images and custom
images. In this tutorial, you'll build an image from Python code and deploy it to Web App for
Containers.
Web App for Containers uses a webhook in the registry to get notified of new images. A push of a
new image to the repository triggers App Service to pull the image and restart.
Azure Container Registry
Azure Container Registry enables you to work with Docker images and its components in Azure. It
provides a registry that's close to your deployments in Azure and that gives you control over
access, making it possible to use your Azure Active Directory groups and permissions.
In this tutorial, the registry source is Azure Container Registry, but you can also use Docker Hub or
a private registry with minor modifications.
Azure Cosmos DB for MongoDB
Azure Cosmos DB for MongoDB is a NoSQL database used in this tutorial to store data. Access to the Azure Cosmos DB resource is via a connection string, which is passed as an environment variable to the containerized app.

Authentication
In this tutorial, you'll build a Docker image (either locally or directly in Azure) and deploy it to Azure App Service.
The App Service pulls the container image from an Azure Container Registry repository.
The App Service uses managed identity to pull images from Azure Container Registry. Managed identity allows
you to grant permissions to the web app so that it can access other Azure resources without the need to specify
credentials. Specifically, this tutorial uses a system assigned managed identity. Managed identity is configured
during setup of App Service to use a registry container image.
The tutorial sample web app uses MongoDB to store data. The sample code connects to Azure Cosmos DB via a
connection string.
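Connection strings embed credentials, so if you ever assemble one from parts rather than copying it from the portal, special characters in the user name or password must be percent-encoded. A stdlib sketch follows; the host format and port shown are placeholders, so use the values your own account reports:

```python
from urllib.parse import quote_plus

def mongo_connection_string(user: str, password: str, host: str, port: int = 10255) -> str:
    """Build a MongoDB connection string, percent-encoding the credentials.
    The port 10255 default is shown only as an example value."""
    return (
        f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
        f"@{host}:{port}/?ssl=true"
    )
```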

Prerequisites
To complete this tutorial, you'll need:
An Azure account where you can create:
Azure Container Registry
Azure App Service
Azure Cosmos DB for MongoDB (or access to an equivalent). To create an Azure Cosmos DB for
MongoDB database, you can use the steps for Azure portal, Azure CLI, PowerShell, or VS Code. The
sample tutorial requires that you specify a MongoDB connection string, a database name, and a
collection name.
Visual Studio Code or Azure CLI, depending on what tool you'll use.
For Visual Studio Code, you'll need the Docker extension and Azure App Service extension.
Python packages:
PyMongo for connecting to MongoDB.
Flask or Django as a web framework.
Docker installed locally if you want to run container locally.

Sample app
The Python sample app is a restaurant review app that saves restaurant and review data in MongoDB. For an
example of a web app using PostgreSQL, see Deploy a Python web app to Azure with managed identity.
At the end of the tutorial, you'll have a restaurant review app deployed and running in Azure that looks like the
screenshot below.

Next step
Build and test locally
Build and test a containerized Python web app
locally
10/28/2022 • 8 minutes to read

This article is part of a tutorial about how to containerize and deploy a containerized Python web app to Azure
App Service. App Service enables you to run containerized web apps and deploy through continuous
integration/continuous deployment (CI/CD) capabilities with Docker Hub, Azure Container Registry, and Visual
Studio Team Services. In this part of the tutorial, you learn how to build and run the containerized Python web
app locally. This step is optional and isn't required to deploy the sample app to Azure.
Running a Docker image locally in your development environment requires setup beyond deployment to Azure.
Think of it as an investment that can make future development cycles easier, especially when you move beyond
sample apps and you start to create your own web apps. To deploy the sample apps for Django and Flask, you
can skip this step and go to the next step in this tutorial. You can always return after deploying to Azure and
work through these steps.
The service diagram shown below highlights the components covered in this article.

1. Clone or download the sample app


Git clone
Download

Clone the repository:

# Django
git clone https://github.com/Azure-Samples/msdocs-python-django-container-web-app.git

# Flask
git clone https://github.com/Azure-Samples/msdocs-python-flask-container-web-app.git

Then navigate into that folder:

# Django
cd msdocs-python-django-container-web-app

# Flask
cd msdocs-python-flask-container-web-app

2. Build a Docker image


If you're using one of the framework sample apps available for Django and Flask, you're set to go. If you're
working with your own sample app, take a look to see how the sample apps are set up, in particular the
Dockerfile in the root directory.
VS Code
Docker CLI

These instructions require Visual Studio Code and the Docker extension. Go to the sample folder you cloned or downloaded and open VS Code with the command: code .

Step 1: Open the Docker extension. If the Docker extension reports an error "Failed to connect", make sure Docker is installed and running. If this is your first time working with Docker, you probably won't have any containers, images, or connected registries.

Step 2: Build the image. In the project Explorer showing the project files, right-click the Dockerfile and select Build Image.... Alternately, you can use the Command Palette (F1 or Ctrl+Shift+P) and type "Docker Images: Build Image" to invoke the command. For more information about Dockerfile syntax, see the Dockerfile reference.

Step 3: Confirm the image was built. Go to the IMAGES section of the Docker extension and look for the recently built image. The name of the container image is "msdocspythoncontainerwebapp", which is set in the .vscode/tasks.json file.

At this point, you have built an image locally. The image you created has the name
"msdocspythoncontainerwebapp" and tag "latest". Tags are a way to define version information, intended use,
stability, or other information. For more information, see Recommendations for tagging and versioning
container images.
Images that are built from VS Code or from using the Docker CLI directly can also be viewed with the Docker
Desktop application.

3. Set up MongoDB
This tutorial assumes you have MongoDB installed locally or you have MongoDB hosted in Azure or elsewhere
that you have access to. Don't use a MongoDB database you'll use in production.
Local MongoDB
Azure Cosmos DB for MongoDB

Step 1: Install MongoDB if it isn't already installed.

Check if it's installed:

mongo --version

Step 2: Edit the mongod.cfg file to add your computer's IP address.


The mongod configuration file has a bindIp key that defines the hostnames and IP addresses that MongoDB listens on for client connections. Add the current IP address of your local development computer. The sample app running locally in a Docker container will communicate with the host machine using this address.
For example, part of the configuration file should look like this:

net:
port: 27017
bindIp: 127.0.0.1,<local-ip-address>

Restart MongoDB to pick up changes to the configuration file.
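One common way to find the local IP address to add to bindIp (and to use later where the tutorial asks for <your-server-IP-address>) is to open a UDP socket and read the source address the OS selects; a sketch with a loopback fallback:

```python
import socket

def local_ip() -> str:
    """Return this machine's outbound IP address, or 127.0.0.1 if
    no network route is available. No packets are actually sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))   # the address is only used to pick a route
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(local_ip())
```

You can also read the address from your OS network settings; this snippet is just a convenience.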


Step 3: Create a database and collection in the local MongoDB database.
Set the database name to "restaurants_reviews" and the collection name to "restaurants_reviews". You can create a database and collection with the VS Code MongoDB extension, the MongoDB Shell (mongosh), or any other MongoDB-aware tool.
For the MongoDB shell, here are example commands to create the database and collection:

> help
> use restaurants_reviews
> db.restaurants_reviews.insertOne({})
> show dbs
> exit

At this point, your local MongoDB connection string is "mongodb://127.0.0.1:27017/", the database name is
"restaurants_reviews", and the collection name is "restaurants_reviews".

4. Run the image locally in a container


With information on how to connect to a MongoDB instance, you're ready to run the container locally. The sample app expects MongoDB connection information to be passed in environment variables. There are several ways to pass environment variables to a container locally, each with advantages and disadvantages in terms of security. You should avoid checking in any sensitive information or leaving sensitive information in code in the container.
NOTE
When deployed to Azure, the web app will get connection info from environment values set as App Service configuration
settings and none of the modifications for the local development environment scenario apply.

VS Code
Docker CLI

In the .vscode folder of the sample app, the settings.json file defines what happens when you use the Docker extension and select Run or Run Interactive from the context menu of a tag. The settings.json file contains two templates each for the (MongoDB local) and (MongoDB Azure) scenarios.
  Replace both instances of <YOUR_IP_ADDRESS> with your IP address.
  Replace both instances of <CONNECTION_STRING> with the connection string for your MongoDB database.

NOTE: Both the database name and collection name are assumed to be restaurants_reviews.

Run the image.
  In the IMAGES section of the Docker extension, find the built image.
  Expand the image to find the latest tag, then right-click it and select Run Interactive.
  You'll be prompted to select the task appropriate for your scenario, either "Interactive run configuration (MongoDB local)" or "Interactive run configuration (MongoDB Azure)".
  With interactive run, you'll see any print statements in the code, which can be useful for debugging. You can also select Run, which is non-interactive and doesn't keep standard input open.

Confirm that the container is running.
  In the CONTAINERS section of the Docker extension, find the container.
  Expand the Individual Containers node and confirm that "msdocspythoncontainerwebapp" is running. You should see a green triangle symbol next to the container name if it's running.

Test the web app by right-clicking the container name and selecting Open in Browser. The app will open in your default browser as "http://127.0.0.1:8000" for Django or "http://127.0.0.1:5000/" for Flask.

Stop the container.
  In the CONTAINERS section of the Docker extension, find the running container.
  Right-click the container and select Stop.

TIP
You can also run the container by selecting a run or debug configuration. The Docker extension tasks in tasks.json are called when you run or debug. The task called depends on what launch configuration you select. For the task "Docker: Python (MongoDB local)", specify <YOUR-IP-ADDRESS>. For the task "Docker: Python (MongoDB Azure)", specify <CONNECTION-STRING>.

You can also start a container from an image and stop it with the Docker Desktop application.

Next step
Build a container image in Azure
Build a containerized Python web app in the cloud
10/28/2022 • 6 minutes to read

This article is part of a tutorial about how to containerize and deploy a Python web app to Azure App Service.
App Service enables you to run containerized web apps and deploy through continuous integration/continuous
deployment (CI/CD) capabilities with Docker Hub, Azure Container Registry, and Visual Studio Team Services. In
this part of the tutorial, you learn how to build the containerized Python web app in the cloud.
In the previous optional part of this tutorial, a container image was built and run locally. In contrast, in this part of the tutorial, you'll build (containerize) a Python web app into a Docker image directly in Azure Container Registry. Building the image in Azure is typically faster and easier than building locally and then pushing the image to a registry. Also, building in the cloud doesn't require Docker to be running in your dev environment. Once the Docker image is in Azure Container Registry, it can be deployed to Azure App Service.
The service diagram shown below highlights the components covered in this article.

1. Create an Azure Container Registry


If you already have an Azure Container Registry you can use, go to the next step. If you don't, create one.
Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to create an Azure Container Registry.

Step 1: Search for "container registries" in the portal search and go to the Container registries service.

Step 2: Select + Create to start the create process.

Step 3: Fill out the form and specify:
  Resource group: Use an existing group or create a new one.
  Registry name: The registry name must be unique within Azure, and contain 5-50 alphanumeric characters.
  Location: If you are using an existing resource group, select the location to match. Otherwise, the location is where the resource group that contains the registry is created.
  SKU: Select Standard.
When finished, select Review + create. After the validation is complete, select Create.

Step 4: After the deployment is complete, go to the new registry and find the fully qualified name.
  Go to the Overview resource of the registry.
  Find the Login server. It should be a fully qualified name ending with "azurecr.io".

Step 5: The admin account is required to deploy a container image from a registry to Azure Web Apps for Containers. Enable the admin user:
  Go to the Access keys resource of the registry.
  Select Enabled for the Admin user.
The registry admin account is needed when you use the Azure portal to deploy a container image, as shown in this tutorial. The admin account is only used during the creation of the App Service. After the App Service is created, managed identity is used to pull images from the registry and the admin account can be disabled.

2. Build an image in Azure Container Registry


You can build the container image directly in Azure in a few ways. First, you can use the Azure Cloud Shell, which
builds the image without using your local environment at all. You can also build the container image in Azure
from your local environment using VS Code or the Azure CLI. Building the image in the cloud doesn't require
Docker to be running in your local environment.
Azure portal
VS Code
Azure CLI

Sign in to the Azure portal to complete these steps.


Step 1. Open Azure Cloud Shell.

Step 2. Use the following az acr build command to build.

az acr build \
-r <registry-name> \
-g <resource-group> \
-t msdocspythoncontainerwebapp:latest \
<repo-path>

The last argument in the command is the fully qualified path to the repo. Use https://github.com/Azure-Samples/msdocs-python-django-container-web-app.git for the Django sample app and https://github.com/Azure-Samples/msdocs-python-flask-container-web-app.git for the Flask sample app.
The command above is for Bash shell. If you use PowerShell as your shell, change the line continuation character
from backslash ("\") to backtick ("`").
Step 3. Confirm the container image was created with the az acr repository list command.

az acr repository list -n <registry-name>

Next step
Deploy web app
Deploy a containerized Python app to App Service
10/28/2022 • 11 minutes to read

This article is part of a tutorial about how to containerize and deploy a Python web app to Azure App Service.
App Service enables you to run containerized web apps and deploy through continuous integration/continuous
deployment (CI/CD) capabilities with Docker Hub, Azure Container Registry, and Visual Studio Team Services.
In this part of the tutorial, you learn how to deploy the containerized Python web app to App Service using the
App Service Web App for Containers. Web App for Containers allows you to focus on composing your
containers without worrying about managing and maintaining an underlying container orchestrator.
Following the steps here, you'll end up with an App Service website using a Docker container image. The App
Service pulls the initial image from Azure Container Registry using managed identity for authentication.
The service diagram shown below highlights the components covered in this article.

1. Create the web app


Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to create the web app.

IN ST RUC T IO N S SC REEN SH OT
IN ST RUC T IO N S SC REEN SH OT

Create a new App Service:


In the Azure portal search, search for "App Services" and select App Services.
Select + Create to start the create process.

On the basic settings of the App Service, specify:

Resource Group → Use the same resource group that the Azure Container Registry is in.
Name → Use a unique name; it becomes part of the URL http://<app-name>.azurewebsites.net .
Publish → Use Docker container so that the registry image you build is used.
Operating System → Linux
Region → Use the same region as the resource group and Azure Container Registry.
Linux Plan → Select an existing Linux plan or create a new one.
SKU and size → Select Basic B1. Select the Change size link to access more options.
Zone redundancy → Select Disabled if this option is available for the selected SKU.
Select Next: Docker to continue.

Specify the Docker information for the App Service:

Options → Select Single Container.
Image Source → Select Azure Container Registry.
Registry → The registry you created for this tutorial.
Image → An image in the registry.
Tag → "latest"
The registry admin account is needed when you use the Azure portal to deploy a container image. If the admin account isn't enabled, you'll see an error when specifying the Image. After the App Service is created, managed identity is used to pull images from the registry, and the admin account can be disabled.

Go to the Review + Create page and select Create.
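The Azure CLI tab isn't reproduced above, but the portal steps roughly correspond to the following sketch. This is a hedged outline under assumed placeholder names, not the tutorial's exact commands; flag names can vary by CLI version:

```shell
# Assumed placeholder values; substitute your own.
RESOURCE_GROUP="<resource-group>"     # same group as the Azure Container Registry
APP_NAME="<app-name>"                 # becomes <app-name>.azurewebsites.net
REGISTRY_NAME="<registry-name>"

# Create a Linux App Service plan at the Basic B1 tier.
az appservice plan create \
  --name "${APP_NAME}-plan" \
  --resource-group "$RESOURCE_GROUP" \
  --is-linux \
  --sku B1

# Create the web app from the container image built earlier in the tutorial.
az webapp create \
  --resource-group "$RESOURCE_GROUP" \
  --plan "${APP_NAME}-plan" \
  --name "$APP_NAME" \
  --deployment-container-image-name \
    "${REGISTRY_NAME}.azurecr.io/msdocspythoncontainerwebapp:latest"
```

The CLI route requires the container image to already exist in the registry, just as the portal flow does.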

2. Configure managed identity and webhook


Azure portal
VS Code
Azure CLI

Go to the Azure portal to follow these steps.



Enable managed identity.

Go to the Identity resource of the App Service.
Under System assigned, set Status to On.
Select Save.
Select Yes in the prompt to continue.

In the Identity resource of the App Service, where you just enabled managed identity, select Azure role assignments.

Add the "AcrPull" role for the system-assigned managed identity. The AcrPull role allows the App Service to pull images from the Azure Container Registry. In Azure role assignments, select + Add role assignment and follow the prompts to add:
Scope → "Resource group"
Subscription → Your subscription.
Resource group → The group with the Azure Container Registry and App Service.
Role → "AcrPull"
Select Save to save the role.
For more information, see Assign Azure roles using the Azure portal.

Configure App Service deployment to use managed identity.


Go to the Deployment Center resource of the App
Service.
In the Settings tab, set Authentication to
Managed Identity .
Select Save to save the changes.
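For reference, the identity and role configuration above can be scripted with the Azure CLI. This is a sketch under assumed placeholder names; the `acrUseManagedIdentityCreds` setting is what switches image pulls to the managed identity:

```shell
# Assumed placeholder values; substitute your own.
RESOURCE_GROUP="<resource-group>"
APP_NAME="<app-name>"
REGISTRY_NAME="<registry-name>"

# Enable the system-assigned managed identity and capture its principal ID.
PRINCIPAL_ID=$(az webapp identity assign \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --query principalId --output tsv)

# Grant the identity AcrPull on the registry so App Service can pull images.
REGISTRY_ID=$(az acr show --name "$REGISTRY_NAME" --query id --output tsv)
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --scope "$REGISTRY_ID" \
  --role AcrPull

# Tell App Service to use the managed identity when pulling from the registry.
az webapp config set \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --generic-configurations '{"acrUseManagedIdentityCreds": true}'
```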

Create a webhook that triggers updates to App Service when new images are pushed to the Azure Container Registry.

First, get the application scope credential:

Go to the Deployment Center resource of the App Service.
In the FTPS credentials tab, get the Password value under Application Scope.

Then, create the webhook using the credential value and App Service name:

Go to the Azure Container Registry that has the repo and container image and select the Webhooks resource page.
On the webhooks page, select + Add.
Specify the parameters as follows:
Webhook name → Enter "webhookforwebapp".
Location → Use the location of the registry.
Service URI → A string that is a combination of the App Service name and the credential. See below.
Actions → Select push.
Status → Select On.
Scope → Enter "msdocspythoncontainerwebapp:*".

The service URI is formatted as "https://$" + APP_SERVICE_NAME + ":" + CREDENTIAL + "@" + APP_SERVICE_NAME + ".scm.azurewebsites.net/api/registry/webhook". For example: "https://$msdocs-python-container-web-app:credential@msdocs-python-container-web-app.scm.azurewebsites.net/api/registry/webhook".
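The service URI can be assembled in Bash before pasting it into the portal. The values below are hypothetical examples; note the escaped dollar sign, which is part of the literal user name and must not be expanded as a shell variable:

```shell
# Hypothetical example values; substitute your App Service name and the
# Application Scope password from the Deployment Center FTPS credentials tab.
APP_SERVICE_NAME="msdocs-python-container-web-app"
CREDENTIAL="<application-scope-password>"

# The "\$" keeps the literal dollar sign that prefixes the user name;
# without the backslash, Bash would try to expand it as a variable.
SERVICE_URI="https://\$${APP_SERVICE_NAME}:${CREDENTIAL}@${APP_SERVICE_NAME}.scm.azurewebsites.net/api/registry/webhook"

echo "$SERVICE_URI"
```

If you'd rather script the webhook itself, the `az acr webhook create` command accepts this value through its `--uri` parameter.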

3. Configure connection to MongoDB


In this step, you specify environment variables needed to connect to MongoDB.
If you need to create an Azure Cosmos DB for MongoDB, do the following:
Create a MongoDB in Azure Cosmos DB with Azure portal, Azure CLI, PowerShell, or VS Code.
Create a database named "restaurants_reviews" and a collection named "restaurants_reviews". You can do
this using the Azure Cloud Shell and the Azure CLI. For more information, see Create a database and
collection for MongoDB for Azure Cosmos DB using Azure CLI. You can also use the VS Code Azure
Database extension to create databases and collections.
You'll need the MongoDB connection string info to follow the steps below.

Azure portal
VS Code
Azure CLI

Go to the App Service page for the web app.


Select Configuration under Settings on the
left resource menu.
Select Application settings at the top of the
page.

Create application settings:


1. Select + New application setting to create
settings for each of the following values:
CONNECTION_STRING → A connection string
that starts with "mongodb://".
DB_NAME → Use "restaurants_reviews".
COLLECTION_NAME → Use
"restaurants_reviews".
WEBSITES_PORT → Use "8000" for Django
and "5000" for Flask. This environment
variable specifies the port on which the
container is listening.
2. Confirm you have four settings with the correct
values.
3. Select Save to apply the settings.
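The same four application settings can be created in one Azure CLI call. This is a sketch with placeholder values, not the tutorial's exact command:

```shell
# Assumed placeholder values; substitute your own.
RESOURCE_GROUP="<resource-group>"
APP_NAME="<app-name>"

# Set all four application settings in one call. Use WEBSITES_PORT=8000
# for the Django sample or 5000 for the Flask sample.
az webapp config appsettings set \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --settings \
    CONNECTION_STRING='<mongodb-connection-string>' \
    DB_NAME='restaurants_reviews' \
    COLLECTION_NAME='restaurants_reviews' \
    WEBSITES_PORT='8000'
```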

4. Browse the site


To verify the site is running, go to https://<website-name>.azurewebsites.net . If successful, you should see the
restaurant review sample app. It can take a few moments for the site to start the first time. When the site
appears, add a restaurant, and a review for that restaurant to confirm the sample app is functioning.

Azure portal
VS Code
Azure CLI


On the Overview page of the App Service, select Browse.

5. Troubleshoot deployment
If you don't see the sample app, try the following steps.
With container deployment and App Service, always check the Deployment Center / Logs page in the
Azure portal. Confirm that the container was pulled and is running. The initial pull and running of the
container can take a few moments.
Try to restart the App Service and see if that resolves your issue.
If there are programming errors, those errors will show up in the application logs. On the Azure portal page
for the App Service, select Diagnose and solve problems /Application logs .
The sample app relies on a connection to MongoDB. Confirm that the App Service has application settings
with the correct connection info.
Confirm that managed identity is enabled for the App Service and is used in the Deployment Center. On the
Azure portal page for the App Service, go to the App Service Deployment Center resource and confirm
that Authentication is set to Managed Identity .
Check that the webhook is defined in the Azure Container Registry. The webhook enables the App Service to
pull the container image. In particular, check that Service URI ends with "/api/registry/webhook".
Different Azure Container Registry SKUs have different features, including the number of webhooks. If you're
reusing an existing registry, you could see the message: "Quota exceeded for resource type webhooks for the
registry SKU Basic. Learn more about different SKU quotas and upgrade process: https://aka.ms/acr/tiers". If
you see this message, use a new registry, or reduce the number of registry webhooks in use.
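When troubleshooting, the container's startup output is often the fastest signal. As a hedged sketch with placeholder names, the log stream can be enabled and followed from the CLI:

```shell
# Assumed placeholder values; substitute your own.
RESOURCE_GROUP="<resource-group>"
APP_NAME="<app-name>"

# Make sure container stdout/stderr is captured to the App Service filesystem.
az webapp log config \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --docker-container-logging filesystem

# Stream the live log, including the container pull and startup messages.
az webapp log tail \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME"
```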

Next step
Clean up resources
Containerize tutorial cleanup and next steps
10/28/2022 • 2 minutes to read

This article is part of a tutorial about how to containerize and deploy a Python web app to Azure App Service. In
this article, you'll clean up resources used in Azure so you don't incur other charges and help keep your Azure
subscription uncluttered. You can leave the Azure resources running if you want to use them for further
development work.

1. Clean up resources
In this tutorial, all the Azure resources were created in the same resource group. Removing the resource group
removes all resources in the resource group and is the fastest way to remove all Azure resources used for your
app.
Azure portal
VS Code
Azure CLI

Sign in to the Azure portal and follow these steps to delete a resource group.


Navigate to the resource group in the Azure portal.


1. Enter the name of the resource group in the search
bar at the top of the page.
2. Under the Resource Groups heading, select the
name of the resource group to navigate to it.

Select the Delete resource group button at the top of the


page.

In the confirmation dialog, enter the name of the resource


group to confirm deletion. Select Delete to delete the
resource group.
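The same cleanup can be done with a single Azure CLI command. The resource group name below is a placeholder:

```shell
# Deletes the resource group and everything in it. --no-wait returns
# immediately; --yes skips the confirmation prompt.
az group delete \
  --name "<resource-group>" \
  --no-wait \
  --yes
```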

2. Next steps
After completing this tutorial, here are some next steps you can take to build upon what you learned and move
the tutorial code and deployment closer to production ready:
Deploy a web app from a geo-replicated Azure container registry
Review Security in Azure Cosmos DB
Map a custom DNS name to your app, see Tutorial: Map custom DNS name to your app.
Monitor App Service for availability, performance, and operation, see Monitoring App Service and Set up
Azure Monitor for your Python application.
Enable continuous deployment to Azure App Service, see Continuous deployment to Azure App Service,
Use CI/CD to deploy a Python web app to Azure App Service on Linux, and Design a CI/CD pipeline using
Azure DevOps.
Create reusable infrastructure as code with Azure Developer CLI (azd) Preview.

3. Related Learn modules


The following are some Learn modules that explore the technologies and themes covered in this tutorial:
Introduction to Python
Get started with Django
Create views and templates in Django
Create data-driven websites by using the Python framework Django
Deploy a Django application to Azure by using PostgreSQL
Get Started with the MongoDB API in Azure Cosmos DB
Migrate on-premises MongoDB databases to Azure Cosmos DB
Build a containerized web application with Docker
Overview: Deploy a Python web app on Azure
Container Apps
10/28/2022 • 4 minutes to read

This tutorial shows you how to containerize a Python web app and deploy it to Azure Container Apps. A sample
web app will be containerized and the Docker image stored in Azure Container Registry. Azure Container Apps
is configured to pull the Docker image from Container Registry and create a container. The sample app connects
to an Azure Database for PostgreSQL to demonstrate communication between Container Apps and other Azure
resources.
There are several options to build and deploy cloud native and containerized Python web apps on Azure. This
tutorial covers Azure Container Apps. Container Apps are good for running general purpose containers,
especially for applications that span many microservices deployed in containers. In this tutorial, you'll create one
container. To deploy a Python web app as a container to Azure App Service, see Containerized Python web app
on App Service.
In this tutorial you'll:
Build a Docker image from a Python web app and store the image in Azure Container Registry.
Configure Azure Container Apps to host the Docker image.
Set up a GitHub Action that updates the container with a new Docker image triggered by changes to your
GitHub repository. This last step is optional.
Following this tutorial, you'll be set up for Continuous Integration (CI) and Continuous Deployment (CD) of a
Python web app to Azure.

Service overview
The service diagram supporting this tutorial shows how your local environment, GitHub repositories, and Azure
services are used in the tutorial.

The components supporting this tutorial and shown in the diagram above are:
Azure Container Apps
Azure Container Apps enables you to run microservices and containerized applications on a serverless
platform. A serverless platform means that you enjoy the benefits of running containers with minimal
configuration. With Azure Container Apps, your applications can dynamically scale based on
characteristics such as HTTP traffic, event-driven processing, or CPU or memory load.
Container Apps pulls Docker images from Azure Container Registry. Changes to container images
trigger an update to the deployed container. You can also configure GitHub Actions to trigger updates.
Azure Container Registry
Azure Container Registry enables you to work with Docker images in Azure. Because Container
Registry is close to your deployments in Azure, you have control over access, making it possible to use
your Azure Active Directory groups and permissions to control access to Docker images.
In this tutorial, the registry source is Azure Container Registry, but you can also use Docker Hub or a
private registry with minor modifications.
Azure Database for PostgreSQL
The sample code stores application data in a PostgreSQL database.
The container app connects to PostgreSQL through environment variables configured explicitly or
with Azure Service Connector.
GitHub
The sample code for this tutorial is in a GitHub repo that you'll fork and clone locally. To set up a CI/CD
workflow with GitHub Actions, you'll need a GitHub account.
You can still follow along with this tutorial without a GitHub account, working locally or in the Azure
Cloud Shell to build the container image from the sample code repo.

Revisions and CI/CD


To make code changes and push them to a container, you create a new Docker image with your change. Then, you push the image to Container Registry and create a new revision of the container app. To automate this process, an optional step in the tutorial shows you how to build a continuous integration and continuous deployment (CI/CD) pipeline with GitHub Actions. A GitHub Actions workflow automatically builds and deploys your code to Azure Container Apps when changes are made to a specified GitHub repository.

Authentication and security


In this tutorial, you'll build a Docker container image directly in Azure and deploy it to Azure Container Apps.
Container Apps run in the context of an environment, which is supported by an Azure Virtual Network (VNet).
VNets are a fundamental building block for your private network in Azure. Container Apps allows you to expose
your container app to the public web by enabling ingress.
To set up continuous integration and continuous delivery (CI/CD), you'll authorize Azure Container Apps as an
OAuth App for your GitHub account. As an OAuth App, Container Apps writes a GitHub Actions workflow file to
your repo with information about Azure resources and jobs to update them. The workflow updates Azure
resources using the credentials of an Azure Active Directory service principal (new or existing) with role-based
access for Container Apps, and a username and password for Azure Container Registry. Credentials are stored
securely in your GitHub repo.
Finally, the tutorial sample web app stores data in a PostgreSQL database. The sample code connects to
PostgreSQL via a connection string. During the configuration of the Container App, the tutorial walks you
through setting up environment variables containing connection information. You can also use an Azure Service
Connector to accomplish the same thing.

Prerequisites
To complete this tutorial, you'll need:
An Azure account where you can create:
Azure Container Registry
Azure Container Apps environment
Azure Database for PostgreSQL
Visual Studio Code or Azure CLI, depending on what tool you'll use
For Visual Studio Code, you'll need the Container Apps extension.
You can also use Azure CLI through the Azure Cloud Shell.
Python packages:
psycopg2-binary for connecting to PostgreSQL.
Flask or Django web framework.

Sample app
The Python sample app is a restaurant review app that saves restaurant and review data in PostgreSQL. At the
end of the tutorial, you'll have a restaurant review app deployed and running in Azure Container Apps that looks
like the screenshot below.

Next step
Build and deploy to Azure Container Apps
Build and deploy a Python web app with Azure
Container Apps and PostgreSQL
10/28/2022 • 21 minutes to read

This article is part of a tutorial about how to containerize and deploy a Python web app to Azure Container
Apps. Container Apps enables you to deploy containerized apps without managing complex infrastructure.
In this part of the tutorial, you learn how to containerize and deploy a Python sample web app (Django or Flask).
Specifically, you'll build the container image in the cloud and deploy it to Azure Container Apps. You'll define
environment variables that enable the container app to connect to an Azure Database for PostgreSQL - Flexible
Server instance, where the sample app stores data.
The service diagram shown below highlights the components covered in this article: building and deploying a
container image.

NOTE
Command lines in this tutorial are shown in the Bash shell, on multiple lines for clarity. For other shell types, change the
line continuation characters as appropriate. For example, for PowerShell, use back tick ("`"). Or, remove the continuation
characters and enter the command on one line.

Get the sample app


Fork and clone the sample code to your developer environment.
Step 1. Go to the GitHub repository of the sample app (Django or Flask) and select Fork .
Follow the steps to fork the repository to your GitHub account. You can also download the code repo directly to
your local machine without forking or a GitHub account; however, you won't be able to set up the CI/CD discussed
later in the tutorial.
Step 2. Use the git clone command to clone the forked repo into the python-container folder:
# Django
git clone https://github.com/$USERNAME/msdocs-python-django-azure-container-apps.git python-container

# Flask
# git clone https://github.com/$USERNAME/msdocs-python-flask-azure-container-apps.git python-container

Step 3. Change directory.

cd python-container

Build a container image from web app code


After following these steps, you'll have an Azure Container Registry that contains a Docker container image built
from the sample code.
Azure portal
VS Code
Azure CLI

Step 1. In the Azure portal, search for "container registries" and select the Container Registries service in the
results.

Step 2. Select + Create to start the create process.

Step 3. Fill out the form and specify.


Resource group → Create a new one named pythoncontainer-rg.
Registry name → The registry name must be unique within Azure and contain 5-50 alphanumeric characters.
Location → Select a location.
SKU → Select Standard .
When finished, select Review + create . After validation is complete, select Create .

Step 4. Enable the administrator user account.


In the Container registry you just created, go to the Access Keys resource.
Select Enabled for the Admin user .

Step 5. Select the Azure Cloud Shell icon in the top menu bar to finish configuration and building an image.
You can also go directly to Azure Cloud Shell.
Step 6. Use the az acr build command to build the image from the repo.

az acr build --registry <registry-name> \


--resource-group pythoncontainer-rg \
--image pythoncontainer:latest <repo-path>

Specify <registry-name> as the name of the registry you created. For <repo-path>, choose either the Django or
Flask repo path.
After the command completes, go to the registry's Repositories resource and confirm the image shows up.

Create a PostgreSQL Flexible Server instance


The sample app (Django or Flask) stores restaurant review data in a PostgreSQL database. In these steps, you'll
create the server that will contain the database.

Azure portal
VS Code
Azure CLI

Step 1. In the Azure portal, search for "postgres flexible" and select the Azure Database for PostgreSQL flexible servers service in the results.

Step 2. Select + Create to start the create process.


Step 3. Fill out the Basics settings as follows:
Resource group → The resource group used in this tutorial "pythoncontainer-rg".
Server name → Enter a name for the database server that's unique across all Azure. The database server's
URL becomes https://<server-name>.postgres.database.azure.com . Allowed characters are A - Z , 0 - 9 , and
- . For example: postgres-db-<unique-id>.
Region → The same region you used for the resource group.
Admin username → Use demoadmin.
Password and Confirm password → A password that you'll use later when connecting the container app
to this database.
For all other settings, leave the defaults. When done, select Networking to go to the networking page.
Step 4. Fill out the Networking settings as follows:
Connectivity method → Select Public access .
Allow public access from any Azure service → Select the checkbox, that is, allow access.
Add current client IP address → Select (add) if you plan on accessing the database from your local environment.
For all other settings, leave the defaults. Select Review + Create to continue.

Step 5. Review the information and, when satisfied, select Create.
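The Azure CLI equivalent of the portal steps above can be sketched as follows. This is a hedged outline with placeholder values; the SKU and firewall choices are assumptions matching the tutorial's defaults, not prescribed settings:

```shell
# Placeholder values; substitute your own server name, location, and password.
az postgres flexible-server create \
  --resource-group pythoncontainer-rg \
  --name "<postgres-server-name>" \
  --location "<location>" \
  --admin-user demoadmin \
  --admin-password "<admin-password>" \
  --sku-name Standard_B1ms \
  --tier Burstable \
  --public-access 0.0.0.0
# 0.0.0.0 allows access from Azure services; add your client IP with
# "az postgres flexible-server firewall-rule create" if you'll connect locally.
```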


Create a database on the server
At this point, you have a PostgreSQL server and now you'll create a database on the server.
psql
VS Code
Azure CLI

You can use the PostgreSQL interactive terminal psql in your local environment, or in the Azure Cloud Shell,
which is also accessible in the Azure portal. When working with psql, it's often easier to use the Cloud Shell
because all the dependencies are included for you in the shell.
Step 1. Connect to the database with psql.

psql --host=<postgres-server-name>.postgres.database.azure.com \
--port=5432 \
--username=demoadmin \
--dbname=postgres

Where <postgres-server-name> is the name of the PostgreSQL server. The command above will prompt you for
the admin password.
If you have trouble connecting, restart the database and try again. If you're connecting from your local
environment, your IP address must be added to the firewall rule list for the database service.
Step 2. Create the database.
At the postgres=> prompt type:

CREATE DATABASE restaurants_reviews;

The semicolon (";") at the end of the command is necessary. To verify that the database was successfully created,
use the command \c restaurants_reviews . Type \? to show help or \q to quit.
You can also connect to Azure PostgreSQL Flexible server and create a database using Azure Data Studio or any
other IDE that supports PostgreSQL.
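As an alternative to psql, the database can also be created with a single Azure CLI command. The server name below is a placeholder:

```shell
# Creates the restaurants_reviews database on the flexible server.
az postgres flexible-server db create \
  --resource-group pythoncontainer-rg \
  --server-name "<postgres-server-name>" \
  --database-name restaurants_reviews
```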

Deploy the web app to Container Apps


Container apps are deployed to Container Apps environments, which act as a secure boundary. In the following
steps, you'll create the environment, a container inside the environment, and configure the container so that the
website is visible externally.

Azure portal
VS Code
Azure CLI

Step 1. In the portal search at the top of the screen, search for "container apps" and select the Container Apps
service in the results.
Step 2. Select + Create to start the create process.

Step 3. On the Basics page, specify the basic configuration of the container app.
Resource group → Use the group created earlier, which contains the Azure Container Registry.
Container app name → python-container-app.
Region → Use the same region/location as the resource group.
Container Apps Environment → Select Create new to create a new environment named python-
container-env.
Select Next: App settings to continue configuration.
Step 4. On the App settings page, continue configuring the container app.
Use quickstart image → Unselect the checkbox.
Name → python-container-app.
Image Source → Select Azure Container Registry.
Registry → Select the name of the registry you created earlier.
Image name → Select pythoncontainer (the name of the image you built).
Image tag → Select latest.
HTTP Ingress → Select checkbox (enabled).
Ingress traffic → Select Accepting traffic from anywhere .
Target port → Set to 8000 for Django or 5000 for Flask.
Select Review and create to go to the review page. After reviewing the settings, select Create to kick off
deployment.
Step 5. After the deployment finishes, select Go to resource .
Step 6. Create a revision of the container that contains environment variables.
Select the Containers resource of the newly created container.
Then, select Edit and deploy .
On the Create and deploy new revision page, select the name of the container image, in this case
python-container-app.
On the Edit container page, create environment variables as shown below and then select Save .
Back on the Create and deploy new revision page, select Create .
Here are the following environment variables to create:
AZURE_POSTGRESQL_HOST=<postgres-server-name>.postgres.database.azure.com
AZURE_POSTGRESQL_DATABASE=restaurants_reviews
AZURE_POSTGRESQL_USERNAME=demoadmin
AZURE_POSTGRESQL_PASSWORD=<admin-password>
RUNNING_IN_PRODUCTION=1
TIP
Instead of defining environment variables as shown above, you can use Service Connector. Service Connector helps you
connect Azure compute services to other backing services by configuring connection information and generating and
storing environment variables for you. If you use a service connector, make sure you synchronize the environment
variables in the sample code to the environment variables created with Service Connector.
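The environment variables above can also be set from the CLI, which creates a new revision of the container app. This is a sketch with placeholder values; in a real deployment, prefer Container Apps secrets or Service Connector over passing the password as a plain environment variable:

```shell
# Placeholder values; substitute your server name and admin password.
# Running this command creates a new revision of the container app.
az containerapp update \
  --name python-container-app \
  --resource-group pythoncontainer-rg \
  --set-env-vars \
    AZURE_POSTGRESQL_HOST='<postgres-server-name>.postgres.database.azure.com' \
    AZURE_POSTGRESQL_DATABASE='restaurants_reviews' \
    AZURE_POSTGRESQL_USERNAME='demoadmin' \
    AZURE_POSTGRESQL_PASSWORD='<admin-password>' \
    RUNNING_IN_PRODUCTION='1'
```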

Step 7. Django only, migrate and create database schema. (In the Flask sample app, it's done automatically, and
you can skip this step.)
Go to the Monitoring - Console resource of the container app.
Choose a startup command and select Connect .
At the shell prompt, type python manage.py migrate .
You don't need to migrate for revisions of the container.
Step 8. Test the website.
Go to the container app's Overview resource.
Under Essentials , select Application Url to open the website in a browser.

Here's an example of the sample website after adding a restaurant and two reviews.

Troubleshoot deployment
You forgot the Application Url to access the website.
In the Azure portal, go to the Overview page of the Container App and look for the Application Url .
In VS Code, go to the Azure extension and select the Container Apps section. Expand the
subscription, expand the container environment, and when you find the container app, right-click
python-container-app and select Browse .
With Azure CLI, use the command
az containerapp show -g pythoncontainer-rg -n python-container-app --query
properties.configuration.ingress.fqdn
.
In VS Code, the Build Image in Azure task returns an error.
If you see the message "Error: failed to download context. Please check if the URL is incorrect." in the
VS Code Output window, then refresh the registry in the Docker extension. To refresh, select the
Docker extension, go to the Registries section, find the registry and select it.
If you run the Build Image in Azure task again, check to see if your registry from a previous run
exists and if so, use it.
In the Azure portal during the creation of a Container App, you see an access error that contains "Cannot
access ACR '<name>.azurecr.io'".
This error occurs when admin credentials on the ACR are disabled. To check admin status in the portal,
go to your Azure Container Registry, select the Access keys resource, and ensure that Admin user is
enabled.
Your container image doesn't appear in the Azure Container Registry.
Check the output of the Azure CLI command or VS Code Output and look for messages to confirm
success.
Check that the name of the registry was specified correctly in your build command with the Azure CLI
or in the VS Code task prompts.
Make sure your credentials haven't expired. For example, in VS Code, find the target registry in the
Docker extension and refresh. In Azure CLI, run az login .
Website returns "Bad Request (400)".
Check the PostgreSQL environment variables passed in to the container. The 400 error often indicates
that the Python code can't connect to the PostgreSQL instance.
The sample code used in this tutorial checks for the existence of the container environment variable
RUNNING_IN_PRODUCTION , which can be set to any value like "1".
Website returns "Not Found (404)".
Check the Application Url on the Overview page for the container. If the Application Url contains
the word "internal", then ingress isn't set correctly.
Check the ingress of the container. For example, in Azure portal, go to the Ingress resource of the
container and make sure HTTP Ingress is enabled and Accepting traffic from anywhere is
selected.
Website doesn't start, you see "stream timeout", or nothing is returned.
Check the logs.
In the Azure portal, go to the Container App's Revision management resource and check the
Provision Status of the container.
If "Provisioning", then wait until provisioning has completed.
If "Failed", then select the revision and view the console logs. Choose the order of the
columns to show "Time Generated", "Stream_s", and "Log_s". Sort the logs by most-
recent first and look for Python stderr and stdout messages in the "Stream_s" column.
Python 'print' output will be stdout messages.
With the Azure CLI, use the az containerapp logs show command.
If using the Django framework, check to see if the restaurants_reviews tables exist in the database. If
not, use a console to access the container and run python manage.py migrate .

Next step
Configure continuous deployment
Configure continuous deployment for a Python web
app in Azure Container Apps
10/28/2022 • 10 minutes to read

This article is part of a tutorial about how to containerize and deploy a Python web app to Azure Container
Apps. Container Apps enables you to deploy containerized apps without managing complex infrastructure.
In this part of the tutorial, you learn how to configure continuous deployment or delivery (CD) for the container
app. CD is part of the DevOps practice of continuous integration / continuous delivery (CI/CD), which is
automation of your app development workflow. Specifically, you use GitHub Actions for continuous deployment.
The service diagram shown below highlights the components covered in this article: configuration of CI/CD.

NOTE
Command lines in this tutorial are shown in the Bash shell, on multiple lines for clarity. For other shell types, change the
line continuation characters as appropriate. For example, for PowerShell, use back tick ("`"). Or, remove the continuation
characters and enter the command on one line.

Prerequisites
To set up continuous deployment, you'll need:
The resources and their configuration created in the previous article of this tutorial series, which include an
Azure Container Registry and a container app in Azure Container Apps.
A GitHub account where you forked the sample code (Django or Flask) and you can connect to from
Azure Container Apps. (If you downloaded the sample code instead of forking, make sure you push your
local repo to your GitHub account.)
Optionally, Git installed in your development environment to make code changes and push to your repo
in GitHub. Alternatively, you can make the changes directly in GitHub.
Configure CD for a container
In a previous article of this tutorial, you created and configured a container app in Azure Container Apps. Part of
the configuration was pulling a Docker image from an Azure Container Registry. The container image is pulled
from the registry when creating a container revision, such as when you first set up the container app.
In the steps below, you'll set up continuous deployment, which means a new Docker image and container
revision are created based on a trigger. The trigger in this tutorial is any change to the main branch of your
repository, such as with a pull request (PR). When triggered, the workflow creates a new Docker image, pushes it
to the Azure Container Registry, and updates the container app to a new revision using the new image.
Azure portal
Azure CLI

Step 1. In the Azure portal, go to the Container App you want to configure continuous deployment for and
select the Continuous deployment resource.

Step 2. Authorize Azure Container Apps to access your GitHub account.


Select Sign in with GitHub.
In the authorization pop-up, select Authorize AppService.
Container App access to the GitHub account can be revoked by going to your account's security section and revoking access.
Step 3. After sign-in with GitHub, configure the continuous deployment details.
Organization → Use your GitHub user name.
Repository → Select the fork of the sample app. (If you originally downloaded the sample code to your developer environment, push the repo to GitHub.)
Branch → Select main.
Repository source → Select Azure Container Registry.
Registry → Select the Azure Container Registry you created earlier in the tutorial.
Image → Select the Docker image name. If you're following the tutorial, it's "python-container-app".
Service principal → Leave Create new and let the creation process create a new service principal.
Select Start continuous deployment to finish the configuration.
Step 4. Review the continuous deployment information.
After the continuous deployment is configured, you can find a link to the GitHub Actions workflow file created.
Azure Container Apps checked the file in to your repo.
In the configuration of continuous deployment, a service principal is used to enable GitHub Actions to access
and modify Azure resources. Access to resources is restricted by the roles assigned to the service principal. The
service principal was assigned the built-in Contributor role on the resource group containing the container app.
If you followed the steps for the portal, the service principal was automatically created for you. If you followed
the steps for the Azure CLI, you explicitly created the service principal first before configuring continuous
deployment.

Redeploy web app with GitHub Actions


In this section, you'll make a change to your forked copy of the sample repository and confirm that the change is automatically deployed to the website.
If you haven't already, make a fork of the sample repository (Django or Flask). You can make your code change
directly in GitHub or in your development environment from a command line with Git.

GitHub
Command line

Step 1. Go to your fork of the sample repository and start in the main branch.

Step 2. Make a change.


Go to the /templates/base.html file.
Select Edit and change the phrase "Azure Restaurant Review" to "Azure Restaurant Review - Redeployed".
Step 3. Commit the change directly to the main branch.
At the bottom of the page you're editing, select the Commit button.
The commit kicks off the GitHub Actions workflow.

NOTE
We showed making a change directly in the main branch. In typical software workflows, you'll make a change in a branch other than main and then create a pull request (PR) to merge those changes into main. PRs also kick off the workflow.

About GitHub Actions


Viewing workflow history
GitHub
Command line
Step 1. On GitHub, go to your fork of the sample repository and open the Actions tab.

Workflow secrets
In the .github/workflows/<workflow-name>.yml workflow file that was added to the repo, you'll see
placeholders for credentials that are needed for the build and container app update jobs of the workflow. The
credential information is stored encrypted in the repository Settings under Security/Actions.

If credential information changes, you can update it here. For example, if the Azure Container Registry
passwords are regenerated, you'll need to update the REGISTRY_PASSWORD value. For more information, see
Encrypted secrets in the GitHub documentation.
OAuth authorized apps
When you set up continuous deployment, you authorize Azure Container Apps as an authorized OAuth App for
your GitHub account. Container Apps uses the authorized access to create a GitHub Actions YML file in
.github/workflows/<workflow-name>.yml. You can see your authorized apps and revoke permissions under
Integrations/Applications of your account.

Troubleshooting tips
Errors setting up a service principal with the Azure CLI az ad sp create-for-rbac command.
You receive an error containing "InvalidSchema: No connection adapters were found".
Check the shell you're running in. If you're using the Bash shell, set the MSYS_NO_PATHCONV variable as follows: export MSYS_NO_PATHCONV=1 . For more information, see the GitHub issue Unable to create service principal with Azure CLI from git bash shell, no connection adapters were found.
You receive an error containing "More than one application have the same display name".
This error indicates the name is already taken for the service principal. Choose another name or leave
off the --name argument and a GUID will be automatically generated as a display name.

GitHub Actions workflow failed.


To check a workflow's status, go to the Actions tab of the repo.
If there's a failed workflow, drill into its workflow file. There should be two jobs "build" and "deploy". For a
failed job, look at the output of the job's tasks to look for problems.
If you see an error message with "TLS handshake timeout", run the workflow manually by selecting Trigger
auto deployment under the Actions tab of the repo to see if the timeout is a temporary issue.
If you set up continuous deployment for the container app as shown in this tutorial, the workflow file
(.github/workflows/<workflow-name>.yml) is created automatically for you. You shouldn't need to modify
this file for this tutorial. If you did, revert your changes and try the workflow.
Website doesn't show changes you merged in the main branch.
In GitHub: check that the GitHub Actions workflow ran and that you checked the change into the branch that
triggers the workflow.
In Azure portal: check the Azure Container Registry to see if a new Docker image was created with a
timestamp after your change to the branch.
In Azure portal: check the logs of the container app. If there's a programming error, you'll see it here.
Go to the Container App | Revision Management | <active container> | Revision details | Console logs
Choose the order of the columns to show "Time Generated", "Stream_s", and "Log_s". Sort the logs by
most-recent first and look for Python stderr and stdout messages in the "Stream_s" column. Python
'print' output will be stdout messages.
With the Azure CLI, use the az containerapp logs show command.
What happens when I disconnect continuous deployment?
Stopping continuous deployment means disconnecting your container app from your repo. To disconnect:
In the Azure portal, go to the container app, select the Continuous deployment resource, and select
Disconnect.
With the Azure CLI, use the az containerapp github-action remove command.
After disconnecting, in your GitHub repo:
The .github/workflows/<workflow-name>.yml file is removed from your repo.
Secret keys aren't removed.
Azure Container Apps remains as an authorized OAuth App for your GitHub account.
After disconnecting, in Azure:
The container app is left with the last deployed container image. You can reconnect the container app with the Azure
Container Registry, so that new container revisions pick up the latest image.
Service principals created and used for continuous deployment aren't deleted.
Next steps
If you're done with the tutorial and don't want to incur extra costs, remove the resources used. Removing a
resource group removes all resources in the group and is the fastest way to remove resources. For an example
of how to remove resource groups, see Containerize tutorial cleanup.
If you plan on building on this tutorial, here are some next steps you can take.
Set scaling rules in Azure Container Apps
Bind custom domain names and certificates in Azure Container Apps
Monitor an app in Azure Container Apps
Use the Azure libraries (SDK) for Python
10/28/2022 • 5 minutes to read

The open-source Azure libraries for Python simplify provisioning, managing, and using Azure resources from
Python application code.

The details you really want to know


The Azure libraries are how you communicate with Azure services from Python code that you run either
locally or in the cloud. (Whether you can run Python code within the scope of a particular service
depends on whether that service itself currently supports Python.)
The libraries support Python 3.6 or later, and they're also tested with PyPy 5.4+.
The Azure SDK for Python is composed solely of over 180 individual Python libraries that relate to
specific Azure services. There are no other tools in the "SDK".
When running code locally, authenticating with Azure relies on environment variables as described in
How to authenticate Python apps to Azure services using the Azure SDK for Python.
To install library packages with pip, use pip install <library_name> using library names from the
package index. To install library packages in conda environments, use conda install <package_name>
using names from the Microsoft channel on anaconda.org. For more information, see Install Azure
libraries.
There are distinct management and client libraries (sometimes referred to as "management plane" and
"data plane" libraries). Each set serves different purposes and is used by different kinds of code. For more
information, see the following sections later in this article:
Provision and manage Azure resources with management libraries
Connect to and use Azure resources with client libraries
Documentation for the libraries is found on the Azure for Python Reference, which is organized by Azure
Service, or the Python API browser, which is organized by package name.
To try the libraries for yourself, we first recommend setting up your local dev environment. Then you can
try any of the following standalone examples (in any order): Example: Provision a resource group,
Example: Provision and use Azure Storage, Example: Provision a web app and deploy code, Example:
Provision and use a MySQL database, and Example: Provision a virtual machine.
For demonstration videos, see Introducing the Azure SDK for Python (PyCon 2021) and Using Azure
SDKs to interact with Azure resource (PyCon 2020).
Non-essential but still interesting details
Because the Azure CLI is written in Python using the management libraries, anything you can do with
Azure CLI commands you can also do from a Python script. That said, the CLI commands provide many
helpful features such as performing multiple tasks together, automatically handling asynchronous
operations, formatting output like connection strings, and so on. Consequently, using the CLI (or its
equivalent, Azure PowerShell) for automated provisioning and management scripts can be significantly
more convenient than writing the equivalent Python code, unless you want to have a much more exacting
degree of control over the process.
The Azure libraries for Python build on top of the underlying Azure REST API, allowing you to use those
APIs through familiar Python paradigms. However, you can always use the REST API directly from Python
code, if desired.
You can find the source code for the Azure libraries on https://github.com/Azure/azure-sdk-for-python. As
an open-source project, contributions are welcome!
Although you can use the libraries with interpreters such as IronPython and Jython that we don't test
against, you may encounter isolated issues and incompatibilities.
The source repo for the library API reference documentation resides on
https://github.com/MicrosoftDocs/azure-docs-sdk-python/.
We're currently updating the Azure libraries for Python to share common cloud patterns such as
authentication protocols, logging, tracing, transport protocols, buffered responses, and retries.
This shared functionality is contained in the azure-core library.
The libraries that currently work with the Core library are listed on Azure SDK for Python latest
releases. These libraries, primarily the client libraries, are sometimes referred to as "track 2".
The management libraries and any other that aren't yet updated are sometimes referred to as
"track 1".
For details on the guidelines we apply to the libraries, see the Python Guidelines: Introduction.

Provision and manage Azure resources with management libraries


The SDK's management (or "management plane") libraries, the names of which all begin with azure-mgmt- , help
you create, provision and otherwise manage Azure resources from Python scripts. All Azure services have
corresponding management libraries.
With the management libraries, you can write configuration and deployment scripts to perform the same tasks
that you can through the Azure portal or the Azure CLI. (As noted earlier, the Azure CLI is written in Python and
uses the management libraries to implement its various commands.)
The following examples illustrate how to use some of the primary management libraries:
Provision a resource group
List resource groups in a subscription
Provision Azure Storage
Provision a web app and deploy code
Provision and query a database
Provision a virtual machine
For details on working with each management library, see the README.md or README.rst file located in the
library's project folder in the SDK GitHub repository. You can also find more code snippets in the reference
documentation and the Azure Samples.
Migrating from older management libraries
If you are migrating code from older versions of the management libraries, see the following details:
If you use the ServicePrincipalCredentials class, see Authenticate with token credentials.
The names of async APIs have changed as described on Library usage patterns - asynchronous operations.
Simply said, the names of async APIs in newer libraries start with begin_ . In most cases, the API signature
remains the same.

Connect to and use Azure resources with client libraries


The SDK's client (or "data plane") libraries help you write Python application code to interact with already-
provisioned services. Client libraries exist only for those services that support a client API.
The article, Example: Use Azure Storage, provides a basic illustration of using a client library.
Different Azure services also provide examples using these libraries. See the following index pages for other
links:
App hosting
Cognitive Services
Data solutions
Identity and security
Machine learning
Messaging and IoT
Other services
For details on working with each client library, see the README.md or README.rst file located in the library's
project folder in the SDK's GitHub repository. You can also find more code snippets in the reference
documentation and the Azure Samples.

Get help and connect with the SDK team


Visit the Azure libraries for Python documentation
Post questions to the community on Stack Overflow
Open issues against the SDK on GitHub
Mention @AzureSDK on Twitter
Complete a short survey about the Azure SDK for Python

Next step
We strongly recommend doing a one-time setup of your local development environment so that you can easily
use any of the Azure libraries for Python.
Set up your local dev environment >>>
Azure libraries for Python usage patterns
10/28/2022 • 7 minutes to read

The Azure SDK for Python is composed solely of many independent libraries, which are listed on the Python SDK
package index.
All the libraries share certain common characteristics and usage patterns, such as installation and the use of
inline JSON for object arguments.

Library installation
pip
conda

To install a specific library package, use pip install :

# Install the management library for Azure Storage
pip install azure-mgmt-storage

# Install the client library for Azure Blob Storage
pip install azure-storage-blob

pip install retrieves the latest version of a library in your current Python environment.
You can also use pip to uninstall libraries and install specific versions, including preview versions. For more
information, see How to install Azure library packages for Python.

Asynchronous operations
Many operations that you invoke through client and management client objects (such as
ComputeManagementClient.virtual_machines.begin_create_or_update and
WebSiteManagementClient.web_apps.create_or_update ) return an object of type AzureOperationPoller[<type>]
where <type> is specific to the operation in question.
Both of these methods are asynchronous. The difference in the method names is due to version differences.
Older libraries that aren't based on azure.core typically use names like create_or_update . Libraries based on
azure.core add the begin_ prefix to method names to better indicate that they are asynchronous. Migrating old
code to a newer azure.core-based library typically means adding the begin_ prefix to method names, as most
method signatures remain the same.
In either case, an AzureOperationPoller return type definitely means that the operation is asynchronous.
Accordingly, you must call that poller's result method to wait for the operation to finish and obtain its result.
The following code, taken from Example: Provision and deploy a web app, shows an example of using the poller
to wait for a result:
poller = app_service_client.web_apps.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    WEB_APP_NAME,
    {
        "location": LOCATION,
        "server_farm_id": plan_result.id,
        "site_config": {
            "linux_fx_version": "python|3.8"
        }
    }
)

web_app_result = poller.result()

In this case, the return value of begin_create_or_update is of type AzureOperationPoller[Site] , which means that
the return value of poller.result() is a Site object.
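The poller pattern can be sketched with the standard library alone. The OperationPoller class and begin_create_resource function below are hypothetical stand-ins for the SDK types, assuming only that begin_* methods return an object whose result() method blocks until the operation completes:

```python
# Illustrative stdlib-only poller mimicking the long-running-operation
# pattern; this is NOT the SDK's actual implementation.
import threading
import time

class OperationPoller:
    """Runs a function on a background thread; result() waits for completion."""
    def __init__(self, fn):
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(fn,))
        self._thread.start()

    def _run(self, fn):
        self._result = fn()

    def result(self):
        self._thread.join()  # block until the "operation" finishes
        return self._result

def begin_create_resource(name):
    """Hypothetical stand-in for an SDK begin_create_or_update call."""
    def work():
        time.sleep(0.1)  # simulate the service doing work asynchronously
        return {"name": name, "status": "Succeeded"}
    return OperationPoller(work)
```

Calling begin_create_resource returns immediately; only the later call to result() blocks, which is the same contract the SDK's pollers follow.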

Exceptions
In general, the Azure libraries raise exceptions when operations fail to perform as intended, including failed
HTTP requests to the Azure REST API. For app code, then, you can use try...except blocks around library
operations.
For more information on the type of exceptions that may be raised, see the documentation for the operation in
question.
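As a sketch of the try...except pattern, the example below wraps a failing operation. The real SDK raises exception types such as azure.core.exceptions.HttpResponseError; a stand-in class and a hypothetical create_resource_group function are defined here so the sketch runs without the SDK installed:

```python
# Stand-in for azure.core.exceptions.HttpResponseError (illustrative only).
class HttpResponseError(Exception):
    def __init__(self, status_code, message):
        super().__init__(message)
        self.status_code = status_code

def create_resource_group(name):
    # Hypothetical stand-in for resource_client.resource_groups.create_or_update(...)
    if not name:
        raise HttpResponseError(400, "Resource group name is required")
    return {"name": name}

# Wrap the library operation in try...except, as the text suggests.
try:
    create_resource_group("")
except HttpResponseError as err:
    message = f"Operation failed ({err.status_code}): {err}"
```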

Logging
The most recent Azure libraries use the Python standard logging library to generate log output. You can set the
logging level for individual libraries, groups of libraries, or all libraries. Once you register a logging stream
handler, you can then enable logging for a specific client object or a specific operation. For more information,
see Logging in the Azure libraries.
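As a sketch, a stream handler can be registered on the "azure" logger namespace, the parent logger under which the Azure libraries emit records; the level, handler, and format below are illustrative choices, not requirements:

```python
import logging
import sys

# Set the level and register a stream handler for the "azure" logger,
# under which the Azure libraries emit their log records.
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(stream=sys.stdout)
handler.setFormatter(logging.Formatter("%(name)s [%(levelname)s]: %(message)s"))
logger.addHandler(handler)

# Any library logging under the "azure" namespace now writes to stdout, e.g.:
logging.getLogger("azure.identity").debug("credential chain evaluated")
```

Using a child logger name such as "azure.mgmt.resource" instead of "azure" scopes the output to a single library.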

Proxy configuration
To specify a proxy, you can use environment variables or optional arguments. For more information, see How to
configure proxies.
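For instance, a proxy can be specified through the standard environment variables before any client object is created; the proxy URL below is a placeholder, not a real server:

```python
import os

# Placeholder proxy address; replace with your proxy server's URL.
os.environ["HTTP_PROXY"] = "http://10.10.1.10:1180"
os.environ["HTTPS_PROXY"] = "http://10.10.1.10:1180"

# azure.core-based clients also accept an optional `proxies` dict argument,
# which takes precedence over the environment variables.
proxies = {"http": os.environ["HTTP_PROXY"], "https": os.environ["HTTPS_PROXY"]}
```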

Optional arguments for client objects and methods


In the library reference documentation, you often see a **kwargs or **operation_config argument in the
signature of a client object constructor or a specific operation method. These placeholders indicate that the
object or method in question may support additional named arguments. Typically, the reference documentation
indicates the specific arguments you can use. There are also some general arguments that are often supported
as described in the following sections.
Arguments for libraries based on azure.core
These arguments apply to those libraries listed on Python - New Libraries.

NAME | TYPE | DEFAULT | DESCRIPTION
logging_enable | bool | False | Enables logging. For more information, see Logging in the Azure libraries.
proxies | dict | {} | Proxy server URLs. For more information, see How to configure proxies.
use_env_settings | bool | True | If True, allows use of HTTP_PROXY and HTTPS_PROXY environment variables for proxies. If False, the environment variables are ignored. For more information, see How to configure proxies.
connection_timeout | int | 300 | The timeout in seconds for making a connection to Azure REST API endpoints.
read_timeout | int | 300 | The timeout in seconds for completing an Azure REST API operation (that is, waiting for a response).
retry_total | int | 10 | The number of allowable retry attempts for REST API calls. Use retry_total=0 to disable retries.
retry_mode | enum | exponential | Applies retry timing in a linear or exponential manner. If 'single', retries are made at regular intervals. If 'exponential', each retry waits twice as long as the previous retry.

Individual libraries are not obligated to support any of these arguments, so always consult the reference
documentation for each library for exact details.
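To make the retry_mode distinction concrete, here's a stdlib-only sketch of how the two timing strategies differ; the delay values are illustrative, and the SDK's actual initial delay and jitter are not modeled:

```python
def retry_delays(initial=1.0, retry_total=5, retry_mode="exponential"):
    """Illustrative delay schedule: fixed interval vs. doubling backoff."""
    delays = []
    delay = initial
    for _ in range(retry_total):
        delays.append(delay)
        if retry_mode == "exponential":
            delay *= 2  # each retry waits twice as long as the previous one
    return delays

# retry_delays(1.0, 4, "exponential") -> [1.0, 2.0, 4.0, 8.0]
# retry_delays(1.0, 4, "single")      -> [1.0, 1.0, 1.0, 1.0]
```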
Arguments for non-core libraries
NAME | TYPE | DEFAULT | DESCRIPTION
verify | bool | True | Verify the SSL certificate.
cert | str | None | Path to local certificate for client-side verification.
timeout | int | 30 | Timeout for establishing a server connection in seconds.
allow_redirects | bool | False | Enable redirects.
max_redirects | int | 30 | Maximum number of allowed redirects.
proxies | dict | {} | Proxy server URL. For more information, see How to configure proxies.
use_env_proxies | bool | False | Enable reading of proxy settings from local environment variables.
retries | int | 10 | Total number of allowable retry attempts.
enable_http_logger | bool | False | Enable logs of HTTP in debug mode.
Inline JSON pattern for object arguments


Many operations within the Azure libraries allow you to express object arguments either as discrete objects or
as inline JSON.
For example, suppose you have a ResourceManagementClient object through which you create a resource group
with its create_or_update method. The second argument to this method is of type ResourceGroup .
To call create_or_update you can create a discrete instance of ResourceGroup directly with its required
arguments ( location in this case):

rg_result = resource_client.resource_groups.create_or_update(
"PythonSDKExample-rg",
ResourceGroup(location="centralus")
)

Alternately, you can pass the same parameters as inline JSON:

rg_result = resource_client.resource_groups.create_or_update(
"PythonAzureExample-rg",
{
"location": "centralus"
}
)

When using JSON, the Azure libraries automatically convert the inline JSON to the appropriate object type for
the argument in question.
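The dict-or-object flexibility can be sketched with plain dataclasses; the ResourceGroup class and to_model helper below are hypothetical stand-ins for illustration, not SDK types:

```python
from dataclasses import dataclass

@dataclass
class ResourceGroup:
    location: str

def to_model(cls, value):
    """Accept either an instance of cls or an inline JSON-style dict."""
    if isinstance(value, cls):
        return value
    if isinstance(value, dict):
        return cls(**value)  # convert the dict to the expected model type
    raise TypeError(f"Expected {cls.__name__} or dict, got {type(value).__name__}")

# Both calls produce equivalent model objects:
a = to_model(ResourceGroup, ResourceGroup(location="centralus"))
b = to_model(ResourceGroup, {"location": "centralus"})
```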
Objects can also have nested object arguments, in which case you can also use nested JSON.
For example, suppose you have an instance of the KeyVaultManagementClient object, and are calling its
create_or_update method. In this case, the third argument is of type VaultCreateOrUpdateParameters , which itself
contains an argument of type VaultProperties . VaultProperties , in turn, contains object arguments of type
Sku and list[AccessPolicyEntry] . A Sku contains a SkuName object, and each AccessPolicyEntry contains a
Permissions object.

To call begin_create_or_update with embedded objects, you use code like the following (assuming tenant_id
and object_id are already defined). You can also create the necessary objects before the function call.
# Provision a Key Vault using inline parameters
poller = keyvault_client.vaults.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    KEY_VAULT_NAME_A,
    VaultCreateOrUpdateParameters(
        location = "centralus",
        properties = VaultProperties(
            tenant_id = tenant_id,
            sku = Sku(
                name="standard",
                family="A"
            ),
            access_policies = [
                AccessPolicyEntry(
                    tenant_id = tenant_id,
                    object_id = object_id,
                    permissions = Permissions(
                        keys = ['all'],
                        secrets = ['all']
                    )
                )
            ]
        )
    )
)

key_vault1 = poller.result()

The same call using inline JSON appears as follows:

# Provision a Key Vault using inline JSON
poller = keyvault_client.vaults.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    KEY_VAULT_NAME_B,
    {
        'location': 'centralus',
        'properties': {
            'sku': {
                'name': 'standard',
                'family': 'A'
            },
            'tenant_id': tenant_id,
            'access_policies': [{
                'tenant_id': tenant_id,
                'object_id': object_id,
                'permissions': {
                    'keys': ['all'],
                    'secrets': ['all']
                }
            }]
        }
    }
)

key_vault2 = poller.result()

Because both forms are equivalent, you can choose whichever you prefer and even intermix them. (The full code
for these examples can be found on GitHub.)
If your JSON isn't formed properly, you typically get the error, "DeserializationError: Unable to deserialize to
object: type, AttributeError: 'str' object has no attribute 'get'". A common cause of this error is that you're
providing a single string for a property when the library expects a nested JSON object. For example, using
"sku": "standard" in the previous example generates this error because the sku parameter is a Sku object
that expects inline object JSON, in this case { "name": "standard"} , which maps to the expected SkuName type.

Next steps
Now that you understand the common patterns for using the Azure libraries for Python, see the following
standalone examples to explore specific management and client library scenarios. You can try these examples in
any order as they are neither sequential nor interdependent.
Example: Create a resource group
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Authenticate Python apps to Azure services by
using the Azure SDK for Python
10/28/2022 • 7 minutes to read

When an application needs to access an Azure resource like Azure Storage, Azure Key Vault, or Azure Cognitive
Services, the application must be authenticated to Azure. This requirement is true for all applications, whether
they're deployed to Azure, deployed on-premises, or under development on a local developer workstation. This
article describes the recommended approaches to authenticate an app to Azure when you use the Azure SDK for
Python.

Recommended app authentication approach


Use token-based authentication rather than connection strings for your apps when they authenticate to Azure
resources. The Azure SDK for Python provides classes that support token-based authentication. Apps can
seamlessly authenticate to Azure resources whether the app is in local development, deployed to Azure, or
deployed to an on-premises server.
The specific type of token-based authentication an app uses to authenticate to Azure resources depends on
where the app is being run. The types of token-based authentication are shown in the following diagram.

When a developer is running an app during local development: The app authenticates to Azure by
using either an application service principal for local development or the developer's Azure credentials. These
options are discussed in the section Authentication during local development.
When an app is hosted on Azure: The app authenticates to Azure resources by using a managed identity.
This option is discussed in the section Authentication in server environments.
When an app is hosted and deployed on-premises: The app authenticates to Azure resources by using
an application service principal. This option is discussed in the section Authentication in server environments.
DefaultAzureCredential
The DefaultAzureCredential class provided by the Azure SDK allows apps to use different authentication
methods depending on the environment in which they're run. In this way, apps can be promoted from local
development to test environments to production without code changes.
You configure the appropriate authentication method for each environment, and DefaultAzureCredential
automatically detects and uses that authentication method. The use of DefaultAzureCredential is preferred over
manually coding conditional logic or feature flags to use different authentication methods in different
environments.
Details about using the DefaultAzureCredential class are discussed in the section Use DefaultAzureCredential in
an application.
Advantages of token-based authentication
Use token-based authentication instead of using connection strings when you build apps for Azure. Token-based
authentication offers the following advantages over authenticating with connection strings:
The token-based authentication methods described in this article allow you to establish the specific
permissions needed by the app on the Azure resource. This practice follows the principle of least privilege. In
contrast, a connection string grants full rights to the Azure resource.
Anyone or any app with a connection string can connect to an Azure resource, but token-based
authentication methods scope access to the resource to only the apps intended to access the resource.
With a managed identity, there's no application secret to store. The app is more secure because there's no
connection string or application secret that can be compromised.
The azure.identity package in the Azure SDK manages tokens for you behind the scenes, making token-based
authentication as easy to use as a connection string.
Limit the use of connection strings to initial proof-of-concept apps or development prototypes that don't access
production or sensitive data. Otherwise, the token-based authentication classes available in the Azure SDK are
always preferred when they're authenticating to Azure resources.

Authentication in server environments


When you're hosting in a server environment, each application is assigned a unique application identity per
environment where the application runs. In Azure, an app identity is represented by a service principal. This
special type of security principal identifies and authenticates apps to Azure. The type of service principal to use
for your app depends on where your app is running:

AUTHENTICATION METHOD | DESCRIPTION
Apps hosted in Azure | Apps hosted in Azure should use a managed identity service principal. Managed identities are designed to represent the identity of an app hosted in Azure and can only be used with Azure-hosted apps. For example, a Django web app hosted in Azure App Service would be assigned a managed identity. The managed identity assigned to the app would then be used to authenticate the app to other Azure services. Learn about auth from Azure-hosted apps.
Apps hosted outside of Azure (for example, on-premises apps) | Apps hosted outside of Azure (for example, on-premises apps) that need to connect to Azure services should use an application service principal. An application service principal represents the identity of the app in Azure and is created through the application registration process. For example, consider a Django web app hosted on-premises that makes use of Azure Blob Storage. You would create an application service principal for the app by using the app registration process. The AZURE_CLIENT_ID , AZURE_TENANT_ID , and AZURE_CLIENT_SECRET values would all be stored as environment variables to be read by the application at runtime and allow the app to authenticate to Azure by using the application service principal. Learn about auth from apps hosted outside of Azure.
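As a sketch, the three environment variables mentioned above can be read at app startup like this. The variable names match the ones azure.identity's EnvironmentCredential looks for; the loader function itself is hypothetical:

```python
import os

REQUIRED = ("AZURE_CLIENT_ID", "AZURE_TENANT_ID", "AZURE_CLIENT_SECRET")

def load_service_principal_settings(env=None):
    """Read service principal settings, failing fast if any are missing."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing settings: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED}
```

In practice you wouldn't read these variables yourself; azure.identity does so when it constructs a credential. Failing fast at startup simply surfaces a misconfigured environment early.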

Authentication during local development


When an application runs on a developer's workstation during local development, it still must authenticate to
any Azure services used by the app. There are two main strategies for authenticating apps to Azure during local
development:

AUTHENTICATION METHOD | DESCRIPTION
Create dedicated application service principal objects to be used during local development. | In this method, dedicated application service principal objects are set up by using the app registration process for use during local development. The identity of the service principal is then stored as environment variables to be accessed by the app when it's run in local development. This method allows you to assign the specific resource permissions needed by the app to the service principal objects used by developers during local development. This practice makes sure the application only has access to the specific resources it needs and replicates the permissions the app will have in production. The downside of this approach is the need to create separate service principal objects for each developer who works on an application. Learn about auth from Azure-hosted apps.
Authenticate the app to Azure by using the developer's credentials during local development. | In this method, a developer must be signed in to Azure from either the Azure Tools extension for Visual Studio Code, the Azure CLI, or Azure PowerShell on their local workstation. The application then can access the developer's credentials from the credential store and use those credentials to access Azure resources from the app. This method has the advantage of easier setup because a developer only needs to sign in to their Azure account from Visual Studio Code or the Azure CLI. The disadvantage of this approach is that the developer's account likely has more permissions than required by the application. As a result, the application doesn't accurately replicate the permissions it will run with in production. Learn about auth from Azure-hosted apps.

Use DefaultAzureCredential in an application


To use DefaultAzureCredential in a Python app, add the azure.identity package to your application.

pip install azure-identity

The following code example shows how to instantiate a DefaultAzureCredential object and use it with an Azure
SDK client class. In this case, it's a BlobServiceClient object used to access Azure Blob Storage.

from azure.identity import DefaultAzureCredential


from azure.storage.blob import BlobServiceClient

# Acquire a credential object


credential = DefaultAzureCredential()

blob_service_client = BlobServiceClient(
account_url="https://<my_account_name>.blob.core.windows.net",
credential=credential)

The DefaultAzureCredential object automatically detects the authentication mechanism configured for the app
and obtains the necessary tokens to authenticate the app to Azure. If an application makes use of more than one
SDK client, you can use the same credential object with each SDK client object.
Sequence of authentication methods when you use DefaultAzureCredential
Internally, DefaultAzureCredential implements a chain of credential providers for authenticating applications to
Azure resources. Each credential provider can detect if credentials of that type are configured for the app. The
DefaultAzureCredential object sequentially checks each provider in order and uses the credentials from the first
provider that has credentials configured.
The order in which DefaultAzureCredential looks for credentials is shown in the following diagram and table:
C REDEN T IA L T Y P E DESC RIP T IO N

Application service principal The DefaultAzureCredential object reads a set of


environment variables to determine if an application service
principal (application user) was set for the app. If so,
DefaultAzureCredential uses these values to
authenticate the app to Azure.

This method is most often used in server environments, but


you can also use it when you develop locally.

Managed identity If the application is deployed to an Azure host with managed


identity enabled, DefaultAzureCredential authenticates
the app to Azure by using that managed identity.
Authentication by using a managed identity is discussed in
the section Authentication in server environments.

This method is only available when an application is hosted


in Azure by using a service like Azure App Service, Azure
Functions, or Azure Virtual Machines.

Visual Studio Code If you've authenticated to Azure by using the Visual Studio
Code Azure account plug-in, DefaultAzureCredential
authenticates the app to Azure by using that same account.

Azure CLI If you've authenticated to Azure by using the az login


command in the Azure CLI, DefaultAzureCredential
authenticates the app to Azure by using that same account.

Azure PowerShell If you've authenticated to Azure by using the


Connect-AzAccount cmdlet from Azure PowerShell,
DefaultAzureCredential authenticates the app to Azure
by using that same account.

Interactive If enabled, DefaultAzureCredential interactively


authenticates you via the current system's default browser.
By default, this option is disabled.
Authenticate Python apps to Azure services during
local development using service principals
10/28/2022 • 11 minutes to read • Edit Online

When creating cloud applications, developers need to debug and test applications on their local workstation.
When an application is run on a developer's workstation during local development, it still must authenticate to
any Azure services used by the app. This article covers how to set up dedicated application service principal
objects to be used during local development.

Dedicated application service principals for local development allow you to follow the principle of least privilege
during app development. Since permissions are scoped to exactly what is needed for the app during
development, app code is prevented from accidentally accessing an Azure resource intended for use by a
different app. This also prevents bugs from occurring when the app is moved to production because the app
was overprivileged in the dev environment.
An application service principal is set up for the app when the app is registered in Azure. When registering apps
for local development, it's recommended to:
Create separate app registrations for each developer working on the app. This will create separate application
service principals for each developer to use during local development and avoid the need for developers to
share credentials for a single application service principal.
Create separate app registrations per app. This scopes the app's permissions to only what is needed by the
app.
During local development, environment variables are set with the application service principal's identity. The
Azure SDK for Python reads these environment variables and uses this information to authenticate the app to
the Azure resources it needs.

1 - Register the application in Azure


Application service principal objects are created with an app registration in Azure. This can be done using either
the Azure portal or Azure CLI.
Azure portal
Azure CLI

Sign in to the Azure portal and follow these steps.


IN ST RUC T IO N S SC REEN SH OT

In the Azure portal:


1. Enter app registrations in the search bar at the top of
the Azure portal.
2. Select the item labeled App registrations under the
Ser vices heading on the menu that appears below
the search bar.

On the App registrations page, select + New


registration .

On the Register an application page, fill out the form as


follows.
1. Name → Enter a name for the app registration in
Azure. It is recommended this name include the app
name, the user the app registration is for, and an
identifier like 'dev' to indicate this app registration is
for use in local development.
2. Suppor ted account types → Accounts in this
organizational directory only.
Select Register to register your app and create the
application service principal.

On the App registration page for your app:


1. Application (client) ID → This is the app id the app
will use to access Azure during local development.
Copy this value to a temporary location in a text
editor as you will need it in a future step.
2. Director y (tenant) id → This value will also be
needed by your app when it authenticates to Azure.
Copy this value to a temporary location in a text
editor it will also be needed it in a future step.
3. Client credentials → You must set the client
credentials for the app before your app can
authenticate to Azure and use Azure services. Select
Add a certificate or secret to add credentials for your
app.

On the Certificates & secrets page, select + New client


secret .
IN ST RUC T IO N S SC REEN SH OT

The Add a client secret dialog will pop out from the right-
hand side of the page. In this dialog:
1. Description → Enter a value of Current.
2. Expires → Select a value of 24 months.
Select Add to add the secret.

On the Certificates & secrets page, you will be shown the


value of the client secret.

Copy this value to a temporary location in a text editor as


you will need it in a future step.

IMPORTANT: This is the only time you will see this


value. Once you leave or refresh this page, you will not be
able to see this value again. You may add an additional client
secret without invalidating this client secret, but you will not
see this value again.

2 - Create an Azure AD security group for local development


Since there typically multiple developers who work on an application, it's recommended to create an Azure AD
group to encapsulate the roles (permissions) the app needs in local development rather than assigning the roles
to individual service principal objects. This offers the following advantages.
Every developer is assured to have the same roles assigned since roles are assigned at the group level.
If a new role is needed for the app, it only needs to be added to the Azure AD group for the app.
If a new developer joins the team, a new application service principal is created for the developer and added
to the group, assuring the developer has the right permissions to work on the app.

Azure portal
Azure CLI

IN ST RUC T IO N S SC REEN SH OT

Navigate to the Azure Active Directory page in the Azure


portal by typing Azure Active Directory into the search box
at the top of the page and then selecting Azure Active
Directory from under services.
IN ST RUC T IO N S SC REEN SH OT

On the Azure Active Directory page, select Groups from the


left-hand menu.

On the All groups page, select New group .

On the New Group page:


1. Group type → Security
2. Group name → A name for the security group,
typically created from the application name. It is also
helpful to include a string like local-dev in the name
of the group to indicate the purpose of the group.
3. Group description → A description of the purpose
of the group.
4. Select the No members selected link under
Members to add members to the group.

On the Add members dialog box:


1. Use the search box to filter the list of principal names
in the list.
2. Select the application service principals for local
development for this app. As objects are selected,
they will be greyed out and move to the Selected
items list at the bottom of the dialog.
3. When finished, select the Select button.
IN ST RUC T IO N S SC REEN SH OT

Back on the New group page, select Create to create the


group.

The group will be created and you will be taken back to the
All groups page. It may take up to 30 seconds for the
group to appear and you may need to refresh the page due
to caching in the Azure portal.

3 - Assign roles to the application


Next, you need to determine what roles (permissions) your app needs on what resources and assign those roles
to your app. In this example, the roles will be assigned to the Azure Active Directory group created in step 2.
Roles can be assigned a role at a resource, resource group, or subscription scope. This example will show how to
assign roles at the resource group scope since most applications group all their Azure resources into a single
resource group.

Azure portal
Azure CLI

IN ST RUC T IO N S SC REEN SH OT

Locate the resource group for your application by searching


for the resource group name using the search box at the top
of the Azure portal.

Navigate to your resource group by selecting the resource


group name under the Resource Groups heading in the
dialog box.

On the page for the resource group, select Access control


(IAM) from the left-hand menu.

On the Access control (IAM) page:


1. Select the Role assignments tab.
2. Select + Add from the top menu and then Add role
assignment from the resulting drop-down menu.
IN ST RUC T IO N S SC REEN SH OT

The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.

The next Add role assignment page allows you to specify


what user to assign the role to.
1. Select User, group, or service principal under Assign
access to.
2. Select + Select members under Members
A dialog box will open on the right-hand side of the Azure
portal.

In the Select members dialog:


1. The Select text box can be used to filter the list of
users and groups in your subscription. If needed,
type the first few characters of the local development
Azure AD group you created for the app.
2. Select the local development Azure AD group
associated with your application.
Select Select at the bottom of the dialog to continue.

The Azure AD group will now show as selected on the Add


role assignment screen.

Select Review + assign to go to the final page and then


Review + assign again to complete the process.

4 - Set local development environment variables


The DefaultAzureCredential object will look for the service principal information in a set of environment
variables at runtime. Since most developers work on multiple applications, it's recommended to use a package
like python-dotenv to access environment from a .env file stored in the application's directory during
development. This scopes the environment variables used to authenticate the application to Azure such that they
can only be used by this application.
The .env file is never checked into source control since it contains the application secret key for Azure. The
standard .gitignore file for Python automatically excludes the .env file from check-in.
To use the python-dotenv package, first install the package in your application.

pip install python-dotenv

Then, create a .env file in your application root directory. Set the environment variable values with values
obtained from the app registration process as follows:
AZURE_CLIENT_ID → The app ID value.
AZURE_TENANT_ID → The tenant ID value.
AZURE_CLIENT_SECRET → The password/credential generated for the app.

AZURE_CLIENT_ID=00000000-0000-0000-0000-000000000000
AZURE_TENANT_ID=11111111-1111-1111-1111-111111111111
AZURE_CLIENT_SECRET=abcdefghijklmnopqrstuvwxyz

Finally, in the startup code for your application, use the python-dotenv library to read the environment variables
from the .env file on startup.

from dotenv import load_dotenv

if ( os.environ['ENVIRONMENT'] == 'development'):
print("Loading environment variables from .env file")
load_dotenv(".env")

5 - Implement DefaultAzureCredential in your application


To authenticate Azure SDK client objects to Azure, your application should use the DefaultAzureCredential class
from the azure.identity package. In this scenario, DefaultAzureCredential will detect the environment
variables AZURE_CLIENT_ID , AZURE_TENANT_ID , and AZURE_CLIENT_SECRET are set and read those variables to get
the application service principal information to connect to Azure with.
Start by adding the azure.identity package to your application.

pip install azure-identity

Next, for any Python code that creates an Azure SDK client object in your app, you'll want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.

An example of this is shown in the following code segment.


from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Acquire a credential object


token_credential = DefaultAzureCredential()

blob_service_client = BlobServiceClient(
account_url="https://<my_account_name>.blob.core.windows.net",
credential=token_credential)
Authenticate Python apps to Azure services during
local development using developer accounts
10/28/2022 • 8 minutes to read • Edit Online

When creating cloud applications, developers need to debug and test applications on their local workstation.
When an application is run on a developer's workstation during local development, it still must authenticate to
any Azure services used by the app. This article covers how to use a developer's Azure credentials to
authenticate the app to Azure during local development.

For an app to authenticate to Azure during local development using the developer's Azure credentials, the
developer must be signed-in to Azure from the VS Code Azure Tools extension, the Azure CLI, or Azure
PowerShell. The Azure SDK for Python is able to detect that the developer is signed-in from one of these tools
and then obtain the necessary credentials from the credentials cache to authenticate the app to Azure as the
signed-in user.
This approach is easiest to set up for a development team since it takes advantage of the developers' existing
Azure accounts. However, a developer's account will likely have more permissions than required by the
application, therefore exceeding the permissions the app will run with in production. As an alternative, you can
create application service principals to use during local development which can be scoped to have only the
access needed by the app.

1 - Create Azure AD group for local development


Since there are almost always multiple developers who work on an application, it's recommended to first create
an Azure AD group to encapsulate the roles (permissions) the app needs in local development. This offers the
following advantages.
Every developer is assured to have the same roles assigned since roles are assigned at the group level.
If a new role is needed for the app, it only needs to be added to the Azure AD group for the app.
If a new developer joins the team, they simply must be added to the correct Azure AD group to get the
correct permissions to work on the app.
If you have an existing Azure AD group for your development team, you can use that group. Otherwise,
complete the following steps to create an Azure AD group.
Azure portal
Azure CLI
IN ST RUC T IO N S SC REEN SH OT

Navigate to the Azure Active Directory page in the Azure


portal by typing Azure Active Directory into the search box
at the top of the page and then selecting Azure Active
Directory from under services.

On the Azure Active Directory page, select Groups from the


left-hand menu.

On the All groups page, select New group .

On the New Group page:


1. Group type → Security
2. Group name → A name for the security group,
typically created from the application name. It is also
helpful to include a string like local-dev in the name
of the group to indicate the purpose of the group.
3. Group description → A description of the purpose
of the group.
4. Select the No members selected link under
Members to add members to the group.
IN ST RUC T IO N S SC REEN SH OT

On the Add members dialog box:


1. Use the search box to filter the list of user names in
the list.
2. Select the user(s) for local development for this app.
As objects are selected, they will move to the
Selected items list at the bottom of the dialog.
3. When finished, select the Select button.

Back on the New group page, select Create to create the


group.

The group will be created and you will be taken back to the
All groups page. It may take up to 30 seconds for the
group to appear and you may need to refresh the page due
to caching in the Azure portal.

2 - Assign roles to the Azure AD group


Next, you need to determine what roles (permissions) your app needs on what resources and assign those roles
to your app. In this example, the roles will be assigned to the Azure Active Directory group created in step 1.
Roles can be assigned a role at a resource, resource group, or subscription scope. This example will show how to
assign roles at the resource group scope since most applications group all their Azure resources into a single
resource group.

Azure portal
Azure CLI

IN ST RUC T IO N S SC REEN SH OT

Locate the resource group for your application by searching


for the resource group name using the search box at the top
of the Azure portal.

Navigate to your resource group by selecting the resource


group name under the Resource Groups heading in the
dialog box.
IN ST RUC T IO N S SC REEN SH OT

On the page for the resource group, select Access control


(IAM) from the left-hand menu.

On the Access control (IAM) page:


1. Select the Role assignments tab.
2. Select + Add from the top menu and then Add role
assignment from the resulting drop-down menu.

The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.

The next Add role assignment page allows you to specify


what user to assign the role to.
1. Select User, group, or service principal under Assign
access to.
2. Select + Select members under Members
A dialog box will open on the right-hand side of the Azure
portal.

In the Select members dialog:


1. The Select text box can be used to filter the list of
users and groups in your subscription. If needed,
type the first few characters of the local development
Azure AD group you created for the app.
2. Select the local development Azure AD group
associated with your application.
Select Select at the bottom of the dialog to continue.
IN ST RUC T IO N S SC REEN SH OT

The Azure AD group will now show as selected on the Add


role assignment screen.

Select Review + assign to go to the final page and then


Review + assign again to complete the process.

3 - Sign-in to Azure using VS Code, the Azure CLI, or Azure


PowerShell
VS Code Azure Tools extension
Azure CLI
Azure PowerShell

For an app to use the developer credentials from VS Code, the VS Code Azure Tools extension must be installed
in VS Code.
Install the Azure Tools extensions for VS Code
On the left-hand panel, you'll see an Azure icon. Select this icon, and a control panel for Azure services will
appear. Choose Sign in to Azure... under any service to complete the authentication process for the Azure
tools in Visual Studio Code.

4 - Implement DefaultAzureCredential in your application


To authenticate Azure SDK client objects to Azure, your application should use the DefaultAzureCredential class
from the azure.identity package. In this scenario, DefaultAzureCredential will sequentially check to see if the
developer has signed-in to Azure using the VS Code Azure tools extension, the Azure CLI, or Azure PowerShell. If
the developer is signed-in to Azure using any of these tools, then the credentials used to sign into the tool will
be used by the app to authenticate to Azure with.
Start by adding the azure.identity package to your application.

pip install azure-identity

Next, for any Python code that creates an Azure SDK client object in your app, you'll want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.

An example of this is shown in the following code segment.

from azure.identity import DefaultAzureCredential


from azure.storage.blob import BlobServiceClient

# Acquire a credential object


token_credential = DefaultAzureCredential()

blob_service_client = BlobServiceClient(
account_url="https://<my_account_name>.blob.core.windows.net",
credential=token_credential)
Authenticating Azure-hosted apps to Azure
resources with the Azure SDK for Python
10/28/2022 • 7 minutes to read • Edit Online

When an app is hosted in Azure using a service like Azure App Service, Azure Virtual Machines, or Azure
Container Instances, the recommended approach to authenticating an app to Azure resources is to use a
managed identity.
A managed identity provides an identity for your app such that it can connect to other Azure resources without
the need to use a secret key or other application secret. Internally, Azure knows the identity of your app and
what resources it's allowed to connect to. Azure uses this information to automatically obtain Azure AD tokens
for the app to allow it to connect to other Azure resources, all without you having to manage any application
secrets.

Managed identity types


There are two types of managed identities:
System-assigned managed identities - This type of managed identity is provided by and tied directly to
an Azure resource. When you enable managed identity on an Azure resource, you get a system-assigned
managed identity for that resource. A system-assigned managed identity is tied to the lifecycle of the Azure
resource it's associated with. When the resource is deleted, Azure automatically deletes the identity for you.
Since all you have to do is enable managed identity for the Azure resource hosting your code, this is the
easiest type of managed identity to use.
User-assigned managed identities - You may also create a managed identity as a standalone Azure
resource. This is most frequently used when your solution has multiple workloads that run on multiple Azure
resources that all need to share the same identity and same permissions. For example, if your solution had
components that ran on multiple App Service and virtual machine instances that all needed access to the
same set of Azure resources, creating and using a user-assigned managed identity across those resources
would make sense.
This article will cover the steps to enable and use a system-assigned managed identity for an app. If you need to
use a user-assigned managed identity, see the article Manage user-assigned managed identities to see how to
create a user-assigned managed identity.

1 - Enable managed identity in the Azure resource hosting the app


The first step is to enable managed identity on Azure resource hosting your app. For example, if you're hosting a
Django application using Azure App Service, you need to enable managed identity for the App Service web app
that is hosting your app. If you were using a virtual machine to host your app, you would enable your VM to use
managed identity.
You can enable managed identity to be used for an Azure resource using either the Azure portal or the Azure
CLI.
Azure portal
Azure CLI
IN ST RUC T IO N S SC REEN SH OT

Navigate to the resource that hosts your application code in


the Azure portal.

For example, you can type the name of your resource in the
search box at the top of the page and navigate to it by
selecting it in the dialog box.

On the page for your resource, select the Identity menu item
from the left-hand menu.

All Azure resources capable of supporting managed identity


will have an Identity menu item even though the layout of
the menu may vary slightly.

On the Identity page:


1. Change the Status slider to On.
2. Click Save.
A confirmation dialog will verify you want to enable
managed identity for your service. Answer Yes and managed
identity will be enabled for the Azure resource.

2 - Assign roles to the managed identity


Next, you need to determine what roles (permissions) your app needs and assign the managed identity to those
roles in Azure. A managed identity can be assigned roles at a resource, resource group, or subscription scope.
This example will show how to assign roles at the resource group scope since most applications group all their
Azure resources into a single resource group.
Azure portal
Azure CLI

IN ST RUC T IO N S SC REEN SH OT

Locate the resource group for your application by searching


for the resource group name using the search box at the top
of the Azure portal.

Navigate to your resource group by selecting the resource


group name under the Resource Groups heading in the
dialog box.
IN ST RUC T IO N S SC REEN SH OT

On the page for the resource group, select Access control


(IAM) from the left-hand menu.

On the Access control (IAM) page:


1. Select the Role assignments tab.
2. Select + Add from the top menu and then Add role
assignment from the resulting drop-down menu.

The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.

The next Add role assignment page allows you to specify


what user to assign the role to.
1. Select Managed identity under Assign access to.
2. Select + Select members under Members
A dialog box will open on the right-hand side of the Azure
portal.

In the Select managed identities dialog:


1. The Managed identity dropdown and Select text box
can be used to filter the list of managed identities in
your subscription. In this example by selecting App
Service, only managed identities associated with an
App Service are displayed.
2. Select the managed identity for the Azure resource
hosting your application.
Select Select at the bottom of the dialog to continue.
IN ST RUC T IO N S SC REEN SH OT

The managed identity will now show as selected on the Add


role assignment screen.

Select Review + assign to go to the final page and then


Review + assign again to complete the process.

3 - Implement DefaultAzureCredential in your application


The DefaultAzureCredential class will automatically detect that a managed identity is being used and use the
managed identity to authenticate to other Azure resources. As discussed in the Azure SDK for Python
authentication overview article, DefaultAzureCredential supports multiple authentication methods and
determines the authentication method being used at runtime. In this way, your app can use different
authentication methods in different environments without implementing environment specific code.
First, add the azure.identity package to your application.

pip install azure-identity

Next, for any Python code that creates an Azure SDK client object in your app, you'll want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.

An example of this is shown in the following code segment.

from azure.identity import DefaultAzureCredential


from azure.storage.blob import BlobServiceClient

# Acquire a credential object


token_credential = DefaultAzureCredential()

blob_service_client = BlobServiceClient(
account_url="https://<my_account_name>.blob.core.windows.net",
credential=token_credential)

When the above code is run on your local workstation during local development, the SDK method,
DefaultAzureCredential(), looks in the environment variables for an application service principal or at VS Code,
the Azure CLI, or Azure PowerShell for a set of developer credentials, either of which can be used to authenticate
the app to Azure resources during local development. In this way, this same code can be used to authenticate
your app to Azure resources during both local development and when deployed to Azure.
Authenticate to Azure resources from Python apps
hosted on-premises
10/28/2022 • 7 minutes to read • Edit Online

Apps hosted outside of Azure (for example on-premises or at a third-party data center) should use an
application service principal to authenticate to Azure when accessing Azure resources. Application service
principal objects are created using the app registration process in Azure. When an application service principal is
created, a client ID and client secret will be generated for your app. The client ID, client secret, and your tenant ID
are then stored in environment variables so they can be used by the Azure SDK for Python to authenticate your
app to Azure at runtime.
A different app registration should be created for each environment the app is hosted in. This allows
environment specific resource permissions to be configured for each service principal and make sure an app
deployed to one environment does not talk to Azure resources that are part of another environment.

1 - Register the application in Azure


An app can be registered with Azure using either the Azure portal or the Azure CLI.
Azure portal
Azure CLI

Sign in to the Azure portal and follow these steps.


In the Azure portal:

1. Enter app registrations in the search bar at the top of
the Azure portal.
2. Select the item labeled App registrations under the
Services heading on the menu that appears
below the search bar.

On the App registrations page, select + New registration.

On the Register an application page, fill out the form as follows:
1. Name → Enter a name for the app registration in
Azure. It's recommended that this name include the app
name and the environment (test, prod) the app
registration is for.
2. Supported account types → Accounts in this
organizational directory only.
Select Register to register your app and create the
application service principal.

On the App registration page for your app:

1. Application (client) ID → This is the app ID the app
will use to access Azure. Copy this value to a
temporary location in a text editor, as you'll need it
in a future step.
2. Directory (tenant) ID → This value will also be
needed by your app when it authenticates to Azure.
Copy this value to a temporary location in a text
editor as well, since it will also be needed in a future step.
3. Client credentials → You must set the client
credentials for the app before your app can
authenticate to Azure and use Azure services. Select
Add a certificate or secret to add credentials for your
app.

On the Certificates & secrets page, select + New client secret.

The Add a client secret dialog will pop out from the right-
hand side of the page. In this dialog:
1. Description → Enter a value of Current.
2. Expires → Select a value of 24 months.
Select Add to add the secret.

IMPORTANT: Set a reminder in your calendar prior
to the expiration date of the secret. This way, you can
add a new secret and update your apps before this
secret expires, avoiding a service interruption in
your app.

On the Certificates & secrets page, you will be shown the value of the client secret.

Copy this value to a temporary location in a text editor, as
you will need it in a future step.

IMPORTANT: This is the only time you will see this
value. Once you leave or refresh this page, you will not be
able to see this value again. You may add an additional client
secret without invalidating this one, but you will not
see this value again.
2 - Assign roles to the application service principal
Next, determine what roles (permissions) your app needs on which resources, and assign those roles
to your app. Roles can be assigned at a resource, resource group, or subscription scope. This example
shows how to assign roles for the service principal at the resource group scope, since most applications group all
their Azure resources into a single resource group.

Azure portal
Azure CLI


Locate the resource group for your application by searching
for the resource group name using the search box at the top
of the Azure portal.

Navigate to your resource group by selecting the resource
group name under the Resource Groups heading in the
dialog box.

On the page for the resource group, select Access control
(IAM) from the left-hand menu.

On the Access control (IAM) page:

1. Select the Role assignments tab.
2. Select + Add from the top menu and then Add role
assignment from the resulting drop-down menu.

The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.

The next Add role assignment page allows you to specify
which user to assign the role to.
1. Select User, group, or service principal under Assign
access to.
2. Select + Select members under Members.
A dialog box will open on the right-hand side of the Azure
portal.

In the Select members dialog:


1. The Select text box can be used to filter the list of
users and groups in your subscription. If needed,
type the first few characters of the service principal
you created for the app to filter the list.
2. Select the service principal associated with your
application.
Select Select at the bottom of the dialog to continue.

The service principal will now show as selected on the Add
role assignment screen.

Select Review + assign to go to the final page, and then
Review + assign again to complete the process.

3 - Configure environment variables for application


You must set the AZURE_CLIENT_ID , AZURE_TENANT_ID , and AZURE_CLIENT_SECRET environment variables for the
process that runs your Python app to make the application service principal credentials available to your app at
runtime. The DefaultAzureCredential object looks for the service principal information in these environment
variables.
When using Gunicorn to run Python web apps in a UNIX server environment, environment variables for an app
can be specified by using the EnvironmentFile directive in the gunicorn.service systemd unit file as shown below.

[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=www-user
Group=www-data
WorkingDirectory=/path/to/python-app
EnvironmentFile=/path/to/python-app/py-env/app-environment-variables
ExecStart=/path/to/python-app/py-env/bin/gunicorn --config config.py wsgi:app

[Install]
WantedBy=multi-user.target

The file specified in the EnvironmentFile directive should contain a list of environment variables with their
values as shown below.
AZURE_CLIENT_ID=<value>
AZURE_TENANT_ID=<value>
AZURE_CLIENT_SECRET=<value>
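A missing or empty variable in this file typically surfaces later as an opaque authentication failure, so it can help to verify the set at startup. A minimal sketch (the helper name is hypothetical, not part of the Azure SDK):

```python
import os

# The three variables DefaultAzureCredential reads for a service principal.
REQUIRED_VARS = ("AZURE_CLIENT_ID", "AZURE_TENANT_ID", "AZURE_CLIENT_SECRET")

def missing_azure_vars(environ=os.environ):
    """Return the names of required service principal variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

# Example: fail fast with a clear message instead of a late authentication error.
example_env = {"AZURE_CLIENT_ID": "id", "AZURE_TENANT_ID": "tid", "AZURE_CLIENT_SECRET": ""}
print(missing_azure_vars(example_env))  # ['AZURE_CLIENT_SECRET']
```

Calling this once before the app starts turns a runtime credential error into an immediate, actionable message.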

4 - Implement DefaultAzureCredential in application


To authenticate Azure SDK client objects to Azure, your application should use the DefaultAzureCredential class
from the azure.identity package.
Start by adding the azure.identity package to your application.

pip install azure-identity

Next, for any Python code that creates an Azure SDK client object in your app, you will want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.

An example of this is shown in the following code segment.

from azure.identity import DefaultAzureCredential


from azure.storage.blob import BlobServiceClient

# Acquire a credential object


token_credential = DefaultAzureCredential()

blob_service_client = BlobServiceClient(
account_url="https://<my_account_name>.blob.core.windows.net",
credential=token_credential)

When the above code instantiates the DefaultAzureCredential object, DefaultAzureCredential reads the
environment variables AZURE_TENANT_ID , AZURE_CLIENT_ID , and AZURE_CLIENT_SECRET for
the application service principal information to connect to Azure with.
Additional methods to authenticate to Azure
resources from Python apps
10/28/2022

This article lists additional methods apps may use to authenticate to Azure resources. The methods on this page
are less commonly used; when possible, it's encouraged to use one of the methods outlined in the
authenticating Python apps to Azure using the Azure SDK overview article.

Interactive browser authentication


This method interactively authenticates an application through InteractiveBrowserCredential by collecting user
credentials in the default system browser.
Interactive browser authentication enables the application for all operations allowed by the interactive login
credentials. As a result, if you are the owner or administrator of your subscription, your code has inherent access
to most resources in that subscription without having to assign any specific permissions. For this reason, the use
of interactive browser authentication is discouraged for anything but experimentation.
Enable applications for interactive browser authentication
Perform the following steps to enable the application to authenticate through the interactive browser flow. These
steps also work for device code authentication described later. Following this process is necessary only if using
InteractiveBrowserCredential in your code.

1. On the Azure portal, navigate to Azure Active Directory and select App registrations on the left-hand
menu.
2. Select the registration for your app, then select Authentication .
3. Under Advanced settings , select Yes for Allow public client flows .
4. Select Save to apply the changes.
5. To authorize the application for specific resources, navigate to the resource in question, select API
Permissions , and enable Microsoft Graph and other resources you want to access. Microsoft Graph is
usually enabled by default.
a. You must also be the admin of your tenant to grant consent to your application when you log in for
the first time.
If you can't configure the device code flow option on your Active Directory, your application may need to be
multi-tenant. To make this change, navigate to the Authentication panel, select Accounts in any
organizational directory (under Supported account types), and then select Yes for Allow public client
flows.
Example using InteractiveBrowserCredential
The following example demonstrates using an InteractiveBrowserCredential to authenticate with the
SubscriptionClient :
# Show Azure subscription information

from azure.identity import InteractiveBrowserCredential
from azure.mgmt.resource import SubscriptionClient

credential = InteractiveBrowserCredential()
subscription_client = SubscriptionClient(credential)

subscription = next(subscription_client.subscriptions.list())
print(subscription.subscription_id)

For more exact control, such as setting redirect URIs, you can supply specific arguments to
InteractiveBrowserCredential such as redirect_uri .

Device code authentication


This method interactively authenticates a user on devices with limited UI (typically devices without a keyboard):
1. When the application attempts to authenticate, the credential prompts the user with a URL and an
authentication code.
2. The user visits the URL on a separate browser-enabled device (a computer, smartphone, etc.) and enters the
code.
3. The user follows a normal authentication process in the browser.
4. Upon successful authentication, the application is authenticated on the device.
For more information, see Microsoft identity platform and the OAuth 2.0 device authorization grant flow.
Device code authentication in a development environment enables the application for all operations allowed by
the interactive login credentials. As a result, if you are the owner or administrator of your subscription, your
code has inherent access to most resources in that subscription without having to assign any specific
permissions. However, you can use this method with a specific client ID, rather than the default, for which you
can assign specific permissions.

Authentication with a username and password


This method authenticates an application using previously collected credentials and the
UsernamePasswordCredential object.

This method of authentication is discouraged because it's less secure than other flows. Also, this method is not
interactive and is therefore not compatible with any form of multi-factor authentication or consent
prompting. The application must already have consent from the user or a directory administrator.
Furthermore, this method authenticates only work and school accounts; Microsoft accounts are not supported.
For more information, see Sign up your organization to use Azure Active Directory.
# Show Azure subscription information

import os
from azure.mgmt.resource import SubscriptionClient
from azure.identity import UsernamePasswordCredential

# Retrieve the information necessary for the credentials, which are assumed to
# be in environment variables for the purpose of this example.
client_id = os.environ["AZURE_CLIENT_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]
username = os.environ["AZURE_USERNAME"]
password = os.environ["AZURE_PASSWORD"]

credential = UsernamePasswordCredential(client_id=client_id, tenant_id=tenant_id,
                                        username=username, password=password)

subscription_client = SubscriptionClient(credential)

subscription = next(subscription_client.subscriptions.list())
print(subscription.subscription_id)
Walkthrough: Integrated authentication for Python
apps with Azure services
10/28/2022

Azure Active Directory (Azure AD) along with Azure Key Vault provide a comprehensive and convenient means
for applications to authenticate with Azure services and third-party services where access keys are involved.
After providing some background, this walkthrough explains these authentication features in the context of the
sample, github.com/Azure-Samples/python-integrated-authentication.

Part 1: Background
Although many Azure services rely solely on role-based access control for authorization, certain services control
access to their respective resources by using secrets or keys. Such services include Azure Storage, databases,
Cognitive Services, Key Vault, and Event Hubs.
When creating a cloud app that accesses these services, you can use the Azure portal, the Azure CLI, or Azure
PowerShell to create and configure keys for your app. The keys you create are tied to specific access policies and
prevent access to those app-specific resources by any other unauthorized code.
Within this general design, cloud apps must typically manage those keys and authenticate with each service
individually, a process that can be both tedious and error-prone. Managing keys directly in app code also risks
exposing those keys in source control and keys might be stored on unsecured developer workstations.
Fortunately, Azure provides two specific services to simplify the process and provide greater security:
Azure Key Vault provides secure cloud-based storage for access keys (along with cryptographic keys and
certificates, which aren't covered in this article). By using Key Vault, the app accesses such keys only at
run time so that they never appear directly in source code.
With Azure Active Directory (Azure AD) Managed Identities, the app needs to authenticate only once with
Active Directory. The app is then automatically authenticated with other Azure services, including Key
Vault. As a result, your code never needs to concern itself with keys or other credentials for those Azure
services. Better still, you can run the same code both locally and in the cloud with minimal configuration
requirements.
This walkthrough shows how to use Azure AD managed identity and Key Vault together in the same app. By
using Azure AD and Key Vault together, your app never needs to authenticate itself with individual Azure
services, and can easily and securely access any keys necessary for third-party services.

IMPORTANT
This article uses the common, generic term "key" to refer to what are stored as "secrets" in Azure Key Vault, such as an
access key for a REST API. This usage should not be confused with Key Vault's management of cryptographic keys, which
is a separate feature from Key Vault's secrets.

Example cloud app scenario


To understand Azure's authentication process more deeply, consider the following scenario:
A main app exposes a public (non-authenticated) API endpoint that responds to HTTP requests with JSON
data. The example endpoint as shown in this article is implemented as a simple Flask app deployed to
Azure App Service.
To generate its response, the API invokes a third-party API that requires an access key. The app retrieves
that access key from Azure Key Vault at run time.
Before the API returns a response, it writes a message to an Azure Storage Queue for later processing.
(The specific processing of these messages isn't relevant to the main scenario.)

NOTE
Although a public API endpoint is usually protected by its own access key, for the purposes of this article we assume the
endpoint is open and unauthenticated. This assumption avoids any confusion between the app's authentication needs
with those of an external caller of this endpoint. This scenario doesn't demonstrate such an external caller.

Part 2 - Authentication requirements >>>


Part 2: Authentication needs in the scenario
10/28/2022

Previous part: Introduction and background


Within this example scenario, the main app has the following authentication requirements:
Authenticate with Azure Key Vault to access the stored third-party API key.
Authenticate with the third-party API using the API key.
Authenticate with Azure Queue Storage using the necessary credentials for the storage account.
With these three distinct requirements, the application has to manage three sets of credentials: two for Azure
resources (Key Vault and Queue Storage) and one for an external resource (the third-party API).
As noted earlier, you can securely manage all the credentials in Key Vault except for those credentials needed for
Key Vault itself. Once the application is authenticated with Key Vault, it can then retrieve any other keys at run
time to authenticate with services like Queue Storage.
This approach, however, still requires the app to separately manage credentials for Key Vault. How then can you
manage that credential securely and have it work both for local development and in your production
deployment in the cloud?
A partial solution is to store the key in a server-side environment variable, which at least keeps the key out of
source control. For example, you can set an environment variable through an application setting with Azure App
Service and Azure Functions. The downside of this approach is that, for code on a developer workstation, you must
replicate that environment variable locally, which risks exposure of the credentials and/or accidental inclusion in
source control. You could work around the problem to some extent by implementing special procedures in the
development version of your code, but doing so adds complexity to your development process.
Fortunately, integrated authentication with Azure Active Directory (AD) allows an app to avoid handling any
Azure credentials at all.

Integrated authentication with managed identity


Many Azure services, like Storage and Key Vault, are integrated with Azure Active Directory (Azure AD) such that
when you authenticate the application with Azure AD using a managed identity, it's automatically authenticated
with other connected resources. Authorization for the identity is handled through role-based access control
(RBAC) and occasionally through other access policies.
This integration means that you never need to handle any Azure-related credentials in your app code and those
credentials never appear on developer workstations or in source control. Furthermore, any handling of keys for
third-party APIs and services is done entirely at run time, thus keeping those keys secure.
Managed identity specifically works with apps that are deployed to Azure. For local development, you create a
separate service principal to serve as the app identity when running locally. You make this service principal
available to the Azure libraries using environment variables as described in Authenticate Python apps to Azure
services during local development using service principals. You also assign role permissions to this service
principal alongside the managed identity used in the cloud.
Once you do these steps for the local service principal, the same code works both locally and in the cloud to
authenticate the app with Azure resources. These details are discussed in How to authenticate and authorize
apps, but the short version is as follows:
1. In your code, create a DefaultAzureCredential object that automatically uses your managed identity
when running on Azure and your separate service principal when running locally.
2. Use this credential when you create the appropriate client object for whatever resource you want to
access (Key Vault, Queue Storage, etc.).
3. Authentication then takes place when you call an operation method through the client object, which
generates a REST API call to the resource.
4. If the app identity is valid, then Azure also checks whether that identity is also authorized for the specific
operation.
The remainder of this tutorial demonstrates all the details of the process in the context of the example scenario
and the accompanying sample code.
In the sample's provisioning script, all of the resources are created under a resource group named
auth-scenario-rg . This group is created using the Azure CLI az group create command.

Part 3 - Third-party API implementation >>>


Part 3: Example third-party API implementation
10/28/2022

Previous part: Authentication requirements


In our example scenario, the main app's public endpoint uses a third-party API that's secured by an access key.
This section shows an implementation of the third-party API using Azure Functions, but the API could be
implemented in other ways and deployed to a different cloud server or web host. The only important aspect is
that client requests to the protected endpoint must include the access key. Any app that invokes this API must
securely manage that key.
For demonstration purposes, this API is deployed to the endpoint,
https://msdocs-example-api.azurewebsites.net/api/RandomNumber . To call the API, however, you must provide the
access key d0c5atM1cr0s0ft either in a ?code= URL parameter or in an 'x-functions-key' property of the HTTP
header. For example, try this URL in a browser or curl:
https://msdocs-example-api.azurewebsites.net/api/RandomNumber?code=d0c5atM1cr0s0ft.
If the access key is valid, the endpoint returns a JSON response that contains a single property, "value", the value
of which is a number between 1 and 1000, such as {"value": 959} .
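To make the request/response contract concrete, here's a small client-side sketch. It only builds the request and parses a sample response body; no network call is made, and the helper names are illustrative, not part of any SDK:

```python
import json

API_URL = "https://msdocs-example-api.azurewebsites.net/api/RandomNumber"

def build_request(access_key):
    """Return (url, headers) for the protected endpoint.
    The key can go in the 'x-functions-key' header (shown here) or a ?code= parameter."""
    return API_URL, {"x-functions-key": access_key}

def parse_response(body):
    """The endpoint returns JSON like {"value": 959}; extract the number."""
    return json.loads(body)["value"]

url, headers = build_request("d0c5atM1cr0s0ft")
print(headers)                           # {'x-functions-key': 'd0c5atM1cr0s0ft'}
print(parse_response('{"value": 959}'))  # 959
```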
The endpoint is implemented in Python and deployed to Azure Functions. The code is as follows:

import logging
import random
import json

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('RandomNumber invoked via HTTP trigger.')

    random_value = random.randint(1, 1000)
    result = {"value": random_value}
    return func.HttpResponse(json.dumps(result))

In the sample repository, this code is found under third_party_api/RandomNumber/__init__.py. The folder,
RandomNumber, provides the name of the function and __init__.py contains the code. Another file in that folder,
function.json, describes when the function is triggered. Other files in the third_party_api parent folder provide
details for the Azure Function "app" that hosts the function itself.
To deploy the code, the sample's provisioning script performs the following steps:
1. Create a backing storage account for Azure Functions with the Azure CLI command,
az storage account create .

2. Create an Azure Functions "app" with the Azure CLI command, az functionapp create .
3. After waiting 60 seconds for the host to be fully provisioned, deploy the code using the Azure Functions
Core Tools command, func azure functionapp publish
4. Assign the access key, d0c5atM1cr0s0ft , to the function. (See Securing Azure Functions for a background
on function keys.)
In the provisioning script, this step is accomplished through a REST API call to the Functions Key
Management API because the Azure CLI doesn't presently support this particular feature. To call that REST
API, the provisioning script must first use another REST API call to retrieve the Function app's master key.
You can also assign access keys through the Azure portal. On the page for the Functions app, select
Functions , then select the specific function to secure (which is named RandomNumber in this example). On
the function's page, select Function Keys to open the page where you can create and manage these
keys.
Part 4 - Main app implementation >>>
Part 4: Example main application implementation
10/28/2022

Previous part: Third-party API implementation


The main app in our scenario is a simple Flask app that's deployed to Azure App Service. The app provides a
public API endpoint named /api/v1/getcode, which generates a code for some other purpose in the app (say,
with two-factor authentication for human users). The main app also provides a simple home page that displays
a link to the API endpoint.
The sample's provisioning script performs the following steps:
1. Create the App Service host and deploy the code with the Azure CLI command, az webapp up .
2. Create an Azure Storage account for the main app (using az storage account create ).
3. Create a Queue in the storage account named "code-requests" (using az storage queue create ).
4. To ensure that the app is allowed to write to the queue, use az role assignment create to assign the
"Storage Queue Data Contributor" role to the app. For more information about roles, see How to assign
role permissions using the Azure CLI.
The main app code is as follows; explanations of important details are given in the next parts of this series.

from flask import Flask, request, jsonify
import requests, random, string, os
from datetime import datetime
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient

app = Flask(__name__)
app.config["DEBUG"] = True

number_url = os.environ["THIRD_PARTY_API_ENDPOINT"]

# Authenticate with Azure. First, obtain the DefaultAzureCredential.
credential = DefaultAzureCredential()

# Next, get the client for the Key Vault. You must have first enabled managed identity
# on the App Service for the credential to authenticate with Key Vault.
key_vault_url = os.environ["KEY_VAULT_URL"]
keyvault_client = SecretClient(vault_url=key_vault_url, credential=credential)

# Obtain the secret: for this step to work you must add the app's service principal to
# the key vault's access policies for secret management.
api_secret_name = os.environ["THIRD_PARTY_API_SECRET_NAME"]
vault_secret = keyvault_client.get_secret(api_secret_name)

# The "secret" from Key Vault is an object with multiple properties. The key we
# want for the third-party API is in the value property.
access_key = vault_secret.value

# Set up the Storage queue client to which we write messages.
queue_url = os.environ["STORAGE_QUEUE_URL"]
queue_client = QueueClient.from_queue_url(queue_url=queue_url, credential=credential)

@app.route('/', methods=['GET'])
def home():
    return 'Home page of the main app. Make a request to <a href="./api/v1/getcode">/api/v1/getcode</a>.'

def random_char(num):
    return ''.join(random.choice(string.ascii_letters) for x in range(num))

@app.route('/api/v1/getcode', methods=['GET'])
def get_code():
    headers = {
        'Content-Type': 'application/json',
        'x-functions-key': access_key
    }

    r = requests.get(url=number_url, headers=headers)

    if r.status_code != 200:
        return "Could not get you a code.", r.status_code

    data = r.json()
    chars1 = random_char(3)
    chars2 = random_char(3)
    code_value = f"{chars1}-{data['value']}-{chars2}"
    code = {"code": code_value, "timestamp": str(datetime.utcnow())}

    # Log a queue message with the code for, say, a process that invalidates
    # the code after a certain period of time.
    queue_client.send_message(code)

    return jsonify(code)

if __name__ == '__main__':
    app.run()
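The code format produced by get_code above (three random letters, the third-party API value, three more letters) can be sketched in isolation. This is a standalone re-statement of that snippet for illustration; make_code_value is a hypothetical helper name, not part of the sample:

```python
import random
import string

def random_char(num):
    """Return num random ASCII letters, as in the main app."""
    return ''.join(random.choice(string.ascii_letters) for _ in range(num))

def make_code_value(api_value):
    """Combine the third-party API value with random letter prefixes and suffixes."""
    return f"{random_char(3)}-{api_value}-{random_char(3)}"

code_value = make_code_value(959)
parts = code_value.split('-')
print(parts[1])  # '959'
```

Because the random segments contain only letters, splitting on '-' always yields three parts with the API value in the middle.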

Part 5 - Dependencies and environment variables >>>


Part 5: Main app dependencies, import statements,
and environment variables
10/28/2022

Previous part: Main app implementation


This part examines the Python libraries brought into the main app and the environment variables required by
the code. When deployed to Azure, you use application settings in Azure App Service to provide environment
variables.

Dependencies and import statements


The app code relies on the following libraries: Flask, the standard HTTP requests library, and the Azure
libraries for Active Directory (azure.identity), Key Vault (azure.keyvault.secrets), and Queue Storage
(azure.storage.queue). These libraries are included in the app's requirements.txt file:

flask
requests
azure.identity
azure.keyvault.secrets
azure.storage.queue

When you deploy the app to Azure App Service, Azure automatically installs these requirements on the host
server. When running locally, you install them in your environment with pip install -r requirements.txt .
The code file starts with the required import statements for the parts of the libraries we're using:

from flask import Flask, request, jsonify
import requests, random, string, os
from datetime import datetime
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient

Environment variables
The app code depends on four environment variables:

THIRD_PARTY_API_ENDPOINT: The URL of the third-party API, such as
https://msdocs-example-api.azurewebsites.net/api/RandomNumber described in Part 3.

KEY_VAULT_URL: The URL of the Azure Key Vault in which you've stored the
access key for the third-party API.

THIRD_PARTY_API_SECRET_NAME: The name of the secret in Key Vault that contains the access
key for the third-party API.

STORAGE_QUEUE_URL: The URL of an Azure Storage Queue that's been configured
in Azure, such as
https://msdocsexamplemainapp.queue.core.windows.net/code-requests
(see Part 4). Because the queue name is included at the end
of the URL, you don't see the name anywhere in the code.

How you set these variables depends on where the code is running:
When running the code locally, you create these variables within whatever command shell you're using.
(If you deploy the app to a virtual machine, you would create similar server-side variables.) You can use a
library like python-dotenv, which reads key-value pairs from a .env file and sets them as environment
variables.
When the code is deployed to Azure App Service as is shown in this walkthrough, you don't have access
to the server itself. In this case, you create application settings with the same names, which then appear to
the app as environment variables.
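As an illustration of the local approach, a .env file read by python-dotenv might look like the following (the values are placeholders; the variable names come from the table above):

```
THIRD_PARTY_API_ENDPOINT=<third-party-api-url>
KEY_VAULT_URL=<key-vault-url>
THIRD_PARTY_API_SECRET_NAME=<secret-name>
STORAGE_QUEUE_URL=<storage-queue-url>
```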
The provisioning scripts create these settings using the Azure CLI command, az webapp config appsettings set .
All four variables are set with a single command.
To create settings through the Azure portal, see Configure an App Service app in the Azure portal.
When running the code locally, you also need to specify environment variables that contain information about
your local service principal. DefaultAzureCredential looks for these values. When deployed to App Service, you
do not need to set these values as the managed identity will be used instead to authenticate.

AZURE_TENANT_ID: The Azure Active Directory tenant (directory) ID.

AZURE_CLIENT_ID: The client (application) ID of an App Registration in the tenant.

AZURE_CLIENT_SECRET: A client secret that was generated for the App Registration.

For more information, see Authenticate Python apps to Azure services during local development using service
principals.
Part 6 - Main app startup code >>>
Part 6: Main app startup code
10/28/2022

Previous part: Dependencies and environment variables


The app's startup code, which follows the import statements, initializes different variables used in the functions
that handle HTTP requests.
First, we create the Flask app object and retrieve the third-party API endpoint URL from the environment
variable:

app = Flask(__name__)
app.config["DEBUG"] = True

number_url = os.environ["THIRD_PARTY_API_ENDPOINT"]

Next, we obtain the DefaultAzureCredential object, which is the recommended credential to use when
authenticating with Azure services. See Authenticate Azure hosted applications with DefaultAzureCredential.

credential = DefaultAzureCredential()

When run locally, DefaultAzureCredential looks for the AZURE_TENANT_ID , AZURE_CLIENT_ID , and
AZURE_CLIENT_SECRET environment variables that contain information for your local service principal. When run
in the cloud, DefaultAzureCredential automatically uses the service principal registered for the app, which is
typically contained within the managed identity.
The code next retrieves the third-party API's access key from Azure Key Vault. In the provisioning script, the Key
Vault is created using az keyvault create , and the secret is stored with az keyvault secret set .
The Key Vault resource itself is accessed through a URL, which is loaded from the KEY_VAULT_URL environment
variable.

key_vault_url = os.environ["KEY_VAULT_URL"]

To connect to the key vault, we must create a suitable client object. Because we want to retrieve a secret, we use
the SecretClient , which requires the key vault URL and the credential object that represents the identity under
which the app is running.

keyvault_client = SecretClient(vault_url=key_vault_url, credential=credential)

Creating the SecretClient object doesn't authenticate the credential in any way. The SecretClient is simply a
client-side construct that internally manages the resource URL and the credential. Authentication and
authorization happen only when you invoke an operation through the client, such as get_secret , which
generates a REST API call to the Azure resource.
api_secret_name = os.environ["THIRD_PARTY_API_SECRET_NAME"]
vault_secret = keyvault_client.get_secret(api_secret_name)

# The "secret" from Key Vault is an object with multiple properties. The key we
# want for the third-party API is in the value property.
access_key = vault_secret.value

Even if the app identity is authorized to access the key vault, it must still be authorized to access secrets.
Otherwise, the get_secret call fails. For this reason, the provisioning script sets a "get secrets" access policy for
the app using the Azure CLI command, az keyvault set-policy . For more information, see Key Vault
Authentication and Grant your app access to Key Vault. The latter article shows how to set an access policy using
the Azure portal. (The article is also written for managed identity, but applies equally to a local service principal
used in development.)
Finally, the app code sets up the client object through which we can write messages to an Azure Storage Queue.
The Queue's URL is in the environment variable STORAGE_QUEUE_URL .

queue_url = os.environ["STORAGE_QUEUE_URL"]
queue_client = QueueClient.from_queue_url(queue_url=queue_url, credential=credential)

As with Key Vault, we use a specific client object from the Azure libraries, QueueClient , and its from_queue_url
method to connect to the resource located at the URL in question. Once again, attempting to create this client
object validates that the app identity represented by the credential is authorized to access the queue. As noted
earlier, this authorization was granted by assigning the "Storage Queue Data Contributor" role to the main app.
Assuming all this startup code succeeds, the app has all its internal variables in place to support its
/api/v1/getcode API endpoint.
Part 7: Main application API endpoint

The app URL path /api/v1/getcode for the API generates a JSON response that contains an alphanumerical code
and a timestamp.
First, the @app.route decorator tells Flask that the get_code function handles requests to the /api/v1/getcode
URL.

@app.route('/api/v1/getcode', methods=['GET'])
def get_code():

Next, we call the third-party API, the URL of which is in number_url , providing the access key that we retrieve
from the key vault in the header.

headers = {
'Content-Type': 'application/json',
'x-functions-key': access_key
}

r = requests.get(url = number_url, headers = headers)

if (r.status_code != 200):
return "Could not get you a code.", r.status_code

The example third-party API is deployed to the serverless environment of Azure Functions. The x-functions-key
property in the header is specifically how Azure Functions expects an access key to appear in a header. For more
information, see Azure Functions HTTP trigger - Authorization keys. If calling the API fails for any reason, the
code returns an error message and the status code.
Assuming that the API call succeeds and returns a numerical value, we then construct a more complex code
using that number plus some random characters (using our own random_char function).

data = r.json()
chars1 = random_char(3)
chars2 = random_char(3)
code_value = f"{chars1}-{data['value']}-{chars2}"
code = { "code": code_value, "timestamp" : str(datetime.utcnow()) }

The code variable here contains the full JSON response for the app's API, which includes the code value and a
timestamp. An example response would be {"code":"ojE-161-pTv","timestamp":"2020-04-15 16:54:48.816549"} .
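The random_char helper is defined in the sample's own code and isn't shown in this walkthrough; a plausible minimal implementation, assuming it simply returns n random ASCII letters, might look like:

```python
# Plausible sketch of the sample's random_char helper (an assumption, not the
# sample's exact code): return a string of n random ASCII letters.
import random
import string

def random_char(n):
    return "".join(random.choice(string.ascii_letters) for _ in range(n))
```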
Before we return that response, however, we write a message in our storage queue using the Queue client's
send_message method:

queue_client.send_message(code)

return jsonify(code)
Processing queue messages
Messages stored in the queue can be viewed and managed through the Azure portal, with the Azure CLI
command az storage message get , or with Azure Storage Explorer. The sample repository includes scripts
(test.cmd and test.sh) to request a code from the app endpoint and then check the message queue. There's also a
script to clear the queue using the az storage message clear command.
Typically, an app like this example would have another process that asynchronously pulls messages from the
queue for further processing. As mentioned earlier, the response generated by this API endpoint might be used
elsewhere in the app with two-factor user authentication. In that case, the app should invalidate the code after a
certain period of time, say 10 minutes. A simple way to do this task would be to maintain a table of valid two-
factor authentication codes that the user sign-in procedure consults. The app would then have a simple
queue-watching process with the following logic (in pseudo-code):

pull a message from the queue and retrieve the code

if (code is already in the table):
    remove the code from the table, thereby invalidating it
else:
    add the code to the table, making it valid
    call queue_client.send_message(code, visibility_timeout=600)

This pseudo-code employs the send_message method's optional visibility_timeout parameter, which specifies
the number of seconds before the message becomes visible in the queue. Because the default timeout is zero,
messages initially written by the API endpoint become immediately visible to the queue-watching process. As a
result, that process stores them in the valid code table right away. The process queues the same message again
with the timeout, so that it receives the code again 10 minutes later, at which point it removes the code from the
table.
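The invalidation decision in this pseudo-code can be isolated as a small pure function. The sketch below is illustrative only; the process_code name and the commented QueueClient wiring are assumptions, not part of the sample repository:

```python
# Testable sketch of the queue-watching logic: decide whether a code delivery
# makes the code valid (first sighting) or invalidates it (second sighting).
def process_code(code, valid_codes):
    """Mutate valid_codes and return 'validated' or 'invalidated'."""
    if code in valid_codes:
        valid_codes.remove(code)   # second delivery: expire the code
        return "invalidated"
    valid_codes.add(code)          # first delivery: code becomes valid
    return "validated"

# Wiring sketch (requires azure-storage-queue; not runnable without a queue):
# for message in queue_client.receive_messages():
#     action = process_code(message.content, valid_codes)
#     queue_client.delete_message(message)
#     if action == "validated":
#         # Redeliver the same code after 10 minutes so it gets invalidated.
#         queue_client.send_message(message.content, visibility_timeout=600)
```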

Implementing the main app API endpoint in Azure Functions


The code shown previously in this article uses the Flask web framework to create its API endpoint. Because Flask
needs to run with a web server, such code must be deployed to Azure App Service or to a virtual machine.
An alternate deployment option is the serverless environment of Azure Functions. In this case, all the startup
code and the API endpoint code would be contained within the same function that's bound to an HTTP trigger.
As with App Service, you use function application settings to create environment variables for your code.
One piece of the implementation that becomes easier is authenticating with Queue Storage. Instead of obtaining
a QueueClient object using the queue's URL and a credential object, you create a queue storage binding for the
function. The binding handles all the authentication behind the scenes. With such a binding, your function is
given a ready-to-use client object as a parameter. For more information and example code, see Connect Azure
Functions to Azure Queue Storage.

Next steps
Through this tutorial, you've learned how apps authenticate with other Azure services using managed identity,
and how apps can use Azure Key Vault to store any other necessary secrets for third-party APIs.
The same pattern demonstrated here with Azure Key Vault and Azure Storage applies with all other Azure
services. The crucial step is that you set the correct role permissions for the app within that service's page on the
Azure portal, or through the Azure CLI. (See How to assign role permissions). Be sure to check the service
documentation to see whether you need to configure any other access policies.
Always remember that you need to assign the same roles and access policies to any service principal you're
using for local development.
In short, having completed this walkthrough, you can apply your knowledge to any number of other Azure
services and any number of other external services.
One subject that we haven't touched upon in this tutorial is authentication of users. To explore this area for web
apps, begin with Authenticate and authorize users end-to-end in Azure App Service.

See also
How to authenticate and authorize Python apps on Azure
Walkthrough sample: github.com/Azure-Samples/python-integrated-authentication
Azure Active Directory documentation
Azure Key Vault documentation
How to install Azure library packages for Python

The Azure SDK for Python is composed solely of many individual libraries that can be installed in standard
Python or Conda environments.
Libraries for standard Python environments are listed in the package index.
Packages for Conda environments are listed in the Microsoft channel on anaconda.org. Azure packages have
names that begin with azure- .
With these Azure libraries you can provision and manage resources on Azure services (using the management
libraries, whose names begin with azure-mgmt ) and connect with those resources from app code (using the
client libraries, whose names begin with just azure- ).

Install the latest version of a library


pip
conda

pip install <library>

pip install retrieves the latest version of a library in your current Python environment.
On Linux systems, you must install a library for each user separately. Installing libraries for all users with
sudo pip install isn't supported.

You can use any package name listed in the package index.

Install specific library versions


pip
conda

pip install <library>==<version>

Specify the desired version on the command line with pip install .
You can use any package name listed in the package index.

For Conda environments, be sure you've added the Microsoft channel to your Conda configuration (you need to do this only once):

conda config --add channels "Microsoft"

Install preview packages


pip
conda
pip install --pre <library>

To install the latest preview of a library, include the --pre flag on the command line.
Microsoft periodically releases preview library packages that support upcoming features, with the caveat that
the library is subject to change and must not be used in production projects.
You can use any package name listed in the package index.

Verify a library installation


pip
conda

pip show <library>

If the library is installed, pip show displays version and other summary information; otherwise, the command
displays nothing.
You can also use pip freeze or pip list to see all the libraries that are installed in your current Python
environment.
You can use any package name listed in the package index.
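You can also verify an installation from within Python using the standard library's importlib.metadata, which works for any installed package; a small sketch:

```python
# Return the installed version of a package, or None if it isn't installed,
# using only the standard library (Python 3.8+).
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    try:
        return version(package)
    except PackageNotFoundError:
        return None
```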

Uninstall a library
pip
conda

pip uninstall <library>

To uninstall a library, use pip uninstall .


You can use any package name listed in the package index.
Azure libraries package index

NOTE
For Conda libraries, see the Microsoft channel on anaconda.org.

Libraries using azure.core

NAME    PACKAGE    DOCS    SOURCE

Administration PyPI 4.2.0 docs GitHub 4.2.0

Anomaly Detector PyPI 3.0.0b5 docs GitHub 3.0.0b5

App Configuration PyPI 1.3.0 docs GitHub 1.3.0

App Configuration Provider PyPI 1.0.0b1 docs GitHub 1.0.0b1

Artifacts PyPI 0.14.0 docs GitHub 0.14.0

Attestation PyPI 1.0.0 docs GitHub 1.0.0

Avro Encoder PyPI 1.0.0 docs GitHub 1.0.0

Azure Blob Storage PyPI 1.1.4 docs GitHub 1.1.4


Checkpoint Store

Azure Blob Storage PyPI 1.1.4 docs GitHub 1.1.4


Checkpoint Store AIO

Azure Mixed Reality PyPI 1.0.0b1 docs GitHub 1.0.0b1


Authentication

Azure Remote Rendering PyPI 1.0.0b1 docs GitHub 1.0.0b1

Blobs PyPI 12.14.1 docs GitHub 12.14.1

Blobs Changefeed PyPI 12.0.0b4 docs GitHub 12.0.0b4

Certificates PyPI 4.6.0 docs GitHub 4.6.0

Cognitive Search PyPI 11.3.0 docs GitHub 11.3.0


PyPI 11.4.0b1 GitHub 11.4.0b1

Communication Chat PyPI 1.1.0 docs GitHub 1.1.0

Communication Email PyPI 1.0.0b1 docs GitHub 1.0.0b1

Communication Identity PyPI 1.3.0 docs GitHub 1.3.0

Communication Network PyPI 1.1.0b1 docs GitHub 1.1.0b1


Traversal

Communication Phone PyPI 1.0.1 docs GitHub 1.0.1


Numbers PyPI 1.1.0b2 GitHub 1.1.0b2

Communication Rooms PyPI 1.0.0b2 docs GitHub 1.0.0b2

Communication Sms PyPI 1.0.1 docs GitHub 1.0.1

Confidential Ledger PyPI 1.0.0 docs GitHub 1.0.0

Container Registry PyPI 1.0.0 docs GitHub 1.0.0


PyPI 1.1.0b1 GitHub 1.1.0b1

Conversation Analysis PyPI 1.0.0 docs GitHub 1.0.0


PyPI 1.1.0b2 GitHub 1.1.0b2

Core - Client - Core PyPI 1.26.0 docs GitHub 1.26.0

Core - Client - Experimental PyPI 1.0.0b1 docs GitHub 1.0.0b1

Core - Client - Tracing PyPI 1.0.0b9 docs GitHub 1.0.0b9


Opentelemetry

Cosmos DB PyPI 4.3.0 docs GitHub 4.3.0


PyPI 4.3.1b1 GitHub 4.3.1b1

Digital Twins Core PyPI 1.2.0 docs GitHub 1.2.0

Document Translation PyPI 1.0.0 docs GitHub 1.0.0

Event Grid PyPI 4.9.0 docs GitHub 4.9.0

Event Hubs PyPI 5.10.1 docs GitHub 5.10.1

Farming PyPI 1.0.0b1 docs GitHub 1.0.0b1

Files Data Lake PyPI 12.9.1 docs GitHub 12.9.1

Files Shares PyPI 12.10.1 docs GitHub 12.10.1

Form Recognizer PyPI 3.2.0 docs GitHub 3.2.0



Identity PyPI 1.11.0 docs GitHub 1.11.0


PyPI 1.12.0b2 GitHub 1.12.0b2

IoT Device Update PyPI 1.0.0 docs GitHub 1.0.0

Keys PyPI 4.7.0 docs GitHub 4.7.0


PyPI 4.8.0b1 GitHub 4.8.0b1

Load Testing PyPI 1.0.0b2 docs GitHub 1.0.0b2

Machine Learning PyPI 1.0.0 docs GitHub 1.0.0

Managed Private Endpoints PyPI 0.4.0 docs GitHub 0.4.0

Maps Geolocation PyPI 1.0.0b1 docs GitHub 1.0.0b1

Maps Render PyPI 1.0.0b1 docs GitHub 1.0.0b1

Maps Route PyPI 1.0.0b1 docs GitHub 1.0.0b1

Maps Search PyPI 1.0.0b2 docs GitHub 1.0.0b2

Media Analytics Edge PyPI 1.0.0b2 docs GitHub 1.0.0b2

Metrics Advisor PyPI 1.0.0 docs GitHub 1.0.0

Monitor Ingestion PyPI 1.0.0b1 docs GitHub 1.0.0b1

Monitor OpenTelemetry PyPI 1.0.0b8 docs GitHub 1.0.0b8


Exporter

Monitor Query PyPI 1.0.3 docs GitHub 1.0.3

Purview Account PyPI 1.0.0b1 docs GitHub 1.0.0b1

Purview Catalog PyPI 1.0.0b4 docs GitHub 1.0.0b4

Purview Scanning PyPI 1.0.0b2 docs GitHub 1.0.0b2

Question Answering PyPI 1.1.0 docs GitHub 1.1.0

Queues PyPI 12.5.0 docs GitHub 12.5.0

Schema Registry PyPI 1.2.0 docs GitHub 1.2.0

Schema Registry - Avro PyPI 1.0.0b4 docs GitHub 1.0.0b4

Secrets PyPI 4.6.0 docs GitHub 4.6.0

Service Bus PyPI 7.8.1 docs GitHub 7.8.1


PyPI 7.9.0a1 GitHub 7.9.0a1

Spark PyPI 0.7.0 docs GitHub 0.7.0

Synapse - AccessControl PyPI 0.7.0 docs GitHub 0.7.0

Synapse - Monitoring PyPI 0.2.0 docs GitHub 0.2.0

Tables PyPI 12.4.1 docs GitHub 12.4.1

Text Analytics PyPI 5.2.1 docs GitHub 5.2.1

Video Analyzer Edge PyPI 1.0.0b4 docs GitHub 1.0.0b4

Web PubSub PyPI 1.0.1 docs GitHub 1.0.1

Resource Management - PyPI 1.0.0b5 docs GitHub 1.0.0b5


Chaos

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Devcenter

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Elasticsan

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Security Devops

Resource Management - PyPI 9.0.0 docs GitHub 9.0.0


Advisor

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Agfood

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Agrifood

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Alerts Management PyPI 2.0.0b1

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


API Management

Resource Management - PyPI 2.2.0 docs GitHub 2.2.0


App Configuration

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


App Containers PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 7.1.0 docs GitHub 7.1.0


App Platform

Resource Management - PyPI 3.1.0 docs GitHub 3.1.0


Application Insights

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Attestation

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


Authorization

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Automanage

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Automation PyPI 1.1.0b2 GitHub 1.1.0b2

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Azure AD B2C

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Azure Arc Data

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Azure Stack

Resource Management - PyPI 7.0.0 docs GitHub 7.0.0


Azure Stack HCI

Resource Management - PyPI 7.1.0 docs GitHub 7.1.0


Azure VMware Solution

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Bare Metal Infrastructure

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Billing

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Bot Service PyPI 2.0.0b3 GitHub 2.0.0b3

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Change Analysis

Resource Management - PyPI 13.3.0 docs GitHub 13.3.0


Cognitive Services

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Commerce

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Communication PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 29.0.0 docs GitHub 29.0.0


Compute

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Confidential Ledger

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Confluent PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Connected VMWare

Resource Management - PyPI 10.0.0 docs GitHub 10.0.0


Container Instances

Resource Management - PyPI 20.6.0 docs GitHub 20.6.0


Container Service

Resource Management - PyPI 12.0.0 docs GitHub 12.0.0


Content Delivery Network PyPI 12.1.0b1 GitHub 12.1.0b1

Resource Management - PyPI 8.0.0 docs GitHub 8.0.0


Cosmos DB PyPI 9.0.0b1 GitHub 9.0.0b1

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


Cost Management

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Custom Providers

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Dashboard

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Data Box

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Data Box Edge

Resource Management - PyPI 2.9.0 docs GitHub 2.9.0


Data Factory

Resource Management - PyPI 10.0.0 docs GitHub 10.0.0


Data Migration

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Data Protection

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Data Share

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Databricks PyPI 1.1.0b1 GitHub 1.1.0b1

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Deployment Manager

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Desktop Virtualization

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Device Update

Resource Management - PyPI 9.0.0 docs GitHub 9.0.0


DevTest Labs

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


DNS Resolver

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Dynatrace

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Edge Order

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Education

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Elastic

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


Event Hubs

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Extended Location

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Fluid Relay

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Guest Config

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


HANA on Azure

Resource Management - PyPI 9.0.0 docs GitHub 9.0.0


HDInsight

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Health Bot

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Healthcare APIs

Resource Management - PyPI 7.0.0 docs GitHub 7.0.0


Hybrid Compute

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Hybrid Kubernetes

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Hybrid Network

Resource Management - PyPI 2.3.0 docs GitHub 2.3.0


IoT Hub

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


KeyVault

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Kubernetes Configuration

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


Kusto

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Lab Services

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Load Testing

Resource Management - PyPI 12.0.0 docs GitHub 12.0.0


Log Analytics PyPI 13.0.0b5 GitHub 13.0.0b5

Resource Management - PyPI 10.0.0 docs GitHub 10.0.0


Logic Apps

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Logz

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Machine Learning Services

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Maintenance PyPI 2.1.0b1 GitHub 2.1.0b1

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Managed Services

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Management Groups

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Management Partner

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Maps

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Marketplace Ordering

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


Media Services

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Mixed Reality

Resource Management - PyPI 1.0.0b3 docs GitHub 1.0.0b3


Mobile Network

Resource Management - PyPI 5.0.1 docs GitHub 5.0.1


Monitor

Resource Management - PyPI 9.0.1 docs GitHub 9.0.1


NetApp

Resource Management - PyPI 22.1.0 docs GitHub 22.1.0


Network

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Nginx

Resource Management - PyPI 8.0.0 docs GitHub 8.0.0


Notification Hubs

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Oep

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Operations Management

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Orbital

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Peering

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Policy Insights PyPI 1.1.0b3 GitHub 1.1.0b3

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Portal

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Power BI Dedicated

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Purview

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Quantum

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Quota

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


Rdbms PyPI 10.2.0b3 GitHub 10.2.0b3

Resource Management - PyPI 2.1.0 docs GitHub 2.1.0


Recovery Services

Resource Management - PyPI 5.0.0 docs GitHub 5.0.0


Recovery Services Backup PyPI 5.1.0b2 GitHub 5.1.0b2

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Recovery Services Site
Recovery

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Red Hat OpenShift

Resource Management - PyPI 14.0.0 docs GitHub 14.0.0


Redis

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Redis Enterprise

Resource Management - PyPI 1.0.0b1 GitHub 1.0.0b1


Region Move

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Relay

Resource Management - PyPI 2.1.0 docs GitHub 2.1.0


Reservations

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Resource Connector

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Resource Health

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Resource Mover PyPI 1.1.0b2 GitHub 1.1.0b2

Resource Management - PyPI 21.2.1 docs GitHub 21.2.1


Resources

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Scheduler PyPI 7.0.0b1

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Scvmm

Resource Management - PyPI 8.0.0 docs GitHub 8.0.0


Search

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Security

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Security Insight PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Serial Console

Resource Management - PyPI 8.1.0 docs GitHub 8.1.0


Service Bus

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Service Fabric

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Service Fabric Managed PyPI 2.0.0b2 GitHub 2.0.0b2
Clusters

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Service Linker

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


SignalR

Resource Management - PyPI 3.0.1 docs GitHub 3.0.1


SQL PyPI 4.0.0b4 GitHub 4.0.0b4

Resource Management - PyPI 0.5.0 docs GitHub 0.5.0


SQL Virtual Machine

Resource Management - PyPI 20.1.0 docs GitHub 20.1.0


Storage

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Storage Pool

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Storage Sync

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Stream Analytics

Resource Management - PyPI 3.1.1 docs GitHub 3.1.1


Subscription

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Support

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Synapse PyPI 2.1.0b5 GitHub 2.1.0b5

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Test Base

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Time Series Insights

Resource Management - PyPI 7.0.0 docs GitHub 7.0.0


Web

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Web PubSub

Resource Management - PyPI 1.0.0b3 docs GitHub 1.0.0b3


Work Load Monitor

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Workloads

All libraries
NAME    PACKAGE    DOCS    SOURCE

Administration PyPI 4.2.0 docs GitHub 4.2.0

Anomaly Detector PyPI 3.0.0b5 docs GitHub 3.0.0b5

App Configuration PyPI 1.3.0 docs GitHub 1.3.0

App Configuration Provider PyPI 1.0.0b1 docs GitHub 1.0.0b1

Artifacts PyPI 0.14.0 docs GitHub 0.14.0

Attestation PyPI 1.0.0 docs GitHub 1.0.0

Avro Encoder PyPI 1.0.0 docs GitHub 1.0.0

Azure Blob Storage PyPI 1.1.4 docs GitHub 1.1.4


Checkpoint Store

Azure Blob Storage PyPI 1.1.4 docs GitHub 1.1.4


Checkpoint Store AIO

Azure Mixed Reality PyPI 1.0.0b1 docs GitHub 1.0.0b1


Authentication

Azure Remote Rendering PyPI 1.0.0b1 docs GitHub 1.0.0b1

Blobs PyPI 12.14.1 docs GitHub 12.14.1

Blobs Changefeed PyPI 12.0.0b4 docs GitHub 12.0.0b4

Certificates PyPI 4.6.0 docs GitHub 4.6.0

Cognitive Search PyPI 11.3.0 docs GitHub 11.3.0


PyPI 11.4.0b1 GitHub 11.4.0b1

Communication Chat PyPI 1.1.0 docs GitHub 1.1.0

Communication Email PyPI 1.0.0b1 docs GitHub 1.0.0b1

Communication Identity PyPI 1.3.0 docs GitHub 1.3.0



Communication Network PyPI 1.1.0b1 docs GitHub 1.1.0b1


Traversal

Communication Phone PyPI 1.0.1 docs GitHub 1.0.1


Numbers PyPI 1.1.0b2 GitHub 1.1.0b2

Communication Rooms PyPI 1.0.0b2 docs GitHub 1.0.0b2

Communication Sms PyPI 1.0.1 docs GitHub 1.0.1

Confidential Ledger PyPI 1.0.0 docs GitHub 1.0.0

Container Registry PyPI 1.0.0 docs GitHub 1.0.0


PyPI 1.1.0b1 GitHub 1.1.0b1

Conversation Analysis PyPI 1.0.0 docs GitHub 1.0.0


PyPI 1.1.0b2 GitHub 1.1.0b2

Core - Client - Core PyPI 1.26.0 docs GitHub 1.26.0

Core - Client - Experimental PyPI 1.0.0b1 docs GitHub 1.0.0b1

Core - Client - Tracing PyPI 1.0.0b9 docs GitHub 1.0.0b9


Opentelemetry

Cosmos DB PyPI 4.3.0 docs GitHub 4.3.0


PyPI 4.3.1b1 GitHub 4.3.1b1

Digital Twins Core PyPI 1.2.0 docs GitHub 1.2.0

Document Translation PyPI 1.0.0 docs GitHub 1.0.0

Event Grid PyPI 4.9.0 docs GitHub 4.9.0

Event Hubs PyPI 5.10.1 docs GitHub 5.10.1

Farming PyPI 1.0.0b1 docs GitHub 1.0.0b1

Files Data Lake PyPI 12.9.1 docs GitHub 12.9.1

Files Shares PyPI 12.10.1 docs GitHub 12.10.1

Form Recognizer PyPI 3.2.0 docs GitHub 3.2.0

Identity PyPI 1.11.0 docs GitHub 1.11.0


PyPI 1.12.0b2 GitHub 1.12.0b2

IoT Device Update PyPI 1.0.0 docs GitHub 1.0.0

Keys PyPI 4.7.0 docs GitHub 4.7.0


PyPI 4.8.0b1 GitHub 4.8.0b1

Load Testing PyPI 1.0.0b2 docs GitHub 1.0.0b2

Machine Learning PyPI 1.0.0 docs GitHub 1.0.0

Managed Private Endpoints PyPI 0.4.0 docs GitHub 0.4.0

Maps Geolocation PyPI 1.0.0b1 docs GitHub 1.0.0b1

Maps Render PyPI 1.0.0b1 docs GitHub 1.0.0b1

Maps Route PyPI 1.0.0b1 docs GitHub 1.0.0b1

Maps Search PyPI 1.0.0b2 docs GitHub 1.0.0b2

Media Analytics Edge PyPI 1.0.0b2 docs GitHub 1.0.0b2

Metrics Advisor PyPI 1.0.0 docs GitHub 1.0.0

Monitor Ingestion PyPI 1.0.0b1 docs GitHub 1.0.0b1

Monitor OpenTelemetry PyPI 1.0.0b8 docs GitHub 1.0.0b8


Exporter

Monitor Query PyPI 1.0.3 docs GitHub 1.0.3

Purview Account PyPI 1.0.0b1 docs GitHub 1.0.0b1

Purview Catalog PyPI 1.0.0b4 docs GitHub 1.0.0b4

Purview Scanning PyPI 1.0.0b2 docs GitHub 1.0.0b2

Question Answering PyPI 1.1.0 docs GitHub 1.1.0

Queues PyPI 12.5.0 docs GitHub 12.5.0

Schema Registry PyPI 1.2.0 docs GitHub 1.2.0

Schema Registry - Avro PyPI 1.0.0b4 docs GitHub 1.0.0b4

Secrets PyPI 4.6.0 docs GitHub 4.6.0

Service Bus PyPI 7.8.1 docs GitHub 7.8.1


PyPI 7.9.0a1 GitHub 7.9.0a1

Spark PyPI 0.7.0 docs GitHub 0.7.0

Synapse - AccessControl PyPI 0.7.0 docs GitHub 0.7.0

Synapse - Monitoring PyPI 0.2.0 docs GitHub 0.2.0



Tables PyPI 12.4.1 docs GitHub 12.4.1

Text Analytics PyPI 5.2.1 docs GitHub 5.2.1

Video Analyzer Edge PyPI 1.0.0b4 docs GitHub 1.0.0b4

Web PubSub PyPI 1.0.1 docs GitHub 1.0.1

Resource Management - PyPI 1.0.0b5 docs GitHub 1.0.0b5


Chaos

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Devcenter

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Elasticsan

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Security Devops

Resource Management - PyPI 9.0.0 docs GitHub 9.0.0


Advisor

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Agfood

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Agrifood

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Alerts Management PyPI 2.0.0b1

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


API Management

Resource Management - PyPI 2.2.0 docs GitHub 2.2.0


App Configuration

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


App Containers PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 7.1.0 docs GitHub 7.1.0


App Platform

Resource Management - PyPI 3.1.0 docs GitHub 3.1.0


Application Insights

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Attestation

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


Authorization

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Automanage

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Automation PyPI 1.1.0b2 GitHub 1.1.0b2

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Azure AD B2C

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Azure Arc Data

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Azure Stack

Resource Management - PyPI 7.0.0 docs GitHub 7.0.0


Azure Stack HCI

Resource Management - PyPI 7.1.0 docs GitHub 7.1.0


Azure VMware Solution

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Bare Metal Infrastructure

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Billing

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Bot Service PyPI 2.0.0b3 GitHub 2.0.0b3

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Change Analysis

Resource Management - PyPI 13.3.0 docs GitHub 13.3.0


Cognitive Services

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Commerce

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Communication PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 29.0.0 docs GitHub 29.0.0


Compute

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Confidential Ledger

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Confluent PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Connected VMWare
NAME PA C K A GE DO C S SO URC E

Resource Management - PyPI 10.0.0 docs GitHub 10.0.0


Container Instances

Resource Management - PyPI 20.6.0 docs GitHub 20.6.0


Container Service

Resource Management - PyPI 12.0.0 docs GitHub 12.0.0


Content Delivery Network PyPI 12.1.0b1 GitHub 12.1.0b1

Resource Management - PyPI 8.0.0 docs GitHub 8.0.0


Cosmos DB PyPI 9.0.0b1 GitHub 9.0.0b1

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


Cost Management

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Custom Providers

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Dashboard

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Data Box

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Data Box Edge

Resource Management - PyPI 2.9.0 docs GitHub 2.9.0


Data Factory

Resource Management - PyPI 10.0.0 docs GitHub 10.0.0


Data Migration

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Data Protection

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Data Share

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Databricks PyPI 1.1.0b1 GitHub 1.1.0b1

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Deployment Manager

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Desktop Virtualization

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Device Update

Resource Management - PyPI 9.0.0 docs GitHub 9.0.0


DevTest Labs
NAME PA C K A GE DO C S SO URC E

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


DNS Resolver

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Dynatrace

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Edge Order

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Education

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Elastic

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


Event Hubs

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Extended Location

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Fluid Relay

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Guest Config

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


HANA on Azure

Resource Management - PyPI 9.0.0 docs GitHub 9.0.0


HDInsight

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Health Bot

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Healthcare APIs

Resource Management - PyPI 7.0.0 docs GitHub 7.0.0


Hybrid Compute

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Hybrid Kubernetes

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Hybrid Network

Resource Management - PyPI 2.3.0 docs GitHub 2.3.0


IoT Hub

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


KeyVault
NAME PA C K A GE DO C S SO URC E

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Kubernetes Configuration

Resource Management - PyPI 3.0.0 docs GitHub 3.0.0


Kusto

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Lab Services

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Load Testing

Resource Management - PyPI 12.0.0 docs GitHub 12.0.0


Log Analytics PyPI 13.0.0b5 GitHub 13.0.0b5

Resource Management - PyPI 10.0.0 docs GitHub 10.0.0


Logic Apps

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Logz

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Machine Learning Services

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Maintenance PyPI 2.1.0b1 GitHub 2.1.0b1

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Managed Services

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Management Groups

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Management Partner

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Maps

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Marketplace Ordering

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


Media Services

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Mixed Reality

Resource Management - PyPI 1.0.0b3 docs GitHub 1.0.0b3


Mobile Network

Resource Management - PyPI 5.0.1 docs GitHub 5.0.1


Monitor
NAME PA C K A GE DO C S SO URC E

Resource Management - PyPI 9.0.1 docs GitHub 9.0.1


NetApp

Resource Management - PyPI 22.1.0 docs GitHub 22.1.0


Network

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Nginx

Resource Management - PyPI 8.0.0 docs GitHub 8.0.0


Notification Hubs

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Oep

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Operations Management

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Orbital

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Peering

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Policy Insights PyPI 1.1.0b3 GitHub 1.1.0b3

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Portal

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Power BI Dedicated

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Purview

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Quantum

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Quota

Resource Management - PyPI 10.1.0 docs GitHub 10.1.0


Rdbms PyPI 10.2.0b3 GitHub 10.2.0b3

Resource Management - PyPI 2.1.0 docs GitHub 2.1.0


Recovery Services

Resource Management - PyPI 5.0.0 docs GitHub 5.0.0


Recovery Services Backup PyPI 5.1.0b2 GitHub 5.1.0b2

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Recovery Services Site
Recovery
NAME PA C K A GE DO C S SO URC E

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Red Hat OpenShift

Resource Management - PyPI 14.0.0 docs GitHub 14.0.0


Redis

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Redis Enterprise

Resource Management - PyPI 1.0.0b1 GitHub 1.0.0b1


Region Move

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Relay

Resource Management - PyPI 2.1.0 docs GitHub 2.1.0


Reservations

Resource Management - PyPI 1.0.0b2 docs GitHub 1.0.0b2


Resource Connector

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Resource Health

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Resource Mover PyPI 1.1.0b2 GitHub 1.1.0b2

Resource Management - PyPI 21.2.1 docs GitHub 21.2.1


Resources

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Scheduler PyPI 7.0.0b1

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Scvmm

Resource Management - PyPI 8.0.0 docs GitHub 8.0.0


Search

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Security

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Security Insight PyPI 2.0.0b1 GitHub 2.0.0b1

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Serial Console

Resource Management - PyPI 8.1.0 docs GitHub 8.1.0


Service Bus

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Service Fabric
NAME PA C K A GE DO C S SO URC E

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Service Fabric Managed PyPI 2.0.0b2 GitHub 2.0.0b2
Clusters

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


Service Linker

Resource Management - PyPI 1.1.0 docs GitHub 1.1.0


SignalR

Resource Management - PyPI 3.0.1 docs GitHub 3.0.1


SQL PyPI 4.0.0b4 GitHub 4.0.0b4

Resource Management - PyPI 0.5.0 docs GitHub 0.5.0


SQL Virtual Machine

Resource Management - PyPI 20.1.0 docs GitHub 20.1.0


Storage

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Storage Pool

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Storage Sync

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Stream Analytics

Resource Management - PyPI 3.1.1 docs GitHub 3.1.1


Subscription

Resource Management - PyPI 6.0.0 docs GitHub 6.0.0


Support

Resource Management - PyPI 2.0.0 docs GitHub 2.0.0


Synapse PyPI 2.1.0b5 GitHub 2.1.0b5

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Test Base

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Time Series Insights

Resource Management - PyPI 7.0.0 docs GitHub 7.0.0


Web

Resource Management - PyPI 1.0.0 docs GitHub 1.0.0


Web PubSub

Resource Management - PyPI 1.0.0b3 docs GitHub 1.0.0b3


Work Load Monitor

Resource Management - PyPI 1.0.0b1 docs GitHub 1.0.0b1


Workloads
NAME | PACKAGE
azure-agrifood-nspkg | PyPI 1.0.0
azure-ai-language-nspkg | PyPI 1.0.0
azure-ai-translation-nspkg | PyPI 1.0.0
azure-communication-administration | PyPI 1.0.0b4
azure-iothub-provisioningserviceclient | PyPI 1.2.0
azure-iot-modelsrepository | PyPI 1.0.0b1
azure-iot-nspkg | PyPI 1.0.1
azure-media-nspkg | PyPI 1.0.0
azure-messaging-nspkg | PyPI 1.0.0
azure-mixedreality-nspkg | PyPI 1.0.0
azure-monitor-nspkg | PyPI 1.0.0
azure-opentelemetry-exporter-azuremonitor | PyPI 1.0.0b2
azure-purview-administration | PyPI 1.0.0b1
azure-purview-nspkg | PyPI 2.0.0
azure-security-nspkg | PyPI 1.0.0
iotedgedev | PyPI 3.3.6
iotedgehubdev | PyPI 0.14.16
text-analytics | PyPI 1.0.2

NAME | PACKAGE | DOCS | SOURCE
Administration | PyPI 4.2.0 | docs | GitHub 4.2.0
Anomaly Detector | PyPI 3.0.0b5 | docs | GitHub 3.0.0b5
Anomaly Detector | PyPI 0.3.0 | docs | GitHub 0.3.0
App Configuration | PyPI 1.3.0 | docs | GitHub 1.3.0
App Configuration Provider | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Application Insights | PyPI 0.1.1 | - | GitHub 0.1.1
Artifacts | PyPI 0.14.0 | docs | GitHub 0.14.0
Attestation | PyPI 1.0.0 | docs | GitHub 1.0.0
Autosuggest | PyPI 0.2.0 | - | GitHub 0.2.0
Avro Encoder | PyPI 1.0.0 | docs | GitHub 1.0.0
Azure Blob Storage Checkpoint Store | PyPI 1.1.4 | docs | GitHub 1.1.4
Azure Blob Storage Checkpoint Store AIO | PyPI 1.1.4 | docs | GitHub 1.1.4
Azure Mixed Reality Authentication | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Azure Remote Rendering | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Batch | PyPI 12.0.0 | docs | GitHub 12.0.0
Blobs | PyPI 12.14.1 | docs | GitHub 12.14.1
Blobs Changefeed | PyPI 12.0.0b4 | docs | GitHub 12.0.0b4
Certificates | PyPI 4.6.0 | docs | GitHub 4.6.0
Cognitive Search | PyPI 11.3.0, PyPI 11.4.0b1 | docs | GitHub 11.3.0, GitHub 11.4.0b1
Cognitive Services Knowledge Namespace Package | PyPI 3.0.0 | - | GitHub 3.0.0
Cognitive Services Language Namespace Package | PyPI 3.0.1 | - | GitHub 3.0.1
Cognitive Services Namespace Package | PyPI 3.0.1 | - | GitHub 3.0.1
Cognitive Services Search Namespace Package | PyPI 3.0.1 | - | GitHub 3.0.1
Cognitive Services Vision Namespace Package | PyPI 3.0.1 | - | GitHub 3.0.1
Common | PyPI 1.1.28 | docs | GitHub 1.1.28
Common | PyPI 2.1.0 | - | GitHub 2.1.0
Communication Chat | PyPI 1.1.0 | docs | GitHub 1.1.0
Communication Email | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Communication Identity | PyPI 1.3.0 | docs | GitHub 1.3.0
Communication Namespace Package | PyPI 0.0.0b1 | docs | -
Communication Network Traversal | PyPI 1.1.0b1 | docs | GitHub 1.1.0b1
Communication Phone Numbers | PyPI 1.0.1, PyPI 1.1.0b2 | docs | GitHub 1.0.1, GitHub 1.1.0b2
Communication Rooms | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Communication Sms | PyPI 1.0.1 | docs | GitHub 1.0.1
Computer Vision | PyPI 0.9.0 | docs | GitHub 0.9.0
Confidential Ledger | PyPI 1.0.0 | docs | GitHub 1.0.0
Container Registry | PyPI 1.0.0, PyPI 1.1.0b1 | docs | GitHub 1.0.0, GitHub 1.1.0b1
Content Moderator | PyPI 1.0.0 | - | GitHub 1.0.0
Conversation Analysis | PyPI 1.0.0, PyPI 1.1.0b2 | docs | GitHub 1.0.0, GitHub 1.1.0b2
Core - Client - Core | PyPI 1.26.0 | docs | GitHub 1.26.0
Core - Client - Experimental | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Core - Client - Tracing Opencensus | PyPI 1.0.0b8 | docs | GitHub 1.0.0b8
Core - Client - Tracing Opentelemetry | PyPI 1.0.0b9 | docs | GitHub 1.0.0b9
Core Namespace Package | PyPI 3.0.2 | - | GitHub 3.0.2
Cosmos DB | PyPI 4.3.0, PyPI 4.3.1b1 | docs | GitHub 4.3.0, GitHub 4.3.1b1
Custom Image Search | PyPI 0.2.0 | - | GitHub 0.2.0
Custom Search | PyPI 0.3.0 | - | GitHub 0.3.0
Custom Vision | PyPI 3.1.0 | docs | GitHub 3.1.0
Data Lake Storage | PyPI 0.0.51 | - | -
Data Namespace Package | PyPI 1.0.0 | docs | -
Dev Tools | PyPI 1.2.0 | - | GitHub 1.2.0
Digital Twins Core | PyPI 1.2.0 | docs | GitHub 1.2.0
Digital Twins Namespace Package | PyPI 1.0.0 | - | -
Doc Warden | PyPI 0.7.2 | - | GitHub 0.7.2
Document Translation | PyPI 1.0.0 | docs | GitHub 1.0.0
Entity Search | PyPI 2.0.0 | - | GitHub 2.0.0
Event Grid | PyPI 4.9.0 | docs | GitHub 4.9.0
Event Hubs | PyPI 5.10.1 | docs | GitHub 5.10.1
Face | PyPI 0.6.0 | docs | GitHub 0.6.0
Farming | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Files Data Lake | PyPI 12.9.1 | docs | GitHub 12.9.1
Files Shares | PyPI 2.1.0 | - | GitHub 2.1.0
Files Shares | PyPI 12.10.1 | docs | GitHub 12.10.1
Form Recognizer | PyPI 3.2.0 | docs | GitHub 3.2.0
Form Recognizer | PyPI 0.1.1 | - | GitHub 0.1.1
Graph RBAC | PyPI 0.61.1 | docs | GitHub 0.61.1
Identity | PyPI 1.11.0, PyPI 1.12.0b2 | docs | GitHub 1.11.0, GitHub 1.12.0b2
Image Search | PyPI 2.0.0 | - | GitHub 2.0.0
Ink Recognizer | PyPI 1.0.0b1 | - | GitHub 1.0.0b1
IoT Device | PyPI 2.12.0 | - | -
IoT Device Update | PyPI 1.0.0 | docs | GitHub 1.0.0
IoT Hub | PyPI 2.6.0 | - | -
Key Vault | PyPI 4.2.0 | - | GitHub 4.2.0
Key Vault Namespace Package | PyPI 1.0.0 | - | GitHub 1.0.0
Keys | PyPI 4.7.0, PyPI 4.8.0b1 | docs | GitHub 4.7.0, GitHub 4.8.0b1
Kusto Data | PyPI 2.0.0 | - | -
Language Understanding (LUIS) | PyPI 0.7.0 | docs | GitHub 0.7.0
Load Testing | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Log Analytics | PyPI 0.1.1 | - | GitHub 0.1.1
Machine Learning | PyPI 1.0.0 | docs | GitHub 1.0.0
Machine Learning | PyPI 1.2.0 | - | -
Managed Private Endpoints | PyPI 0.4.0 | docs | GitHub 0.4.0
Maps Geolocation | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Maps Render | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Maps Route | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Maps Search | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Media Analytics Edge | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Metrics Advisor | PyPI 1.0.0 | docs | GitHub 1.0.0
Monitor | PyPI 0.4.0 | - | GitHub 0.4.0
Monitor Ingestion | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Monitor OpenTelemetry Exporter | PyPI 1.0.0b8 | docs | GitHub 1.0.0b8
Monitor Query | PyPI 1.0.3 | docs | GitHub 1.0.3
MsRest | PyPI 0.7.1 | - | GitHub 0.7.1
MsRest Azure | PyPI 0.6.4 | - | GitHub 0.6.4
News Search | PyPI 2.0.0 | - | GitHub 2.0.0
Personalizer | PyPI 0.1.0 | - | GitHub 0.1.0
Purview Account | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Purview Catalog | PyPI 1.0.0b4 | docs | GitHub 1.0.0b4
Purview Scanning | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
QnA Maker | PyPI 0.3.0 | docs | GitHub 0.3.0
Question Answering | PyPI 1.1.0 | docs | GitHub 1.1.0
Queues | PyPI 12.5.0 | docs | GitHub 12.5.0
Schema Registry | PyPI 1.2.0 | docs | GitHub 1.2.0
Schema Registry - Avro | PyPI 1.0.0b4 | docs | GitHub 1.0.0b4
Search Namespace Package | PyPI 1.0.0 | - | GitHub 1.0.0
Secrets | PyPI 4.6.0 | docs | GitHub 4.6.0
Service Bus | PyPI 7.8.1, PyPI 7.9.0a1 | docs | GitHub 7.8.1, GitHub 7.9.0a1
Service Fabric | PyPI 8.2.0.0 | docs | GitHub 8.2.0.0
Spark | PyPI 0.7.0 | docs | GitHub 0.7.0
Speech | PyPI 1.14.0 | - | -
Spell Check | PyPI 2.0.0 | - | GitHub 2.0.0
Storage | PyPI 0.37.0 | - | GitHub 0.37.0
Storage Namespace Package | PyPI 3.1.0 | - | GitHub 3.1.0
Synapse | PyPI 0.1.1 | docs | GitHub 0.1.1
Synapse - AccessControl | PyPI 0.7.0 | docs | GitHub 0.7.0
Synapse - Monitoring | PyPI 0.2.0 | docs | GitHub 0.2.0
Synapse Namespace Package | PyPI 1.0.0 | - | GitHub 1.0.0
Tables | PyPI 12.4.1 | docs | GitHub 12.4.1
Text Analytics | PyPI 5.2.1 | docs | GitHub 5.2.1
Text Analytics | PyPI 0.2.1 | - | GitHub 0.2.1
Text Analytics Namespace Package | PyPI 1.0.0 | - | GitHub 1.0.0
Tox Monorepo | PyPI 0.1.2 | - | GitHub 0.1.2
Uamqp | PyPI 1.6.1 | - | GitHub 1.6.1
Video Analyzer Edge | PyPI 1.0.0b4 | docs | GitHub 1.0.0b4
Video Search | PyPI 2.0.0 | - | GitHub 2.0.0
Visual Search | PyPI 0.2.0 | - | GitHub 0.2.0
Web PubSub | PyPI 1.0.1 | docs | GitHub 1.0.1
Web Search | PyPI 2.0.0 | - | GitHub 2.0.0
Core - Management - Core | PyPI 1.3.2 | docs | GitHub 1.3.2
Resource Management | PyPI 5.0.0 | - | GitHub 5.0.0
Resource Management - Chaos | PyPI 1.0.0b5 | docs | GitHub 1.0.0b5
Resource Management - Devcenter | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Elasticsan | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Security Devops | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Advisor | PyPI 9.0.0 | docs | GitHub 9.0.0
Resource Management - Agfood | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Agrifood | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Alerts Management | PyPI 1.0.0, PyPI 2.0.0b1 | docs | GitHub 1.0.0
Resource Management - API Management | PyPI 3.0.0 | docs | GitHub 3.0.0
Resource Management - App Configuration | PyPI 2.2.0 | docs | GitHub 2.2.0
Resource Management - App Containers | PyPI 1.0.0, PyPI 2.0.0b1 | docs | GitHub 1.0.0, GitHub 2.0.0b1
Resource Management - App Platform | PyPI 7.1.0 | docs | GitHub 7.1.0
Resource Management - Application Insights | PyPI 3.1.0 | docs | GitHub 3.1.0
Resource Management - Attestation | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Authorization | PyPI 3.0.0 | docs | GitHub 3.0.0
Resource Management - Automanage | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Automation | PyPI 1.0.0, PyPI 1.1.0b2 | docs | GitHub 1.0.0, GitHub 1.1.0b2
Resource Management - Azure AD B2C | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Azure Arc Data | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Azure Stack | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Azure Stack HCI | PyPI 7.0.0 | docs | GitHub 7.0.0
Resource Management - Azure VMware Solution | PyPI 7.1.0 | docs | GitHub 7.1.0
Resource Management - Bare Metal Infrastructure | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Batch | PyPI 16.2.0 | docs | GitHub 16.2.0
Resource Management - Batch AI | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Billing | PyPI 6.0.0 | docs | GitHub 6.0.0
Resource Management - Bot Service | PyPI 1.0.0, PyPI 2.0.0b3 | docs | GitHub 1.0.0, GitHub 2.0.0b3
Resource Management - Change Analysis | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Cognitive Services | PyPI 13.3.0 | docs | GitHub 13.3.0
Resource Management - Commerce | PyPI 6.0.0 | docs | GitHub 6.0.0
Resource Management - Common | PyPI 0.20.0 | - | -
Resource Management - Communication | PyPI 1.0.0, PyPI 2.0.0b1 | docs | GitHub 1.0.0, GitHub 2.0.0b1
Resource Management - Compute | PyPI 29.0.0 | docs | GitHub 29.0.0
Resource Management - Confidential Ledger | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Confluent | PyPI 1.0.0, PyPI 2.0.0b1 | docs | GitHub 1.0.0, GitHub 2.0.0b1
Resource Management - Connected VMWare | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Consumption | PyPI 10.0.0 | docs | GitHub 10.0.0
Resource Management - Container Instances | PyPI 10.0.0 | docs | GitHub 10.0.0
Resource Management - Container Service | PyPI 20.6.0 | docs | GitHub 20.6.0
Resource Management - Content Delivery Network | PyPI 12.0.0, PyPI 12.1.0b1 | docs | GitHub 12.0.0, GitHub 12.1.0b1
Resource Management - Cosmos DB | PyPI 8.0.0, PyPI 9.0.0b1 | docs | GitHub 8.0.0, GitHub 9.0.0b1
Resource Management - Cost Management | PyPI 3.0.0 | docs | GitHub 3.0.0
Resource Management - Custom Providers | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Dashboard | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Data Box | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Data Box Edge | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Data Factory | PyPI 2.9.0 | docs | GitHub 2.9.0
Resource Management - Data Lake Analytics | PyPI 0.6.0 | docs | GitHub 0.6.0
Resource Management - Data Lake Namespace Package | PyPI 3.0.1 | - | GitHub 3.0.1
Resource Management - Data Lake Storage | PyPI 0.5.0 | docs | GitHub 0.5.0
Resource Management - Data Migration | PyPI 10.0.0 | docs | GitHub 10.0.0
Resource Management - Data Protection | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Data Share | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Databricks | PyPI 1.0.0, PyPI 1.1.0b1 | docs | GitHub 1.0.0, GitHub 1.1.0b1
Resource Management - Datadog | PyPI 2.0.0 | docs | -
Resource Management - Deployment Manager | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Desktop Virtualization | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Dev Spaces | PyPI 0.2.0 | docs | GitHub 0.2.0
Resource Management - Device Update | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - DevTest Labs | PyPI 9.0.0 | docs | GitHub 9.0.0
Resource Management - Digital Twins | PyPI 6.2.0 | docs | GitHub 6.2.0
Resource Management - DNS | PyPI 8.0.0 | docs | GitHub 8.0.0
Resource Management - DNS Resolver | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Document DB | PyPI 0.1.3 | - | GitHub 0.1.3
Resource Management - Dynatrace | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Edge Gateway | PyPI 0.1.0 | - | GitHub 0.1.0
Resource Management - Edge Order | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Education | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Elastic | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Event Grid | PyPI 10.2.0 | docs | GitHub 10.2.0
Resource Management - Event Hubs | PyPI 10.1.0 | docs | GitHub 10.1.0
Resource Management - Extended Location | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - Fluid Relay | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Frontdoor | PyPI 1.0.1 | docs | GitHub 1.0.1
Resource Management - Guest Config | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - HANA on Azure | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - HDInsight | PyPI 9.0.0 | docs | GitHub 9.0.0
Resource Management - Health Bot | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Healthcare APIs | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - Hybrid Compute | PyPI 7.0.0 | docs | GitHub 7.0.0
Resource Management - Hybrid Kubernetes | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - Hybrid Network | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Image Builder | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - IoT Central | PyPI 9.0.0 | docs | GitHub 9.0.0
Resource Management - IoT Hub | PyPI 2.3.0 | docs | GitHub 2.3.0
Resource Management - IoT Hub Provisioning Services | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - KeyVault | PyPI 10.1.0 | docs | GitHub 10.1.0
Resource Management - Kubernetes Configuration | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Kusto | PyPI 3.0.0 | docs | GitHub 3.0.0
Resource Management - Lab Services | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Load Testing | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Log Analytics | PyPI 12.0.0, PyPI 13.0.0b5 | docs | GitHub 12.0.0, GitHub 13.0.0b5
Resource Management - Logic Apps | PyPI 10.0.0 | docs | GitHub 10.0.0
Resource Management - Logz | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Machine Learning Compute | PyPI 0.4.1 | docs | GitHub 0.4.1
Resource Management - Machine Learning Services | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Maintenance | PyPI 2.0.0, PyPI 2.1.0b1 | docs | GitHub 2.0.0, GitHub 2.1.0b1
Resource Management - Managed Identity | PyPI 6.1.0 | docs | GitHub 6.1.0
Resource Management - Managed Services | PyPI 6.0.0 | docs | GitHub 6.0.0
Resource Management - Management Groups | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Management Partner | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Maps | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Marketplace Ordering | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - Media Services | PyPI 10.1.0 | docs | GitHub 10.1.0
Resource Management - Mixed Reality | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Mobile Network | PyPI 1.0.0b3 | docs | GitHub 1.0.0b3
Resource Management - Monitor | PyPI 5.0.1 | docs | GitHub 5.0.1
Resource Management - Namespace Package | PyPI 3.0.2 | - | GitHub 3.0.2
Resource Management - NetApp | PyPI 9.0.1 | docs | GitHub 9.0.1
Resource Management - Network | PyPI 22.1.0 | docs | GitHub 22.1.0
Resource Management - Nginx | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Notification Hubs | PyPI 8.0.0 | docs | GitHub 8.0.0
Resource Management - Oep | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Operations Management | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Orbital | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Peering | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Policy Insights | PyPI 1.0.0, PyPI 1.1.0b3 | docs | GitHub 1.0.0, GitHub 1.1.0b3
Resource Management - Portal | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Power BI Dedicated | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Power BI Embedded | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Private DNS | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Purview | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Quantum | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Quota | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Rdbms | PyPI 10.1.0, PyPI 10.2.0b3 | docs | GitHub 10.1.0, GitHub 10.2.0b3
Resource Management - Recovery Services | PyPI 2.1.0 | docs | GitHub 2.1.0
Resource Management - Recovery Services Backup | PyPI 5.0.0, PyPI 5.1.0b2 | docs | GitHub 5.0.0, GitHub 5.1.0b2
Resource Management - Recovery Services Site Recovery | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Red Hat OpenShift | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - Redis | PyPI 14.0.0 | docs | GitHub 14.0.0
Resource Management - Redis Enterprise | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Region Move | PyPI 1.0.0b1 | - | GitHub 1.0.0b1
Resource Management - Relay | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - Reservations | PyPI 2.1.0 | docs | GitHub 2.1.0
Resource Management - Resource Connector | PyPI 1.0.0b2 | docs | GitHub 1.0.0b2
Resource Management - Resource Graph | PyPI 8.0.0 | docs | GitHub 8.0.0
Resource Management - Resource Health | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Resource Mover | PyPI 1.0.0, PyPI 1.1.0b2 | docs | GitHub 1.0.0, GitHub 1.1.0b2
Resource Management - Resources | PyPI 21.2.1 | docs | GitHub 21.2.1
Resource Management - Scheduler | PyPI 2.0.0, PyPI 7.0.0b1 | docs | GitHub 2.0.0
Resource Management - Scvmm | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Search | PyPI 8.0.0 | docs | GitHub 8.0.0
Resource Management - Security | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Security Insight | PyPI 1.0.0, PyPI 2.0.0b1 | docs | GitHub 1.0.0, GitHub 2.0.0b1
Resource Management - Serial Console | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Server Manager | PyPI 2.0.0 | - | GitHub 2.0.0
Resource Management - Service Bus | PyPI 8.1.0 | docs | GitHub 8.1.0
Resource Management - Service Fabric | PyPI 2.0.0 | docs | GitHub 2.0.0
Resource Management - Service Fabric Managed Clusters | PyPI 1.0.0, PyPI 2.0.0b2 | docs | GitHub 1.0.0, GitHub 2.0.0b2
Resource Management - Service Linker | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - SignalR | PyPI 1.1.0 | docs | GitHub 1.1.0
Resource Management - SQL | PyPI 3.0.1, PyPI 4.0.0b4 | docs | GitHub 3.0.1, GitHub 4.0.0b4
Resource Management - SQL Virtual Machine | PyPI 0.5.0 | docs | GitHub 0.5.0
Resource Management - Storage | PyPI 20.1.0 | docs | GitHub 20.1.0
Resource Management - Storage Cache | PyPI 1.3.0 | docs | GitHub 1.3.0
Resource Management - Storage Import Export | PyPI 0.1.0 | docs | GitHub 0.1.0
Resource Management - Storage Pool | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Storage Sync | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Stream Analytics | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Subscription | PyPI 3.1.1 | docs | GitHub 3.1.1
Resource Management - Support | PyPI 6.0.0 | docs | GitHub 6.0.0
Resource Management - Synapse | PyPI 2.0.0, PyPI 2.1.0b5 | docs | GitHub 2.0.0, GitHub 2.1.0b5
Resource Management - Test Base | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Resource Management - Time Series Insights | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Traffic Manager | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - VM Ware Cloud Simple | PyPI 0.2.0 | docs | GitHub 0.2.0
Resource Management - Web | PyPI 7.0.0 | docs | GitHub 7.0.0
Resource Management - Web PubSub | PyPI 1.0.0 | docs | GitHub 1.0.0
Resource Management - Work Load Monitor | PyPI 1.0.0b3 | docs | GitHub 1.0.0b3
Resource Management - Workloads | PyPI 1.0.0b1 | docs | GitHub 1.0.0b1
Service Management - Legacy | PyPI 0.20.7 | - | GitHub 0.20.7
Azure libraries for Python API reference
10/28/2022 • 2 minutes to read

Full reference for all services:


Python API browser >>>
We are piloting per-service reference sections, starting with Storage (blobs, files, queues). Please provide
feedback on this experience.
Try the Storage reference pilot >>>
Example: Use the Azure libraries to provision a resource group
10/28/2022 • 3 minutes to read

This example demonstrates how to use the Azure SDK management libraries in a Python script to provision a
resource group. (The equivalent Azure CLI command is given later in this article. If you prefer to use the Azure
portal, see Create resource groups.)
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create and activate a virtual environment for this project.
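
As a sketch, the virtual environment setup typically looks like the following in a bash-style shell (these are standard Python tooling commands, not Azure-specific ones):

```shell
python3 -m venv .venv            # create the environment (use "python" on Windows)
. .venv/bin/activate             # activate it; on Windows cmd: .venv\Scripts\activate
python -m pip install --upgrade pip
```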

2: Install the Azure library packages


Create a file named requirements.txt with the following contents:

azure-mgmt-resource>=18.0.0
azure-identity>=1.5.0

Be sure to use these versions of the libraries. Using older versions will result in errors such as
"'AzureCliCredential' object has no attribute 'signed_session'."
In a terminal or command prompt with the virtual environment activated, install the requirements:

pip install -r requirements.txt

3: Write code to provision a resource group


Create a Python file named provision_rg.py with the following code. The comments explain the details:
# Import the needed credential and management objects from the libraries.
import os

from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Obtain the management object for resources.
resource_client = ResourceManagementClient(credential, subscription_id)

# Provision the resource group.
#
# Within the ResourceManagementClient is an object named resource_groups,
# which is of class ResourceGroupsOperations, which contains methods like
# create_or_update.
#
# The second parameter to create_or_update here is technically a ResourceGroup
# object. You can create the object directly using ResourceGroup(location=LOCATION)
# or you can express the object as inline JSON as shown here. For details,
# see Inline JSON pattern for object arguments at
# https://docs.microsoft.com/azure/developer/python/azure-sdk-overview#inline-json-pattern-for-object-arguments.
rg_result = resource_client.resource_groups.create_or_update(
    "PythonAzureExample-rg",
    {
        "location": "centralus"
    }
)

# The return value is another ResourceGroup object with all the details of the
# new group. In this case the call is synchronous: the resource group has been
# provisioned by the time the call returns.
print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")

# To update the resource group, repeat the call with different properties, such
# as tags:
rg_result = resource_client.resource_groups.create_or_update(
    "PythonAzureExample-rg",
    {
        "location": "centralus",
        "tags": { "environment": "test", "department": "tech" }
    }
)

print(f"Updated resource group {rg_result.name} with tags")

# Optional lines to delete the resource group. begin_delete is asynchronous.
# poller = resource_client.resource_groups.begin_delete(rg_result.name)
# result = poller.result()

This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
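The inline-JSON pattern mentioned in the code comments can be illustrated with a plain-Python sketch (no Azure SDK required; `ResourceGroupParams` and `normalize` are hypothetical names used only for illustration, not SDK classes):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ResourceGroupParams:
    """Hypothetical stand-in for the SDK's ResourceGroup model class."""
    location: str
    tags: Optional[dict] = None

def normalize(params):
    """Accept either a plain dict (inline JSON) or a typed object,
    mirroring how the SDK's inline-JSON pattern treats the two forms
    as equivalent, and return a plain dict."""
    d = params if isinstance(params, dict) else asdict(params)
    return {k: v for k, v in d.items() if v is not None}
```

With this helper, `normalize({"location": "centralus"})` and `normalize(ResourceGroupParams("centralus"))` produce the same dictionary, which is the point of the pattern: you can pass either form.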

4: Run the script


python provision_rg.py

5: Verify the resource group


You can verify that the group exists through the Azure portal or the Azure CLI.
Azure portal: open the Azure portal, select Resource groups , and check that the group is listed. If the portal
was already open, use the Refresh command to update the list.
Azure CLI: run the following command:

az group show -n PythonAzureExample-rg

6: Clean up resources
az group delete -n PythonAzureExample-rg --no-wait

Run this command if you don't need to keep the resource group provisioned in this example. Resource groups
don't incur any ongoing charges in your subscription, but it's a good practice to clean up any group that you
aren't actively using. The --no-wait argument allows the command to return immediately instead of waiting for
the operation to finish.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group from
code.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script:

az group create -n PythonAzureExample-rg -l centralus

See also
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Example: Use the Azure libraries to list resource groups and resources
10/28/2022 • 3 minutes to read

This example demonstrates how to use the Azure SDK management libraries in a Python script to perform two
tasks:
List all the resource groups in an Azure subscription.
List resources within a specific resource group.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
Equivalent Azure CLI commands are given later in this article.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create and activate a virtual environment for this project.

2: Install the Azure library packages


Create a file named requirements.txt with the following contents:

azure-mgmt-resource>=18.0.0
azure-identity>=1.5.0

Be sure to use these versions of the libraries. Using older versions will result in errors such as
"'AzureCliCredential' object has no attribute 'signed_session'."
In a terminal or command prompt with the virtual environment activated, install the requirements:

pip install -r requirements.txt

3: Write code to work with resource groups


3a. List resource groups in a subscription
Create a Python file named list_groups.py with the following code. The comments explain the details:
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
import os

# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Obtain the management object for resources.
resource_client = ResourceManagementClient(credential, subscription_id)

# Retrieve the list of resource groups.
group_list = resource_client.resource_groups.list()

# Show the groups in formatted output.
column_width = 40

print("Resource Group".ljust(column_width) + "Location")
print("-" * (column_width * 2))

for group in list(group_list):
    print(f"{group.name:<{column_width}}{group.location}")

3b. List resources within a specific resource group


Create a Python file named list_resources.py with the following code. The comments explain the details.
By default, the code lists resources in "myResourceGroup". To use a different resource group, set the
RESOURCE_GROUP_NAME environment variable to the desired group name.
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
import os

# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Retrieve the resource group to use, defaulting to "myResourceGroup".
resource_group = os.getenv("RESOURCE_GROUP_NAME", "myResourceGroup")

# Obtain the management object for resources.
resource_client = ResourceManagementClient(credential, subscription_id)

# Retrieve the list of resources in the resource group. The expand argument
# includes additional properties in the output.
resource_list = resource_client.resources.list_by_resource_group(
    resource_group, expand="createdTime,changedTime")

# Show the resources in formatted output.
column_width = 36

print("Resource".ljust(column_width) + "Type".ljust(column_width)
    + "Create date".ljust(column_width) + "Change date".ljust(column_width))
print("-" * (column_width * 4))

for resource in list(resource_list):
    print(f"{resource.name:<{column_width}}{resource.type:<{column_width}}"
          f"{str(resource.created_time):<{column_width}}{str(resource.changed_time):<{column_width}}")

Authentication in the code


This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
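The fixed-width column formatting used by both scripts can be exercised on its own; this sketch factors it into a small helper (the helper name is illustrative, not part of the SDK):

```python
def format_columns(values, width=40):
    # Left-justify each value in a fixed-width column, the same effect as
    # f"{group.name:<{column_width}}{group.location}" in list_groups.py.
    return "".join(f"{str(v):<{width}}" for v in values)

header = format_columns(["Resource Group", "Location"])
separator = "-" * (40 * 2)
```

Each value is padded to the column width, so the second column always starts at a fixed offset regardless of the length of the first.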

4: Run the scripts


List all resource groups in the subscription:

python list_groups.py

List all resources in a resource group:

python list_resources.py

For reference: equivalent Azure CLI commands


The following Azure CLI command lists resource groups in a subscription using JSON output:
az group list

The following command lists resources within "myResourceGroup" in the centralus region (the location
argument is necessary to identify a specific data center):

az resource list --resource-group myResourceGroup --location centralus

See also
Example: Provision a resource group
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Example: Provision Azure Storage using the Azure libraries for Python
10/28/2022 • 5 minutes to read

In this article, you learn how to use the Azure management libraries in a Python script to provision a resource
group that contains an Azure Storage account and a Blob storage container. (Equivalent Azure CLI commands
are given later in this article. If you prefer to use the Azure portal, see Create an Azure storage account and
Create a blob container.)
After provisioning the resources, see Example: Use Azure Storage to use the Azure client libraries in Python
application code to upload a file to the Blob storage container.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create a service principal for local development, and create and activate a virtual environment for this
project.

2: Install the needed Azure library packages


1. Create a requirements.txt file that lists the management libraries used in this example:

azure-mgmt-resource
azure-mgmt-storage
azure-identity

2. In your terminal with the virtual environment activated, install the requirements:

pip install -r requirements.txt

3: Write code to provision storage resources


This section describes how to provision storage resources from Python code. If you prefer, you can also
provision resources through the Azure portal or through the equivalent Azure CLI commands.
Create a Python file named provision_blob.py with the following code. The comments explain the details:

import os, random

# Import the needed management objects from the libraries. The azure.common library
# is installed automatically with the other libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.storage import StorageManagementClient

# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Obtain the management object for resources.
resource_client = ResourceManagementClient(credential, subscription_id)

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-Storage-rg"
LOCATION = "centralus"

# Step 1: Provision the resource group.

rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{ "location": LOCATION })

print(f"Provisioned resource group {rg_result.name}")

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group

# Step 2: Provision the storage account, starting with a management object.

storage_client = StorageManagementClient(credential, subscription_id)

# This example uses the CLI profile credentials because we assume the script
# is being used to provision the resource in the same way the Azure CLI would be used.

# You can replace the storage account name with any unique name. A random number is used
# by default, but note that the name changes every time you run this script.
# The name must be 3-24 lower case letters and numbers only.
STORAGE_ACCOUNT_NAME = f"pythonazurestorage{random.randint(1,100000):05}"

# Check if the account name is available. Storage account names must be unique across
# Azure because they're used in URLs.
availability_result = storage_client.storage_accounts.check_name_availability(
{ "name": STORAGE_ACCOUNT_NAME }
)

if not availability_result.name_available:
print(f"Storage name {STORAGE_ACCOUNT_NAME} is already in use. Try another name.")
exit()

# The name is available, so provision the account.
poller = storage_client.storage_accounts.begin_create(RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME,
    {
        "location": LOCATION,
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"}
    }
)

# Long-running operations return a poller object; calling poller.result()
# waits for completion.
account_result = poller.result()
print(f"Provisioned storage account {account_result.name}")

# Step 3: Retrieve the account's primary access key and generate a connection string.
keys = storage_client.storage_accounts.list_keys(RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME)

print(f"Primary key for storage account: {keys.keys[0].value}")

conn_string = f"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName={STORAGE_ACCOUNT_NAME};AccountKey={keys.keys[0].value}"
print(f"Connection string: {conn_string}")

# Step 4: Provision the blob container in the account (this call is synchronous)
CONTAINER_NAME = "blob-container-01"
container = storage_client.blob_containers.create(RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME, CONTAINER_NAME, {})

# The fourth argument is a required BlobContainer object, but because we don't need
# any special values there, we just pass empty JSON.

print(f"Provisioned blob container {container.name}")

This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
StorageManagementClient (azure.mgmt.storage)
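Storage account names must be 3-24 lowercase letters and digits and globally unique across Azure. A small pure-Python sketch of generating and locally validating a candidate name (the function name is illustrative; the real uniqueness check is still `check_name_availability`):

```python
import random
import re

def make_candidate_name(prefix: str = "pythonazurestorage") -> str:
    """Generate a candidate storage account name with a random suffix and
    validate it against the naming rule: 3-24 lowercase letters and digits."""
    name = f"{prefix}{random.randint(1, 100000):05}"
    if not re.fullmatch(r"[a-z0-9]{3,24}", name):
        raise ValueError(f"{name!r} is not a valid storage account name")
    return name
```

Local validation catches rule violations early, but only the service-side availability check can confirm the name isn't already taken.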

4: Run the script


python provision_blob.py

The script will take a minute or two to complete.

5: Verify the resources


1. Open the Azure portal to verify that the resource group and storage account were provisioned as
expected. You may need to wait a minute and also select Show hidden types in the resource group to
see a storage account provisioned from a Python script:

2. Select the storage account, then select Data storage > Containers in the left-hand menu to verify that
the "blob-container-01" appears:
3. If you want to try using these provisioned resources from application code, continue with Example: Use
Azure Storage.
For an additional example of using the Azure Storage management library, see the Manage Python Storage
sample.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script:

cmd

rem Provision the resource group

az group create -n PythonAzureExample-Storage-rg -l centralus

rem Provision the storage account

az storage account create -g PythonAzureExample-Storage-rg -l centralus ^
    -n pythonazurestorage12345 --kind StorageV2 --sku Standard_LRS

rem Retrieve the connection string

az storage account show-connection-string -g PythonAzureExample-Storage-rg ^
    -n pythonazurestorage12345

rem Provision the blob container; NOTE: this command assumes you have an environment variable
rem named AZURE_STORAGE_CONNECTION_STRING with the connection string for the storage account.

set AZURE_STORAGE_CONNECTION_STRING=<connection_string>
az storage container create --account-name pythonazurestorage12345 -n blob-container-01

6: Clean up resources
Leave the resources in place if you want to follow the article Example: Use Azure Storage to use these resources
in app code.
Otherwise, run the following command to avoid ongoing charges in your subscription.

az group delete -n PythonAzureExample-Storage-rg --no-wait

You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.

See also
Example: Use Azure Storage
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Example: Access Azure Storage using the Azure libraries for Python
10/28/2022 • 5 minutes to read

This example demonstrates how to use the Azure client libraries in Python application code to upload a file to
a Blob storage container. The example assumes you have provisioned the resources shown in Example:
Provision Azure Storage.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create a service principal for local development, set environment variables for the service principal
(see below), and create and activate a virtual environment for this project.

2: Install library packages


1. In your requirements.txt file, add lines for the needed client library packages and save the file:

azure-storage-blob
azure-identity

2. In your terminal or command prompt, reinstall requirements:

pip install -r requirements.txt

3: Create a file to upload


Create a source file named sample-source.txt (as the code expects), with contents like the following:

Hello there, Azure Storage. I'm a friendly file ready to be stored in a blob.

4: Use blob storage from app code


The following sections (numbered 4a and 4b) demonstrate two means to access the blob container provisioned
through Example: Provision Azure Storage.
The first method (section 4a below) authenticates the app with DefaultAzureCredential as described in
Authenticate Azure hosted applications with DefaultAzureCredential. With this method you must first assign the
appropriate permissions to the app identity, which is the recommended practice.
The second method (section 4b below) uses a connection string to access the storage account directly. Although
this method seems simpler, it has two significant drawbacks:
A connection string inherently authenticates the connecting agent with the Storage account rather than
with individual resources within that account. As a result, a connection string grants broader
authorization than may be required.
A connection string contains an access key in plain text and therefore presents potential vulnerabilities if
it's improperly constructed or improperly secured. If such a connection string is exposed it can be used to
access a wide range of resources within the Storage account.
For these reasons, we recommend using the authentication method in production code.
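The blob service URL used in section 4a follows a fixed pattern. A minimal sketch of composing it from an account name (an illustrative helper, not an SDK function):

```python
def blob_service_url(account_name: str) -> str:
    # Blob service endpoints in the public cloud have the form
    # https://<account>.blob.core.windows.net
    return f"https://{account_name}.blob.core.windows.net"
```

This is the value you would assign to the AZURE_STORAGE_BLOB_URL environment variable in the next section.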
4a: Use blob storage with authentication
1. Create an environment variable named AZURE_STORAGE_BLOB_URL :

cmd

set AZURE_STORAGE_BLOB_URL=https://pythonazurestorage12345.blob.core.windows.net

Replace "pythonazurestorage12345" with the name of your specific storage account.


This AZURE_STORAGE_BLOB_URL environment variable is used only by this example and is not used by the
Azure libraries.
2. Create a file named use_blob_auth.py with the following code. The comments explain the steps.

import os
from azure.identity import DefaultAzureCredential

# Import the client object from the SDK library


from azure.storage.blob import BlobClient

credential = DefaultAzureCredential()

# Retrieve the storage blob service URL, which is of the form


# https://pythonsdkstorage12345.blob.core.windows.net/
storage_url = os.environ["AZURE_STORAGE_BLOB_URL"]

# Create the client object using the storage URL and the credential
blob_client = BlobClient(storage_url,
container_name="blob-container-01", blob_name="sample-blob.txt", credential=credential)

# Open a local file and upload its contents to Blob Storage


with open("./sample-source.txt", "rb") as data:
blob_client.upload_blob(data)

Reference links:
DefaultAzureCredential (azure.identity)
BlobClient (azure.storage.blob)
3. Attempt to run the code (which fails intentionally):

python use_blob_auth.py

4. Observe the error "This request is not authorized to perform this operation using this permission." The
error is expected because the local service principal that you're using does not yet have permission to
access the blob container.
5. Grant container permissions to the service principal using the Azure CLI command az role assignment
create (it's a long one!):
cmd

az role assignment create --assignee %AZURE_CLIENT_ID% ^
    --role "Storage Blob Data Contributor" ^
    --scope "/subscriptions/%AZURE_SUBSCRIPTION_ID%/resourceGroups/PythonAzureExample-Storage-rg/providers/Microsoft.Storage/storageAccounts/pythonazurestorage12345/blobServices/default/containers/blob-container-01"

The --scope argument identifies where this role assignment applies. In this example, you grant the
"Storage Blob Data Contributor" role to the specific container named "blob-container-01".
Replace pythonazurestorage12345 with the exact name of your storage account. You can also adjust the
name of the resource group and blob container, if necessary. If you use the wrong name, you see the
error, "Can not perform requested operation on nested resource. Parent resource
'pythonazurestorage12345' not found."
If needed, also replace PythonAzureExample-Storage-rg with the name of the resource group that contains
your storage account. The resource group shown here is what's used in Example: Provision Azure Storage.
The --scope argument in this command also uses the AZURE_CLIENT_ID and AZURE_SUBSCRIPTION_ID
environment variables, which you should already have set in your local environment for your service
principal by following Configure your local Python dev environment for Azure.
6. Wait a minute or two for the permissions to propagate, then run the code again to verify that it
now works. If you see the permissions error again, wait a little longer, then try the code again.
For more information on role assignments, see How to assign role permissions using the Azure CLI.
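Because role assignments take time to propagate, a production script might retry the upload rather than having a person wait and rerun. A hedged sketch of a generic retry helper (pure Python, not part of the Azure SDK; azure-core also ships its own retry policies for transport-level errors):

```python
import time

def retry(operation, attempts=5, delay=1.0, backoff=2.0, retryable=(PermissionError,)):
    """Call operation(), retrying on the given exception types with
    exponential backoff between attempts."""
    for attempt in range(attempts):
        try:
            return operation()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(delay)
            delay *= backoff
```

In this scenario, `operation` would wrap the `upload_blob` call and `retryable` would be the SDK's authorization error type rather than the `PermissionError` placeholder used here.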
4b: Use blob storage with a connection string
1. Create an environment variable named AZURE_STORAGE_CONNECTION_STRING , the value of which is the full
connection string for the storage account. (This environment variable is also used by various Azure CLI
commands.)
2. Create a Python file named use_blob_conn_string.py with the following code. The comments explain the
steps.

import os

# Import the client object from the SDK library


from azure.storage.blob import BlobClient

# Retrieve the connection string from an environment variable. Note that a connection
# string grants all permissions to the caller, making it less secure than obtaining a
# BlobClient object using credentials.
conn_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]

# Create the client object for the resource identified by the connection string,
# indicating also the blob container and the name of the specific blob we want.
blob_client = BlobClient.from_connection_string(conn_string,
container_name="blob-container-01", blob_name="sample-blob.txt")

# Open a local file and upload its contents to Blob Storage


with open("./sample-source.txt", "rb") as data:
blob_client.upload_blob(data)

3. Run the code:


python use_blob_conn_string.py

Again, although this method is simple, a connection string authorizes all operations in a storage account. With
production code it's better to use specific permissions as described in the previous section.
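A connection string is a semicolon-separated list of key=value pairs, and a short sketch of parsing one (with a placeholder key, not a real credential) makes the drawback concrete: the account key sits in the string in plain text.

```python
def parse_connection_string(conn: str) -> dict:
    # Split "Key1=value1;Key2=value2" into a dict. Values may themselves
    # contain '=' (account keys are base64), so split each pair only once.
    return dict(part.split("=", 1) for part in conn.split(";") if part)

sample = ("DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;"
          "AccountName=pythonazurestorage12345;AccountKey=PLACEHOLDER_KEY==")
fields = parse_connection_string(sample)
```

Anything that can read the string can recover `AccountKey` this easily, which is why credential-based access is preferred for production.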

5: Verify blob creation


After running the code of either method, go to the Azure portal, navigate into the blob container to verify that a
new blob exists named sample-blob.txt with the same contents as the sample-source.txt file:

6: Clean up resources
az group delete -n PythonAzureExample-Storage-rg --no-wait

Run this command if you don't need to keep the resources provisioned in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.

See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision a web app and deploy code
Example: Provision Azure Storage
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Example: Use the Azure libraries to provision and deploy a web app
10/28/2022 • 5 minutes to read

This example demonstrates how to use the Azure SDK management libraries in a Python script to provision a
web app on Azure App Service and deploy app code from a GitHub repository. (Equivalent Azure CLI commands
are given later in this article.)
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create a service principal for local development, and create and activate a virtual environment for this
project.

2: Install the needed Azure library packages


Create a file named requirements.txt with the following contents:

azure-mgmt-resource
azure-mgmt-web
azure-identity

In a terminal or command prompt with the virtual environment activated, install the requirements:

pip install -r requirements.txt

3: Fork the sample repository


Visit https://github.com/Azure-Samples/python-docs-hello-world and fork the repository into your own GitHub
account. You use a fork to ensure that you have permissions to deploy the repository to Azure.

Then create an environment variable named REPO_URL with the URL of your fork. The example code in the next
section depends on this environment variable:
cmd
set REPO_URL=<url_of_your_fork>
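The script in the next section reads REPO_URL with `os.environ[...]`, which raises a bare KeyError if the variable is missing. A sketch of a friendlier lookup (the helper name is illustrative, not part of the sample):

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable or fail with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set the {name} environment variable before running this script.")
    return value
```

Calling `require_env("REPO_URL")` at the top of the script turns a cryptic traceback into an actionable error message.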

4: Write code to provision and deploy a web app


Create a Python file named provision_deploy_web_app.py with the following code. The comments explain the
details:

import random, os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.web import WebSiteManagementClient

# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = 'PythonAzureExample-WebApp-rg'
LOCATION = "centralus"

# Step 1: Provision the resource group.


resource_client = ResourceManagementClient(credential, subscription_id)

rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{ "location": LOCATION })

print(f"Provisioned resource group {rg_result.name}")

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group

#Step 2: Provision the App Service plan, which defines the underlying VM for the web app.

# Names for the App Service plan and App Service. We use a random number with the
# latter to create a reasonably unique name. If you've already provisioned a
# web app and need to re-run the script, set the WEB_APP_NAME environment
# variable to that name instead.
SERVICE_PLAN_NAME = 'PythonAzureExample-WebApp-plan'
WEB_APP_NAME = os.environ.get("WEB_APP_NAME", f"PythonAzureExample-WebApp-{random.randint(1,100000):05}")

# Obtain the client object


app_service_client = WebSiteManagementClient(credential, subscription_id)

# Provision the plan; Linux is the default


poller = app_service_client.app_service_plans.begin_create_or_update(RESOURCE_GROUP_NAME,
    SERVICE_PLAN_NAME,
    {
        "location": LOCATION,
        "reserved": True,
        "sku": {"name": "B1"}
    }
)

plan_result = poller.result()

print(f"Provisioned App Service plan {plan_result.name}")

# Step 3: With the plan in place, provision the web app itself, which is the process that can host
# whatever code we want to deploy to it.
poller = app_service_client.web_apps.begin_create_or_update(RESOURCE_GROUP_NAME,
    WEB_APP_NAME,
    {
        "location": LOCATION,
        "server_farm_id": plan_result.id,
        "site_config": {
            "linux_fx_version": "python|3.8"
        }
    }
)

web_app_result = poller.result()

print(f"Provisioned web app {web_app_result.name} at {web_app_result.default_host_name}")

# Step 4: deploy code from a GitHub repository. For Python code, App Service on Linux runs
# the code inside a container that makes certain assumptions about the structure of the code.
# For more information, see How to configure Python apps,
# https://docs.microsoft.com/azure/app-service/containers/how-to-configure-python.
#
# The create_or_update_source_control method doesn't provision a web app. It only sets the
# source control configuration for the app. In this case we're simply pointing to
# a GitHub repository.
#
# You can call this method again to change the repo.

REPO_URL = os.environ["REPO_URL"]

poller = app_service_client.web_apps.begin_create_or_update_source_control(RESOURCE_GROUP_NAME,
    WEB_APP_NAME,
    {
        "location": "GitHub",
        "repo_url": REPO_URL,
        "branch": "master"
    }
)

sc_result = poller.result()

print(f"Set source control on web app to {sc_result.branch} branch of {sc_result.repo_url}")

This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
WebSiteManagementClient (azure.mgmt.web)
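The WEB_APP_NAME logic in the script above — honor an environment override if present, otherwise append a random suffix — is a reusable pattern worth isolating. A pure-Python sketch (function name is illustrative):

```python
import os
import random

def choose_web_app_name(prefix: str = "PythonAzureExample-WebApp") -> str:
    # Honor an explicit WEB_APP_NAME if set (useful when re-running the
    # script against an existing app); otherwise generate a reasonably
    # unique name with a random five-digit suffix.
    return os.environ.get("WEB_APP_NAME", f"{prefix}-{random.randint(1, 100000):05}")
```

Setting WEB_APP_NAME before a re-run keeps the script idempotent instead of provisioning a second app under a new random name.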

5: Run the script


python provision_deploy_web_app.py

6: Verify the web app deployment


1. Visit the deployed web site by running the following command:
az webapp browse -g PythonAzureExample-WebApp-rg -n PythonAzureExample-WebApp-12345

Replace "PythonAzureExample-WebApp-12345" with the specific name of your web app.


You should see "Hello, World!" in the browser.
2. Visit the Azure portal, select Resource groups , and check that "PythonAzureExample-WebApp-rg" is
listed. Then navigate into that group to verify that the expected resources exist, namely the App Service plan
and the App Service.

7: Clean up resources
az group delete -n PythonAzureExample-WebApp-rg --no-wait

Run this command if you don't need to keep the resources provisioned in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script:
cmd

az group create -l centralus -n PythonAzureExample-WebApp-rg

az appservice plan create -g PythonAzureExample-WebApp-rg -n PythonAzureExample-WebApp-plan --is-linux --sku B1

az webapp create -g PythonAzureExample-WebApp-rg -n PythonAzureExample-WebApp-12345 ^
    --plan PythonAzureExample-WebApp-plan --runtime "python|3.8"

rem You can use --deployment-source-url with the first create command. It is shown here
rem to match the sequence of the Python code.

az webapp create -g PythonAzureExample-WebApp-rg -n PythonAzureExample-WebApp-12345 ^
    --plan PythonAzureExample-WebApp-plan ^
    --deployment-source-url %REPO_URL% --runtime "python|3.8"

rem REPO_URL should be set to the URL of your forked repository.

See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision and use a MySQL database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Example: Use the Azure libraries to provision a database
10/28/2022 • 6 minutes to read

This example demonstrates how to use the Azure SDK management libraries in a Python script to provision an
Azure MySQL database. It also provides a simple script to query the database using the mysql-connector library
(not part of the Azure SDK). (Equivalent Azure CLI commands are given later in this article. If you prefer to use
the Azure portal, see Create a PostgreSQL server or Create a MariaDB server.)
You can use similar code to provision a PostgreSQL or MariaDB database.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create a service principal for local development, and create and activate a virtual environment for this
project.

2: Install the needed Azure library packages


Create a file named requirements.txt with the following contents:

azure-mgmt-resource
azure-mgmt-rdbms
azure-identity
mysql
mysql-connector

The azure-mgmt-resource and azure-mgmt-rdbms libraries provide the management clients used in this
example, and azure-identity provides the credential classes. The mysql and mysql-connector libraries (not
part of the Azure SDK) are used later in this article to query the database.
In a terminal or command prompt with the virtual environment activated, install the requirements:

pip install -r requirements.txt

NOTE
On Windows, attempting to install the mysql library into a 32-bit Python installation produces an error about the mysql.h file.
In this case, install a 64-bit version of Python and try again.
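If you aren't sure whether your interpreter is a 32-bit or 64-bit build, you can check directly; `struct.calcsize("P")` reports the pointer size of the running interpreter:

```python
import struct
import platform

# The size of a pointer ("P") distinguishes 32-bit from 64-bit builds.
bits = struct.calcsize("P") * 8
print(f"Python {platform.python_version()} ({bits}-bit)")
```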

3: Write code to provision the database


Create a Python file named provision_db.py with the following code. The comments explain the details.

import random, os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.rdbms.mysql import MySQLManagementClient
from azure.mgmt.rdbms.mysql.models import ServerForCreate, ServerPropertiesForDefaultCreate, ServerVersion

# Acquire a credential object using CLI-based authentication.


credential = AzureCliCredential()

# Retrieve subscription ID from environment variable


subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = 'PythonAzureExample-DB-rg'
LOCATION = "westus"

# Step 1: Provision the resource group.


resource_client = ResourceManagementClient(credential, subscription_id)

rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{ "location": LOCATION })

print(f"Provisioned resource group {rg_result.name}")

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group

# Step 2: Provision the database server

# We use a random number to create a reasonably unique database server name.


# If you've already provisioned a database and need to re-run the script, set
# the DB_SERVER_NAME environment variable to that name instead.
#
# Also set DB_ADMIN_NAME and DB_ADMIN_PASSWORD variables to avoid using the defaults.

db_server_name = os.environ.get("DB_SERVER_NAME", f"PythonAzureExample-MySQL-{random.randint(1,100000):05}")


db_admin_name = os.environ.get("DB_ADMIN_NAME", "azureuser")
db_admin_password = os.environ.get("DB_ADMIN_PASSWORD", "ChangePa$$w0rd24")

# Obtain the management client object


mysql_client = MySQLManagementClient(credential, subscription_id)

# Provision the server and wait for the result


poller = mysql_client.servers.begin_create(RESOURCE_GROUP_NAME,
db_server_name,
ServerForCreate(
location=LOCATION,
properties=ServerPropertiesForDefaultCreate(
administrator_login=db_admin_name,
administrator_login_password=db_admin_password,
version=ServerVersion.FIVE7
)
)
)

server = poller.result()

print(f"Provisioned MySQL server {server.name}")

# Step 3: Provision a firewall rule to allow the local workstation to connect

RULE_NAME = "allow_ip"
ip_address = os.environ["PUBLIC_IP_ADDRESS"]

# For the above code, create an environment variable named PUBLIC_IP_ADDRESS that
# contains your workstation's public IP address as reported by a site like
# https://whatismyipaddress.com/.

# Provision the rule and wait for completion
poller = mysql_client.firewall_rules.begin_create_or_update(RESOURCE_GROUP_NAME,
db_server_name, RULE_NAME,
{ "start_ip_address": ip_address, "end_ip_address": ip_address }
)

firewall_rule = poller.result()

print(f"Provisioned firewall rule {firewall_rule.name}")

# Step 4: Provision a database on the server

db_name = os.environ.get("DB_NAME", "example-db1")

poller = mysql_client.databases.begin_create_or_update(RESOURCE_GROUP_NAME,
db_server_name, db_name, {})

db_result = poller.result()

print(f"Provisioned MySQL database {db_result.name} with ID {db_result.id}")

You must create an environment variable named PUBLIC_IP_ADDRESS with your workstation's IP address for this
sample to run.
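For example, in bash you can set the variable before running the script (on Windows cmd, use `set` instead of `export`). The address below is a placeholder from the documentation range, not a real address:

```shell
# Replace 203.0.113.10 with your workstation's public IP address.
export PUBLIC_IP_ADDRESS=203.0.113.10
```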
This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate database management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
ResourceManagementClient (azure.mgmt.resource)
MySQLManagementClient (azure.mgmt.rdbms.mysql)
ServerForCreate (azure.mgmt.rdbms.mysql.models)
ServerPropertiesForDefaultCreate (azure.mgmt.rdbms.mysql.models)
ServerVersion (azure.mgmt.rdbms.mysql.models)
Also see:
- PostgreSQLManagementClient (azure.mgmt.rdbms.postgresql)
- MariaDBManagementClient (azure.mgmt.rdbms.mariadb)

4: Run the script


python provision_db.py

5: Insert a record and query the database


1. Create a file named use_db.py with the following code. Note the dependencies on the DB_SERVER_NAME ,
DB_ADMIN_NAME , and DB_ADMIN_PASSWORD environment variables, which should be populated with the
values from the provisioning code. This code works only for MySQL; you use different libraries for
PostgreSQL and MariaDB.
import os
import mysql.connector

db_server_name = os.environ["DB_SERVER_NAME"]
db_admin_name = os.getenv("DB_ADMIN_NAME", "azureuser")
db_admin_password = os.getenv("DB_ADMIN_PASSWORD", "ChangePa$$w0rd24")

db_name = os.getenv("DB_NAME", "example-db1")


db_port = os.getenv("DB_PORT", 3306)

connection = mysql.connector.connect(user=f"{db_admin_name}@{db_server_name}",
password=db_admin_password, host=f"{db_server_name}.mysql.database.azure.com",
port=db_port, database=db_name, ssl_ca='./BaltimoreCyberTrustRoot.crt.pem')

cursor = connection.cursor()

"""
# Alternate pyodbc connection; include pyodbc in requirements.txt
import pyodbc

driver = "{MySQL ODBC 5.3 UNICODE Driver}"

connect_string = f"DRIVER={driver};PORT=3306;SERVER={db_server_name}.mysql.database.azure.com;" \
f"DATABASE={db_name};UID={db_admin_name};PWD={db_admin_password}"

connection = pyodbc.connect(connect_string)
"""

table_name = "ExampleTable1"

sql_create = f"CREATE TABLE {table_name} (name varchar(255), code int)"

cursor.execute(sql_create)
print(f"Successfully created table {table_name}")

sql_insert = f"INSERT INTO {table_name} (name, code) VALUES ('Azure', 1)"

cursor.execute(sql_insert)
print("Successfully inserted data into table")

sql_select_values= f"SELECT * FROM {table_name}"

cursor.execute(sql_select_values)
row = cursor.fetchone()

while row:
print(str(row[0]) + " " + str(row[1]))
row = cursor.fetchone()

connection.commit()

All of this code uses the mysql.connector API. The only Azure-specific part is the full host domain for the
MySQL server (mysql.database.azure.com).
2. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server
from https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem and save the certificate file to
the same folder as the Python file. (This step is described on Obtain an SSL Certificate in the Azure
Database for MySQL documentation.)
3. Run the code:

python use_db.py
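The script above interpolates values into SQL strings with f-strings, which is fine for fixed sample values but unsafe for untrusted input. DB-API drivers such as mysql.connector accept parameter values separately from the SQL text (mysql.connector uses %s placeholders). The same pattern is sketched below with the standard library's sqlite3 module, which uses ? placeholders, so the example runs without a database server:

```python
import sqlite3

connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("CREATE TABLE ExampleTable1 (name TEXT, code INTEGER)")

# Pass values as a parameter tuple; the driver handles quoting and escaping.
cursor.execute("INSERT INTO ExampleTable1 (name, code) VALUES (?, ?)", ("Azure", 1))

rows = cursor.execute("SELECT * FROM ExampleTable1").fetchall()
print(rows)  # prints [('Azure', 1)]
```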
6: Clean up resources

az group delete -n PythonAzureExample-DB-rg --no-wait

Run this command if you don't need to keep the resources provisioned in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script. For a
PostgreSQL database, use az postgres commands; for MariaDB, use az mariadb commands.


az group create -l centralus -n PythonAzureExample-DB-rg

az mysql server create -l westus -g PythonAzureExample-DB-rg -n PythonAzureExample-MySQL-12345 ^
    -u azureuser -p ChangePa$$w0rd24 --sku-name B_Gen5_1

rem Change the IP address to the public IP address of your workstation, that is, the address shown
rem by a site like https://whatismyipaddress.com/.

az mysql server firewall-rule create -g PythonAzureExample-DB-rg --server PythonAzureExample-MySQL-12345 ^
    -n allow_ip --start-ip-address 10.11.12.13 --end-ip-address 10.11.12.13

az mysql db create -g PythonAzureExample-DB-rg --server PythonAzureExample-MySQL-12345 -n example-db1

See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision and deploy a web app
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Use the Azure libraries to provision a virtual machine
10/28/2022 • 6 minutes to read

This example demonstrates how to use the Azure SDK management libraries in a Python script to create a
resource group that contains a Linux virtual machine. (Equivalent Azure CLI commands are given later in this
article. If you prefer to use the Azure portal, see Create a Linux VM and Create a Windows VM.)
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.

NOTE
Provisioning a virtual machine through code is a multi-step process that involves provisioning a number of other
resources that the virtual machine requires. If you're simply running such code from the command line, it's much easier to
use the az vm create command, which automatically provisions these secondary resources with defaults for any setting
you choose to omit. The only required arguments are a resource group, VM name, image name, and login credentials. For
more information, see Quick Create a virtual machine with the Azure CLI.

1: Set up your local development environment


If you haven't already, follow all the instructions on Configure your local Python dev environment for Azure.
Be sure to create a service principal for local development, and create and activate a virtual environment for this
project.

2: Install the needed Azure library packages


1. Create a requirements.txt file that lists the management libraries used in this example:

azure-mgmt-resource
azure-mgmt-compute
azure-mgmt-network
azure-identity

2. In your terminal or command prompt with the virtual environment activated, install the management
libraries listed in requirements.txt:

pip install -r requirements.txt

3: Write code to provision a virtual machine


Create a Python file named provision_vm.py with the following code. The comments explain the details:

# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
import os

print(f"Provisioning a virtual machine...some operations might take a minute or two.")

# Acquire a credential object using CLI-based authentication.


credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.


subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Step 1: Provision a resource group

# Obtain the management object for resources, using the credentials from the CLI login.
resource_client = ResourceManagementClient(credential, subscription_id)

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-VM-rg"
LOCATION = "westus2"

# Provision the resource group.


rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{
"location": LOCATION
}
)

print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group

# Step 2: provision a virtual network

# A virtual machine requires a network interface client (NIC). A NIC requires


# a virtual network and subnet along with an IP address. Therefore we must provision
# these downstream components first, then provision the NIC, after which we
# can provision the VM.

# Network and IP address names


VNET_NAME = "python-example-vnet"
SUBNET_NAME = "python-example-subnet"
IP_NAME = "python-example-ip"
IP_CONFIG_NAME = "python-example-ip-config"
NIC_NAME = "python-example-nic"

# Obtain the management object for networks


network_client = NetworkManagementClient(credential, subscription_id)

# Provision the virtual network and wait for completion


poller = network_client.virtual_networks.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME,
{
"location": LOCATION,
"address_space": {
"address_prefixes": ["10.0.0.0/16"]
}
}
)

vnet_result = poller.result()

print(f"Provisioned virtual network {vnet_result.name} with address prefixes {vnet_result.address_space.address_prefixes}")

# Step 3: Provision the subnet and wait for completion


poller = network_client.subnets.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME, SUBNET_NAME,
{ "address_prefix": "10.0.0.0/24" }
)
subnet_result = poller.result()

print(f"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}")

# Step 4: Provision an IP address and wait for completion


poller = network_client.public_ip_addresses.begin_create_or_update(RESOURCE_GROUP_NAME,
IP_NAME,
{
"location": LOCATION,
"sku": { "name": "Standard" },
"public_ip_allocation_method": "Static",
"public_ip_address_version" : "IPV4"
}
)

ip_address_result = poller.result()

print(f"Provisioned public IP address {ip_address_result.name} with address {ip_address_result.ip_address}")

# Step 5: Provision the network interface client


poller = network_client.network_interfaces.begin_create_or_update(RESOURCE_GROUP_NAME,
NIC_NAME,
{
"location": LOCATION,
"ip_configurations": [ {
"name": IP_CONFIG_NAME,
"subnet": { "id": subnet_result.id },
"public_ip_address": {"id": ip_address_result.id }
}]
}
)

nic_result = poller.result()

print(f"Provisioned network interface client {nic_result.name}")

# Step 6: Provision the virtual machine

# Obtain the management object for virtual machines


compute_client = ComputeManagementClient(credential, subscription_id)

VM_NAME = "ExampleVM"
USERNAME = "azureuser"
PASSWORD = "ChangePa$$w0rd24"

print(f"Provisioning virtual machine {VM_NAME}; this operation might take a few minutes.")

# Provision the VM with an Ubuntu Server 16.04 LTS image on a Standard DS1 v2
# size, using the network interface provisioned previously.

poller = compute_client.virtual_machines.begin_create_or_update(RESOURCE_GROUP_NAME, VM_NAME,


{
"location": LOCATION,
"storage_profile": {
"image_reference": {
"publisher": 'Canonical',
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"hardware_profile": {
"vm_size": "Standard_DS1_v2"
},
"os_profile": {
"computer_name": VM_NAME,
"admin_username": USERNAME,
"admin_password": PASSWORD
},
"network_profile": {
"network_interfaces": [{
"id": nic_result.id,
}]
}
}
)

vm_result = poller.result()

print(f"Provisioned virtual machine {vm_result.name}")

This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
NetworkManagementClient (azure.mgmt.network)
ComputeManagementClient (azure.mgmt.compute)

4: Run the script


python provision_vm.py

The provisioning process takes a few minutes to complete.

5: Verify the resources


Open the Azure portal, navigate to the "PythonAzureExample-VM-rg" resource group, and note the virtual
machine, virtual disk, network security group, public IP address, network interface, and virtual network:
For reference: equivalent Azure CLI commands

rem Provision the resource group

az group create -n PythonAzureExample-VM-rg -l centralus

rem Provision a virtual network and subnet

az network vnet create -g PythonAzureExample-VM-rg -n python-example-vnet ^
    --address-prefix 10.0.0.0/16 --subnet-name python-example-subnet ^
    --subnet-prefix 10.0.0.0/24

rem Provision a public IP address

az network public-ip create -g PythonAzureExample-VM-rg -n python-example-ip ^
    --allocation-method Dynamic --version IPv4

rem Provision a network interface client

az network nic create -g PythonAzureExample-VM-rg --vnet-name python-example-vnet ^
    --subnet python-example-subnet -n python-example-nic ^
    --public-ip-address python-example-ip

rem Provision the virtual machine

az vm create -g PythonAzureExample-VM-rg -n ExampleVM -l "centralus" ^
    --nics python-example-nic --image UbuntuLTS ^
    --admin-username azureuser --admin-password ChangePa$$w0rd24

6: Clean up resources

az group delete -n PythonAzureExample-VM-rg --no-wait

Run this command if you don't need to keep the resources created in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.

See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
The following resources contain more comprehensive examples using Python to create a virtual machine:
Create and manage Windows VMs in Azure using Python. You can use this example to create Linux VMs by
changing the storage_profile parameter.
Azure Virtual Machines Management Samples - Python (GitHub). The sample demonstrates additional
management operations like starting and restarting a VM, stopping and deleting a VM, increasing the disk
size, and managing data disks.
Use Azure Managed Disks with the Azure libraries (SDK) for Python
10/28/2022 • 3 minutes to read

Azure Managed Disks provide simplified disk management, enhanced scalability, and better security, without
requiring you to work directly with storage accounts.
You use the azure-mgmt-compute library to administer Managed Disks. (For an example of provisioning a virtual
machine with the azure-mgmt-compute library, see Example - Provision a virtual machine.)

Standalone Managed Disks


You can create standalone Managed Disks in a number of ways as illustrated in the following sections.
Create an empty Managed Disk

from azure.mgmt.compute.models import DiskCreateOption

poller = compute_client.disks.begin_create_or_update(
'my_resource_group',
'my_disk_name',
{
'location': 'eastus',
'disk_size_gb': 20,
'creation_data': {
'create_option': DiskCreateOption.empty
}
}
)
disk_resource = poller.result()

Create a Managed Disk from blob storage

from azure.mgmt.compute.models import DiskCreateOption

poller = compute_client.disks.begin_create_or_update(
'my_resource_group',
'my_disk_name',
{
'location': 'eastus',
'creation_data': {
'create_option': DiskCreateOption.import_enum,
'source_uri': 'https://bg09.blob.core.windows.net/vm-images/non-existent.vhd'
}
}
)
disk_resource = poller.result()

Create a Managed Disk image from blob storage


from azure.mgmt.compute.models import DiskCreateOption

poller = compute_client.images.begin_create_or_update(
'my_resource_group',
'my_image_name',
{
'location': 'eastus',
'storage_profile': {
'os_disk': {
'os_type': 'Linux',
'os_state': "Generalized",
'blob_uri': 'https://bg09.blob.core.windows.net/vm-images/non-existent.vhd',
'caching': "ReadWrite",
}
}
}
)
image_resource = poller.result()

Create a Managed Disk from your own image

from azure.mgmt.compute.models import DiskCreateOption

# If you don't know the id, do a 'get' like this to obtain it
managed_disk = compute_client.disks.get('my_resource_group', 'myImageDisk')

poller = compute_client.disks.begin_create_or_update(
'my_resource_group',
'my_disk_name',
{
'location': 'eastus',
'creation_data': {
'create_option': DiskCreateOption.copy,
'source_resource_id': managed_disk.id
}
}
)

disk_resource = poller.result()

Virtual machine with Managed Disks


You can create a Virtual Machine with an implicit Managed Disk for a specific disk image, which relieves you
from specifying all the details.
A Managed Disk is created implicitly when creating a VM from an OS image in Azure. In the storage_profile
parameter, os_disk is optional and you don't have to create a storage account as a required precondition to
create a virtual machine.

storage_profile = azure.mgmt.compute.models.StorageProfile(
image_reference = azure.mgmt.compute.models.ImageReference(
publisher='Canonical',
offer='UbuntuServer',
sku='16.04-LTS',
version='latest'
)
)

For a complete example on how to create a virtual machine using the Azure management libraries, for Python,
see Example - Provision a virtual machine.
You can also create a storage_profile from your own image:

# If you don't know the id, do a 'get' like this to obtain it
image = compute_client.images.get('my_resource_group', 'myImageDisk')

storage_profile = azure.mgmt.compute.models.StorageProfile(
image_reference = azure.mgmt.compute.models.ImageReference(
id = image.id
)
)

You can easily attach a previously provisioned Managed Disk:

from azure.mgmt.compute.models import DiskCreateOptionTypes

vm = compute_client.virtual_machines.get(
'my_resource_group',
'my_vm'
)
managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')

vm.storage_profile.data_disks.append({
'lun': 12, # You choose the value, depending on what is available for you
'name': managed_disk.name,
'create_option': DiskCreateOptionTypes.attach,
'managed_disk': {
'id': managed_disk.id
}
})

async_update = compute_client.virtual_machines.begin_create_or_update(
'my_resource_group',
vm.name,
vm,
)
async_update.wait()

Virtual machine scale sets with Managed Disks


Before Managed Disks, you needed to create a storage account manually for all the VMs you wanted inside your
scale set, and then use the list parameter vhd_containers to provide all the storage account names to the
Scale Set REST API. (For a migration guide, see Convert a scale set template to a managed disk scale set
template.)
Because you don't need to manage storage accounts with Azure Managed Disks, your storage_profile can now
be exactly the same as the one used in VM creation:

'storage_profile': {
'image_reference': {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04-LTS",
"version": "latest"
}
},

The full sample is as follows:


naming_infix = "PyTestInfix"

vmss_parameters = {
'location': 'eastus',
"overprovision": True,
"upgrade_policy": {
"mode": "Manual"
},
'sku': {
'name': 'Standard_A1',
'tier': 'Standard',
'capacity': 5
},
'virtual_machine_profile': {
'storage_profile': {
'image_reference': {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04-LTS",
"version": "latest"
}
},
'os_profile': {
'computer_name_prefix': naming_infix,
'admin_username': 'Foo12',
'admin_password': 'BaR@123!!!!',
},
'network_profile': {
'network_interface_configurations' : [{
'name': naming_infix + 'nic',
"primary": True,
'ip_configurations': [{
'name': naming_infix + 'ipconfig',
'subnet': {
'id': subnet.id
}
}]
}]
}
}
}

# Create VMSS test


result_create = compute_client.virtual_machine_scale_sets.begin_create_or_update(
'my_resource_group',
'my_scale_set',
vmss_parameters,
)
vmss_result = result_create.result()

Other operations with Managed Disks


Resizing a Managed Disk

managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')


managed_disk.disk_size_gb = 25

async_update = compute_client.disks.begin_create_or_update(
'my_resource_group',
'myDisk',
managed_disk
)
async_update.wait()

Update the storage account type of the Managed Disks

from azure.mgmt.compute.models import StorageAccountTypes

managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')


managed_disk.account_type = StorageAccountTypes.standard_lrs

async_update = compute_client.disks.begin_create_or_update(
'my_resource_group',
'myDisk',
managed_disk
)
async_update.wait()

Create an image from blob storage

async_create_image = compute_client.images.begin_create_or_update(
'my_resource_group',
'myImage',
{
'location': 'westus',
'storage_profile': {
'os_disk': {
'os_type': 'Linux',
'os_state': "Generalized",
'blob_uri': 'https://bg09.blob.core.windows.net/vm-images/non-existent.vhd',
'caching': "ReadWrite",
}
}
}
)
image = async_create_image.result()

Create a snapshot of a Managed Disk that is currently attached to a virtual machine

managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')

async_snapshot_creation = compute_client.snapshots.begin_create_or_update(
'my_resource_group',
'mySnapshot',
{
'location': 'westus',
'creation_data': {
'create_option': 'Copy',
'source_uri': managed_disk.id
}
}
)
snapshot = async_snapshot_creation.result()

See also
Example: Provision a virtual machine
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision and use a MySQL database
Complete a short survey about the Azure SDK for Python
Configure logging in the Azure libraries for Python
10/28/2022 • 5 minutes to read

Azure libraries for Python that are based on azure.core provide logging output using the standard Python
logging library.
The general process to work with logging is as follows:
1. Acquire the logging object for the desired library and set the logging level.
2. Register a handler for the logging stream.
3. To include HTTP information, pass a logging_enable=True parameter to a client object constructor, a
credential object constructor, or to a specific method.
Details are provided in the remaining sections of this article.
As a general rule, the best resource for understanding logging usage within the libraries is to browse the SDK
source code at github.com/Azure/azure-sdk-for-python. We encourage you to clone this repository locally so
you can easily search for details when needed, as the following sections suggest.

Set logging levels


import logging

# ...

# Acquire the logger for a library (azure.mgmt.resource in this example)


logger = logging.getLogger('azure.mgmt.resource')

# Set the desired logging level


logger.setLevel(logging.DEBUG)

This example acquires the logger for the azure.mgmt.resource library, then sets the logging level to
logging.DEBUG .
You can call logger.setLevel at any time to change the logging level for different segments of code.
To set a level for a different library, use that library's name in the logging.getLogger call. For example, the azure-
eventhubs library provides a logger named azure.eventhubs , the azure-storage-queue library provides a logger
named azure.storage.queue , and so on. (The SDK source code frequently uses the statement
logging.getLogger(__name__) , which acquires a logger using the name of the containing module.)

You can also use more general namespaces. For example,

import logging

# Set the logging level for all azure-storage-* libraries


logger = logging.getLogger('azure.storage')
logger.setLevel(logging.INFO)

# Set the logging level for all azure-* libraries


logger = logging.getLogger('azure')
logger.setLevel(logging.ERROR)

Note that the azure logger is used by some libraries instead of a specific logger. For example, the azure-
storage-blob library uses the azure logger.
You can use the logger.isEnabledFor method to check whether any given logging level is enabled:

print(f"Logger enabled for ERROR={logger.isEnabledFor(logging.ERROR)}, " \
f"WARNING={logger.isEnabledFor(logging.WARNING)}, " \
f"INFO={logger.isEnabledFor(logging.INFO)}, " \
f"DEBUG={logger.isEnabledFor(logging.DEBUG)}")
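Because Python loggers form a dot-separated hierarchy, a level set on a parent logger such as azure applies to child loggers like azure.storage.blob unless the child sets its own level. A quick standalone check of this inheritance:

```python
import logging

parent = logging.getLogger("azure")
child = logging.getLogger("azure.storage.blob")

parent.setLevel(logging.ERROR)

# The child has no level of its own, so it inherits the parent's effective level.
print(child.getEffectiveLevel() == logging.ERROR)  # prints True
```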

Logging levels are the same as the standard logging library levels. The following table describes the general use
of these logging levels in the Azure libraries for Python:

LOGGING LEVEL               TYPICAL USE

logging.ERROR               Failures where the application is unlikely to recover (such as
                            out of memory).

logging.WARNING (default)   A function fails to perform its intended task (but not when
                            the function can recover, such as retrying a REST API call).
                            Functions typically log a warning when raising exceptions.
                            The warning level automatically enables the error level.

logging.INFO                Function operates normally or a service call is canceled. Info
                            events typically include requests, responses, and headers.
                            The info level automatically enables the error and warning
                            levels.

logging.DEBUG               Detailed information that is commonly used for
                            troubleshooting and includes a stack trace for exceptions.
                            The debug level automatically enables the info, warning, and
                            error levels. CAUTION: If you also set logging_enable=True,
                            the debug level includes sensitive information such as
                            account keys in headers and other credentials. Be sure to
                            protect these logs to avoid compromising security.

logging.NOTSET              Disable all logging.
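These names map to the standard library's numeric values, which is why enabling one level also enables everything more severe than it:

```python
import logging

# Each level name is an integer constant; higher numbers are more severe.
levels = {name: getattr(logging, name) for name in ("DEBUG", "INFO", "WARNING", "ERROR")}
print(levels)  # prints {'DEBUG': 10, 'INFO': 20, 'WARNING': 30, 'ERROR': 40}
```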

Library-specific logging level behavior


The exact logging behavior at each level depends on the library in question. Some libraries, such as
azure.eventhub, perform extensive logging whereas other libraries do very little.
The best way to examine the exact logging for a library is to search for the logging levels in the Azure SDK for
Python source code:
1. In the repository folder, navigate into the sdk folder, then navigate into the folder for the specific service
of interest.
2. In that folder, search for any of the following strings:
_LOGGER.error
_LOGGER.warning
_LOGGER.info
_LOGGER.debug

Register a log stream handler


To capture logging output, you must register at least one log stream handler in your code:

import logging
import sys

# Direct logging output to stdout. Without adding a handler,
# no logging output is visible.
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

This example registers a handler that directs log output to stdout. You can use other types of handlers, as described under logging.handlers in the Python documentation, or use the standard logging.basicConfig method.
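For example, a minimal basicConfig setup (the format string here is only illustrative) attaches a stream handler to the root logger, which also receives records that propagate up from the 'azure' logger:

```python
import logging

# basicConfig registers a StreamHandler (stderr by default) on the root
# logger and sets its level; records from child loggers such as 'azure'
# propagate up to this handler.
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s")

# The Azure logger's own level can still be adjusted independently.
logging.getLogger('azure').setLevel(logging.INFO)
```

basicConfig is convenient for scripts; for finer control (multiple handlers, per-handler formatting), register handlers explicitly as shown above.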

Enable HTTP logging for a client object or operation


By default, logging within the Azure libraries does not include any HTTP information. To include HTTP
information in log output (as DEBUG level), you must specifically pass logging_enable=True to a client or
credential object constructor or to a specific method.
CAUTION : HTTP logging can reveal sensitive information such as account keys in headers and other credentials. Be sure to protect these logs to avoid compromising security.
Enable HTTP logging for a client object (DEBUG level)

from azure.storage.blob import BlobClient
from azure.identity import DefaultAzureCredential

# Enable HTTP logging on the client object when using DEBUG level
# endpoint is the Blob storage URL.
client = BlobClient(endpoint, DefaultAzureCredential(), logging_enable=True)

Enabling HTTP logging for a client object enables logging for all operations invoked through that object.
Enable HTTP logging for a credential object (DEBUG level)

from azure.storage.blob import BlobClient
from azure.identity import DefaultAzureCredential

# Enable HTTP logging on the credential object when using DEBUG level
credential = DefaultAzureCredential(logging_enable=True)

# endpoint is the Blob storage URL.
client = BlobClient(endpoint, credential)

Enabling HTTP logging for a credential object enables logging for all operations that use that credential, but not for operations on a client object that don't involve authentication.
Enable logging for an individual method (DEBUG level)

from azure.storage.blob import BlobClient
from azure.identity import DefaultAzureCredential

# endpoint is the Blob storage URL.
client = BlobClient(endpoint, DefaultAzureCredential())

# Enable HTTP logging for only this operation when using DEBUG level
client.create_container("container01", logging_enable=True)

Example logging output


The following code is the same as that shown in Example: Use a storage account, with DEBUG and HTTP logging enabled:

import os
import sys
import logging
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)

# Set the logging level for the azure.storage.blob library
logger = logging.getLogger('azure.storage.blob')
logger.setLevel(logging.DEBUG)

# Direct logging output to stdout. Without adding a handler,
# no logging output is visible.
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

credential = DefaultAzureCredential()
storage_url = os.environ["AZURE_STORAGE_BLOB_URL"]

# Enable logging on the client object
blob_client = BlobClient(storage_url, container_name="blob-container-01",
                         blob_name="sample-blob.txt", credential=credential)

with open("./sample-source.txt", "rb") as data:
    blob_client.upload_blob(data, logging_enable=True)

The logging output is as follows:

Request URL: 'https://pythonsdkstorage12345.blob.core.windows.net/blob-container-01/sample-blob.txt'
Request method: 'PUT'
Request headers:
'Content-Type': 'application/octet-stream'
'Content-Length': '79'
'x-ms-version': '2019-07-07'
'x-ms-blob-type': 'BlockBlob'
'If-None-Match': '*'
'x-ms-date': 'Mon, 01 Jun 2020 22:54:14 GMT'
'x-ms-client-request-id': 'd081f88e-a45a-11ea-b9eb-0c5415dfd03a'
'User-Agent': 'azsdk-python-storage-blob/12.3.1 Python/3.8.3 (Windows-10-10.0.18362-SP0)'
'Authorization': '*****'
Request body:
b"Hello there, Azure Storage. I'm a friendly file ready to be stored in a blob.\r\n"
Response status: 201
Response headers:
'Content-Length': '0'
'Content-MD5': 'kvMIzjEi6O8EqTVnZJNakQ=='
'Last-Modified': 'Mon, 01 Jun 2020 22:54:14 GMT'
'ETag': '"0x8D8067EB52FF7BC"'
'Server': 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0'
'x-ms-request-id': '5df479b1-f01e-00d0-5b67-382916000000'
'x-ms-client-request-id': 'd081f88e-a45a-11ea-b9eb-0c5415dfd03a'
'x-ms-version': '2019-07-07'
'x-ms-content-crc64': 'QmecNePSHnY='
'x-ms-request-server-encrypted': 'true'
'Date': 'Mon, 01 Jun 2020 22:54:14 GMT'
Response content:
How to configure proxies for the Azure libraries
10/28/2022 • 2 minutes to read

A proxy server URL has the form http[s]://[username:password@]<ip_address_or_domain>:<port>/ , where the username:password combination is optional.
You can then configure a proxy globally by using environment variables, or you can specify a proxy by passing
an argument named proxies to an individual client constructor or operation method.

Global configuration
To configure a proxy globally for your script or app, define HTTP_PROXY or HTTPS_PROXY environment variables
with the server URL. These variables work with any version of the Azure libraries.
These environment variables are ignored if you pass the parameter use_env_settings=False to a client object
constructor or operation method.
From Python code

import os
os.environ["HTTP_PROXY"] = "http://10.10.1.10:1180"

# Alternate URL and variable forms:
# os.environ["HTTP_PROXY"] = "http://username:password@10.10.1.10:1180"
# os.environ["HTTPS_PROXY"] = "http://10.10.1.10:1180"
# os.environ["HTTPS_PROXY"] = "http://username:password@10.10.1.10:1180"

From the CLI


cmd

rem Non-authenticated HTTP server:
set HTTP_PROXY=http://10.10.1.10:1180

rem Authenticated HTTP server:
set HTTP_PROXY=http://username:password@10.10.1.10:1180

rem Non-authenticated HTTPS server:
set HTTPS_PROXY=http://10.10.1.10:1180

rem Authenticated HTTPS server:
set HTTPS_PROXY=http://username:password@10.10.1.10:1180
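In bash (an assumed shell here; the addresses are the same placeholders as above), the equivalents use export:

```shell
# Non-authenticated HTTP server:
export HTTP_PROXY=http://10.10.1.10:1180

# Authenticated HTTPS server:
export HTTPS_PROXY=http://username:password@10.10.1.10:1180
```

Exported variables apply to the current shell session and any processes it launches; add them to a shell profile to make them persistent.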

Per-client or per-method configuration


To configure a proxy for a specific client object or operation method, specify a proxy server with an argument
named proxies .
For example, the following code from the article Example: use Azure storage specifies an HTTPS proxy with user credentials in the BlobClient constructor. In this case, the object comes from the azure.storage.blob library, which is based on azure.core.
from azure.identity import DefaultAzureCredential

# Import the client object from the SDK library
from azure.storage.blob import BlobClient

credential = DefaultAzureCredential()

storage_url = "your_url"

blob_client = BlobClient(storage_url, container_name="blob-container-01",
                         blob_name="sample-blob.txt", credential=credential,
                         proxies={ "https": "https://username:password@10.10.1.10:1180" })

# Other forms that the proxy URL might take:
# proxies={ "http": "http://10.10.1.10:1180" }
# proxies={ "http": "http://username:password@10.10.1.10:1180" }
# proxies={ "https": "https://10.10.1.10:1180" }
Multi-cloud: Connect to all regions with the Azure libraries for Python
10/28/2022 • 2 minutes to read

You can use the Azure libraries for Python to connect to all regions where Azure is available.
By default, the Azure libraries are configured to connect to the global Azure cloud.

Using pre-defined sovereign cloud constants


Pre-defined sovereign cloud constants are provided by the AzureAuthorityHosts module of the azure.identity
library:
AZURE_PUBLIC_CLOUD
AZURE_CHINA
AZURE_GOVERNMENT

To use a definition, import the appropriate constant from azure.identity.AzureAuthorityHosts and apply it when
creating client objects.
When using DefaultAzureCredential , as shown in the following example, you can specify the cloud by using the
appropriate value from azure.identity.AzureAuthorityHosts .

import os
from msrestazure.azure_cloud import AZURE_CHINA_CLOUD as CLOUD
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from azure.identity import DefaultAzureCredential, AzureAuthorityHosts

# Assumes the subscription ID and tenant ID to use are in the AZURE_SUBSCRIPTION_ID and
# AZURE_TENANT_ID environment variables
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]

# When using sovereign domains (that is, any cloud other than AZURE_PUBLIC_CLOUD),
# you must use an authority with DefaultAzureCredential.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_CHINA)

resource_client = ResourceManagementClient(
credential, subscription_id,
base_url=CLOUD.endpoints.resource_manager,
credential_scopes=[CLOUD.endpoints.resource_manager + "/.default"])

subscription_client = SubscriptionClient(
credential,
base_url=CLOUD.endpoints.resource_manager,
credential_scopes=[CLOUD.endpoints.resource_manager + "/.default"])

Using your own cloud definition


The following code uses get_cloud_from_metadata_endpoint with the Azure Resource Manager endpoint for a
private cloud (such as one built on Azure Stack):
import os
from msrestazure.azure_cloud import get_cloud_from_metadata_endpoint
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from azure.identity import DefaultAzureCredential
from azure.profiles import KnownProfiles

# Assumes the subscription ID and tenant ID to use are in the AZURE_SUBSCRIPTION_ID and
# AZURE_TENANT_ID environment variables
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]

stack_cloud = get_cloud_from_metadata_endpoint("https://contoso-azurestack-arm-endpoint.com")

# When using a private cloud, you must use an authority with DefaultAzureCredential.
# The active_directory endpoint should be a URL like https://login.microsoftonline.com.
credential = DefaultAzureCredential(authority=stack_cloud.endpoints.active_directory)

resource_client = ResourceManagementClient(
credential, subscription_id,
base_url=stack_cloud.endpoints.resource_manager,
profile=KnownProfiles.v2019_03_01_hybrid,
credential_scopes=[stack_cloud.endpoints.active_directory_resource_id + "/.default"])

subscription_client = SubscriptionClient(
credential,
base_url=stack_cloud.endpoints.resource_manager,
profile=KnownProfiles.v2019_03_01_hybrid,
credential_scopes=[stack_cloud.endpoints.active_directory_resource_id + "/.default"])
Azure libraries for Python API reference
10/28/2022 • 2 minutes to read

Full reference for all services:


Python API browser >>>
We are piloting per-service reference sections, starting with Storage (blobs, files, queues). Please provide
feedback on this experience.
Try the Storage reference pilot >>>
Hosting Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various app hosting options on Azure:
Serverless hosting:
Create a function in Azure using the Azure CLI that responds to HTTP requests
Connect Azure Functions to Azure Storage using command line tools
Create an Azure Functions project using Visual Studio Code
Connect Azure Functions to Azure Storage using Visual Studio Code
Web app hosting and monitoring:
Create a Python app in Azure App Service on Linux
Configure a Linux Python app for Azure App Service
Set up Azure Monitor for your Python application
Container hosting:
Deploy an Azure Kubernetes Service cluster using the Azure CLI
Deploy a container instance in Azure using the Azure CLI
Create your first Service Fabric container application on Linux
Batch jobs:
Use Python API to run an Azure Batch job
Tutorial: Run a parallel workload with Azure Batch using the Python API
Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
Virtual machines:
Create a Linux virtual machine with the Azure CLI
Data solutions for Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various data solutions on Azure.

SQL databases
PostgreSQL:
Use Python to connect and query data in Azure Database for PostgreSQL
Run a Python (Django or Flask) web app with PostgreSQL in Azure App Service
MySQL:
Use Python to connect and query data with Azure Database for MySQL
Azure SQL:
Use Python to query an Azure SQL database
MariaDB:
How to connect applications to Azure Database for MariaDB

Tables, blobs, files, NoSQL


Tables and NoSQL:
Build an Azure Cosmos DB for Table app with Python
Build a Python application using an Azure Cosmos DB for NoSQL account
Build a Cassandra app with Python SDK and Azure Cosmos DB
Create a graph database in Azure Cosmos DB using Python and the Azure portal
Build a Python app using Azure Cosmos DB for MongoDB
Blob and file storage:
Manage Azure Storage blobs with Python
Develop for Azure Files with Python
Redis Cache:
Create a Python app that uses Azure Cache for Redis

Big data and analytics


Big data analytics (Azure Data Lake Analytics):
Manage Azure Data Lake Analytics using Python
Develop U-SQL with Python for Azure Data Lake Analytics
Big data orchestration (Azure Data Factory):
Create a data factory and pipeline using Python
Transform data by running a Python activity in Azure Databricks
Big data streaming and event ingestion (Azure Event Hubs):
Send events to or receive events from event hubs by using Python
Event Hubs Capture walkthrough: Python
Capture Event Hubs data in Azure Storage and read it by using Python
Hadoop (Azure HDInsight):
Use Spark & Hive Tools for Visual Studio Code
Spark-based analytics (Azure Databricks):
Connect to Azure Databricks from Excel, Python, or R
Run a Spark job on Azure Databricks using the Azure portal
Tutorial: Azure Data Lake Storage Gen2, Azure Databricks & Spark
Identity and security for Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various identity and security options on Azure:
Authentication and identity
Add sign-in with Microsoft to a Python web app
Acquire a token and call Microsoft Graph API from a Python console app using the app's identity
Security and key/secret/certificate storage
Store and retrieve certificates with Key Vault
Store and retrieve keys with Key Vault
Store and retrieve secrets with Key Vault
Machine learning for Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various machine learning options on Azure:
Get started creating your first ML experiment with the Python SDK
Train your first ML model
Train image classification models with MNIST data and scikit-learn using Azure Machine Learning
Auto-train an ML model
Access datasets with Python using the Azure Machine Learning Python client library
Configure automated ML experiments in Python
Deploy a data pipeline with Azure DevOps
Create and run machine learning pipelines with Azure Machine Learning SDK
AI service for Python apps on Azure
10/28/2022 • 2 minutes to read

Azure Cognitive Services make extensive AI capabilities easily available to applications in areas such as
computer vision and image processing, language analysis and translation, speech, decision-making, and
comprehensive search.
Because Azure Cognitive Services continues to evolve, the best way to find getting started material for Python is to begin on the Azure Cognitive Services hub page. Select a service of interest and then expand the Quickstarts node. Under Quickstarts, look for subsections about using the client libraries or the REST API. The articles in those subsections include Python where supported.
Go to the Cognitive Services hub page >>>
Also see the following articles for Azure Cognitive Search, which is in a separate part of the documentation from
Cognitive Services:
Create an Azure Cognitive Search index in Python using Jupyter notebooks.
Use Python and AI to generate searchable content from Azure blobs
Messaging and IoT for Python apps on Azure
10/28/2022 • 2 minutes to read

The following articles help you get started with various messaging options on Azure.

Messaging
Notifications:
How to use Notification Hubs from Python
Queues:
How to use Azure Queue storage v2.1 from Python
Azure Queue storage client library v12 for Python
Use Azure Service Bus queues with Python
Use Service Bus topics and subscriptions with Python
Real-time web functionality (SignalR):
Create a chat room with Azure Functions and SignalR Service using Python

Event ingestion
Event ingestion:
Ingest real-time data with Event Hubs using Python
Capture Event Hubs data in Azure Storage and read it by using Python
Route custom events to web endpoint with Azure CLI and Event Grid

Internet of Things (IoT)


IoT Hub:
Send telemetry from a device to an IoT hub and read it with a back-end application
Send cloud-to-device messages with IoT Hub
Upload files from your device to the cloud with IoT Hub
Schedule and broadcast jobs
Control a device connected to an IoT hub
Device provisioning:
Create and provision a simulated TPM device
Enroll TPM device to IoT Hub Device Provisioning Service
Create and provision a simulated X.509 device
Enroll X.509 devices to the Device Provisioning Service
IoT Central/IoT Edge:
Tutorial: Create and connect a client application to your Azure IoT Central application (Python)
Tutorial: Develop and deploy a Python IoT Edge module for Linux devices
Other services for Python apps on Azure
10/28/2022 • 2 minutes to read

Media streaming:
Connect to Media Services v3 API
Automation:
Tutorial: Create a Python runbook
DevOps:
Create a CI/CD pipeline for Python with Azure DevOps Starter
Build Python apps
Geographical mapping:
Tutorial: Route electric vehicles by using Azure Notebooks
Tutorial: Join sensor data with weather forecast data by using Azure Notebooks
Burrows-Wheeler Aligner (BWA) and the Genome Analysis Toolkit (GATK):
Run a workflow through the Microsoft Genomics service
Resource management:
Run your first Resource Graph query using Python
Virtual machine management:
Create and manage Windows VMs in Azure using Python
