Azure
You're a Python developer, and you're ready to develop cloud applications for Microsoft Azure. To help you
prepare for a long and productive career, this series of three articles orients you to the basic landscape of cloud
development on Azure.
Resources are the building blocks of a cloud application. The cloud development process thus begins with
creating the appropriate environment into which you can deploy the different parts of the application. Put
simply, you cannot deploy any code or data to Azure until you've allocated and configured—that is,
provisioned—the suitable target resources.
The process of creating the environment for your application, then, involves identifying the relevant services and
resource types involved, and then provisioning those resources. The provisioning process is essentially how you
construct the computing system to which you deploy your application. Provisioning is also the point at which
you begin renting those resources from Azure.
There are hundreds of different types of resources at your disposal, from basic "infrastructure" resources like
virtual machines, where you retain full control and responsibility for the software you deploy, to higher-level
"platform" services that provide a more managed environment where you concern yourself with only data and
application code.
Finding the right services for your application, and balancing their relative costs, can be challenging, but is also
part of the creative fun of cloud development. To understand the many choices, review the Azure developer's
guide. Here, let's next discuss how you actually work with all of these services and resources.
NOTE
You've probably seen and perhaps have grown weary of the terms IaaS (infrastructure-as-a-service), PaaS (platform-as-a-
service), and so on. The as-a-service part reflects the reality that you generally don't have physical access to the data
centers themselves. Instead, you use tools like the Azure portal, the Azure CLI, or Azure's REST API to provision
infrastructure resources, platform resources, and so on. As a service, Azure is always standing by waiting to receive your
requests.
On this developer center, we spare you the IaaS, PaaS, etc. jargon because "as-a-service" is just inherent to the cloud to
begin with!
NOTE
A hybrid cloud refers to the combination of private computers and data centers with cloud resources like Azure, and has
its own considerations beyond what's covered in the previous discussion. Furthermore, this discussion assumes new
application development; scenarios that involve rearchitecting and migrating existing on-premises applications are not
covered here.
NOTE
You might hear the terms cloud native and cloud enabled applications, which are often discussed as the same thing. There
are differences, however. A cloud enabled application is often one that is migrated, as a whole, from an on-premises data
center to cloud-based servers. Oftentimes, such applications retain their original structure and are simply deployed to
virtual machines in the cloud (and therefore across geographic regions). Such a migration allows the application to scale
to meet global demand without having to provision new hardware in your own data center. However, scaling must be
done at the virtual machine (or infrastructure) level, even if only one part of the application needs increased performance.
A cloud native application, on the other hand, is written from the outset to take advantage of the many different,
independently scalable services available in a cloud such as Azure. Cloud native applications are more loosely structured
(using micro-service architectures, for example), which allows you to more precisely configure deployment and scaling for
each part. Such a structure simplifies maintenance and often dramatically reduces costs because you need to
pay for premium services only where necessary.
For more information, see Build cloud-native applications in Azure and Architecting Cloud Native .NET Applications for
Azure, the principles of which apply to applications written in any language.
Next step
Provisioning, accessing, and managing resources >>>
Configure your local Python dev environment for
Azure
To develop Python applications using Azure, you first want to configure your local development environment.
Configuration includes creating an Azure account, installing tools for Azure development, and connecting those
tools to your Azure account.
Developing on Azure requires Python 3.7 or higher. To verify the version of Python on your workstation, in a
console window type the command python3 --version for macOS/Linux or py --version for Windows.
To learn more about installing extensions in Visual Studio Code, refer to the Extension Marketplace document on
the Visual Studio Code website.
After installing the Azure Tools extension, sign in with your Azure account. On the left-hand panel, you'll see an
Azure icon. Select this icon, and a control panel for Azure services will appear. Choose Sign in to Azure... to
complete the authentication process.
NOTE
If you see the error "Cannot find subscription with name [subscription ID]", this may be because you are behind a
proxy and unable to reach the Azure API. Configure HTTP_PROXY and HTTPS_PROXY environment variables with your
proxy information in your terminal:
# Windows
set HTTPS_PROXY=https://username:password@proxy:8080
set HTTP_PROXY=http://username:password@proxy:8080
# macOS/Linux
export HTTPS_PROXY=https://username:password@proxy:8080
export HTTP_PROXY=http://username:password@proxy:8080
Install on macOS
Install on Linux
Install on Windows
The Azure CLI is installed through Homebrew on macOS. If you don't have Homebrew available on your system,
install Homebrew before continuing.
The following command first updates your brew repository information and then installs the Azure CLI.
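brew update && brew install azure-cli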
After installing, sign-in to your Azure account from the Azure CLI by typing the command az login in a
terminal window on your workstation.
az login
The Azure CLI will open your default browser to complete the sign-in process.
Windows
macOS/Linux
# py -3 uses the global python interpreter. You can also use python3 -m venv .venv.
py -3 -m venv .venv
This command runs the Python venv module and creates a virtual environment in a folder named
".venv". Typically, .gitignore files have a ".venv" entry so that the virtual environment isn't committed
along with your code.
4. Activate the virtual environment:
Windows
.venv\Scripts\activate

macOS/Linux
source .venv/bin/activate
Once you activate that environment (which Visual Studio Code does automatically), running pip install
installs a library into that environment only. Python code running in a virtual environment uses the specific
package versions installed into that virtual environment. Using different virtual environments allows different
applications to use different versions of a package, which is sometimes required. To learn more about virtual
environments, see Virtual Environments and Packages in the Python docs.
For example, if your requirements are in a requirements.txt file, then inside the activated virtual environment,
you can install them with:
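pip install -r requirements.txt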
Next step
Provisioning, accessing, and managing resources >>>
Provisioning, accessing, and managing resources on
Azure
Azure provides three complementary methods for working with resources: the Azure portal, the Azure CLI, and
the Azure libraries (SDK) for Python. You can use any or all of them to create, configure, and manage whatever
Azure resources you need. In fact, you typically use all three in the course of a development project, and it's
worth your time to become familiar with each of them.
Within this developer center, we primarily show how to provision resources using both the Azure CLI and
Python code that uses the Azure libraries. Using the portal is well covered in the documentation for each
individual service.
NOTE
The Azure libraries for Python are sometimes referred to as the Azure SDK for Python. However, there are no SDK
components other than the libraries, which you acquire through the Python package manager, pip.
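For example, a typical install of the identity and resource-management libraries used later in this series might
look like the following (the exact packages depend on the services you use):

pip install azure-identity azure-mgmt-resource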
Azure portal
The Azure portal is Azure's fully customizable, browser-based user interface through which you can provision
and manage resources with all Azure services. To access the portal, you must first sign in using a Microsoft
Account, and then create a free Azure account with a subscription.
Pros: The user interface makes it easy to explore services and all their various configuration options. Setting
configuration values is secure because no information is stored on the local workstation.
Cons: Working with the portal is a manual process and can't be easily automated. To remember what you did to
change a configuration, for example, you generally record your steps in a separate document.
Azure CLI
The Azure CLI is Azure's open source command-line interface. Once you're signed in to the Azure CLI (using the
az login command), you can perform the same tasks that you can through the portal.
Pros: Easily automated through scripts and processing of output. Provides higher-level commands that
provision multiple resources together for common tasks, such as deploying a web app. Scripts can be managed
in source control.
Cons: Steeper learning curve than using the portal, and commands are subject to bugs. Error messages aren't
always helpful.
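For example, a resource group can be provisioned with a single scriptable command (the name and location
here are illustrative):

az group create --name PythonAzureExample-rg --location centralus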
You can also use the Azure PowerShell module in place of the Azure CLI, although the Azure CLI's Linux-style
commands are typically more familiar to Python developers.
In place of the local CLI or PowerShell, you can use the same commands in the Azure Cloud Shell
(https://shell.azure.com/). The Cloud Shell is convenient because it's automatically authenticated with Azure
once it opens and has the same capabilities you would have through the Azure portal. The Cloud Shell also
comes pre-configured with many different tools that would be inconvenient to install locally, especially if you
need to run only one or two commands.
Because Cloud Shell isn't a local environment, it's more suitable for singular operations like you'd do through
the portal rather than scripted automation. Nevertheless, you can clone source repositories (for example, GitHub
repositories) in the Cloud Shell. As a result, you can develop automation scripts locally, store them in a
repository, clone the repository in Cloud Shell, and then run them there.
Next step
The Azure development flow >>>
The Azure development flow: provision, code, test,
deploy, and manage
Provision
Tools: Azure CLI, Azure portal, Cloud Shell, Python scripts using Azure management libraries
Task: Provision resource groups; provision specific resources in those groups; configure resources to be ready
for use from app code and/or ready to receive Python code in deployments.

Code
Tools: Code editor (such as Visual Studio Code or PyCharm), Azure libraries, reference documentation
Task: Write Python code using the Azure client libraries to interact with provisioned resources.

Test
Tools: Python runtime, debugger
Task: Run Python code locally against active cloud resources (typically dev or test resources rather than
production resources). The code itself isn't yet hosted on Azure, which helps you debug and iterate quickly.

Deploy
Tools: Azure CLI, GitHub, DevOps
Task: Once code has been tested locally, deploy it to an appropriate Azure hosting service where the code itself
can run in the cloud. Deployed code typically runs against staging or production resources.

Manage
Tools: Azure CLI, Azure portal, Python scripts, Azure Monitor
Task: Monitor app performance and responsiveness, make adjustments in the production environment, and
migrate improvements back to the dev environment for the next round of provisioning and development.
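As an illustration of the provision step scripted with the Azure management libraries, here's a minimal sketch.
The subscription ID, group name, and region are placeholders; it assumes the azure-identity and
azure-mgmt-resource packages are installed and that you're signed in (for example, via az login):

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder values; substitute your own subscription ID and names.
subscription_id = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Provision (or update) a resource group to hold the app's resources.
rg = client.resource_groups.create_or_update(
    "python-azure-example-rg", {"location": "centralus"})
print(f"Provisioned resource group {rg.name} in {rg.location}")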
Next steps
You're now familiar with the basic structure of Azure and the overall development flow: provision resources,
write and test code, deploy the code to Azure, and then monitor and manage those resources.
The next step is to get familiar with the Azure libraries for Python, which you'll be using in many parts of the
flow.
Learn to use the Azure libraries for Python >>>
Overview: Deploy a Python web app to Azure with
managed identity
In this tutorial, you'll use Python code (Django or Flask) to create and deploy a web app running in Azure
App Service. The web app uses managed identity to access Azure Storage and Azure Database for PostgreSQL
resources.
Each article in the tutorial covers a part or service shown in the service diagram below. The left side of the
diagram shows the local or development environment with a Python app using a local PostgreSQL instance and
a local storage emulator. The right side of the diagram shows the Python app deployed in Azure with Azure App
Service, Azure Database for PostgreSQL, and Azure Storage Service.
NOTE
If you are following this tutorial with your own app, look at the requirements.txt file description in each project's
README.md file (Flask, Django) to see what packages you'll need and how DefaultAzureCredential is implemented.
Flask
Django
cd msdocs-flask-web-app-managed-identity
For now, you're done setting up the sample app. In later steps, you'll optionally configure the app for use in a
local development environment or as a deployed web app in Azure.
Next step
Run the web app locally >>>
Configure and run the Python app locally with a
PostgreSQL instance and a storage emulator
This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll learn how to run the Python app locally.
This optional step requires a local PostgreSQL instance, a local storage emulator, and other setup steps. If you
skip this step now, you can return to it after you've completed the rest of the tutorial.
TIP
Instead of using local storage emulation, you could use Azure Storage and authenticate locally with a developer
account or Azure AD group. For more information, see Authenticate Python apps to Azure services during local
development using developer accounts. The rest of this article shows local emulation of storage with Azurite.
mkcert -install
mkcert -cert-file cert.pem -key-file key.pem localhost 127.0.0.1
The last command creates a cert.pem and key.pem file. mkcert creates certificates signed by your own private
CA that your machine is automatically configured to trust when you run mkcert -install .
With this package, you can run the app locally using the certificate and key you created as shown in a later step.
Windows
macOS/Linux
bash
PowerShell terminal
azurite-blob \
--location "<folder-path>" \
--debug "<folder-path>\debug.log" \
--oauth basic \
--cert "<project-root>\cert.pem" \
--key "<project-root>\key.pem"
TIP
One way of getting the correct certificates into Azure Storage Explorer is to get them from your browser. First, make sure
the Python app is running locally with TLS (SSL). (See the next step for details.) Then, select the lock icon next to the URL in
the browser. Export all certificates in the certification path to .cer files. If you followed the steps above with mkcert, there
should be two items in the path. Import these .cer files into Storage Explorer.
Connecting Azure Storage Explorer to Azurite is covered in the article Use the Azurite emulator for local Azure
Storage development. If you encounter errors connecting, refer to the SSL certificate issues section of the
Storage Explorer Troubleshooting guide.
The sample app uses the python-dotenv package to read environment variables from the .env file.
Next, create the restaurant and review database tables:
Flask
Django
flask db init
flask db migrate -m "initial migration"
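For the Django version of the sample, the equivalent step is typically Django's standard migration command
(shown as a sketch):

python manage.py migrate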
Run the app with HTTPS using the certificate and key files you created:
Flask
Django
The sample Flask and Django apps use the azure.identity package, which contains the DefaultAzureCredential.
The DefaultAzureCredential can be used with Azurite and the Azure Python SDK.
To test your Python app locally, go to https://127.0.0.1:8000 (Django) or https://127.0.0.1:5000 (Flask). Your
Python app is now running locally with a local PostgreSQL instance and the Azurite storage emulator.
If you run into DefaultAzureCredential issues, make sure you're signed in to Azure. For example, in the Azure
CLI you can use az login; in Visual Studio Code, use the command palette (Ctrl+Shift+P) to run the Azure:
Sign In command; and in Azure PowerShell, use Connect-AzAccount.
Here's an example screenshot of the sample app:
Next step
Create an App Service to host the Python app >>>
Create a Python web app in App Service and
enable managed identity
This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll create an Azure App Service to host a
Python web app and create a system assigned managed identity for the web app. The managed identity is
authenticated with Azure AD, so you don’t have to store credentials in code when accessing other Azure
resources.
Sign in to the Azure portal and follow these steps to create your Azure resource.
On the Create Web App page, fill out the form as follows:
1. Resource Group → Select Create new and use the name msdocs-web-app-rg.
2. Name → Use msdocs-web-app-<unique-id>. The name must be unique across Azure because it forms
part of the web app's URL (https://<app-service-name>.azurewebsites.net).
3. Runtime stack → Python 3.9
4. Region → Any Azure region near you.
5. App Service Plan → Select Create new under Linux Plan and use the name msdocs-web-app.
6. App Service Plan → Select Change size under Sku and size to select a different App Service plan.
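If you prefer the CLI, an equivalent sketch of these portal steps looks like the following; the plan and app names
mirror the form values above, and the region is a placeholder:

az group create --name msdocs-web-app-rg --location <region>

az appservice plan create \
    --name msdocs-web-app \
    --resource-group msdocs-web-app-rg \
    --sku B1 \
    --is-linux

az webapp create \
    --name msdocs-web-app-<unique-id> \
    --resource-group msdocs-web-app-rg \
    --plan msdocs-web-app \
    --runtime "PYTHON:3.9"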
Next step
Create a storage account >>>
Create an Azure storage account and configure a
role for managed identity
This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll create an Azure Blob Storage account to
store images saved by the sample app.
Sign in to the Azure portal and follow these steps to create an Azure Storage account.
The Add role assignment page lists all of the roles that
can be assigned for the resource group.
1. Use the search box to find the role Storage Blob Data Contributor.
2. In the Storage Blob Data Contributor row of the role table, select View.
3. On the BuiltInRole page, select Select role.
4. Back on the Add role assignment page, select Next.
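The same assignment can also be scripted; a sketch with the Azure CLI, where the assignee placeholder is the
web app's managed identity principal ID:

az role assignment create \
    --assignee <managed-identity-principal-id> \
    --role "Storage Blob Data Contributor" \
    --resource-group msdocs-web-app-rg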
Next step
Create an Azure database for PostgreSQL >>>
Create an Azure Database for PostgreSQL and
configure managed identity
This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll create an Azure Database for PostgreSQL
Service.
NOTE
Managed identity is currently only supported in PostgreSQL Single Server.
Azure portal
VS Code
Azure CLI
Sign in to the Azure portal and follow these steps to create your Azure Database for PostgreSQL resource.
In the portal:
1. Enter postgres in the search bar at the top of the
Azure portal.
2. Select the item labeled Azure Database for PostgreSQL servers under the Services heading on the
menu that appears below the search bar.
On the Single server page, fill out the form as follows:
1. Resource group → Select and use the name msdocs-web-app-rg.
2. Server name → Enter a name such as msdocs-web-app-postgres-database-<unique-id>. The name
must be unique across Azure because it forms part of the database server's address
(<server-name>.postgres.database.azure.com). Allowed characters are A-Z, 0-9, and hyphen (-).
3. Data source → None
4. Region → The same Azure region used for the App Service.
5. Version → Keep the default (which is the latest version).
6. Compute + storage → Select Configure server to select a different Compute + storage plan, which is
discussed below.
7. Admin username → Enter an admin username following the portal suggestions for naming.
8. Password → Enter the admin password.
9. Confirm password → Re-enter the admin password.
Azure portal
VS Code
Azure CLI
3. Create a database
psql
VS Code
In your local environment, or anywhere you can use the PostgreSQL interactive terminal psql such as the Azure
Cloud Shell, connect to the PostgreSQL database server to create the restaurant database.
Start psql:
psql --host=<server-name>.postgres.database.azure.com \
--port=5432 \
--username=<admin-user>@<server-name> \
--dbname=postgres
The values of <server-name> and <admin-user> are the values from a previous step, used in the creation of the
PostgreSQL database service. The command above will prompt you for the admin password. If you have trouble
connecting, restart the database and try again. If you're connecting from your local environment, your IP
address must be added to the firewall rule list for the database service.
At the postgres=> prompt, create the database:
CREATE DATABASE restaurant;
The semicolon (";") at the end of the command is necessary. To verify that the restaurant database was
successfully created, use the command \c restaurant to change the prompt from postgres=> (the default) to
restaurant=>. Type \? to show help or \q to quit.
You can also create a database using Azure Data Studio or any other IDE, or with Visual Studio Code and the
Azure Tools extension pack installed.
Azure portal
Azure CLI
Next, you need to grant the identity permission to access the database. This grant is done by creating a new role
that identifies the managed identity as one that can access the database. If you are already in the Azure portal,
you can use the Azure Cloud Shell to complete this task.
TIP
Alternatively, you can connect to the database with a local instance of PostgreSQL or Azure Data Studio. For the
PostgreSQL interactive terminal psql used locally, you still need to generate a token with az account get-access-token.
Azure Data Studio is integrated with Azure Active Directory such that the token is generated automatically. Regardless of
how you connect, make sure you specify the user name as <azure-ad-user-name>@<server-name>.
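For example, when connecting with psql locally, the token is typically fetched like this (oss-rdbms is the
documented resource type for Azure Database for open-source databases):

az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv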
If you sign in to the Cloud Shell with an account other than the one that was set as admin for PostgreSQL, then
change accounts with az login.
In a Cloud Shell, you can choose between Bash and PowerShell.
bash
PowerShell terminal
# Sign into Azure as the Azure AD user that was set as Active Directory admin
# az login
In the PostgreSQL database, run the following commands to create a role that the web app will use to access the
database.
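A sketch of the typical pattern for PostgreSQL Single Server with Azure AD integration follows; the password
value is the application ID of the web app's managed identity, and the specifics are illustrative:

SET aad_validate_oids_in_tenant = off;
CREATE ROLE webappuser WITH LOGIN PASSWORD '<managed-identity-application-id>' IN ROLE azure_ad_user;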
You'll use the user name webappuser as an App Service configuration setting in the next step.
Next step
Deploy the Python app to Azure >>>
Deploy and configure a Python web app in Azure
with managed identity
This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll configure the App Service and then deploy
the Python app to it.
To deploy a web app from VS Code, you must have the Azure Tools extension pack installed and be signed into
Azure from VS Code.
Azure portal
VS Code
Azure CLI
Navigate to the page for the App Service instance in the Azure portal.
1. Select SSH, under Development Tools on the left resource menu.
2. Then select Go to open an SSH console on the web app server. (It may take a minute to connect the first time.)
If you can't connect with SSH, see Troubleshooting tips.
Step 2. In the SSH session, run commands to migrate the models into the database:
Flask
Django
When you deploy the Flask sample app to Azure App Service, the database tables are automatically created in
Azure Database for PostgreSQL server. If you try to run flask db init you'll receive the message "Directory
migrations already exists and is not empty."
If you can't migrate the models, see Troubleshooting tips.
TIP
In an SSH session, for Django you can also create users with the python manage.py createsuperuser command like
you would with a typical Django app. For more information, see the documentation for django-admin and
manage.py. Use the superuser account to access the /admin portion of the website. For Flask, use an extension such as
Flask-Admin to provide the same functionality.
5. Troubleshooting tips
Here are a few tips for troubleshooting your deployment:
When you deploy Python code to App Service, a built-in Linux container is created to run the web app. If a
deployment isn't successful, in the Azure portal check the logs under Deployment Center | Logs generated
during the build of the container to confirm whether the deployment failed. If there was a failure, go to the
Diagnose and solve problems resource of the App Service to check the diagnostic logging. The Application
logging logs are the most useful for troubleshooting failed deployments. Be sure to check the timestamp
of the logging entries to make sure they correspond to the deployment you're troubleshooting. There
may be a delay in writing the logs, and you might need to wait to see the logging information for the
deployment.
If you encounter errors related to connecting to the database while doing the migration, check the values
of the application settings of the App Service, specifically DBHOST, DBNAME, and DBUSER. Without these
settings, the web app can't communicate with the database.
If you have the database connection information correctly specified, confirm that you set up managed
identity for the database correctly.
If you can't open an SSH session to connect to your Azure App Service, then the app might have failed to
start. Check the diagnostic logs for details, and in particular, the application logs. Errors can occur for
many reasons. For example, if you haven't created the necessary app settings in the previous section, the
logs will indicate KeyError: 'DBNAME' .
Check that there's an App Service configuration setting SCM_DO_BUILD_DURING_DEPLOYMENT set to true or
1 . For more information and background on how Azure App Service runs Python apps, see Configure a
Linux Python app for Azure App Service.
If you're deploying to App Service using local Git and you specified the wrong credentials, they might be
cached, and you'll need to clear them. For more information about Git credentials, see Git Tools -
Credential Storage. On Windows, you can open the Credential Manager / Windows Credentials, find the
credentials, and remove them.
If deployment is successful and the web app is running, print statements in the code write to the log
stream. In the Azure portal, go to the App Service and open the Log Stream resource. For more
information, see Enable diagnostics logging for apps in Azure App Service - Stream logs.
Next step
Clean up resources >>>
Clean up and next steps of managed identity
tutorial
This article is part of a tutorial about deploying a Python app to Azure App Service. The web app uses managed
identity to authenticate to other Azure resources. In this article, you'll clean up resources used in Azure so you
don't incur other charges and help keep your Azure subscription uncluttered. You can leave the Azure resources
running if you want to use them for further development work.
1. Clean up resources
In this tutorial, all the Azure resources were created in the same resource group. Removing the resource group
removes all resources in the resource group and is the fastest way to remove all Azure resources used for your
app.
Azure portal
VS Code
Azure CLI
Sign in to the Azure portal and follow these steps to delete a resource group.
2. Next steps
After completing this tutorial, here are some next steps you can take to build upon what you learned and move
the tutorial code and deployment closer to production ready:
Secure communication to your Azure Database for PostgreSQL server, see Use Virtual Network service
endpoints and rules for Azure Database for PostgreSQL - Single Server.
Map a custom DNS name to your app, see Tutorial: Map custom DNS name to your app.
Monitor App Service for availability, performance, and operation, see Monitoring App Service and Set up
Azure Monitor for your Python application.
Enable continuous deployment to Azure App Service, see Continuous deployment to Azure App Service,
Use CI/CD to deploy a Python web app to Azure App Service on Linux, and Design a CI/CD pipeline using
Azure DevOps.
For more details on how App Service runs a Python app, see Configure Python app.
Review PostgreSQL best practices, see Best practices for building an application with Azure Database for
PostgreSQL.
Learn more about security for Blob storage, see Security recommendations for Blob storage.
This article walks you through setting up your local environment to develop Python web apps and deploy them
to Azure. Your web app can be pure Python or use one of the common Python-based web frameworks like
Django, Flask, or FastAPI.
Python web apps developed locally can be deployed to services such as Azure App Service, Azure Container
Apps, or Azure Static Web Apps. There are many options for deployment. For example, for App Service
deployment, you can choose to deploy from code, a Docker container, or a Static Web App. If you deploy from
code, you can deploy with Visual Studio Code, with the Azure CLI, from a local Git repository, or with GitHub
Actions. If you deploy in a Docker container, you can do so from Azure Container Registry, Docker Hub, or any
private registry.
Before continuing with this article, we suggest you review Set up your dev environment for guidance on
setting up your dev environment for Python and Azure. Below, we'll discuss setup and configuration specific to
Python web app development.
After you get your local environment set up for Python web app development, you'll be ready to tackle these
articles:
Quickstart: Create a Python (Django or Flask) web app in Azure App Service.
Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
Tutorial: Deploy a Python web app to Azure with managed identity
TIP
Make sure you have the Python extension installed. For an overview of working with Python in VS Code, see Getting Started
with Python in VS Code.
In VS Code, you work with Azure resources through VS Code extensions. You can install extensions from the
Extensions view or with the key combination Ctrl+Shift+X. For Python web apps, you'll likely be working with one
or more of the following extensions:
The Azure App Service extension enables you to interact with Azure App Service from within Visual
Studio Code. App Service provides fully managed hosting for web applications including websites and
web APIs.
The Azure Static Web Apps extension enables you to create Azure Static Web Apps directly from VS Code.
Static Web Apps is serverless and a good choice for static content hosting.
If you plan on working with containers, then install:
The Docker extension to build and work with containers locally. For example, you can run a
containerized Python web app on Azure App Service using Web Apps for Containers.
The Azure Container Apps extension to create and deploy containerized apps directly from Visual
Studio Code.
There are other extensions such as the Azure Storage, Azure Databases, and Azure Resources extensions.
You can always add these and other extensions as needed.
Extensions in Visual Studio Code are accessible as you would expect in a typical IDE interface and with rich
keyword support using the VS Code command palette. To access the command palette, use the key combination
Ctrl+Shift+P. The command palette is a good way to see all the possible actions you can take on an Azure
resource. The screenshot below shows some of the actions for App Service.
Here's an example Azure CLI command to create a web app and associated resources, and deploy it to Azure in
one command using az webapp up. Run the command in the root directory of your web app.
bash
PowerShell terminal
az webapp up \
--runtime PYTHON:3.9 \
--sku B1 \
--logs
For more about this example, see Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service.
Keep in mind that for some of your Azure workflow you can also use the Azure CLI from an Azure Cloud Shell.
Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources.
The azure-identity package allows your web app to authenticate with Azure Active Directory (Azure AD). For
authentication in your web app code, it's recommended that you use the DefaultAzureCredential in the
azure-identity package. Here's an example of how to access Azure Storage. The pattern is similar for other
Azure resources.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# account_url is your storage endpoint, for example: "https://<account-name>.blob.core.windows.net"
azure_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url=account_url,
    credential=azure_credential)
The DefaultAzureCredential will look in predefined locations for account information, for example, in
environment variables, in the VS Code Account extension, or from the Azure CLI sign-in. For in-depth
information on the DefaultAzureCredential logic, see Authenticate Python apps to Azure services by using the
Azure SDK for Python.
Django
Flask
FastAPI
Create a sample project using the django-admin startproject command. The project includes a manage.py file
that is the entry point for running the app.
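For example (the project name is illustrative; the trailing dot creates the project in the current folder):

django-admin startproject web_project .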
Django
Flask
FastAPI
http://127.0.0.1:8000/
At this point, add a requirements.txt file and then you can deploy the web app to Azure or containerize it with
Docker and then deploy it.
Next steps
Quickstart: Create a Python (Django or Flask) web app in Azure App Service.
Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
Tutorial: Deploy a Python web app to Azure with managed identity
Configure a custom startup file for Python apps on
Azure App Service
In this article, you learn about configuring a custom startup file, if needed, for a Python web app hosted on
Azure App Service. For running locally, you don't need a startup file. However, when you deploy a web app to
Azure App Service, your code is run in a Docker container that uses any startup commands that are present.
You need a custom startup file in the following cases:
You want to start the Gunicorn default web server with extra arguments beyond the defaults, which are
--bind=0.0.0.0 --timeout 600 .
Your app is built with a framework other than Flask or Django, or you want to use a different web server
besides Gunicorn.
You have a Flask app whose main code file is named something other than app.py or application.py, or
the app object is named something other than app.
In other words, unless you have an app.py or application.py in the root folder of your project, and the
Flask app object is named app , then you need a custom startup command.
For more information, see Configure Python Apps - Container startup process.
# <module> is the folder that contains wsgi.py. If you need to use a subfolder,
# specify the parent of <module> using --chdir.
gunicorn --bind=0.0.0.0 --timeout 600 <module>.wsgi
If you want to change any of the Gunicorn arguments, such as using --timeout 1200 , then create a command
file with those modifications. For more information, see Container startup process - Django app.
Startup file is in a subfolder: for example, if the startup file is myapp/website.py and the app object is
app, then use Gunicorn's --chdir argument to specify the folder, and then name the startup file and app
object as usual:
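# Sketch: startup file myapp/website.py, app object named "app"
gunicorn --bind=0.0.0.0 --timeout 600 --chdir myapp website:app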
Startup file is within a module: in the python-sample-vscode-flask-tutorial code, the webapp.py
startup file is contained within the folder hello_app, which is itself a module with an __init__.py file. The
app object is named app, is defined in __init__.py, and webapp.py uses a relative import.
Because of this arrangement, pointing Gunicorn to webapp:app produces the error, "Attempted relative
import in non-package," and the app fails to start.
In this situation, create a shim file that imports the app object from the module, and then have Gunicorn
launch the app using the shim. The python-sample-vscode-flask-tutorial code, for example, contains
startup.py with the following contents:
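Based on the description above, the shim amounts to a single import (a sketch matching the sample's layout):

# startup.py (shim)
from hello_app.webapp import app

Gunicorn is then pointed at the shim, for example:

python -m gunicorn --bind=0.0.0.0 --timeout 600 startup:app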
You use python -m because web servers installed via requirements.txt aren't added to the Python global
environment and therefore can't be invoked directly. The python -m command invokes the server from
within the current virtual environment.
Overview: Cloud-based, serverless ETL using
Python on Azure
This series shows you one way to create a serverless, cloud-based Extract, Transform, and Load Python solution
using an Azure Function App.
The Azure Function App securely ingests data from Azure Storage Blob. Then, the data is processed using Pandas
and loaded into an Azure Data Lake Store. Finally, the source data file is archived using the cool access tier in an
Azure Storage Blob.
Next Step
Next: Get started >>>
Create resources for a cloud-based, serverless ETL
solution using Python on Azure
This article shows you how to use the Azure CLI to deploy and configure the Azure resources used for our cloud-
based, serverless ETL solution.
IMPORTANT
To complete each part of this series, you must create all of these resources in advance. Create each of the resources in a
single resource group for organization and ease of resource clean-up.
Prerequisites
Before you can begin the steps in this article, complete the tasks below:
An Azure subscription; if you don't have one, create one for free.
Python 3.7 or later installed.
python --version
Azure CLI (2.0.46 or later); the CLI commands can be run in the Azure Cloud Shell or you can install the Azure
CLI locally.
az --version
Visual Studio Code installed.
code --version
Install the latest version of Azure Functions Core Tools, version 4 or later.
func --version
az login
Step 2: When using the Azure CLI, you can turn on the param-persist option that automatically stores
parameters for continued use. To learn more, see Azure CLI persisted parameter. [optional]
az config param-persist on
IMPORTANT
Be sure to create and activate a local virtual environment for this project.
service_location='eastus'
resource_group_name='rg-cloudetl-demo'
# Create an Azure Resource Group to organize the Azure services used in this series logically
az group create \
--location $service_location \
--name $resource_group_name
NOTE
You cannot host Linux and Windows apps in the same resource group. Suppose you have an existing resource group
named rg-cloudetl-demo with a Windows function app or web app. In that case, you must use a different resource group.
storage_acct_name='stcloudetldemodata'
# Create a general-purpose storage account in your resource group and assign it an identity
az storage account create \
--name $storage_acct_name \
--resource-group $resource_group_name \
--location $service_location \
--sku Standard_LRS \
--assign-identity
Step 2: Run the az role assignment create to add the 'Storage Blob Data Contributor' role to your user
email.
user_email='[email protected]'
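A sketch of that assignment, scoped to the resource group and using the variables defined earlier:

az role assignment create \
    --assignee $user_email \
    --role 'Storage Blob Data Contributor' \
    --resource-group $resource_group_name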
IMPORTANT
Role assignment creation could take a minute to apply in Azure. It is recommended to wait a moment before running the
next command in this article.
Step 2: Run az storage account show to capture the storage account ID.
Step 3: Run az storage account keys list to capture one of the storage account access keys for the next
section.
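Sketches of those capture commands; the shell variable names are illustrative:

storage_acct_id=$(az storage account show \
    --name $storage_acct_name \
    --resource-group $resource_group_name \
    --query 'id' \
    --output tsv)

storage_acct_key1=$(az storage account keys list \
    --account-name $storage_acct_name \
    --resource-group $resource_group_name \
    --query '[0].value' \
    --output tsv)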
Step 2: Run az storage account keys list to capture one of the ADLS storage account access keys for the
next section.
NOTE
It is very easy to turn a data lake into a data swamp. So, it is important to govern the data that resides in your data lake.
Azure Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud,
and software-as-a-service (SaaS) data. Easily create a holistic, up-to-date map of your data landscape with automated
data discovery, sensitive data classification, and end-to-end data lineage.
Step 2: Run az storage fs directory create to create the directory (folder) in the newly created file system
to land our processed data.
key_vault_name='kv-cloudetl-demo'
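A sketch of the vault creation command, using the variables defined in this article:

az keyvault create \
    --name $key_vault_name \
    --resource-group $resource_group_name \
    --location $service_location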
Step 2: Set a 'secret' in Azure Key Vault to store the Blob Storage Account access key. Run az keyvault
secret set to create and set a secret in Azure Key Vault.
abs_secret_name='abs-access-key1'
adls_secret_name='adls-access-key1'
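Sketches of the secret-set commands; they assume the access keys were captured into $storage_acct_key1 and
$adls_acct_key1 (illustrative variable names) in the earlier steps:

az keyvault secret set \
    --vault-name $key_vault_name \
    --name $abs_secret_name \
    --value $storage_acct_key1

az keyvault secret set \
    --vault-name $key_vault_name \
    --name $adls_secret_name \
    --value $adls_acct_key1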
IMPORTANT
If your secret value contains special characters, you will need to 'escape' the special character by wrapping it with double
quotes and the entire string in single quotes. Otherwise, the secret value is not set correctly.
Will not work: "This is my secret value & it has a special character."
Will not work: "This is my secret value '&' it has a special character."
Will work: 'this is my secret value "&" it has a special character'
export KEY_VAULT_NAME=$key_vault_name
export ABS_SECRET_NAME=$abs_secret_name
export ADLS_SECRET_NAME=$adls_secret_name
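The project folder itself comes from func init; a sketch of the typical command:

func init CloudETLDemo_Local --python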
cd CloudETLDemo_Local
Step 3: Add functions to your project by using the following command, where the --name argument is
the unique name of your function and the --template argument specifies the function's trigger (HTTP).
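A sketch using the function name that appears later in this series:

func new --name demo_relational_data_cloudetl --template "HTTP trigger" --authlevel "anonymous"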
func start
Step 5: Grab the localhost URL at the bottom and append '?name=Functions' to the query string.
http://localhost:7071/api/demo_relational_data_cloudetl?name=Functions
Step 6: When finished, press Ctrl+C and choose y to stop the functions host.
Initialize a Python Function App in Azure
An Azure Function App must be created to host our data ingestion function. This Function App is what we
deploy our local dev function to once complete.
Step 1: Run az functionapp create to create the function app in Azure.
funcapp_name='CloudETLFunc'
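A sketch of the creation command; the storage account, location, and version values reuse variables and choices
from earlier in this article:

az functionapp create \
    --name $funcapp_name \
    --resource-group $resource_group_name \
    --storage-account $storage_acct_name \
    --consumption-plan-location $service_location \
    --os-type Linux \
    --runtime python \
    --runtime-version 3.9 \
    --functions-version 4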
NOTE
App Name is also the default DNS domain for the function app.
Step 2: Run az functionapp config appsettings set to store Azure Key Vault name and Azure Blob Storage
access key application configurations.
# Update function app's settings to include Azure Key Vault environment variable.
az functionapp config appsettings set --name CloudETLDemo --resource-group rg-cloudetl-demo --settings "KEY_VAULT_NAME=kv-cloudetl-demo"

# Update function app's settings to include Azure Blob Storage Access Key in Azure Key Vault secret environment variable.
az functionapp config appsettings set --name CloudETLDemo --resource-group rg-cloudetl-demo --settings "ABS_SECRET_NAME=abs-access-key1"

# Update function app's settings to include Azure Data Lake Storage Gen 2 Access Key in Azure Key Vault secret environment variable.
az functionapp config appsettings set --name CloudETLDemo --resource-group rg-cloudetl-demo --settings "ADLS_SECRET_NAME=adls-access-key1"
# Set a permissions policy (get, list, set) for the function app on the key vault
az keyvault set-policy \
    --name $key_vault_name \
    --resource-group $resource_group_name \
    --object-id $func_principal_id \
    --secret-permissions get list set
Step 2: Run az role assignment create to assign 'Key Vault Secrets User' built-in role to Azure Function
App.
# Create a 'Key Vault Secrets User' role assignment for the function app managed identity
az role assignment create \
    --assignee $func_principal_id \
    --role 'Key Vault Secrets User' \
    --scope $kv_scope
# Assign the 'Storage Blob Data Contributor' role to the function app managed identity
az role assignment create \
--assignee $func_principal_id \
--role 'Storage Blob Data Contributor' \
--resource-group $resource_group_name
# Assign the 'Storage Queue Data Contributor' role to the function app managed identity
az role assignment create \
--assignee $func_principal_id \
--role 'Storage Queue Data Contributor' \
--resource-group $resource_group_name
NOTE
If you already have your data (blob) uploaded, you can skip to the next article in this series.
Sample Data
The sample file has the following columns: Segment, Country, Product, Units Sold, Manufacturing Price,
Sale Price, Gross Sales, Date.
Step 1: Create a file named 'financial_sample.csv' locally that contains this data by copying the below
data into the file:
Step 2: Upload your data (blob) to your storage container by running az storage blob upload.
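A sketch of the upload, using the container name the function code expects (demo-cloudetl-data):

az storage blob upload \
    --account-name $storage_acct_name \
    --container-name demo-cloudetl-data \
    --name financial_sample.csv \
    --file financial_sample.csv \
    --auth-mode login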
Next Step
Next: Securely ingest relational data >>>
Ingest data from Azure Blob Storage using a Python
Azure Function and Azure Key Vault
In this article, you'll learn how to retrieve a secret from a Key Vault to securely access Azure Storage Blob data
using a serverless Python Function.
The data needed for analytics is typically gathered from various disparate data sources. Data ingestion is the
process of extracting data from these data sources into a data store and is the first step of an Extract, Transform,
and Load (ETL) solution. There are two types of data ingestion: Batch Processing and Streaming. Batch
processing is when a large volume of data is processed at once, with subprocesses executing in parallel or in
sequential order. This article focuses on batch processing using a serverless Python Function
to retrieve data securely from Azure Blob Storage using Azure Key Vault.
Prerequisites
This article assumes you have set up your environment as described in the previous articles:
Configure your local Python dev environment for Azure
Create resources
TIP
Capture the below information from the previous article to use later in this article:
Azure Blob Storage Account name
Azure Blob Container name
Azure Key Vault name
Sample Data Filename
1. Install required Python Azure SDK libraries
Open the requirements.txt file created in the previous article and complete the following steps.
Step 1: Create and activate a Python virtual environment.
Step 2: Review the file contents and ensure the following Python Azure SDK libraries are listed:
azure-identity
azure-storage-blob
azure-keyvault-secrets
azure-functions
pandas
Step 3: In a terminal, with a virtual environment activated, run the 'pip install' command to install the
required libraries.
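pip install -r requirements.txt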
printenv ABS_SECRET_NAME
printenv ADLS_SECRET_NAME
printenv KEY_VAULT_NAME
Step 2: Open the '__init__.py' file of the demo_relational_data_cloudetl function and add the below
code.
import logging
import os
import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
# Parameters/Configurations
abs_acct_name='stcloudetldemodata'
abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
abs_container_name='demo-cloudetl-data'
try:
# Set variables from appsettings configurations/Environment Variables.
key_vault_name = os.environ["KEY_VAULT_NAME"]
key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
blob_secret_name = os.environ["ABS_SECRET_NAME"]
# Authenticate and securely retrieve Key Vault secret for access key value.
az_credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url=key_vault_Uri, credential= az_credential)
access_key_secret = secret_client.get_secret(blob_secret_name)
except Exception as e:
logging.info(e)
return func.HttpResponse(
f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
status_code=200
)
NOTE
In this example, the logged-in user is used to authenticate to Key Vault, which is the preferred method for local
development. A managed identity must be assigned to an App Service or Virtual Machine for applications deployed to
Azure. For more information, see Managed Identity Overview.
Step 2: Open the '__init__.py' file of the demo_relational_data_cloudetl function. Then add the below
code to gather a list of blobs.
import logging
import os
from io import StringIO
from datetime import datetime, timedelta

import pandas as pd
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient
return blob_files
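The listing above shows only the tail of the blob-listing helper; here's a minimal sketch of the full function,
where the date-based filter is an assumption based on the arg_date and std_date_format parameters used
below:

def return_blob_files(container_client, arg_date, std_date_format):
    # Keep only blobs created on or after the given date (assumed filter).
    start_date = datetime.strptime(arg_date, std_date_format).date()
    blob_files = [
        blob for blob in container_client.list_blobs()
        if blob.creation_time.date() >= start_date
    ]
    return blob_files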
# Parameters/Configurations
arg_date = '2014-07-01'
std_date_format = '%Y-%m-%d'
abs_acct_name='stcloudetldemodata'
abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
abs_container_name='demo-cloudetl-data'
try:
# Set variables from appsettings configurations/Environment Variables.
key_vault_name = os.environ["KEY_VAULT_NAME"]
key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
blob_secret_name = os.environ["ABS_SECRET_NAME"]
# Authenticate and securely retrieve Key Vault secret for access key value.
az_credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url=key_vault_Uri, credential= az_credential)
access_key_secret = secret_client.get_secret(blob_secret_name)

# Create the blob service client from the account URL and the access key retrieved above.
abs_service_client = BlobServiceClient(account_url=abs_acct_url, credential=access_key_secret.value)
abs_container_client = abs_service_client.get_container_client(container=abs_container_name)
except Exception as e:
logging.info(e)
return func.HttpResponse(
f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
status_code=200
)
# (The full listing repeats the imports above and includes the helper functions,
# whose bodies end with 'return blob_files', 'return df', and 'return True'.)
# Parameters/Configurations
arg_date = '2014-07-01'
std_date_format = '%Y-%m-%d'
abs_acct_name='stcloudetldemodata'
abs_acct_url=f'https://{abs_acct_name}.blob.core.windows.net/'
abs_container_name='demo-cloudetl-data'
try:
# Set variables from appsettings configurations/Environment Variables.
key_vault_name = os.environ["KEY_VAULT_NAME"]
key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
blob_secret_name = os.environ["ABS_SECRET_NAME"]
# Authenticate and securely retrieve Key Vault secret for access key value.
az_credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url=key_vault_Uri, credential= az_credential)
access_key_secret = secret_client.get_secret(blob_secret_name)

# Create the blob service client from the account URL and the access key retrieved above.
abs_service_client = BlobServiceClient(account_url=abs_acct_url, credential=access_key_secret.value)
abs_container_client = abs_service_client.get_container_client(container=abs_container_name)
run_cloud_etl(
source_container_client = abs_container_client,
blob_file_list= process_file_list
)
except Exception as e:
logging.info(e)
return func.HttpResponse(
f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
status_code=200
)
Step 4: Execute the function locally and review the execution log to ensure the output is correct.
   Segment     Country  Product    Units Sold  Manufacturing Price  Sale Price  Gross Sales    Date
0  Government  Canada   Carretera  1618.5      $3.00                $20.00      "$32,370.00"   1/1/2014
1  Government  Germany  Carretera  1321        $3.00                $20.00      "$26,420.00"   1/1/2014
2  Midmarket   France   Carretera  2178        $3.00                $15.00      "$32,670.00"   6/1/2014
3  Midmarket   Germany  Carretera  888         $3.00                $15.00      "$13,320.00"   6/1/2014
4  Midmarket   Mexico   Carretera  2470        $3.00                $15.00      "$37,050.00"   6/1/2014
Step 2: Add the environment variables to the function app's application settings. You can do so in the Azure portal or with the Azure CLI, as shown below.
# Update function app's settings to include Azure Key Vault environment variable.
az functionapp config appsettings set --name CloudETLDemo --resource-group rg-cloudetl-demo --settings "KEY_VAULT_NAME=kv-cloudetl-demo"

# Update function app's settings to include Azure Blob Storage Access Key in Azure Key Vault secret environment variable.
az functionapp config appsettings set --name CloudETLDemo --resource-group rg-cloudetl-demo --settings "ABS_SECRET_NAME=abs-access-key1"
Step 3: To invoke the HTTP Trigger function in Azure, make an HTTP request using the function URL in a
browser or with a tool like 'curl'.
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter ?name=Functions. The browser should display similar output as
when you ran the function locally.
https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?name=Functions
or
Run 'curl' with the Invoke URL, appending the parameter ?name=Functions. The output of the command
should be the text, "Hello Functions."

curl -s "https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?name=Functions"
Next Step
Next: Process relational data for analytics >>>
Transform relational data with Pandas and Azure
Function Apps
In this article, you'll use the Pandas Python library in a serverless function to prepare relational data and start to
build out a data lake.
The 'Transform' stage handles data cleansing, validation, and business logic implementation required for later
analysis.
Some essential tasks are to compile, convert, reformat, validate, and cleanse the data in a 'staging' or 'data
landing zone' before loading it into the targeted analytic data store.
Source data is often captured in a format that isn't ideal for data analytics. That's why the data must be cleansed
and manipulated to address any data issues. By taking this step, you increase the integrity of your data, leading to
insights of higher quality.
There are different kinds of data problems that can occur in any data processing pipeline. This article addresses
a few common problems and provides solutions using the Python Pandas library.
Prerequisites
If you haven't already, follow all the instructions and complete the following articles to set up your local and
Azure dev environment:
Configure your local Python dev environment for Azure
Create resources
Ingest relational data
In a terminal or command prompt with a virtual environment activated, run the 'pip install' command to install
the required libraries.
IMPORTANT
Be sure to capture the following information for this article:
Azure Resource Group Name
Azure Blob Storage Account Name
Azure Key Vault URL
Also, activate the local virtual environment created in previous articles for this project.
def process_relational_data(df):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    return processed_df
Step 2: Add the below code to filter out the unneeded columns from the DataFrame.
def process_relational_data(df, columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Filter the DataFrame down to the needed columns (assumed from the 'columns' parameter)
    processed_df = processed_df[columns]

    return processed_df
Step 3: Add the below code to clean the column values in the DataFrame.
return processed_df
# Convert column to datetime: attempt to infer date format, return NA where conversion fails.
processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True, errors='coerce')

return processed_df
Step 2: Add the below code to standardize the currency columns with special characters in the
DataFrame.
# Convert column to datetime: attempt to infer date format, return NA where conversion fails.
processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True, errors='coerce')

# Convert object/string to numeric and handle special characters for each currency column
processed_df['gross_sales'] = processed_df['gross_sales'].replace({'\$': '', ',': ''}, regex=True).astype(float)

return processed_df
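As a quick standalone illustration of the currency-cleaning idiom used above:

import pandas as pd

s = pd.Series(['$32,370.00', '$26,420.00'])
print(s.replace({r'\$': '', ',': ''}, regex=True).astype(float))
# 0    32370.0
# 1    26420.0
# dtype: float64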
Step 2: Add the below code to the demo_relational_data_cloudetl function to aggregate the DataFrame
based on the business requirements.
def process_relational_data(df, columns, groupby_columns):
    # Remove leading and trailing whitespace in df column names
    processed_df = df.rename(columns=lambda x: x.strip())

    # Filter DataFrame (df) to only include the specified columns
    processed_df = processed_df[columns]

    # Convert column to datetime: attempt to infer date format, return NA where conversion fails.
    processed_df['date'] = pd.to_datetime(processed_df['date'], infer_datetime_format=True, errors='coerce')

    # Convert object/string to numeric and handle special characters for each currency column
    processed_df['gross_sales'] = processed_df['gross_sales'].replace({r'\$': '', ',': ''}, regex=True).astype(float)

    # Get Gross Sales per Segment, Country, Sale Year, and Sale Month
    processed_df = processed_df.sort_values(by=['sale_year', 'sale_month']).groupby(
        groupby_columns, as_index=False).agg(
        total_units_sold=('units_sold', 'sum'), total_gross_sales=('gross_sales', 'sum'))

    return processed_df
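To make the aggregation concrete, here's a hypothetical two-row input and the call that collapses it into one row per group (column names follow the tutorial's sample data; the values are made up):

import pandas as pd

df = pd.DataFrame({
    'segment': ['Government', 'Government'],
    'country': ['Canada', 'Canada'],
    'sale_year': [2014, 2014],
    'sale_month': [6, 6],
    'units_sold': [1618.5, 1321.0],
    'gross_sales': ['$32,370.00', '$26,420.00'],
    'date': ['2014-06-01', '2014-06-15'],
})

result = process_relational_data(
    df,
    columns=list(df.columns),
    groupby_columns=['segment', 'country', 'sale_year', 'sale_month'],
)
print(result)
# One row per group: total_units_sold=2939.5, total_gross_sales=58790.0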
Step 2: Add the below code to the demo_relational_data_cloudetl function to integrate data processing
into the overall Cloud ETL solution.
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    arg_date = '2014-07-01'
    std_date_format = '%Y-%m-%d'

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        abs_acct_name = 'stcloudetldemodata'
        abs_acct_url = f'https://{abs_acct_name}.blob.core.windows.net/'
        abs_container_name = 'demo-cloudetl-data'
        archive_container_name = 'demo-cloudetl-archive'

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

        # Create the blob service client with the access key retrieved from Key Vault.
        abs_service_client = BlobServiceClient(account_url=abs_acct_url, credential=access_key_secret.value)
        abs_container_client = abs_service_client.get_container_client(container=abs_container_name)

        # process_file_list, cols, and groupby_cols come from the ingestion and
        # column-configuration code in the previous article (omitted here).
        run_cloud_etl(
            source_container_client = abs_container_client,
            blob_file_list = process_file_list,
            columns = cols,
            groupby_columns = groupby_cols,
            service_client = abs_service_client,
            storage_account_url = abs_acct_url,
            source_container = abs_container_name,
            archive_container = archive_container_name
        )

    except Exception as e:
        logging.info(e)

        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )
Step 2: To invoke the HTTP Trigger function in Azure, make an HTTP request using the function URL in a
browser or with a tool like 'curl'.
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter ?name=Functions. The browser should display a similar outcome as
when you ran the function locally.
https://msdocs-azurefunctions.azurewebsites.net/api/ingest_relational_data?name=Functions
or
Run 'curl' with the Invoke URL, appending the query parameter ?name=Functions. The output of the command
should be the text "Hello Functions."
curl -s "https://msdocs-azurefunctions.azurewebsites.net/api/ingest_relational_data?name=Functions"
Next Step
Next: Load and archive processed relational data >>>
Load relational data into Azure Data Lake Storage
with Azure Functions
10/28/2022 • 7 minutes to read • Edit Online
This article loads processed data into Azure Data Lake Storage Gen 2 using a serverless Python function. The
data is then archived using Azure Blob Storage access tiers.
The final step of our solution loads the now-processed data into the target data store. The data can be loaded
using a row-by-row approach or, ideally, a bulk insert/load process.
TIP
Use bulk loading/bulk insert functions to load well-transformed data.
Use manual/individual inserts for questionable datasets.
Prerequisites
An Azure subscription. If you don't have one, create one for free before you begin.
The Azure Functions Core Tools version 3.x
Visual Studio Code on one of the supported platforms.
The PowerShell extension for Visual Studio Code
The Azure Functions extension for Visual Studio Code
Python 3.7 or later installed
azure-storage-file-datalake
azure-identity
azure-storage-blob
azure-keyvault-secrets
azure-functions
azure-mgmt-storage
pandas
pyarrow
fastparquet
In a terminal or command prompt with a virtual environment activated, run the 'pip install' command to
install the required libraries.
3. Load processed relational data into Azure Data Lake Storage Gen 2
Once the data is transformed into a format ideal for analysis, load the data into an analytical data store. The data
store can be a database system, data warehouse, data lake, or Hadoop. Each destination has different
approaches for loading data reliably and with optimized performance. The data can then be used for analysis
and business intelligence.
This article loads the transformed data into Azure Data Lake Storage (ADLS) Gen 2. As previously discussed,
ADLS is the recommended data storage solution for analytic workloads. Various compute and analytic Azure
services can easily connect to Azure Data Lake Storage Gen 2.
Step 1: Open the '__init__.py' file of the demo_relational_data_cloudetl function and add the below
helper function to load a DataFrame to ADLS Gen 2.

# Note: the function signature and the file client creation are reconstructed here
# to match how this helper is called later in this article.
def write_dataframe_to_datalake(df, datalake_service_client, filesystem_name, dir_name, filename):
    file_path = f'{dir_name}/{filename}'

    # Create a file client for the destination path, then upload the DataFrame as Parquet bytes.
    file_client = datalake_service_client.get_file_client(filesystem_name, file_path)

    processed_df = df.to_parquet(index=False)
    file_client.upload_data(data=processed_df, overwrite=True, length=len(processed_df))
    file_client.flush_data(len(processed_df))

    return True
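A hypothetical call to this helper, using the storage account and filesystem names configured later in this article (the client construction is a sketch, and DefaultAzureCredential assumes your environment is already authenticated and authorized for the storage account):

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

adls_service_client = DataLakeServiceClient(
    account_url='https://dlscloudetldemo.dfs.core.windows.net/',
    credential=DefaultAzureCredential()
)

# processed_df is the DataFrame returned by process_relational_data().
write_dataframe_to_datalake(
    processed_df,
    adls_service_client,
    filesystem_name='processed-data-demo',
    dir_name='finance_data',
    filename='financial_demo.parquet'
)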
Step 2: Add the below code to create a function to hold any code relevant to loading relational data in
our solution.
def load_relational_data(processed_df, datalake_service_client, filesystem_name, dir_name, file_format, file_prefix):
    now = datetime.today().strftime("%Y%m%d_%H%M%S")
    processed_filename = f'{file_prefix}_{now}.{file_format}'

    write_dataframe_to_datalake(processed_df, datalake_service_client, filesystem_name, dir_name, processed_filename)

    return True
# Body of the archiving helper (the enclosing function definition was omitted from
# this listing; blob_name and source_blob_url identify the blob being archived).

# Copy source blob file to archive container and change blob access tier to 'Cool'
archive_blob_client = blob_service_client.get_blob_client(archive_container, blob_name)
archive_blob_client.start_copy_from_url(source_url=source_blob_url,
    standard_blob_tier=StandardBlobTier.Cool)

# Delete the source blob (and its snapshots) after a successful copy.
(blob_service_client.get_blob_client(source_container, blob_name)).delete_blob(delete_snapshots='include')

return True
Step 2: Add the below code to the demo_relational_data_cloudetl function to integrate data archiving into
the overall Cloud ETL run.
The complete file brings together the helper functions shown earlier in this article. At the top of the file are
the import statements:

import logging
import os

import pandas as pd
import pyarrow
import fastparquet

from io import StringIO
from datetime import datetime, timedelta

# The clients and types used in this file also require the following imports
# (inferred from the code shown in this article):
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient, StandardBlobTier
from azure.storage.filedatalake import DataLakeServiceClient

The imports are followed by the helper functions shown earlier (write_dataframe_to_datalake,
load_relational_data, the archiving helper, and process_relational_data), and then by the updated main
function:
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Parameters/Configurations
    arg_date = '2014-07-01'
    std_date_format = '%Y-%m-%d'
    processed_file_format = 'parquet'
    processed_file_prefix = 'financial_demo'

    try:
        # Set variables from appsettings configurations/Environment Variables.
        key_vault_name = os.environ["KEY_VAULT_NAME"]
        key_vault_Uri = f"https://{key_vault_name}.vault.azure.net"
        blob_secret_name = os.environ["ABS_SECRET_NAME"]

        abs_acct_name = 'stcloudetldemodata'
        abs_acct_url = f'https://{abs_acct_name}.blob.core.windows.net/'
        abs_container_name = 'demo-cloudetl-data'
        archive_container_name = 'demo-cloudetl-archive'

        adls_acct_name = 'dlscloudetldemo'
        adls_acct_url = f'https://{adls_acct_name}.dfs.core.windows.net/'
        adls_fsys_name = 'processed-data-demo'
        adls_dir_name = 'finance_data'
        adls_secret_name = 'adls-access-key1'

        # Authenticate and securely retrieve Key Vault secret for access key value.
        az_credential = DefaultAzureCredential()
        secret_client = SecretClient(vault_url=key_vault_Uri, credential=az_credential)
        access_key_secret = secret_client.get_secret(blob_secret_name)

        # Create the blob service client with the access key retrieved from Key Vault.
        abs_service_client = BlobServiceClient(account_url=abs_acct_url, credential=access_key_secret.value)
        abs_container_client = abs_service_client.get_container_client(container=abs_container_name)

        adls_service_client = DataLakeServiceClient(
            account_url = adls_acct_url,
            credential = az_credential
        )

        # process_file_list, cols, and groupby_cols come from the ingestion and
        # column-configuration code in the previous articles (omitted here).
        run_cloud_etl(
            source_container_client = abs_container_client,
            blob_file_list = process_file_list,
            columns = cols,
            groupby_columns = groupby_cols,
            datalake_service_client = adls_service_client,
            filesystem_name = adls_fsys_name,
            dir_name = adls_dir_name,
            file_format = processed_file_format,
            file_prefix = processed_file_prefix,
            service_client = abs_service_client,
            storage_account_url = abs_acct_url,
            source_container = abs_container_name,
            archive_container = archive_container_name
        )

    except Exception as e:
        logging.info(e)

        return func.HttpResponse(
            f"!! This HTTP triggered function executed unsuccessfully. \n\t {e} ",
            status_code=200
        )
Step 2: To invoke the HTTP Trigger function in Azure, make an HTTP request using the function URL in a
browser or with a tool like 'curl'.
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter ?name=Functions. The browser should display output similar to
when you ran the function locally.
https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?name=Functions
or
Run 'curl' with the Invoke URL, appending the query parameter ?name=Functions. The output of the command
should be the text "Hello Functions."
curl -s "https://msdocs-azurefunctions.azurewebsites.net/api/demo_relational_data_cloudetl?
name=Functions"
7. Clean up resources
When no longer needed, remove the resource group and all related resources:
Run az group delete to delete the Azure Resource Group.
The following articles help you get started with various data solutions on Azure.
SQL databases
PostgreSQL :
Use Python to connect and query data in Azure Database for PostgreSQL
Run a Python (Django or Flask) web app with PostgreSQL in Azure App Service
MySQL :
Use Python to connect and query data with Azure Database for MySQL
Azure SQL :
Use Python to query an Azure SQL database
MariaDB :
How to connect applications to Azure Database for MariaDB
The following articles help you get started with various machine learning options on Azure:
Get started creating your first ML experiment with the Python SDK
Train your first ML model
Train image classification models with MNIST data and scikit-learn using Azure Machine Learning
Auto-train an ML model
Access datasets with Python using the Azure Machine Learning Python client library
Configure automated ML experiments in Python
Deploy a data pipeline with Azure DevOps
Create and run machine learning pipelines with Azure Machine Learning SDK
Overview of Python Container Apps in Azure
10/28/2022 • 15 minutes to read • Edit Online
This article describes how to go from Python project code (for example, a web app) to a deployed Docker
container in Azure. Discussed are the general process of containerization, deployment options for containers in
Azure, and Python-specific configuration of containers in Azure.
The nature of Docker containers is that creating a Python Docker image from code and deploying that image to
a container in Azure is similar across programming languages. The language-specific considerations - Python in
this case - are in the configuration during the containerization process in Azure, in particular the Dockerfile
structure and configuration supporting Python web frameworks such as Django, Flask, and FastAPI.
There are three common workflows, depending on where you build the Docker image:

Dev: Build Python Docker images in your dev environment. Code: git clone code to the dev environment
(with Docker installed).

Hybrid: From your dev environment, build Python Docker images in Azure. Code: git clone code to the dev
environment (Docker doesn't need to be installed).

Azure: All in the cloud; use Azure Cloud Shell to build Python Docker images from a GitHub repo. Code: git
clone the GitHub repo to Azure Cloud Shell. Build: In Azure Cloud Shell, use the Azure CLI or Docker CLI.
The end goal of these workflows is to have a container running in one of the Azure resources supporting Docker
containers as listed in the next section.
A dev environment can be your local workstation with Visual Studio Code or PyCharm, Codespaces (a
development environment that's hosted in the cloud), or Visual Studio Dev Containers (a container as a
development environment).
TIP
With containers, virtual environments aren't needed unless you're using them for testing or other reasons. If you use
virtual environments, don't copy them into the Docker image. Use the .dockerignore file to exclude them.
You can think of Docker containers as providing capabilities similar to virtual environments, but with further
improvements in reproducibility and portability. A Docker container can run anywhere containers can be run,
regardless of OS.
A Docker container contains your Python project code and everything that code needs to run. To get to that
point, you need to build your Python project code into a Docker image, and then create a container, a runnable
instance of that image.
For containerizing Python projects, the key files are:

requirements.txt: Used during the building of the Docker image to get the correct dependencies into the
image.

Dockerfile: Used to specify how to build the Python Docker image. For more information, see the section
Dockerfile instructions for Python.
Typical default ports for Python web frameworks are 8000 for Django and 5000 for Flask.
The following list shows how to set the web app port for different Azure container solutions:
Web App for Containers: By default, App Service assumes your custom container is listening on either port 80
or port 8080. If your container listens to a different port, set the WEBSITES_PORT app setting in your App
Service app. For more information, see Configure a custom container for Azure App Service.

Azure Container Apps: Azure Container Apps allows you to expose your container app to the public web, to
your VNET, or to other container apps within your environment by enabling ingress. Set the ingress
targetPort to the port your container listens to for incoming requests. The application ingress endpoint is
always exposed on port 443. For more information, see Set up HTTPS or TCP ingress in Azure Container Apps.

Azure Container Instances, Azure Kubernetes Service: Set the port during creation of a container. You need to
ensure your solution has a web framework, application server (for example, gunicorn, uvicorn), and web
server (for example, nginx). For example, you can create two containers, one container with a web framework
and application server, and another container with a web server. The two containers communicate on one
port, and the web server container exposes 80/443 for external requests.
Python Dockerfile
A Dockerfile is a text file that contains instructions for building a Docker image. The first line states the base
image to begin with. This line is followed by instructions to install required programs, copy files, and other
instructions to create a working environment. For example, the list below shows some Python-specific
examples of key Dockerfile instructions.
RUN: Runs a command inside the Docker image; for example, to pull in dependencies. The command runs
once at build time. Example: RUN python -m pip install -r requirements.txt

CMD: Provides the default for executing a container. There can only be one CMD instruction. Example:
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "wsgi:app"]
The Docker build command builds Docker images from a Dockerfile and a context. A build’s context is the set of
files located in the specified path or URL. Typically, you'll build an image from the root of your Python project
and the path for the build command is "." as shown in the following example.
The build process can refer to any of the files in the context. For example, your build can use a COPY instruction
to reference a file in the context. Here's an example of a Dockerfile for a Python project using the Flask
framework:
FROM python:3.8-slim

EXPOSE 5000

# Install pip requirements (see the RUN example above).
COPY requirements.txt .
RUN python -m pip install -r requirements.txt

WORKDIR /app
COPY . /app

# Creates a non-root user with an explicit UID and adds permission to access the /app folder.
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser

# Provides defaults for an executing container; can be overridden with Docker CLI.
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "wsgi:app"]
You can create a Dockerfile by hand or create it automatically with VS Code and the Docker extension. For more
information, see Generating Docker files.
The Docker build command is part of the Docker CLI. When you use IDEs like VS Code or PyCharm, the UI
commands for working with Docker images call the build command for you and automate specifying options.
For more information about this scenario, see Build and test a containerized Python web app locally.
Environment variables in containers
Python projects often make use of environment variables to pass data to code. For example, you might specify
database connection information in an environment variable so that it can be easily changed during testing. Or,
when deploying the project to production, the database connection can be changed to refer to a production
database instance.
Packages like python-dotenv are often used to read key-value pairs from an .env file and set them as
environment variables. An .env file is useful when running in a virtual environment but isn't recommended
when working with containers. Don't copy the .env file into the Docker image, especially if it contains
sensitive information and the container will be made public. Use the .dockerignore file to exclude files
from being copied into the Docker image. For more information, see the section Virtual environments and
containers in this article.
You can pass environment variables to containers in a few ways:
1. Defined in the Dockerfile as ENV instructions.
2. Passed in as --build-arg arguments with the Docker build command.
3. Passed in as --secret arguments with the Docker build command and BuildKit backend.
4. Passed in as --env or --env-file arguments with the Docker run command.
The first two options have the same drawback as noted above with .env files, namely that you're hardcoding
potentially sensitive information into a Docker image. You can inspect a Docker image and see the environment
variables, for example, with the command docker image inspect.
The third option with BuildKit allows you to pass secret information to be used in the Dockerfile for building
docker images in a safe way that won't end up stored in the final image.
The fourth option of passing in environment variables with the Docker run command means the Docker image
doesn't contain the variables. However, the variables are still visible by inspecting the container instance (for
example, with docker container inspect). This option may be acceptable when access to the container instance is
controlled or in testing or dev scenarios.
Here's an example of passing environment variables using the Docker CLI run command and using the --env
argument.
# PORT=8000 for Django and 5000 for Flask
export PORT=<port-number>

# Run the image built earlier in this article, passing PORT into the container's environment.
docker run --rm -it --env PORT=$PORT -p $PORT:$PORT msdocspythoncontainerwebapp:latest
If you're using VS Code or PyCharm, the UI options for working with images and containers ultimately use
Docker CLI commands like the one shown above.
Finally, specifying environment variables when deploying a container in Azure is different than using
environment variables in your dev environment. For example:
For Web App for Containers, you configure application settings during configuration of App Service.
These settings are available to your app code as environment variables and accessed using the standard
os.environ pattern (see the sketch after this list). You can change values after initial deployment when
needed. For more information, see Access app settings as environment variables.
For Azure Container Apps, you configure environment variables during initial configuration of the
container app. Subsequent modification of environment variables creates a revision of the container. In
addition, Azure Container Apps allows you to define secrets at the application level and then reference
them in environment variables. For more information, see Manage secrets in Azure Container Apps.
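A minimal sketch of the os.environ pattern mentioned for App Service above (the setting name CONNECTION_STRING is illustrative, not a required name):

import os

# Read an application setting that App Service exposes as an environment variable.
conn_str = os.environ.get("CONNECTION_STRING")
if conn_str is None:
    raise RuntimeError("CONNECTION_STRING app setting is not configured")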
As another option, you can use Service Connector to help you connect Azure compute services to other backing
services. This service configures the network settings and connection information (for example, generating
environment variables) between compute services and target backing services in the management plane.
The following shows how to access logs in the Azure portal for the Azure services used in this article:

Web App for Containers: Go to the Diagnose and solve problems resource to view logs. Diagnostics is an
intelligent and interactive experience to help you troubleshoot your app with no configuration required. For a
real-time view of logs, go to the Monitoring - Log stream. For more detailed log queries and configuration,
see the other resources under Monitoring.
For the same services, you can also access logs with Azure CLI commands. There's also support for viewing
logs in VS Code, if you have the Azure Tools for VS Code extension installed.
Next steps
Deploy a containerized Python web app in Azure App Service
Deploy a containerized Python web app in Azure Container Apps
Overview: Containerized Python web app on Azure
10/28/2022 • 3 minutes to read • Edit Online
This tutorial shows you how to containerize a Python web app and deploy it to Azure. The single container web
app is hosted in Azure App Service and uses Azure Cosmos DB for MongoDB to store data. App Service Web
App for Containers allows you to focus on composing your containers without worrying about managing and
maintaining an underlying container orchestrator. When building web apps, Azure App Service is a good option
for taking your first steps with containers. For more information about using containers in Azure, see
Comparing Azure container options.
In this tutorial you will:
Build and run a Docker container locally. This step is optional.
Build a Docker container image directly in Azure.
Configure an App Service to create a web app based on the Docker container image.
Following this tutorial, you'll have the basis for Continuous Integration (CI) and Continuous Deployment (CD) of
a Python web app to Azure.
Service overview
The service diagram supporting this tutorial shows two environments (developer environment and Azure) and
the different Azure services used in the tutorial.
The components supporting this tutorial and shown in the diagram above are:
Azure App Service
The underlying App Service functionality that enables containerization is Web App for Containers.
Azure App Service uses the Docker container technology to host both built-in images and custom
images. In this tutorial, you'll build an image from Python code and deploy it to Web App for
Containers.
Web App for Containers uses a webhook in the registry to get notified of new images. A push of a
new image to the repository triggers App Service to pull the image and restart.
Azure Container Registry
Azure Container Registry enables you to work with Docker images and their components in Azure. It
provides a registry that's close to your deployments in Azure and that gives you control over
access, making it possible to use your Azure Active Directory groups and permissions.
In this tutorial, the registry source is Azure Container Registry, but you can also use Docker Hub or
a private registry with minor modifications.
Azure Cosmos DB for MongoDB
Azure Cosmos DB for MongoDB is a NoSQL database used in this tutorial to store data.
Access to the Azure Cosmos DB resource is via a connection string, which is passed as an environment
variable to the containerized app.
Authentication
In this tutorial, you'll build a Docker image (either locally or directly in Azure) and deploy it to Azure App Service.
The App Service pulls the container image from an Azure Container Registry repository.
The App Service uses managed identity to pull images from Azure Container Registry. Managed identity allows
you to grant permissions to the web app so that it can access other Azure resources without the need to specify
credentials. Specifically, this tutorial uses a system assigned managed identity. Managed identity is configured
during setup of App Service to use a registry container image.
The tutorial sample web app uses MongoDB to store data. The sample code connects to Azure Cosmos DB via a
connection string.
Prerequisites
To complete this tutorial, you'll need:
An Azure account where you can create:
Azure Container Registry
Azure App Service
Azure Cosmos DB for MongoDB (or access to an equivalent). To create an Azure Cosmos DB for
MongoDB database, you can use the steps for Azure portal, Azure CLI, PowerShell, or VS Code. The
sample tutorial requires that you specify a MongoDB connection string, a database name, and a
collection name.
Visual Studio Code or Azure CLI, depending on what tool you'll use.
For Visual Studio Code, you'll need the Docker extension and Azure App Service extension.
Python packages:
PyMongo for connecting to MongoDB.
Flask or Django as a web framework.
Docker installed locally if you want to run the container locally.
Sample app
The Python sample app is a restaurant review app that saves restaurant and review data in MongoDB. For an
example of a web app using PostgreSQL, see Deploy a Python web app to Azure with managed identity.
At the end of the tutorial, you'll have a restaurant review app deployed and running in Azure that looks like the
screenshot below.
Next step
Build and test locally
Build and test a containerized Python web app
locally
10/28/2022 • 8 minutes to read • Edit Online
This article is part of a tutorial about how to containerize and deploy a containerized Python web app to Azure
App Service. App Service enables you to run containerized web apps and deploy through continuous
integration/continuous deployment (CI/CD) capabilities with Docker Hub, Azure Container Registry, and Visual
Studio Team Services. In this part of the tutorial, you learn how to build and run the containerized Python web
app locally. This step is optional and isn't required to deploy the sample app to Azure.
Running a Docker image locally in your development environment requires setup beyond deployment to Azure.
Think of it as an investment that can make future development cycles easier, especially when you move beyond
sample apps and you start to create your own web apps. To deploy the sample apps for Django and Flask, you
can skip this step and go to the next step in this tutorial. You can always return after deploying to Azure and
work through these steps.
The service diagram shown below highlights the components covered in this article.
# Django
git clone https://github.com/Azure-Samples/msdocs-python-django-container-web-app.git
# Flask
git clone https://github.com/Azure-Samples/msdocs-python-flask-container-web-app.git
Then navigate into that folder:
# Django
cd msdocs-python-django-container-web-app
# Flask
cd msdocs-python-flask-container-web-app
These instructions require Visual Studio Code and the Docker extension. Go to the sample folder you cloned or
downloaded and open VS Code with the command code . .
At this point, you have built an image locally. The image you created has the name
"msdocspythoncontainerwebapp" and tag "latest". Tags are a way to define version information, intended use,
stability, or other information. For more information, see Recommendations for tagging and versioning
container images.
Images that are built from VS Code or from using the Docker CLI directly can also be viewed with the Docker
Desktop application.
3. Set up MongoDB
This tutorial assumes you have MongoDB installed locally or you have MongoDB hosted in Azure or elsewhere
that you have access to. Don't use a MongoDB database you'll use in production.
Local MongoDB
Azure Cosmos DB for MongoDB
Check that MongoDB is installed:

mongo --version

In the mongod configuration file, add your computer's local IP address to the bindIp setting:

net:
  port: 27017
  bindIp: 127.0.0.1,<local-ip-address>

Then test the connection in the MongoDB shell:

> help
> use restaurants_reviews
> db.restaurants_reviews.insertOne()
> show dbs
> exit
At this point, your local MongoDB connection string is "mongodb://127.0.0.1:27017/", the database name is
"restaurants_reviews", and the collection name is "restaurants_reviews".
VS Code
Docker CLI
NOTE
Both the database name and collection name are
assumed to be restaurants_reviews .
TIP
You can also run the container selecting a run or debug configuration. The Docker extension tasks in tasks.json are called
when you run or debug. The task called depends on what launch configuration you select. For the task "Docker: Python
(MongoDB local)", specify <YOUR-IP-ADDRESS>. For the task "Docker: Python (MongoDB Azure)", specify
<CONNECTION-STRING>.
You can also start a container from an image and stop it with the Docker Desktop application.
Next step
Build a container image in Azure
Build a containerized Python web app in the cloud
10/28/2022 • 6 minutes to read • Edit Online
This article is part of a tutorial about how to containerize and deploy a Python web app to Azure App Service.
App Service enables you to run containerized web apps and deploy through continuous integration/continuous
deployment (CI/CD) capabilities with Docker Hub, Azure Container Registry, and Visual Studio Team Services. In
this part of the tutorial, you learn how to build the containerized Python web app in the cloud.
In the previous optional part of this tutorial, a container image was built and run locally. In contrast, in this part
of the tutorial, you'll build (containerize) a Python web app into a Docker image directly in Azure Container
Registry. Building the image in Azure is typically faster and easier than building locally and then pushing the
image to a registry. Also, building in the cloud doesn't require Docker to be running in your dev environment.
Once the Docker image is in Azure Container Registry, it can be deployed to Azure App Service.
The service diagram shown below highlights the components covered in this article.
Sign in to the Azure portal and follow these steps to create an Azure Container Registry.
az acr build \
-r <registry-name> \
-g <resource-group> \
-t msdocspythoncontainerwebapp:latest \
<repo-path>
The last argument in the command is the fully qualified path to the repo. Use https://github.com/Azure-
Samples/msdocs-python-django-container-web-app.git for the Django sample app and
https://github.com/Azure-Samples/msdocs-python-flask-container-web-app.git for the Flask sample app.
The command above is for Bash shell. If you use PowerShell as your shell, change the line continuation character
from backslash ("\") to backtick ("`").
Step 3. Confirm the container image was created with the az acr repository list command.
Next step
Deploy web app
Deploy a containerized Python app to App Service
10/28/2022 • 11 minutes to read • Edit Online
This article is part of a tutorial about how to containerize and deploy a Python web app to Azure App Service.
App Service enables you to run containerized web apps and deploy through continuous integration/continuous
deployment (CI/CD) capabilities with Docker Hub, Azure Container Registry, and Visual Studio Team Services.
In this part of the tutorial, you learn how to deploy the containerized Python web app to App Service using the
App Service Web App for Containers. Web App for Containers allows you to focus on composing your
containers without worrying about managing and maintaining an underlying container orchestrator.
Following the steps here, you'll end up with an App Service website using a Docker container image. The App
Service pulls the initial image from Azure Container Registry using managed identity for authentication.
The service diagram shown below highlights the components covered in this article.
Sign in to the Azure portal and follow these steps to create the web app.
Azure portal
VS Code
Azure CLI
5. Troubleshoot deployment
If you don't see the sample app, try the following steps.
With container deployment and App Service, always check the Deployment Center / Logs page in the
Azure portal. Confirm that the container was pulled and is running. The initial pull and running of the
container can take a few moments.
Try to restart the App Service and see if that resolves your issue.
If there are programming errors, those errors will show up in the application logs. On the Azure portal page
for the App Service, select Diagnose and solve problems /Application logs .
The sample app relies on a connection to MongoDB. Confirm that the App Service has application settings
with the correct connection info.
Confirm that managed identity is enabled for the App Service and is used in the Deployment Center. On the
Azure portal page for the App Service, go to the App Service Deployment Center resource and confirm
that Authentication is set to Managed Identity .
Check that the webhook is defined in the Azure Container Registry. The webhook enables the App Service to
pull the container image. In particular, check that Service URI ends with "/api/registry/webhook".
Different Azure Container Registry SKUs have different features, including the number of webhooks. If you're
reusing an existing registry, you could see the message: "Quota exceeded for resource type webhooks for the
registry SKU Basic. Learn more about different SKU quotas and upgrade process: https://aka.ms/acr/tiers". If
you see this message, use a new registry, or reduce the number of registry webhooks in use.
Next step
Clean up resources
Containerize tutorial cleanup and next steps
10/28/2022 • 2 minutes to read • Edit Online
This article is part of a tutorial about how to containerize and deploy a Python web app to Azure App Service. In
this article, you'll clean up resources used in Azure so you don't incur other charges and help keep your Azure
subscription uncluttered. You can leave the Azure resources running if you want to use them for further
development work.
1. Clean up resources
In this tutorial, all the Azure resources were created in the same resource group. Removing the resource group
removes all resources in the resource group and is the fastest way to remove all Azure resources used for your
app.
Azure portal
VS Code
Azure CLI
Sign in to the Azure portal and follow these steps to delete a resource group.
2. Next steps
After completing this tutorial, here are some next steps you can take to build upon what you learned and move
the tutorial code and deployment closer to production ready:
Deploy a web app from a geo-replicated Azure container registry
Review Security in Azure Cosmos DB
To map a custom DNS name to your app, see Tutorial: Map custom DNS name to your app.
To monitor App Service for availability, performance, and operation, see Monitoring App Service and Set up
Azure Monitor for your Python application.
To enable continuous deployment to Azure App Service, see Continuous deployment to Azure App Service,
Use CI/CD to deploy a Python web app to Azure App Service on Linux, and Design a CI/CD pipeline using
Azure DevOps.
Create reusable infrastructure as code with Azure Developer CLI (azd) Preview.
This tutorial shows you how to containerize a Python web app and deploy it to Azure Container Apps. A sample
web app will be containerized and the Docker image stored in Azure Container Registry. Azure Container Apps
is configured to pull the Docker image from Container Registry and create a container. The sample app connects
to an Azure Database for PostgreSQL to demonstrate communication between Container Apps and other Azure
resources.
There are several options to build and deploy cloud native and containerized Python web apps on Azure. This
tutorial covers Azure Container Apps. Container Apps are good for running general purpose containers,
especially for applications that span many microservices deployed in containers. In this tutorial, you'll create one
container. To deploy a Python web app as a container to Azure App Service, see Containerized Python web app
on App Service.
In this tutorial you'll:
Build a Docker image from a Python web app and store the image in Azure Container Registry.
Configure Azure Container Apps to host the Docker image.
Set up a GitHub Action that updates the container with a new Docker image triggered by changes to your
GitHub repository. This last step is optional.
Following this tutorial, you'll be set up for Continuous Integration (CI) and Continuous Deployment (CD) of a
Python web app to Azure.
Service overview
The service diagram supporting this tutorial shows how your local environment, GitHub repositories, and Azure
services are used in the tutorial.
The components supporting this tutorial and shown in the diagram above are:
Azure Container Apps
Azure Container Apps enables you to run microservices and containerized applications on a serverless
platform. A serverless platform means that you enjoy the benefits of running containers with minimal
configuration. With Azure Container Apps, your applications can dynamically scale based on
characteristics such as HTTP traffic, event-driven processing, or CPU or memory load.
Container Apps pulls Docker images from Azure Container Registry. Changes to container images
trigger an update to the deployed container. You can also configure GitHub Actions to trigger updates.
Azure Container Registry
Azure Container Registry enables you to work with Docker images in Azure. Because Container
Registry is close to your deployments in Azure, you have control over access, making it possible to use
your Azure Active Directory groups and permissions to control access to Docker images.
In this tutorial, the registry source is Azure Container Registry, but you can also use Docker Hub or a
private registry with minor modifications.
Azure Database for PostgreSQL
The sample code stores application data in a PostgreSQL database.
The container app connects to PostgreSQL through environment variables configured explicitly or
with Azure Service Connector.
GitHub
The sample code for this tutorial is in a GitHub repo that you'll fork and clone locally. To set up a CI/CD
workflow with GitHub Actions, you'll need a GitHub account.
You can still follow along with this tutorial without a GitHub account, working locally or in the Azure
Cloud Shell to build the container image from the sample code repo.
Prerequisites
To complete this tutorial, you'll need:
An Azure account where you can create:
Azure Container Registry
Azure Container Apps environment
Azure Database for PostgreSQL
Visual Studio Code or Azure CLI, depending on what tool you'll use
For Visual Studio Code, you'll need the Container Apps extension.
You can also use Azure CLI through the Azure Cloud Shell.
Python packages:
psycopg2-binary for connecting to PostgreSQL.
Flask or Django web framework.
Sample app
The Python sample app is a restaurant review app that saves restaurant and review data in PostgreSQL. At the
end of the tutorial, you'll have a restaurant review app deployed and running in Azure Container Apps that looks
like the screenshot below.
Next step
Build and deploy to Azure Container Apps
Build and deploy a Python web app with Azure
Container Apps and PostgreSQL
10/28/2022 • 21 minutes to read • Edit Online
This article is part of a tutorial about how to containerize and deploy a Python web app to Azure Container
Apps. Container Apps enables you to deploy containerized apps without managing complex infrastructure.
In this part of the tutorial, you learn how to containerize and deploy a Python sample web app (Django or Flask).
Specifically, you'll build the container image in the cloud and deploy it to Azure Container Apps. You'll define
environment variables that enable the container app to connect to an Azure Database for PostgreSQL - Flexible
Server instance, where the sample app stores data.
The service diagram shown below highlights the components covered in this article: building and deploying a
container image.
NOTE
Command lines in this tutorial are shown in the Bash shell, on multiple lines for clarity. For other shell types, change the
line continuation characters as appropriate. For example, for PowerShell, use back tick ("`"). Or, remove the continuation
characters and enter the command on one line.
# Flask
git clone https://github.com/$USERNAME/msdocs-python-flask-azure-container-apps.git python-container
cd python-container
Step 1. In the Azure portal, search for "container registries" and select the Container Registries service in the
results.
Step 5. Select the Azure Cloud Shell icon in the top menu bar to finish configuration and building an image.
You can also go directly to Azure Cloud Shell.
Step 6. Use the az acr build command to build the image from the repo.
Specify <registry-name> as the name of the registry you created. For <repo-path>, choose either the Django or
Flask repo path.
After the command completes, go to the registry's Repositories resource and confirm the image shows up.
Azure portal
VS Code
Azure CLI
Step 1. In the Azure portal, search for "postgres flexible" and select the Azure Database for PostgreSQL
flexible servers service in the results.
You can use the PostgreSQL interactive terminal psql in your local environment, or in the Azure Cloud Shell,
which is also accessible in the Azure portal. When working with psql, it's often easier to use the Cloud Shell
because all the dependencies are included for you in the shell.
Step 1. Connect to the database with psql.
psql --host=<postgres-server-name>.postgres.database.azure.com \
--port=5432 \
--username=demoadmin@<postgres-server-name> \
--dbname=postgres
Where <postgres-server-name> is the name of the PostgreSQL server. The command above will prompt you for
the admin password.
If you have trouble connecting, restart the database and try again. If you're connecting from your local
environment, your IP address must be added to the firewall rule list for the database service.
Step 2. Create the database.
At the postgres=> prompt, type:

CREATE DATABASE restaurants_reviews;

The semicolon (";") at the end of the command is necessary. To verify that the database was successfully created,
use the command \c restaurants_reviews. Type \? to show help or \q to quit.
You can also connect to Azure PostgreSQL Flexible server and create a database using Azure Data Studio or any
other IDE that supports PostgreSQL.
Azure portal
VS Code
Azure CLI
Step 1. In the portal search at the top of the screen, search for "container apps" and select the Container Apps
service in the results.
Step 2. Select + Create to start the create process.
Step 3. On the Basics page, specify the basic configuration of the container app.
Resource group → Use the group created earlier and contains the Azure Container Registry.
Container app name → python-container-app.
Region → Use the same region/location as the resource group.
Container Apps Environment → Select Create new to create a new environment named python-
container-env.
Select Next: App settings to continue configuration.
Step 4. On the App settings page, continue configuring the container app.
Use quickstart image → Unselect checkbox.
Name → python-container-app.
Image Source → Select Azure Container Registry.
Registry → Select the name of the registry you created earlier.
Image name → Select pythoncontainer (the name of the image you built).
Image tag → Select latest.
HTTP Ingress → Select checkbox (enabled).
Ingress traffic → Select Accepting traffic from anywhere.
Target port → Set to 8000 for Django or 5000 for Flask.
Select Review and create to go to review page. After reviewing the settings, select Create to kick off
deployment.
Step 5. After the deployment finishes, select Go to resource .
Step 6. Create a revision of the container that contains environment variables.
Select the Containers resource of the newly created container.
Then, select Edit and deploy .
On the Create and deploy new revision page, select the name of the container image, in this case
python-container-app.
On the Edit container page, create environment variables as shown below and then select Save .
Back on the Create and deploy new revision page, select Create .
Here are the environment variables to create:
AZURE_POSTGRESQL_HOST=<postgres-server-name>.postgres.database.azure.com
AZURE_POSTGRESQL_DATABASE=restaurants_reviews
AZURE_POSTGRESQL_USERNAME=demoadmin
AZURE_POSTGRESQL_PASSWORD=<admin-password>
RUNNING_IN_PRODUCTION=1
TIP
Instead of defining environment variables as shown above, you can use Service Connector. Service Connector helps you
connect Azure compute services to other backing services by configuring connection information and generating and
storing environment variables for you. If you use a service connector, make sure you synchronize the environment
variables in the sample code to the environment variables created with Service Connector.
Step 7. Django only, migrate and create database schema. (In the Flask sample app, it's done automatically, and
you can skip this step.)
Go to the Monitoring - Console resource of the container app.
Choose a startup command and select Connect .
At the shell prompt, type python manage.py migrate .
You don't need to migrate for revisions of the container.
Step 8. Test the website.
Go to the container app's Overview resource.
Under Essentials, select Application Url to open the website in a browser.
Here's an example of the sample website after adding a restaurant and two reviews.
Troubleshoot deployment
You forgot the Application Url to access the website.
In the Azure portal, go to the Overview page of the Container App and look for the Application Url.
In VS Code, go to the Azure extension and select the Container Apps section. Expand the
subscription, expand the container environment, and when you find the container app, right-click
python-container-app and select Browse .
With the Azure CLI, use the command:
az containerapp show -g pythoncontainer-rg -n python-container-app --query
properties.configuration.ingress.fqdn
In VS Code, the Build Image in Azure task returns an error.
If you see the message "Error: failed to download context. Please check if the URL is incorrect." in the
VS Code Output window, then refresh the registry in the Docker extension. To refresh, select the
Docker extension, go to the Registries section, find the registry and select it.
If you run the Build Image in Azure task again, check to see if your registry from a previous run
exists and if so, use it.
In the Azure portal during the creation of a Container App, you see an access error that contains "Cannot
access ACR '<name>.azurecr.io'".
This error occurs when admin credentials on the ACR are disabled. To check admin status in the portal,
go to your Azure Container Registry, select the Access keys resource, and ensure that Admin user is
enabled.
Your container image doesn't appear in the Azure Container Registry.
Check the output of the Azure CLI command or VS Code Output and look for messages to confirm
success.
Check that the name of the registry was specified correctly in your build command with the Azure CLI
or in the VS Code task prompts.
Make sure your credentials haven't expired. For example, in VS Code, find the target registry in the
Docker extension and refresh. In Azure CLI, run az login .
Website returns "Bad Request (400)".
Check the PostgreSQL environment variables passed in to the container. The 400 error often indicates
that the Python code can't connect to the PostgreSQL instance.
The sample code used in this tutorial checks for the existence of the container environment variable
RUNNING_IN_PRODUCTION , which can be set to any value like "1".
Website returns "Not Found (404)".
Check the Application Url on the Overview page for the container. If the Application Url contains
the word "internal", then ingress isn't set correctly.
Check the ingress of the container. For example, in Azure portal, go to the Ingress resource of the
container and make sure HTTP Ingress is enabled and Accepting traffic from anywhere is
selected.
Website doesn't start, you see "stream timeout", or nothing is returned.
Check the logs.
In the Azure portal, go to the Container App's Revision management resource and check the
Provision Status of the container.
If "Provisioning", then wait until provisioning has completed.
If "Failed", then select the revision and view the console logs. Choose the order of the
columns to show "Time Generated", "Stream_s", and "Log_s". Sort the logs by most-
recent first and look for Python stderr and stdout messages in the "Stream_s" column.
Python 'print' output will be stdout messages.
With the Azure CLI, use the az containerapp logs show command.
If using the Django framework, check to see if the restaurants_reviews tables exist in the database. If
not, use a console to access the container and run python manage.py migrate .
Next step
Configure continuous deployment
Configure continuous deployment for a Python web
app in Azure Container Apps
10/28/2022 • 10 minutes to read • Edit Online
This article is part of a tutorial about how to containerize and deploy a Python web app to Azure Container
Apps. Container Apps enables you to deploy containerized apps without managing complex infrastructure.
In this part of the tutorial, you learn how to configure continuous deployment or delivery (CD) for the container
app. CD is part of the DevOps practice of continuous integration / continuous delivery (CI/CD), which is
automation of your app development workflow. Specifically, you use GitHub Actions for continuous deployment.
The service diagram shown below highlights the components covered in this article: configuration of CI/CD.
NOTE
Command lines in this tutorial are shown in the Bash shell, on multiple lines for clarity. For other shell types, change the
line continuation characters as appropriate. For example, for PowerShell, use back tick ("`"). Or, remove the continuation
characters and enter the command on one line.
Prerequisites
To set up continuous deployment, you'll need:
The resources and their configuration created in the previous article of this tutorial series, which includes
an Azure Container Registry and a container app in Azure Container Apps.
A GitHub account where you forked the sample code (Django or Flask) and you can connect to from
Azure Container Apps. (If you downloaded the sample code instead of forking, make sure you push your
local repo to your GitHub account.)
Optionally, Git installed in your development environment to make code changes and push to your repo
in GitHub. Alternatively, you can make the changes directly in GitHub.
Configure CD for a container
In a previous article of this tutorial, you created and configured a container app in Azure Container Apps. Part of
the configuration was pulling a Docker image from an Azure Container Registry. The container image is pulled
from the registry when creating a container revision, such as when you first set up the container app.
In the steps below, you'll set up continuous deployment, which means a new Docker image and container
revision are created based on a trigger. The trigger in this tutorial is any change to the main branch of your
repository, such as with a pull request (PR). When triggered, the workflow creates a new Docker image, pushes it
to the Azure Container Registry, and updates the container app to a new revision using the new image.
Azure portal
Azure CLI
Step 1. In the Azure portal, go to the Container App you want to configure continuous deployment for and
select the Continuous deployment resource.
GitHub
Command line
Step 1. Go to your fork of the sample repository and start in the main branch.
NOTE
We showed making a change directly in the main branch. In typical software workflows, you'll make a change in a branch
other than main and then create a pull request (PR) to merge those changes into main. PRs also kick off the workflow.
Workflow secrets
In the .github/workflows/<workflow-name>.yml workflow file that was added to the repo, you'll see
placeholders for credentials that are needed for the build and container app update jobs of the workflow. The
credential information is stored encrypted in the repository Settings under Security /Actions .
If credential information changes, you can update it here. For example, if the Azure Container Registry
passwords are regenerated, you'll need to update the REGISTRY_PASSWORD value. For more information, see
Encrypted secrets in the GitHub documentation.
OAuth authorized apps
When you set up continuous deployment, you authorize Azure Container Apps as an authorized OAuth App for
your GitHub account. Container Apps uses the authorized access to create a GitHub Actions YML file in
.github/workflows/<workflow-name>.yml. You can see your authorized apps and revoke permissions under
Integrations /Applications of your account.
Troubleshooting tips
Errors setting up a service principal with the Azure CLI az ad sp create-for-rbac command.
You receive an error containing "InvalidSchema: No connection adapters were found".
Check the shell you're running in. If you're using the Bash shell, set the MSYS_NO_PATHCONV variable as
follows: export MSYS_NO_PATHCONV=1. For more information, see the GitHub issue Unable to create
service principal with Azure CLI from git bash shell, no connection adapters were found.
You receive an error containing "More than one application have the same display name".
This error indicates the name is already taken for the service principal. Choose another name or leave
off the --name argument and a GUID will be automatically generated as a display name.
The open-source Azure libraries for Python simplify provisioning, managing, and using Azure resources from
Python application code.
Next step
We strongly recommend doing a one-time setup of your local development environment so that you can easily
use any of the Azure libraries for Python.
Set up your local dev environment >>>
Azure libraries for Python usage patterns
10/28/2022 • 7 minutes to read • Edit Online
The Azure SDK for Python is composed solely of many independent libraries, which are listed on the Python SDK
package index.
All the libraries share certain common characteristics and usage patterns, such as installation and the use of
inline JSON for object arguments.
Library installation
pip
conda
pip install retrieves the latest version of a library in your current Python environment.
You can also use pip to uninstall libraries and install specific versions, including preview versions. For more
information, see How to install Azure library packages for Python.
Asynchronous operations
Many operations that you invoke through client and management client objects (such as
ComputeManagementClient.virtual_machines.begin_create_or_update and
WebSiteManagementClient.web_apps.create_or_update ) return an object of type AzureOperationPoller[<type>]
where <type> is specific to the operation in question.
Both of these methods are asynchronous. The difference in the method names is due to version differences.
Older libraries that aren't based on azure.core typically use names like create_or_update . Libraries based on
azure.core add the begin_ prefix to method names to better indicate that they are asynchronous. Migrating old
code to a newer azure.core-based library typically means adding the begin_ prefix to method names, as most
method signatures remain the same.
In either case, an AzureOperationPoller return type means that the operation is asynchronous.
Accordingly, you must call that poller's result method to wait for the operation to finish and obtain its result.
The following code, taken from Example: Provision and deploy a web app, shows an example of using the poller
to wait for a result:
poller = app_service_client.web_apps.begin_create_or_update(RESOURCE_GROUP_NAME,
    WEB_APP_NAME,
    {
        "location": LOCATION,
        "server_farm_id": plan_result.id,
        "site_config": {
            "linux_fx_version": "python|3.8"
        }
    }
)
web_app_result = poller.result()
In this case, the return value of begin_create_or_update is of type AzureOperationPoller[Site] , which means that
the return value of poller.result() is a Site object.
Exceptions
In general, the Azure libraries raise exceptions when operations fail to perform as intended, including failed
HTTP requests to the Azure REST API. For app code, then, you can use try...except blocks around library
operations.
For more information on the type of exceptions that may be raised, see the documentation for the operation in
question.
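For instance, most service errors from azure.core-based libraries surface as HttpResponseError. A minimal sketch of wrapping a call (the resource group name is illustrative, and resource_client is assumed to be an already-created ResourceManagementClient):

from azure.core.exceptions import HttpResponseError

try:
    rg_result = resource_client.resource_groups.create_or_update(
        "PythonAzureExample-rg", { "location": "centralus" })
except HttpResponseError as e:
    # The exception carries the HTTP status code and the service-provided message.
    print(f"Operation failed: {e.message}")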
Logging
The most recent Azure libraries use the Python standard logging library to generate log output. You can set the
logging level for individual libraries, groups of libraries, or all libraries. Once you register a logging stream
handler, you can then enable logging for a specific client object or a specific operation. For more information,
see Logging in the Azure libraries.
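As a minimal sketch, the following registers a stream handler and enables DEBUG-level output for all Azure libraries by configuring the parent 'azure' logger:

import logging

# The 'azure' logger is the parent logger for all Azure libraries that use azure.core.
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())

# azure.core-based clients and operations also accept logging_enable=True to
# include detailed HTTP logging for that specific client or call.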
Proxy configuration
To specify a proxy, you can use environment variables or optional arguments. For more information, see How to
configure proxies.
Individual libraries are not obligated to support any of these arguments, so always consult the reference
documentation for each library for exact details.
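One approach that works at the transport level is setting the standard proxy environment variables before any client object is created; the proxy address below is hypothetical:

import os

# Route SDK traffic through a proxy; set these before creating any client objects.
os.environ["HTTP_PROXY"] = "http://10.10.1.10:1180"   # hypothetical proxy address
os.environ["HTTPS_PROXY"] = "http://10.10.1.10:1180"  # hypothetical proxy address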
Inline JSON pattern for object arguments
Many operations in the Azure libraries let you express an object argument either as a discrete object from the library's models, or as inline JSON. For example, the following two calls to create_or_update are equivalent:
# Form 1: pass a discrete model object (requires importing ResourceGroup
# from azure.mgmt.resource.resources.models).
rg_result = resource_client.resource_groups.create_or_update(
    "PythonSDKExample-rg",
    ResourceGroup(location="centralus")
)

# Form 2: pass the same properties as inline JSON.
rg_result = resource_client.resource_groups.create_or_update(
    "PythonAzureExample-rg",
    {
        "location": "centralus"
    }
)
When using JSON, the Azure libraries automatically convert the inline JSON to the appropriate object type for
the argument in question.
Objects can also have nested object arguments, in which case you can also use nested JSON.
For example, suppose you have an instance of the KeyVaultManagementClient object, and are calling its
begin_create_or_update method. In this case, the third argument is of type VaultCreateOrUpdateParameters , which itself
contains an argument of type VaultProperties . VaultProperties , in turn, contains object arguments of type
Sku and list[AccessPolicyEntry] . A Sku contains a SkuName object, and each AccessPolicyEntry contains a
Permissions object.
To call begin_create_or_update with embedded objects, you use code like the following (assuming tenant_id
and object_id are already defined). You can also create the necessary objects before the function call.
# Provision a Key Vault using inline parameters
poller = keyvault_client.vaults.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    KEY_VAULT_NAME_A,
    VaultCreateOrUpdateParameters(
        location = "centralus",
        properties = VaultProperties(
            tenant_id = tenant_id,
            sku = Sku(
                name="standard",
                family="A"
            ),
            access_policies = [
                AccessPolicyEntry(
                    tenant_id = tenant_id,
                    object_id = object_id,
                    permissions = Permissions(
                        keys = ['all'],
                        secrets = ['all']
                    )
                )
            ]
        )
    )
)

key_vault1 = poller.result()
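A sketch of the equivalent call using inline JSON for the same nested structure (assuming the same tenant_id and object_id, and a second name constant KEY_VAULT_NAME_B):

# Provision a second Key Vault using inline JSON rather than model objects
poller = keyvault_client.vaults.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    KEY_VAULT_NAME_B,
    {
        'location': 'centralus',
        'properties': {
            'tenant_id': tenant_id,
            'sku': { 'name': 'standard', 'family': 'A' },
            'access_policies': [
                {
                    'tenant_id': tenant_id,
                    'object_id': object_id,
                    'permissions': {
                        'keys': ['all'],
                        'secrets': ['all']
                    }
                }
            ]
        }
    }
)

key_vault2 = poller.result()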
Because both forms are equivalent, you can choose whichever you prefer and even intermix them. (The full code
for these examples can be found on GitHub.)
If your JSON isn't formed properly, you typically get the error, "DeserializationError: Unable to deserialize to
object: type, AttributeError: 'str' object has no attribute 'get'". A common cause of this error is that you're
providing a single string for a property when the library expects a nested JSON object. For example, using
"sku": "standard" in the previous example generates this error because the sku parameter is a Sku object
that expects inline object JSON, in this case { "name": "standard"} , which maps to the expected SkuName type.
Next steps
Now that you understand the common patterns for using the Azure libraries for Python, see the following
standalone examples to explore specific management and client library scenarios. You can try these examples in
any order as they are neither sequential nor interdependent.
Example: Create a resource group
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Authenticate Python apps to Azure services by
using the Azure SDK for Python
When an application needs to access an Azure resource like Azure Storage, Azure Key Vault, or Azure Cognitive
Services, the application must be authenticated to Azure. This requirement is true for all applications, whether
they're deployed to Azure, deployed on-premises, or under development on a local developer workstation. This
article describes the recommended approaches to authenticate an app to Azure when you use the Azure SDK for
Python.
When a developer is running an app during local development: The app authenticates to Azure by
using either an application service principal for local development or the developer's Azure credentials. These
options are discussed in the section Authentication during local development.
When an app is hosted on Azure: The app authenticates to Azure resources by using a managed identity.
This option is discussed in the section Authentication in server environments.
When an app is hosted and deployed on-premises: The app authenticates to Azure resources by using
an application service principal. This option is discussed in the section Authentication in server environments.
DefaultAzureCredential
The DefaultAzureCredential class provided by the Azure SDK allows apps to use different authentication
methods depending on the environment in which they're run. In this way, apps can be promoted from local
development to test environments to production without code changes.
You configure the appropriate authentication method for each environment, and DefaultAzureCredential
automatically detects and uses that authentication method. The use of DefaultAzureCredential is preferred over
manually coding conditional logic or feature flags to use different authentication methods in different
environments.
Details about using the DefaultAzureCredential class are discussed in the section Use DefaultAzureCredential in
an application.
Advantages of token-based authentication
Use token-based authentication instead of using connection strings when you build apps for Azure. Token-based
authentication offers the following advantages over authenticating with connection strings:
The token-based authentication methods described in this article allow you to establish the specific
permissions needed by the app on the Azure resource. This practice follows the principle of least privilege. In
contrast, a connection string grants full rights to the Azure resource.
Anyone or any app with a connection string can connect to an Azure resource, but token-based
authentication methods scope access to the resource to only the apps intended to access the resource.
With a managed identity, there's no application secret to store. The app is more secure because there's no
connection string or application secret that can be compromised.
The azure.identity package in the Azure SDK manages tokens for you behind the scenes. Managed tokens
make token-based authentication as easy to use as a connection string.
Limit the use of connection strings to initial proof-of-concept apps or development prototypes that don't access
production or sensitive data. Otherwise, prefer the token-based authentication classes available in the Azure SDK
whenever you're authenticating to Azure resources.
AUTHENTICATION METHOD → DESCRIPTION
Apps hosted in Azure → Apps hosted in Azure should use a managed identity service principal. Managed identities are designed to represent the identity of an app hosted in Azure and can only be used with Azure-hosted apps.
Apps hosted outside of Azure (for example, on-premises apps) → Apps hosted outside of Azure that need to connect to Azure services should use an application service principal. An application service principal represents the identity of the app in Azure and is created through the application registration process.
AUTHENTICATION METHOD → DESCRIPTION
Create dedicated application service principal objects to be used during local development. → In this method, dedicated application service principal objects are set up by using the app registration process for use during local development. The identity of the service principal is then stored as environment variables to be accessed by the app when it's run in local development.
Authenticate the app to Azure by using the developer's credentials during local development. → In this method, a developer must be signed in to Azure from either the Azure Tools extension for Visual Studio Code, the Azure CLI, or Azure PowerShell on their local workstation. The application can then access the developer's credentials from the credential store and use those credentials to access Azure resources from the app.
The following code example shows how to instantiate a DefaultAzureCredential object and use it with an Azure
SDK client class. In this case, it's a BlobServiceClient object used to access Azure Blob Storage.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url="https://<my_account_name>.blob.core.windows.net",
    credential=credential)
The DefaultAzureCredential object automatically detects the authentication mechanism configured for the app
and obtains the necessary tokens to authenticate the app to Azure. If an application makes use of more than one
SDK client, you can use the same credential object with each SDK client object.
Sequence of authentication methods when you use DefaultAzureCredential
Internally, DefaultAzureCredential implements a chain of credential providers for authenticating applications to
Azure resources. Each credential provider can detect if credentials of that type are configured for the app. The
DefaultAzureCredential object sequentially checks each provider in order and uses the credentials from the first
provider that has credentials configured.
The order in which DefaultAzureCredential looks for credentials is shown in the following table:
CREDENTIAL TYPE → DESCRIPTION
Visual Studio Code → If you've authenticated to Azure by using the Visual Studio Code Azure account plug-in, DefaultAzureCredential authenticates the app to Azure by using that same account.
Authenticate Python apps to Azure services during
local development using service principals
When creating cloud applications, developers need to debug and test applications on their local workstation.
When an application is run on a developer's workstation during local development, it still must authenticate to
any Azure services used by the app. This article covers how to set up dedicated application service principal
objects to be used during local development.
Dedicated application service principals for local development allow you to follow the principle of least privilege
during app development. Since permissions are scoped to exactly what is needed for the app during
development, app code is prevented from accidentally accessing an Azure resource intended for use by a
different app. It also prevents bugs that surface only when the app is moved to production because the app
was overprivileged in the dev environment.
An application service principal is set up for the app when the app is registered in Azure. When registering apps
for local development, it's recommended to:
Create separate app registrations for each developer working on the app. This will create separate application
service principals for each developer to use during local development and avoid the need for developers to
share credentials for a single application service principal.
Create separate app registrations per app. This scopes the app's permissions to only what is needed by the
app.
During local development, environment variables are set with the application service principal's identity. The
Azure SDK for Python reads these environment variables and uses this information to authenticate the app to
the Azure resources it needs.
The Add a client secret dialog will pop out from the right-
hand side of the page. In this dialog:
1. Description → Enter a value of Current.
2. Expires → Select a value of 24 months.
Select Add to add the secret.
The group will be created and you will be taken back to the
All groups page. It may take up to 30 seconds for the
group to appear and you may need to refresh the page due
to caching in the Azure portal.
The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.
Then, create a .env file in your application root directory. Set the environment variable values with values
obtained from the app registration process as follows:
AZURE_CLIENT_ID → The app ID value.
AZURE_TENANT_ID → The tenant ID value.
AZURE_CLIENT_SECRET → The password/credential generated for the app.
AZURE_CLIENT_ID=00000000-0000-0000-0000-000000000000
AZURE_TENANT_ID=11111111-1111-1111-1111-111111111111
AZURE_CLIENT_SECRET=abcdefghijklmnopqrstuvwxyz
Finally, in the startup code for your application, use the python-dotenv library to read the environment variables
from the .env file on startup.
import os
from dotenv import load_dotenv

if os.environ.get('ENVIRONMENT') == 'development':
    print("Loading environment variables from .env file")
    load_dotenv(".env")
Next, for any Python code that creates an Azure SDK client object in your app, you'll want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

token_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url="https://<my_account_name>.blob.core.windows.net",
    credential=token_credential)
Authenticate Python apps to Azure services during
local development using developer accounts
When creating cloud applications, developers need to debug and test applications on their local workstation.
When an application is run on a developer's workstation during local development, it still must authenticate to
any Azure services used by the app. This article covers how to use a developer's Azure credentials to
authenticate the app to Azure during local development.
For an app to authenticate to Azure during local development using the developer's Azure credentials, the
developer must be signed in to Azure from the VS Code Azure Tools extension, the Azure CLI, or Azure
PowerShell. The Azure SDK for Python is able to detect that the developer is signed in from one of these tools
and then obtain the necessary credentials from the credentials cache to authenticate the app to Azure as the
signed-in user.
This approach is easiest to set up for a development team since it takes advantage of the developers' existing
Azure accounts. However, a developer's account will likely have more permissions than required by the
application, therefore exceeding the permissions the app will run with in production. As an alternative, you can
create application service principals to use during local development which can be scoped to have only the
access needed by the app.
The group will be created and you will be taken back to the
All groups page. It may take up to 30 seconds for the
group to appear and you may need to refresh the page due
to caching in the Azure portal.
The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.
For an app to use the developer credentials from VS Code, the VS Code Azure Tools extension must be installed
in VS Code.
Install the Azure Tools extensions for VS Code
On the left-hand panel, you'll see an Azure icon. Select this icon, and a control panel for Azure services will
appear. Choose Sign in to Azure... under any service to complete the authentication process for the Azure
tools in Visual Studio Code.
Next, for any Python code that creates an Azure SDK client object in your app, you'll want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

token_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url="https://<my_account_name>.blob.core.windows.net",
    credential=token_credential)
Authenticating Azure-hosted apps to Azure
resources with the Azure SDK for Python
When an app is hosted in Azure using a service like Azure App Service, Azure Virtual Machines, or Azure
Container Instances, the recommended approach to authenticating an app to Azure resources is to use a
managed identity.
A managed identity provides an identity for your app such that it can connect to other Azure resources without
the need to use a secret key or other application secret. Internally, Azure knows the identity of your app and
what resources it's allowed to connect to. Azure uses this information to automatically obtain Azure AD tokens
for the app to allow it to connect to other Azure resources, all without you having to manage any application
secrets.
For example, you can type the name of your resource in the
search box at the top of the page and navigate to it by
selecting it in the dialog box.
On the page for your resource, select the Identity menu item
from the left-hand menu.
The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.
Next, for any Python code that creates an Azure SDK client object in your app, you'll want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

token_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url="https://<my_account_name>.blob.core.windows.net",
    credential=token_credential)
When the above code runs on your local workstation during local development, DefaultAzureCredential() looks in
the environment variables for an application service principal, or checks VS Code, the Azure CLI, and Azure
PowerShell for a set of developer credentials; either can be used to authenticate the app to Azure resources
during local development. In this way, this same code can be used to authenticate your app to Azure resources
during both local development and when deployed to Azure.
Authenticate to Azure resources from Python apps
hosted on-premises
Apps hosted outside of Azure (for example on-premises or at a third-party data center) should use an
application service principal to authenticate to Azure when accessing Azure resources. Application service
principal objects are created using the app registration process in Azure. When an application service principal is
created, a client ID and client secret will be generated for your app. The client ID, client secret, and your tenant ID
are then stored in environment variables so they can be used by the Azure SDK for Python to authenticate your
app to Azure at runtime.
A different app registration should be created for each environment the app is hosted in. This allows
environment-specific resource permissions to be configured for each service principal and ensures that an app
deployed to one environment doesn't touch Azure resources that are part of another environment.
The Add a client secret dialog will pop out from the right-
hand side of the page. In this dialog:
1. Description → Enter a value of Current.
2. Expires → Select a value of 24 months.
Select Add to add the secret.
The Add role assignment page lists all of the roles that can
be assigned for the resource group.
1. Use the search box to filter the list to a more
manageable size. This example shows how to filter for
Storage Blob roles.
2. Select the role that you want to assign.
Select Next to go to the next screen.
If the app runs as a service under a process manager such as gunicorn managed by systemd, you can supply the environment variables to the process through a unit file:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=www-user
Group=www-data
WorkingDirectory=/path/to/python-app
EnvironmentFile=/path/to/python-app/py-env/app-environment-variables
ExecStart=/path/to/python-app/py-env/bin/gunicorn --config config.py wsgi:app
[Install]
WantedBy=multi-user.target
The file specified in the EnvironmentFile directive should contain a list of environment variables with their
values as shown below.
AZURE_CLIENT_ID=<value>
AZURE_TENANT_ID=<value>
AZURE_CLIENT_SECRET=<value>
Next, for any Python code that creates an Azure SDK client object in your app, you will want to:
1. Import the DefaultAzureCredential class from the azure.identity module.
2. Create a DefaultAzureCredential object.
3. Pass the DefaultAzureCredential object to the Azure SDK client object constructor.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

token_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url="https://<my_account_name>.blob.core.windows.net",
    credential=token_credential)
When the above code instantiates the DefaultAzureCredential object, DefaultAzureCredential reads the
environment variables AZURE_TENANT_ID , AZURE_CLIENT_ID , and AZURE_CLIENT_SECRET for the application
service principal information it uses to authenticate the app to Azure.
Additional methods to authenticate to Azure
resources from Python apps
This article lists additional methods apps may use to authenticate to Azure resources. The methods on this page
are less commonly used; when possible, use one of the methods outlined in the authenticating Python apps to
Azure using the Azure SDK overview article.
1. On the Azure portal, navigate to Azure Active Directory and select App registrations on the left-hand
menu.
2. Select the registration for your app, then select Authentication .
3. Under Advanced settings , select Yes for Allow public client flows .
4. Select Save to apply the changes.
5. To authorize the application for specific resources, navigate to the resource in question, select API
Permissions , and enable Microsoft Graph and other resources you want to access. Microsoft Graph is
usually enabled by default.
a. You must also be the admin of your tenant to grant consent to your application when you log in for
the first time.
If you can't configure the device code flow option on your Active Directory, your application may need to be
multi-tenant. To make this change, navigate to the Authentication panel, select Accounts in any
organizational directory (under Supported account types ), and then select Yes for Allow public client
flows .
Example using InteractiveBrowserCredential
The following example demonstrates using an InteractiveBrowserCredential to authenticate with the
SubscriptionClient :
# Show Azure subscription information
from azure.identity import InteractiveBrowserCredential
from azure.mgmt.resource import SubscriptionClient

credential = InteractiveBrowserCredential()
subscription_client = SubscriptionClient(credential)
subscription = next(subscription_client.subscriptions.list())
print(subscription.subscription_id)
For more exact control, such as setting redirect URIs, you can supply specific arguments to
InteractiveBrowserCredential such as redirect_uri .
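As a one-line sketch (the localhost port is illustrative):

# Use a specific redirect URI registered for the app; the address is illustrative.
credential = InteractiveBrowserCredential(redirect_uri="http://localhost:8400")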
Authenticate with username and password
This method of authentication is discouraged because it's less secure than other flows. Also, this method is not
interactive and is therefore not compatible with any form of multi-factor authentication or consent
prompting. The application must already have consent from the user or a directory administrator.
Furthermore, this method authenticates only work and school accounts; Microsoft accounts are not supported.
For more information, see Sign up your organization to use Azure Active Directory.
# Show Azure subscription information
import os
from azure.mgmt.resource import SubscriptionClient
from azure.identity import UsernamePasswordCredential

# Retrieve the information necessary for the credentials, which are assumed to
# be in environment variables for the purpose of this example.
client_id = os.environ["AZURE_CLIENT_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]
username = os.environ["AZURE_USERNAME"]
password = os.environ["AZURE_PASSWORD"]

# Create the credential object from the username and password.
credential = UsernamePasswordCredential(
    client_id=client_id, tenant_id=tenant_id, username=username, password=password)

subscription_client = SubscriptionClient(credential)
subscription = next(subscription_client.subscriptions.list())
print(subscription.subscription_id)
Walkthrough: Integrated authentication for Python
apps with Azure services
Azure Active Directory (Azure AD) along with Azure Key Vault provide a comprehensive and convenient means
for applications to authenticate with Azure services and third-party services where access keys are involved.
After providing some background, this walkthrough explains these authentication features in the context of the
sample, github.com/Azure-Samples/python-integrated-authentication.
Part 1: Background
Although many Azure services rely solely on role-based access control for authorization, certain services control
access to their respective resources by using secrets or keys. Such services include Azure Storage, databases,
Cognitive Services, Key Vault, and Event Hubs.
When creating a cloud app that accesses these services, you can use the Azure portal, the Azure CLI, or Azure
PowerShell to create and configure keys for your app. The keys you create are tied to specific access policies and
prevent access to those app-specific resources by any other unauthorized code.
Within this general design, cloud apps must typically manage those keys and authenticate with each service
individually, a process that can be both tedious and error-prone. Managing keys directly in app code also risks
exposing those keys in source control and keys might be stored on unsecured developer workstations.
Fortunately, Azure provides two specific services to simplify the process and provide greater security:
Azure Key Vault provides secure cloud-based storage for access keys (along with cryptographic keys and
certificates, which aren't covered in this article). By using Key Vault, the app accesses such keys only at
run time so that they never appear directly in source code.
With Azure Active Directory (Azure AD) Managed Identities, the app needs to authenticate only once with
Active Directory. The app is then automatically authenticated with other Azure services, including Key
Vault. As a result, your code never needs to concern itself with keys or other credentials for those Azure
services. Better still, you can run the same code both locally and in the cloud with minimal configuration
requirements.
This walkthrough shows how to use Azure AD managed identity and Key Vault together in the same app. By
using Azure AD and Key Vault together, your app never needs to authenticate itself with individual Azure
services, and can easily and securely access any keys necessary for third-party services.
IMPORTANT
This article uses the common, generic term "key" to refer to what are stored as "secrets" in Azure Key Vault, such as an
access key for a REST API. This usage should not be confused with Key Vault's management of cryptographic keys, which
is a separate feature from Key Vault's secrets.
NOTE
Although a public API endpoint is usually protected by its own access key, for the purposes of this article we assume the
endpoint is open and unauthenticated. This assumption avoids any confusion between the app's authentication needs
with those of an external caller of this endpoint. This scenario doesn't demonstrate such an external caller.
The third-party API in this sample is implemented as an Azure Function that returns a random number. Its code begins with the following imports:
import logging
import random
import json
In the sample repository, this code is found under third_party_api/RandomNumber/__init__.py. The folder,
RandomNumber, provides the name of the function and __init__.py contains the code. Another file in that folder,
function.json, describes when the function is triggered. Other files in the third_party_api parent folder provide
details for the Azure Function "app" that hosts the function itself.
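As a rough sketch of what such a function looks like (the exact sample code lives in the repository; the HTTP-triggered entry point and the JSON shape with a value property are assumptions based on how the main app consumes the response):

import logging
import random
import json

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Log the invocation and return a random number in a JSON payload whose
    # "value" property the main app reads.
    logging.info("RandomNumber function triggered via HTTP.")
    value = random.randint(1, 999)
    return func.HttpResponse(json.dumps({"value": value}), mimetype="application/json")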
To deploy the code, the sample's provisioning script performs the following steps:
1. Create a backing storage account for Azure Functions with the Azure CLI command,
az storage account create .
2. Create an Azure Functions "app" with the Azure CLI command, az functionapp create .
3. After waiting 60 seconds for the host to be fully provisioned, deploy the code using the Azure Functions
Core Tools command, func azure functionapp publish .
4. Assign the access key, d0c5atM1cr0s0ft , to the function. (See Securing Azure Functions for a background
on function keys.)
In the provisioning script, this step is accomplished through a REST API call to the Functions Key
Management API because the Azure CLI doesn't presently support this particular feature. To call that REST
API, the provisioning script must first use another REST API call to retrieve the Function app's master key.
You can also assign access keys through the Azure portal. On the page for the Functions app, select
Functions , then select the specific function to secure (which is named RandomNumber in this example). On
the function's page, select Function Keys to open the page where you can create and manage these
keys.
Part 4 - Main app implementation >>>
Part 4: Example main application implementation
import os
import random
import string
import requests
from datetime import datetime

from flask import Flask, jsonify
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.queue import QueueClient

app = Flask(__name__)
app.config["DEBUG"] = True

number_url = os.environ["THIRD_PARTY_API_ENDPOINT"]

# Obtain the credential under which the app runs: the managed identity when
# deployed to Azure, or the local service principal during development.
credential = DefaultAzureCredential()

# Next, get the client for the Key Vault. You must have first enabled managed identity
# on the App Service for the credential to authenticate with Key Vault.
key_vault_url = os.environ["KEY_VAULT_URL"]
keyvault_client = SecretClient(vault_url=key_vault_url, credential=credential)

# Obtain the secret: for this step to work you must add the app's service principal to
# the key vault's access policies for secret management.
api_secret_name = os.environ["THIRD_PARTY_API_SECRET_NAME"]
vault_secret = keyvault_client.get_secret(api_secret_name)

# The "secret" from Key Vault is an object with multiple properties. The key we
# want for the third-party API is in the value property.
access_key = vault_secret.value

# Set up the client for the Azure Storage Queue to which the app writes messages.
queue_url = os.environ["STORAGE_QUEUE_URL"]
queue_client = QueueClient.from_queue_url(queue_url=queue_url, credential=credential)

@app.route('/', methods=['GET'])
def home():
    return 'Home page of the main app. Make a request to <a href="./api/v1/getcode">/api/v1/getcode</a>.'

def random_char(num):
    return ''.join(random.choice(string.ascii_letters) for x in range(num))

@app.route('/api/v1/getcode', methods=['GET'])
def get_code():
    headers = {
        'Content-Type': 'application/json',
        'x-functions-key': access_key
    }

    # Call the third-party API, passing the access key in the header.
    r = requests.get(number_url, headers=headers)

    if r.status_code != 200:
        return "Could not get you a code.", r.status_code

    data = r.json()
    chars1 = random_char(3)
    chars2 = random_char(3)
    code_value = f"{chars1}-{data['value']}-{chars2}"
    code = { "code": code_value, "timestamp" : str(datetime.utcnow()) }

    # Log a queue message with the code for, say, a process that invalidates
    # the code after a certain period of time.
    queue_client.send_message(code)

    return jsonify(code)

if __name__ == '__main__':
    app.run()
The app's requirements.txt file names the libraries the code depends on:

flask
requests
azure-identity
azure-keyvault-secrets
azure-storage-queue

When you deploy the app to Azure App Service, Azure automatically installs these requirements on the host
server. When running locally, you install them in your environment with pip install -r requirements.txt .
Environment variables
The app code depends on four environment variables:
VARIABLE → VALUE
THIRD_PARTY_API_ENDPOINT → The URL of the third-party API endpoint, which the code reads into number_url.
KEY_VAULT_URL → The URL of the Azure Key Vault in which you've stored the access key for the third-party API.
THIRD_PARTY_API_SECRET_NAME → The name of the secret in Key Vault that contains the access key for the third-party API.
STORAGE_QUEUE_URL → The URL of the Azure Storage Queue to which the app writes messages.
How you set these variables depends on where the code is running:
When running the code locally, you create these variables within whatever command shell you're using.
(If you deploy the app to a virtual machine, you would create similar server-side variables.) You can use a
library like python-dotenv, which reads key-value pairs from an .env file and sets them as environment
variables.
When the code is deployed to Azure App Service as is shown in this walkthrough, you don't have access
to the server itself. In this case, you create application settings with the same names, which then appear to
the app as environment variables.
The provisioning scripts create these settings using the Azure CLI command, az webapp config appsettings set .
All four variables are set with a single command.
To create settings through the Azure portal, see Configure an App Service app in the Azure portal.
When running the code locally, you also need to specify environment variables that contain information about
your local service principal. DefaultAzureCredential looks for these values. When deployed to App Service, you
do not need to set these values as the managed identity will be used instead to authenticate.
VARIABLE → VALUE
AZURE_CLIENT_ID → The app ID value from the App Registration.
AZURE_TENANT_ID → The tenant ID value.
AZURE_CLIENT_SECRET → A client secret that was generated for the App Registration.
For more information, see Authenticate Python apps to Azure services during local development using service
principals.
Part 6 - Main app startup code >>>
Part 6: Main app startup code
app = Flask(__name__)
app.config["DEBUG"] = True
number_url = os.environ["THIRD_PARTY_API_ENDPOINT"]
Next, we obtain the DefaultAzureCredential object, which is the recommended credential to use when
authenticating with Azure services. See Authenticate Azure hosted applications with DefaultAzureCredential.
credential = DefaultAzureCredential()
key_vault_url = os.environ["KEY_VAULT_URL"]
To connect to the key vault, we must create a suitable client object. Because we want to retrieve a secret, we use
the SecretClient , which requires the key vault URL and the credential object that represents the identity under
which the app is running.
keyvault_client = SecretClient(vault_url=key_vault_url, credential=credential)
Creating the SecretClient object doesn't authenticate the credential in any way. The SecretClient is simply a
client-side construct that internally manages the resource URL and the credential. Authentication and
authorization happen only when you invoke an operation through the client, such as get_secret , which
generates a REST API call to the Azure resource.
api_secret_name = os.environ["THIRD_PARTY_API_SECRET_NAME"]
vault_secret = keyvault_client.get_secret(api_secret_name)
# The "secret" from Key Vault is an object with multiple properties. The key we
# want for the third-party API is in the value property.
access_key = vault_secret.value
Even if the app identity is authorized to access the key vault, it must still be authorized to access secrets.
Otherwise, the get_secret call fails. For this reason, the provisioning script sets a "get secrets" access policy for
the app using the Azure CLI command, az keyvault set-policy . For more information, see Key Vault
Authentication and Grant your app access to Key Vault. The latter article shows how to set an access policy using
the Azure portal. (The article is also written for managed identity, but applies equally to a local service principal
used in development.)
Finally, the app code sets up the client object through which we can write messages to an Azure Storage Queue.
The Queue's URL is in the environment variable STORAGE_QUEUE_URL .
queue_url = os.environ["STORAGE_QUEUE_URL"]
queue_client = QueueClient.from_queue_url(queue_url=queue_url, credential=credential)
As with Key Vault, we use a specific client object from the Azure libraries, QueueClient , and its from_queue_url
method to connect to the resource located at the URL in question. Once again, attempting to create this client
object validates that the app identity represented by the credential is authorized to access the queue. As noted
earlier, this authorization was granted by assigning the "Storage Queue Data Contributor" role to the main app.
Assuming all this startup code succeeds, the app has all its internal variables in place to support its
/api/v1/getcode API endpoint.
Part 7 - Main app endpoint >>>
Part 7: Main application API endpoint
@app.route('/api/v1/getcode', methods=['GET'])
def get_code():
Next, we call the third-party API, the URL of which is in number_url , providing the access key that we retrieve
from the key vault in the header.
headers = {
    'Content-Type': 'application/json',
    'x-functions-key': access_key
}

r = requests.get(number_url, headers=headers)

if r.status_code != 200:
    return "Could not get you a code.", r.status_code
The example third-party API is deployed to the serverless environment of Azure Functions. The x-functions-key
property in the header is specifically how Azure Functions expects an access key to appear in a header. For more
information, see Azure Functions HTTP trigger - Authorization keys. If calling the API fails for any reason, the
code returns an error message and the status code.
Assuming that the API call succeeds and returns a numerical value, we then construct a more complex code
using that number plus some random characters (using our own random_char function).
data = r.json()
chars1 = random_char(3)
chars2 = random_char(3)
code_value = f"{chars1}-{data['value']}-{chars2}"
code = { "code": code_value, "timestamp" : str(datetime.utcnow()) }
The code variable here contains the full JSON response for the app's API, which includes the code value and a
timestamp. An example response would be {"code":"ojE-161-pTv","timestamp":"2020-04-15 16:54:48.816549"} .
Before we return that response, however, we write a message in our storage queue using the Queue client's
send_message method:
queue_client.send_message(code)
return jsonify(code)
Processing queue messages
Messages stored in the queue can be viewed and managed through the Azure portal, with the Azure CLI
command az storage message get , or with Azure Storage Explorer. The sample repository includes scripts
(test.cmd and test.sh) to request a code from the app endpoint and then check the message queue. There's also a
script to clear the queue using the az storage message clear command.
Typically, an app like this example would have another process that asynchronously pulls messages from the
queue for further processing. As mentioned earlier, the response generated by this API endpoint might be used
elsewhere in the app with two-factor user authentication. In that case, the app should invalidate the code after a
certain period of time, say 10 minutes. A simple way to do this task would be to maintain a table of valid two-
factor authentication codes, which are used by its user sign-in procedure. The app would then have a simple
queue-watching process with the following logic (in pseudo-code):
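A sketch of that logic, reconstructed from the description that follows (the code-table helpers are hypothetical, and queue_client is the same object created at startup):

while True:
    # Wait for the next message to become visible in the queue.
    message = queue_client.receive_message()
    if message is None:
        continue

    code = message.content
    if is_new_code(code):            # hypothetical helper
        store_valid_code(code)       # hypothetical helper: add to the valid-code table
        # Queue the same message again, hidden for 10 minutes; when it becomes
        # visible again, this loop receives it a second time.
        queue_client.send_message(code, visibility_timeout=600)
    else:
        invalidate_code(code)        # hypothetical helper: remove from the table

    # Remove the processed message from the queue.
    queue_client.delete_message(message)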
This pseudo-code employs the send_message method's optional visibility_timeout parameter, which specifies
the number of seconds before the message becomes visible in the queue. Because the default timeout is zero,
messages initially written by the API endpoint become immediately visible to the queue-watching process. As a
result, that process stores them in the valid code table right away. The process queues the same message again
with the timeout, so that it will receive the code again 10 minutes later, at which point it removes it from the
table.
Next steps
Through this tutorial, you've learned how apps authenticate with other Azure services using managed identity,
and how apps can use Azure Key Vault to store any other necessary secrets for third-party APIs.
The same pattern demonstrated here with Azure Key Vault and Azure Storage applies with all other Azure
services. The crucial step is that you set the correct role permissions for the app within that service's page on the
Azure portal, or through the Azure CLI. (See How to assign role permissions). Be sure to check the service
documentation to see whether you need to configure any other access policies.
Always remember that you need to assign the same roles and access policies to any service principal you're
using for local development.
In short, having completed this walkthrough, you can apply your knowledge to any number of other Azure
services and any number of other external services.
One subject that we haven't touched upon in this tutorial is authentication of users. To explore this area for web
apps, begin with Authenticate and authorize users end-to-end in Azure App Service.
See also
How to authenticate and authorize Python apps on Azure
Walkthrough sample: github.com/Azure-Samples/python-integrated-authentication
Azure Active Directory documentation
Azure Key Vault documentation
How to install Azure library packages for Python
The Azure SDK for Python is composed of many individual libraries that can be installed in standard
Python or Conda environments.
Libraries for standard Python environments are listed in the package index.
Packages for Conda environments are listed in the Microsoft channel on anaconda.org. Azure packages have
names that begin with azure- .
With these Azure libraries you can provision and manage resources on Azure services (using the management
libraries, whose names begin with azure-mgmt ) and connect with those resources from app code (using the
client libraries, whose names begin with just azure- ).
pip install retrieves the latest version of a library in your current Python environment.
On Linux systems, you must install a library for each user separately. Installing libraries for all users with
sudo pip install isn't supported.
You can use any package name listed in the package index.
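For example (azure-storage-blob is an illustrative package name):

pip install azure-storage-blob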
Be sure you've added the Microsoft channel to your Conda configuration (you need to do this only once):
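The command for adding the channel follows the standard Conda syntax:

conda config --add channels "Microsoft"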
Specify the desired version on the command line with pip install .
You can use any package name listed in the package index.
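For example, pinning an illustrative package to an illustrative version:

pip install azure-storage-blob==12.14.0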
To install the latest preview of a library, include the --pre flag on the command line.
Microsoft periodically releases preview library packages that support upcoming features, with the caveat that
the library is subject to change and must not be used in production projects.
You can use any package name listed in the package index.
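For instance (package name illustrative):

pip install --pre azure-storage-blob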
If the library is installed, pip show displays version and other summary information, otherwise the command
displays nothing.
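For instance:

pip show azure-storage-blob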
You can also use pip freeze or pip list to see all the libraries that are installed in your current Python
environment.
You can use any package name listed in the package index.
Uninstall a library
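With pip, uninstalling looks like this (package name illustrative):

pip uninstall azure-storage-blob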
NOTE
For Conda libraries, see the Microsoft channel on anaconda.org.
Example: Use the Azure libraries to provision a
resource group
This example demonstrates how to use the Azure SDK management libraries in a Python script to provision a
resource group. (The equivalent Azure CLI command is given later in this article. If you prefer to use the Azure
portal, see Create resource groups.)
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
azure-mgmt-resource>=18.0.0
azure-identity>=1.5.0
Be sure to use these versions of the libraries. Using older versions will result in errors such as
"'AzureCliCredential' object has no attribute 'signed_session'."
In a terminal or command prompt with the virtual environment activated, install the requirements:
pip install -r requirements.txt
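For context, a minimal sketch of the setup and initial provisioning code that the comments below refer to (assuming AZURE_SUBSCRIPTION_ID is set in your environment):

import os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

# Acquire a credential from the Azure CLI sign-in, then create the
# Resource Manager client for the subscription.
credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
resource_client = ResourceManagementClient(credential, subscription_id)

# Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(
    "PythonAzureExample-rg", { "location": "centralus" })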
# The return value is another ResourceGroup object with all the details of the
# new group. In this case the call is synchronous: the resource group has been
# provisioned by the time the call returns.

# To update the resource group, repeat the call with different properties, such
# as tags:
rg_result = resource_client.resource_groups.create_or_update(
    "PythonAzureExample-rg",
    {
        "location": "centralus",
        "tags": { "environment":"test", "department":"tech" }
    }
)
This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
6: Clean up resources
az group delete -n PythonAzureExample-rg --no-wait
Run this command if you don't need to keep the resource group provisioned in this example. Resource groups
don't incur any ongoing charges in your subscription, but it's a good practice to clean up any group that you
aren't actively using. The --no-wait argument allows the command to return immediately instead of waiting for
the operation to finish.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script:
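A sketch of those commands, using names matching the Python example (the tag-update form follows the generic --set syntax):

az group create -n PythonAzureExample-rg -l centralus
az group update -n PythonAzureExample-rg --set tags.environment=test tags.department=tech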
See also
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Use the Azure libraries to list resource
groups and resources
This example demonstrates how to use the Azure SDK management libraries in a Python script to perform two
tasks:
List all the resource groups in an Azure subscription.
List resources within a specific resource group.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
The equivalent Azure CLI command is given later in this article.
azure-mgmt-resource>=18.0.0
azure-identity>=1.5.0
Be sure to use these versions of the libraries. Using older versions will result in errors such as
"'AzureCliCredential' object has no attribute 'signed_session'."
In a terminal or command prompt with the virtual environment activated, install the requirements:
pip install -r requirements.txt
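For context, a minimal sketch of a list_groups.py script that performs the first task, assuming the same AzureCliCredential setup used in the resource group example:

import os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
resource_client = ResourceManagementClient(credential, subscription_id)

# Retrieve and print the list of resource groups in the subscription.
group_list = resource_client.resource_groups.list()
for group in group_list:
    print(f"{group.name:<40}{group.location}")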
print("Resource".ljust(column_width) + "Type".ljust(column_width)
+ "Create date".ljust(column_width) + "Change date".ljust(column_width))
print("-" * (column_width * 4))
python list_groups.py
python list_resources.py
The following command lists resources within myResourceGroup in the centralus region (the location
argument is necessary to identify a specific data center):
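A sketch of that command:

az resource list -g myResourceGroup --location centralus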
See also
Example: Provision a resource group
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Provision Azure Storage using the Azure
libraries for Python
In this article, you learn how to use the Azure management libraries in a Python script to provision a resource
group that contains an Azure Storage account and a Blob storage container. (Equivalent Azure CLI commands
are given later in this article. If you prefer to use the Azure portal, see Create an Azure storage account and
Create a blob container.)
After provisioning the resources, see Example: Use Azure Storage to use the Azure client libraries in Python
application code to upload a file to the Blob storage container.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
azure-mgmt-resource
azure-mgmt-storage
azure-identity
2. In your terminal with the virtual environment activated, install the requirements:
pip install -r requirements.txt
# Import the needed management objects from the libraries. The azure.common library
# is installed automatically with the other libraries.
import os, random

from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.storage import StorageManagementClient

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-Storage-rg"
LOCATION = "centralus"

# This example uses the CLI profile credentials because we assume the script
# is being used to provision the resource in the same way the Azure CLI would be used.
credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Step 1: Provision the resource group.
resource_client = ResourceManagementClient(credential, subscription_id)

rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
    { "location": LOCATION })

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group

# Step 2: Provision the storage account, starting with a management object.
storage_client = StorageManagementClient(credential, subscription_id)

# You can replace the storage account name with any unique name. A random number is used
# by default, but note that the name changes every time you run this script.
# The name must be 3-24 lower case letters and numbers only.
STORAGE_ACCOUNT_NAME = f"pythonazurestorage{random.randint(1,100000):05}"

# Check if the account name is available. Storage account names must be unique across
# Azure because they're used in URLs.
availability_result = storage_client.storage_accounts.check_name_availability(
    { "name": STORAGE_ACCOUNT_NAME }
)

if not availability_result.name_available:
    print(f"Storage name {STORAGE_ACCOUNT_NAME} is already in use. Try another name.")
    exit()

# The name is available, so provision the account (a long-running operation);
# a general-purpose v2 account with locally redundant storage is a typical choice.
poller = storage_client.storage_accounts.begin_create(RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME,
    {
        "location": LOCATION,
        "kind": "StorageV2",
        "sku": { "name": "Standard_LRS" }
    }
)

account_result = poller.result()

# Step 3: Retrieve the account's primary access key and generate a connection string.
keys = storage_client.storage_accounts.list_keys(RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME)

conn_string = f"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName={STORAGE_ACCOUNT_NAME};AccountKey={keys.keys[0].value}"

print(f"Connection string: {conn_string}")

# Step 4: Provision the blob container in the account (this call is synchronous).
CONTAINER_NAME = "blob-container-01"
container = storage_client.blob_containers.create(RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME, CONTAINER_NAME, {})

# The fourth argument is a required BlobContainer object, but because we don't need any
# special values there, we just pass empty JSON.
This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
StorageManagementClient (azure.mgmt.storage)
2. Select the storage account, then select Data storage > Containers in the left-hand menu to verify that
the "blob-container-01" appears:
3. If you want to try using these provisioned resources from application code, continue with Example: Use
Azure Storage.
For an additional example of using the Azure Storage management library, see the Manage Python Storage
sample.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script:
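A sketch of the group and account creation steps (names match the Python example; the kind and SKU values are typical assumptions):

az group create -l centralus -n PythonAzureExample-Storage-rg
az storage account create -g PythonAzureExample-Storage-rg -l centralus -n pythonazurestorage12345 --kind StorageV2 --sku Standard_LRS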
rem Provision the blob container; NOTE: this command assumes you have an environment variable
rem named AZURE_STORAGE_CONNECTION_STRING with the connection string for the storage account.
set AZURE_STORAGE_CONNECTION_STRING=<connection_string>
az storage container create --account-name pythonazurestorage12345 -n blob-container-01
6: Clean up resources
Leave the resources in place if you want to follow the article Example: Use Azure Storage to use these resources
in app code.
Otherwise, run the following command to avoid ongoing charges in your subscription:
az group delete -n PythonAzureExample-Storage-rg --no-wait
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.
See also
Example: Use Azure Storage
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision a web app and deploy code
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Access Azure Storage using the Azure
libraries for Python
This example demonstrates how to use the Azure client libraries in Python application code to upload a file to
a Blob storage container. The example assumes you have provisioned the resources shown in Example:
Provision Azure Storage.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
azure-storage-blob
azure-identity
Create a local text file to upload, such as sample-source.txt (the name the code below assumes), containing:
Hello there, Azure Storage. I'm a friendly file ready to be stored in a blob.
Then create an environment variable named AZURE_STORAGE_BLOB_URL with the URL of your storage account (cmd syntax shown; use export in bash):
set AZURE_STORAGE_BLOB_URL=https://pythonazurestorage12345.blob.core.windows.net
import os
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

credential = DefaultAzureCredential()
storage_url = os.environ["AZURE_STORAGE_BLOB_URL"]

# Create the client object using the storage URL and the credential
blob_client = BlobClient(storage_url,
    container_name="blob-container-01", blob_name="sample-blob.txt", credential=credential)

# Upload a local file (sample-source.txt from the earlier step); this call
# performs the actual request and is where authorization is enforced.
with open("./sample-source.txt", "rb") as data:
    blob_client.upload_blob(data)
Reference links:
DefaultAzureCredential (azure.identity)
BlobClient (azure.storage.blob)
3. Attempt to run the code (which fails intentionally):
python use_blob_auth.py
4. Observe the error "This request is not authorized to perform this operation using this permission." The
error is expected because the local service principal that you're using does not yet have permission to
access the blob container.
5. Grant container permissions to the service principal using the Azure CLI command az role assignment
create (it's a long one!):
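A sketch of that command in bash form, using the environment variables and names described below (the scope path follows the standard Azure resource ID format):

az role assignment create --assignee $AZURE_CLIENT_ID \
    --role "Storage Blob Data Contributor" \
    --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/PythonAzureExample-Storage-rg/providers/Microsoft.Storage/storageAccounts/pythonazurestorage12345/blobServices/default/containers/blob-container-01"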
The --scope argument identifies where this role assignment applies. In this example, you grant the
"Storage Blob Data Contributor" role to the specific container named "blob-container-01".
Replace pythonazurestorage12345 with the exact name of your storage account. You can also adjust the
name of the resource group and blob container, if necessary. If you use the wrong name, you see the
error, "Can not perform requested operation on nested resource. Parent resource
'pythonazurestorage12345' not found."
If needed, also replace PythonAzureExample-Storage-rg with the name of the resource group that contains
your storage account. The resource group shown here is what's used in Example: Provision Azure Storage.
This command also uses the AZURE_CLIENT_ID and AZURE_SUBSCRIPTION_ID
environment variables, which you should already have set in your local environment for your service
principal by following Configure your local Python dev environment for Azure.
6. Wait a minute or two for the permissions to propagate, then run the code again to verify that it
now works. If you see the permissions error again, wait a little longer, then try the code again.
For more information on role assignments, see How to assign role permissions using the Azure CLI.
4b: Use blob storage with a connection string
1. Create an environment variable named AZURE_STORAGE_CONNECTION_STRING, the value of which is the full
connection string for the storage account. (This environment variable is also used by various Azure CLI
commands.)
2. Create a Python file named use_blob_conn_string.py with the following code. The comments explain the
steps.
import os
from azure.storage.blob import BlobClient

# Retrieve the connection string from an environment variable. Note that a connection
# string grants all permissions to the caller, making it less secure than obtaining a
# BlobClient object using credentials.
conn_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]

# Create the client object for the resource identified by the connection string,
# indicating also the blob container and the name of the specific blob we want.
blob_client = BlobClient.from_connection_string(conn_string,
    container_name="blob-container-01", blob_name="sample-blob.txt")

# Upload the local file to the blob
with open("./sample-blob.txt", "rb") as data:
    blob_client.upload_blob(data)
Again, although this method is simple, a connection string authorizes all operations in a storage account. In
production code, it's better to use specific permissions as described in the previous section.
6: Clean up resources
az group delete -n PythonAzureExample-Storage-rg --no-wait
Run this command if you don't need to keep the resources provisioned in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.
See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision a web app and deploy code
Example: Provision Azure Storage
Example: Provision and query a database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Use the Azure libraries to provision and
deploy a web app
This example demonstrates how to use the Azure SDK management libraries in a Python script to provision a
web app on Azure App Service and deploy app code from a GitHub repository. (Equivalent Azure CLI commands
are given later in this article.)
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
azure-mgmt-resource
azure-mgmt-web
azure-identity
In a terminal or command prompt with the virtual environment activated, install the requirements:
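pip install -r requirements.txt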
Then create an environment variable named REPO_URL with the URL of your fork. The example code in the next
section depends on this environment variable:
cmd
bash
set REPO_URL=<url_of_your_fork>
import random, os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.web import WebSiteManagementClient

# Acquire the credential from the CLI login and the subscription ID from the
# environment (assumes AZURE_SUBSCRIPTION_ID is set, as elsewhere in these examples).
credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Obtain the management object for resources.
resource_client = ResourceManagementClient(credential, subscription_id)

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = 'PythonAzureExample-WebApp-rg'
LOCATION = "centralus"

# Step 1: Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
    { "location": LOCATION })

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
# Step 2: Provision the App Service plan, which defines the underlying VM for the web app.

# Names for the App Service plan and App Service. We use a random number with the
# latter to create a reasonably unique name. If you've already provisioned a
# web app and need to re-run the script, set the WEB_APP_NAME environment
# variable to that name instead.
SERVICE_PLAN_NAME = 'PythonAzureExample-WebApp-plan'
WEB_APP_NAME = os.environ.get("WEB_APP_NAME", f"PythonAzureExample-WebApp-{random.randint(1,100000):05}")

# Obtain the App Service management object and create the plan. The B1 Linux sku
# shown here is an assumption for illustration; adjust it as needed.
app_service_client = WebSiteManagementClient(credential, subscription_id)

poller = app_service_client.app_service_plans.begin_create_or_update(RESOURCE_GROUP_NAME,
    SERVICE_PLAN_NAME,
    { "location": LOCATION, "reserved": True, "sku": { "name": "B1" } }
)

plan_result = poller.result()
# Step 3: With the plan in place, provision the web app itself, which is the process that can host
# whatever code we want to deploy to it.
poller = app_service_client.web_apps.begin_create_or_update(RESOURCE_GROUP_NAME,
WEB_APP_NAME,
{
"location": LOCATION,
"server_farm_id": plan_result.id,
"site_config": {
"linux_fx_version": "python|3.8"
}
}
)
web_app_result = poller.result()
# Step 4: deploy code from a GitHub repository. For Python code, App Service on Linux runs
# the code inside a container that makes certain assumptions about the structure of the code.
# For more information, see How to configure Python apps,
# https://docs.microsoft.com/azure/app-service/containers/how-to-configure-python.
#
# The create_or_update_source_control method doesn't provision a web app. It only sets the
# source control configuration for the app. In this case we're simply pointing to
# a GitHub repository.
#
# You can call this method again to change the repo.
REPO_URL = os.environ["REPO_URL"]
poller = app_service_client.web_apps.begin_create_or_update_source_control(RESOURCE_GROUP_NAME,
WEB_APP_NAME,
{
"location": "GitHub",
"repo_url": REPO_URL,
"branch": "master"
}
)
sc_result = poller.result()
This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
WebSiteManagementClient (azure.mgmt.web)
7: Clean up resources
az group delete -n PythonAzureExample-WebApp-rg --no-wait
Run this command if you don't need to keep the resources provisioned in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script:
cmd
bash
rem You can use --deployment-source-url with the first create command. It is shown here
rem to match the sequence of the Python code.
rem Replace <your_fork> with the specific URL of your forked repository.
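rem The following is a sketch of the equivalent commands; the resource names match the
rem Python script, and the plan sku and runtime values are assumptions for illustration.

az group create -l centralus -n PythonAzureExample-WebApp-rg

az appservice plan create -n PythonAzureExample-WebApp-plan -g PythonAzureExample-WebApp-rg --is-linux --sku B1

az webapp create -g PythonAzureExample-WebApp-rg -n PythonAzureExample-WebApp-12345 --plan PythonAzureExample-WebApp-plan --runtime "python|3.8"

az webapp deployment source config -g PythonAzureExample-WebApp-rg -n PythonAzureExample-WebApp-12345 --repo-url %REPO_URL% --branch master --manual-integration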
See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision and use a MySQL database
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Use the Azure libraries to provision a
database
This example demonstrates how to use the Azure SDK management libraries in a Python script to provision an
Azure MySQL database. It also provides a simple script to query the database using the mysql-connector library
(not part of the Azure SDK). (Equivalent Azure CLI commands are given later in this article. If you prefer to use
the Azure portal, see Create a PostgreSQL server or Create a MariaDB server.)
You can use similar code to provision a PostgreSQL or MariaDB database.
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
azure-mgmt-resource
azure-mgmt-rdbms
azure-identity
mysql
mysql-connector
The specific version requirement for azure-mgmt-resource is to ensure that you use a version compatible with
the current version of azure-mgmt-rdbms. These versions are not based on azure.core and therefore use older
methods for authentication.
In a terminal or command prompt with the virtual environment activated, install the requirements:
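pip install -r requirements.txt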
NOTE
On Windows, attempting to install the mysql library into a 32-bit Python installation produces an error about the mysql.h file.
In this case, install a 64-bit version of Python and try again.
import random, os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.rdbms.mysql import MySQLManagementClient
from azure.mgmt.rdbms.mysql.models import ServerForCreate, ServerPropertiesForDefaultCreate, ServerVersion

# Acquire the credential from the CLI login and the subscription ID from the
# environment (assumes AZURE_SUBSCRIPTION_ID is set).
credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Obtain the management object for resources.
resource_client = ResourceManagementClient(credential, subscription_id)

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = 'PythonAzureExample-DB-rg'
LOCATION = "westus"

# Step 1: Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
    { "location": LOCATION })

# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
# Step 2: Provision the MySQL server. Server names must be globally unique and
# lowercase; this sketch assumes the DB_SERVER_NAME environment variable is set
# and uses illustrative defaults for the admin credentials.
mysql_client = MySQLManagementClient(credential, subscription_id)

db_server_name = os.environ["DB_SERVER_NAME"]
db_admin_name = os.environ.get("DB_ADMIN_NAME", "azureuser")
db_admin_password = os.environ.get("DB_ADMIN_PASSWORD", "ChangePa$$w0rd24")

poller = mysql_client.servers.begin_create(RESOURCE_GROUP_NAME,
    db_server_name,
    ServerForCreate(
        location=LOCATION,
        properties=ServerPropertiesForDefaultCreate(
            administrator_login=db_admin_name,
            administrator_login_password=db_admin_password,
            version=ServerVersion.FIVE7)))

server = poller.result()
# Step 3: Open a firewall rule to allow connections from your workstation. Create an
# environment variable named PUBLIC_IP_ADDRESS that contains your workstation's public
# IP address as reported by a site like https://whatismyipaddress.com/.
RULE_NAME = "allow_ip"
ip_address = os.environ["PUBLIC_IP_ADDRESS"]

poller = mysql_client.firewall_rules.begin_create_or_update(RESOURCE_GROUP_NAME,
    db_server_name, RULE_NAME,
    { "start_ip_address": ip_address, "end_ip_address": ip_address })

firewall_rule = poller.result()
# Step 4: Provision a database on the server. The database name shown is an
# assumption for illustration; the query script below expects the same name.
db_name = "example-db1"

poller = mysql_client.databases.begin_create_or_update(RESOURCE_GROUP_NAME,
    db_server_name, db_name, {})

db_result = poller.result()
You must create an environment variable named PUBLIC_IP_ADDRESS with your workstation's IP address for this
sample to run.
This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
ResourceManagementClient (azure.mgmt.resource)
MySQLManagementClient (azure.mgmt.rdbms.mysql)
ServerForCreate (azure.mgmt.rdbms.mysql.models)
ServerPropertiesForDefaultCreate (azure.mgmt.rdbms.mysql.models)
ServerVersion (azure.mgmt.rdbms.mysql.models)
Also see:
PostgreSQLManagementClient (azure.mgmt.rdbms.postgresql)
MariaDBManagementClient (azure.mgmt.rdbms.mariadb)
import os
import mysql.connector

db_server_name = os.environ["DB_SERVER_NAME"]
db_admin_name = os.getenv("DB_ADMIN_NAME", "azureuser")
db_admin_password = os.getenv("DB_ADMIN_PASSWORD", "ChangePa$$w0rd24")

db_name = "example-db1"  # Assumed to match the database created by the provisioning script
db_port = 3306           # Default MySQL port

connection = mysql.connector.connect(user=f"{db_admin_name}@{db_server_name}",
    password=db_admin_password, host=f"{db_server_name}.mysql.database.azure.com",
    port=db_port, database=db_name, ssl_ca='./BaltimoreCyberTrustRoot.crt.pem')
cursor = connection.cursor()
"""
# Alternate pyodbc connection; include pyodbc in requirements.txt
import pyodbc
connect_string = f"DRIVER={driver};PORT=3306;SERVER={db_server_name}.mysql.database.azure.com;" \
f"DATABASE={DB_NAME};UID={db_admin_name};PWD={db_admin_password}"
connection = pyodbc.connect(connect_string)
"""
table_name = "ExampleTable1"

# Create the table; assumes the table doesn't already exist.
sql_create = f"CREATE TABLE {table_name} (name varchar(255), code int)"
cursor.execute(sql_create)
print(f"Successfully created table {table_name}")

# Insert a row of sample data.
sql_insert = f"INSERT INTO {table_name} (name, code) VALUES ('Azure', 1)"
cursor.execute(sql_insert)
print("Successfully inserted data into table")

# Read the data back and print each row.
sql_select_values = f"SELECT * FROM {table_name}"
cursor.execute(sql_select_values)
row = cursor.fetchone()

while row:
    print(str(row[0]) + " " + str(row[1]))
    row = cursor.fetchone()

connection.commit()
All of this code uses the mysql.connector API. The only Azure-specific part is the full host domain for the
MySQL server (mysql.database.azure.com).
2. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server
from https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem and save the certificate file to
the same folder as the Python file. (This step is described on Obtain an SSL Certificate in the Azure
Database for MySQL documentation.)
3. Run the code:
python use_db.py
6: Clean up resources
az group delete -n PythonAzureExample-DB-rg --no-wait
Run this command if you don't need to keep the resources provisioned in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.
For reference: equivalent Azure CLI commands
The following Azure CLI commands complete the same provisioning steps as the Python script. For a
PostgreSQL database, use az postgres commands; for MariaDB, use az mariadb commands.
cmd
bash
# Change the IP address to the public IP address of your workstation, that is, the address shown
# by a site like https://whatismyipaddress.com/.
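# A sketch of the equivalent commands; replace <server-name> and the IP addresses,
# and adjust the admin credentials and sku as needed.

az group create -l westus -n PythonAzureExample-DB-rg

az mysql server create -l westus -g PythonAzureExample-DB-rg -n <server-name> -u azureuser -p ChangePa$$w0rd24 --sku-name B_Gen5_1

az mysql server firewall-rule create -g PythonAzureExample-DB-rg --server-name <server-name> -n allow_ip --start-ip-address <ip-address> --end-ip-address <ip-address>

az mysql db create -g PythonAzureExample-DB-rg --server-name <server-name> -n example-db1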
See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision and deploy a web app
Example: Provision a virtual machine
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
Example: Use the Azure libraries to provision a
virtual machine
This example demonstrates how to use the Azure SDK management libraries in a Python script to create a
resource group that contains a Linux virtual machine. (Equivalent Azure CLI commands are given later in
this article. If you prefer to use the Azure portal, see Create a Linux VM and Create a Windows VM.)
All the commands in this article work the same in Linux/macOS bash and Windows command shells unless
noted.
NOTE
Provisioning a virtual machine through code is a multi-step process that involves provisioning a number of other
resources that the virtual machine requires. If you're simply running such code from the command line, it's much easier to
use the az vm create command, which automatically provisions these secondary resources with defaults for any setting
you choose to omit. The only required arguments are a resource group, VM name, image name, and login credentials. For
more information, see Quick Create a virtual machine with the Azure CLI.
azure-mgmt-resource
azure-mgmt-compute
azure-mgmt-network
azure-identity
2. In your terminal or command prompt with the virtual environment activated, install the management
libraries listed in requirements.txt:
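pip install -r requirements.txt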
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
import os

# Acquire the credential from the CLI login and the subscription ID from the
# environment (assumes AZURE_SUBSCRIPTION_ID is set).
credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Obtain the management object for resources, using the credentials from the CLI login.
resource_client = ResourceManagementClient(credential, subscription_id)
# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-VM-rg"
LOCATION = "westus2"
# For details on the previous code, see Example: Provision a resource group
# at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
network_client = NetworkManagementClient(credential, subscription_id)

# The virtual network, public IP address, and NIC provisioning calls through
# network_client are elided from this excerpt; each begin_create_or_update
# call returns a poller whose result is captured as follows.
vnet_result = poller.result()
ip_address_result = poller.result()
nic_result = poller.result()
VM_NAME = "ExampleVM"
USERNAME = "azureuser"
PASSWORD = "ChangePa$$w0rd24"
print(f"Provisioning virtual machine {VM_NAME}; this operation might take a few minutes.")
# Provision the VM specifying only minimal arguments, which defaults to an Ubuntu 18.04 VM
# on a Standard DS1 v2 plan with a public IP address and a default virtual network/subnet.
compute_client = ComputeManagementClient(credential, subscription_id)

# (The full virtual_machines.begin_create_or_update call with the VM definition,
# which references nic_result.id, is elided from this excerpt.)
vm_result = poller.result()
This code uses CLI-based authentication (using AzureCliCredential ) because it demonstrates actions that you
might otherwise do with the Azure CLI directly. In both cases you're using the same identity for authentication.
To use such code in a production script (for example, to automate VM management), use
DefaultAzureCredential (recommended) or a service principal based method as described in How to
authenticate Python apps with Azure services.
Reference links for classes used in the code
AzureCliCredential (azure.identity)
ResourceManagementClient (azure.mgmt.resource)
NetworkManagementClient (azure.mgmt.network)
ComputeManagementClient (azure.mgmt.compute)
6: Clean up resources
az group delete -n PythonAzureExample-VM-rg --no-wait
Run this command if you don't need to keep the resources created in this example and would like to avoid
ongoing charges in your subscription.
You can also use the ResourceManagementClient.resource_groups.begin_delete method to delete a resource group
from code. The code in Example: Provision a resource group demonstrates usage.
See also
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision a web app and deploy code
Example: Provision and query a database
Use Azure Managed Disks with virtual machines
Complete a short survey about the Azure SDK for Python
The following resources contain more comprehensive examples using Python to create a virtual machine:
Create and manage Windows VMs in Azure using Python. You can use this example to create Linux VMs by
changing the storage_profile parameter.
Azure Virtual Machines Management Samples - Python (GitHub). The sample demonstrates additional
management operations like starting and restarting a VM, stopping and deleting a VM, increasing the disk
size, and managing data disks.
Use Azure Managed Disks with the Azure libraries
(SDK) for Python
Azure Managed Disks provide simplified disk management, enhanced scalability, and better security, without
requiring you to work directly with storage accounts.
You use the azure-mgmt-compute library to administer Managed Disks. (For an example of provisioning a virtual
machine with the azure-mgmt-compute library, see Example - Provision a virtual machine.)
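For example, the following snippet creates an empty standalone Managed Disk. This is a sketch that assumes
compute_client is a ComputeManagementClient instance created with your credential and subscription ID, and
that DiskCreateOption is imported from azure.mgmt.compute.models; the resource group, disk name, and size
are placeholders: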
poller = compute_client.disks.begin_create_or_update(
'my_resource_group',
'my_disk_name',
{
'location': 'eastus',
'disk_size_gb': 20,
'creation_data': {
'create_option': DiskCreateOption.empty
}
}
)
disk_resource = poller.result()
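You can also create a Managed Disk by importing an existing VHD from blob storage (the source_uri shown in
the following snippet is a placeholder):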
poller = compute_client.disks.begin_create_or_update(
'my_resource_group',
'my_disk_name',
{
'location': 'eastus',
'creation_data': {
'create_option': DiskCreateOption.import_enum,
'source_uri': 'https://bg09.blob.core.windows.net/vm-images/non-existent.vhd'
}
}
)
disk_resource = poller.result()
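Similarly, the following snippet creates a virtual machine image from a VHD in blob storage (again with a
placeholder blob_uri):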
poller = compute_client.images.begin_create_or_update(
'my_resource_group',
'my_image_name',
{
'location': 'eastus',
'storage_profile': {
'os_disk': {
'os_type': 'Linux',
'os_state': "Generalized",
'blob_uri': 'https://bg09.blob.core.windows.net/vm-images/non-existent.vhd',
'caching': "ReadWrite",
}
}
}
)
image_resource = poller.result()
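To duplicate an existing Managed Disk, use the copy option with the source disk's resource ID (managed_disk
here is assumed to be a disk retrieved with compute_client.disks.get):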
poller = compute_client.disks.begin_create_or_update(
'my_resource_group',
'my_disk_name',
{
'location': 'eastus',
'creation_data': {
'create_option': DiskCreateOption.copy,
'source_resource_id': managed_disk.id
}
}
)
disk_resource = poller.result()
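To use Managed Disks implicitly when provisioning a virtual machine, define a storage_profile based on a
Marketplace image; Azure then creates the OS disk as a Managed Disk automatically: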
storage_profile = azure.mgmt.compute.models.StorageProfile(
image_reference = azure.mgmt.compute.models.ImageReference(
publisher='Canonical',
offer='UbuntuServer',
sku='16.04-LTS',
version='latest'
)
)
For a complete example on how to create a virtual machine using the Azure management libraries, for Python,
see Example - Provision a virtual machine.
You can also create a storage_profile from your own image:
storage_profile = azure.mgmt.compute.models.StorageProfile(
image_reference = azure.mgmt.compute.models.ImageReference(
id = image.id
)
)
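To attach a Managed Disk as a data disk on an existing virtual machine, retrieve the VM and the disk, append
the disk to the VM's storage profile, and apply the update (this sketch assumes DiskCreateOptionTypes is
imported from azure.mgmt.compute.models):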
vm = compute_client.virtual_machines.get(
    'my_resource_group',
    'my_vm'
)
managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')
vm.storage_profile.data_disks.append({
'lun': 12, # You choose the value, depending of what is available for you
'name': managed_disk.name,
'create_option': DiskCreateOptionTypes.attach,
'managed_disk': {
'id': managed_disk.id
}
})
async_update = compute_client.virtual_machines.begin_create_or_update(
'my_resource_group',
vm.name,
vm,
)
async_update.wait()
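Virtual machine scale sets use the same storage_profile definition. The following parameters are a sketch that
assumes naming_infix and subnet variables defined elsewhere in the script: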
vmss_parameters = {
'location': 'eastus',
"overprovision": True,
"upgrade_policy": {
"mode": "Manual"
},
'sku': {
'name': 'Standard_A1',
'tier': 'Standard',
'capacity': 5
},
'virtual_machine_profile': {
'storage_profile': {
'image_reference': {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04-LTS",
"version": "latest"
}
},
'os_profile': {
'computer_name_prefix': naming_infix,
'admin_username': 'Foo12',
'admin_password': 'BaR@123!!!!',
},
'network_profile': {
'network_interface_configurations' : [{
'name': naming_infix + 'nic',
"primary": True,
'ip_configurations': [{
'name': naming_infix + 'ipconfig',
'subnet': {
'id': subnet.id
}
}]
}]
}
}
}
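To resize an existing Managed Disk, retrieve it, change its size property, and apply the update: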
managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')
managed_disk.disk_size_gb = 25  # Assumed new size for illustration

async_update = compute_client.disks.begin_create_or_update(
    'my_resource_group',
    'myDisk',
    managed_disk
)
async_update.wait()
Update the storage account type of the Managed Disks
managed_disk = compute_client.disks.get('my_resource_group', 'myDisk')
managed_disk.sku.name = 'Premium_LRS'  # Assumed target storage type for illustration

async_update = compute_client.disks.begin_create_or_update(
    'my_resource_group',
    'myDisk',
    managed_disk
)
async_update.wait()
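You can also create an image from a generalized VHD in blob storage and use it to provision virtual machines: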
async_create_image = compute_client.images.begin_create_or_update(
'my_resource_group',
'myImage',
{
'location': 'westus',
'storage_profile': {
'os_disk': {
'os_type': 'Linux',
'os_state': "Generalized",
'blob_uri': 'https://bg09.blob.core.windows.net/vm-images/non-existent.vhd',
'caching': "ReadWrite",
}
}
}
)
image = async_create_image.result()
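To create a snapshot of an existing Managed Disk: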
async_snapshot_creation = compute_client.snapshots.begin_create_or_update(
'my_resource_group',
'mySnapshot',
{
'location': 'westus',
'creation_data': {
'create_option': 'Copy',
'source_uri': managed_disk.id
}
}
)
snapshot = async_snapshot_creation.result()
See also
Example: Provision a virtual machine
Example: Provision a resource group
Example: List resource groups in a subscription
Example: Provision Azure Storage
Example: Use Azure Storage
Example: Provision and use a MySQL database
Complete a short survey about the Azure SDK for Python
Configure logging in the Azure libraries for Python
Azure libraries for Python that are based on azure.core provide logging output using the standard Python
logging library.
The general process to work with logging is as follows:
1. Acquire the logging object for the desired library and set the logging level.
2. Register a handler for the logging stream.
3. To include HTTP information, pass a logging_enable=True parameter to a client object constructor, a
credential object constructor, or to a specific method.
Details are provided in the remaining sections of this article.
As a general rule, the best resource for understanding logging usage within the libraries is to browse the SDK
source code at github.com/Azure/azure-sdk-for-python. We encourage you to clone this repository locally so
you can easily search for details when needed, as the following sections suggest.
import logging

# Acquire the logger for a library (azure.mgmt.resource in this example)
logger = logging.getLogger('azure.mgmt.resource')

# Set the desired logging level
logger.setLevel(logging.DEBUG)

This example acquires the logger for the azure.mgmt.resource library, then sets the logging level to
logging.DEBUG.
You can call logger.setLevel at any time to change the logging level for different segments of code.
To set a level for a different library, use that library's name in the logging.getLogger call. For example, the azure-
eventhubs library provides a logger named azure.eventhubs, the azure-storage-queue library provides a logger
named azure.storage.queue, and so on. (The SDK source code frequently uses the statement
logging.getLogger(__name__), which acquires a logger using the name of the containing module.)
import logging

# Acquire the logger for the azure libraries in general
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)

Note that the azure logger is used by some libraries instead of a specific logger. For example, the azure-
storage-blob library uses the azure logger.
You can use the logger.isEnabledFor method to check whether any given logging level is enabled:
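For example, the following sketch (assuming logger was acquired as shown earlier) reports which levels are
enabled:

print(f"Logger enabled for ERROR={logger.isEnabledFor(logging.ERROR)}, "
      f"WARNING={logger.isEnabledFor(logging.WARNING)}, "
      f"INFO={logger.isEnabledFor(logging.INFO)}, "
      f"DEBUG={logger.isEnabledFor(logging.DEBUG)}")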
Logging levels are the same as the standard logging library levels. The following table describes the general use
of these logging levels in the Azure libraries for Python:
logging.WARNING (default): a function fails to perform its intended task (but not when
the function can recover, such as by retrying a REST API call). Functions typically log
a warning when raising exceptions. The warning level automatically enables the error level.
import logging
import sys

# Acquire the logger for the azure libraries
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)

# Direct logging output to stdout; without a handler, no output is visible.
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

This example registers a handler that directs log output to stdout. You can use other types of handlers as
described on logging.handlers in the Python documentation or use the standard logging.basicConfig method.
# Enable HTTP logging on the client object when using DEBUG level
# endpoint is the Blob storage URL.
client = BlobClient(endpoint, DefaultAzureCredential(), logging_enable=True)
Enabling HTTP logging for a client object enables logging for all operations invoked through that object.
Enable HTTP logging for a credential object (DEBUG level)
# Enable HTTP logging on the credential object when using DEBUG level
credential = DefaultAzureCredential(logging_enable=True)
Enabling HTTP logging for a credential object applies specifically to operations invoked through that object,
such as token requests, but not to operations in a client object that don't involve authentication.
Enable logging for an individual method (DEBUG level)
# Enable HTTP logging for only this operation when using DEBUG level
client.create_container("container01", logging_enable=True)
import os
import sys
import logging
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Acquire the azure logger and direct full DEBUG output to stdout
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

credential = DefaultAzureCredential()
storage_url = os.environ["AZURE_STORAGE_BLOB_URL"]

# Enable HTTP logging for this client object
blob_client = BlobClient(storage_url, container_name="blob-container-01",
    blob_name="sample-blob.txt", credential=credential, logging_enable=True)
Global configuration
To configure a proxy globally for your script or app, define the HTTP_PROXY or HTTPS_PROXY environment variable
with the server URL. These variables work with any version of the Azure libraries.
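For example (cmd syntax shown; in bash, use export instead of set):

set HTTP_PROXY=http://10.10.1.10:1180
set HTTPS_PROXY=http://10.10.1.10:1180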
These environment variables are ignored if you pass the parameter use_env_settings=False to a client object
constructor or operation method.
From Python code
import os
from azure.identity import DefaultAzureCredential

# Set the proxy before creating any client or credential objects
os.environ["HTTP_PROXY"] = "http://10.10.1.10:1180"

# Objects created after this point pick up the proxy settings
credential = DefaultAzureCredential()
storage_url = "your_url"
You can use the Azure libraries for Python to connect to all regions where Azure is available. By default, the
libraries are configured to connect to the global Azure cloud.
To connect to a different cloud, import the appropriate constant from azure.identity.AzureAuthorityHosts and
apply it when creating credential and client objects. When using DefaultAzureCredential, as shown in the
following example, specify the cloud by using the appropriate AzureAuthorityHosts value.
import os
from msrestazure.azure_cloud import AZURE_CHINA_CLOUD as CLOUD
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from azure.identity import DefaultAzureCredential, AzureAuthorityHosts
# Assumes the subscription ID and tenant ID to use are in the AZURE_SUBSCRIPTION_ID and
# AZURE_TENANT_ID environment variables
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]
# When using sovereign domains (that is, any cloud other than AZURE_PUBLIC_CLOUD),
# you must use an authority with DefaultAzureCredential.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_CHINA)
resource_client = ResourceManagementClient(
credential, subscription_id,
base_url=CLOUD.endpoints.resource_manager,
credential_scopes=[CLOUD.endpoints.resource_manager + "/.default"])
subscription_client = SubscriptionClient(
credential,
base_url=CLOUD.endpoints.resource_manager,
credential_scopes=[CLOUD.endpoints.resource_manager + "/.default"])
import os
from msrestazure.azure_cloud import get_cloud_from_metadata_endpoint
from azure.profiles import KnownProfiles
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from azure.identity import DefaultAzureCredential

# Assumes the subscription ID and tenant ID to use are in the AZURE_SUBSCRIPTION_ID and
# AZURE_TENANT_ID environment variables
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]

stack_cloud = get_cloud_from_metadata_endpoint("https://contoso-azurestack-arm-endpoint.com")
# When using a private cloud, you must use an authority with DefaultAzureCredential.
# The active_directory endpoint should be a URL like https://login.microsoftonline.com.
credential = DefaultAzureCredential(authority=stack_cloud.endpoints.active_directory)
resource_client = ResourceManagementClient(
credential, subscription_id,
base_url=stack_cloud.endpoints.resource_manager,
profile=KnownProfiles.v2019_03_01_hybrid,
credential_scopes=[stack_cloud.endpoints.active_directory_resource_id + "/.default"])
subscription_client = SubscriptionClient(
credential,
base_url=stack_cloud.endpoints.resource_manager,
profile=KnownProfiles.v2019_03_01_hybrid,
credential_scopes=[stack_cloud.endpoints.active_directory_resource_id + "/.default"])
App hosting for Python apps on Azure
The following articles help you get started with various app hosting options on Azure:
Serverless hosting:
Create a function in Azure using the Azure CLI that responds to HTTP requests
Connect Azure Functions to Azure Storage using command line tools
Create an Azure Functions project using Visual Studio Code
Connect Azure Functions to Azure Storage using Visual Studio Code
Web app hosting and monitoring:
Create a Python app in Azure App Service on Linux
Configure a Linux Python app for Azure App Service
Set up Azure Monitor for your Python application
Container hosting:
Deploy an Azure Kubernetes Service cluster using the Azure CLI
Deploy a container instance in Azure using the Azure CLI
Create your first Service Fabric container application on Linux
Batch jobs:
Use Python API to run an Azure Batch job
Tutorial: Run a parallel workload with Azure Batch using the Python API
Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
Virtual machines:
Create a Linux virtual machine with the Azure CLI
Data solutions for Python apps on Azure
The following articles help you get started with various data solutions on Azure.
SQL databases
PostgreSQL:
Use Python to connect and query data in Azure Database for PostgreSQL
Run a Python (Django or Flask) web app with PostgreSQL in Azure App Service
MySQL:
Use Python to connect and query data with Azure Database for MySQL
Azure SQL:
Use Python to query an Azure SQL database
MariaDB:
How to connect applications to Azure Database for MariaDB
Identity and security for Python apps on Azure
The following articles help you get started with various identity and security options on Azure:
Authentication and identity
Add sign-in with Microsoft to a Python web app
Acquire a token and call Microsoft Graph API from a Python console app using app's identity
Security and key/secret/certificate storage
Store and retrieve certificates with Key Vault
Store and retrieve keys with Key Vault
Store and retrieve secrets with Key Vault
Machine learning for Python apps on Azure
The following articles help you get started with various machine learning options on Azure:
Get started creating your first ML experiment with the Python SDK
Train your first ML model
Train image classification models with MNIST data and scikit-learn using Azure Machine Learning
Auto-train an ML model
Access datasets with Python using the Azure Machine Learning Python client library
Configure automated ML experiments in Python
Deploy a data pipeline with Azure DevOps
Create and run machine learning pipelines with Azure Machine Learning SDK
AI service for Python apps on Azure
Azure Cognitive Services make extensive AI capabilities easily available to applications in areas such as
computer vision and image processing, language analysis and translation, speech, decision-making, and
comprehensive search.
Because Azure Cognitive Services continues to evolve, the best way to find getting started material for Python is
to begin on the Azure Cognitive Services hub page. Select a service of interest and then expand the Quickstarts
node. Under Quickstarts, look for subsections about using the client libraries or the REST API. The articles in
those subsections include Python where supported.
Go to the Cognitive Services hub page.
Also see the following articles for Azure Cognitive Search, which is in a separate part of the documentation from
Cognitive Services:
Create an Azure Cognitive Search index in Python using Jupyter notebooks.
Use Python and AI to generate searchable content from Azure blobs
Messaging and IoT for Python apps on Azure
The following articles help you get started with various messaging options on Azure.
Messaging
Notifications:
How to use Notification Hubs from Python
Queues:
How to use Azure Queue storage v2.1 from Python
Azure Queue storage client library v12 for Python
Use Azure Service Bus queues with Python
Use Service Bus topics and subscriptions with Python
Real-time web functionality (SignalR):
Create a chat room with Azure Functions and SignalR Service using Python
Event ingestion
Event ingestion:
Ingest real-time data with Event Hubs using Python
Capture Event Hubs data in Azure Storage and read it by using Python
Route custom events to web endpoint with Azure CLI and Event Grid
Media streaming:
Connect to Media Services v3 API
Automation:
Tutorial: Create a Python runbook
DevOps:
Create a CI/CD pipeline for Python with Azure DevOps Starter
Build Python apps
Geographical mapping:
Tutorial: Route electric vehicles by using Azure Notebooks
Tutorial: Join sensor data with weather forecast data by using Azure Notebooks
Burrows-Wheeler Aligner (BWA) and the Genome Analysis Toolkit (GATK):
Run a workflow through the Microsoft Genomics service
Resource management:
Run your first Resource Graph query using Python
Virtual machine management:
Create and manage Windows VMs in Azure using Python