FastAI is a deep learning library designed to support both researchers and practitioners. It offers low-level components for developing custom models, as well as high-level abstractions. This dual functionality is achieved without compromising performance or ease of use.
At the core of FastAI’s design is a carefully structured architecture based on independent abstractions. These abstractions capture recurring patterns across deep learning and data processing workflows. By combining the flexibility of PyTorch with the dynamic features of Python, Fastai provides clear, concise implementations of complex behaviors.
FastAI provides three levels of API:
- High-level
- Mid-level
- Low-level

FastAI - High level API
The high-level API provides the highest abstraction, emphasizing simplicity and automation for common tasks, making it ideal for beginners or practitioners who seek a quick and easy solution without delving into detailed configurations.
- Learner: A Learner in Fastai is an object that encapsulates the training process of a machine learning model. It combines the data (provided by a DataLoaders object), the model architecture and the training configuration. It simplifies the training workflow, allowing you to focus on model development and experimentation.
- DataBlock: The DataBlock is part of the data preprocessing pipeline. It is used to define how to get data into a format that can be fed into a model. It involves specifying the types of data blocks, how to obtain items and labels, data splitting and transformations.
In high-level API, the DataBlock handles data preprocessing and loading by structuring input data into DataLoaders, while the Learner manages model training, validation and evaluation using that data. Together, they form a streamlined pipeline where DataBlock prepares the data and Learner oversees the learning process, clearly reflecting their roles in different stages of the machine learning workflow.
FastAI - Mid Level API
The mid-level API strikes a balance between abstraction and customization, offering flexibility to customize various components of the training process, such as data processing, model architecture and training loop. It works for users who want more control over specific aspects of their models while still benefiting from higher-level abstractions.
- Callbacks: Callbacks in Fastai are functions that let you customize the training loop. They provide hooks at various stages of training, allowing you to execute additional actions or modify the behavior of the training process. Callbacks can be used for tasks such as saving model checkpoints, adjusting learning rates, logging metrics and more.
- General optimizer: The general optimizer in Fastai refers to the choice of optimization algorithm used during the training of a machine learning model. Fastai provides a variety of optimizers, including SGD (Stochastic Gradient Descent), Adam and others. The optimizer is responsible for updating the model parameters during each iteration of the training loop.
- Data core: The data core in Fastai refers to the core components and abstractions for handling data. It includes the DataBlock, DataLoaders and other functionalities that streamline the process of transforming and managing datasets.
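The callback mechanism boils down to a training loop that calls every registered callback at named hook points (Fastai uses names like before_fit, before_batch, after_batch and after_fit). The following is a from-scratch sketch of that pattern, not Fastai's actual implementation:

```python
class Callback:
    """Base class: a callback implements any subset of the hook methods."""
    def before_fit(self, learn): pass
    def before_batch(self, learn): pass
    def after_batch(self, learn): pass
    def after_fit(self, learn): pass

class LoggerCallback(Callback):
    """Example callback: record which hooks fired, in order."""
    def __init__(self): self.events = []
    def before_fit(self, learn):  self.events.append("before_fit")
    def after_batch(self, learn): self.events.append("after_batch")
    def after_fit(self, learn):   self.events.append("after_fit")

class TinyLearner:
    """Minimal training loop that calls every callback at each hook point."""
    def __init__(self, batches, cbs): self.batches, self.cbs = batches, cbs
    def _run(self, hook):
        for cb in self.cbs: getattr(cb, hook)(self)
    def fit(self):
        self._run("before_fit")
        for batch in self.batches:
            self._run("before_batch")
            # ... forward pass, loss, backward pass, optimizer step ...
            self._run("after_batch")
        self._run("after_fit")

logger = LoggerCallback()
TinyLearner(batches=[1, 2], cbs=[logger]).fit()
print(logger.events)  # ['before_fit', 'after_batch', 'after_batch', 'after_fit']
```

Checkpointing, learning-rate scheduling and metric logging are all just callbacks that implement the relevant hooks, which is why they can be mixed and matched freely.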
FastAI - Low level API
The low-level API provides the lowest level of abstraction, granting full control over the training process. It is suited to advanced users and those experimenting with novel architectures and algorithms, allowing fine-grained customization at every stage. Several features contribute to this flexibility and control:
- Pipeline: A Pipeline is a series of operations applied sequentially to a piece of data. Pipelines are constructed using the Pipeline class and are used for data processing and augmentation during training; they make it easy to compose a sequence of transformations to be applied to the data.
- Reversible transforms: These are operations that can be undone, allowing a transformation to be applied and later reversed. The Transform class in Fastai provides a decodes method, which defines how to reverse a transformation. Reversible transforms are crucial when interpreting or visualizing transformed data.
- OO Tensors: Fastai's object-oriented tensors (such as TensorImage and TensorCategory) subclass PyTorch tensors and carry semantic type information, so behavior like display and decoding can be dispatched on the tensor's type. This approach aligns with the principles of object-oriented programming and makes code more modular and extensible.
- Optimized Ops: It refers to the use of optimized low-level operations on tensors for efficiency. Fastai leverages PyTorch's tensor operations, which are optimized for performance to ensure that computations are carried out efficiently.
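Pipelines and reversible transforms compose naturally: encoding applies each transform in order, and decoding runs each transform's decodes in reverse order. The from-scratch sketch below mirrors that design (it is not Fastai's actual implementation):

```python
class Transform:
    """A transform pairs a forward operation with its inverse."""
    def encodes(self, x): return x
    def decodes(self, x): return x

class Scale(Transform):
    """Multiply by a factor; decoding divides it back out."""
    def __init__(self, factor): self.factor = factor
    def encodes(self, x): return x * self.factor
    def decodes(self, x): return x / self.factor

class Shift(Transform):
    """Add an offset; decoding subtracts it."""
    def __init__(self, offset): self.offset = offset
    def encodes(self, x): return x + self.offset
    def decodes(self, x): return x - self.offset

class Pipeline:
    """Apply transforms in order; undo them in reverse order."""
    def __init__(self, tfms): self.tfms = tfms
    def __call__(self, x):
        for t in self.tfms: x = t.encodes(x)
        return x
    def decode(self, x):
        for t in reversed(self.tfms): x = t.decodes(x)
        return x

pipe = Pipeline([Scale(2), Shift(3)])
y = pipe(10)         # (10 * 2) + 3 = 23
x = pipe.decode(y)   # (23 - 3) / 2 = 10.0
print(y, x)
```

This is the mechanism that lets Fastai show you a human-readable image and label after normalization and categorization have been applied: it simply decodes back through the pipeline.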
The choice of API level depends on the specific needs, experience level and the degree of control and customization required for machine learning tasks.
FastAI Across Domains
Below is an overview of how FastAI applies to each major domain:
1. Computer Vision (Images)
FastAI has support for computer vision tasks, making it easy to build models for:
- Image Classification: Classify objects in images using pretrained models like ResNet.
- Object Detection: Identify and locate multiple objects within an image.
- Image Segmentation: Classify each pixel in an image (medical imaging or autonomous driving).
2. Natural Language Processing (Text)
FastAI simplifies NLP workflows by automating tokenization, batching and model training. Supported tasks include:
- Text Classification: Sentiment analysis, spam detection, topic categorization.
- Language Modeling: Predict the next word or sentence for tasks like autocomplete or generative text.
- Named Entity Recognition (NER): Identify people, places and other named items in text.
- Translation: Train sequence-to-sequence models for translating between languages.
3. Tabular Data (Structured Data)
FastAI provides built-in support for mixed-type tabular data, combining categorical and continuous features. Its capabilities include:
- Classification and Regression: Handle tasks like predicting customer churn, loan defaults or sales forecasting.
- Feature Engineering: Automatically process and embed categorical variables.
- Interpretability: Tools to interpret feature importance and model behavior.
4. Collaborative Filtering (Recommendations)
FastAI also supports collaborative filtering for building recommendation systems using user-item interaction data. Features include:
- Matrix Factorization Models: Learn user and item embeddings from sparse interaction matrices.
- Custom Datasets: Easily train on datasets like MovieLens for movie recommendations.
Unified API Advantage
What makes Fastai truly powerful is that it uses the same Learner API across all domains. This consistency allows users to:
- Switch between domains with minimal learning curve
- Reuse knowledge of callbacks, metrics and training loops
- Apply transfer learning and model interpretability in a unified manner
Image Classification with Fastai: Example
Step 1: Install PyTorch and Fastai
To begin, we install the necessary libraries. Fastai is built on top of PyTorch, so installing Fastai will also install the required PyTorch dependencies.
pip install fastai
Step 2: Import Required Libraries
We start by importing the necessary modules from Fastai:
from fastai.vision.all import *
import matplotlib.pyplot as plt
Step 3: Load and Prepare the Dataset
We use Fastai’s built-in method to download and load the MNIST dataset.
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path, valid='valid')
# show a few sample images
dls.show_batch(max_n=9, figsize=(6,6))
Output: a 3×3 grid of sample digit images with their labels.
Step 4: Explore Dataset Splits
The dataset is already organized into training and validation folders. The data distribution is as follows:
print(f"Training set size: {len(dls.train_ds)}")
print(f"Validation set size: {len(dls.valid_ds)}")
Output: the sizes of the training and validation sets.
Step 5: Train a Convolutional Neural Network
Now, we create and train a CNN using the cnn_learner function (renamed vision_learner in recent Fastai versions) with a ResNet-18 backbone.
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(2)

Step 6: Evaluate the Model
After training, we evaluate the model using classification metrics and a confusion matrix.
interp = ClassificationInterpretation.from_learner(learn)
interp.print_classification_report()
interp.plot_confusion_matrix(figsize=(6,6))
Output: a per-class classification report and the confusion matrix plot.
Step 7: Manual Image Prediction
To test the model manually, we select an image from the validation set and make a prediction.
img_path = dls.valid_ds.items[0]
img = PILImage.create(img_path)
plt.imshow(img)
plt.axis('off')
plt.title("Manual Test Image")
plt.show()
pred_class, pred_idx, probs = learn.predict(img)
print(f"Predicted: {pred_class}")
Output: the test image and its predicted class.
The image clearly resembles the digit '3', and the model predicts it correctly.
Why FastAI?
Choosing Fastai for deep learning tasks offers several compelling advantages:
1. High-Level API
With its high-level data block API, Fastai reduces the amount of boilerplate code usually involved in deep learning applications. It lets users concentrate on the essential elements of model construction and training, and this abstraction speeds up the development process.
2. Abstractions of Training Loops
The library introduces the idea of "Learners": objects that combine the model, data and optimization details. This streamlines the training loop, making deep learning more approachable for beginners while giving seasoned users room to customize their workflows.
3. PyTorch integration
Fastai, which is based on PyTorch, is well-integrated with the popular deep learning framework. Users can take advantage of Fastai's high-level abstractions and PyTorch's functionalities at the same time.
4. APIs Specific to Applications
For a number of disciplines including computer vision, natural language processing and collaborative filtering, Fastai offers specific APIs. These application-specific APIs speed up the creation of models for certain applications by offering pre-built methods and classes suited to typical tasks in each domain.
5. Data Block API
Deep learning depends on data handling, and Fastai's Data Block API makes this process easier. With clear, expressive code, users can load, preprocess and augment datasets with ease. This flexibility is especially helpful when working with varied data formats and types.
In conclusion, Fastai makes it easy to build and train deep learning models with minimal code while offering the flexibility to customize as needed. This example demonstrates how quickly we can go from data loading to model evaluation using Fastai’s high-level APIs.