The Intel® Python Daal4py Getting Started sample shows how to do batch linear regression using daal4py, a Python API package powered by the Intel® oneAPI Data Analytics Library (oneDAL).
| Area | Description |
| :--- | :--- |
| Category | Getting Started |
| What you will learn | Basic daal4py programming model for Intel CPUs |
| Time to complete | 5 minutes |
daal4py is a simplified API to oneDAL that allows data scientists and machine learning users to apply the framework quickly. It provides an abstraction over Intel® oneDAL for direct use or for integration into your own framework.
In this sample, you will run a batch linear regression model using daal4py memory objects from the oneDAL library. You will also learn how to train a model and save the trained model to a file.
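For context, the sketch below illustrates the kind of daal4py calls the sample is built around: training a batch linear regression model and pickling the trained model for later reuse. The synthetic data, the file name, and the variable names here are illustrative assumptions, not code copied from the sample.

```python
# Minimal sketch (illustrative, not the sample source): batch linear regression with daal4py.
import pickle

import numpy as np
import daal4py as d4p

# Hypothetical synthetic data: 1000 observations, 13 features, 1 response.
rng = np.random.default_rng(0)
X = rng.random((1000, 13))
y = X @ rng.random((13, 1)) + 0.5

# Train the batch linear regression model on the CPU.
train_algo = d4p.linear_regression_training()
train_result = train_algo.compute(X, y)

# Save the trained model to a file (assumes the model object is picklable).
with open("linear_regression_model.pkl", "wb") as model_file:
    pickle.dump(train_result.model, model_file)

# Use the trained model to predict on new data.
predict_algo = d4p.linear_regression_prediction()
predictions = predict_algo.compute(X, train_result.model).prediction
print(predictions[:5])
```

The notebook in this sample walks through equivalent steps on its own data.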
| Optimized for | Description |
| :--- | :--- |
| OS | Ubuntu* 20.04 (or newer) |
| Hardware | Intel Atom® processors <br> Intel® Core™ processor family <br> Intel® Xeon® processor family <br> Intel® Xeon® Scalable processor family |
| Software | Intel® oneAPI Data Analytics Library (oneDAL) |
Note: AI and Analytics samples are validated on the AI Tools Offline Installer. For the full list of validated platforms, refer to Platform Validation.
This getting started sample code is implemented for CPUs using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python*.
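If you want to confirm that the environment is set up as assumed, a quick check like the following (a hypothetical snippet, not part of the sample) verifies that both packages import:

```python
# Hypothetical environment check: confirm daal4py and scikit-learn are importable.
import daal4py
import sklearn

print("daal4py:", getattr(daal4py, "__version__", "version attribute not found"))
print("scikit-learn:", sklearn.__version__)
```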
The Intel® oneAPI Data Analytics Library (oneDAL) is ready for use once you finish the AI Tools installation and have run the post-installation script.
You will need to download and install the following toolkits, tools, and components to use the sample.
1. Get AI Tools
Required AI Tools: daal4py (Select Intel® Extension for Scikit-learn* on AI Tools Selector to install)
If you have not already done so, select and install these tools via the AI Tools Selector. AI and Analytics samples are validated on the AI Tools Offline Installer, so it is recommended to select the Offline Installer option in the AI Tools Selector.
Note: If the Docker option is chosen in the AI Tools Selector, refer to Working with Preset Containers to learn how to run the Docker container and the samples.
2. (Offline Installer) Activate the AI Tools bundle base environment
If the default path is used during the installation of AI Tools:
source $HOME/intel/oneapi/intelpython/bin/activate
If a non-default path is used:
source <custom_path>/bin/activate
3. (Offline Installer) Activate relevant Conda environment
conda activate base
4. Clone the GitHub repository
git clone https://github.com/oneapi-src/oneAPI-samples.git
cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelPython_daal4py_GettingStarted
5. Install dependencies
Note: Before running the following commands, make sure your Conda/Python environment with AI Tools installed is activated
pip install -r requirements.txt
pip install notebook
For Jupyter Notebook, refer to Installing Jupyter for detailed installation instructions.
Note: Before running the sample, make sure Environment Setup is completed.
Go to the section below that corresponds to the installation method you chose in the AI Tools Selector to see the relevant instructions:
AI Tools Offline Installer
1. Register the Conda kernel as a Jupyter Notebook kernel
If the default path is used during the installation of AI Tools:
$HOME/intel/oneapi/intelpython/envs/base/bin/python -m ipykernel install --user --name=base
If a non-default path is used:
<custom_path>/bin/python -m ipykernel install --user --name=base
2. Launch Jupyter Notebook
jupyter notebook --ip=0.0.0.0
3. Follow the instructions to open the URL with the token in your browser
4. Select the Notebook
IntelPython_daal4py_GettingStarted.ipynb
5. Change the kernel to base
6. Run every cell in the Notebook in sequence
Conda/PIP
Note: Before running the instructions below, make sure your Conda/Python environment with AI Tools installed is activated.
1. Register the Conda/Python kernel as a Jupyter Notebook kernel
For Conda:
<CONDA_PATH_TO_ENV>/bin/python -m ipykernel install --user --name=<your-env-name>
To find <CONDA_PATH_TO_ENV>, run conda env list and locate your Conda environment path.
For PIP:
python -m ipykernel install --user --name=<your-env-name>
2. Launch Jupyter Notebook
jupyter notebook --ip=0.0.0.0
3. Follow the instructions to open the URL with the token in your browser
4. Select the Notebook
IntelPython_daal4py_GettingStarted.ipynb
5. Change the kernel to <your-env-name>
6. Run every cell in the Notebook in sequence
Docker
AI Tools Docker images already have Get Started samples pre-installed. Refer to Working with Preset Containers to learn how to run the Docker container and the samples.
Running the sample produces output similar to the following:
Here's our model:
NumberOfBetas: 14
NumberOfResponses: 1
InterceptFlag: False
Beta: array(
[[ 0.00000000e+00 -1.05416344e-01 5.25259886e-02 4.26844883e-03
2.76607367e+00 -2.82517989e+00 5.49968304e+00 3.48833264e-03
-8.73247684e-01 1.74005447e-01 -8.38917510e-03 -3.28044397e-01
1.58423529e-02 -4.57542900e-01]],
dtype=float64, shape=(1, 14))
NumberOfFeatures: 13
Here is one of our loaded model's features:
[[ 0.00000000e+00 -1.05416344e-01 5.25259886e-02 4.26844883e-03
2.76607367e+00 -2.82517989e+00 5.49968304e+00 3.48833264e-03
-8.73247684e-01 1.74005447e-01 -8.38917510e-03 -3.28044397e-01
1.58423529e-02 -4.57542900e-01]]
[CODE_SAMPLE_COMPLETED_SUCCESFULLY]
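The attributes shown above (NumberOfBetas, InterceptFlag, Beta, NumberOfFeatures) are what printing a trained daal4py linear regression model object produces, and the final block is the beta array read back from the saved model. A minimal sketch of loading a saved model and inspecting it could look like the following; the file name matches the hypothetical one used in the training sketch earlier, not necessarily the one the notebook writes.

```python
# Illustrative sketch: load a previously pickled daal4py linear regression model and inspect it.
import pickle

with open("linear_regression_model.pkl", "rb") as model_file:
    loaded_model = pickle.load(model_file)

# Printing the model object lists NumberOfBetas, InterceptFlag, Beta, and NumberOfFeatures.
print(loaded_model)

# The beta coefficients are also accessible directly as a NumPy array (assumed attribute name).
print(loaded_model.Beta)
```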
Related samples:
- Intel® Python XGBoost* Getting Started Sample
- Intel® Python Scikit-learn Extension Getting Started Sample
Code samples are licensed under the MIT license. See License.txt for details.
Third-party program licenses can be found here: third-party-programs.txt
*Other names and brands may be claimed as the property of others. See Trademarks for details.