High Level Architecture

This document summarizes the high-level architecture of the Akida Execution Engine (AEE). The AEE currently provides model instantiation, training, and inference through a Python framework and a C++ kernel library that simulates an Akida NSoC chip. Upcoming versions will allow direct layer instantiation and connection in Python. Future versions aim to remove YAML, rely solely on Flatbuffer serialization, and introduce a hardware interface library for running models on actual Akida hardware.


High level architecture

General overview

Like other machine learning platforms, the Akida Execution Engine (AEE) is composed of a Python
framework library and a C++ kernel library.

The current features of the AEE are:

• the instantiation of Akida network models from YAML files,
• the training of instantiated models,
• the inference of instantiated models,
• the serialization of trained models.

The upcoming features of the AEE include:

• the creation of models using a sequential API.
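A sequential API of this kind might look roughly as follows. This is a minimal sketch: the `Model`, `InputData`, and `FullyConnected` names and parameters are assumptions made for illustration, not the actual AEE API.

```python
# Illustrative sketch only: class and parameter names below are assumptions,
# not the real AEE API. They show the general shape of a sequential API.

class Layer:
    def __init__(self, name):
        self.name = name

class InputData(Layer):
    """Hypothetical input layer taking dense image dimensions."""
    def __init__(self, width, height, name="input"):
        super().__init__(name)
        self.width = width
        self.height = height

class FullyConnected(Layer):
    """Hypothetical fully-connected layer."""
    def __init__(self, num_neurons, name="fc"):
        super().__init__(name)
        self.num_neurons = num_neurons

class Model:
    """A model built by appending layers one after another."""
    def __init__(self):
        self.layers = []

    def add(self, layer):
        # Each new layer is implicitly connected to the previous one.
        self.layers.append(layer)
        return self

model = Model()
model.add(InputData(28, 28))
model.add(FullyConnected(10))
print([l.name for l in model.layers])  # ['input', 'fc']
```

The key property of such an API is that connectivity is implicit in the order of `add` calls, which matches the "sequential" wording above.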

Current (AEE 1.3.7 - June 2019)

In the current version, all AEE features are provided by the C++ kernel library, which is in fact a simulator
library that replicates the behaviour of the Akida NSoC on a CPU.

The Python framework library is a very thin interface layer that wraps the calls to the simulator in Python.

Models can only be instantiated through the deserialization of YAML network description files, which can
either be written manually or generated by the C++ kernel engine from an already instantiated model.
The trained model variables for each layer can be saved alongside the network description in dedicated
binary files using the Google Flatbuffer format.
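The instantiation-by-deserialization flow can be sketched as follows. The description schema below is invented for illustration (and uses JSON from the standard library as a stand-in for the YAML parsing done by the C++ kernel); the real YAML schema is not shown in this document.

```python
# Sketch of the flow: network description file -> per-layer configs -> model.
# The schema and field names here are assumptions, not the AEE's real format.
import json

description = json.loads("""
{
  "name": "demo_net",
  "layers": [
    {"type": "inputData", "width": 28, "height": 28},
    {"type": "fullyConnected", "neurons": 10}
  ]
}
""")

def instantiate(desc):
    # In the AEE, this step is performed by the C++ kernel library after
    # deserializing the YAML file; here we just build plain dicts.
    return [dict(layer) for layer in desc["layers"]]

model_layers = instantiate(description)
print(len(model_layers))  # 2
```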

Next (AEE 1.4.x - Q3 2019)

In the upcoming release of the AEE, the python framework library will be extended to allow:

• the direct instantiation of Akida layers,
• the connection of multiple layers sequentially into coherent models.

The C++ kernel library will still only provide a CPU implementation of the Akida training and inference.

The serialization of models and layers will still be handled by the C++ kernel library using the hybrid
YAML/Flatbuffer format, but each layer's Flatbuffer file will also contain the layer configuration, allowing
layers to be instantiated independently.
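The effect of bundling a layer's configuration with its trained variables can be sketched as follows. The `LayerBundle` structure and its field names are assumptions for illustration; the real files use the Google Flatbuffer format rather than Python objects.

```python
# Sketch: storing a layer's configuration next to its trained variables,
# so the layer can be rebuilt without a separate network description.
# Names and fields are illustrative assumptions, not the AEE's schema.
from dataclasses import dataclass, field

@dataclass
class LayerBundle:
    config: dict                                    # e.g. type and dimensions
    variables: dict = field(default_factory=dict)   # trained weights, thresholds

def instantiate_layer(bundle):
    # With the configuration stored alongside the variables, this single
    # layer can be instantiated independently of the rest of the model.
    layer = dict(bundle.config)
    layer["variables"] = bundle.variables
    return layer

bundle = LayerBundle(config={"type": "fullyConnected", "neurons": 10},
                     variables={"weights": [0.0] * 10})
layer = instantiate_layer(bundle)
```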

Future (AEE 2.0 - TBD)

In the target release of the AEE, the legacy hybrid serialization will be removed and replaced by a fully
Flatbuffer-based serialization.

The YAML format will still be supported as a template mechanism to quickly create new networks; it will,
however, only be available through the Python API.

The C++ kernel library will be composed of three components:

• a core library in charge of model and layer instantiation and serialization,
• a simulator library providing pure software implementations of the Akida inference and training
operations,
• a hardware interface library to run the same operations on the Akida NSoC.

Hardware abstraction layer

With our current understanding of the Akida hardware interface, the abstraction interface is intended to
be layer-wise. The corresponding implementation is expected to be able to:

• configure the layer network nodes from the provided configuration and variables,
• accept dense (images) or sparse (spikes) inputs and return sparse outputs.
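The two requirements above can be sketched as a layer-wise backend interface. The method names and the toy simulator behaviour below are assumptions for illustration; the actual AEE interface is not specified in this document.

```python
# Sketch of a layer-wise hardware abstraction interface matching the two
# requirements above. Names and behaviour are illustrative assumptions.
from abc import ABC, abstractmethod

class LayerBackend(ABC):
    @abstractmethod
    def configure(self, config, variables):
        """Program the layer's network nodes from its config and variables."""

    @abstractmethod
    def forward(self, inputs, sparse):
        """Accept dense (images) or sparse (spikes) inputs; return spikes."""

class SimulatorBackend(LayerBackend):
    """Pure-software stand-in for the CPU simulator implementation."""
    def configure(self, config, variables):
        self.config = config
        self.variables = variables

    def forward(self, inputs, sparse):
        # Toy behaviour: if the input is dense, emit a spike (index) for every
        # non-zero value; if it is already sparse, pass the events through.
        return inputs if sparse else [i for i, v in enumerate(inputs) if v]

backend = SimulatorBackend()
backend.configure({"type": "fullyConnected"}, {})
spikes = backend.forward([0, 3, 0, 1], sparse=False)  # [1, 3]
```

A hardware-backed implementation of the same interface would program the Akida NSoC in `configure` and stream events in `forward`, leaving callers unchanged.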

If that is the case, hybrid networks could be instantiated with some layers running in the
simulator and others on the Akida NSoC, letting the different layer types be integrated over
multiple iterations.
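Because each layer would run behind the same interface, executing a hybrid network reduces to running the layers in order, whatever backend each one uses. A minimal sketch, with placeholder backends that are assumptions rather than real AEE classes:

```python
# Sketch of the hybrid-execution idea: each layer carries its own backend,
# so simulated layers and hardware layers can be mixed in one model.
# The Backend class is a placeholder assumption, not a real AEE class.

class Backend:
    def __init__(self, name):
        self.name = name

    def forward(self, events):
        return events  # identity placeholder for a real layer computation

def run_hybrid(layers, events):
    # Layers are executed in order, regardless of where each one runs.
    for backend in layers:
        events = backend.forward(events)
    return events

pipeline = [Backend("simulator"), Backend("hardware"), Backend("simulator")]
out = run_hybrid(pipeline, [1, 4, 7])
```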
