Tags: samadejacobs/lbann

v0.93

Release Notes: v0.93

This release contains a major refactoring / overhaul of the code base.
Key highlights include:
- Layer design has been decomposed into smaller, simpler layers, each
  with a single compute behavior.  Specifically, linear combinations of
  the inputs, non-linear activations, and regularizers now exist as
  their own layers.
- Layers now have a template parameter that specifies the data layout
  for the distributed matrices (see the sketch after this list).
- The prototext interface for specifying neural network models and data
  readers is nearly fully functional.
- The code now adheres to the internal coding style outlined in
  README_coding_style.txt.
- Dead code has been eliminated and the layer file hierarchy has been
  cleaned up.
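
As an illustration of the first two highlights, below is a minimal,
self-contained C++ sketch.  The class and enum names are hypothetical
stand-ins, not LBANN's actual API: it only shows the idea of layers
that each implement a single compute behavior and take a template
parameter describing the data layout of their matrices.

  #include <algorithm>
  #include <cstddef>
  #include <iostream>
  #include <vector>

  // Hypothetical tags standing in for distributed-matrix data layouts.
  enum class data_layout { MODEL_PARALLEL, DATA_PARALLEL };

  // A layer is parameterized on the layout of its matrices and has
  // exactly one compute behavior (forward).
  template <data_layout Layout>
  struct layer {
    virtual ~layer() = default;
    virtual std::vector<float> forward(const std::vector<float>& x) const = 0;
  };

  // Linear combination of the inputs (a single scale stands in for the
  // usual weights, to keep the sketch short).
  template <data_layout Layout>
  struct linear_layer : layer<Layout> {
    float scale;
    explicit linear_layer(float s) : scale(s) {}
    std::vector<float> forward(const std::vector<float>& x) const override {
      std::vector<float> y(x.size());
      for (std::size_t i = 0; i < x.size(); ++i) y[i] = scale * x[i];
      return y;
    }
  };

  // Non-linear activation as its own layer.
  template <data_layout Layout>
  struct relu_layer : layer<Layout> {
    std::vector<float> forward(const std::vector<float>& x) const override {
      std::vector<float> y(x.size());
      for (std::size_t i = 0; i < x.size(); ++i) y[i] = std::max(0.0f, x[i]);
      return y;
    }
  };

  int main() {
    // Compose the two single-behavior layers under a data-parallel layout.
    linear_layer<data_layout::DATA_PARALLEL> lin(2.0f);
    relu_layer<data_layout::DATA_PARALLEL> act;
    const std::vector<float> x = {-1.0f, 0.5f, 3.0f};
    for (float v : act.forward(lin.forward(x))) std::cout << v << ' ';
    std::cout << '\n';  // prints: 0 1 6
    return 0;
  }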

v0.93-rc0

Start of Release v0.93

v0.92

Release v0.92

New features include (but are not limited to):
  - Full support for convolutional and pooling layers
  - GPU acceleration of local Elemental GEMM operations (see the
    sketch after this list)
  - Improved network and data reader support
    -- Alexnet
    -- VGG
    -- CIFAR-10
  - Added a suite of regularizers, objective functions, and metrics, including:
    -- Batch normalization
    -- Dropout
    -- L2 regularization
  - Dramatically improved the performance of inter-model communication
  - Added a suite of image preprocessing routines
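
To make the GEMM item concrete, here is a minimal sketch of a local
(per-process) matrix product using Elemental's standard Gemm routine.
It is illustrative only and is not LBANN's internal code path; whether
the product is offloaded to a GPU depends on how Elemental/LBANN were
built, not on the call itself.

  #include <El.hpp>

  int main(int argc, char* argv[]) {
    El::Initialize(argc, argv);
    {
      const El::Int m = 4, n = 3, k = 5;
      // Local (sequential) matrices; the "local GEMM" in the notes
      // refers to products on these per-process blocks.
      El::Matrix<float> A, B, C;
      El::Gaussian(A, m, k);   // A is m x k with Gaussian entries
      El::Gaussian(B, k, n);   // B is k x n
      El::Zeros(C, m, n);      // C is m x n, initialized to zero
      // C := 1*A*B + 0*C
      El::Gemm(El::NORMAL, El::NORMAL, 1.0f, A, B, 0.0f, C);
      El::Print(C, "C = A*B");
    }
    El::Finalize();
    return 0;
  }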

v0.91

v0.91 incorporates a number of changes throughout the LBANN code base.

In particular, there is a new build system that downloads all of
LBANN's dependencies into its build tree and compiles them locally.
Additional improvements include optimizations in the data-parallel,
multiple-model training framework, support for convolutional layers,
and general bug fixes.

v0.9

Initial release of the LBANN toolkit.