Orfeo ToolBox Cookbook for Non-Developers
Updated for OTB-4.0
http://www.orfeo-toolbox.org
e-mail: [email protected]
The ORFEO Toolbox is not a black box.
Ch.D.
Foreword
After almost 5 years of development, the Orfeo ToolBox has become a rich library used in many
remote sensing contexts, from research work to operational systems. The OTB Applications and,
more recently, the Monteverdi tool have helped to broaden the audience of the library, giving
non-developers access to its functionalities.
Meanwhile, the OTB Software Guide has grown to more than 700 pages of documented code examples,
which, combined with the Doxygen class documentation, allow developers to find their way
through the Orfeo ToolBox and write code suited to their needs.
Yet the documentation available for non-developer users, who use Monteverdi and the OTB Applica-
tions to perform everyday remote sensing tasks, has been almost nonexistent for all these years, and
these users had to learn the software by themselves or ask for help from more experienced users.
This cookbook aims at fulfilling the need for appropriate documentation of the applications built
upon the Orfeo ToolBox: Monteverdi and the OTB Applications, which are now integrated into the
main Orfeo ToolBox package and provide several access modes (command line, Qt interface, QGIS
plugins, other languages...).
A general introduction to these tools is first presented, along with installation instructions. Rather
than describing all modules and applications exhaustively, we then focus on very common remote
sensing tasks, detailing how they can be achieved with either Monteverdi or an application.
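As a brief illustration of the command-line access mode, an OTB application such as Extract ROI can be invoked directly from a shell through its `otbcli_` wrapper. The file names below are placeholders, not files shipped with OTB:

```shell
# Extract a 1000 x 1000 pixel region of interest starting at the
# upper-left corner of a large image (paths are placeholders --
# substitute your own input and output files)
otbcli_ExtractROI -in input_image.tif \
                  -startx 0 -starty 0 \
                  -sizex 1000 -sizey 1000 \
                  -out extracted_roi.tif
```

The same processing can be run from the Qt interface (`otbgui_ExtractROI`) with identical parameters, which is the pattern followed by all applications documented in this cookbook.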
For more information on the Orfeo ToolBox, please feel free to visit the Orfeo ToolBox website.
CONTENTS
4 Recipes 37
4.1 Using Pleiades images in OTB Applications and Monteverdi . . . . . . . . . . . . . . . . . 38
4.1.1 Opening a Pleiades image in Monteverdi . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.2 Viewing a Pleiades image in Monteverdi . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.3 Handling mega-tiles in Monteverdi . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.4 Partial uncompressing of Pleiades images in Monteverdi . . . . . . . . . . . . . . . 42
4.1.5 Other processing of Pleiades images with Monteverdi . . . . . . . . . . . . . . . . 43
4.1.6 Processing of Pleiades images with OTB Applications . . . . . . . . . . . . . . . . 43
4.2 Optical pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2.1 Optical radiometric calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Optical calibration with OTB Applications . . . . . . . . . . . . . . . . . . . . . . . 44
Optical calibration with Monteverdi . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.2.2 Pan-sharpening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Pan-sharpening with OTB Applications . . . . . . . . . . . . . . . . . . . . . . . . . 47
Pan-sharpening with Monteverdi . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2.3 Digital Elevation Model management . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2.4 Ortho-rectification and map projections . . . . . . . . . . . . . . . . . . . . . . . . 49
Ortho-rectification with OTB Applications . . . . . . . . . . . . . . . . . . . . . . . 50
4.2.5 Residual registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Extract metadata from the reference image . . . . . . . . . . . . . . . . . . . . . . 53
Extract homologous points from images . . . . . . . . . . . . . . . . . . . . . . . . . 54
Geometry refinement using homologous points . . . . . . . . . . . . . . . . . . . . . 56
Orthorectify image using the affine geometry . . . . . . . . . . . . . . . . . . . . . 56
4.3 Image processing and information extraction . . . . . . . . . . . . . . . . . . . . . . . . . . 57
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.1.3 Image Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.1.4 Download or list SRTM tiles related to a set of images . . . . . . . . . . . . . . . . 99
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.1.5 Extract ROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.1.6 Multi Resolution Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.1.7 Quick Look . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.1.8 Read image information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.1.9 Rescale Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.1.10 Split Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.1.11 Image Tile Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.2 Vector Data Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.2.1 Concatenate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.2.2 Rasterization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.2.3 VectorData Extract ROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.2.4 Vector Data reprojection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.2.5 Vector data set field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.2.6 Vector Data Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.4.4 Ply 3D files generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.4.5 Generate a RPC sensor model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.4.6 Grid Based Image Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.4.7 Image Envelope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.4.8 Ortho-rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.4.9 Pansharpening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.4.10 Refine Sensor Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.4.11 Image resampling with a rigid transform . . . . . . . . . . . . . . . . . . . . . . . . 168
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.4.12 Superimpose sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.5 Image Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.6.3 Fuzzy Model estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.6.4 Edge Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.6.5 Grayscale Morphological Operation . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.6.6 Haralick Texture Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.6.7 Homologous Points Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.6.8 Line segment detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.6.9 Local Statistic Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.6.10 Multivariate alteration detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.6.11 Radiometric Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.6.12 SFS Texture Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.6.13 Vector Data validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.7 Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.7.1 Pixel-wise Block-Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.7.2 Disparity map to elevation map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
5.8.3 Compute Images second order statistics . . . . . . . . . . . . . . . . . . . . . . . . 244
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.8.4 Fusion of Classifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.8.5 Image Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.8.6 Unsupervised KMeans image classification . . . . . . . . . . . . . . . . . . . . . . 251
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.8.7 SOM Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5.8.8 Train a classifier from multiple images . . . . . . . . . . . . . . . . . . . . . . . . . 256
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.9 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.9.1 Connected Component Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . 265
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.9.2 Hoover compare segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.9.3 Exact Large-Scale Mean-Shift segmentation, step 2 . . . . . . . . . . . . . . . . . . 271
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
5.10.3 Hyperspectral data unmixing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5.10.4 Image to KMZ Export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
5.10.5 Open Street Map layers importations applications . . . . . . . . . . . . . . . . . . . 293
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
5.10.6 Obtain UTM Zone From Geo Point . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Detailed description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
LIST OF TABLES
5.71 Parameters table for Open Street Map layers importations applications. . . . . . . . . . . . . . 294
5.72 Parameters table for Obtain UTM Zone From Geo Point. . . . . . . . . . . . . . . . . . . . . 296
5.73 Parameters table for Pixel Value. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
5.74 Parameters table for Vertex Component Analysis. . . . . . . . . . . . . . . . . . . . . . . . . 299
CHAPTER
ONE

A brief tour of OTB-Applications

1.1 Introduction
OTB Applications is perhaps the oldest package of the Orfeo ToolBox suite after the OTB package itself. Since the Orfeo ToolBox is a library providing remote sensing functionalities, the only applications distributed at the beginning were the examples from the Software Guide and the tests. These applications are very useful for developers because their code is very short and demonstrates only one functionality at a time. In many cases, however, a real application would require:
The OTB Applications package was originally designed to provide applications performing simple remote sensing tasks, more complex than the examples from the Software Guide, and with a more user-friendly interface (either graphical or command-line), to demonstrate the use of the Orfeo ToolBox functions. The most popular applications are probably otbImageViewerManager, which lets you open a collection of images and navigate through them, and otbSupervisedClassificationApplication, which let you delineate training regions of interest on an image and classify it with an SVM classifier trained on these regions (this application is no longer maintained, since the same functionality is available through the corresponding Monteverdi module). During the first three years of Orfeo ToolBox development, many more applications were added to this package to perform various tasks. Most of them came with a graphical user interface, apart from some small command-line utilities.
The development and release of the Monteverdi software (see chapter 2) at the end of 2009 changed a lot of things for the OTB Applications package: most non-developer users had been looking for quite a long time for an application providing Orfeo ToolBox functionalities under a unified graphical interface. Many applications from the OTB Applications package were integrated into Monteverdi as modules, and the OTB Applications package lost much of its usefulness. No more applications were added to the package and it was barely maintained, as new graphical tools were embedded directly within Monteverdi.
Then, some people started to regain interest in the OTB Applications package. Monteverdi is a great tool for performing numerous remote sensing and image processing tasks in minutes, but it is not well suited to heavier (and longer) processing, scripting and batch processing. Therefore, in 2010 the OTB Applications package was revamped: old applications were moved to a legacy folder for backward compatibility, and the development team started to populate the package with compact command-line tools to perform various heavy processing tasks.
Later on, in 2011, the OTB Applications package was further revamped. Because of the increasing need to interface the OTB Applications with other software and to provide auto-generated interfaces, the Orfeo ToolBox development team decided to develop a new application framework. The main idea of this framework is the following: each application is written once and for all in a shared library (also known as a plugin). This plugin can be auto-loaded into appropriate tools without recompiling, and is able to fully describe its parameters, behaviour and documentation.
The tools that use the plugins can be extended, but Orfeo ToolBox ships with the following:
Additionally, QGis plugins built on top of the SWIG/Python interface are available with seamless
integration within QGis. You can find a short guide about it here.
To facilitate the use of these tools and applications, they are now shipped with the standard Orfeo ToolBox package. This means that the former OTB-Applications package has entered its maintenance cycle: no new features will be pushed there, and all development is done directly inside the Orfeo ToolBox package.
The OTB Applications now comprise more than 40 tools, which are listed in the applications reference documentation presented in chapter 5, page 91.
1.2 Installation
If you want to build from source, or if we don't provide packages for your system, some information is available in the OTB Software Guide, in the section Building from Source.
Since version 3.12, we provide OTB Applications packages through OSGeo4W for Windows XP/Seven users:
Follow the instructions in the installer and select the packages you want to add. The installer will proceed with the installation of the selected packages and all their dependencies. With the otb-bin packages, the applications are available directly in the OSGeo4W shell; for example, run
otbgui_BandMath.
For the otb-python packages, you can simply check the list of available applications from an OSGeo4W shell:
python
import otbApplication
print str( otbApplication.Registry.GetAvailableApplications() )
1.2.2 MacOS X
OTB Applications are now available on MacPorts. The port is called orfeotoolbox. You can follow the MacPorts documentation to install MacPorts first, then install the orfeotoolbox port. After the installation, you can use the OTB applications directly on your system.
For Ubuntu 12.04 and higher, OTB Applications packages may be available as Debian packages
through APT repositories:
Since release 3.14.1, OTB Applications packages are available in the ubuntugis-unstable repository.
You can add it by using these command-lines:
If you are using Synaptic, you can add the repositories, update and install the packages through the
graphical interface.
For further information about Ubuntu packages, go to the ubuntugis-unstable launchpad page and click on "Read about installing".
apt-add-repository will try to retrieve the GPG keys of the repositories to certify the origin of the packages. If you are behind an HTTP proxy, this step won't work: apt-add-repository will stall and eventually quit. You can temporarily ignore this error and proceed with the update step. Following this, aptitude update will issue a warning about a signature problem. This warning won't prevent you from installing the packages.
For OpenSuse 12.X and higher, OTB Applications packages are available through zypper.
First, you need to add the appropriate repositories with these command-lines (please replace 11.4 with your OpenSuse version):
sudo zypper ar http://download.opensuse.org/repositories/games/openSUSE_11.4/ Games
sudo zypper ar http://download.opensuse.org/repositories/Application:/Geo/openSUSE_11.4/ GEO
sudo zypper ar http://download.opensuse.org/repositories/home:/tzotsos/openSUSE_11.4/ tzotsos
Now run:
Alternatively you can use the One-Click Installer from the openSUSE Download page or add the
above repositories and install through Yast Package Management.
There is also support for the recently introduced 'rolling' openSUSE distribution named 'Tumbleweed'. For Tumbleweed you need to add the following repositories with these command-lines:
sudo zypper ar http://download.opensuse.org/repositories/games/openSUSE_Tumbleweed/ Games
sudo zypper ar http://download.opensuse.org/repositories/Application:/Geo/openSUSE_Tumbleweed/ GEO
sudo zypper ar http://download.opensuse.org/repositories/home:/tzotsos/openSUSE_Tumbleweed/ tzotsos
1.3 Using the applications

Using the new OTB Applications framework is slightly more complex than launching a command-line tool. This section describes all the ways to launch the new applications. Apart from the simplified access, which is similar to the former access to OTB Applications, you will need to know the application name and optionally the path where the application plugins are stored. For applications shipped with Orfeo ToolBox, the name of each application can be found in chapter 5, page 91.
All standard applications delivered with Orfeo ToolBox come with simplified scripts in the system path, allowing you to launch the command-line and graphical user interface versions of an application in the same simple way we used to launch the old applications. The command-line interface is prefixed by otbcli_, while the Qt interface is prefixed by otbgui_. For instance, calling otbcli_Convert will launch the command-line interface of the Convert application, while otbgui_Convert will launch its GUI.
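As a quick illustration, the wrapper scripts are called like any other command. The sketch below is hedged: it assumes an actual OTB installation, the file names are placeholders of ours, and the guard keeps the script harmless when OTB is absent.

```shell
# Launch the command-line interface of the Convert application through its
# otbcli_ wrapper script (file names are placeholders, not part of OTB).
if command -v otbcli_Convert >/dev/null 2>&1; then
    otbcli_Convert -in input.tif -out output.png
else
    echo "OTB wrapper scripts not found; command shown for illustration only"
fi
```

The same application is reachable graphically by running otbgui_Convert instead.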
Passing arguments to the command-line version (prefixed by otbcli_) is explained in the next subsection.
The command-line application launcher allows you to load an application plugin, set its parameters, and execute it using the command line. Launching otbApplicationLauncherCommandLine without arguments displays the following help:
$ otbApplicationLauncherCommandLine
Usage : ./otbApplicationLauncherCommandLine module_name [MODULEPATH] [arguments]
The module_name parameter corresponds to the application name. The [MODULEPATH] argument is optional and allows you to pass the launcher a path where the shared library (or plugin) corresponding to module_name is located.
It is also possible to set this path with the ITK_AUTOLOAD_PATH environment variable, making the [MODULEPATH] argument optional. This variable is checked by default when no [MODULEPATH] argument is given. When using multiple paths in ITK_AUTOLOAD_PATH, make sure to use the standard path separator of the target system, which is : on Unix and ; on Windows.
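A minimal Python sketch of building such a multi-path value portably (the directories below are only examples, not a required layout):

```python
import os

# Build ITK_AUTOLOAD_PATH from several plugin directories using the platform's
# path separator: os.pathsep is ":" on Unix and ";" on Windows, matching the
# separators described above. The directories are hypothetical examples.
plugin_dirs = ["/usr/lib/otb/applications", "/opt/otb/plugins"]
os.environ["ITK_AUTOLOAD_PATH"] = os.pathsep.join(plugin_dirs)
print(os.environ["ITK_AUTOLOAD_PATH"])
```

Setting the variable this way, from a wrapper script or the shell profile, avoids hard-coding the wrong separator for the target system.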
An error in the application name (i.e. in the module_name parameter) will make otbApplicationLauncherCommandLine list the names of all applications found in the available paths (either [MODULEPATH] and/or ITK_AUTOLOAD_PATH).
To ease the use of the applications, and to avoid extensive environment customization, ready-to-use scripts are provided by the OTB installation to launch each application; they take care of adding the standard application installation path to the ITK_AUTOLOAD_PATH environment variable. These scripts are named otbcli_<ApplicationName> and do not need any path settings. For example, you can start the Orthorectification application with the script called otbcli_Orthorectification.
Launching an application with no or incomplete parameters will make the launcher display a summary of the parameters, indicating which mandatory parameters are still missing for the application to execute. Here is an example with the OrthoRectification application:
$ otbcli_OrthoRectification
EXAMPLE OF USE:
otbcli_OrthoRectification -io.in QB_TOULOUSE_MUL_Extract_500_500.tif -io.out QB_Toulouse_ortho.tif
DOCUMENTATION: http://www.orfeo-toolbox.org/Applications/OrthoRectification.html
======================= PARAMETERS =======================
-progress <boolean> Report progress
MISSING -io.in <string> Input Image
MISSING -io.out <string> [pixel] Output Image [pixel=uint8/int8/uint16/int16/uint32/int32/float/double]
-map <string> Output Map Projection [utm/lambert2/lambert93/transmercator/wgs/epsg]
MISSING -map.utm.zone <int32> Zone number
-map.utm.northhem <boolean> Northern Hemisphere
-map.transmercator.falseeasting <float> False easting
-map.transmercator.falsenorthing <float> False northing
-map.transmercator.scale <float> Scale factor
-map.epsg.code <int32> EPSG Code
-outputs.mode <string> Parameters estimation modes [auto/autosize/autospacing]
MISSING -outputs.ulx <float> Upper Left X
MISSING -outputs.uly <float> Upper Left Y
MISSING -outputs.sizex <int32> Size X
For a detailed description of the application behaviour and parameters, please check the application reference documentation presented in chapter 5, page 91, or follow the DOCUMENTATION hyperlink provided in the otbApplicationLauncherCommandLine output. Parameters are passed to the application using the parameter key (which might include one or several '.' characters), prefixed by a '-'. Command-line examples are provided in chapter 5, page 91.
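The dotted parameter keys shown in the help output above are passed directly on the command line. A hedged sketch follows: it assumes an actual OTB install, the file names are placeholders, and the guard keeps the script harmless when OTB is absent.

```shell
# Pass nested parameter keys such as "io.in" or "map.utm.zone" with a leading
# "-", exactly as listed in the OrthoRectification help output above.
if command -v otbcli_OrthoRectification >/dev/null 2>&1; then
    otbcli_OrthoRectification -io.in QB_TOULOUSE_MUL_Extract_500_500.tif \
                              -io.out QB_Toulouse_ortho.tif \
                              -map utm -map.utm.zone 31 -map.utm.northhem true
else
    echo "OTB command-line tools not found; command shown for illustration only"
fi
```

The UTM zone value used here is an arbitrary example; choose the zone matching your scene.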
The graphical interface for the applications provides a useful interactive user interface to set the parameters, choose files, and monitor the execution progress.
This interface can be activated through the CMake option OTB_WRAP_QT.
This launcher needs the same two arguments as the command-line launcher:
The application paths can be set with the ITK_AUTOLOAD_PATH environment variable, as for the command-line launcher. Also, as for the command-line applications, a simpler script is generated and installed by OTB to ease the configuration of the module path: to launch the Rescale graphical user interface, one will start the otbgui_Rescale script.
The resulting graphical application displays a window with several tabs:
• Parameters is where you set the parameters and execute the application.
• Logs is where you see the information given by the application during its execution.
• Progress is where you see a progress bar of the execution (not available for all applications).
• Documentation is where you find a summary of the application documentation.
In this interface, every optional parameter has a check box that you have to tick if you want to set a
value and use this parameter. The mandatory parameters cannot be unchecked.
The interface of the application Rescale is shown here as an example.
The applications can also be accessed from Python, through a module named otbApplication.
On Unix systems it is typically available in the /usr/lib/otb/python directory. You may need to set the PYTHONPATH environment variable to include this directory so that the module becomes available from a Python shell.
On Windows, you can install the otb-python package, and the module will be available from an
OSGeo4W shell automatically.
In this module, two main classes can be manipulated:
• Registry, which provides access to the list of available applications and can create applications
• Application, the base class for all applications, which allows you to interact with an application instance created by the Registry
As for the command line and GUI launchers, the path to the application modules needs to be properly set with the ITK_AUTOLOAD_PATH environment variable. The standard location on Unix systems is /usr/lib/otb/applications. On Windows, the applications are available in the otb-bin OSGeo4W package, and the environment is configured automatically, so you don't need to tweak ITK_AUTOLOAD_PATH.
Here is one example of how to use Python to run the Smoothing application, changing the algorithm
at each iteration.
# Example on the use of the Smoothing application
#
# The smoothing algorithm can be set with the "type" parameter key
# and can take 3 values: 'mean', 'gaussian', 'anidif'
import sys
import otbApplication

for type in ['mean', 'gaussian', 'anidif']:
    # Create the Smoothing application and set its input image
    app = otbApplication.Registry.CreateApplication("Smoothing")
    app.SetParameterString("in", sys.argv[1])
    # Select the smoothing algorithm for this iteration
    app.SetParameterString("type", type)
    # Set the output filename, using the algorithm to differentiate the outputs
    app.SetParameterString("out", sys.argv[2] + type + ".tif")
    # This will execute the application and save the output file
    app.ExecuteAndWriteOutput()
Since OTB 3.20, OTB application parameters can be exported to and imported from an XML file using the inxml/outxml parameters. These parameters are available in all applications.
An example is worth a thousand words
Then, you can run the applications with the same parameters using the output xml file previously
saved. For this, you have to use the inxml parameter:
Note that you can also overload parameters from the command line at the same time. In this case it will use the mathematical expression "(im1b1 - im2b1)" instead of "abs(im1b1 - im2b1)".
Finally, you can also launch applications directly from the command-line launcher executable using the inxml parameter, without having to declare the application name. Use in this case:
It will retrieve the application name and related parameters from the input XML file and launch, in this case, the BandMath application.
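Putting the pieces of this section together, here is a hedged sketch of the save/replay workflow with BandMath (it assumes an actual OTB install; the image and XML file names are placeholders, and the guard keeps the script harmless when OTB is absent):

```shell
# Save the parameters of a BandMath run to XML, then replay and overload them.
if command -v otbcli_BandMath >/dev/null 2>&1; then
    # Run the application and save its parameters with -outxml
    otbcli_BandMath -il im1.tif im2.tif -out diff.tif \
                    -exp "abs(im1b1 - im2b1)" -outxml bandmath.xml
    # Replay with exactly the same parameters
    otbcli_BandMath -inxml bandmath.xml
    # Overload the expression from the command line at the same time
    otbcli_BandMath -inxml bandmath.xml -exp "(im1b1 - im2b1)"
    # Let the generic launcher read the application name from the XML file
    otbApplicationLauncherCommandLine -inxml bandmath.xml
else
    echo "OTB command-line tools not found; commands shown for illustration only"
fi
```

The third call reproduces the overload behaviour described above: the saved "abs(im1b1 - im2b1)" expression is replaced by "(im1b1 - im2b1)".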
CHAPTER
TWO

A brief tour of Monteverdi

2.1 Introduction
The OTB Applications package makes available a set of simple software tools, which were designed
to demonstrate what can be done with Orfeo ToolBox. Many users started using these applications
for real processing tasks, so we tried to make them more generic, more robust and easy to use.
Orfeo ToolBox users have been asking for an integrated application for a while, since using several
applications for a complete processing (ortho-rectification, segmentation, classification, etc.) can be
a burden. Recently, the OTB team received a request from CNES’ Strategy and Programs Office in
order to provide an integrated application for capacity building activities (teaching, simple image
manipulation, etc.). The specifications included ease of integration of new processing modules.
2.2 Installation
The application is called Monteverdi, after the composer of Orfeo. The application allows you to build remote sensing processes interactively, based on the Orfeo ToolBox. It is also a nod to the great (and once open source) Khoros/Cantata software.
Installation of Monteverdi is very simple. Standard installer packages are available on the main platforms thanks to OTB developers and external users. These packages are available a few days after each release. Get the latest information on binary packages on the Orfeo ToolBox website in the download section.
We will describe in the following sections how to install Monteverdi on:
• MacOSX 10.8
If you want to build from source, or if we don't provide packages for your system, some information is available in the OTB Software Guide, in the section Building from Source.
For Windows XP/Seven/8.1 users, there is a classical standalone installation program for Monteverdi, available from the OTB download page after each release.
Since version 1.12, it is also possible to get the Monteverdi package through OSGeo4W for Windows XP/Seven users. The Monteverdi package is available directly in the OSGeo4W installer when you select the otb-monteverdi package. Follow the instructions in the OSGeo4W installer and select otb-monteverdi. The installer will proceed with the installation of the package and all its dependencies. Monteverdi will be installed directly in the OSGeo4W repository, and a shortcut will be added to your desktop and to the start menu (in the OSGeo4W folder). You can then use Monteverdi directly from your desktop, from the start menu, or from an OSGeo4W shell with the command monteverdi. Currently, you should use the 32-bit OSGeo4W installer, but we will soon distribute a Monteverdi package for the 64-bit installer.
2.2.2 MacOS X
A standard DMG package of Monteverdi is available for MacOS X 10.8. Please go to the OTB download page. Click on the file to launch Monteverdi. This DMG file is also compatible with MacOS X 10.9.
For Ubuntu 12.04 and higher, the Monteverdi package may be available as a Debian package through APT repositories.
Since release 1.14, Monteverdi packages are available in the ubuntugis-unstable repository.
You can add it by using these command-lines:
Now run:
If you are using Synaptic, you can add the repository, update and install the package through the
graphical interface.
apt-add-repository will try to retrieve the GPG keys of the repositories to certify the origin of the packages. If you are behind an HTTP proxy, this step won't work: apt-add-repository will stall and eventually quit. You can temporarily ignore this error and proceed with the update step. Following this, aptitude update will issue a warning about a signature problem. This warning won't prevent you from installing the packages.
For OpenSuse 12.X and higher, Monteverdi packages are available through zypper.
First, you need to add the appropriate repositories with these command-lines (please replace 11.4 with your OpenSuse version):
sudo zypper ar http://download.opensuse.org/repositories/games/openSUSE_11.4/ Games
sudo zypper ar http://download.opensuse.org/repositories/Application:/Geo/openSUSE_11.4/ GEO
sudo zypper ar http://download.opensuse.org/repositories/home:/tzotsos/openSUSE_11.4/ tzotsos
Now run:
Alternatively you can use the One-Click Installer from the openSUSE Download page or add the
above repositories and install through Yast Package Management.
There is also support for the recently introduced 'rolling' openSUSE distribution named 'Tumbleweed'. For Tumbleweed you need to add the following repositories with these command-lines:
sudo zypper ar http://download.opensuse.org/repositories/games/openSUSE_Tumbleweed/ Games
sudo zypper ar http://download.opensuse.org/repositories/Application:/Geo/openSUSE_Tumbleweed/ GEO
sudo zypper ar http://download.opensuse.org/repositories/home:/tzotsos/openSUSE_Tumbleweed/ tzotsos
2.3 Anatomy of the applications

This is Monteverdi's main window (figure 2.1), where the menus are available and where you can see the different modules that have been set up for the processing. Input data are obtained by readers. When you choose to use a new module, you select its input data, and thus build a processing pipeline sequentially. Figure 2.2 shows the generic window which allows you to specify the output(s) of Monteverdi's modules.
Let's have a look at the different menus. The first one is of course the "File" menu, which allows you to open a data set, to save it and to cache it. The "data set" concept is interesting: you don't need to specify by hand whether you are looking for an image or a vector file, and you don't need to do anything special for any particular file format. Opening a data set will create a "reader" which will appear in the main window. At any time, you can use the "save data set" option
The application allows you to interactively select raster/vector datasets by browsing your computer. Monteverdi takes advantage of the automatic detection of image extensions to indicate the dataset type (optical, SAR or vector data).
The input dataset is added to the "Data and Process" tree, which describes the dataset content; each node corresponds to a layer.
This module allows you to visualize raster or vector data. It can create RGB compositions from the input rasters. It is also possible to add vector datasets, which are automatically reprojected into the same projection as the input image, or Digital Elevation information.
The viewer offers three types of data visualisation:
• The Full resolution window: the view of the region of interest selected in the scroll window
• The Zoom window
• The Pixel description: gives access to dynamic information on the pixel currently pointed at. The information displayed includes:
– The current index
– The pixel value
– The computed value (the dynamics of the input image are modified to get a proper visualization)
– The coordinates of the current pixel (longitude and latitude)
– When an Internet connection is available, Monteverdi displays the estimated location of the current pixel (country + city)
The visualization offers other great functionalities, which are available in the detached window. It is for example possible to superimpose a vector dataset on the input image (see figure 2.4).
The "Setup" tab allows you to modify the RGB composition or to use the grayscale mode to display only one layer.
The "Histogram" tab gives access to the dynamics of the displayed layers. The basic idea is to convert the output of the pixel representation to an RGB pixel for rendering on conventional displays. Values are constrained to 0-255 with a transfer function and a clamping operation. By default, the dynamics of each layer are modified by clamping the histogram at min + 2% and max − 2%.
It is also possible to select pixel coordinates and get access to all the information available in the "Pixel description" box.
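To make the default 2% clamping concrete, here is a rough, self-contained sketch of such a stretch (pure Python on a list of values; this is an illustration of the idea, not Monteverdi's actual implementation):

```python
def stretch_2pct(values):
    """Clamp at the 2% and 98% quantiles, then rescale linearly to 0-255."""
    s = sorted(values)
    lo = s[int(0.02 * (len(s) - 1))]   # value at the 2% cut
    hi = s[int(0.98 * (len(s) - 1))]   # value at the 98% cut
    span = (hi - lo) or 1              # avoid division by zero on flat images
    return [min(255, max(0, round(255 * (v - lo) / span))) for v in values]

# A few outliers (here 5000) no longer dominate the display range.
print(stretch_2pct([0, 10, 100, 1000, 5000]))
```

Clamping the extremes this way is what keeps a handful of very bright or very dark pixels from flattening the rest of the image's dynamics on screen.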
The "cache data set" option (see figure 2.8) is a very interesting functionality. As you know, the Orfeo ToolBox implements processing on demand, so when you build a processing pipeline, no processing takes place unless you ask for it explicitly. That means that you can plug together the opening of a data set, an orthorectification and a speckle filter, for example, but nothing will really be computed until you trigger the pipeline execution. This is very convenient, since you can quickly build a processing pipeline and let it execute afterwards while you have a coffee. In Monteverdi, the process is executed by saving the result of the last module of a pipeline. However, sometimes you may want to execute a part of the pipeline without having to set a file name for the obtained result. You can do this by caching a data set: the result will be stored in a temporary file created in the "Caching" directory of the application. Another situation in which you may need to cache a data set is when you need the input of a module to exist when you set its parameters. This is not a strict requirement, since Monteverdi will generate the needed data by streaming it, but this can be inefficient. Think, for instance, about visualization of the result of a complex processing: using streaming to browse through the result image means processing the visible part every time you move inside the image. Caching the data before visualization will generate the whole data set in advance, allowing for a swifter display. All modules allow you to cache their input data sets.
2.4 Available modules

The aim of Monteverdi is to provide a generic interface based on the definition of the internal processes. In this frame, modules are managed in an identical way during the definition of a new process. Selecting a module in the upper main window automatically opens the "Inputs definition" window, which allows you to select the data that are inputs of the current module. A Monteverdi module can manage single or multiple inputs, and these inputs can be images on your computer or results of previous modules already registered in the "Data and Process" tree.
Management of image formats in Monteverdi works in the same manner as in the Orfeo ToolBox: the software automatically recognizes the image format. Communication between modules follows the same principle, and the input definition of a module requests all available outputs of the same type in the "Data and Process" tree. Internally, all the processing in Monteverdi is computed in float precision by default. It is also possible to switch to double precision by compiling the application from source and changing the corresponding CMake option (compile float).
This module allows extracting regions of interest (ROI) from an image. There are two ways to select the region:
• By indicating the X and Y coordinates of the upper-left corner and the X-Y size of the region.
• By interactively selecting the region of interest in the input image.
With Monteverdi, you can generate a wide range of value-added information from many input data
sets. One of the basic functionalities is the ability to superpose result layers into the same dataset:
concatenating images into one single multiple-band image (they need to have the same size), for
example to create an RGB composition from the input layers.
Monteverdi allows exporting a raster or vector dataset to a file on your system. In the case of raster
images, it is possible to cast the output pixel type. In Monteverdi all processes are done in floating
point precision; on large remote sensing datasets, saving your result in float data type can lead
to files that are too large (more than 25 GB for a pan-sharpened 8-band WorldView-2 image with a
resolution of 46 centimeters). For this reason, the module allows casting pixels to other types.
In remote sensing processing, one common operation is to superpose and manipulate data coming
from different sources. This section gives access to a large set of geometric operations.
Reprojection module
This module is derived from the otbOrthorectificationApplication in the OTB Applications package
and allows producing orthorectified imagery from level 1 products. The application is able to parse
metadata and set default parameters. The application contains 4 tabs:
• Coordinates: Define the center or upper-left pixel coordinates of the orthorectified image (the
longitude and latitude coordinates are calculated from the metadata). It is also possible to
specify the map projection of the output.
• Output image: The module allows orthorectifying only a region of interest inside the input
dataset. This tab allows setting the size of the ROI around the center pixel coordinate or from
the upper-left index. The orthorectified imagery can also be resampled at any resolution in the
line or column directions by setting ”Spacing X” and ”Spacing Y” respectively, and choosing
the interpolation method.
• DEM: Indicate the path to a directory containing SRTM elevation files. The application is
able to detect which DEM files inside the directory are relevant for the process. Detailed
information on how to get a usable DEM is available in the OTB documentation.
• Image extent: Compare the initial image extent with a preview of the orthorectified result.
This preview is automatically updated if the user changes the ”Size X” or ”Size Y” values in
the ”Output Image” tab.
This module allows taking ground control points (GCPs) on a raster image where no geographic
information is available. The list of GCPs establishes a correspondence between pixel coordinates in
the input image and physical coordinates. From this list, a general function converting any pixel
coordinates into physical positions is derived. This function is based on an RPC (Rational Polynomial
Coefficients) transformation. As a consequence, the module enriches the output image with metadata
defining an RPC sensor model associated with the input raster. There are several ways to generate
the GCPs:
• With Internet access: dynamically generate the correspondence between the input image and
Open Street Map layers.
• Without Internet access: set Ground Control Points manually, by indicating the index position
in the input image and the cartographic coordinates.
It is also possible to import/export the list of Ground Control Points from/to an XML file.
Moreover, if the input image has GCPs in its metadata, the module allows adding or removing points
from the existing list, which is automatically loaded.
2.4.3 Calibration
In the solar spectrum, sensors on Earth remote sensing satellites measure the radiance reflected by
the atmosphere-Earth surface system illuminated by the sun. This signal depends on the surface
reflectance, but it is also perturbed by two atmospheric processes, the gaseous absorption and the
scattering by molecules and aerosols.
Optical calibration
In the case of optical calibration, the basic idea is to retrieve the reflectance of the observed
physical objects. The process can be split into 3 main steps:
SAR calibration
The calibration and validation of the measurement systems are important to maintain the reliability
and reproducibility of the SAR measurements, but establishing the correspondence between
quantities measured by SAR and physical measures requires scientific background. The SAR
calibration module allows estimating quantitative accuracy. For now, only calibration of TerraSAR-X
data is available.
Band Math
The Band Math module allows performing complex mathematical operations over images. It is
based on the mathematical parser library muParser and comes with a set of built-in functions
and operators (listed here). This home-brewed digital calculator also comes with custom functions
allowing full expressions to be computed simply and quickly, since the filter supports streaming
and multi-threading. The Monteverdi module provides an intuitive way to easily perform complex
band computations. The module also prevents errors in the mathematical command by checking the
expression as the user types it, and reporting information on the detected error:
Figure 2.12 presents an example of how Band Math can produce a thresholded image of the NDVI
value, computed in one pass using the built-in conditional operator “if” available in the parser.
Another operational example shows how this simple module can produce reliable information.
Figure 2.13 shows the result of subtracting the water index of two images taken before and during
a crisis event. The difference, produced by the Band Math module, provides a reliable estimation
of the flooded areas.
Figure 2.12: Conditional operators using the band math module (on the left) to process a NDVI image threshold
and the resulting image (on the right).
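As an illustration of what such an expression computes, here is a minimal numpy sketch of the NDVI thresholding shown in Figure 2.12 (the band roles and the 0.4 threshold are assumptions chosen for the example, not values taken from the figure):

```python
import numpy as np

def ndvi_threshold(red, nir, threshold=0.4):
    """Equivalent of a Band Math expression of the form
    if(((nir - red) / (nir + red)) > 0.4, 255, 0)."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-12)  # avoid division by zero
    return np.where(ndvi > threshold, 255, 0).astype(np.uint8)

red = np.array([[50, 200], [60, 180]], dtype=np.uint16)
nir = np.array([[200, 210], [220, 190]], dtype=np.uint16)
mask = ndvi_threshold(red, nir)
```

In the module itself, the same result is obtained in one pass with a single muParser expression using the conditional operator.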
The Connected Component Segmentation module allows segmentation and object analysis using
user-defined criteria at each step. This module uses the muParser library with the same scheme as
the Band Math module (see 2.4.4 for a detailed explanation). It relies on a three-step process:
Mask definition: This mask is used as the support of the Connected Component segmentation (CC),
i.e. zero pixels are not taken into account by the CC algorithm. The binarization criterion is
defined by the user via muParser. This step is optional: if no mask is given, the entire image
is processed. A mask can for instance be created using the intensity (mean of pixel values)
parameter.
Connected Component segmentation: The criterion used to aggregate neighboring pixels into
connected components is also a muParser expression. The following example groups pixels
whose spectral distance is below a threshold:
distance < 10
Object analysis post processing: This step consists of post-processing each detected area using
shape and statistical object characterization. The following example uses the elongation
parameter to test labeled objects:
SHAPE_elongation > 2
Segmentation after small object rejection: output of the Connected Component segmentation after
relabeling and small object rejection.
Filter Output: final output after the object-based analysis post-processing.
The available variables for each expression can be found in the variable names item list; available
functions are listed in the help window opened by the Help button. The module also prevents errors
in the mathematical command by checking the expression as the user types it: the background is set
to green if the formula is valid, red otherwise. If the mask expression is left blank, the entire image
is processed. If the Object Analysis expression is left blank, the whole set of labeled objects is
considered. After the segmentation step, objects that are too small can be rejected using the Object
min area input; eliminating them at this step lightens further computation. min area is the size in
pixels of the labeled object.
Once a first pass has been done, specific label object properties can be displayed: select the
”Filter Output” visualization mode, update the visualization, then right-click on a selected object
in the image to display its properties.
Clicking on the Save and Quit button exports the output to Monteverdi in vector data format.
A detailed presentation of this module, with examples, can be found on the wiki.
A boat detection example is presented in Figure 2.15; results can be seen in Figure 2.16.
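The three-step scheme above (mask definition, connected component extraction, small object rejection) can be sketched in plain Python; the intensity threshold and minimum area below are arbitrary illustration values, and the real module evaluates muParser expressions rather than Python lambdas:

```python
import numpy as np

def label_4conn(mask):
    """4-connectivity connected component labeling by flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def segment(image, mask_expr=lambda px: px > 100, min_area=2):
    mask = mask_expr(image)            # step 1: mask definition
    labels, n = label_4conn(mask)      # step 2: CC segmentation
    for lbl in range(1, n + 1):        # step 3: small object rejection
        if np.sum(labels == lbl) < min_area:
            labels[labels == lbl] = 0
    return labels

img = np.array([[200, 200, 0, 0],
                [200,   0, 0, 150],
                [  0,   0, 0, 0]])
out = segment(img)  # the isolated pixel (area 1) is rejected
```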
Feature extraction
The term Feature Extraction covers several techniques aiming to detect or extract information of a
low level of abstraction from images. These features can be objects: points, lines, etc. They can
also be measures: moments, textures, etc.
Mean-shift segmentation
For a given pixel, the Mean-shift algorithm will build a set of neighboring pixels within a given
spatial radius and a color range. The spatial and color center of this set is then computed and the
algorithm iterates with this new spatial and color center. The Mean-shift can be used for edge-
preserving smoothing, or for clustering.
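A minimal sketch of the iteration described above, for a single pixel (a brute-force neighborhood search, for illustration only; the actual implementation is far more efficient):

```python
import numpy as np

def mean_shift_pixel(image, y, x, spatial_r=1, range_r=30, max_iter=10):
    """One Mean-shift trajectory for pixel (y, x): repeatedly average the
    neighbors lying within the spatial radius AND the color range, then
    move the center to that average until it stops moving."""
    h, w = image.shape
    cy, cx, cv = float(y), float(x), float(image[y, x])
    for _ in range(max_iter):
        ys, xs, vs = [], [], []
        for ny in range(h):
            for nx in range(w):
                if (abs(ny - cy) <= spatial_r and abs(nx - cx) <= spatial_r
                        and abs(float(image[ny, nx]) - cv) <= range_r):
                    ys.append(ny); xs.append(nx); vs.append(float(image[ny, nx]))
        ny_, nx_, nv = float(np.mean(ys)), float(np.mean(xs)), float(np.mean(vs))
        moved = abs(ny_ - cy) + abs(nx_ - cx) + abs(nv - cv)
        cy, cx, cv = ny_, nx_, nv
        if moved < 1e-9:
            break  # converged
    return cv

img = np.array([[10, 12, 200],
                [11, 13, 210],
                [ 9, 11, 205]])
smoothed = mean_shift_pixel(img, 0, 0)
```

Note how the bright right-hand column never enters the average for the top-left pixel: it falls outside the color range, which is what makes the smoothing edge-preserving.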
2.4.5 Learning
Supervised classification
Supervised classification is a procedure in which individual items are placed into groups based on
quantitative information on one or more characteristics inherent in the items and based on a training
set of previously labeled items.
The supervised classification module is based on the Support Vector Machine (SVM) method, which
consists in searching for the separating surface between 2 classes by determining the subset of
training samples which best describes the boundary between them. This method can be extended to
classify more than 2 classes.
The module allows interactively describing the learning samples, which correspond to polygon
samples on the input images.
An SVM model is then derived from these learning samples, allowing each pixel of the input image
to be classified into one of the defined classes.
Non-supervised classification
The non-supervised classification module is based on the K-means algorithm. The GUI allows
modifying the parameters of the algorithm and producing a label image.
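As a sketch of what the module computes, here is a plain K-means on single-band pixel values (the real application works on multi-band pixels and streams the image; the sample values below are made up):

```python
import numpy as np

def kmeans(pixels, k=2, iters=20, seed=0):
    """Plain K-means on a 1-D array of pixel values: alternately assign
    each pixel to its nearest center, then move each center to the mean
    of its assigned pixels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels.astype(float), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, np.sort(centers)

pix = np.array([10, 12, 11, 200, 210, 205], dtype=float)
labels, centers = kmeans(pix)  # two clusters around 11 and 205
```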
This section gives access to specific treatments related to SAR (Synthetic Aperture Radar)
functionalities.
Despeckle
SAR images are generally corrupted by speckle noise. To suppress speckle and improve the
interpretability of the radar image, many filtering techniques have been proposed. The module
implements two well-known despeckle methods: Frost and Lee.
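As a rough idea of how the Lee filter works, here is a simplified sketch: each pixel is pulled toward its local mean, with a weight driven by the ratio between the assumed speckle variation coefficient and the observed local variation (the window size and speckle coefficient below are illustration values, not the module's defaults):

```python
import numpy as np

def lee_filter(img, size=3, sigma_v=0.25):
    """Simplified Lee despeckle: out = mean + k * (pixel - mean), where the
    adaptive weight k is small in flat (speckle-only) areas and close to 1
    on real structure. sigma_v is the assumed speckle variation coefficient."""
    img = img.astype(float)
    h, w = img.shape
    r = size // 2
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            mean, var = win.mean(), win.var()
            cu2 = sigma_v ** 2                     # speckle variation (squared)
            ci2 = var / (mean ** 2 + 1e-12)        # observed variation (squared)
            k = max(0.0, 1.0 - cu2 / (ci2 + 1e-12))
            out[i, j] = mean + k * (img[i, j] - mean)
    return out

noisy = np.array([[100., 100., 100.],
                  [100., 140., 100.],
                  [100., 100., 100.]])
filtered = lee_filter(noisy)  # the isolated spike is pulled toward 100
```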
A dedicated module computes the derived intensity and log-intensity from the input SAR imagery.
Polarimetry
In conventional imaging radar, the measurement is a scalar proportional to the received
backscattered power at a particular combination of linear polarizations (HH, HV, VH or VV).
Polarimetry is the measurement and interpretation of the polarization of this signal, which allows
measuring various properties of a material. In polarimetry the basic measurement is a 2x2 complex
scattering matrix (the Sinclair matrix), yielding an eight-dimensional measurement space. For
reciprocal targets where HV = VH, this space is compressed to five dimensions: three amplitudes
(|HH|, |HV|, and |VV|) and two phase measurements (co-pol: HH-VV, and cross-pol: HH-HV)
(see grss-ieee).
Synthesis This module allows constructing the image that would be received from a polarimetric
radar with selected transmit and receive polarizations. The Synthesis module expects the real and
imaginary parts (as real images) of the HH, VV, VH and HV images. The reciprocal case, where
VH = HV, is not properly handled yet; for now the user has to set the same input for both HV and VH.
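A sketch of the synthesis operation on a single Sinclair matrix (the conjugation convention for the receive vector varies in the literature; this is one common form, and the matrix values are made up for the example):

```python
import numpy as np

def synthesize(S, transmit, receive):
    """Polarimetric synthesis sketch: power received for given transmit and
    receive polarization (Jones) vectors, from the 2x2 complex Sinclair
    scattering matrix S = [[S_hh, S_hv], [S_vh, S_vv]]."""
    return abs(receive.conj() @ S @ transmit) ** 2

# Reciprocal target (S_hv = S_vh), as assumed by the Monteverdi module
S = np.array([[1.0 + 0.0j, 0.2 + 0.1j],
              [0.2 + 0.1j, 0.5 - 0.3j]])
h = np.array([1.0, 0.0])  # horizontal linear polarization
v = np.array([0.0, 1.0])  # vertical linear polarization
hh_power = synthesize(S, h, h)  # |S_hh|^2
hv_power = synthesize(S, h, v)  # |S_vh|^2
```

Sweeping the transmit and receive vectors over arbitrary (elliptical) polarizations is what lets the module synthesize images for polarization combinations that were never directly acquired.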
Conversion As seen in the previous section, the basic measurement is a 2x2 complex scattering
matrix yielding an eight-dimensional measurement space. But other representations exist:
Modules in the Conversion subsection allow performing these conversions between matrix
representations. The allowed conversions and input image types are described in figure 2.18.
Analysis This module allows performing some classical polarimetric analysis methods. It allows
computing:
CHAPTER
THREE
A brief tour of Monteverdi2
3.1 Introduction
Monteverdi was developed 4 years ago in order to provide an integrated application for capacity
building activities (teaching, simple image manipulation, etc.). Its success went far beyond this
initial scope, since it opened the OTB world to a wide range of users who needed a ready-to-use
graphical tool more than a library of components to write their own processing chains. Over these
4 years of lifetime, we received a lot of feedback regarding how useful the tool was, but also regarding
what should be improved to move toward greater usability and operationality. We therefore decided
to rework the Monteverdi concept into a brand new software, enlightened by this experience.
Monteverdi2 offers a new interface based on Qt and a set of processing capabilities based on
OTB-Applications.
3.2 Installation
Installation of Monteverdi2 is very simple. Standard installer packages are available on the main
platforms thanks to OTB developers and external users. These packages are available a few days
after each release. Get the latest information on binary packages on the Orfeo ToolBox website in
the download section.
We will describe in the following sections the way to install Monteverdi2 on:
If you want to build from source, or if we do not provide packages for your system, some information
is available in the OTB Software Guide, in the section (Building from Source)
3.2.1 Windows
For Windows XP/Seven/8.1 users, there is a classical standalone installation program for Mon-
teverdi2, available from the OTB download page after each release of Monteverdi2.
3.2.2 MacOS X
A standard DMG package of Monteverdi2 is available for MacOS X 10.8. Please go to the OTB
download page and click on the file to launch Monteverdi2. We will provide a package for
MacOS X 10.9 in the next release.
For Ubuntu 12.04 and higher, Monteverdi2 packages may be available as Debian packages through
APT repositories.
Since release 0.2, Monteverdi2 packages are available in the ubuntugis-unstable repository.
You can add it by using these command-lines:
Now run:
If you are using Synaptic, you can add the repository, update and install the package through the
graphical interface.
apt-add-repository will try to retrieve the GPG keys of the repositories to certify the origin of the
packages. If you are behind an HTTP proxy, this step won't work: apt-add-repository will stall and
eventually quit. You can temporarily ignore this error and proceed with the update step. Following
this, aptitude update will issue a warning about a signature problem. This warning won't prevent
you from installing the packages.
You can find more information about Monteverdi2 into the post of the OTB blog at http://blog.
orfeo-toolbox.org/.
CHAPTER
FOUR
Recipes
This chapter presents guidelines to perform various remote sensing and image processing tasks with
either OTB Applications, Monteverdi or both. Its goal is not to be exhaustive, but rather to help
non-developer users get familiar with these two packages, so that they can use and explore them
for their future needs.
The typical Pleiades product is a pansharpened image of 40 000 by 40 000 pixels, with 4 spectral
bands, but one can also order larger mosaics, with hundreds of thousands of pixels in each
dimension.
To allow easier storage and transfer of such products, the standard image file format is Jpeg2000,
which achieves high compression rates. The counterpart of this better storage and transfer
performance is that accessing pixels within those images may be slower than with an uncompressed
image format and, even more important, the cost of accessing pixels is not uniform: it depends on
where the pixels you are trying to access are located, and how they are spatially arranged.
To be more specific, Pleiades images are internally encoded into tiles of 2048 by 2048 pixels (within
the Jpeg2000 file). These tiles represent the atomic decompression unit: if you need a single pixel
from a given tile, you still have to decode the whole tile to get it. As a result, if you plan to access a
large amount of pixels within the image, you should try to access them on a per-tile basis, because
any time you ask for a given tile more than once, the performance of your processing chain drops.
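This cost model can be made concrete with a small sketch computing which 2048 by 2048 tiles a requested region touches (an illustration only; OTB handles this bookkeeping internally):

```python
# Which 2048x2048 Jpeg2000 tiles does a requested region touch?
TILE = 2048

def tiles_for_region(x, y, width, height, tile=TILE):
    """Return the (column, row) indices of every tile intersecting the
    region; each listed tile must be decoded entirely."""
    first_col, last_col = x // tile, (x + width - 1) // tile
    first_row, last_row = y // tile, (y + height - 1) // tile
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# A 100x100 region straddling a tile corner costs four full tiles...
cost_bad = len(tiles_for_region(2000, 2000, 100, 100))
# ...while the same region aligned inside one tile costs a single tile.
cost_good = len(tiles_for_region(0, 0, 100, 100))
```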
What does this mean? In Orfeo ToolBox, the streaming (on the flow) pipeline execution will try to
stay synchronised with the input image tiling scheme to avoid decoding the same tile several times.
But as you may know, in the Orfeo ToolBox world one can easily chain numerous processing steps,
some of them enlarging the requested region to process the output (like neighbourhood-based
operators for instance) or even completely changing the image geometry (like ortho-rectification
for instance). This chaining freedom is also at the heart of Monteverdi. In short, it is very easy to
build a processing pipeline in Orfeo ToolBox, or a chain of modules in Monteverdi, with incredibly
bad performance, even if the Orfeo ToolBox back-end does its best to stay in tune with tiles. And
here we do not even speak of sub-sampling the whole dataset at some point in the pipeline, which
leads to even poorer performance, and is nevertheless done any time a viewer is called on a module
output in Monteverdi.
So, can Monteverdi or OTB Applications open and process Pleiades images? Fortunately, yes.
Monteverdi even takes advantage of the Jpeg2000 ability to generate coarser scale images for
quick-look generation for visualisation purposes. But to ease the use of Pleiades images in
Monteverdi, we chose to open them as a separate data type, and to lock the use of most modules
for this data type. It can only be used in the Viewer module and in a dedicated module allowing to
uncompress a user-defined part of a Pleiades image to disk. One can still force the data type when
opening the image, but this is not advised: the advised way to use the other modules with Pleiades
data is to first uncompress your area of interest to disk, and then open it again in Monteverdi
(careful, you may need a lot of disk space to do this). As for the applications, they will work fine
even on Jpeg2000 Pleiades data, but keep in mind that a performance sink might show up depending
on the processing you are trying to achieve. Again, the advised way of working is to uncompress
your area of interest first and then work with the uncompressed file, as you are used to with other
data.
A final word about metadata: OTB Applications and Monteverdi can read the Dimap V2 metadata
file (note that we also read the non-official Dimap V1.1 format) associated with the Jpeg2000
file in the Pleiades product. They read the RPC localisation model for geo-coding and the
information needed to perform radiometric calibration. These metadata will be written in an
associated geometry file (with a .geom extension) when uncompressing your area of interest to disk,
so that both Monteverdi and OTB Applications will be able to retrieve them, even for image
extracts.
Opening a Pleiades image in Monteverdi is not different from opening other kinds of datasets: use
the Open Dataset item from the File menu, and select the JP2 file corresponding to your image using
the file browser.
Figure 4.1, page 39 shows the dialog box when opening a Pleiades image in Monteverdi. One can
see some changes with respect to the classical dialog box for opening images.
The first novelty is a combo box allowing to choose the resolution level of the Jpeg2000 file one
wants to decode. As said in the introduction of this section, Orfeo ToolBox can take advantage of
the Jpeg2000 capability to access coarser resolutions very efficiently. If you select for instance the
Resolution: 1 item, you will end up with an image half the size of the original image, with pixels
twice as big. For instance, on a Pleiades panchromatic or pansharpened product, the Resolution: 0
image has a ground sampling distance of 0.5 meters while the Resolution: 1 image has a ground
sampling distance of one meter. For a multispectral product, the Resolution: 0 image has a ground
sampling distance of 2 meters while the Resolution: 1 image has a ground sampling distance of 4
meters.
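The relation between the resolution level and the ground sampling distance is a simple doubling per level, which can be written as:

```python
def gsd_at_level(base_gsd, level):
    """Ground sampling distance at Jpeg2000 resolution level `level`:
    each level halves the image size, doubling the pixel size."""
    return base_gsd * 2 ** level

pan = [gsd_at_level(0.5, lvl) for lvl in range(3)]  # panchromatic / pansharpened
xs = [gsd_at_level(2.0, lvl) for lvl in range(3)]   # multi-spectral
```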
The second novelty is a check-box called Save quicklook for future re-use. This option speeds up
the loading of a Pleiades image in Monteverdi. When loading a Pleiades image, Monteverdi
generates a quicklook of this image to be used as a minimap in the Viewer Module as well as in the
Uncompress Jpeg2000 image module. This quicklook is the coarsest resolution level of the
Jpeg2000 file: it decodes easily, but can still take a while. This is why, if the check-box is checked,
Monteverdi will write this quicklook in uncompressed Tiff format next to the Jpeg2000 file. For
instance, if the file name is:
IMG_PHR1A_MS_201204011017343_SEN_IPU_20120529_1596-002_R1C1.JP2
Monteverdi will write, if it can, the following files in the same directory:
IMG_PHR1A_MS_201204011017343_SEN_IPU_20120529_1596-002_R1C1.JP2_ql_by_otb.tif
IMG_PHR1A_MS_201204011017343_SEN_IPU_20120529_1596-002_R1C1.JP2_ql_by_otb.geom
The next time one tries to open this image in Monteverdi, the application will find these files and
load the quicklook directly from them instead of decoding it from the Jpeg2000 file, resulting in an
instant loading of the image. Since the weight of these extra files is usually a few megabytes, it is
recommended to keep this option checked unless one has a very good reason not to. Now that the
Pleiades image is loaded in Monteverdi, it appears in the main Monteverdi window, as shown in
figure 4.2, page 39.
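The side-car naming convention for the quicklook files is a plain suffix append on the JP2 file name:

```python
def quicklook_files(jp2_path):
    """Names of the quicklook files Monteverdi writes next to the JP2."""
    return jp2_path + "_ql_by_otb.tif", jp2_path + "_ql_by_otb.geom"

tif, geom = quicklook_files(
    "IMG_PHR1A_MS_201204011017343_SEN_IPU_20120529_1596-002_R1C1.JP2")
```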
You can open the Pleiades image in the viewer, either by using the contextual menu or by opening
the Viewer Module through the menu bar.
You can notice that the viewer opens quickly, without showing the traditional progress bar. This is
because Monteverdi already loaded the quick-look upon opening, so it does not need to be
re-computed each time the image is opened in the Viewer Module.
Figure 4.3, page 41 shows a Pleiades image displayed in the Viewer Module. One can notice that
the navigation experience is rather smooth. If you navigate using the arrow keys, you will notice that
latency can occur now and then: this is due to the viewport switching to a new Jpeg2000 tile to
decode. One can also observe that the latitude and longitude of the pixel under the mouse pointer
are displayed, which means that the sensor modelling is handled (if you have an internet connection,
you may even see the actual name of the place under the mouse pointer). Last, as said in the
foreword of this section, Pleiades images can be quite large, so it might be convenient to switch the
viewer style from Packed to Splitted, in which case you will be able to maximize the Scroll Window
for better localisation of the viewed area. To do so, go to the Setup tab of the Viewer Control
Window.
If the Pleiades product is very large, the image may actually be split into several Jpeg2000 files,
also called mega-tiles. Since the area of interest might span two or more mega-tiles, it is convenient
to stitch these tiles together so as to get the entire scene into one Monteverdi dataset. To do so,
one must first open all mega-tiles in Monteverdi, as described in section 4.1.1, page 39, and shown
in figure 4.4, page 41.
Once this is done, one can use the Mosaic Images module from the File menu. Simply append all
mega-tiles into the module and run it: the module will look for the RiCj pattern in the file names to
determine the mega-tiles layout, and will also check for consistency, e.g. missing tiles or mega-tile
size mismatches. Upon success, it generates a new Pleiades image dataset corresponding to the
entire scene, as shown in figure 4.4, page 41. One can then use this dataset as a regular Pleiades
dataset.
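The RiCj consistency check described above can be sketched as follows (the helper is an illustration of the idea, not the module's actual code):

```python
import re

def megatile_layout(filenames):
    """Infer the mega-tile grid from the RiCj pattern in Pleiades file
    names, and check that no tile of the grid is missing."""
    positions = set()
    for name in filenames:
        m = re.search(r"_R(\d+)C(\d+)", name)
        if not m:
            raise ValueError("no RiCj pattern in " + name)
        positions.add((int(m.group(1)), int(m.group(2))))
    rows = max(r for r, _ in positions)
    cols = max(c for _, c in positions)
    missing = {(r, c) for r in range(1, rows + 1)
               for c in range(1, cols + 1)} - positions
    if missing:
        raise ValueError("missing mega-tiles: %s" % sorted(missing))
    return rows, cols

layout = megatile_layout([
    "IMG_PHR1A_MS_example_R1C1.JP2", "IMG_PHR1A_MS_example_R1C2.JP2",
    "IMG_PHR1A_MS_example_R2C1.JP2", "IMG_PHR1A_MS_example_R2C2.JP2"])
```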
The next very important thing one can do with Monteverdi is to select an area of interest in the
Pleiades image so as to uncompress it to disk. To do so, open the Pleiades dataset in the
Uncompress Jpeg2000 image module from the File menu. Figure 4.5, page 42 shows what this
module looks like. On the left, one can find information about the image: dimensions, resolution
level, number of Jpeg2000 tiles in the image, dimensions of the tiles, and size of the tiles in
megabytes. The center part of the module is the most important one: it displays a quick-look of the
Pleiades image. On this quick-look, one can select the area to be decoded by drawing a rectangle
with the mouse. The red rectangle shown by the module corresponds to this user-defined area. On
the left, in red, one can find the start index and size of the corresponding region.
The module also displays a green rectangle, which shows the minimum set of tiles that must be
decoded to produce the red area: this is the region that will actually be decoded to disk. On the
left, in green, one can find information about this region: how many tiles it contains, and the size
of the corresponding decoded output file.
Once the area of interest is chosen, one can click on the Save button and select an output file. The
module will write a geometry file (with the .geom extension) containing all useful metadata, so that
when reading the file back in Monteverdi or in OTB Applications, geometry and radiometry based
functionalities can still be used.
For all the reasons exposed in the foreword of this section, we do not allow using Pleiades images
directly in the remaining Monteverdi modules: the advised way of doing so is to first uncompress
the area of interest to disk.
The OTB Applications are able to work directly with Pleiades images. However, keep in mind that
performance may be limited for the reasons exposed in the foreword of this section. If you
experience poor performance with some application, try to uncompress the area of interest from
your image with Monteverdi first. One can also use the ExtractROI application for this purpose.
One interesting thing to know is that one can access the coarser resolutions of the Jpeg2000 file by
appending :i to the filename, where i is the resolution level starting at 0. For instance, one can use
the following:
This section presents various pre-processing tasks, presented in the classical order used to obtain a
calibrated, pan-sharpened image.
In remote sensing imagery, pixel values are called DN (for Digital Numbers) and cannot be
physically interpreted and compared: they are influenced by various factors such as the amount of
light flowing through the sensor, the gain of the detectors and the analogue to digital converter.
Depending on the season, the light and atmospheric conditions, the position of the sun or the sensor
internal parameters, these DN can drastically change for a given pixel (apart from any ground change
effects). Moreover, these effects are not uniform over the spectrum: for instance, aerosol amount
and type usually have more impact on the blue channel.
Therefore, it is necessary to calibrate the pixel values before any physical interpretation is made
out of them. In particular, this processing is mandatory before any comparison of pixel spectrum
between several images (from the same sensor), and to train a classifier without dependence to the
atmospheric conditions at the acquisition time.
Calibrated values are called surface reflectivity, a ratio denoting the fraction of light that is
reflected by the underlying surface in the given spectral range. As such, its values lie in the range
[0, 1]. For convenience, images are often stored in thousandths of reflectivity, so that they can be
encoded with an integer type. Two levels of calibration are usually distinguished:
• The first level is called Top Of Atmosphere (TOA) reflectivity. It takes into account the sensor
gain, sensor spectral response and the solar illumination.
• The second level is called Top Of Canopy (TOC) reflectivity. In addition to sensor gain and so-
lar illumination, it takes into account the optical thickness of the atmosphere, the atmospheric
pressure, the water vapor amount, the ozone amount, as well as the composition and amount
of aerosols.
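The thousandths-of-reflectivity encoding mentioned above amounts to the following (whether rounding or truncation is used is an implementation detail; rounding is assumed in this sketch):

```python
import numpy as np

def encode_reflectance(refl):
    """Store [0, 1] reflectance values as thousandths in 16-bit unsigned
    integers, the encoding used for calibrated output images."""
    return np.clip(np.round(refl * 1000), 0, 65535).astype(np.uint16)

toa = np.array([0.0421, 0.1337, 0.9])
encoded = encode_reflectance(toa)
```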
This transformation can be done either with OTB Applications or with Monteverdi. Sensor-related
parameters such as gain, date, spectral sensitivity and sensor position are seamlessly read from the
image metadata. Atmospheric parameters can be tuned by the user. Supported sensors are:
• Pleiades,
• SPOT5,
• QuickBird,
• Ikonos,
• WorldView-1,
• WorldView-2,
• Formosat.
The OpticalCalibration application allows performing optical calibration. The mandatory parameters
are the input and output images. All other parameters are optional. By default, the level of
calibration is set to TOA (Top Of Atmosphere). The output images are expressed in thousandths of
reflectivity using a 16-bit unsigned integer type.
A basic TOA calibration task can be performed with the following command:
A basic TOC calibration task can be performed with the following command:
• Luminance image.
• TOA reflectance image.
• TOC reflectance image.
• Difference TOA-TOC image, which gives an estimation of the atmospheric contribution.
4.2.2 Pan-sharpening
Because of physical constraints on the sensor design, it is difficult to achieve high spatial and
spectral resolution at the same time: a better spatial resolution means a smaller detector, which in
turn means less optical flow on the detector surface. Conversely, spectral bands are obtained
through filters applied on the detector surface, which lower the optical flow, so that it is necessary
to increase the detector size to achieve an acceptable signal to noise ratio.
For these reasons, many high resolution satellite payloads are composed of two sets of detectors,
which in turn deliver two different kinds of images:
• The multi-spectral (XS) image, composed of 3 to 8 spectral bands, containing usually blue,
green, red and near infra-red bands at a given resolution (usually from 2.8 meters to 2 meters).
• The panchromatic (PAN) image, a grayscale image acquired by a detector covering a wider
part of the light spectrum, which allows increasing the optical flow and thus reducing pixel
size. Therefore, the resolution of the panchromatic image is usually around 4 times higher
than the resolution of the multi-spectral image (from 46 centimeters to 70 centimeters).
It is very frequent that those two images are delivered side by side by data providers. Such a dataset
is called a bundle. A very common remote sensing processing is to fuse the panchromatic image with
the multi-spectral one so as to get an image combining the spatial resolution of the panchromatic
image with the spectral richness of the multi-spectral image. This operation is called pan-sharpening.
This fusion operation requires two different steps :
1. The multi-spectral (XS) image is zoomed and registered to the panchromatic image,
2. A pixel-by-pixel fusion operator is applied to the co-registered pixels of the multi-spectral and
panchromatic image to obtain the fused pixels.
Using either OTB Applications or modules from Monteverdi, it is possible to perform both steps
in a row, or to apply the fusion step-by-step, as described below.
The BundleToPerfectSensor application performs both steps in a row. Seamless sensor modelling
is used to perform zooming and registration of the multi-spectral image on the panchromatic
image. Then, a simple pan-sharpening is applied, according to the following formula:
PXS(i, j) = PAN(i, j) / PANsmooth(i, j) · XS(i, j)    (4.1)
Where i and j are pixels indices, PAN is the panchromatic image, XS is the multi-spectral image and
PANsmooth is the panchromatic image smoothed with a kernel to fit the multi-spectral image scale.
Here is a simple example of how to use the BundleToPerfectSensor application:
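Assuming a bundle with a panchromatic image phr_pan.tif and a multi-spectral image phr_xs.tif (file names are illustrative), the invocation could look like:

```shell
# Zoom/register the XS image onto the PAN image, then apply the pan-sharpening formula
otbcli_BundleToPerfectSensor -inp phr_pan.tif -inxs phr_xs.tif -out phr_pxs.tif
```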
There are two more optional parameters that can be useful for this tool:
• The -elev option specifies the elevation, either with a DEM formatted for OTB
(-elev.dem option, see section 4.2.3) or with an average elevation (-elev.default option).
Since registration and zooming of the multi-spectral image are performed using sensor models,
the registration may not be perfect over landscapes with high elevation variation. Using a
DEM in this case yields a better registration.
• The -lmSpacing option specifies the step of the registration grid between the multi-
spectral and panchromatic images, expressed in panchromatic pixels.
A lower value gives a more precise registration but implies more computation with the sensor
models, and thus increases the computation time. The default value is 10 pixels, which gives
sufficient precision in most cases.
Pan-sharpening is a quite heavy processing requiring a lot of system resources. The -ram option
allows you to limit the amount of memory available for the computation, and to avoid overloading
your computer. Increasing the available amount of RAM may also result in better computation time,
since it optimises the use of the system resources. The default value is 256 MB.
Monteverdi can perform the fusion step-by-step. The following screenshots highlight the operations
needed to perform pan-sharpening.
• Open the panchromatic and multispectral images in Monteverdi, using the Open Dataset module
or the -il option of the Monteverdi executable.
• The Superimpose module is used to zoom and register the multispectral image onto the panchro-
matic image. As a result, we get a multispectral dataset with the same geographic extent
and the same resolution as the panchromatic image, cf. Figure 4.8.
• Now the Simple RCS pan-sharpening module can be applied, using the panchromatic and the
multispectral images as inputs. It produces a multispectral image with the same resolution
and geographic extent (cf. Figure 4.9).
Please also note that since registration and zooming of the multi-spectral image onto the panchro-
matic image rely on sensor modelling, this tool will only work for images whose sensor model is
available in Orfeo ToolBox (see section 4.2.4 for a detailed list). It will also work with ortho-ready
products in cartographic projection.
A Digital Elevation Model (DEM) is a georeferenced image (or collection of images) in which each
pixel corresponds to a local elevation. DEMs are useful for tasks involving sensor-to-ground and
ground-to-sensor coordinate transforms, as during ortho-rectification (see section 4.2.4). These
transforms need to find the intersection between the line of sight of the sensor and the earth geoid.
If a simple spheroid is used as the earth model, potentially large localisation errors can be made in
areas where elevation is high or uneven. Of course, DEM accuracy and resolution have a great
impact on the precision of these transforms.
Two main DEMs, free of charge and with worldwide coverage, are both delivered as 1-degree
by 1-degree tiles:
• The Shuttle Radar Topography Mission (SRTM) is a 90-meter resolution DEM, obtained by
radar interferometry during a 2000 campaign of NASA's Endeavour space shuttle.
• The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) DEM is a
30-meter resolution DEM obtained by stereoscopic processing of the ASTER instrument
archive.
The Orfeo ToolBox relies on OSSIM capabilities for sensor modelling and DEM handling. Tiles of
a given DEM are supposed to be located within a single directory. General elevation support is also
supported from GeoTIFF files.
Whenever an application or Monteverdi module requires a DEM, the elev.dem option sets
the DEM directory. This directory must contain the DEM tiles, either in DTED or SRTM format,
or as GeoTIFF files. Subdirectories are not supported.
Depending on the reference of the elevation, you may also need a geoid to manage elevation
accurately. For this, you need to specify a path to a file containing the geoid, i.e. the equipotential
surface that would coincide with the mean ocean surface of the Earth. We provide one geoid file in
the OTB-Data repository.
In all applications, the elev.geoid option sets the path to the geoid file. Finally, it is also
possible to use an average elevation, in case no DEM is available, with the elev.default option.
There are several levels of products available on the remote sensing imagery market. The most basic
level often provides the geometry of acquisition (sometimes called the raw geometry). In this case,
pixel coordinates cannot be directly used as geographical positions. For most sensors (though not for
all), different lines correspond to different acquisition times, and thus different sensor positions,
and different rows correspond to different cells of the detector.
The mapping of a raw image so as to register it to a cartographic grid is called ortho-rectification,
and consists in inverting (at least) the following effects:
• In most cases, lines are orthogonal to the sensor trajectory, which does not exactly (and in some
cases not at all) follow a north-south axis,
• Depending on the sensor, the line of sight may differ from nadir (the ground position directly
below the sensor), and thus projective warping may appear,
• The variation of height in the landscape may result in severe warping of the image.
Moreover, depending on the area of the world over which the image has been acquired, different
map projections should be used.
The ortho-rectification process is as follows: once an appropriate map projection has been defined,
a localisation grid is computed to map pixels from the raw image to the ortho-rectified one. Pixels
from the raw image are then interpolated according to this grid in order to fill the ortho-rectified
pixels.
Ortho-rectification can be performed with either OTB Applications or Monteverdi. Sensor pa-
rameters and image meta-data are seamlessly read from the image files without any user
interaction, provided that all auxiliary files are available. The sensors for which Orfeo ToolBox
supports ortho-rectification of raw products are the following:
• Pleiades,
• SPOT5,
• Ikonos,
• Quickbird,
• GeoEye,
• WorldView.
In addition, GeoTIFF and other file formats with geographical information are seamlessly read by
Orfeo ToolBox, and the ortho-rectification tools can be used to re-sample these images into another
map projection.
The OrthoRectification application performs ortho-rectification and map re-projection. The
simplest way to use it is the following command:
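A minimal invocation (file names are illustrative) could be:

```shell
# Ortho-rectify a raw product with all parameters estimated automatically
otbcli_OrthoRectification -io.in input_raw.tif -io.out ortho.tif
```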
In this case, the tool will automatically estimate all the necessary parameters:
• The map projection is set to UTM (a worldwide map projection) and the UTM zone is auto-
matically estimated,
• The ground sampling distance of the output image is computed to fit the image resolution,
• The region of interest (upper-left corner and size of the image) is estimated so as to contain
the whole input image extent.
In order to use a Digital Elevation Model (see section 4.2.3) for better localisation performance,
one can pass the directory containing the DEM tiles to the application:
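For instance (the DEM directory path is illustrative):

```shell
# Use DEM tiles from the given directory during ortho-rectification
otbcli_OrthoRectification -io.in input_raw.tif -io.out ortho.tif -elev.dem /path/to/dem_tiles/
```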
If one wants to use a different map projection, the -map option may be used (example with lambert93
map projection):
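For example (file names are illustrative):

```shell
# Re-project into the Lambert 93 map projection instead of the default UTM
otbcli_OrthoRectification -io.in input_raw.tif -io.out ortho.tif -map lambert93
```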
Map projections handled by the application are the following (please note that the ellipsoid is always
WGS84):
The outputs parameter group contains parameters to set the origin, size and spacing of the output
image. For instance, the ground spacing can be specified as follows:
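For example (file names and the 0.5-meter spacing are illustrative; note the negative y spacing):

```shell
# Set a 0.5 m ground spacing; y spacing is negative because the image y axis points down
otbcli_OrthoRectification -io.in input_raw.tif -io.out ortho.tif \
  -outputs.spacingx 0.5 -outputs.spacingy -0.5
```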
Please note that since the y axis of the image is bottom oriented, the y spacing should be negative to
avoid switching north and south direction.
A user-defined region of interest to ortho-rectify can be specified as follows:
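For example (file names and coordinate values are illustrative):

```shell
# Ortho-rectify only a 2048 x 2048 pixel region whose upper-left corner is given in map coordinates
otbcli_OrthoRectification -io.in input_raw.tif -io.out ortho.tif \
  -outputs.ulx 560000 -outputs.uly 5120000 -outputs.sizex 2048 -outputs.sizey 2048
```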
Where the -outputs.ulx and -outputs.uly options specify the coordinates of the upper-
left corner of the output image, and the -outputs.sizex and -outputs.sizey options specify
the size of the output image.
A few more interesting options are available:
• The -opt.rpc option uses an estimated RPC model instead of the rigorous SPOT5
model, which speeds up the processing,
• The -opt.gridspacing option defines the spacing of the localisation grid used for
ortho-rectification. A coarser grid speeds up the processing, but with a potential
loss of accuracy. A standard value is 10 times the ground spacing of the output image.
• The -interpolator option selects the interpolation algorithm: nearest neighbor, linear or
bicubic. The default is nearest neighbor interpolation, but bicubic should be fine in most
cases.
• The -opt.ram option specifies the amount of memory available for the processing (in
MB). The default is 256 MB. Increasing this value to fit the available memory on your computer
may speed up the processing.
Image registration is a fundamental problem in image processing. The aim is to align two or more
images of the same scene, often taken at different times, from different viewpoints, or by differ-
ent sensors. It is a basic step for ortho-rectification, image stitching, image fusion and change
detection. This process is also critical for stereo reconstruction, in order to obtain an accurate
estimation of the epipolar geometry.
The sensor model alone is generally not sufficient to register images accurately. Indeed, several
sources of geometric distortion affect optical remote sensing images, including earth rotation,
platform movement and non-linearities.
These result in geometric errors at scene, image and pixel level. It is critical to rectify
these errors before a thematic map is generated, especially when the remote sensing data need to be
integrated with other GIS data.
The generic workflow in the case of image series registration chains homologous points extraction
with bundle-block adjustment, supported by a DEM.
[Figure: generic workflow for image series registration]
We will now illustrate this process by applying this workflow to register two images. The process
can easily be extended to perform image series registration.
The aim of this example is to describe how to register a Level 1 QuickBird image to an orthorectified
Pleiades image over the area of Toulouse, France.
We first dump the geometry metadata of the image we want to refine into a text file. In OTB, we use
the extension .geom for this type of file. As you will see, the application which estimates the
refined geometry only needs this metadata and a set of homologous points as input. The refinement
application will create a new .geom file containing the refined geometry parameters, which can be
used afterwards for reprojection, for example.
Figure 4.10: From left to right: Pleiades ortho-image, and original QuickBird image over Toulouse
The use of external .geom files is available in OTB since release 3.16.
The main idea of residual registration is to estimate a second transformation, applied after the
sensor model.
The HomologousPointsExtraction application uses interest point detection to get a set of points
matching in both images.
The basic idea is to use this set of homologous points to estimate a residual transformation
between the two images.
There is a wide variety of keypoint detectors in the literature. They detect and describe local
features in images: for each interest point, these algorithms provide a "feature descriptor". This
descriptor has the property of being invariant to image translation, scaling and rotation, partially
invariant to illumination changes, and robust to local geometric distortion. Features extracted
from the input images are then matched against each other. These correspondences are then used to
create the homologous points.
SIFT or SURF keypoints can be computed in the application. The band on which keypoints are
computed can be set independently for both images.
The application offers two modes:
• The first is the full mode, where keypoints are extracted from the full extent of both images
(please note that large image files are not supported in this mode).
• The second mode, called geobins, sets up spatial binning so as to get fewer points
spread across the entire image. In this mode, the corresponding spatial bin in the second image
is estimated using the geographical transform or sensor modelling, and is padded according to the
user-defined precision.
Moreover, in both modes the application can filter out matches whose co-localization in the first
image exceeds this precision. Last, the elevation parameters allow dealing more precisely with
sensor modelling in the case of sensor geometry data. The outvector option creates a vector file
with segments corresponding to the localization error between the matches.
Finally, with the 2wgs84 option, you can match two sensor geometry images or a sensor geometry
image with an ortho-rectified reference. In all cases, you get a list of ground control points spread
all over your image.
Note that for a proper use of the application, elevation must be correctly set (including DEM and
geoid file).
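Putting this together, a sketch invocation of the HomologousPointsExtraction application could look as follows (file names, bin size and precision values are illustrative):

```shell
# Extract homologous points between the raw QuickBird image and the Pleiades ortho-image,
# using SURF keypoints, spatial binning and back-matching filtering
otbcli_HomologousPointsExtraction -in1 qb_raw.tif -in2 phr_ortho.tif \
  -algorithm surf -mode geobins -mode.geobins.binsize 512 \
  -precision 20 -mfilter 1 -2wgs84 1 \
  -out homologous_points.txt -outvector matches.shp \
  -elev.dem /path/to/dem_tiles/ -elev.geoid egm96.grd
```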
We can now use this set of tie points to estimate a residual transformation. For this we use the
dedicated application called RefineSensorModel. This application makes use of OSSIM capabilities
to align the sensor model.
It reads the input geometry metadata file (.geom), which contains the sensor model information that
we want to refine, and the text file (homologous_points.txt) containing the list of ground control
points. It performs a least-squares fit of the sensor model adjustable parameters to these tie points
and produces an updated geometry file as output (always with the .geom extension).
The application can also produce an optional statistics file based on the ground control points, and
a vector file containing residues that you can display in a GIS software.
Please note again that for a proper use of the application, elevation must be correctly set (including
DEM and geoid file). The map parameter selects a map projection in which the accuracy will be
estimated (in meters).
Accuracy values are provided as output of the application (computed from the tie points locations)
and also allow controlling the precision of the estimated model.
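A sketch invocation (file names are illustrative) could be:

```shell
# Fit the adjustable sensor model parameters to the tie points and write a refined .geom file
otbcli_RefineSensorModel -ingeom qb_raw.geom -inpoints homologous_points.txt \
  -outgeom qb_refined.geom -outstat stats.txt -outvector residues.shp \
  -map utm -elev.dem /path/to/dem_tiles/ -elev.geoid egm96.grd
```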
We will now show how to use this new sensor model. In our case, we will use it to orthorectify the
QuickBird image over the Pleiades reference. Since version 3.16, Orfeo ToolBox offers the possibility
to use extended image paths (see http://wiki.orfeo-toolbox.org/index.php/ExtendedFileName)
to use a different metadata file as input. This is what we use here to orthorectify the QuickBird
image using the .geom file obtained from the RefineSensorModel application: the estimated sensor
model takes into account the original sensor model of the slave image and also fits the set of tie
points.
-elev.geoid OTB-Data/Input/DEM/egm96.grd
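The fragment above is part of the command; a full invocation (file names other than the geoid path are illustrative) might look like:

```shell
# Ortho-rectify the QuickBird image using the refined geometry via an extended filename
otbcli_OrthoRectification -io.in "qb_raw.tif?&geom=qb_refined.geom" -io.out qb_ortho.tif \
  -elev.dem /path/to/dem_tiles/ \
  -elev.geoid OTB-Data/Input/DEM/egm96.grd
```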
As a result, if you have enough homologous points in the images and verify that the residual error
between the set of tie points and the estimated sensor model is small, you should now achieve a good
registration between the two rectified images, normally far better than 'only' performing separate
orthorectifications of the two images.
This methodology can be adapted and applied to several other cases.
The BandMath application provides a simple and efficient way to perform band operations. The
command line application and the corresponding Monteverdi module (shown in the section 2.4.4)
are based on the same standards. It computes a band-wise operation according to a user-defined
mathematical expression. The following command computes the absolute difference between the
first bands of two images:
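For example (input file names are illustrative):

```shell
# Absolute difference between the first bands of two input images
otbcli_BandMath -il image1.tif image2.tif -out difference.tif -exp "abs(im1b1 - im2b1)"
```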
The naming convention "im[x]b[y]" designates the y-th band of the x-th input image.
The BandMath application embeds built-in operators and functions (listed here), allowing a vast
choice of possible operations.
4.3.2 Segmentation
Segmenting objects across a very high resolution scene and with a controlled quality is a difficult task
for which no method has reached a sufficient level of performance to be considered as operational.
Even if we leave aside the question of segmentation quality and consider that we have a method
performing reasonably well on our data and objects of interest, the task of scaling up segmentation
to real very high resolution data is itself challenging. First, we cannot load the whole data into
memory, and there is a need for on-the-fly processing, which does not cope well with traditional
segmentation algorithms. Second, the result of the segmentation process itself is difficult to
represent and manipulate efficiently.
The experience of segmenting large remote sensing images is packed into a single Segmentation
application in OTB Applications.
You can find more information about this application here.
LSMS is a segmentation workflow for tile-wise segmentation of very large images, with the
theoretical guarantee of getting results identical to those obtained without tiling. It has been
developed by David Youssefi and Julien Michel during David's internship at CNES and is to be
published soon.
The workflow consists in chaining 3 or 4 dedicated applications and produces a GIS vector file with
artifact-free polygons corresponding to the segmented image, as well as mean and variance of the
radiometry of each band for each polygon.
Step 1: Mean-Shift smoothing
The first step of the workflow is to perform Mean-Shift smoothing with the MeanShiftSmoothing
application:
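A sketch invocation (file names and radius/threshold values are illustrative):

```shell
# Mean-Shift smoothing; modesearch is disabled so results do not depend on streaming
otbcli_MeanShiftSmoothing -in input.tif -fout smooth.tif -foutpos position.tif \
  -spatialr 16 -ranger 16 -thres 0.1 -maxiter 100 -modesearch 0
```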
Note that the modesearch option should be disabled, and that the foutpos parameter is optional: it
can be activated if you want to perform the segmentation based on both spatial and range modes.
This application will smooth large images by streaming them, and deactivating modesearch
guarantees that the results will not depend on the streaming scheme. Please also note that maxiter
is used to set the margin ensuring these identical results, so increasing maxiter may have an
additional impact on processing time.
Step 2: Segmentation
The next step is to produce an initial segmentation based on the smoothed images produced by the
MeanShiftSmoothing application. To do so, the LSMSSegmentation application will process them
by tiles whose dimensions are defined by the tilesizex and tilesizey parameters, writing intermediate
images to disk, thus keeping the memory consumption very low throughout the process. The
segmentation will group together adjacent pixels whose range distance is below the ranger parameter
and (optionally) whose spatial distance is below the spatialr parameter.
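A sketch invocation, reusing the outputs of the smoothing step (file names and parameter values are illustrative):

```shell
# Tile-wise segmentation; uint32 output ensures enough labels for very large images
otbcli_LSMSSegmentation -in smooth.tif -inpos position.tif -out segmentation.tif uint32 \
  -spatialr 16 -ranger 16 -minsize 0 -tilesizex 500 -tilesizey 500
```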
Note that the final segmentation image may contain a very large number of segments, so the
uint32 image type should be used to ensure that there will be enough labels to index those
segments. The minsize parameter will filter out segments whose size in pixels is below its value,
and their labels will be set to 0 (nodata).
Please note that the output segmented image may look patchy, as if there were tiling artifacts: this is
because segments are numbered sequentially with respect to the order in which tiles are processed.
You will see after the result of the vectorization step that there are no artifacts in the results.
The LSMSSegmentation application will write as many intermediate files as tiles needed during
processing. As such, it may require up to twice the size of the final image in free disk space. The
cleanup option (active by default) clears the intermediate files during the processing, as soon as
they are no longer needed. By default, files are written to the current directory. The tmpdir
option specifies a different directory for these intermediate files.
Step 3: Merging small regions
The LSMSSegmentation application can filter out small segments. In the output segmented
image, those segments are removed and replaced by the background label (0). Another solution
to deal with small regions is to merge them with the closest big enough adjacent region in terms
of radiometry. This is handled by the LSMSSmallRegionsMerging application, which outputs a
segmented image where small regions have been merged. Again, the uint32 image type is advised
for this output image.
The minsize parameter specifies the threshold on the size of the regions to be merged. Like
the LSMSSegmentation application, this application processes the input images tile-wise to keep
resource usage low, with the guarantee of identical results. You can set the tile size using the
tilesizex and tilesizey parameters. However, unlike the LSMSSegmentation application, it does not
need to write any temporary file to disk.
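A sketch invocation (file names and the minimum size are illustrative):

```shell
# Merge regions smaller than 50 pixels into their radiometrically closest neighbor
otbcli_LSMSSmallRegionsMerging -in smooth.tif -inseg segmentation.tif \
  -out segmentation_merged.tif uint32 -minsize 50 -tilesizex 500 -tilesizey 500
```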
Step 4: Vectorization
The last step of the LSMS workflow consists in the vectorization of the segmented image into a
GIS vector file. This vector file will contain one polygon per segment, and each of these polygons
will hold additional attributes denoting the label of the original segment, the size of the segment in
pixels, and the mean and variance of each band over the segment. The projection of the output GIS
vector file will be the same as the projection of the input image (if the input image has no projection,
neither does the output GIS file).
This application processes the input images tile-wise to keep resource usage low, with the guarantee
of identical results. You can set the tile size using the tilesizex and tilesizey parameters. However,
unlike the LSMSSegmentation application, it does not need to write any temporary file to disk.
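A sketch invocation (file names are illustrative):

```shell
# Vectorize the merged segmentation; statistics are computed from the original input image
otbcli_LSMSVectorization -in input.tif -inseg segmentation_merged.tif \
  -out segmentation.shp -tilesizex 500 -tilesizey 500
```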
This framework performs cartographic validation starting from the result of a detection (for
example a road extraction), enhancing the reliability of the results with a classifier fusion
algorithm. Using a set of descriptors, the processing chain validates or invalidates the input
geometrical features.
The DSFuzzyModelEstimation application performs the fuzzy model estimation (once per use case:
descriptor set / Belief support / Plausibility support). It has the following input parameters:
• -psin a vector data file of positive samples enriched according to the "Compute Descriptors" part
• -nsin a vector data file of negative samples enriched according to the "Compute Descriptors"
part
• -belsup a support for the Belief computation
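A sketch invocation, assuming the -plasup (Plausibility support) parameter alongside those listed above (sample file names and descriptor names are illustrative):

```shell
# Estimate the fuzzy model from enriched positive and negative samples
otbcli_DSFuzzyModelEstimation -psin positive_samples.shp -nsin negative_samples.shp \
  -belsup "ROADSA" -plasup "NONDVI" "ROADSA" "NOBUIL" -out FuzzyModel.xml
```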
The output file FuzzyModel.xml contains the optimal model to perform information fusion.
The first step in the classifier fusion based validation is to compute, for each studied polyline, the
chosen descriptors. In this context, the ComputePolylineFeatureFromImage application can be
used for a large range of descriptors. It has the following inputs:
• -in an image (of the studied scene) corresponding to the chosen descriptor (NDVI, building
mask...)
• -vd a vector data file containing the polylines of interest
• -expr a formula ("b1 > 0.4", "b1 == 0") where b1 is the standard name of the input image's
first band
• -field a field name corresponding to the descriptor codename (NONDVI, ROADSA...)
The output is a vector data file containing polylines with a new field holding the descriptor value.
In order to add the "NONDVI" descriptor to an input vector data file ("inVD.shp"), corresponding
to the percentage of pixels along a polyline that verify the formula "NDVI > 0.4":
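A sketch invocation (the output file name is illustrative):

```shell
# Enrich the input polylines with a NONDVI field computed from the NDVI image
otbcli_ComputePolylineFeatureFromImage -in NDVI.TIF -vd inVD.shp \
  -expr "b1 > 0.4" -field "NONDVI" -out VD_NONDVI.shp
```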
NDVI.TIF is the NDVI mono-band image of the studied scene. This step must be repeated for each
chosen descriptor:
Both NDVI.TIF and roadSpectralAngle.TIF can be produced using Monteverdi feature extrac-
tion capabilities, and Buildings.TIF can be generated using Monteverdi rasterization module.
From now on, VD_NONDVI_ROADSA_NOBUIL.shp contains three descriptor fields. It will be used in
the following part.
The final application (VectorDataDSValidation) will validate or invalidate the studied samples
using the Dempster-Shafer theory. Its inputs are:
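The detailed input list did not survive in this copy; a typical invocation, with parameter names assumed from the application's conventions and an illustrative output name, might be:

```shell
# Validate or invalidate the enriched polylines using the estimated fuzzy model
otbcli_VectorDataDSValidation -in VD_NONDVI_ROADSA_NOBUIL.shp \
  -descmod FuzzyModel.xml -out ValidatedVectorData.shp
```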
4.4 Classification
The classification in the application framework provides a supervised pixel-wise classification chain
based on learning from multiple images, and using one specified machine learning method like
SVM, Bayes, KNN, Random Forests, Artificial Neural Networks, and others (see the application help
of TrainImagesClassifier for further details about all the available classifiers). It supports huge
images through streaming and multi-threading. The classification chain performs a training step
based on the intensities of each pixel as features. Please note that all the input images must have the
same number of bands to be comparable.
Statistics estimation
In order to make these features comparable across training images, the first step consists in
estimating the input images' statistics. These statistics will be used to center and reduce the
intensities (mean of 0 and standard deviation of 1) of the samples based on the vector data produced
by the user.
To do so, the ComputeImagesStatistics tool can be used:
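A sketch invocation (input file names are illustrative):

```shell
# Compute per-band mean and standard deviation over all training images
otbcli_ComputeImagesStatistics -il image1.tif image2.tif -out images_statistics.xml
```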
This tool computes each band's mean, computes each band's standard deviation based on the pooled
variance, and finally exports them to an XML file. This features statistics XML file will be an
input of the following tools.
As the chain is supervised, we first need to build a training set with positive examples of the
different objects of interest. This can be done with the Monteverdi Vectorization module (Fig. 4.11).
These polygons must be saved in an OGR vector format supported by GDAL, such as ESRI Shapefile.
This operation will be repeated for each image used as input of the training function.
Please note that the positive examples in the vector data should have a "Class" field with a label
value higher than 1, coherent across images.
You can generate the vector data set with the Quantum GIS software, for example, and save it in an
OGR vector format supported by GDAL (ESRI Shapefile, for example). OTB Applications should
be able to transform the vector data into the image coordinate system.
Once images statistics have been estimated, the learning scheme is the following:
Figure 4.11: A training data set built with the Monteverdi Vectorization module.
(c) Add vectors respectively to the training samples set and the validation samples set.
2. Increase the size of the training samples set and balance it by generating new noisy samples
from the previous ones,
3. Perform the learning with this training set
4. Estimate the performance of the classifier on the validation samples set (confusion matrix,
precision, recall and F-score).
Let us consider an SVM classification. These steps can be performed with the TrainImagesClassifier
command-line application as follows:
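A sketch invocation (image, vector and model file names are illustrative):

```shell
# Train an SVM model from training polygons, using the previously computed statistics
otbcli_TrainImagesClassifier -io.il image1.tif image2.tif \
  -io.vd training1.shp training2.shp -io.imstat images_statistics.xml \
  -classifier libsvm -io.out svm_model.svm
```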
Additional groups of parameters are also available (see application help for more details):
Once the classifier has been trained, one can apply the model to classify pixels into the defined
classes on a new image using the ImageClassifier application:
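A sketch invocation (file names are illustrative):

```shell
# Classify a new image with the trained model; statistics must match those used for training
otbcli_ImageClassifier -in image.tif -imstat images_statistics.xml \
  -model svm_model.svm -out labeled_image.tif
```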
You can set an input mask to limit the classification to areas where the mask value is greater than 0.
The performance of the model generated by the TrainImagesClassifier application is directly esti-
mated by the application itself, which displays the precision, recall and F-score of each class, and
can generate the global confusion matrix as an output *.CSV file.
Color mapping can be used to apply color transformations on the final gray-level label image. It
produces an RGB classification map by re-mapping the image values to be suitable for display
purposes. One can use the ColorMapping application. This tool replaces each label with an
8-bit RGB color specified in a mapping file. The mapping file should look like this:
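A minimal mapping file (name illustrative, e.g. lut_mapping_file.txt) associates each label with an R G B triple, one per line; the colors below are illustrative except label 1, which the text renders as red:

```
1 255 0 0
2 0 255 0
3 0 0 255
4 255 255 0
```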
In the previous example, 1 is the label and 255 0 0 is an RGB color (this one will be rendered as
red). To use the mapping tool, enter the following:
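A sketch invocation (file names are illustrative):

```shell
# Re-map each label of the classification to the RGB color given in the mapping file
otbcli_ColorMapping -in labeled_image.tif -method custom \
  -method.custom.lut lut_mapping_file.txt -out color_image.tif
```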
Other look-up tables (LUT) are available: a standard continuous LUT, an optimal LUT, and a LUT
computed over a support image.
Example
We consider 4 classes: water, roads, vegetation and buildings with red roofs. Data is available in the
OTB-Data repository and this image is produced with the commands inside this file.
After having processed several classifications of the same input image but from different models or
methods (SVM, KNN, Random Forest...), it is possible to fuse these classification maps
Figure 4.12: From left to right: Original image, result image with fusion (with monteverdi viewer) of original
image and fancy classification and input image with fancy color classification from labeled image.
with the FusionOfClassifications application, which uses either majority voting or the Dempster-
Shafer framework to handle this fusion. The fusion of classifications generates a single, more
robust and precise classification map which combines the information extracted from the input list
of labeled images.
The FusionOfClassifications application has the following input parameters:
• -out the output labeled image resulting from the fusion of the input classification images
• -method the fusion method (either by majority voting or by Dempster Shafer)
• -nodatalabel label for the no data class (default value = 0)
The input pixels with the nodata class label are simply ignored by the fusion process. Moreover,
the output pixels for which the fusion process does not result in a unique class label are set to the
undecided value.
In the majority voting method implemented in the FusionOfClassifications application, the value
of each output pixel is equal to the most frequent class label of the same pixel in the input
classification maps. However, it may happen that the most frequent class label is not unique for
some pixels. In that case, the undecided label is assigned to those output pixels.
The application can be used like this:
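A sketch invocation for six input maps (file names and the undecided label value are illustrative):

```shell
# Fuse six classification maps by majority voting; ties get the undecided label
otbcli_FusionOfClassifications -il cmap1.tif cmap2.tif cmap3.tif cmap4.tif cmap5.tif cmap6.tif \
  -method majorityvoting -nodatalabel 0 -undecidedlabel 10 \
  -out MVFusedClassificationMap.tif
```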
Let us consider 6 independent classification maps of the same input image (cf. left image in Fig.
4.12) generated from 6 different SVM models. Fig. 4.13 represents them after color mapping with
the same LUT. Thus, 4 classes (water: blue, roads: gray, vegetation: green, buildings with red
roofs: red) are observable in each of them.
Figure 4.13: Six fancy colored classified images to be fused, generated from 6 different SVM models.
As an example of the FusionOfClassifications application by majority voting, the fusion of the six
input classification maps represented in Fig. 4.13 leads to the classification map illustrated on the
right in Fig. 4.14. Thus, it appears that this fusion highlights the most relevant classes among the
six different input classifications. The white parts of the fused image correspond to the undecided
class labels, i.e. to pixels for which there is no unique majority vote.
Figure 4.14: From left to right: Original image, and fancy colored classified image obtained by a majority voting
fusion of the 6 classification maps represented in Fig. 4.13 (water: blue, roads: gray, vegetation: green, buildings
with red roofs: red, undecided: white).
The FusionOfClassifications application handles another method to compute the fusion: the
Dempster-Shafer framework. In the Dempster-Shafer theory, the performance of each classifier
producing the classification maps to fuse is evaluated with the help of the so-called belief function
of each class label, which measures the degree of belief that the corresponding label is correctly
assigned to a pixel. For each classifier, and for each class label, these belief functions are estimated
from another parameter called the mass of belief of each class label, which measures the confidence
that the user can have in each classifier according to the resulting labels.
In the Dempster Shafer framework for the fusion of classification maps, the fused class label for
each pixel is the one with the maximal belief function. In case of multiple class labels maximizing
the belief functions, the output fused pixels are set to the undecided value.
In order to estimate the confidence level in each classification map, each of them should be con-
fronted with a ground truth. For this purpose, the masses of belief of the class labels resulting from
a classifier are estimated from its confusion matrix, which is itself exported as a *.CSV file with
the help of the ComputeConfusionMatrix application. Thus, using the Dempster Shafer method to
fuse classification maps needs an additional input list of such *.CSV files corresponding to their
respective confusion matrices.
The application can be used like this:
-out DSFusedClassificationMap.tif
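The -out fragment above fits into an invocation such as the following sketch (file names are illustrative; the parameter keys, including the confusion-matrix file list, follow the OTB 4.0 command-line interface):

```shell
# Dempster-Shafer fusion: one confusion matrix (*.CSV) per input map,
# given in the same order as the -il image list.
otbcli_FusionOfClassifications -il cmap1.tif cmap2.tif cmap3.tif \
                                   cmap4.tif cmap5.tif cmap6.tif \
                               -method dempstershafer \
                               -method.dempstershafer.cmfl cmat1.csv cmat2.csv \
                                   cmat3.csv cmat4.csv cmat5.csv cmat6.csv \
                               -nodatalabel 0 \
                               -undecidedlabel 10 \
                               -out DSFusedClassificationMap.tif
```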
Figure 4.15: From left to right: Original image, and fancy colored classified image obtained by a Dempster
Shafer fusion of the 6 classification maps represented in Fig. 4.13 (water: blue, roads: gray, vegetation: green,
buildings with red roofs: red, undecided: white).
In order to properly use the FusionOfClassifications application, some points should be considered.
First, the list_of_input_images and OutputFusedClassificationImage are single band la-
beled images, which means that the value of each pixel corresponds to the class label it belongs to,
and labels in each classification map must represent the same class. Secondly, the undecided label
value must be different from existing labels in the input images in order to avoid any ambiguity in
the interpretation of the OutputFusedClassificationImage.
Resulting classification maps can be regularized in order to smooth irregular classes. Such a
regularization process improves classification results by producing more homogeneous areas, which
are easier to handle.
Majority voting takes the most representative value of all the pixels identified by the structuring ele-
ment and then sets the output center pixel to this majority label value. The ball-shaped neighborhood
is identified by its radius expressed in pixels.
Handling ambiguity and unclassified pixels in the majority voting based regularization
Since the majority voting regularization may lead to non-unique majority labels in the neighborhood,
it is important to define which behaviour the filter must have in this case. For this purpose, a Boolean
parameter (called ip.suvbool) is used in the ClassificationMapRegularization application to choose
whether pixels with more than one majority class are set to Undecided (true), or keep their Original
labels (false = default value).
Moreover, it may happen that pixels in the input image do not belong to any of the considered class.
Such pixels are assumed to belong to the NoData class, the label of which is specified as an input
parameter for the regularization. Therefore, those NoData input pixels are invariant and keep their
NoData label in the output regularized image.
The ClassificationMapRegularization application has the following input parameters:
• -ip.suvbool a Boolean number used to choose whether pixels with more than one majority
class are set to Undecided (true), or to their Original labels (false = default value). Please note
that the Undecided value must be different from existing labels in the input image
• -ip.nodatalabel label for the NoData class. Such input pixels keep their NoData label in
the output image (default value = 0)
Example
Based on the classification map generated with the ColorMapping application presented in section
4.4.1 and illustrated in Fig. 4.12, Fig. 4.16 shows a regularization of a classification map composed
of 4 classes: water, roads, vegetation and buildings with red roofs. The radius of the ball-shaped
structuring element is equal to 3 pixels, which corresponds to a ball included in a 7 × 7 pixel square.
Pixels with more than one majority class keep their original labels.
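The regularization described above can be sketched as follows (file names are illustrative; the parameter keys follow the OTB 4.0 command-line interface):

```shell
# Regularization with a ball structuring element of radius 3;
# pixels with a non-unique majority keep their original label.
otbcli_ClassificationMapRegularization -io.in classification_map.tif \
                                       -ip.radius 3 \
                                       -ip.suvbool false \
                                       -ip.nodatalabel 0 \
                                       -io.out regularized_map.tif
```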
Figure 4.16: From left to right: Original image, fancy colored classified image and regularized classification
map with radius equal to 3 pixels.
As described in the OTB Software Guide, the term Feature Extraction refers to techniques aiming at
extracting added-value information from images. These extracted items, named features, can be local
statistical moments, edges, radiometric indices, or morphological and textural properties. For example,
such features can be used as input data for other image processing methods like segmentation and
classification.
This application computes the 4 local statistical moments on every pixel in the selected channel of
the input image, over a specified neighborhood. The output image is multi band with one statistical
moment (feature) per band. Thus, the 4 output features are the Mean, the Variance, the Skewness
and the Kurtosis. They are provided in this exact order in the output image.
The LocalStatisticExtraction application has the following input parameters:
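A possible invocation is sketched below (file names illustrative; the -in, -channel, -radius and -out keys are assumed to follow the OTB 4.0 command-line interface):

```shell
# The 4 local statistical moments (Mean, Variance, Skewness, Kurtosis)
# computed over a 3-pixel-radius neighborhood of channel 1.
otbcli_LocalStatisticExtraction -in input_image.tif \
                                -channel 1 \
                                -radius 3 \
                                -out statistical_moments.tif
```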
This application computes edge features on every pixel in the selected channel of the input image.
The EdgeExtraction application has the following input parameters:
• -out the output mono band image containing the edge features
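A possible invocation is sketched below (file names illustrative; the -filter choice shown is one assumed option of the OTB 4.0 command-line interface):

```shell
# Gradient-based edge features on channel 1 of the input image.
otbcli_EdgeExtraction -in input_image.tif \
                      -channel 1 \
                      -filter gradient \
                      -out edges.tif
```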
This application computes radiometric indices using the channels of the input image. The output is
a multi band image in which each channel is one of the selected indices.
The RadiometricIndices application has the following input parameters:
The available radiometric indices to be listed in -list, with their relevant channels in brackets, are:
The application can be used like this, which leads to an output image with 3 bands, containing respec-
tively the Vegetation:NDVI, Vegetation:RVI and Vegetation:IPVI radiometric indices in this exact order,
or like this, which leads to a single band output image with the Water:NDWI2 radiometric index:
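The two cases can be sketched as follows (channel indices and file names are illustrative; the -channels.* keys are assumed from the OTB 4.0 command-line interface):

```shell
# Three-band output: NDVI, RVI and IPVI, in this order.
otbcli_RadiometricIndices -in input_image.tif \
                          -channels.red 3 -channels.nir 4 \
                          -list Vegetation:NDVI Vegetation:RVI Vegetation:IPVI \
                          -out vegetation_indices.tif

# Single-band output: NDWI2.
otbcli_RadiometricIndices -in input_image.tif \
                          -channels.green 2 -channels.nir 4 \
                          -list Water:NDWI2 \
                          -out ndwi2.tif
```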
Morphological features can be highlighted by using image filters based on mathematical morphology
either on binary or gray scale images.
This application performs binary morphological operations (dilation, erosion, opening and closing)
on a mono band image with a specific structuring element (a ball or a cross) having one radius along
X and another one along Y. NB: the cross shaped structuring element has a fixed radius equal to 1
pixel in both X and Y directions.
The BinaryMorphologicalOperation application has the following input parameters:
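A possible invocation is sketched below (radii and file names illustrative; the structuring-element keys are assumed from the OTB 4.0 command-line interface):

```shell
# Morphological opening of a binary mono band image with a
# ball structuring element of radius 3 along X and Y.
otbcli_BinaryMorphologicalOperation -in input_image.tif \
                                    -channel 1 \
                                    -structype ball \
                                    -structype.ball.xradius 3 \
                                    -structype.ball.yradius 3 \
                                    -filter opening \
                                    -out opened.tif
```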
This application performs morphological operations (dilation, erosion, opening and closing) on a
gray scale mono band image with a specific structuring element (a ball or a cross) having one radius
along X and another one along Y. NB: the cross shaped structuring element has a fixed radius equal
to 1 pixel in both X and Y directions.
The GrayScaleMorphologicalOperation application has the following input parameters:
Texture features can be extracted with the help of image filters based on texture analysis methods
like Haralick and structural feature set (SFS).
This application computes Haralick, advanced and higher order texture features on every pixel in
the selected channel of the input image. The output image is multi band with a feature per band.
• -parameters.nbbin the number of bins per axis for histogram generation (default value is 8)
• -out the output multi band image containing the selected texture features (one feature per
band)
The available values for -texture with their relevant features are:
• -texture=simple: In this case, 8 local Haralick texture features will be processed. The 8
output image channels are: Energy, Entropy, Correlation, Inverse Difference Moment, Inertia,
Cluster Shade, Cluster Prominence and Haralick Correlation. They are provided in this exact
order in the output image. Thus, this application computes the following Haralick textures
over a sliding window with user-defined radius (where g(i, j) is the element in cell i, j of a
normalized Gray Level Co-occurrence Matrix (GLCM)):
"Energy" = f1 = ∑i,j g(i,j)²
"Entropy" = f2 = −∑i,j g(i,j) log2 g(i,j), or 0 if g(i,j) = 0
"Correlation" = f3 = ∑i,j (i−µ)(j−µ) g(i,j) / σ²
"Inverse Difference Moment" = f4 = ∑i,j g(i,j) / (1+(i−j)²)
"Inertia" = f5 = ∑i,j (i−j)² g(i,j) (sometimes called "contrast")
"Cluster Shade" = f6 = ∑i,j ((i−µ) + (j−µ))³ g(i,j)
"Cluster Prominence" = f7 = ∑i,j ((i−µ) + (j−µ))⁴ g(i,j)
"Haralick's Correlation" = f8 = (∑i,j (i·j) g(i,j) − µt²) / σt², where µt and σt are the mean and
standard deviation of the row (or column, due to symmetry) sums.
Above, µ = (weighted pixel average) = ∑i,j i · g(i, j) = ∑i,j j · g(i, j) (due to matrix symmetry),
and σ² = (weighted pixel variance) = ∑i,j (i − µ)² · g(i, j) = ∑i,j ( j − µ)² · g(i, j) (due to matrix
symmetry).
• -texture=advanced: In this case, 9 advanced texture features will be processed. The 9
output image channels are: Mean, Variance, Sum Average, Sum Variance, Sum Entropy, Dif-
ference of Entropies, Difference of Variances, IC1 and IC2. They are provided in this exact
order in the output image.
• -texture=higher: In this case, 11 local higher order statistics texture coefficients based on
the grey level run-length matrix will be processed. The 11 output image channels are: Short
Run Emphasis, Long Run Emphasis, Grey-Level Nonuniformity, Run Length Nonuniformity,
Run Percentage, Low Grey-Level Run Emphasis, High Grey-Level Run Emphasis, Short Run
Low Grey-Level Emphasis, Short Run High Grey-Level Emphasis, Long Run Low Grey-
Level Emphasis and Long Run High Grey-Level Emphasis. They are provided in this exact
order in the output image. Thus, this application computes the following texture coefficients
over a sliding window with user-defined radius (where p(i, j) is the element in cell i, j of a
normalized Run Length Matrix, nr is the total number of runs and np is the total number of
pixels):
"Short Run Emphasis" = SRE = (1/nr) ∑i,j p(i,j) / j²
"Long Run Emphasis" = LRE = (1/nr) ∑i,j p(i,j) · j²
"Grey-Level Nonuniformity" = GLN = (1/nr) ∑i (∑j p(i,j))²
"Run Length Nonuniformity" = RLN = (1/nr) ∑j (∑i p(i,j))²
"Run Percentage" = RP = nr / np
"Low Grey-Level Run Emphasis" = LGRE = (1/nr) ∑i,j p(i,j) / i²
-parameters.max 255
-out OutputImage
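The parameter fragments above belong to an invocation such as the following sketch (window radii and file names illustrative; the parameters.* keys are assumed from the OTB 4.0 command-line interface):

```shell
# Simple Haralick textures over a sliding window of radius 3;
# parameters.min/max bound the input dynamic used to build the GLCM.
otbcli_HaralickTextureExtraction -in input_image.tif \
                                 -channel 1 \
                                 -texture simple \
                                 -parameters.xrad 3 -parameters.yrad 3 \
                                 -parameters.min 0 \
                                 -parameters.max 255 \
                                 -out OutputImage
```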
This application computes Structural Feature Set textures on every pixel in the selected channel of
the input image. The output image is multi band with a feature per band. The 6 output texture
features are SFS’Length, SFS’Width, SFS’PSI, SFS’W-Mean, SFS’Ratio and SFS’SD. They are
provided in this exact order in the output image.
It is based on line direction estimation, described in the following publication: Xin Huang, Liangpei
Zhang and Pingxiang Li, "Classification and Extraction of Spatial Features in Urban Areas Using
High-Resolution Multispectral Imagery", IEEE Geoscience and Remote Sensing Letters, vol. 4,
no. 2, 2007, pp. 260-264.
The texture is computed for each pixel using its neighborhood. The user can set the spatial threshold,
which is the maximum line length, and the spectral threshold, which is the maximum difference
allowed between a pixel of the line and the center pixel of the current neighborhood. The adjustment
constant alpha and the ratio Maximum Consideration Number, which describes the shape contour
around the central pixel, are used to compute the w-mean value.
The SFSTextureExtraction application has the following input parameters:
• -channel the selected channel index in the input image to be processed (default value is 1)
• -parameters.spethre the spectral threshold (default value is 50)
• -parameters.spathre the spatial threshold (default value is 100 pixels)
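With the default thresholds recalled above, an invocation can be sketched as follows (file names illustrative; keys assumed from the OTB 4.0 command-line interface):

```shell
# SFS textures (Length, Width, PSI, W-Mean, Ratio, SD) on channel 1.
otbcli_SFSTextureExtraction -in input_image.tif \
                            -channel 1 \
                            -parameters.spethre 50 \
                            -parameters.spathre 100 \
                            -out SFSTextures.tif
```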
This section describes how to convert a pair of stereo images into elevation information.
The standard problem of terrain reconstruction with available OTB Applications contains the fol-
lowing steps:
The aim of this application is to generate resampled grids to transform images in epipolar geometry.
Epipolar geometry is the geometry of stereo vision (see here). The operation of stereo rectification
determines transformations to apply to each image such that pairs of conjugate epipolar lines become
collinear, parallel to one of the image axes and aligned. In this geometry, the objects present on a
given row of the left image are also located on the same line in the right image.
Applying this transformation reduces the problem of elevation (or stereo correspondence determina-
tion) to a 1-D problem. We have two images image1 and image2 over the same area (the stereo
pair) and we assume that we know the localization functions (forward and inverse) associated with
each of these images.
The forward function allows going from the image referential to the geographic referential:

(long, lat) = f_image1^forward (i, j, h)    (4.2)
where h is the elevation hypothesis, (i, j) are the pixel coordinates in image1 and (long,lat) are
geographic coordinates. As you can imagine, the inverse function allows to go from geographic
coordinates to the image geometry.
For the second image, the expression of the inverse function is:

(i, j) = f_image2^inverse (long, lat, h)    (4.3)
Using jointly the forward and inverse functions from the image pair, we can construct a co-
localization function f_image1→image2 between the position of a pixel in the first image and its
position in the second one:
f_image1→image2 (i_image1, j_image1, h) = f_image2^inverse ( f_image1^forward (i_image1, j_image1, h) )    (4.5)
The expression is not really important; what we need to understand is that, if we are able to determine
for a given pixel in image1 the corresponding pixel in image2, then, since we know the expression of
the co-localization function between both images, we can determine by identification the information
about the elevation (the variable h in the equation)!
We now have the mathematical basis to understand how 3-D information can be extracted by exami-
nation of the relative positions of objects in the two 2-D epipolar images.
The construction of the two epipolar grids is a little more complicated in the case of VHR optical
images. That is because most passive remote sensing satellites use a push-broom sensor, which
corresponds to a line of sensors arranged perpendicularly to the flight direction of the spacecraft.
This acquisition configuration implies a slightly different strategy for stereo-rectification (see here).
We will now explain how to use the StereoRectificationGridGenerator application to produce two
images which are deformation grids to resample the two images in epipolar geometry.
The application estimates the displacement to apply to each pixel in both input images to obtain
epipolar geometry. The application accepts a 'step' parameter to estimate displacements on a coarser
grid. Here we estimate the displacements every 10 pixels. This is because, in most cases with a
pair of VHR images and a small angle between them, this grid is very smooth. Moreover, the
implementation is not streamable and potentially uses a lot of memory. Therefore it is generally a
good idea to estimate the displacement grid at a coarser resolution.
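Such a grid generation can be sketched as follows (file names illustrative; the io.* and epi.step keys are assumed from the OTB 4.0 command-line interface):

```shell
# Estimate the left/right epipolar displacement grids every 10 pixels.
otbcli_StereoRectificationGridGenerator -io.inleft image1.tif \
                                        -io.inright image2.tif \
                                        -epi.step 10 \
                                        -io.outleft grid_image1.tif \
                                        -io.outright grid_image2.tif
```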
The application outputs the size of the output images in epipolar geometry. Note these values, we
will use them in the next step to resample the two images in epipolar geometry.
In our case, we have:
The epi.baseline parameter provides the mean value (in pixels per meter) of the baseline to sensor
altitude ratio. It can be used to convert disparities to physical elevation, since a disparity of this
value corresponds to an elevation offset of one meter with respect to the mean elevation.
We can now move forward to the resampling in epipolar geometry.
As you can see, we set sizex and sizey parameters using output values given by the StereoRectifica-
tionGridGenerator application to set the size of the output epipolar images.
We obtain two images in epipolar geometry, as shown in Figure 4.17. Note that the application allows
resampling only a part of the image, using the -out.ulx and -out.uly parameters.
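The resampling step can be sketched as follows (sizes and file names illustrative; the grid-based resampling keys are assumed from the OTB 4.0 command-line interface):

```shell
# Resample image1 into epipolar geometry (repeat for image2 with its
# own grid); out.sizex/out.sizey come from the grid generator output.
otbcli_GridBasedImageResampling -io.in image1.tif \
                                -io.out image1_epipolar.tif \
                                -grid.in grid_image1.tif \
                                -out.sizex 4000 \
                                -out.sizey 4000
```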
Figure 4.17: Extract of resampled image1 and image2 in epipolar geometry over the Pyramids of Cheops.
© CNES 2012
An almost complete spectrum of stereo correspondence algorithms has been published and it is
still augmented at a significant rate! See for example . The Orfeo ToolBox implements different
strategies for block matching:
Another important parameter (mandatory in the application!) is the range of disparities. In theory,
the block matching could perform a blind exploration and search over an infinite range of disparities
between the stereo pair. We now need to evaluate a range of disparities where the block matching
will be performed (in the general case, from the deepest point on Earth, the Challenger Deep, to the
summit of Everest!).
We deliberately exaggerated, but you can imagine that without a narrower range the block matching
algorithm could take a lot of time. That is why these parameters are mandatory for the application,
and as a consequence we need to estimate them manually. This is pretty simple using the two
epipolar images.
In our case, we take one point on a flat area. The image coordinate in image1 is [1970, 1525] and in
image2 is [1970, 1526]. We then select a second point on a higher region (in our case a point near the
top of the Pyramid of Cheops!). The image coordinate of this pixel in image1 is [1661, 1299] and in
image2 is [1633, 1300]. So, for the horizontal exploration, we must set the minimum value lower
than −30 (the convention for the sign of the disparity range is from image1 to image2).
Note that this estimation can be simplified using an external DEM in the StereoRectificationGrid-
Generator application. Regarding the vertical disparity: in the first step we said that we reduced the
problem of 3-D extraction to a 1-D problem, but this is not completely true in general cases. There
might be small disparities in the vertical direction, due to parallax errors (i.e. epipolar lines exhibit
a small shift in the vertical direction, around 1 pixel). In fact, the exploration is typically smaller
along the vertical direction of disparities than along the horizontal one. You can also estimate them
on the epipolar pair (in our case we use a range of −1 to 1).
Once again, take care of the sign of the minimum and maximum disparities (always from image1 to
image2).
The command line for the BlockMatching application is:
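A sketch of the matching step, using the disparity ranges estimated above (the exact bounds and file names are illustrative; the bm.* and io.* keys are assumed from the OTB 4.0 command-line interface):

```shell
# NCC block matching over a 3-pixel radius; horizontal exploration
# below -30 as estimated above, vertical exploration in [-1, 1].
otbcli_BlockMatching -io.inleft image1_epipolar.tif \
                     -io.inright image2_epipolar.tif \
                     -bm.metric ncc \
                     -bm.radius 3 \
                     -bm.minhd -45 -bm.maxhd 5 \
                     -bm.minvd -1  -bm.maxvd 1 \
                     -io.out disparity_map.tif
```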
By default, the application creates a two-band image: the horizontal and vertical disparities.
The BlockMatching application gives access to a lot of other powerful functionalities to improve the
quality of the output disparity map.
Here are a few of these functionalities:
• -io.outmetric: if the optimal metric values image is activated, it will be concatenated to the
output image (which will then have three bands: horizontal disparity, vertical disparity and
metric value)
• -bm.subpixel: Perform sub-pixel estimation of disparities
• -mask.inleft and -mask.inright: you can specify a no-data value which will discard pixels with
this value (for example, the epipolar geometry can generate large parts of images with black
pixels). This mask can be easily generated using the BandMath application:
-out image2_epipolar_mask.tif
-exp "if(im1b1<=0,0,255)"
• -mask.variancet: The block matching algorithm has difficulties finding matches on uniform
areas. We can use the variance threshold to discard those regions and speed up computation
time.
• -bm.medianfilter.radius 5 and -bm.medianfilter.incoherence 2.0: Applies a median filter to
the disparity map. The median filter belongs to the family of nonlinear filters. It is used to
smooth an image without being biased by outliers or shot noise. The radius corresponds to
the neighbourhood where the median value is computed. A detection of incoherence between
the input disparity map and the median-filtered one is performed (a pixel corresponds to an
incoherence if the absolute value of the difference between the pixel value in the disparity
map and in the median image is higher than the incoherence threshold, whose default value is
1). Both parameters must be defined in the application to activate the filter.
Of course all these parameters can be combined to improve the disparity map.
Using the previous application, we evaluated disparities between images. The next (and last!) step
is to transform the disparity map into elevation information, producing an elevation map. It uses as
input the disparity maps (horizontal and vertical) to produce a Digital Surface Model (DSM) with
regular sampling. The elevation values are computed from the triangulation of the "left-right" pairs
of matched pixels. When several elevations are available in a DSM cell, the highest one is kept.
First, an important point is that it is often a good idea to rework the disparity map given by the
BlockMatching application to only keep relevant disparities. For this purpose, we can use the output
optimal metric image and filter disparities with respect to this value.
For example, if we used Normalized Cross-Correlation (NCC), we can keep only disparities where
the optimal metric value is greater than 0.9. Disparities below this value can be considered inaccu-
rate and will not be used to compute elevation information (the -io.mask parameter can be used for
this purpose).
This filtering can be easily done with OTB Applications.
We first use the BandMath application to filter disparities according to their optimal metric value:
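A sketch of this filtering, assuming the third band of the disparity map holds the NCC metric (threshold, band index and file names illustrative):

```shell
# Build a validity mask: keep pixels whose correlation (band 3) > 0.9.
otbcli_BandMath -il disparity_map.tif \
                -out disparity_mask.tif uint8 \
                -exp "if(im1b3>0.9,255,0)"
```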
Now, we can use the DisparityMapToElevationMap application to compute the elevation map from
the filtered disparity maps.
It produces the elevation map projected in WGS84 (EPSG code: 4326) over the ground area covered
by the stereo pair. Pixel values are expressed in meters.
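The triangulation step can be sketched as follows (elevation bounds, sampling step and file names are illustrative; the io.*, hmin, hmax and step keys are assumed from the OTB 4.0 command-line interface):

```shell
# DSM from the filtered disparity map; hmin/hmax bound the expected
# elevation range, step is the output ground sampling (in meters).
otbcli_DisparityMapToElevationMap -io.in disparity_map.tif \
                                  -io.left image1.tif -io.right image2.tif \
                                  -io.lgrid grid_image1.tif \
                                  -io.rgrid grid_image2.tif \
                                  -io.mask disparity_mask.tif \
                                  -hmin 10 -hmax 400 -step 2.5 \
                                  -io.out elevation_map.tif
```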
This is it! Figure 4.19 shows the output DEM from the Cheops pair.
4.6.5 One application to rule them all in a multi-stereo framework scheme
An application has been added to fuse one or multiple stereo reconstructions using an all-in-one
approach: StereoFramework. It computes the DSM from one or several stereo pairs. First of all, the
user has to choose the input data and define stereo couples using the -input.co string parameter. This
parameter uses the following formatting convention: "index0 index1, index2 index3, ...", which will
create a first couple with images index0 and index1, a second with images index2 and index3, and so
on. If left blank, images are processed by pairs (which is equivalent to using "0 1,2 3,4 5" ...).
In addition to the usual elevation and projection parameters, the main parameters have been split
into the groups detailed below:
Output: output parameters (DSM resolution, NoData value, cell fusion method):
• -output.map : output projection map selection.
• -output.res : Spatial Sampling Distance of the output DSM in meters
• -output.nodata : DSM empty cells are filled with this float value (-32768 by default)
• -output.fusionmethod : Choice of fusion strategy in each DSM cell (max, min, mean,
acc)
• -output.out : Output DSM
• -output.mode : Output DSM extent choice
Stereorect : Direct and inverse stereorectification grid subsampling parameters
• -stereorect.fwdgridstep : Step of the direct deformation grid (in pixels)
• -stereorect.invgridssrate : Sub-sampling of the inverse epipolar grid
BM : Block Matching parameters.
• -bm.metric : Block-matching metric choice (robust SSD, SSD, NCC, Lp Norm)
• -bm.radius : Radius of blocks for matching filter (in pixels, 2 by default)
• -bm.minhoffset : Minimum altitude below the selected elevation source (in meters,
-20.0 by default)
• -bm.maxhoffset : Maximum altitude above the selected elevation source (in meters,
20.0 by default)
The parameters -bm.minhoffset and -bm.maxhoffset are used inside the application to derive
the minimum and maximum horizontal disparity exploration, so they have a critical impact on com-
putation time. It is advised to choose an elevation source that is not too far from the DSM you want
to produce (for instance, an SRTM elevation model). That way, the altitude from your elevation
source will already be taken into account in the epipolar geometry, and the disparities will reveal
only the elevation offsets (such as buildings). This allows you to use a smaller exploration range
along the elevation axis, leading to a smaller exploration along horizontal disparities and faster
computation.
-stereorect.fwdgridstep and -stereorect.invgridssrate also have a deep impact on compu-
tation time, so they have to be chosen carefully when processing large images.
To reduce computation time it can be useful to crop all sensor images to the same extent. The
easiest way to do that is to choose one image as a reference, and then apply the ExtractROI
application to the other sensor images using the fit mode (-mode.fit option).
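The whole multi-stereo pipeline described above can be sketched in one call (paths and values are illustrative; the input.*, elev.dem and output.* keys are assumed from the OTB 4.0 command-line interface):

```shell
# Two-image DSM at 2.5 m ground sampling, with an SRTM directory as
# the initial elevation source for the disparity exploration.
otbcli_StereoFramework -input.il image1.tif image2.tif \
                       -input.co "0 1" \
                       -elev.dem /path/to/srtm_tiles \
                       -output.res 2.5 \
                       -output.out dsm.tif
```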
The following algorithms are used in the application, for each sensor pair:
• Compute the epipolar deformation grids from the stereo pair (direct and inverse)
• Resample into epipolar geometry with BCO interpolator
• Create masks for each epipolar image: remove black borders and resample input masks
• Compute horizontal disparities with a block matching algorithm
Then all 3D maps are fused to produce the DSM with the desired geographic or cartographic projec-
tion and a parametrizable extent.
CHAPTER
FIVE
APPLICATIONS REFERENCE DOCUMENTATION
This chapter is the reference documentation for applications delivered with Orfeo ToolBox. It
provides a detailed description of each application's behaviour and parameters, as well as Python
and bash snippets to use these applications. For a general introduction to the applications delivered
with Orfeo ToolBox, please read chapter 1, page 1.
Detailed description
This application allows mapping a label image to an 8-bit RGB image (in both directions) using
different methods.
- The custom method allows the use of a custom look-up table. The look-up table is loaded from a
text file where each line describes an entry. The typical use of this method is to colorize a
classification map.
- The continuous method allows mapping a range of values in a scalar input image to a colored
image using a continuous look-up table, in order to enhance image interpretation. Several look-up
tables with different color ranges can be chosen.
-The optimal method computes an optimal look-up table. When processing a segmentation label
image (label to color), the color difference between adjacent segmented regions is maximized. When
processing an unknown color image (color to label), all the present colors are mapped to a continuous
label list.
- The support image method uses a color support image to associate an average color to each region.
Parameters
This section describes in detail the parameters available for this application. Table 5.1, page 93
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is ColorMapping.
Operation Selection of the operation to execute (default is label to color). Available choices are:
• Label to color
• Color to label
– Not Found Label: Label to use for unknown colors.
Color mapping method Selection of color mapping methods and their parameters. Available
choices are:
• Color mapping with custom labeled look-up table: Apply a user-defined look-up table to a
labeled image. Look-up table is loaded from a text file.
– Look-up table file: An ASCII file containing the look-up table
with one color per line
(for instance the line ’1 255 0 0’ means that all pixels with label 1 will be replaced by
RGB color 255 0 0)
Lines beginning with a # are ignored
• Color mapping with continuous look-up table: Apply a continuous look-up table to a range
of input values.
– Look-up tables: Available look-up tables.
– Mapping range lower value: Set the lower input value of the mapping range.
– Mapping range higher value: Set the higher input value of the mapping range.
• Compute an optimized look-up table: [label to color] Computes an optimal look-up table
such that neighboring labels in a segmentation are mapped to highly contrasted colors. [color
to label] Searches all the colors present in the image to compute a continuous label list.
– Background label: Value of the background label
• Color mapping with look-up table calculated on support image
– Support Image: Support image filename. For each label, the LUT is calculated from
the mean pixel value in the support image, over the corresponding labeled areas. First of
all, the support image is normalized with extrema rejection
– NoData value: NoData value for each channel of the support image, which will not be
handled in the LUT estimation. If NOT checked, ALL the pixel values of the support
image will be handled in the LUT estimation.
– lower quantile: lower quantile for image normalization
– upper quantile: upper quantile for image normalization
Load otb application from xml file Load otb application from xml file
Save otb application to xml file Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
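An equivalent command-line invocation of the same application can be sketched as follows (custom-LUT method; file names illustrative, parameter keys assumed from the OTB 4.0 command-line interface):

```shell
# Colorize a labeled classification map with a user-defined LUT file,
# where each line of lut.txt is "<label> <R> <G> <B>".
otbcli_ColorMapping -in labeled_image.tif \
                    -method custom \
                    -method.custom.lut lut.txt \
                    -out colored_image.tif
```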
Limitations
The segmentation optimal method does not support streaming, and thus does not support large
images. The operation color to label is not implemented for the continuous LUT and support
image LUT methods. ColorMapping using a support image is not threaded.
Authors
See also
• ImageSVMClassifier
Concatenate a list of images of the same size into a single multi-channel one.
Detailed description
This application performs image channel concatenation. It walks the input image list (single-
or multi-channel) and generates a single multi-channel image. The channel order is that of the
list.
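A minimal command-line sketch (file names illustrative):

```shell
# Stack three single-band images into one 3-channel image,
# in the order given by -il.
otbcli_ConcatenateImages -il band1.tif band2.tif band3.tif \
                         -out multichannel.tif
```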
Parameters
This section describes in detail the parameters available for this application. Table 5.2, page 96
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is ConcatenateImages.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Authors
See also
Convert an image to a different format, optionally rescaling the data and/or changing the pixel type.
Detailed description
This application performs an image pixel type conversion (short, ushort, uchar, int, uint, float and
double types are handled). The output image is written in the specified format (i.e. the one that
corresponds to the given extension).
The conversion can include a rescaling using the image's 2-percent minimum and maximum values.
The rescaling can be linear or log2.
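A command-line sketch of such a conversion (file names illustrative; the output pixel type is given after the output file name, as in the OTB 4.0 command-line interface):

```shell
# Convert to an 8-bit PNG with linear rescaling of the dynamic.
otbcli_Convert -in input_image.tif \
               -type linear \
               -out converted.png uint8
```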
Parameters
This section describes in detail the parameters available for this application. Table 5.3, page 98
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is Convert.
Rescale type Transfer function for the rescaling. Available choices are:
• None
• Linear
– Gamma correction factor: Gamma correction factor
• Log2
Input mask The masked pixels won’t be used to adapt the dynamic (the mask must have the same
dimensions as the input image)
Histogram Cutting Parameters Parameters to cut the histogram edges before rescaling
• High Cut Quantile: Quantiles to cut from histogram high values before computing min/max
rescaling (in percent, 2 by default)
• Low Cut Quantile: Quantiles to cut from histogram low values before computing min/max
rescaling (in percent, 2 by default)
Load otb application from xml file
5.1. Image Manipulation 99
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
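A minimal sketch, with hypothetical file names and assumed parameter keys (`in`, `out`, and `type` for the rescale transfer function); without an OTB install it prints the equivalent command line:

```python
# Minimal sketch for the Convert application.
# File names are hypothetical; "in", "out" and "type" are assumed parameter keys.
params = {"in": "input.tif", "out": "converted.png", "type": "linear"}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("Convert")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    app.SetParameterString("type", params["type"])  # rescale transfer function
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_Convert -in %(in)s -out %(out)s -type %(type)s" % params)
```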
Limitations
None
Authors
See also
• Rescale
Detailed description
This application selects the appropriate SRTM tiles that cover a list of images and builds the
list of the required tiles. Two modes are available: the first downloads the tiles from the USGS
SRTM3 website (http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/), the second lists the tiles found in
a local directory. In both cases, you need to indicate either the directory into which the tiles will be
downloaded or the location of the local SRTM files.
Parameters
This section describes in detail the parameters available for this application. Table 5.4, page 100
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is DownloadSRTMTiles.
Figure 5.4: Parameters table for Download or list SRTM tiles related to a set of images.
Input images list The list of images on which you want to determine corresponding SRTM tiles.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
DownloadSRTMTiles.SetParameterString("mode.list.indir", "/home/user/srtm_dir/")
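Only one line of the original snippet survives above; a fuller sketch around it, assuming a hypothetical image name and the `il` and `mode` parameter keys (the `mode.list.indir` key appears in the surviving line):

```python
# Minimal sketch for the DownloadSRTMTiles application in "list" mode.
# Image name and SRTM directory are hypothetical; "il" and "mode" are assumed keys.
params = {
    "il": ["QB_Toulouse_Ortho_XS.tif"],       # images whose SRTM coverage is wanted
    "mode": "list",                           # list tiles found in a local directory
    "mode.list.indir": "/home/user/srtm_dir/",
}
try:
    import otbApplication
    DownloadSRTMTiles = otbApplication.Registry.CreateApplication("DownloadSRTMTiles")
    DownloadSRTMTiles.SetParameterStringList("il", params["il"])
    DownloadSRTMTiles.SetParameterString("mode", params["mode"])
    DownloadSRTMTiles.SetParameterString("mode.list.indir", params["mode.list.indir"])
    DownloadSRTMTiles.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input files), show the equivalent command line.
    print("otbcli_DownloadSRTMTiles -il %s -mode list -mode.list.indir %s"
          % (" ".join(params["il"]), params["mode.list.indir"]))
```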
Limitations
None
Authors
Detailed description
This application extracts a Region Of Interest with a user-defined size, or based on a reference image.
Parameters
This section describes in detail the parameters available for this application. Table 5.5, page 102
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is ExtractROI.
• Standard: In standard mode, the extraction is done according to the coordinates entered by the user
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
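A minimal sketch for the standard mode, with hypothetical file names and assumed parameter keys (`startx`, `starty`, `sizex`, `sizey` for the region coordinates):

```python
# Minimal sketch for the ExtractROI application in standard mode.
# File names are hypothetical; the parameter keys are assumed.
params = {"in": "input.tif", "out": "roi.tif",
          "startx": 40, "starty": 40, "sizex": 512, "sizey": 512}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("ExtractROI")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    for key in ("startx", "starty", "sizex", "sizey"):
        app.SetParameterInt(key, params[key])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_ExtractROI -in input.tif -out roi.tif "
          "-startx 40 -starty 40 -sizex 512 -sizey 512")
```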
Limitations
None
Authors
Detailed description
This application builds a multi-resolution pyramid of the input image. The user can specify the number
of levels of the pyramid and the subsampling factor. To speed up the process, you can use the fast
scheme option.
Parameters
This section describes in detail the parameters available for this application. Table 5.6, page 105
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages.
• Input Image:
• Output Image: will be used to get the prefix and the extension of the images to write
• Available RAM (Mb): Available memory for processing (in MB)
• Number Of Levels: Number of levels in the pyramid (default is 1).
• Subsampling factor: Subsampling factor between each level of the pyramid (default is 2).
• Variance factor: Variance factor used in smoothing. It is multiplied by the subsampling factor
of each level in the pyramid (default is 0.6).
• Use Fast Scheme: If used, this option speeds up computation by iteratively subsampling the
previous level of the pyramid instead of processing the full input.
• Load otb application from xml file
• Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.7, page 107
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is Quicklook.
• Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
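A minimal sketch, with hypothetical file names and assumed `in`/`out` parameter keys:

```python
# Minimal sketch for the Quicklook application.
# File names are hypothetical; "in" and "out" are assumed parameter keys.
params = {"in": "very_large_image.tif", "out": "quicklook.tif"}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("Quicklook")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_Quicklook -in %(in)s -out %(out)s" % params)
```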
Limitations
This application does not yet provide an optimal way to decode coarser resolution levels from
JPEG2000 images (as Monteverdi does).
Trying to subsample huge JPEG2000 images with this application will currently lead to poor
performance.
Authors
Detailed description
Display information about the input image, such as: image size, origin, spacing, metadata, projections...
Parameters
This section describes in detail the parameters available for this application. Table 5.8, page 110
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is ReadImageInfo.
Display the OSSIM keywordlist Output the OSSIM keyword list. It contains metadata information
(sensor model, geometry, ...). The information is stored as a keyword list (key/value pairs).
Write the OSSIM keywordlist to a geom file This option extracts the OSSIM keyword list
of the image into a geom file.
Default RGB Display This group of parameters gives access to the default RGB composition.
Ground Control Points information This group of parameters gives access to the GCP
information.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
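A minimal sketch, with a hypothetical file name and an assumed `in` parameter key:

```python
# Minimal sketch for the ReadImageInfo application.
# File name is hypothetical; "in" is an assumed parameter key.
params = {"in": "input.tif"}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("ReadImageInfo")
    app.SetParameterString("in", params["in"])
    app.Execute()  # the information is exposed as output parameters
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_ReadImageInfo -in %(in)s" % params)
```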
Limitations
None
Authors
Detailed description
This application scales the given image pixel intensity between two given values. By default, the min
(resp. max) value is set to 0 (resp. 255).
Parameters
This section describes in detail the parameters available for this application. Table 5.9, page 114
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is Rescale.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
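A minimal sketch, with hypothetical file names and assumed parameter keys (`outmin`/`outmax` for the output range):

```python
# Minimal sketch for the Rescale application.
# File names are hypothetical; "outmin"/"outmax" are assumed parameter keys.
params = {"in": "input.tif", "out": "rescaled.tif", "outmin": 0.0, "outmax": 255.0}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("Rescale")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    app.SetParameterFloat("outmin", params["outmin"])
    app.SetParameterFloat("outmax", params["outmax"])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_Rescale -in %(in)s -out %(out)s -outmin %(outmin)s "
          "-outmax %(outmax)s" % params)
```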
Limitations
None
Authors
Detailed description
This application splits an N-band image into N mono-band images. The output image filenames are
generated from the output parameter. Thus, if the input image has 2 channels and the user has set
the output to outimage.tif, the generated images will be outimage_0.tif and outimage_1.tif.
Parameters
This section describes in detail the parameters available for this application. Table 5.10, page 116
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is SplitImage.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
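A minimal sketch, with hypothetical file names and assumed `in`/`out` parameter keys:

```python
# Minimal sketch for the SplitImage application.
# File names are hypothetical; "in"/"out" are assumed parameter keys.
params = {"in": "multiband.tif", "out": "outimage.tif"}  # yields outimage_0.tif, ...
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("SplitImage")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_SplitImage -in %(in)s -out %(out)s" % params)
```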
Limitations
None
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.11, page 117
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is TileFusion.
• Input Tile Images: Input tiles to concatenate (in lexicographic order: (0,0) (1,0) (0,1) (1,1)).
• Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
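A minimal sketch for a 2x2 tile set, with hypothetical file names and assumed parameter keys (`il`, `cols`, `rows`, `out`):

```python
# Minimal sketch for the TileFusion application on a 2x2 tile set.
# File names are hypothetical; "il", "cols", "rows" are assumed parameter keys.
params = {
    "il": ["tile_00.tif", "tile_10.tif", "tile_01.tif", "tile_11.tif"],
    "cols": 2, "rows": 2, "out": "fused.tif",
}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("TileFusion")
    app.SetParameterStringList("il", params["il"])
    app.SetParameterInt("cols", params["cols"])
    app.SetParameterInt("rows", params["rows"])
    app.SetParameterString("out", params["out"])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input tiles), show the equivalent command line.
    print("otbcli_TileFusion -il %s -cols 2 -rows 2 -out %s"
          % (" ".join(params["il"]), params["out"]))
```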
Limitations
None
Authors
5.2.1 Concatenate
Concatenate VectorDatas
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.12, page 119
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
5.2.2 Rasterization
Detailed description
This application reprojects and rasterizes a vector dataset. The grid of the rasterized output
can be set by using a reference image, or by setting all parameters (origin, size, spacing) by hand.
In the latter case, at least the spacing (ground sampling distance) is needed (the other parameters are
computed automatically). The rasterized output can also be in a different projection reference system
than the input dataset.
Two rasterization modes are available in the application. The first is the binary mode: it renders
all pixels belonging to a geometry of the input dataset in the foreground color, while rendering
the others in the background color. The second one renders pixels belonging to a geometry with
respect to an attribute of this geometry. The field of the attribute to render can be set by the user. In
the second mode, the background value is still used for unassociated pixels.
Parameters
This section describes in detail the parameters available for this application. Table 5.13, page 121
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is Rasterization.
Input reference image A reference image from which to import output grid and projection refer-
ence system information.
Output size x Output size along x axis (useless if support image is given)
Output size y Output size along y axis (useless if support image is given)
Output EPSG code EPSG code for the output projection reference system (EPSG 4326 for
WGS84, 32631 for UTM31N...,useless if support image is given)
Spacing (GSD) x Spacing (ground sampling distance) along x axis (useless if support image is
given)
Spacing (GSD) y Spacing (ground sampling distance) along y axis (useless if support image is
given)
Background value Default value for pixels not belonging to any geometry
• Binary mode: In this mode, pixels within a geometry will hold the user-defined foreground
value
– Foreground value: Value for pixels inside a geometry
• Attribute burning mode: In this mode, pixels within a geometry will hold the value of a
user-defined field extracted from this geometry.
– The attribute field to burn: Name of the attribute field to burn
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
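A minimal sketch for the binary mode, with hypothetical file names; the parameter keys (`spx`, `spy`, `background`, `mode`, `mode.binary.foreground`) are assumptions based on the parameter names above:

```python
# Minimal sketch for the Rasterization application in binary mode.
# File names are hypothetical; the parameter keys are assumed from the
# parameter names documented above.
params = {
    "in": "vectors.shp", "out": "rasterized.tif",
    "spx": 1.0, "spy": 1.0,          # ground sampling distance along x / y
    "background": 0.0,               # value for pixels outside any geometry
    "mode": "binary", "mode.binary.foreground": 255.0,
}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("Rasterization")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    app.SetParameterString("mode", params["mode"])
    for key in ("spx", "spy", "background", "mode.binary.foreground"):
        app.SetParameterFloat(key, params[key])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input files), show the equivalent command line.
    print("otbcli_Rasterization -in %(in)s -out %(out)s -spx %(spx)s "
          "-spy %(spy)s -mode binary" % params)
```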
Limitations
For now, support for input datasets with multiple layers having different projection reference
systems is limited.
Authors
See also
Perform an extract ROI on the input vector data according to the input image extent
Detailed description
This application extracts the vector data features belonging to a region specified by the support
image envelope
Parameters
This section describes in detail the parameters available for this application. Table 5.14, page 124
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is VectorDataExtractROIApplication.
Input and output data Group containing input and output parameters
Elevation management This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool
to list or download the tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Load otb application from xml file
5.2. Vector Data Manipulation 125
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
This application reprojects a vector dataset using the projection reference of a support image, or a
user-specified map projection
Detailed description
This application reprojects a vector dataset using the projection reference of a support image, or a user-
given map projection.
If given, the image keyword list can be added to the reprojected vector data.
Parameters
This section describes in detail the parameters available for this application. Table 5.15, page 126
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is VectorDataReprojection.
Input data
Output data
Elevation management This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool
to list or download the tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
VectorDataReprojection.SetParameterString("out.proj.image.in", "ROI_QB_MUL_1.tif")
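Only one line of the original snippet survives above; a fuller sketch around it, with hypothetical file names (`in.vd` and `out.vd` are assumed parameter keys, while `out.proj.image.in` appears in the surviving line):

```python
# Minimal sketch for the VectorDataReprojection application, taking the
# target projection from a support image. File names are hypothetical;
# "in.vd" and "out.vd" are assumed parameter keys.
params = {
    "in.vd": "vectors.shp",               # vector data to reproject
    "out.proj": "image",                  # take the projection from a support image
    "out.proj.image.in": "ROI_QB_MUL_1.tif",
    "out.vd": "reprojected.shp",
}
try:
    import otbApplication
    VectorDataReprojection = otbApplication.Registry.CreateApplication(
        "VectorDataReprojection")
    for key, value in params.items():
        VectorDataReprojection.SetParameterString(key, value)
    VectorDataReprojection.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input files), show the equivalent command line.
    print("otbcli_VectorDataReprojection -in.vd %s -out.proj image "
          "-out.proj.image.in %s -out.vd %s"
          % (params["in.vd"], params["out.proj.image.in"], params["out.vd"]))
```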
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.16, page 129
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is VectorDataSetField.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Authors
Detailed description
This application applies a transformation to an input vector dataset, transforming each vertex in the
vector data. The transformation supports translation, rotation and scaling, and can be centered
or not.
Parameters
This section describes in detail the parameters available for this application. Table 5.17, page 131
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is VectorDataTransform.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
5.3 Calibration
Detailed description
This application converts pixel values from DN (Digital Numbers) to physically interpretable
and comparable values. Calibrated values are called surface reflectivity; they lie in
the range [0, 1].
The first level is called Top Of Atmosphere (TOA) reflectivity. It takes into account the sensor gain,
the sensor spectral response and the solar illumination.
The second level is called Top Of Canopy (TOC) reflectivity. In addition to the sensor gain and solar
illumination, it takes into account the optical thickness of the atmosphere, the atmospheric pressure,
the water vapor amount, the ozone amount, as well as the composition and amount of aerosols.
It is also possible to provide an AERONET file containing atmospheric parameters (versions 1
and 2 of the Aeronet file format are supported).
Parameters
This section describes in detail the parameters available for this application. Table 5.18, page 134
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is OpticalCalibration.
Clamp of reflectivity values between [0, 100] Clamping in the range [0, 100]. It can be useful to
preserve areas with specular reflectance.
• Aerosol Model:
Available choices are:
– No Aerosol Model
– Continental
– Maritime
– Urban
– Desertic
• Ozone Amount: Ozone Amount
• Water Vapor Amount: Water Vapor Amount (in saturation fraction of water)
• Atmospheric Pressure: Atmospheric Pressure (in hPa)
• Aerosol Optical Thickness: Aerosol Optical Thickness
• Aeronet File: Aeronet file containing atmospheric parameters
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
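A minimal sketch for a TOA calibration, with hypothetical file names; the `level` key and its `toa`/`toc` choice values are assumptions based on the levels described above:

```python
# Minimal sketch for the OpticalCalibration application (TOA level).
# File names are hypothetical; "level" with value "toa" is an assumed key/choice.
params = {"in": "raw_image.tif", "out": "toa_image.tif", "level": "toa"}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("OpticalCalibration")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])
    app.SetParameterString("level", params["level"])  # "toa" or "toc"
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_OpticalCalibration -in %(in)s -out %(out)s -level %(level)s"
          % params)
```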
Limitations
None
Authors
See also
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.19, page 137
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is SarRadiometricCalibration.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
5.4 Geometry
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.20, page 138
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is BundleToPerfectSensor.
Elevation management This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool
to list or download the tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Spacing of the deformation field Spacing of the deformation field. Default is 10 times the PAN
image spacing.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
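A minimal sketch, with hypothetical file names; `inp` (panchromatic) and `inxs` (multispectral) are assumed parameter keys:

```python
# Minimal sketch for the BundleToPerfectSensor pan-sharpening application.
# File names are hypothetical; "inp" (PAN) and "inxs" (XS) are assumed keys.
params = {"inp": "pan.tif", "inxs": "xs.tif", "out": "pansharpened.tif"}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("BundleToPerfectSensor")
    app.SetParameterString("inp", params["inp"])    # panchromatic image
    app.SetParameterString("inxs", params["inxs"])  # multispectral image
    app.SetParameterString("out", params["out"])
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input files), show the equivalent command line.
    print("otbcli_BundleToPerfectSensor -inp %(inp)s -inxs %(inxs)s -out %(out)s"
          % params)
```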
Limitations
None
Authors
Detailed description
This application computes geographic coordinates from cartographic ones. The user has
to give the X and Y coordinates and the cartographic projection (UTM/LAMBERT/LAM-
BERT2/LAMBERT93/SINUS/ECKERT4/TRANSMERCATOR/MOLLWEID/SVY21).
Parameters
This section describes in detail the parameters available for this application. Table 5.21, page 141
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is ConvertCartoToGeoPoint.
Output Cartographic Map Projection Parameters of the output map projection to be used. Available
choices are:
• EPSG Code: This code is a generic way of identifying map projections and allows specifying
a large number of them. See www.spatialreference.org to find which EPSG code is associated
with your projection;
– EPSG Code: See www.spatialreference.org to find which EPSG code is associated with
your projection
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
This application converts a sensor point of an input image to a geographic point using the forward
sensor model of the input image.
Parameters
This section describes in detail the parameters available for this application. Table 5.22, page 143
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is ConvertSensorToGeoPoint.
Figure 5.22: Parameters table for Convert Sensor Point To Geographic Point.
Point Coordinates
Geographic Coordinates
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.23, page 146
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is GeneratePlyFile.
Output Cartographic Map Projection Parameters of the output map projection to be used. Available
choices are:
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Authors
Detailed description
This application generates an RPC sensor model from a list of Ground Control Points. At least
20 points are required for estimation without elevation support, and 40 points for estimation with
elevation support. Elevation support is automatically deactivated if an insufficient number of
points is provided. The application can optionally output a file containing accuracy statistics for
each point, and a vector file containing segments representing the point residues. The map projection
parameter defines a map projection in which the accuracy is evaluated.
Parameters
This section describes in detail the parameters available for this application. Table 5.24, page 148
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is GenerateRPCSensorModel.
Output geom file Geom file containing the generated RPC sensor model
Input file containing tie points Input file containing tie points. Points are stored in the following
format: row col lon lat. Lines beginning with # are ignored.
Output file containing output precision statistics Output file containing the following info:
ref_lon ref_lat elevation predicted_lon predicted_lat x_error_ref(meters) y_error_ref(meters)
global_error_ref(meters) x_error(meters) y_error(meters) overall_error(meters)
Output vector file with residues File containing segments representing residues
Output Cartographic Map Projection Parameters of the output map projection to be used. Available
choices are:
– Zone number: The zone number ranges from 1 to 60 and defines the Transverse
Mercator projection (along with the hemisphere)
– Northern Hemisphere: The Transverse Mercator projections are defined by their zone
number as well as the hemisphere. Activate this parameter if your image is in the north-
ern hemisphere.
• Lambert II Etendu: This is a Lambert Conformal Conic projection mainly used in France.
• Lambert93: This is a Lambert 93 projection mainly used in France.
• WGS 84: This is a Geographical projection
• EPSG Code: This code is a generic way of identifying map projections and allows specifying
a large number of them. See www.spatialreference.org to find which EPSG code is associated
with your projection;
– EPSG Code: See www.spatialreference.org to find which EPSG code is associated with
your projection
Elevation management This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool
to list or download the tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• OrthoRectification, HomologousPointsExtraction, RefineSensorModel
Detailed description
This application performs image resampling from an input resampling grid.
Parameters
This section describes in detail the parameters available for this application. Table 5.25, page 152
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is GridBasedImageResampling.
Input and output data This group of parameters sets the input and output images.
• Upper Left X: X Coordinate of the upper-left pixel of the output resampled image
• Upper Left Y: Y Coordinate of the upper-left pixel of the output resampled image
• Default value: The default value to give to pixels that fall outside of the input image.
Interpolation This group of parameters defines how the input image will be interpolated
during resampling. Available choices are:
• Nearest Neighbor interpolation: Nearest neighbor interpolation leads to poor image quality,
but it is very fast.
• Linear interpolation: Linear interpolation leads to average image quality but is quite fast.
• Bicubic interpolation
– Radius for bicubic interpolation: This parameter controls the size of the
bicubic interpolation filter. If the target pixel size is higher than the input pixel size,
increasing this parameter will reduce aliasing artefacts.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• otbStereoRectificationGridGeneration
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.26, page 155
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is ImageEnvelope.
Elevation management This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool
to list or download the tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
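A minimal sketch, with hypothetical file names and assumed `in`/`out` parameter keys (the output is a vector file holding the envelope):

```python
# Minimal sketch for the ImageEnvelope application.
# File names are hypothetical; "in"/"out" are assumed parameter keys.
params = {"in": "sensor_image.tif", "out": "envelope.shp"}
try:
    import otbApplication
    app = otbApplication.Registry.CreateApplication("ImageEnvelope")
    app.SetParameterString("in", params["in"])
    app.SetParameterString("out", params["out"])   # output vector envelope
    app.ExecuteAndWriteOutput()
except Exception:
    # Without an OTB install (or the input file), show the equivalent command line.
    print("otbcli_ImageEnvelope -in %(in)s -out %(out)s" % params)
```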
Limitations
None
Authors
5.4.8 Ortho-rectification
Detailed description
An inverse sensor model is built from the input image metadata to convert geographic coordinates to
raw-geometry coordinates. This inverse sensor model is then combined with the chosen map projection to
build a global coordinate mapping grid. Finally, this grid is used to resample the image using the chosen
interpolation algorithm. A Digital Elevation Model can be specified to account for terrain deformations.
In the case of SPOT5 images, the sensor model can be approximated by an RPC model in order to
speed up computation.
Parameters
This section describes in detail the parameters available for this application. Table 5.27, page 158
presents a summary of these parameters and the parameter keys to be used on the command line and
in programming languages. The application key is OrthoRectification.
Input and output data This group of parameters sets the input and output images.
Output Cartographic Map Projection Parameters of the output map projection to be used. Available
choices are:
– Zone number: The zone number ranges from 1 to 60 and defines the Transverse
Mercator projection (along with the hemisphere)
– Northern Hemisphere: The Transverse Mercator projections are defined by their zone
number as well as the hemisphere. Activate this parameter if your image is in the north-
ern hemisphere.
• Lambert II Etendu: This is a Lambert Conformal Conic projection mainly used in France.
• Lambert93: This is a Lambert 93 projection mainly used in France.
• WGS 84: This is a Geographical projection
• EPSG Code: This code is a generic way of identifying map projections and allows you to
specify a large number of them. See www.spatialreference.org to find which EPSG code is
associated with your projection;
– EPSG Code: See www.spatialreference.org to find which EPSG code is associated with
your projection
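As an illustration of how UTM zone numbers relate to coordinates, the standard 6-degree rule can be sketched in a few lines of Python. This is a simplified sketch, not part of the OTB API, and it ignores the Norway and Svalbard exceptions to the zone grid:

```python
def utm_zone(lon, lat):
    """Return (zone_number, is_northern) for a point in degrees.

    Zones are 6 degrees wide, numbered 1..60 starting at 180 W.
    Special zone exceptions (Norway, Svalbard) are ignored here.
    """
    zone = int((lon + 180.0) // 6) + 1
    zone = min(zone, 60)            # lon == 180 falls into zone 60
    return zone, lat >= 0.0

# Paris (2.35 E, 48.85 N) lies in UTM zone 31, northern hemisphere.
print(utm_zone(2.35, 48.85))        # -> (31, True)
```

This only illustrates which zone/hemisphere pair to feed the application; the actual projection math is handled internally by OTB.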
Output Image Grid This group of parameters allows you to define the grid on which the input image
will be resampled.
• Upper Left Y: Cartographic Y coordinate of the upper-left corner (meters for cartographic
projections, degrees for geographic ones)
• Size X: Size of projected image along X (in pixels)
• Size Y: Size of projected image along Y (in pixels)
160 Chapter 5. Applications Reference Documentation
• Pixel Size X: Size of each pixel along X axis (meters for cartographic projections, degrees
for geographic ones)
• Pixel Size Y: Size of each pixel along Y axis (meters for cartographic projections, degrees
for geographic ones)
• Lower right X: Cartographic X coordinate of the lower-right corner (meters for cartographic
projections, degrees for geographic ones)
• Lower right Y: Cartographic Y coordinate of the lower-right corner (meters for cartographic
projections, degrees for geographic ones)
• Model ortho-image: A model ortho-image that can be used to compute size, origin and
spacing of the output
• Force isotropic spacing by default: Default spacing (pixel size) values are estimated from
the sensor modeling of the image, which can result in non-isotropic spacing. This option
allows you to force the default values to be isotropic (in this case, the minimum of the spacings
in both directions is applied). Values overridden by the user are not affected by this option.
• Default pixel value: Default value to write when outside of input image.
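The relationship between the corner coordinates, pixel sizes and image size can be sketched as follows. This is a hypothetical helper, not an OTB function; it assumes a north-up image, where the Y pixel size is negative because Y decreases from the upper-left to the lower-right corner:

```python
def grid_size(ulx, uly, lrx, lry, pixel_size_x, pixel_size_y):
    """Number of pixels along X and Y for the output grid.

    pixel_size_y is negative for north-up images (Y decreases
    from the upper-left corner to the lower-right corner).
    """
    size_x = round((lrx - ulx) / pixel_size_x)
    size_y = round((lry - uly) / pixel_size_y)
    return size_x, size_y

# A 10 km x 10 km extent at 10 m pixels -> 1000 x 1000 pixels.
print(grid_size(500000, 4500000, 510000, 4490000, 10.0, -10.0))   # -> (1000, 1000)
```

Setting any two of extent, size and spacing determines the third, which is why the application lets you specify them in several combinations.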
Elevation management This group of parameters allows you to manage elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool for listing and downloading the tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the
geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid
when there is no DEM available, no coverage for some points, or pixels with no data in the
DEM tiles, and no geoid file has been set. It is also used by some applications as an average
elevation value.
Interpolation This group of parameters allows you to define how the input image will be interpolated
during resampling. Available choices are:
• Bicubic interpolation
– Radius for bicubic interpolation: This parameter allows you to control the size of the
bicubic interpolation filter. If the target pixel size is higher than the input pixel size,
increasing this parameter will reduce aliasing artefacts.
• Nearest Neighbor interpolation: Nearest neighbor interpolation leads to poor image quality,
but it is very fast.
• Linear interpolation: Linear interpolation leads to average image quality but is quite fast.
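The quality/speed trade-off between these interpolators comes from how many input samples each one weights. A minimal one-dimensional sketch, illustrative only and not OTB code:

```python
def nearest(samples, x):
    """Nearest-neighbour: pick the closest sample (fast, blocky)."""
    i = int(x + 0.5)
    return samples[max(0, min(i, len(samples) - 1))]

def linear(samples, x):
    """Linear: blend the two neighbouring samples (smoother)."""
    i = max(0, min(int(x), len(samples) - 2))
    t = x - i
    return (1.0 - t) * samples[i] + t * samples[i + 1]

s = [0.0, 10.0, 20.0]
print(nearest(s, 0.6), linear(s, 0.5))   # -> 10.0 5.0
```

Bicubic interpolation extends the same idea to four samples per axis with a cubic weighting, which is why it is slower but smoother.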
Speed optimization parameters This group of parameters allows you to optimize processing time.
• RPC modeling (points per axis): Enabling RPC modeling allows speeding up SPOT5 ortho-
rectification. The value is the number of control points per axis for RPC estimation
• Available RAM (Mb): This allows you to set the maximum amount of RAM available for pro-
cessing. As the writing task is time consuming, it is better to write large pieces of data, which
can be achieved by increasing this parameter (pay attention to your system capabilities)
• Resampling grid spacing: Resampling is done according to a coordinate mapping deforma-
tion grid, whose pixel size is set by this parameter and expressed in the coordinate system of
the output image. The closer this parameter is to the output spacing, the more precise the
ortho-rectified image will be, but increasing it will reduce processing time.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Supported sensors are Pleiades, SPOT5 (TIF format), Ikonos, Quickbird, Worldview2, GeoEye.
Authors
See also
5.4.9 Pansharpening
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.28, page 163
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is Pansharpening.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
This application reads a geom file containing a sensor model and a text file containing a list of ground
control points, and performs a least-squares fit of the sensor model's adjustable parameters to these
tie points. It produces an updated geom file as output, as well as an optional ground-control-point
based statistics file and a vector file containing the residues. The output geom file can then be used to
ortho-rectify the data more accurately. Please note that for proper use of the application, the elevation
must be correctly set (including DEM and geoid file). The map parameters allow you to choose a map
projection in which the accuracy will be estimated in meters.
Parameters
This section describes in detail the parameters available for this application. Table 5.29, page 165
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is RefineSensorModel.
Input geom file Geom file containing the sensor model to refine
Output geom file Geom file containing the refined sensor model
Input file containing tie points Input file containing tie points. Points are stored in the following
format: row col lon lat. Lines beginning with # are ignored.
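Assuming the format above, a tie-point file can be parsed with a few lines of Python. This is a hypothetical reader written for illustration; it is not part of the application:

```python
def read_tie_points(lines):
    """Parse tie-point records 'row col lon lat'; '#' lines are comments."""
    points = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        row, col, lon, lat = map(float, line.split()[:4])
        points.append((row, col, lon, lat))
    return points

sample = [
    "# row col lon lat",
    "120.5 340.25 1.4330 43.6005",
    "800.0 512.0 1.4401 43.5920",
]
print(read_tie_points(sample))
```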
Output file containing output precision statistics Output file containing the following info:
ref_lon ref_lat elevation predicted_lon predicted_lat x_error_ref(meters) y_error_ref(meters)
global_error_ref(meters) x_error(meters) y_error(meters) overall_error(meters)
Output vector file with residues File containing segments representing residues
Output Cartographic Map Projection Parameters of the output map projection to be used. Available
choices are:
Elevation management This group of parameters allows you to manage elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool for listing and downloading the tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the
geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid
when there is no DEM available, no coverage for some points, or pixels with no data in the
DEM tiles, and no geoid file has been set. It is also used by some applications as an average
elevation value.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• OrthoRectification, HomologousPointsExtraction
Detailed description
This application performs a parametric transform on the input image. Scaling, translation and rota-
tion with a scaling factor are handled. The parameters of the transform are expressed in physical
units, so particular attention must be paid to the pixel size (value and sign). Moreover, the transform
is expressed from input space to output space (whereas ITK transforms are expressed from output
space to input space).
Parameters
This section describes in detail the parameters available for this application. Table 5.30, page 169
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is RigidTransformResample.
Figure 5.30: Parameters table for Image resampling with a rigid transform.
Transform parameters This group of parameters allows you to set the transformation to apply.
∗ The Y translation (in physical units): The translation value along Y axis (in
physical units)
∗ X scaling: Scaling factor between the output X spacing and the input X spacing
∗ Y scaling: Scaling factor between the output Y spacing and the input Y spacing
– rotation: rotation
∗ Rotation angle: The rotation angle in degree (values between -180 and 180)
∗ X scaling: Scale factor between the X spacing of the rotated output image and the
X spacing of the unrotated image
∗ Y scaling: Scale factor between the Y spacing of the rotated output image and the
Y spacing of the unrotated image
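The input-to-output convention can be pictured with a small sketch. Note that the rotate-then-scale-then-translate composition order below is an assumption made purely for illustration, not necessarily the exact composition used by the application:

```python
import math

def apply_transform(x, y, angle_deg=0.0, sx=1.0, sy=1.0, tx=0.0, ty=0.0):
    """Map a point from input physical space to output physical space.

    Hypothetical composition for illustration: rotate, then scale,
    then translate. ITK transforms go the other way (output -> input).
    """
    a = math.radians(angle_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return sx * xr + tx, sy * yr + ty

# A 90 degree rotation sends (10, 0) to (0, 10) in physical units.
x, y = apply_transform(10.0, 0.0, angle_deg=90.0)
print(round(x, 6), round(y, 6))   # -> 0.0 10.0
```

Because everything is in physical units, a negative Y pixel size in the input image changes the sign of Y coordinates entering such a transform, which is why the documentation insists on checking the pixel size sign.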
Interpolation This group of parameters allows you to define how the input image will be interpolated
during resampling. Available choices are:
• Nearest Neighbor interpolation: Nearest neighbor interpolation leads to poor image quality,
but it is very fast.
• Linear interpolation: Linear interpolation leads to average image quality but is quite fast.
• Bicubic interpolation
– Radius for bicubic interpolation: This parameter allows you to control the size of the
bicubic interpolation filter. If the target pixel size is higher than the input pixel size,
increasing this parameter will reduce aliasing artefacts.
Available RAM (Mb) This allows you to set the maximum amount of RAM available for processing.
As the writing task is time consuming, it is better to write large pieces of data, which can be achieved
by increasing this parameter (pay attention to your system capabilities)
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• Translation
Using available image metadata, project one image onto another one
Detailed description
This application performs the projection of an image into the geometry of another one.
Parameters
This section describes in detail the parameters available for this application. Table 5.31, page 172
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is Superimpose.
The image to reproject The image to reproject into the geometry of the reference input.
Elevation management This group of parameters allows you to manage elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool for listing and downloading the tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the
geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
Spacing of the deformation field Generate a coarser deformation field with the given spacing
Interpolation This group of parameters allows you to define how the input image will be interpolated
during resampling. Available choices are:
• Bicubic interpolation: Bicubic interpolation leads to very good image quality but is slow.
– Radius for bicubic interpolation: This parameter allows you to control the size of the
bicubic interpolation filter. If the target pixel size is higher than the input pixel size,
increasing this parameter will reduce aliasing artefacts.
• Nearest Neighbor interpolation: Nearest neighbor interpolation leads to poor image quality,
but it is very fast.
• Linear interpolation: Linear interpolation leads to average image quality but is quite fast.
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.32, page 175
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is DimensionalityReduction.
Rescale Output.
Number of Components. Number of relevant components kept. By default all components are
kept.
Transformation matrix output (text format) Filename to store the transformation matrix (csv format)
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
This application does not provide the inverse transform and the transformation matrix export for the
MAF.
Authors
See also
• ”Kernel maximum autocorrelation factor and minimum noise fraction transformations,” IEEE
Transactions on Image Processing, vol. 20, no. 3, pp. 612-624, (2011)
5.5.2 Mean Shift filtering (can be used as Exact Large-Scale Mean-Shift segmen-
tation, step 1)
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.33, page 178
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is MeanShiftSmoothing.
Figure 5.33: Parameters table for Mean Shift filtering (can be used as Exact Large-Scale Mean-Shift
segmentation, step 1).
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
With the mode search option, the result will depend slightly on the number of threads.
Authors
5.5.3 Smoothing
Detailed description
This application applies a smoothing filter to an image. Gaussian, mean and anisotropic diffusion
filters are available.
Parameters
This section describes in detail the parameters available for this application. Table 5.34, page 180
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is Smoothing.
• Mean
– Radius: Mean radius (in pixels)
• Gaussian
– Radius: Gaussian radius (in pixels)
• Anisotropic Diffusion
– Time Step: Diffusion equation time step
– Nb Iterations: Number of iterations
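The mean filter above (driven by the Radius parameter) can be sketched in one dimension as follows. This is illustrative only; the border handling here simply shrinks the window, which may differ from the application's behaviour:

```python
def mean_filter(values, radius):
    """Smooth a 1-D signal with a sliding mean of the given radius;
    near the borders, only the samples inside the signal are averaged."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - radius)
        hi = min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A single bright pixel is spread over its neighbours.
print(mean_filter([0, 0, 9, 0, 0], radius=1))   # -> [0.0, 3.0, 3.0, 3.0, 0.0]
```

The 2-D version used on images applies the same averaging over a square window of side 2*radius+1.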
Load otb application from xml file
Save otb application to xml file
Examples
Example 1 Image smoothing using a mean filter. To run this example in command-line, use the
following:
otbcli_Smoothing -in Romania_Extract.tif -out smoothedImage_mean.png uchar -type mean
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Example 2 Image smoothing using an anisotropic diffusion filter. To run this example in command-
line, use the following:
otbcli_Smoothing -in Romania_Extract.tif -out smoothedImage_ani.png float -type anidif
-type.anidif.timestep 0.1 -type.anidif.nbiter 5
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.35, page 183
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is BinaryMorphologicalOperation.
Feature Output Image Output image containing the filtered result.
Structuring Element Type Choice of the structuring element type. Available choices are:
• Ball
• Dilate
Load otb application from xml file Load otb application from xml file
Save otb application to xml file Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
BinaryMorphologicalOperation = otbApplication.Registry.CreateApplication("BinaryMorphologicalOperation")
Limitations
None
Authors
See also
This application computes, for each studied polyline contained in the input VectorData, the chosen
descriptors.
Detailed description
The first step in the classifier-fusion based validation is to compute, for each studied polyline, the
chosen descriptors.
Parameters
This section describes in detail the parameters available for this application. Table 5.36, page 186
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is ComputePolylineFeatureFromImage.
Figure 5.36: Parameters table for Compute Polyline Feature From Image.
Vector Data Vector data containing the polylines where the features will be computed.
Elevation management This group of parameters allows you to manage elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool for listing and downloading the tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the
geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid
when there is no DEM available, no coverage for some points, or pixels with no data in the
DEM tiles, and no geoid file has been set. It is also used by some applications as an average
elevation value.
Feature expression The feature formula, e.g. (b1 < 0.3), where b1 is the standard name of the input
image's first band
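A formula like (b1 < 0.3) is evaluated per feature against the band values sampled along the polyline. The application uses its own expression parser; the restricted Python eval below is only a hypothetical stand-in to illustrate the idea:

```python
def evaluate_feature(expression, band_values):
    """Evaluate a formula such as '(b1 < 0.3)' against a dict of band
    values, e.g. {'b1': 0.12}. A restricted eval is used here purely
    for illustration; OTB has its own parser."""
    return bool(eval(expression, {"__builtins__": {}}, dict(band_values)))

print(evaluate_feature("(b1 < 0.3)", {"b1": 0.12}))   # -> True
print(evaluate_feature("(b1 < 0.3)", {"b1": 0.55}))   # -> False
```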
Feature name The field name corresponding to the feature codename (NONDVI, ROADSA...)
Output Vector Data The output vector data containing polylines with a new field
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Since this application does not rely on streaming, take care of the size of the input image before
launching it.
Authors
Estimate feature fuzzy model parameters using two vector data files (ground truth samples and
wrong samples).
Detailed description
Estimate feature fuzzy model parameters using two vector data files (ground truth samples and
wrong samples).
Parameters
This section describes in detail the parameters available for this application. Table 5.37, page 189
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is DSFuzzyModelEstimation.
• Input Positive Vector Data: Ground truth vector data for positive samples
• Input Negative Vector Data: Ground truth vector data for negative samples
• Belief Support: Dempster Shafer study hypothesis to compute belief
• Plausibility Support: Dempster Shafer study hypothesis to compute plausibility
• Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
DSFuzzyModelEstimation.SetParameterStringList("plasup", ['"NONDVI"', '"ROADSA"', '"NOBUIL"'])
DSFuzzyModelEstimation.SetParameterString("initmod", "Dempster-Shafer/DSFuzzyModel_Init.xml")
Limitations
None.
Authors
Computes edge features on every pixel of the input image's selected channel
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.38, page 191
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is EdgeExtraction.
• Gradient
• Sobel
• Touzi
– The X Radius:
– The Y Radius:
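For reference, the Sobel filter estimates the gradient with two fixed 3x3 kernels. A minimal sketch of the per-pixel computation, illustrative only and not OTB code:

```python
def sobel_magnitude(img, r, c):
    """Gradient magnitude at (r, c) using the 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    gx = gy = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            v = img[r + dr][c + dc]
            gx += kx[dr + 1][dc + 1] * v
            gy += ky[dr + 1][dc + 1] * v
    return (gx * gx + gy * gy) ** 0.5

# A vertical step edge: strong horizontal gradient, no vertical gradient.
img = [[0, 0, 1],
       [0, 0, 1],
       [0, 0, 1]]
print(sobel_magnitude(img, 1, 1))   # -> 4.0
```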
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• otb class
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.39, page 193
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is GrayScaleMorphologicalOperation.
Feature Output Image Output image containing the filtered result.
Structuring Element Type Choice of the structuring element type. Available choices are:
• Ball
– The Structuring Element X Radius: The Structuring Element X Radius
– The Structuring Element Y Radius: The Structuring Element Y Radius
• Cross
• Dilate
• Erode
• Opening
• Closing
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
Detailed description
This application computes Haralick, advanced and higher-order textures on a mono-band image.
Parameters
This section describes in detail the parameters available for this application. Table 5.40, page 196
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is HaralickTextureExtraction.
Texture feature parameters This group of parameters allows you to define the texture parameters.
• X Radius: X Radius
• Y Radius: Y Radius
• X Offset: X Offset
• Y Offset: Y Offset
Texture Set Selection Choice of the texture set. Available choices are:
• Simple Haralick Texture Features: This group of parameters defines the 8 local Haralick
texture feature output image. The image channels are: Energy, Entropy, Correlation, Inverse
Difference Moment, Inertia, Cluster Shade, Cluster Prominence and Haralick Correlation
• Advanced Texture Features: This group of parameters defines the 9 advanced texture feature
output image. The image channels are: Mean, Variance, Sum Average, Sum Variance, Sum
Entropy, Difference of Entropies, Difference of Variances, IC1 and IC2
• Higher Order Texture Features: This group of parameters defines the 11 higher order tex-
ture feature output image. The image channels are: Short Run Emphasis, Long Run Empha-
sis, Grey-Level Nonuniformity, Run Length Nonuniformity, Run Percentage, Low Grey-Level
Run Emphasis, High Grey-Level Run Emphasis, Short Run Low Grey-Level Emphasis, Short
Run High Grey-Level Emphasis, Long Run Low Grey-Level Emphasis and Long Run High
Grey-Level Emphasis
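Energy and Entropy, for example, are computed from the normalised grey-level co-occurrence matrix (GLCM) of pixel pairs at a given offset. A toy sketch for a single offset, illustrative only — OTB additionally quantises the grey levels and works on sliding windows:

```python
import math

def glcm(img, offset=(0, 1)):
    """Normalised grey-level co-occurrence frequencies for one offset."""
    dr, dc = offset
    counts = {}
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pair = (img[r][c], img[r2][c2])
                counts[pair] = counts.get(pair, 0) + 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def energy(p):
    """Haralick Energy: sum of squared co-occurrence frequencies."""
    return sum(v * v for v in p.values())

def entropy(p):
    """Haralick Entropy: -sum p log p over co-occurrence frequencies."""
    return -sum(v * math.log(v) for v in p.values())

img = [[0, 0, 1],
       [0, 0, 1]]
p = glcm(img)                     # horizontal neighbour pairs
print(round(energy(p), 3), round(entropy(p), 3))
```

The other channels (Correlation, Inertia, etc.) are different statistics over the same matrix.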
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
Detailed description
This application computes homologous points between images using keypoints. SIFT or SURF
keypoints can be used, and the band on which keypoints are computed can be set independently
for both images. The application offers two modes: the first is the full mode, where keypoints are
extracted from the full extent of both images (please note that large image files are not supported
in this mode). The second mode, called geobins, sets up spatial binning so as to get fewer points
spread across the entire image. In this mode, the corresponding spatial bin in the second image is
estimated using the geographical transform or sensor modelling, and is padded according to the user-
defined precision. Finally, in both modes, the application can filter out matches whose colocalisation
in the first image exceeds this precision. The elevation parameters allow dealing more precisely with
sensor modelling in the case of sensor-geometry data. The outvector option allows creating a vector
file with segments corresponding to the localisation error between the matches. It can be useful to
assess the precision of a registration, for instance. The vector file is always reprojected to EPSG:4326
to allow display in a GIS. This is done via reprojection or by applying the image sensor models.
Parameters
This section describes in detail the parameters available for this application. Table 5.41, page 200
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is HomologousPointsExtraction.
Input band 1 Index of the band from input image 1 to use for keypoints extraction
Input band 2 Index of the band from input image 2 to use for keypoints extraction
Keypoints detection algorithm Choice of the detection algorithm to use. Available choices are:
• SURF algorithm
• SIFT algorithm
Use back-matching to filter matches. If set to true, matches should be consistent in both ways.
• Extract and match all keypoints (no streaming): Extract and match all keypoints, loading
both images entirely into memory
• Search keypoints in small spatial bins regularly spread across first image: This method
retrieves a set of tie points regularly spread across image 1. The corresponding bins in
image 2 are retrieved using sensor and geographical information if available. The first bin
position takes into account the margin parameter. Bins are cropped to the largest image region
shrunk by the margin parameter for both the in1 and in2 images.
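The geobins layout can be pictured as a regular grid of square bins kept away from the image borders. The sketch below is a hypothetical layout written for illustration; the application's actual bin placement may differ:

```python
def bin_origins(width, height, bin_size, margin):
    """Upper-left corners of square search bins spread over an image,
    keeping a margin from the borders (hypothetical layout)."""
    origins = []
    y = margin
    while y + bin_size <= height - margin:
        x = margin
        while x + bin_size <= width - margin:
            origins.append((x, y))
            x += bin_size
        y += bin_size
    return origins

# A 1000x1000 image, 300-pixel bins, 50-pixel margin -> a 3x3 grid.
print(len(bin_origins(1000, 1000, bin_size=300, margin=50)))   # -> 9
```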
Estimated precision of the colocalisation function (in pixels) Estimated precision of the colocalisation
function, in pixels
Filter points according to geographical or sensor based colocalisation If enabled, this option allows
you to filter matches according to colocalisation from sensor or geographical information, using the
given tolerance expressed in pixels
Elevation management This group of parameters allows you to manage elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool for listing and downloading the tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the
geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid
when there is no DEM available, no coverage for some points, or pixels with no data in the
DEM tiles, and no geoid file has been set. It is also used by some applications as an average
elevation value.
Output file with tie points File containing the list of tie points
Output vector file with tie points File containing segments representing matches
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Authors
See also
• RefineSensorModel
Detailed description
This application detects locally straight contours in an image. It is based on the method of Burns,
Hanson, and Riseman, and uses an a contrario validation approach (Desolneux, Moisan, and Morel).
The algorithm was published by Rafael Grompone von Gioi, Jérémie Jakubowicz, Jean-Michel
Morel and Gregory Randall.
The given approach computes the gradient and level lines of the image and detects aligned points in
line support regions. The application allows you to export the detected lines as vector data.
Parameters
This section describes in detail the parameters available for this application. Table 5.42, page 203
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is LineSegmentDetection.
Elevation management This group of parameters allows you to manage elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool for listing and downloading the tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the
geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid
when there is no DEM available, no coverage for some points, or pixels with no data in the
DEM tiles, and no geoid file has been set. It is also used by some applications as an average
elevation value.
No rescaling in [0, 255] By default, the input image amplitude is rescaled between [0,255]. Turn
on this parameter to skip rescaling
Load otb application from xml file
Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
Computes local statistical moments on every pixel in the selected channel of the input image
Detailed description
This application computes the 4 local statistical moments on every pixel in the selected channel of
the input image, over a specified neighborhood. The output image is multi band with one statistical
moment (feature) per band. Thus, the 4 output features are the Mean, the Variance, the Skewness
and the Kurtosis. They are provided in this exact order in the output image.
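Using the population (biased) formulas, the four moments of a flattened neighbourhood can be sketched as follows. Whether OTB uses biased or unbiased estimators is not stated here, so treat the exact values as illustrative:

```python
def local_moments(values):
    """Mean, variance, skewness and kurtosis of a pixel neighbourhood
    (population formulas; standardised third and fourth moments)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    skew = sum(((v - mean) / std) ** 3 for v in values) / n if std else 0.0
    kurt = sum(((v - mean) / std) ** 4 for v in values) / n if std else 0.0
    return mean, var, skew, kurt

# A 3x3 neighbourhood flattened to a list of 9 samples.
m, v, s, k = local_moments([1, 1, 1, 1, 5, 1, 1, 1, 1])
print(m, v, s, k)
```

Each output band of the application holds one of these four statistics, computed per pixel over the chosen radius.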
Parameters
This section describes in detail the parameters available for this application. Table 5.43, page 206
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is LocalStatisticExtraction.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• otbRadiometricMomentsImageFunction class
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.44, page 208
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. The application key is MultivariateAlterationDetector.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
MultivariateAlterationDetector.SetParameterString("in2", "Spot5-Gloucester-after.tif")
Limitations
None
Authors
See also
• This filter implements the Multivariate Alteration Detector, based on the following work:
A. A. Nielsen and K. Conradsen, Multivariate alteration detection (mad) in multispectral,
bi-temporal image data: a new approach to change detection studies, Remote Sens. Environ.,
vol. 64, pp. 1-19, (1998)
The Multivariate Alteration Detector takes two images as inputs and produces a set of N
change maps as a VectorImage (where N is the maximum of the numbers of bands in the first
and second images) with the following properties:
- Change maps are differences of a pair of linear combinations of bands from image 1 and
bands from image 2 chosen to maximize the correlation.
- Each change map is orthogonal to the others.
This is a statistical method which can handle different modalities and even different
bands and number of bands between images.
If the numbers of bands in images 1 and 2 are equal, the change maps are sorted by
increasing correlation. If the numbers of bands differ, the change maps are sorted by
decreasing correlation.
The GetV1() and GetV2() methods allow retrieving the linear combinations used to
generate the MAD change maps as vnl matrices of doubles, and the GetRho() method
retrieves the correlation associated with each MAD change map as a vnl vector.
This filter has been implemented from the Matlab code kindly made available by the
authors here:
http://www2.imm.dtu.dk/~aa/software.html
Both cases (same and different number of bands) have been validated by comparing
the output image to the output produced by the Matlab code, and the reference images for
testing have been generated from the Matlab code using Octave.
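For the equal-band-count case, the transform described above can be sketched with NumPy by solving the canonical correlation eigenproblem on the sample covariance matrices. This is an illustrative re-derivation, not the OTB filter: the normalization choices and the sort order (decreasing correlation here) are ours.

```python
import numpy as np

def mad_change_maps(X, Y):
    # X, Y: (n_pixels, n_bands) arrays from the two dates (equal band counts).
    # Returns (change_maps, rho): differences of paired canonical variates,
    # plus the canonical correlations sorted in decreasing order.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)
    # Eigenproblem for the X-side linear combinations; eigenvalues are rho^2
    A = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    evals, evecs = np.linalg.eig(A)
    order = np.argsort(evals.real)[::-1]
    rho = np.sqrt(np.clip(evals.real[order], 0.0, 1.0))
    V1 = evecs.real[:, order]
    V2 = np.linalg.solve(Syy, Sxy.T) @ V1    # paired Y-side combinations
    U = Xc @ V1
    V = Yc @ V2
    U /= U.std(axis=0)                       # unit-variance canonical variates
    V /= V.std(axis=0)
    for k in range(U.shape[1]):              # fix the arbitrary eigenvector sign
        if np.corrcoef(U[:, k], V[:, k])[0, 1] < 0:
            V[:, k] = -V[:, k]
    return U - V, rho
```

On sample data, the resulting change maps are mutually uncorrelated, matching the orthogonality property stated above.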
Detailed description
This application computes radiometric indices using the relevant channels of the input image. The
output is a multi-band image in which each channel is one of the selected indices.
Parameters
This section describes in detail the parameters available for this application. Table 5.45, page 210
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is RadiometricIndices.
Available Radiometric Indices List of available radiometric indices with their relevant channels in
brackets:
Vegetation:NDVI - Normalized difference vegetation index (Red, NIR)
Vegetation:TNDVI - Transformed normalized difference vegetation index (Red, NIR)
Vegetation:RVI - Ratio vegetation index (Red, NIR)
Vegetation:SAVI - Soil adjusted vegetation index (Red, NIR)
Vegetation:TSAVI - Transformed soil adjusted vegetation index (Red, NIR)
Vegetation:MSAVI - Modified soil adjusted vegetation index (Red, NIR)
Vegetation:MSAVI2 - Modified soil adjusted vegetation index 2 (Red, NIR)
Vegetation:GEMI - Global environment monitoring index (Red, NIR)
Vegetation:IPVI - Infrared percentage vegetation index (Red, NIR)
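For instance, NDVI and TNDVI can be computed directly from the Red and NIR channels. The sketch below uses the standard formulas; the small `eps` guard against zero denominators is our addition, not part of the application.

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    # NDVI = (NIR - Red) / (NIR + Red); eps (our addition) avoids division by zero
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + eps)

def tndvi(red, nir, eps=1e-9):
    # TNDVI = sqrt(NDVI + 0.5), clipped below zero before the square root
    return np.sqrt(np.clip(ndvi(red, nir, eps) + 0.5, 0.0, None))
```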
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
212 Chapter 5. Applications Reference Documentation
#!/usr/bin/python
Limitations
None
Authors
See also
Computes Structural Feature Set textures on every pixel of the selected channel of the input image
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.46, page 213
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is SFSTextureExtraction.
5.6. Feature Extraction 213
Texture feature parameters This group of parameters allows defining the SFS texture parameters.
The available texture features are SFS’Length, SFS’Width, SFS’PSI, SFS’W-Mean, SFS’Ratio and
SFS’SD. They are provided in this exact order in the output image.
Feature Output Image Output image containing the SFS texture features.
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• otbSFSTexturesImageFilter class
Vector data validation based on the fusion of features using the Dempster-Shafer evidence theory
framework.
Detailed description
This application validates or invalidates the studied samples using the Dempster-Shafer theory.
Parameters
This section describes in detail the parameters available for this application. Table 5.47, page 215
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is VectorDataDSValidation.
• Output Vector Data: Output VectorData containing only the validated samples
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None.
Authors
See also
• http://en.wikipedia.org/wiki/Dempster-Shafer_theory
5.7 Stereo
Detailed description
This application performs block-matching to estimate pixel-wise disparities between two
images. It allows choosing the block-matching method, and accepts input masks (related to the
left and right input images) restricting the pixels for which the disparity should be investigated.
Additionally, two criteria can optionally disable disparity investigation for some pixels: a
no-data value, and a threshold on the local variance. This speeds up computation by avoiding
the investigation of disparities that would not be reliable anyway. For efficiency reasons, if the
optimal metric values image is desired, it is concatenated to the output image (which then has
three bands: horizontal disparity, vertical disparity and metric value). One can split these
images afterward.
Parameters
This section describes in detail the parameters available for this application. Table 5.48, page 218
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is BlockMatching.
Input and output data This group of parameters sets the input and output images.
• The output mask corresponding to all criteria: A mask image corresponding to all
criteria (see masking parameters). Only required if the variance threshold or no-data criteria
are set.
• Output optimal metric values as well: If used, the output image will have an additional
component with the optimal metric values
Image masking parameters This group of parameters determines the masking used to prevent
disparity estimation for some pixels of the left image
• Discard left pixels from mask image: This parameter provides a custom mask for the left
image. Block matching will only be performed on pixels inside the mask.
• Discard right pixels from mask image: This parameter provides a custom mask for the
right image. Block matching will only be performed on pixels inside the mask.
• Discard pixels with no-data value: This parameter discards pixels whose value is equal to
the user-defined no-data value.
• Discard pixels with low local variance: This parameter discards pixels whose local
variance is too small (the size of the neighborhood is given by the radius parameter)
Block matching parameters This group of parameters allows tuning the block-matching behaviour
• Block-matching metric:
Available choices are:
– Sum of Squared Distances: Sum of squared distances between pixel values in the
metric window
– Normalized Cross-Correlation: Normalized Cross-Correlation between the left and
right windows
– Lp pseudo-norm: Lp pseudo-norm between the left and right windows
∗ p value: Value of the p parameter in Lp pseudo-norm (must be positive)
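The three metrics can be illustrated on a pair of blocks as follows. This is a sketch of the standard definitions; OTB's internal normalization may differ.

```python
import numpy as np

def ssd(a, b):
    # Sum of squared differences between two equally sized blocks (0 = identical)
    return float(((a - b) ** 2).sum())

def ncc(a, b):
    # Normalized cross-correlation between two blocks (1 = perfectly correlated)
    ac, bc = a - a.mean(), b - b.mean()
    denom = np.sqrt((ac ** 2).sum() * (bc ** 2).sum())
    return float((ac * bc).sum() / denom) if denom > 0 else 0.0

def lp_pseudo_norm(a, b, p=0.5):
    # Lp pseudo-norm of the block difference; p must be positive
    return float((np.abs(a - b) ** p).sum())
```

Note that NCC is invariant to affine radiometric changes between the two windows, while SSD is not; this is why NCC is often preferred across dissimilar acquisitions.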
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• otbStereoRectificationGridGenerator
Detailed description
This application uses a disparity map computed from a stereo image pair to produce an elevation
map on the ground area covered by the stereo pair. The needed inputs are: the disparity map, the
stereo pair (in original geometry) and the epipolar deformation grids. These grids must link the
original geometry (stereo pair) and the epipolar geometry (disparity map).
Parameters
This section describes in detail the parameters available for this application. Table 5.49, page 223
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is DisparityMapToElevationMap.
Input and output data This group of parameters sets the input and output images and grids.
• Input disparity map: The input disparity map (horizontal disparity in first band, vertical in
second)
• Left sensor image: Left image in original (sensor) geometry
• Right sensor image: Right image in original (sensor) geometry
• Left Grid: Left epipolar grid (deformation grid between sensor and disparity spaces)
• Right Grid: Right epipolar grid (deformation grid between sensor and disparity spaces)
• Output elevation map: Output elevation map in ground projection
• Disparity mask: Masked disparity cells won’t be projected
Elevation management This group of parameters manages elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool to list/download tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• otbStereoRectificationGridGenerator otbBlockMatching
Detailed description
Estimate the disparity map between two images. The output image contains the x offset, y offset
and metric value.
Parameters
This section describes in detail the parameters available for this application. Table 5.50, page 226
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is FineRegistration.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
Compute the ground elevation with a stereo block matching algorithm between one or multiple
stereo pairs in sensor geometry. The output is projected in the desired geographic or cartographic map
projection (UTM by default). The pipeline is made of the following steps:
For each sensor pair:
- compute the epipolar displacement grids from the stereo pair (direct and inverse)
- resample the stereo pair into epipolar geometry using BCO interpolation
- create masks for each epipolar image: remove black borders and resample input masks
- compute horizontal disparities with a block matching algorithm
- refine disparities to sub-pixel precision with a dichotomy algorithm
- apply an optional median filter
- filter disparities based on the correlation score and exploration bounds
- translate disparities into sensor geometry
- convert disparities to a 3D map.
Then all 3D maps are fused to produce the DSM.
Parameters
This section describes in detail the parameters available for this application. Table 5.51, page 230
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is StereoFramework.
• Couples list: List of indices of couples in the image list. Couples must be separated by a
comma (indices start at 0). For example: 0 1,1 2 will process a first couple composed of the
first and second images in the image list, then the first and third images.
Note that images are handled by pairs. If left empty, couples are created from the input order,
i.e. a first couple composed of the first and second images, a second couple with the third and
fourth images, etc. (in this case the image list size must be even).
• Image channel used for the block matching: Used channel for block matching (used for
all images)
Elevation management This group of parameters manages elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool to list/download tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
• Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no DEM
available, no coverage for some points or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-
file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
Output parameters This group of parameters sets the DSM resolution, no-data value,
and projection parameters.
• Output resolution: Spatial sampling distance of the output elevation: the cell size (in m)
• NoData value: DSM empty cells are filled with this value (optional, -32768 by default)
• Method to fuse measures in each DSM cell: This parameter selects the method used to
fuse elevation measurements in each output DSM cell
Available choices are:
– The cell is filled with the maximum measured elevation values
– The cell is filled with the minimum measured elevation values
– The cell is filled with the mean of measured elevation values
– Accumulator mode: the cell is filled with the number of values (for debugging
purposes).
• Output DSM: Output elevation image
• Parameters estimation modes:
Available choices are:
– Fit to sensor image: Fit the size, origin and spacing to an existing ortho image (uses the
value of outputs.ortho)
– User Defined: This mode allows you to fully modify default values.
∗ Upper Left X : Cartographic X coordinate of upper-left corner (meters for carto-
graphic projections, degrees for geographic ones)
∗ Upper Left Y : Cartographic Y coordinate of the upper-left corner (meters for
cartographic projections, degrees for geographic ones)
∗ Size X : Size of projected image along X (in pixels)
∗ Size Y : Size of projected image along Y (in pixels)
∗ Pixel Size X : Size of each pixel along X axis (meters for cartographic projections,
degrees for geographic ones)
∗ Pixel Size Y : Size of each pixel along Y axis (meters for cartographic projections,
degrees for geographic ones)
Output Cartographic Map Projection Parameters of the output map projection to be used. Available
choices are:
Stereorectification Grid parameters This group of parameters sets the direct and inverse
grid subsampling. These parameters are very useful to tune time and memory consumption.
• Step of the displacement grid (in pixels): Stereo-rectification displacement grid only varies
slowly. Therefore, it is recommended to use a coarser grid (higher step value) in case of large
images
• Sub-sampling rate for epipolar grid inversion: Grid inversion is a heavy process that
involves spline regression on control points. To avoid consuming too much memory, this
parameter allows sub-sampling the field to invert first.
Block matching parameters This group of parameters allows tuning the block-matching behavior
• Block-matching metric:
Available choices are:
– Sum of Squared Distances divided by mean of block: derived version of the Sum of
Squared Distances between pixel values in the metric window (SSD divided by the
mean over the window)
– Sum of Squared Distances: Sum of squared distances between pixel values in the
metric window
– Normalized Cross-Correlation: Normalized Cross-Correlation between the left and
right windows
– Lp pseudo-norm: Lp pseudo-norm between the left and right windows
∗ p value: Value of the p parameter in Lp pseudo-norm (must be positive)
• Radius of blocks for matching filter (in pixels): The radius of blocks in Block-Matching
(in pixels)
• Minimum altitude offset (in meters): Minimum altitude below the selected elevation source
(in meters)
• Maximum altitude offset (in meters): Maximum altitude above the selected elevation
source (in meters)
• Use bijection consistency in block matching strategy: Use bijection consistency. Right-
to-left correlation is computed to validate left-to-right disparities. If no bijection is found, the
pixel is rejected.
• Use median disparities filtering: The disparity output can be filtered using median
post-filtering (disabled by default).
• Correlation metric threshold: Use block matching metric output to discard pixels with low
correlation value (disabled by default, float value)
Masks
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
StereoFramework.SetParameterString("output.out", "dem.tif")
Authors
Generates two deformation fields to stereo-rectify (i.e. resample in epipolar geometry) a pair of
stereo images up to the sensor model precision
Detailed description
This application generates a pair of deformation grids to stereo-rectify a pair of stereo images
according to sensor modelling and a mean elevation hypothesis. The deformation grids can be passed
to the GridBasedImageResampling application for actual resampling into epipolar geometry.
Parameters
This section describes in detail the parameters available for this application. Table 5.52, page 236
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is StereoRectificationGridGenerator.
Input and output data This group of parameters sets the input and output images.
Epipolar geometry and grid parameters Parameters of the epipolar geometry and output grids
• Elevation management: This group of parameters manages elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a
useful tool to list/download tiles related to a product.
– DEM directory: This parameter selects a directory containing Digital Elevation
Model tiles
– Geoid File: Use a geoid grid to get the height above the ellipsoid in case there is no
DEM available, no coverage for some points or pixels with no data in the DEM tiles. A
version of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-
Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
– Default elevation: This parameter sets the default height above the ellipsoid when
there is no DEM available, no coverage for some points, or pixels with no data in the
DEM tiles, and no geoid file has been set. It is also used by some applications as an
average elevation value.
– Average elevation computed from DEM: Average elevation computed from the pro-
vided DEM
∗ Sub-sampling step: Step of sub-sampling for average elevation estimation
∗ Average elevation value: Average elevation value estimated from DEM
∗ Minimum disparity from DEM: Disparity corresponding to estimated minimum
elevation over the left image
∗ Maximum disparity from DEM: Disparity corresponding to estimated maximum
elevation over the left image
• Scale of epipolar images: The scale parameter allows generating zoomed-in (scale < 1) or
zoomed-out (scale > 1) epipolar images.
• Step of the deformation grid (in nb. of pixels): The stereo-rectification deformation grid
only varies slowly. Therefore, it is recommended to use a coarser grid (higher step value) in
case of large images
• Rectified image size X: The application computes the optimal rectified image size so that
the whole left input image fits into the rectified area. However, due to the scale and step
parameter, this size may not match the size of the deformation field output. In this case, one
can use these output values.
• Rectified image size Y: The application computes the optimal rectified image size so that
the whole left input image fits into the rectified area. However, due to the scale and step
parameter, this size may not match the size of the deformation field output. In this case, one
can use these output values.
• Mean baseline ratio: This parameter is the mean value, in pixels per meter, of the baseline
to sensor altitude ratio. It can be used to convert disparities to physical elevation, since a
disparity of one pixel corresponds to an elevation offset equal to the inverse of this value with
respect to the mean elevation.
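The stated conversion is a simple division. The function and parameter names below are illustrative, not part of the application:

```python
def disparity_to_elevation_offset(disparity_px, baseline_ratio_px_per_m):
    # Elevation offset (m) relative to the mean elevation for a given
    # disparity (px), given the mean baseline-to-altitude ratio (px per m).
    return disparity_px / baseline_ratio_px_per_m
```

For example, with a mean baseline ratio of 0.5 px/m, a disparity of 2 pixels corresponds to a 4 m elevation offset.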
Write inverse fields This group of parameters allows generating the inverse fields as well
• Left inverse deformation grid: The output deformation grid to be used to resample the
epipolar left image
• Right inverse deformation grid: The output deformation grid to be used to resample the
epipolar right image
• Sub-sampling rate for inversion: Grid inversion is a heavy process that involves spline
regression on control points. To avoid consuming too much memory, this parameter allows
sub-sampling the field to invert first.
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Generation of the deformation grid is not streamable; pay attention to this when setting the grid
step.
Authors
See also
• otbGridBasedImageResampling
5.8 Learning
Filters the input labeled image using Majority Voting in a ball-shaped neighborhood.
Detailed description
This application filters the input labeled image (with a maximal class label = 65535) using Majority
Voting in a ball-shaped neighborhood. Majority Voting takes the most representative value of all
the pixels identified by the ball-shaped structuring element and then sets the center pixel to this
majority label value.
- NoData is the label of the NOT classified pixels in the input image. These input pixels keep their
NoData label in the output image.
- Pixels with more than one majority class are marked as Undecided if the parameter ’ip.suvbool ==
true’, or keep their original labels otherwise.
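The voting rule described above can be sketched as follows. This illustration uses a square window instead of the application's ball-shaped structuring element, and the function and parameter names are ours:

```python
import numpy as np
from collections import Counter

def regularize(labels, radius=1, nodata=0, undecided=None):
    # Majority voting over each pixel's neighborhood; NoData pixels keep
    # their label, ties become `undecided` when given, else keep the original.
    h, w = labels.shape
    out = labels.copy()
    for i in range(h):
        for j in range(w):
            if labels[i, j] == nodata:
                continue  # NoData pixels are left untouched
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            win = labels[i0:i1, j0:j1].ravel()
            counts = Counter(int(v) for v in win if v != nodata)
            top = counts.most_common()
            if len(top) > 1 and top[0][1] == top[1][1]:  # tied majority
                out[i, j] = undecided if undecided is not None else labels[i, j]
            else:
                out[i, j] = top[0][0]
    return out
```

For instance, an isolated label 2 surrounded by label 1 is relabeled to 1, while a perfectly tied neighborhood yields the Undecided value.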
Parameters
This section describes in detail the parameters available for this application. Table 5.53, page 239
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is ClassificationMapRegularization.
Input and output images This group of parameters sets the input and output images for classification
map regularization by Majority Voting.
Regularization parameters This group sets the parameters for classification map regularization
by Majority Voting.
• Structuring element radius (in pixels): The radius of the ball shaped structuring element
(expressed in pixels). By default, ’ip.radius = 1 pixel’.
• Multiple majority: Undecided(X)/Original: Pixels with more than one majority class are
marked as Undecided if this parameter is checked (true), or keep their original labels otherwise
(false). Please note that the Undecided value must be different from the existing labels in the
input labeled image. By default, ’ip.suvbool = false’.
• Label for the NoData class: Label for the NoData class. Such input pixels keep their
NoData label in the output image. By default, ’ip.nodatalabel = 0’.
• Label for the Undecided class: Label for the Undecided class. By default,
’ip.undecidedlabel = 0’.
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
The input image must be a single band labeled image (with a maximal class label = 65535). The
structuring element radius must have a minimum value equal to 1 pixel. Please note that the Unde-
cided value must be different from existing labels in the input labeled image.
Authors
See also
Detailed description
This application computes the confusion matrix of a classification map relative to a ground truth.
This ground truth can be given as a raster or a vector data. Only reference and produced pixels with
values different from NoData are handled in the calculation of the confusion matrix. The confusion
matrix is organized as follows: rows = reference labels, columns = produced labels. In the header
of the output file, the reference and produced class labels are ordered according to the
rows/columns of the confusion matrix.
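The described organization (rows = reference, columns = produced, NoData discarded) can be sketched as:

```python
import numpy as np

def confusion_matrix(reference, produced, nodata=0):
    # Rows = reference labels, columns = produced labels; pixels where either
    # map equals the NoData label are discarded. Returns (matrix, labels).
    ref = reference.ravel()
    prod = produced.ravel()
    keep = (ref != nodata) & (prod != nodata)
    ref, prod = ref[keep], prod[keep]
    labels = sorted(set(ref.tolist()) | set(prod.tolist()))
    index = {lab: k for k, lab in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)), dtype=int)
    for r, p in zip(ref, prod):
        m[index[r], index[p]] += 1
    return m, labels
```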
Parameters
This section describes in detail the parameters available for this application. Table 5.54, page 242
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is ComputeConfusionMatrix.
– Input reference vector data: Input vector data of the ground truth
– Field name: Field name containing the label values
Value for nodata pixels Label for the NoData class. Such input pixels will be discarded from the
ground truth and from the input classification map. By default, ’nodatalabel = 0’.
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
ComputeConfusionMatrix.SetParameterString("ref.vector.in", "VectorData_QB1_bis.shp")
ComputeConfusionMatrix.SetParameterString("ref.vector.field", "Class")
Limitations
None
Authors
Computes global mean and standard deviation for each band from a set of images and optionally
saves the results in an XML file.
Detailed description
This application computes a global mean and standard deviation for each band of a set of images
and optionally saves the results in an XML file. The output XML is intended to be used as an input
for the TrainImagesClassifier application to normalize samples before learning.
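The statistics themselves are straightforward: pixels of all images are pooled band-wise before taking the mean and standard deviation. A sketch, assuming images are given as (bands, height, width) arrays with identical band counts:

```python
import numpy as np

def global_band_statistics(images):
    # Pool pixels of all images band-wise, then return per-band global
    # mean and standard deviation over the pooled samples.
    n_bands = images[0].shape[0]
    pooled = [np.concatenate([img[b].ravel() for img in images])
              for b in range(n_bands)]
    means = np.array([p.mean() for p in pooled])
    stds = np.array([p.std() for p in pooled])
    return means, stds
```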
Parameters
This section describes in detail the parameters available for this application. Table 5.55, page 244
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is ComputeImagesStatistics.
Figure 5.55: Parameters table for Compute Images second order statistics.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
Each image of the set must contain the same bands as the others (i.e. same types, in the same order).
Authors
See also
Fuses several classifications maps of the same image on the basis of class labels.
Detailed description
This application fuses several classification maps to produce a single, more robust classification
map. Fusion is done either by means of Majority Voting or with the Dempster-Shafer combination
framework.
Parameters
This section describes in detail the parameters available for this application. Table 5.56, page 246
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is FusionOfClassifications.
Input classifications List of input classification maps to fuse. Labels in each classification image
must represent the same class.
Fusion method Selection of the fusion method and its parameters. Available choices are:
• Majority Voting: Fusion of classification maps by majority voting for each output pixel.
• Dempster Shafer combination: Fusion of classification maps by the Dempster Shafer com-
bination method for each output pixel.
– Confusion Matrices: A list of confusion matrix files (*.CSV format) to define the
masses of belief and the class labels. Each file should be formatted the following way:
the first line, beginning with a ’#’ symbol, should be a list of the class labels present in the
corresponding input classification image, organized in the same order as the confusion
matrix rows/columns.
– Mass of belief measurement: Type of confusion matrix measurement used to compute
the masses of belief of each classifier.
Label for the NoData class Label for the NoData class. Such input pixels keep their NoData label
in the output image and are not handled in the fusion process. By default, ’nodatalabel = 0’.
Label for the Undecided class Label for the Undecided class. Pixels with more than 1 fused class
are marked as Undecided. Please note that the Undecided value must be different from existing
labels in the input classifications. By default, ’undecidedlabel = 0’.
The output classification image The output classification image resulting from the fusion of the
input classification images.
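The Majority Voting mode can be sketched as follows (NoData votes ignored, ties mapped to the Undecided label). The function and parameter names are ours, not the application's:

```python
import numpy as np
from collections import Counter

def fuse_majority(maps, nodata=0, undecided=0):
    # For each pixel, take the label chosen by most classification maps;
    # NoData votes are ignored, and ties yield the Undecided label.
    stack = np.stack(maps)            # shape: (n_maps, h, w)
    h, w = stack.shape[1:]
    out = np.full((h, w), nodata, dtype=stack.dtype)
    for i in range(h):
        for j in range(w):
            votes = Counter(int(v) for v in stack[:, i, j] if v != nodata)
            if not votes:
                continue              # every classifier said NoData
            top = votes.most_common()
            if len(top) > 1 and top[0][1] == top[1][1]:
                out[i, j] = undecided # tied majority
            else:
                out[i, j] = top[0][0]
    return out
```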
Load otb application from xml file: Load otb application from xml file
Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
FusionOfClassifications.SetParameterString("method.dempstershafer.mob", "precision")
Limitations
None
Authors
See also
• ImageClassifier application
Detailed description
This application performs an image classification based on a model file produced by the TrainIm-
agesClassifier application. Pixels of the output image will contain the class labels decided by the
classifier (maximal class label = 65535). The input pixels can be optionally centered and reduced
according to the statistics file produced by the ComputeImagesStatistics application. An optional
input mask can be provided, in which case only input image pixels whose corresponding mask value
is greater than 0 will be classified. The remaining pixels will be given the label 0 in the output
image.
Parameters
This section describes in detail the parameters available for this application. Table 5.57, page 249
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is ImageClassifier.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
The input image must have the same type, order and number of bands as the images used to
produce the statistics file and the SVM model file. If a statistics file was used during training by the
TrainImagesClassifier application, it is mandatory to use the same statistics file for classification. If
an input mask is used, its size must match the input image size.
Authors
See also
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.58, page 251 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is KMeansClassification.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
5.8. Learning 253
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.59, page 254 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is SOMClassification.
• Set user defined seed: Set a specific seed, with an integer value.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Train a classifier from multiple pairs of images and training vector data.
Detailed description
This application performs a classifier training from multiple pairs of input images and training vector data. Samples are composed of pixel values in each band, optionally centered and reduced using an XML statistics file produced by the ComputeImagesStatistics application.
The training vector data must contain polygons with a positive integer field representing the class label. The name of this field can be set using the ”Class label field” parameter. Training and validation sample lists are built such that each class is equally represented in both lists. One parameter allows you to control the ratio between the number of samples in the training and validation sets. Two parameters allow you to manage the size of the training and validation sets per class and per image.
Several classifier parameters can be set, depending on the chosen classifier. In the validation process, the confusion matrix is organized as follows: rows = reference labels, columns = produced labels. In the header of the optional confusion-matrix output file, the validation (reference) and predicted (produced) class labels are ordered according to the rows/columns of the confusion matrix.
This application is based on LibSVM and on the OpenCV Machine Learning classifiers, and is compatible with OpenCV 2.3.1 and later.
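The confusion-matrix convention described above (rows = reference labels, columns = produced labels) can be made concrete with a short Python sketch:

```python
# Minimal illustration of the stated convention: rows index the reference
# (validation) labels, columns index the produced (predicted) labels.

def confusion_matrix(reference, produced):
    labels = sorted(set(reference) | set(produced))
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for ref, pred in zip(reference, produced):
        matrix[index[ref]][index[pred]] += 1
    return labels, matrix

labels, cm = confusion_matrix(
    reference=[1, 1, 2, 2, 2],
    produced=[1, 2, 2, 2, 1],
)
print(labels)  # ordered class labels labelling the rows/columns
print(cm)
```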
Parameters
This section describes in detail the parameters available for this application. Table 5.60, page 259 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is TrainImagesClassifier.
Figure 5.60: Parameters table for Train a classifier from multiple images.
Input and output data This group of parameters allows you to set the input and output data.
• Output confusion matrix: Output file containing the confusion matrix (.csv format).
• Output model: Output file containing the model estimated (.txt format).
260 Chapter 5. Applications Reference Documentation
Elevation management This group of parameters allows you to manage elevation values. Supported formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool to list and download tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation Model tiles.
• Geoid File: Use a geoid grid to get the height above the ellipsoid when there is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid when there is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles, and no geoid file has been set. It is also used by some applications as an average elevation value.
Training and validation samples parameters This group of parameters allows you to set the training and validation sample list parameters.
• Maximum training sample size per class: Maximum size per class (in pixels) of the training
sample list (default = 1000) (no limit = -1). If equal to -1, then the maximal size of the
available training sample list per class will be equal to the surface area of the smallest class
multiplied by the training sample ratio.
• Maximum validation sample size per class: Maximum size per class (in pixels) of the
validation sample list (default = 1000) (no limit = -1). If equal to -1, then the maximal size of
the available validation sample list per class will be equal to the surface area of the smallest
class multiplied by the validation sample ratio.
• On edge pixel inclusion: Takes pixels on polygon edge into consideration when building
training and validation samples.
• Training and validation sample ratio: Ratio between training and validation samples (0.0
= all training, 1.0 = all validation) (default = 0.5).
• Name of the discrimination field: Name of the field used to discriminate class labels in the
input vector data files.
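As an illustration of the per-class sizing rules above, the following Python sketch shows one plausible reading of how the training/validation caps and the sample ratio interact (illustrative only; the function name and the exact rounding are assumptions, not the OTB implementation):

```python
# Illustrative sketch (one plausible reading, not the OTB implementation) of
# the per-class sizing rules described above: per-class sizes are capped by
# the mt/mv parameters, and a cap of -1 falls back to the area of the
# smallest class multiplied by the corresponding sample ratio.

def sample_sizes(class_areas, mt=1000, mv=1000, vtr=0.5):
    """Return {class_label: (n_training, n_validation)} for per-class areas
    given in pixels; vtr is the validation-over-total sample ratio."""
    smallest = min(class_areas.values())
    max_train = smallest * (1.0 - vtr) if mt == -1 else mt
    max_valid = smallest * vtr if mv == -1 else mv
    sizes = {}
    for label, area in class_areas.items():
        n_train = int(min(area * (1.0 - vtr), max_train))
        n_valid = int(min(area * vtr, max_valid))
        sizes[label] = (n_train, n_valid)
    return sizes

sizes = sample_sizes({1: 4000, 2: 1200, 3: 600}, mt=1000, mv=1000, vtr=0.5)
print(sizes)
```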
Classifier to use for the training Choice of the classifier to use for the training. Available choices
are:
• LibSVM classifier: This group of parameters allows you to set the SVM classifier parameters.
– SVM Kernel Type: SVM Kernel Type.
– Cost parameter C: SVM models have a cost parameter C (1 by default) to control the
trade-off between training errors and forcing rigid margins.
– Parameters optimization: SVM parameters optimization flag.
• SVM classifier (OpenCV): This group of parameters allows you to set the SVM classifier parameters. See the complete documentation here: http://docs.opencv.org/modules/ml/doc/support_vector_machines.html.
– SVM Model Type: Type of SVM formulation.
– SVM Kernel Type: SVM Kernel Type.
– Cost parameter C: SVM models have a cost parameter C (1 by default) to control the
trade-off between training errors and forcing rigid margins.
– Parameter nu of a SVM optimization problem (NU SVC / ONE CLASS): Parameter nu of a SVM optimization problem.
– Parameter coef0 of a kernel function (POLY / SIGMOID): Parameter coef0 of a
kernel function (POLY / SIGMOID).
– Parameter gamma of a kernel function (POLY / RBF / SIGMOID): Parameter
gamma of a kernel function (POLY / RBF / SIGMOID).
– Parameter degree of a kernel function (POLY): Parameter degree of a kernel function
(POLY).
– Parameters optimization: SVM parameters optimization flag.
If set to True, the optimal SVM parameters will be estimated. Parameters are considered optimal by OpenCV when the cross-validation estimate of the test set error is minimal. Finally, the SVM training process is run 10 times with these optimal parameters over subsets corresponding to 1/10th of the training samples, using k-fold cross-validation (with k = 10).
If set to False, the SVM classification process will be run once with the currently set input SVM parameters over the training samples.
Thus, even with identical input SVM parameters and a similar random seed, the output SVM models will differ according to the method used (optimized or not), because the samples are not identically processed within OpenCV.
• Boost classifier: This group of parameters allows you to set the Boost classifier parameters. See the complete documentation here: http://docs.opencv.org/modules/ml/doc/boosting.html.
– Boost Type: Type of Boosting algorithm.
– Weak count: The number of weak classifiers.
– Weight Trim Rate: A threshold between 0 and 1 used to save computational time. Samples whose total summed weight is <= (1 - weight trim rate) do not participate in the next iteration of training. Set this parameter to 0 to turn off this functionality.
– Maximum depth of the tree: Maximum depth of the tree.
• Decision Tree classifier: This group of parameters allows you to set the Decision Tree classifier parameters. See the complete documentation here: http://docs.opencv.org/modules/ml/doc/decision_trees.html.
– Maximum depth of the tree: The training algorithm attempts to split each node while
its depth is smaller than the maximum possible depth of the tree. The actual depth may
be smaller if the other termination criteria are met, and/or if the tree is pruned.
– Minimum number of samples in each node: If all absolute differences between an
estimated value in a node and the values of the train samples in this node are smaller
than this regression accuracy parameter, then the node will not be split.
– Termination criteria for regression tree:
– Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.
– K-fold cross-validations: If cv folds >1, then it prunes a tree with K-fold cross-
validation where K is equal to cv folds.
– Set Use1seRule flag to false: If true, then a pruning will be harsher. This will make a
tree more compact and more resistant to the training data noise but a bit less accurate.
– Set TruncatePrunedTree flag to false: If true, then pruned branches are physically
removed from the tree.
• Gradient Boosted Tree classifier: This group of parameters allows you to set the Gradient Boosted Tree classifier parameters. See the complete documentation here: http://docs.opencv.org/modules/ml/doc/gradient_boosted_trees.html.
– Number of boosting algorithm iterations: Number ”w” of boosting algorithm iterations, with w*K being the total number of trees in the GBT model, where K is the number of output classes.
– Regularization parameter: Regularization parameter.
– Portion of the whole training set used for each algorithm iteration: Portion of the
whole training set used for each algorithm iteration. The subset is generated randomly.
– Maximum depth of the tree: The training algorithm attempts to split each node while
its depth is smaller than the maximum possible depth of the tree. The actual depth may
be smaller if the other termination criteria are met, and/or if the tree is pruned.
• Artificial Neural Network classifier: This group of parameters allows you to set the Artificial Neural Network classifier parameters. See the complete documentation here: http://docs.opencv.org/modules/ml/doc/neural_networks.html.
– Train Method Type: Type of training method for the multilayer perceptron (MLP)
neural network.
– Number of neurons in each intermediate layer: The number of neurons in each
intermediate layer (excluding input and output layers).
• Random forests classifier: This group of parameters allows you to set the Random Forests classifier parameters.
– Size of the randomly selected subset of features at each tree node: The size of the
subset of features, randomly selected at each tree node, that are used to find the best
split(s). If you set it to 0, then the size will be set to the square root of the total number
of features.
– Maximum number of trees in the forest: The maximum number of trees in the forest. Typically, the more trees you have, the better the accuracy. However, the improvement in accuracy generally diminishes and reaches an asymptote for a certain number of trees. Keep in mind that increasing the number of trees also increases the prediction time linearly.
– Sufficient accuracy (OOB error): Sufficient accuracy (OOB error).
• KNN classifier: This group of parameters allows you to set the KNN classifier parameters. See the complete documentation here: http://docs.opencv.org/modules/ml/doc/k_nearest_neighbors.html.
– Number of Neighbors: The number of neighbors to use.
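The Boost classifier's weight trim rate, described above, can be illustrated with a small Python sketch (a toy reading of the rule, not the OpenCV implementation):

```python
# Toy reading of the Boost weight-trim rule: sort sample weights in
# decreasing order and keep samples until their cumulative weight reaches
# weight_trim_rate of the total; the light tail (summed weight
# <= 1 - weight_trim_rate) sits out the next boosting iteration.
# A rate of 0 keeps every sample.

def trim_samples(weights, weight_trim_rate):
    if weight_trim_rate == 0:
        return list(range(len(weights)))
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    total = sum(weights)
    kept, cumulative = [], 0.0
    for i in order:
        if cumulative >= weight_trim_rate * total:
            break
        kept.append(i)
        cumulative += weights[i]
    return sorted(kept)

kept = trim_samples([0.5, 0.3, 0.1, 0.05, 0.05], weight_trim_rate=0.9)
print(kept)
```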
Set user defined seed: Set a specific seed, with an integer value.
Load otb application from xml file: Load the application parameters from an XML file.
Save otb application to xml file: Save the application parameters to an XML file.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
5.9 Segmentation
Connected-component segmentation and object-based image filtering of the input image according to user-defined criteria.
Detailed description
This application allows you to perform masking, connected-component segmentation, and object-based image filtering. First, and optionally, a mask can be built based on user-defined criteria to select the pixels of the image which will be segmented. Then a connected-component segmentation is performed, with a user-defined criterion deciding whether two neighbouring pixels belong to the same segment or not. After this segmentation step, an object-based image filtering is applied using another user-defined criterion reasoning on segment properties, such as shape or radiometric attributes. Criteria are mathematical expressions analysed by the MuParser library (http://muparser.sourceforge.net/). For instance, the expression ”((b1>80) and intensity>95)” will merge two neighbouring pixels into a single segment if their intensity is more than 95 and their value in the first image band is more than 80. See the parameters documentation for a list of available attributes. The output of the object-based image filtering is vectorized and can be written in Shapefile or KML format. If the input image is in raw geometry, the resulting polygons will be transformed to WGS84 using sensor modelling before writing, to ensure consistency with GIS software. For this purpose, a Digital Elevation Model can be provided to the application. The whole processing is done on a per-tile basis for large images, so this application can handle images of arbitrary size.
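The criterion from the example above, ”((b1>80) and intensity>95)”, can be mimicked per pixel pair in plain Python (illustrative only; the real application evaluates the expression text with MuParser):

```python
# Plain-Python mimic of the example criterion; attribute names (b1,
# intensity) follow the expression, and the pixel dictionaries are made up.

def satisfies(pixel):
    # "((b1>80) and intensity>95)"
    return (pixel["b1"] > 80) and (pixel["intensity"] > 95)

def same_segment(p, q):
    """Two neighbouring pixels are merged when the criterion holds for both."""
    return satisfies(p) and satisfies(q)

a = {"b1": 90, "intensity": 100}
b = {"b1": 85, "intensity": 120}
c = {"b1": 50, "intensity": 120}
merged_ab = same_segment(a, b)
merged_ac = same_segment(a, c)
print(merged_ab, merged_ac)
```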
Parameters
This section describes in detail the parameters available for this application. Table 5.61, page 266 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is ConnectedComponentSegmentation.
Elevation management This group of parameters allows you to manage elevation values. Supported formats are SRTM, DTED, or any GeoTIFF. The DownloadSRTMTiles application can be a useful tool to list and download tiles related to a product.
• DEM directory: This parameter allows you to select a directory containing Digital Elevation Model tiles.
• Geoid File: Use a geoid grid to get the height above the ellipsoid when there is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles. A version of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter allows you to set the default height above the ellipsoid when there is no DEM available, no coverage for some points, or pixels with no data in the DEM tiles, and no geoid file has been set. It is also used by some applications as an average elevation value.
Load otb application from xml file: Load the application parameters from an XML file.
Save otb application to xml file: Save the application parameters to an XML file.
Example
otbcli_ConnectedComponentSegmentation -in ROI_QB_MUL_4.tif -mask "((b1>80) * intensity>95)" -expr "distance<10" -minsize 15 -obia "SHAPE_Elongation>8" -out ConnectedComponentSegmentation.shp
To run this example from Python, use the following code snippet:
#!/usr/bin/python
ConnectedComponentSegmentation.SetParameterString("mask", "((b1>80) * intensity>95)")
Limitations
Due to the tiling scheme in case of large images, some segments can be arbitrarily split across
multiple tiles.
Authors
Detailed description
This application compares a machine segmentation (MS) with a partial ground truth segmentation
(GT). The Hoover metrics are used to estimate scores for correct detection, over-segmentation, under-segmentation, and missed detection.
Parameters
This section describes in detail the parameters available for this application. Table 5.62, page 269 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is HooverCompareSegmentation.
• Colored ground truth output: The colored ground truth output image.
• Colored machine segmentation output: The colored machine segmentation output image.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
Detailed description
This application performs the second step of the exact Large-Scale Mean-Shift segmentation workflow (LSMS). The filtered range image and spatial image should be created with the MeanShiftSmoothing application, with the modesearch parameter disabled. If the spatial image is not set, the application will only process the range image, and the spatial radius parameter will not be taken into account. This application will produce a labeled image where neighbouring pixels whose range distance is below the range radius (and optionally whose spatial distance is below the spatial radius) are grouped together into the same cluster. For large images, one can use the nbtilesx and nbtilesy parameters for tile-wise processing, with the guarantee of identical results. Please note that this application will generate a lot of temporary files (as many as the number of tiles), and will therefore require twice the size of the final result in terms of disk space. The cleanup option (activated by default) allows you to remove all temporary files as soon as they are no longer needed (if cleanup is activated, tmpdir is set, and tmpdir did not exist before running the application, it will be removed as well during cleanup). The tmpdir option allows you to define a directory in which to write the temporary files. Please also note that the output image type should be set to uint32 to ensure that enough labels are available.
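The grouping rule described above can be sketched on a toy one-dimensional, single-band signal (illustrative only; the function name and the union-find bookkeeping are not part of OTB):

```python
# Toy illustration of the grouping rule: neighbouring pixels whose range
# (radiometric) distance is below the range radius end up in the same
# cluster. A tiny union-find tracks the labels.

def lsms_like_labels(values, range_radius):
    parent = list(range(len(values)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(values) - 1):  # neighbours in 1-D: (i, i + 1)
        if abs(values[i] - values[i + 1]) < range_radius:
            parent[find(i)] = find(i + 1)

    roots = [find(i) for i in range(len(values))]
    relabel = {}
    return [relabel.setdefault(r, len(relabel) + 1) for r in roots]

labels = lsms_like_labels([10.0, 11.0, 50.0, 51.0, 52.0], range_radius=5.0)
print(labels)
```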
Parameters
This section describes in detail the parameters available for this application. Table 5.63, page 272 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is LSMSSegmentation.
Figure 5.63: Parameters table for Exact Large-Scale Mean-Shift segmentation, step 2.
• Filtered image: The filtered image (cf. Adaptive MeanShift Smoothing application).
• Spatial image: The spatial image. Spatial input is the displacement map (output of the
Adaptive MeanShift Smoothing application).
• Output Image: The output image. The output image is the segmentation of the filtered
image. It is recommended to set the pixel type to uint32.
• Range radius: Range radius defining the radius (expressed in radiometry unit) in the multi-
spectral space.
• Temporary files cleaning: If activated, the application will try to clean all the temporary files it created.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
This application is part of the Large-Scale Mean-Shift segmentation workflow (LSMS) and may not
be suited for any other purpose.
Authors
See also
Detailed description
This application performs the third step of the exact Large-Scale Mean-Shift segmentation workflow
(LSMS). Given a segmentation result (label image) and the original image, it will merge regions whose size in pixels is lower than the minsize parameter with the adjacent region with the closest radiometry and an acceptable size. Small regions are processed in order of increasing size: first, all regions whose area is equal to 1 pixel are merged with an adjacent region, then all regions of area equal to 2 pixels, and so on up to regions of area minsize. For large images, one can use the nbtilesx and nbtilesy parameters for tile-wise processing, with the guarantee of identical results.
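The merge order described above (smallest regions first, each merged into the radiometrically closest neighbour) can be sketched in Python (a toy illustration with assumed region/adjacency structures, not the OTB implementation):

```python
# Toy sketch of the merge order: regions are processed by increasing size,
# and each small region is absorbed by the adjacent region whose mean
# radiometry is closest. The region/adjacency structures are made up.

def merge_small_regions(regions, adjacency, minsize):
    """regions: {label: {"size": int, "mean": float}};
    adjacency: {label: set of neighbouring labels}."""
    for size in range(1, minsize):
        for label in sorted(regions):
            if label in regions and regions[label]["size"] == size:
                neighbours = [n for n in adjacency[label] if n in regions]
                if not neighbours:
                    continue
                target = min(
                    neighbours,
                    key=lambda n: abs(regions[n]["mean"] - regions[label]["mean"]),
                )
                # Merge label into target: pool sizes, update the mean.
                a, b = regions[target], regions.pop(label)
                total = a["size"] + b["size"]
                a["mean"] = (a["mean"] * a["size"] + b["mean"] * b["size"]) / total
                a["size"] = total
                for n in adjacency[label]:
                    if n != target:
                        adjacency[target].add(n)
                        adjacency[n].add(target)
    return regions

regions = {1: {"size": 1, "mean": 10.0},
           2: {"size": 5, "mean": 12.0},
           3: {"size": 5, "mean": 40.0}}
adjacency = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
merged = merge_small_regions(regions, adjacency, minsize=3)
print(sorted(merged), merged[2])
```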
Parameters
This section describes in detail the parameters available for this application. Table 5.64, page 274 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is LSMSSmallRegionsMerging.
Figure 5.64: Parameters table for Exact Large-Scale Mean-Shift segmentation, step 3 (optional).
• Minimum Region Size: Minimum Region Size. If, after the segmentation, a region is of
size lower than this criterion, the region is merged with the ”nearest” region (radiometrically).
• Size of tiles in pixel (X-axis): Size of tiles along the X-axis.
• Size of tiles in pixel (Y-axis): Size of tiles along the Y-axis.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
This application is part of the Large-Scale Mean-Shift segmentation workflow (LSMS) and may not
be suited for any other purpose.
Authors
See also
Detailed description
This application performs the fourth step of the exact Large-Scale Mean-Shift segmentation workflow (LSMS). Given a segmentation result (label image), which may or may not have been processed for small-region merging, it will convert it to a GIS vector file containing one polygon per segment. Each polygon contains additional fields: the mean and variance of each channel of the input image (the in parameter), the segmentation image label, and the number of pixels in the polygon. For large images, one can use the nbtilesx and nbtilesy parameters for tile-wise processing, with the guarantee of identical results.
Parameters
This section describes in detail the parameters available for this application. Table 5.65, page 276 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is LSMSVectorization.
Figure 5.65: Parameters table for Exact Large-Scale Mean-Shift segmentation, step 4.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
This application is part of the Large-Scale Mean-Shift segmentation workflow (LSMS) and may not
be suited for any other purpose.
Authors
See also
5.9.6 Segmentation
Performs segmentation of an image, and outputs either a raster or a vector file. In vector mode, large input datasets are supported.
Detailed description
In raster mode, the output of the application is a classical image of unique labels identifying the segmented regions. The labeled output can be passed to the ColorMapping application to render regions with contrasted colours. Please note that this mode loads the whole input image into memory, and as such cannot handle large images.
To segment large data, one can use the vector mode. In this case, the output of the application is a vector file or database. The input image is split into tiles (whose size can be set using the tilesize parameter), and each tile is loaded, segmented with the chosen algorithm, vectorized, and written into the output file or database. This piecewise behaviour ensures that memory will never get overloaded, and that images of any size can be processed. There are a few more options in the vector mode. The simplify option allows you to simplify the geometry (i.e. remove nodes in polygons) according to a user-defined tolerance. The stitch option allows the application to try to stitch together polygons corresponding to segmented regions that may have been split by the tiling scheme.
Parameters
This section describes in detail the parameters available for this application. Table 5.66, page 280 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is Segmentation.
– Scale factor: Scaling of the image before processing. This is useful for images with
narrow decimal ranges (like [0,1] for instance).
• Connected components: Simple pixel-based connected-components algorithm with a user-
defined connection condition.
– Condition: User-defined connection condition, written as a mathematical expression. Available variables are p(i)b(i), intensity p(i), and distance (example of expression: distance < 10).
• Watershed: The traditional watershed algorithm. The height function is the gradient magnitude of the amplitude (square root of the sum of squared bands).
– Depth Threshold: Depth threshold Units in percentage of the maximum depth in the
image.
– Flood Level: Flood level for generating the merge tree from the initial segmentation (between 0 and 1).
• Morphological profiles based segmentation: Segmentation based on morphological profiles, as described in Martino Pesaresi and Jon Atli Benediktsson, Member, IEEE: A new approach for the morphological segmentation of high-resolution satellite imagery. IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 2, February 2001, p. 309-320.
– Profile Size: Size of the profiles
– Initial radius: Initial radius of the structuring element (in pixels)
– Radius step.: Radius step along the profile (in pixels)
– Threshold of the final decision rule: Profile values under the threshold will be ignored.
Processing mode Choice of processing mode, either raster or large-scale. Available choices are:
• Tile-based large-scale segmentation with vector output: In this mode, the application will output a vector file or database, and process the input image piecewise. This allows segmentation of very large images.
– Output vector file: The output vector file or database (the name can be anything understood by OGR).
– Writing mode for the output vector file: This allows you to set the writing behaviour for the output vector file. Please note that the actual behaviour depends on the file format.
– Mask Image: Only pixels whose mask value is strictly positive will be segmented.
– 8-neighbor connectivity: Activate 8-Neighborhood connectivity (default is 4).
– Stitch polygons: Scan polygons on each side of tiles and stitch polygons which connect
by more than one pixel.
– Minimum object size: Objects whose size is below the minimum object size (area in
pixels) will be ignored during vectorization.
– Simplify polygons: Simplify polygons according to a given tolerance (in pixels). This option allows you to reduce the size of the output file or database.
– Layer name: Name of the layer in the vector file or database (default is Layer).
– Geometry index field name: Name of the field holding the geometry index in the
output vector file or database.
– Tiles size: User-defined tile size for tile-based segmentation. The optimal tile size is selected according to available RAM if null.
– Starting geometry index: Starting value of the geometry index field
– OGR options for layer creation: A list of layer creation options in the form KEY=VALUE that will be passed directly to OGR without any validity checking. Options may depend on the file format, and can be found in the OGR documentation.
• Standard segmentation with labeled raster output: In this mode, the application will output a standard labeled raster. This mode cannot handle large data.
– Output labeled image: The output labeled image.
Load otb application from xml file: Load the application parameters from an XML file.
Save otb application to xml file: Save the application parameters to an XML file.
Examples
Example 1 Example of use with vector mode and watershed segmentation. To run this example in command-line, use the following:
otbcli_Segmentation -in QB_Toulouse_Ortho_PAN.tif -mode vector -mode.vector.out SegmentationVector.sqlite -filter watershed
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Segmentation.SetParameterString("mode.vector.out", "SegmentationVector.sqlite")
Example 2 Example of use with raster mode and mean-shift segmentation. To run this example in command-line, use the following:
otbcli_Segmentation -in QB_Toulouse_Ortho_PAN.tif -mode raster -mode.raster.out SegmentationRaster.tif uint16 -filter meanshift
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Segmentation.SetParameterString("mode.raster.out", "SegmentationRaster.tif")
Segmentation.SetParameterOutputImagePixelType("mode.raster.out", 3)
Limitations
In raster mode, the application cannot handle large input images. The stitching step of the vector mode may become slow with very large input images.
MeanShift filter results depend on the number of threads used.
Watershed and multiscale geodesic morphology segmentation will be performed on the amplitude
of the input image.
Authors
See also
• MeanShiftSegmentation
5.10 Miscellaneous
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.67, page 284 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is BandMath.
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
BandMath.SetParameterString("exp", "cos(im1b1)+im2b1*im3b1-im3b2+ndvi(im3b3,im3b4)")
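A per-pixel Python equivalent of the expression in this example, assuming the usual normalised-difference definition for the ndvi helper:

```python
# Plain-Python, per-pixel equivalent of the BandMath expression
# "cos(im1b1)+im2b1*im3b1-im3b2+ndvi(im3b3,im3b4)". The ndvi definition
# below, (nir - r) / (nir + r), is the usual one and is an assumption of
# this sketch; the sample band values are made up.

import math

def ndvi(r, nir):
    return (nir - r) / (nir + r)

def expression(im1b1, im2b1, im3b1, im3b2, im3b3, im3b4):
    return math.cos(im1b1) + im2b1 * im3b1 - im3b2 + ndvi(im3b3, im3b4)

value = expression(0.0, 2.0, 3.0, 1.0, 10.0, 30.0)
print(value)
```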
Limitations
None
Authors
Detailed description
This application computes the MSE (Mean Squared Error), MAE (Mean Absolute Error) and PSNR (Peak Signal to Noise Ratio) between a channel of two images (reference and measurement). The user has to set the channel to use and can specify a ROI.
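The three measures can be written out for a single channel in plain Python (illustrative; taking the PSNR peak value as the maximum of the reference channel is an assumption of this sketch, not necessarily the OTB convention):

```python
# The three comparison measures named above, computed for one channel of a
# reference image and a measurement image.

import math

def compare_channels(reference, measurement):
    n = len(reference)
    diffs = [r - m for r, m in zip(reference, measurement)]
    mse = sum(d * d for d in diffs) / n            # mean squared error
    mae = sum(abs(d) for d in diffs) / n           # mean absolute error
    peak = max(reference)                          # assumed peak value
    psnr = float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
    return mse, mae, psnr

mse, mae, psnr = compare_channels([100, 120, 140], [98, 121, 143])
print(mse, mae, psnr)
```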
Parameters
This section describes in detail the parameters available for this application. Table 5.68, page 286 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is CompareImages.
Load otb application from xml file: Load the application parameters from an XML file.
Save otb application to xml file: Save the application parameters to an XML file.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
Detailed description
The application applies a linear unmixing algorithm to a hyperspectral data cube. This method supposes that the mixture between materials in the scene is macroscopic, and simulates a linear mixing model of spectra.
The Linear Mixing Model (LMM) acknowledges that the reflectance spectrum associated with each pixel is a linear combination of pure materials in the recovery area, commonly known as endmembers. Endmembers can be estimated using the VertexComponentAnalysis application.
The application allows you to estimate the abundance maps with several algorithms: Unconstrained Least Square (ucls), Fully Constrained Least Square (fcls), Image Space Reconstruction Algorithm (isra), Non-negative Constrained Least Square (ncls), and Minimum Dispersion Constrained Non-negative Matrix Factorization (MDMDNMF).
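The Linear Mixing Model and the unconstrained least-squares idea behind ucls can be illustrated numerically with two endmembers (a dependency-free sketch using Cramer's rule on the normal equations; not the OTB implementation):

```python
# Each pixel spectrum is modelled as a linear combination of endmember
# spectra; the abundances are recovered by solving the normal equations
# (E^T E) a = E^T p. Two endmembers, solved with Cramer's rule.

def unmix_ucls_2(endmembers, pixel):
    e1, e2 = endmembers
    a11 = sum(x * x for x in e1)
    a12 = sum(x * y for x, y in zip(e1, e2))
    a22 = sum(y * y for y in e2)
    b1 = sum(x * p for x, p in zip(e1, pixel))
    b2 = sum(y * p for y, p in zip(e2, pixel))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

e1 = [1.0, 0.0, 1.0]
e2 = [0.0, 1.0, 1.0]
pixel = [0.3, 0.7, 1.0]  # exactly 0.3*e1 + 0.7*e2
a1, a2 = unmix_ucls_2([e1, e2], pixel)
print(a1, a2)
```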
Parameters
This section describes in detail the parameters available for this application. Table 5.69, page 289 presents a summary of these parameters and the parameter keys to be used on the command line and in programming languages. The application key is HyperspectralUnmixing.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
See also
• VertexComponentAnalysis
Detailed description
This application exports the input image as a KMZ product that can be displayed in the Google Earth software. The user can set the size of the product, and add a logo and a legend to the product. Furthermore, to obtain a product that fits the relief, a DEM can be used.
Parameters
This section describes in detail the parameters available for this application. Table 5.70, page 291
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is KmzExport.
Output .kmz product: Output Kmz product directory (with .kmz extension)
Tile Size: Size of the tiles in the kmz product, in number of pixels (default = 512).
292 Chapter 5. Applications Reference Documentation
Image logo: Path to the image logo to add to the KMZ product.
Image legend: Path to the image legend to add to the KMZ product.
Elevation management: This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool to list and download tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation Model tiles.
• Geoid File: Use a geoid grid to get the height above the ellipsoid when there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
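The snippet above was truncated in this edition. The sketch below mirrors the parameters described above through the otbApplication Python bindings; the keys ("in", "out", "tilesize", "logo", "legend") are assumptions derived from the parameter names, so confirm them with `otbcli_KmzExport -help`.

```python
# Hypothetical driver for the KmzExport application.
# All parameter keys below are assumptions, not verified names.

def kmz_params(image, kmz, tilesize=512, logo=None, legend=None):
    """Collect the (assumed) parameter keys; the 512 default matches
    the tile size documented above."""
    params = {"in": image, "out": kmz, "tilesize": str(tilesize)}
    if logo is not None:
        params["logo"] = logo
    if legend is not None:
        params["legend"] = legend
    return params

def run_kmz_export(image, kmz, **options):
    import otbApplication  # requires the OTB Python bindings on PYTHONPATH
    app = otbApplication.Registry.CreateApplication("KmzExport")
    for key, value in kmz_params(image, kmz, **options).items():
        app.SetParameterString(key, value)
    app.ExecuteAndWriteOutput()
```

Optional parameters (logo, legend) are simply omitted from the dictionary when not supplied, so the application falls back to its defaults.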
Limitations
None
Authors
See also
• Conversion
Detailed description
Generate vector data from OpenStreetMap data. A DEM can be used. By default, the entire
layer is downloaded; an image can be used as support for the OSM data. The application can also
list the classes available in the layers. This application requires an Internet connection. Information about the
OSM project: http://www.openstreetmap.fr/
Parameters
This section describes in detail the parameters available for this application. Table 5.71, page 294
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is OSMDownloader.
Figure 5.71: Parameters table for the Open Street Map layers import application.
Elevation management: This group of parameters allows managing elevation values. Supported
formats are SRTM, DTED or any GeoTIFF. The DownloadSRTMTiles application can be a useful
tool to list and download tiles related to a product.
• DEM directory: This parameter selects a directory containing Digital Elevation Model tiles.
• Geoid File: Use a geoid grid to get the height above the ellipsoid when there is no DEM
available, no coverage for some points, or pixels with no data in the DEM tiles. A version
of the geoid can be found on the OTB website (http://hg.orfeo-toolbox.org/OTB-Data/raw-file/404aa6e4b3e0/Input/DEM/egm96.grd).
• Default elevation: This parameter sets the default height above the ellipsoid when there
is no DEM available, no coverage for some points or pixels with no data in the DEM tiles,
and no geoid file has been set. It is also used by some applications as an average elevation
value.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
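The snippet above was truncated in this edition. The sketch below shows a possible Python call through the otbApplication bindings; the parameter keys ("out", "support", "key", "value") are assumptions suggested by the description above, so verify them with `otbcli_OSMDownloader -help`. An Internet connection is required at run time.

```python
# Hypothetical driver for the OSMDownloader application.
# All parameter keys below are assumptions, not verified names.

def osm_params(out_vector, support_image=None, osm_key=None, osm_value=None):
    """Collect the (assumed) parameter keys; osm_key/osm_value would
    filter the OSM tags (e.g. key "highway")."""
    params = {"out": out_vector}
    if support_image is not None:
        params["support"] = support_image
    if osm_key is not None:
        params["key"] = osm_key
    if osm_value is not None:
        params["value"] = osm_value
    return params

def run_osm_download(out_vector, **options):
    import otbApplication  # requires the OTB Python bindings on PYTHONPATH
    app = otbApplication.Registry.CreateApplication("OSMDownloader")
    for key, value in osm_params(out_vector, **options).items():
        app.SetParameterString(key, value)
    app.ExecuteAndWriteOutput()
```

When no key/value filter is given, the dictionary only carries the output path, matching the default behaviour of downloading the entire layer.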
Limitations
None
Authors
See also
• Conversion
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.72, page 296
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is ObtainUTMZoneFromGeoPoint.
Figure 5.72: Parameters table for Obtain UTM Zone From Geo Point.
• Load otb application from xml file: Load otb application from xml file
• Save otb application to xml file: Save otb application to xml file
Example
Obtain a UTM Zone. To run this example in command-line, use the following:
otbcli_ObtainUTMZoneFromGeoPoint -lat 10.0 -lon 124.0
To run this example from Python, use the following code snippet:
#!/usr/bin/python
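The snippet above was truncated in this edition. The sketch below mirrors the command-line example above via the otbApplication Python bindings: the "lat" and "lon" keys are taken from that example, while the output key "utm" is an assumption. A pure helper reproduces the standard UTM zone formula as a cross-check.

```python
# Python counterpart of the command-line example above.

def expected_utm_zone(lon):
    """Standard UTM zone formula: 6-degree bands counted from 180 W
    (ignores the polar and Norway exceptions)."""
    return int((lon + 180.0) // 6) + 1

def run_obtain_utm_zone(lat, lon):
    import otbApplication  # requires the OTB Python bindings on PYTHONPATH
    app = otbApplication.Registry.CreateApplication("ObtainUTMZoneFromGeoPoint")
    app.SetParameterFloat("lat", lat)  # keys taken from the CLI example above
    app.SetParameterFloat("lon", lon)
    app.Execute()
    return app.GetParameterInt("utm")  # output key is an assumption
```

For the coordinates of the command-line example (lat 10.0, lon 124.0), the formula gives zone 51, which the application should confirm.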
Limitations
None
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.73, page 298
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
Limitations
None
Authors
Detailed description
Parameters
This section describes in detail the parameters available for this application. Table 5.74, page 300
presents a summary of these parameters and the parameter keys to be used in command-line and
programming languages. Application key is VertexComponentAnalysis.
Example
To run this example from Python, use the following code snippet:
#!/usr/bin/python
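The snippet above was truncated in this edition. The sketch below shows a possible Python call producing the endmembers used by HyperspectralUnmixing; the parameter keys ("in", "ne", "outendm") are assumptions, so confirm them with `otbcli_VertexComponentAnalysis -help`.

```python
# Hypothetical driver for the VertexComponentAnalysis application.
# Keys "in", "ne" and "outendm" are assumptions, not verified names.

def vca_params(cube, num_endmembers, out_endmembers):
    """Collect the (assumed) parameter keys for VertexComponentAnalysis."""
    return {"in": cube, "ne": str(num_endmembers), "outendm": out_endmembers}

def run_vca(cube, num_endmembers, out_endmembers):
    import otbApplication  # requires the OTB Python bindings on PYTHONPATH
    app = otbApplication.Registry.CreateApplication("VertexComponentAnalysis")
    params = vca_params(cube, num_endmembers, out_endmembers)
    app.SetParameterString("in", params["in"])
    app.SetParameterInt("ne", num_endmembers)  # number of endmembers to extract
    app.SetParameterString("outendm", params["outendm"])
    app.ExecuteAndWriteOutput()
```

The resulting endmembers image could then be fed to HyperspectralUnmixing (as its assumed "ie" input) to compute the abundance maps.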
Limitations
None
Authors