Practical C++ Metaprogramming
Modern Techniques for Accelerated Development
Editors: Nan Barber and Brian Foster
Production Editor: Colleen Lobner
Copyeditor: Octal Publishing, Inc.
Proofreader: Rachel Head
Interior Designer: David Futato
Cover Designer: Randy Comer
Illustrator: Rebecca Demarest
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Practical C++
Metaprogramming, the cover image, and related trade dress are trademarks of
O’Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-491-95504-8
Table of Contents

Preface

1. Introduction
   A Misunderstood Technique
   What Is Metaprogramming?
   How to Get Started with Metaprogramming
   Summary
Preface
The good news is that you don’t need to master C++ metaprogramming, because you are standing on the shoulders of giants.
In this report, we will progressively expose you to the technique and
its practical applications, and give you a list of tools that you can use
to get right to it.
Then, depending on your tastes and your aspirations, you can
decide how deep down the rabbit hole you want to go.
Understanding Metaprogramming
Metaprogramming is a technique that, properly used, can greatly increase your productivity. Improperly used, though, it can result in unmaintainable code and greatly increased development time.
Dismissing metaprogramming based on a preconceived notion or dogma is counterproductive. Nevertheless, properly understanding whether the technique suits your needs is paramount for fruitful and rewarding use.
An analogy we like to use is that you should see a metaprogram as
a robot you program to do a job for you. After you’ve programmed
the robot, it will be happy to do the task for you a thousand times,
without error. Additionally, the robot is faster than you and more
precise.
If you do something wrong, though, it might not be immediately obvious where the problem is. Is it a problem in how you programmed the robot? Is it a bug in the robot? Or is your program correct but the result unexpected?
That’s what makes metaprogramming more difficult: the feedback isn’t immediate, and because you’ve added an intermediary, you’ve added more variables to the equation.
That’s also why, before using this technique, you must ensure that you know how to program the robot.
Conventions Used in This Report
The following typographical conventions are used in this report:
Italic
Indicates new terms, URLs, email addresses, filenames, and file
extensions.
Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
Acknowledgments
This report would probably not exist without the work of Aleksey Gurtovoy and David Abrahams, authors of the Boost.MPL library and the reference book C++ Template Metaprogramming (Addison-Wesley Professional).
More recently, Eric Niebler and Peter Dimov paved the way to what
modern C++ template metaprogramming should look like. They
have been greatly influential in our work.
We would also like to thank all of the contributors to the Brigand library and Louis Dionne for his metaprogramming library benchmark.
Finally, we would like to thank Jon Kalb and Michael Caisse for their
reviews, as well as our families, friends, and coworkers, who have
been incredibly supportive.
CHAPTER 1
Introduction
If you grabbed this report, it means that you have at least some curiosity about C++ metaprogramming, a topic that often generates outright rejection.
Before we talk about template metaprogramming, let’s ask ourselves
a question: why do we violently reject some techniques, even before
studying them?
There are, of course, many valid reasons to reject something new,
because, let’s be frank, sometimes concepts are just plain nonsense
or totally irrelevant to the task at hand.
However, there is also a lot to be said about managing your own psychology when accepting novelty, and recognizing our own mental barriers is the best way to prevent them from growing.
The purpose of this report is to demonstrate that understanding
C++ metaprogramming will make you a better C++ programmer, as
well as a better software engineer in general.
A Misunderstood Technique
Like any technique, metaprogramming can be overused and misunderstood. The most common reproaches are that it makes code more difficult to read and understand, and that it has no real benefit.
As you progress along the path of software engineering, the techniques you learn become more and more advanced. You could opt to rely solely on simple techniques and solve complex problems via a composition of these techniques, but you would be missing an opportunity to be more concise, more productive, and sometimes more efficient.
Imagine that you are given an array and that you need to fill it with
increasing integers. You could write the following function:
void f(int * p, size_t l)
{
    for(size_t i = 0; i < l; ++i)
    {
        p[i] = i;
    }
}
// ...
int my_array[5];
f(my_array, 5);
Or you could use the Standard Template Library (STL):

int my_array[5];
std::iota(std::begin(my_array), std::end(my_array), 0);
When you are finished, we hope that you will agree with us that it is both useful and accessible.
What Is Metaprogramming?
By definition, metaprogramming is the design of programs whose
input and output are programs themselves. Put another way, it’s
writing code whose job is to write code itself. It can be seen as the
ultimate level of abstraction, as code fragments are actually seen as
data and handled as such.
It might sound esoteric, but it’s actually a well-known practice. If you’ve ever written a Bash script generating C files from a boilerplate file, you’ve done metaprogramming. If you’ve ever written C macros, you’ve done metaprogramming. In another sphere, you could debate whether generating Java classes from a UML schema is not actually just another form of metaprogramming.
In some way, you’ve probably done metaprogramming at various
points in your career without even knowing it.
// in components.h
PROCESS(float, y)
PROCESS(float, z)
PROCESS(float, weight)

// in particle.c
typedef struct
{
#define PROCESS(type, member) type member;
#include "components.h"
#undef PROCESS
} particle_t;
X-macros are a well-tested, pure C-style solution. Like a lot of C-
based solutions, they work quite well and deliver the performance
we expect. We could debate the elegance of this solution, but con‐
sider that a very similar yet more automated system is available
through the Boost.Preprocessor vertical repetition mechanism,
based on self-referencing macros.1
between 1 and an arbitrary limit. Quite mundane, isn’t it? Except that this enumeration was done through warnings at compile time.

Let’s take a moment to ponder the scope of this discovery. It meant that we could turn templates into a very crude and syntactically impractical functional language, which would later be proven by Todd Veldhuizen to be Turing-complete. If your computer science courses need refreshing, this basically means that, given the necessary effort, any function computable by a Turing machine (i.e., a computer) can be turned into a compile-time equivalent using C++ templates. The era of C++ template metaprogramming was coming.
C++ template metaprogramming is a technique based on the use (and abuse) of C++ template properties to perform arbitrary computations at compile time. Even though templates are Turing-complete, we barely need a fraction of this computational power. A classic roster of applications of C++ template metaprogramming includes the following:
For some reason, even with those familiar interfaces, metaprogramming tools continued to be used mostly by experts and were often overlooked and considered unnecessarily complex. The compilation time of metaprograms was also often criticized as hindering a normal, runtime-based development process.

Most of the critiques you may have heard about template metaprogramming stem from this limitation, which no longer applies, as we will see in the rest of this report.
Checking the Memory Model
Is an integer the size of a pointer? Are you compiling on a 32-bit or
64-bit platform? You can have a compile-time check for this:
static_assert(sizeof(void *) == 8, "expected 64-bit platform");
In this case, the program will not compile if the targeted platform
isn’t 64-bit. This is a nice way to detect invalid compiler/platform
usage.
We can, however, do better than that and build a value based on the platform without using macros. Why not use macros? A metaprogram can be much more advanced than a macro, and the error output is generally more precise (i.e., you will get the line where you have the error, whereas with preprocessor macros this is often not the case).
Let’s assume that your program has a read buffer. You might want
the value of this read buffer to be different if you are compiling on a
32-bit platform or a 64-bit platform because on 32-bit platforms you
have less than 3 GB of user space available.
The following program will define a 100 MB buffer value on 32-bit
platforms and 1 GB on 64-bit platforms:
static const std::uint64_t default_buffer_size =
    std::conditional<sizeof(void *) == 8,
        std::integral_constant<std::uint64_t, 1024 * 1024 * 1024>,
        std::integral_constant<std::uint64_t, 100 * 1024 * 1024>
    >::type::value;
Here’s what the equivalent in macros would be:
#ifdef IS_MY_PLATFORM_64
static const std::uint64_t default_buffer_size
    = 1024 * 1024 * 1024;
#else
static const std::uint64_t default_buffer_size
    = 100 * 1024 * 1024;
#endif
The macros will silently set the wrong value if you have a typo in the
macro value, if you forget a header, or if an exotic platform on
which you compile doesn’t have the value properly defined.
Also, it is often very difficult to come up with good macros to detect
the correct platform (although Boost.Predef has now greatly
reduced the complexity of the task).
CHAPTER 2
C++ Metaprogramming
in Practice
it is no longer possible to run any other kind of software without
having a team of senior system administrators perform a week-long
ritual to cleanse the machine.
Last but not least, the SLA only believes in one god, and that god
is The Great Opaque Pointer. All interfaces are made as incoherent as
possible to ensure that you join the writers in an unnamable crazy
laughter, ready to be one with The Great Opaque Pointer.
If you didn’t have several years of experience up your sleeve, you
would advocate a complete rewrite of the SLA—but you know
enough about software engineering to know that “total rewrite” is
another name for “suicide mission.”
Are we dramatizing? Yes, we are. But let’s have a look at a function
of the SLA:
// we assume alpha and beta to be parameters to the mathematical
// model underlying the weather simulation algorithms--any
// resemblance to real algorithms is purely coincidental
void adjust_values(double * alpha1,
                   double * beta1,
                   double * alpha2,
                   double * beta2);
Now let’s have a look at how you designed your application:
class reading
{
    /* stuff */
public:
    double alpha_value(location l, time t) const;
    double beta_value(location l, time t) const;
    /* other stuff */
};
Let us not try to determine what those alpha and beta values are, whether the design makes sense, or what exactly adjust_values does. What we really want to see is how we adapt two pieces of software that have very different logic.
// some code
• Type manipulation
• Being able to work on an arbitrary number of parameters and
iterate on them
In other words, we’d like to write C++ that modifies types and not
values. Template metaprogramming is the perfect tool for compile-
time type manipulations.
Let us take a look at a general case. How could we write a program
that takes a double and transforms it into a pointer to a double?
MagicListOfValues values;
old_f(get_pointers(values));
return values;

The only problem is that we can’t do that.
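What we can do, however, is perform the transformation at the type level. Here is a minimal sketch of such a metafunction; add_ptr is our illustrative name, and the standard library already ships this exact transformation as std::add_pointer:

```cpp
#include <type_traits>

// a metafunction: a template whose "return value" is a nested type
template <typename T>
struct add_ptr
{
    using type = T *;
};

// "calling" the metafunction maps double to double *
static_assert(std::is_same<add_ptr<double>::type, double *>::value,
              "add_ptr<double>::type is double *");
```

Nothing is computed at runtime here; the compiler evaluates the mapping while instantiating the template.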
// convenience function
template <typename F>
using make_tuple_of_params_t =
typename make_tuple_of_params<F>::type;
// ...
}
We now have a tuple of params we can load with the results of our
C++ functions and pass to the C function. The only problem is that
the C function is in the form void(double *, double *, double
*, double *), and we work on values.
We will therefore adapt our make_tuple_of_params metafunction accordingly:
template <typename Ret, typename... Args>
struct make_tuple_of_derefed_params<Ret (Args...)>
{
    using type = std::tuple<std::remove_pointer_t<Args>...>;
};
// ...
}
We just need to load up the results!
Returning auto
In C++14 you don’t need to be explicit about the
return type of a function; the type can be determined
at compile time contextually. Using auto in this case
greatly simplifies the writing of generic functions.
We are getting very close to solving our problem; that is, automating
the generation of facade code to adapt the simulation library to our
distributed system.
But the problem is not fully solved yet because we need to somehow
“iterate” on the functions. We will modify our dispatch function so
that it accepts the tuple of functions as a parameter and takes an
index, as demonstrated here:
template <std::size_t FunctionIndex,
          typename FunctionsTuple,
          typename Params,
          std::size_t... I>
auto dispatch_params(FunctionsTuple & functions,
                     Params & params,
                     std::index_sequence<I...>)
{
    return (std::get<FunctionIndex>(functions))
               (std::get<I>(params)...);
}
make_tuple_of_derefed_params_t<LegacyFunction> params =
    std::tuple_cat(
        dispatch_functions(functions,
                           std::make_index_sequence<functions_count>(),
                           params1,
                           std::make_index_sequence<params_count>()),
        dispatch_functions(functions,
                           std::make_index_sequence<functions_count>(),
                           params2,
                           std::make_index_sequence<params_count>()));

/* rest of the code */
}
As you can see, the logic of our function makes generalization to an
arbitrary list of parameters possible.
using tuple_type =
    make_tuple_of_derefed_params_t<LegacyFunction>;

tuple_type t =
    std::tuple_cat(
        dispatch_functions(functions,
                           std::make_index_sequence<functions_count>(),
                           params1,
                           std::make_index_sequence<params_count>()),
        dispatch_functions(functions,
                           std::make_index_sequence<functions_count>(),
                           params2,
                           std::make_index_sequence<params_count>()));

dispatch_to_c(legacy,
              params,
              std::make_index_sequence<t_count>());
Summary
Did we accomplish our mission? We’d like to believe that, yes,
we did.
With the use of a couple of template metaprogramming tricks, we managed to drastically reduce the amount of code required to get the job done. That’s the immediate benefit of automating code generation. Less code means fewer errors, less testing, less maintenance, and potentially better performance.

This is the strength of metaprogramming. You spend more time carefully thinking about a small number of advanced functions, so you don’t need to waste your time on many trivial functions.
Now that you have been exposed to template metaprogramming,
you probably have many questions. How can I check that my
parameters are correct? How can I get meaningful error messages
if I do something wrong? How can I store a pure list of types,
without values?
More importantly, can these techniques be made reusable?
Let’s take it from the beginning…
Values at Compile Time
Compile-time computations need to operate on values defined as
valid at compile time. Here’s what this notion of compile-time values
covers:
Pro Tip
Most of the basic needs for type manipulation are provided by type_traits. We strongly advise any metaprogrammer-in-training to become highly familiar with this standard component.
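For example, the pointer transformations we keep running into are already provided there, ready to use:

```cpp
#include <type_traits>

// ready-made metafunctions from <type_traits>
static_assert(std::is_same<std::remove_pointer<double *>::type, double>::value,
              "stripping the pointer yields double");
static_assert(std::is_same<std::add_pointer<double>::type, double *>::value,
              "adding a pointer yields double *");
```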
Type Containers
C++ runtime development relies on the notion of containers to express complex data manipulations. Such containers can be defined as data structures holding a variable number of values and following a given schema of storage (contiguous cells, linked cells, and so on). We can then apply operations and algorithms to containers to modify, query, remove, or insert values. The STL provides pre-made containers, like list, set, and vector.

How can we end up with a similar concept at compile time? Obviously, we cannot request memory to be allocated to store our values. Moreover, our “values” actually being types, such storage makes little sense. The logical leap we need to make is to understand that containers are also values, which happen to contain zero or more other values; if we apply our systematic “values are types” motto, this means that compile-time containers must be types that contain zero or more other types. But how can a type contain another type?

There are multiple solutions to this issue.
Meta-Axiom #2
Any template class accepting a variable number of type
parameters can be considered a type container.
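Following this axiom, the simplest possible type container is an empty variadic template; type_list is our illustrative name, not a standard one:

```cpp
#include <type_traits>

// no members, no storage: the parameter pack itself is the "content"
template <typename... Ts>
struct type_list {};

// a container holding three types
using my_types = type_list<int, double, char>;

static_assert(std::is_same<my_types, type_list<int, double, char>>::value,
              "my_types holds int, double, char");
```

Note that nothing is ever instantiated or stored; the types live purely in the template's parameter list.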
Compile-Time Operations
We have now defined type containers as arbitrary template classes with at least one template parameter pack. Operations on such containers are defined by using the intrinsic C++ support for template parameter packs.
We can do all of the following:
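For instance, adding a type at the back of a container can be sketched as follows (push_back is our illustrative name):

```cpp
#include <tuple>
#include <type_traits>

template <typename List, typename T>
struct push_back;

// match any variadic container and extend its parameter pack
template <template <typename...> class List, typename... Ts, typename T>
struct push_back<List<Ts...>, T>
{
    using type = List<Ts..., T>;
};

static_assert(std::is_same<push_back<std::tuple<int>, double>::type,
                           std::tuple<int, double>>::value,
              "double is appended at the back");
```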
Exercise
Can you infer the implementation for push_front?
Removal of existing elements in a type container follows a similar reasoning but relies on the recursive structure of the parameter pack. Fear not! As we said earlier, recursion in template metaprogramming is usually ill advised, but here we will only exploit the structure of the parameter pack; we won’t do any loops. Let’s begin with the bare-bones code for a hypothetical remove_front algorithm:
template<class List> struct remove_front;

A list List<Elements...> is in one of two states: either it has at least a head element, or it is empty, in which case it can be written as List<>.

If we know that a head type exists, we can remove it. If the list is empty, the job is already done. The code then reflects this process:

template<class List> struct remove_front;
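A sketch of the two matching specializations (the empty-list case simply returns the list unchanged):

```cpp
#include <tuple>
#include <type_traits>

template <typename List>
struct remove_front;  // primary template, as declared above

// the list has a head: drop it
template <template <typename...> class List, typename Head, typename... Rest>
struct remove_front<List<Head, Rest...>>
{
    using type = List<Rest...>;
};

// the list is empty: the job is already done
template <template <typename...> class List>
struct remove_front<List<>>
{
    using type = List<>;
};

static_assert(std::is_same<remove_front<std::tuple<int, double>>::type,
                           std::tuple<double>>::value,
              "the head is removed");
```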
Pack Rewrapping
So far, we’ve dealt mostly with accessing and mutating the parameter
pack. Other algorithms might need to work with the enclosing type
container.
As an example, let’s write a metafunction that turns an arbitrary type container into a std::tuple. How can we do that? Because the difference between std::tuple<T...> and List<T...> is the enclosing template type, we can just change it, as shown here:
template<class List> struct as_tuple;
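A sketch of the specialization performing the rewrap; type_list below is our own stand-in for an arbitrary container:

```cpp
#include <tuple>
#include <type_traits>

template <typename List>
struct as_tuple;  // primary template, as declared above

// peel off the enclosing template and rewrap the pack in std::tuple
template <template <typename...> class List, typename... Ts>
struct as_tuple<List<Ts...>>
{
    using type = std::tuple<Ts...>;
};

// demonstration with an arbitrary container
template <typename... Ts> struct type_list {};

static_assert(std::is_same<as_tuple<type_list<int, float>>::type,
                           std::tuple<int, float>>::value,
              "only the enclosing template changes");
```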
This technique was explained by Peter Dimov in his
blog in 2015 and instigated a lot of discussion around
similar techniques.
Container Transformations
These tools—rewrapping, iteration, and type introspection for type
containers—lead us to the final and most interesting metaprograms:
container transformations. Such transformations, directly inspired
by the STL algorithms, will help introduce the concept of structured
metaprogramming.
Concatenating containers
A first example of transformation is the concatenation of two existing type containers. Considering any two lists L1<T1...> and L2<T2...>, we wish to obtain a new list equivalent to L1<T1..., T2...>.
The first intuition we might have, coming from our runtime experience, is to find a way to “loop” over types as we repeatedly call push_back. Even though that would be a correct implementation, we need to fight this compulsion to think in loops. Loops over types require a linear number of intermediate types to be computed, leading to unsustainable compilation times. The correct way of handling this use case is to find a natural way to exploit the variadic nature of our containers.
In fact, we can look at append as a kind of rewrapping in which we
push into a given variadic structure more types than it contained
before. A sample implementation can then be as follows:
template<typename L1, typename L2> struct append;
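One possible specialization, which expands both packs side by side in a single step:

```cpp
#include <tuple>
#include <type_traits>

template <typename L1, typename L2>
struct append;  // primary template, as declared above

// both packs are expanded at once into the first enclosing template
template <template <typename...> class L1, typename... T1,
          template <typename...> class L2, typename... T2>
struct append<L1<T1...>, L2<T2...>>
{
    using type = L1<T1..., T2...>;
};

static_assert(std::is_same<append<std::tuple<int>, std::tuple<float>>::type,
                           std::tuple<int, float>>::value,
              "concatenation without any intermediate type");
```

Note that the two containers need not even be the same template; the result adopts the enclosing template of the first one.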
Pro Tip
Dealing with compile-time containers requires no loops. Try to express your algorithm as much as possible as a direct manipulation of parameter packs.
Generalizing metafunctions
In the compile-time world, we can pass metafunctions directly by
having our transform metaprogram await a template template
parameter. This is a valid solution, but as for runtime functions, we
might want to bind arbitrary parameters of existing metafunctions
to maximize code reuse.
Let’s introduce the Boost.MPL notion of the metafunction class. A metafunction class is a structure, which might or might not be a template, that contains an internal template structure named apply. This internal metafunction will deal with actually computing our new type. In a way, this apply is the equivalent of the generalized operator() of callable objects. As an example, let’s wrap std::remove_pointer into a metafunction class:
struct remove_ptr
{
    template <typename T>
    struct apply
    {
        using type = typename std::remove_pointer<T>::type;
    };
};
How can we use this so-called metafunction class? It’s a bit different
than with metafunctions:
using no_ptr = remove_ptr::apply<int*>::type;
Pro Tip
Metafunctions follow similar rules to those for functions: they can be composed, bound, or turned into various similar yet different interfaces. The transition between metafunctions and metafunction classes is only the tip of the iceberg.
Advanced Uses of Metaprogramming
With a bit of imagination and knowledge, you can do things much
more advanced than performing compile-time checks with template
metaprogramming. The purpose of this section is just to give you an
idea of what is possible.
struct second_command
{
    std::string operator()(int) { /* something */ }
};
Compile-Time Serialization
What do we mean by compile-time serialization? When you want to serialize an object, there are a lot of things you already know at compile time. And remember: anything you do at compile time no longer needs to be done at runtime.

That means much faster serialization and more efficient memory usage.
For example, when you want to serialize a std::uint64_t, you know exactly how much memory you need, whereas when you serialize a std::vector<std::uint64_t>, you must read the size of the vector at runtime to know how much memory you need to allocate. Recursively, it means that if you serialize a structure that is made up strictly of integers, you are able, at compile time, to know exactly how much memory you need, which means you can allocate the required intermediate buffers at compile time.
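As a sketch of the idea (serialized_size is our illustrative name, and we assume fixed-size, trivially serializable members only), the byte count can be computed directly from a parameter pack:

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>

template <typename... Ts>
struct serialized_size;

// empty pack: zero bytes
template <>
struct serialized_size<> : std::integral_constant<std::size_t, 0> {};

// head plus the size of the rest of the pack
template <typename T, typename... Ts>
struct serialized_size<T, Ts...>
    : std::integral_constant<std::size_t,
                             sizeof(T) + serialized_size<Ts...>::value> {};

// 8 + 4 bytes, known before the program ever runs
static_assert(serialized_size<std::uint64_t, std::uint32_t>::value == 12,
              "12 bytes needed for the buffer");
```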
About the Authors
Edouard Alligand is the founder and CEO of Quasardb, an advanced, distributed, hyperscalable database. He has more than 15 years of professional experience in software engineering. Edouard combines an excellent knowledge of low-level programming with a love for template metaprogramming, and likes to come up with uncompromising solutions to seemingly impossible problems. He lives in Paris, France.
Joel Falcou is CTO of NumScale, an Associate Professor at the University of Paris-Sud, and a researcher at the Laboratoire de Recherche d’Informatique in Orsay, France. He is a member of the C++ Standards Committee and the author of Boost.SIMD and NT2. Joel’s research focuses on studying generative programming idioms and techniques to design tools for parallel software development.