April 1st 2025
I'm excited to share this interview with Klaus Iglberger. Klaus is a respected C++ speaker at CppCon and Meeting C++, the author of "C++ Software Design", the creator of the open-source math library Blaze, and a C++ consultant. I've asked him a lot of questions, so be prepared..! Let's go.
My name is Klaus and I’ve been a C++ developer since the beginning of the 2000s. I wrote my PhD thesis in C++, then worked for a few companies, always using C++, before starting as an independent C++ trainer/consultant around 2016.
I cannot remember any key moment. It was — and unfortunately still is — my general impression that software design and architecture are not taken as seriously as they should be. As stated, very few of my customers have architectural documentation, use ADRs, or consciously apply architectural or design patterns.
Balancing the requirements in software design and architecture is always a compromise. To quote Neal Ford and Mark Richards: “Everything in software architecture is a trade-off.” In fact, I would argue that there is almost never a perfect solution — if you believe you have a perfect solution, congratulations! — but you’ll always pick the least bad solution for a problem. But choosing this solution consciously, based on a decision about which requirements are most important for your application, is software architecture.
While performance is of major interest for C++ developers, and while of course there are performance-related architectural properties — for instance scalability and elasticity — the performance that C++ developers usually think about mostly happens on the level of implementation details. Whether or not the C++ compiler can create more efficient code is not so much a concern on the level of design/architecture. Rather, it is maintainability and the ability to easily change things that drive design decisions.
One of the primary principles of good software is to keep things simple. This is expressed by the KISS principle — Keep It Simple, Stupid. So I would advise keeping things simple in the beginning, making sure that other developers easily understand the code. This has huge advantages: first, the code is easier to understand, also for other developers. This is a very important aspect! But second, at this point you might not have a good idea of how the code will evolve or change. Hence the introduction of any kind of abstraction, whether it’s a base class, a template, or anything else, is a guess about the future, which might prove to be wrong. Thus keeping things simple and waiting until you understand how things evolve is a good thing.
Then eventually, there comes the day when a change or an extension happens: some new feature is added or perhaps an alternative way to do something is required. Once this happens, you suddenly understand how the code will evolve, how the code will change. And often you know that this kind of change will happen again. Now is the time to consciously pick an appropriate solution from your toolbox to accommodate this kind of change. And yes, of course, this solution is likely some kind of design or architectural pattern.
Of course, sometimes exceptional situations occur: something changes and you know that this is the only change of this kind. Then keep the code simple and avoid any kind of over-engineering.
A common mistake is to believe that you have to use design patterns for everything. If you get up in the morning thinking “Today I’m going to use the XY design pattern”, something is wrong. Once again, keep the code simple!
Another common mistake is to use abstractions too early, before you know how the code evolves. That happens, for instance, if you believe everything needs a base class or that every class needs a virtual destructor. Be patient and wait until you understand the evolution of the code better.
The design/architecture usually varies based on the selected requirements. It makes a difference whether you favor flexibility and maintainability or scalability and elasticity.
It is important to understand that both OOP and FP have their unique advantages: OOP allows you to add new types easily, but is pretty restrictive when it comes to adding new operations. To understand why that is, imagine you try to add a new pure virtual function to a base class that is already used by many of your customers. Of course your customers would not be happy to be forced to implement this new function. Therefore it might prove to be difficult to add the operation.
FP, on the other hand, allows you to add new operations easily, but makes adding new types more difficult. This choice — making it easy to add either types or operations — is what we call the Expression Problem. Thus it is important to know both paradigms, know their strengths and weaknesses, and choose the right tool for the right task.
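The OOP side of this trade-off can be sketched in a few lines. The shape types here are hypothetical illustrations, not taken from the interview:

```cpp
#include <cassert>

// OOP side of the Expression Problem: adding a new *type* is just one new
// derived class, but adding a new *operation* (say, perimeter()) would force
// every existing derived class, including customers' classes, to change.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
    // virtual double perimeter() const = 0;  // adding this breaks all derived classes
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};
```

Adding a `Square` is trivial and touches no existing code; uncommenting `perimeter()` forces every derived class, wherever it lives, to be updated.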
On the level of implementation details, FP has considerable advantages: fewer pointers, fewer memory allocations, and the use of value semantics. And the use of algorithms or even ranges is strongly recommended as best practice in C++ today. But the higher we go, the more we think about design and architecture, the more OOP can play to its strengths: loose coupling, increased extensibility, plugin architectures, … In summary, we need both OOP and FP, and we should know about their pros and cons. Luckily, C++ makes it easy to use both :-)
My impression is that because of the GoF (Gang of Four) book and the way they first taught us design patterns, too many people believe design patterns are an OO tool. In my classes, talks, and book I’m trying to show that design patterns are much more general: they represent a general dependency structure, which is applicable in OOP, FP, and generic programming alike. I hope that this helps people to understand how important these patterns truly are and that eventually this causes a shift in how we teach design patterns. To give an example: the Strategy design pattern is used hundreds of times in the standard library, but almost exclusively in the form of template parameters (see for instance the allocator template parameter of std::vector or the deleter of std::unique_ptr).
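A minimal sketch of this compile-time form of Strategy, modeled on how std::vector takes its allocator; the growth-policy names are invented for illustration:

```cpp
#include <cassert>
#include <cstddef>

// The behavior is injected as a template parameter, just like std::vector's
// allocator or std::unique_ptr's deleter: a compile-time Strategy.
struct DoubleGrowth {
    static std::size_t next(std::size_t n) { return n ? 2 * n : 1; }
};

template <typename GrowthPolicy = DoubleGrowth>
class Buffer {
public:
    std::size_t capacity() const { return capacity_; }
    void grow() { capacity_ = GrowthPolicy::next(capacity_); }
private:
    std::size_t capacity_ = 0;
};
```

Swapping the policy means instantiating `Buffer<SomeOtherGrowth>`; no virtual functions or base classes are involved, and the compiler can inline the strategy completely.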
If there is no time pressure, I wait. Seeing how the code evolves will help to understand which design is the most suitable. If the decision has to be made now, then I try to pick the one that feels the least bad. Of course it might turn out to be a bad decision, but given the available information it was the best choice at the time. This is what Architectural Decision Records (ADRs) are for: ADRs document our choices and help others understand why the design/architecture is the way it is.
std::variant is an example that I like to talk about, because it very effectively shows how C++ has changed in the last decades and the significant advantages of value semantics. While value semantics has been with C++ since the beginning, the changes and extensions of the language have helped a lot to promote value semantics as the preferred programming style.
A second example of value semantics replacing classic design patterns is std::function. std::function is, for instance, a great replacement for any classic, inheritance-based implementation of the Strategy or Command design patterns. And std::function is just one example of Type Erasure, which can — and probably should — be used as a replacement for any inheritance-based solution.
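A minimal sketch of such a value-semantics Strategy; the Order/pricing names are hypothetical, chosen only to illustrate the structure:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// The strategy is any callable with a matching signature -- a lambda, a
// function pointer, or a function object. No base class is required.
class Order {
public:
    using PricingStrategy = std::function<double(double)>;

    Order(double basePrice, PricingStrategy strategy)
        : basePrice_(basePrice), strategy_(std::move(strategy)) {}

    double total() const { return strategy_(basePrice_); }

private:
    double basePrice_;
    PricingStrategy strategy_;
};
```

Compared with the classic inheritance-based Strategy, there is no strategy hierarchy to maintain, and Order remains copyable as a plain value.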
I’m not aware of any pattern that has become obsolete due to new C++ language features. After all, patterns represent dependency structures, which don’t become obsolete. The idea that a pattern becomes obsolete may stem from other programming languages, in which some patterns are considered so important that they are embedded in the language itself. That doesn’t make the pattern obsolete, though, but can be considered as proof of its importance.
I agree with the statement that design is the hard part of software development. That is because usually there is no perfect design/architecture; instead, you choose the solution that is the best — or rather, the least bad — for a given problem. As stated before, software design and architecture is always a trade-off between different requirements.
C++ has the reputation of being difficult to master. As the question implies, that is, for instance, because of the complex rules and undefined behavior. However, by now I’m very certain that the complexity of code is often self-inflicted. We have a number of solutions by now that help to reduce complexity quite a bit: algorithms and ranges help to reduce the complexity of indices and iterators. Strong types, in particular in combination with concepts, help to avoid any kind of problem with overload resolution and type conversions. And values and value wrappers, such as std::variant, std::optional, and std::function, help to avoid problems with null pointers, dangling pointers, and lifetime issues.
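For instance, std::optional makes the "no result" case part of the type instead of a nullable pointer. A small sketch with invented names:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Returning std::optional instead of a raw pointer: the caller must handle
// the empty case explicitly, and there is no pointer that can dangle.
std::optional<std::string> findName(std::map<int, std::string> const& users, int id) {
    if (auto it = users.find(id); it != users.end()) return it->second;
    return std::nullopt;
}
```

The result is a self-contained value: it can be copied, returned, and stored without any ownership or lifetime questions.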
For some reason, people are attracted to complexity. People still prefer to write for loops, use fundamental types, and use pointers. This may be because they don’t know better or because of the way people learn C++. I’m convinced, though, that this is also because of the common belief that you need to work on the lowest level to get fast code and to be in control.
What we need is a mindset shift towards the simpler solutions. In my classes, talks, and even in my book I try to demonstrate that the simpler solutions are usually the better solutions, helping to reduce the complexity of the code base without affecting performance. On the contrary, sometimes performance even improves, as for instance when replacing the classic Visitor with std::variant.
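The variant-based replacement of the classic Visitor can be sketched like this (C++17; the shape names are invented for illustration):

```cpp
#include <cassert>
#include <variant>

// The closed set of alternatives is a variant; visiting with an overload set
// replaces the classic cyclic Visitor hierarchy, and everything is a value.
struct Circle { double radius; };
struct Square { double side; };

using Shape = std::variant<Circle, Square>;

template <typename... Ts> struct Overload : Ts... { using Ts::operator()...; };
template <typename... Ts> Overload(Ts...) -> Overload<Ts...>;  // C++17 deduction guide

double area(Shape const& shape) {
    return std::visit(Overload{
        [](Circle const& c) { return 3.14159265 * c.radius * c.radius; },
        [](Square const& s) { return s.side * s.side; }
    }, shape);
}
```

A new operation is just another free function; there is no accept()/visit() boilerplate, no virtual dispatch, and no heap allocation.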
I’m mostly looking forward to seeing what we’ll be able to do with reflection. When teaching Type Erasure, one of the most common complaints is that we have to write extra code to get what we want. It has been demonstrated before that with reflection it will be possible to generate this code. However, the next step I’d like to see is, obviously, that the language provides support to create Type Erasure wrappers on its own.
The inspiration came from the fact that I myself didn’t have a suitable resource to learn more about this topic. The available books were either old, had a dry, non-entertaining writing style, or simply didn’t communicate the advantages of good software design well — in particular for C++. Additionally, in the C++ community people tend to talk mostly about implementation details, features, and new C++ standards. There are too few people talking about the bigger picture.
Since my class on “C++ Software Design” was well received, I felt that I should fill the gap and write the book I would have liked to read myself.
I hope that the core lesson is that design patterns are universal solutions for all paradigms, not just OOP solutions. I hope that people realize that design patterns are everywhere and that it is pretty much impossible to write serious code without them.
Blaze builds strongly on Expression Templates, an idea introduced by Todd Veldhuizen in 1995. However, it turns out that Expression Templates are an example of the Decorator design pattern from the GoF book.
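A minimal expression-template sketch (illustrative only, not Blaze's actual implementation): addition returns a lightweight proxy that wraps its operands, which is the Decorator structure, and the elements are computed lazily in a single loop on assignment.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A toy dense vector; operator= evaluates any expression element-wise.
struct Vec {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    template <typename Expr>
    Vec& operator=(Expr const& e) {   // single evaluation loop, no temporaries
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// The proxy "decorates" its operands: it offers the same element-access
// interface, but computation is deferred until the expression is assigned.
template <typename L, typename R>
struct AddExpr {
    L const& lhs;
    R const& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

AddExpr<Vec, Vec> operator+(Vec const& l, Vec const& r) { return {l, r}; }
```

Because `AddExpr` exposes the same interface as `Vec` while wrapping it with extra behavior, expressions like `a + b` compose without ever materializing an intermediate vector.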
The most important design principle I use in Blaze is “Separation of Concerns” — also called the “Single Responsibility Principle”. Adhering to this principle made it possible to introduce new features much more easily, without having to rewrite large portions of the library and without changing the API.
There is a simple answer to that: costs.
It is true that some features often need to be delivered quickly. Quick solutions, however, raise the risk that implementation quality suffers, which introduces technical debt. If this newly introduced technical debt is not documented, and if there is no plan to deal with it and refactor it, then it becomes technical neglect (a term coined by Kevlin Henney [1]). This technical neglect introduces additional cost, which grows over time: understanding the code becomes more difficult, changes take longer and are more complex, and the risk of introducing bugs gets bigger.
In his book “Tidy First?” Kent Beck makes it clear that the dominant cost of software development is the cost of changing code. Thus, from an economic point of view, it is decisive to make sure that technical debt is paid off and does not become technical neglect.
In his talk “The Economics of Software Design”, J. B. Rainsberger makes the same point: the balance needs to be maintained simply to control development costs [2].
[1] Kevlin Henney, “Technical Neglect”, NDC London 2024
[2] J.B. Rainsberger, “The Economics of Software Design”, DevTernity 2018
Maintainability includes readability, but also changeability and extensibility. As Kent Beck recently described in his book “Tidy First?”, the biggest cost factors of software are understanding, changing, and extending it.
Software design is mostly concerned with enabling easier change and extension of the software by reducing the inter-dependencies between software components. Thus software design implicitly also targets reducing the overall cost of software development.
If you want consistent software quality — for both small and large teams — you have to make everybody — developers and managers alike — understand the increasing costs of bad software. Everybody has to understand that bad software (design) is expensive, while good software (design) is cheap. Thus striving for simpler, more changeable, more extensible, more readable, more maintainable software is paramount. Code reviews, documentation, and guidelines are reasonable means to achieve this, but they only help if all people involved have the right mindset.
AI is a tool purely based on statistics. It can only parrot what it has been fed before. It doesn’t have any creative capabilities, although due to the amount of data it may have been fed you might get this impression. Also, since the data that we feed it is — on average — not of high quality, and since the tool doesn’t understand the input, you cannot expect it to create a fitting design based on your requirements. Against this background, I don’t see how AI will positively influence how we design software.
What I do see is that AI can assist in dealing with the boring parts of software development. It is a great tool to automate the things that have been done millions of times before, and it can create code for you that has been written millions of times before. This is the capability that modern IDEs utilize to your advantage.
I would recommend the following books to learn more about software design patterns and understanding the costs of software design:
Kent Beck, “Tidy First?”, O’Reilly
Freeman and Robson, “Head First Design Patterns, 2nd Edition”, O’Reilly
I would recommend the following talks, which I’ve mentioned before:
Kevlin Henney, “Technical Neglect”, NDC London 2024
J.B. Rainsberger, “The Economics of Software Design”, DevTernity 2018
Check out my newer talks available on YouTube. Most of them have to do with software design and design patterns. Also check out the workshops offered at conferences like CppCon or NDC TechTown: I might give a workshop on “C++ Software Design” or similar topics.
I cannot think of anything right now. But thank you very much for the invitation to this interview.