Talks at the Meeting C++ 2015 Conference
In a world of ever-increasing parallelism and concurrency, the complexity of software systems is not going to stop increasing either. That is why functional programming is becoming more of a hot topic every day, leaving academia and finally reaching industry.
That is also why functional programming languages are gaining traction, and why functional features are becoming standard in any reasonable programming language, so much so that even Java now has lambda expressions.
C++11 introduced Variadic Templates and constexpr, which enable and ease type-safe computations at compile time. For embedded development this is an important aspect, because it provides a means to create ROMable data in type-safe C++. In combination with the C++14 mechanism of Variable Templates, which effectively define constants, this opens unprecedented possibilities for compile-time computation.
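As a flavor of what the abstract describes, here is a minimal sketch (our own example, not from the talk) of type-safe compile-time computation with a C++11 constexpr function and a C++14 variable template:

```cpp
#include <cstddef>

// A C++11 constexpr function: evaluated at compile time when given
// constant arguments.
constexpr unsigned long long factorial(unsigned n) {
    return n == 0 ? 1ULL : n * factorial(n - 1);
}

// A C++14 variable template: a family of named compile-time constants.
template <unsigned N>
constexpr unsigned long long factorial_v = factorial(N);

// The result is usable wherever a constant expression is required,
// e.g. as an array bound -- such data can be placed in ROM on an
// embedded target.
static_assert(factorial_v<5> == 120, "evaluated at compile time");
```

Because the value is computed by the compiler, no runtime code or RAM is needed to produce it.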
C++ is a language equipped with a widespread set of tools, paradigms and idioms. Nobody denies that C++ is a complex language to learn, but it's a shame that most C++ programmers keep one of its most powerful features out of their toolbox, dismissing it as bizarre black magic.
Of course I'm talking about templates and template meta-programming.
C++ embedded programming is very difficult. It comes with limitations that are not always present in traditional programming environments, such as limited memory, slower processors, and older C++98 compilers. Embedded C++ programmers must typically avoid new and delete, both to prevent memory fragmentation and to make the most of the limited memory available to their applications.
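To illustrate the kind of heap-free allocation the abstract alludes to, here is a sketch (our own example, not the speaker's) of a fixed-capacity pool that constructs objects into static storage with placement new, so new/delete and fragmentation are avoided entirely:

```cpp
#include <cstddef>
#include <new>
#include <utility>

// A fixed-capacity object pool: all storage is reserved up front,
// objects are constructed in place, and the heap is never touched.
template <typename T, std::size_t N>
class StaticPool {
    alignas(T) unsigned char storage_[N * sizeof(T)];
    std::size_t used_ = 0;
public:
    template <typename... Args>
    T* create(Args&&... args) {
        if (used_ == N) return nullptr;            // pool exhausted, no heap fallback
        void* slot = storage_ + used_++ * sizeof(T);
        return ::new (slot) T(std::forward<Args>(args)...);  // placement new
    }
    std::size_t size() const { return used_; }
};
```

Destruction and slot reuse are omitted for brevity; a real embedded pool would also run destructors and recycle freed slots.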
A horrible software engineering technique conceived in the forge of Hell, or the One True Way of doing C++ in 2015? Template metaprogramming and its cohort of companion techniques are sure to spark animated debate in any group of programmers.
What if we were to tell you that an actual software product, sold to real customers and in production for several years now, has been built on it? What if we were to tell you that a lot of advanced template techniques helped us build better software faster?
This talk is all about real-life examples of template metaprogramming, why they are useful, and when and how you could use them in your own projects.
Boost.Geometry is a concept-based, generic Boost library (accepted in Boost in 2009) that offers primitives and algorithms for solving geometric problems. Its design is based on meta-functions and tag dispatching, and offers a dimension-agnostic and coordinate-system-agnostic geometry kernel. In the past couple of years a lot of functionality has been added to Boost.
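Tag dispatching, one of the design techniques the abstract mentions, can be sketched as follows; the types and traits here are simplified stand-ins of our own, not Boost.Geometry's actual interface:

```cpp
// Tags classify geometry types at compile time.
struct point_tag {};
struct box_tag {};

struct Point { double x, y; };
struct Box { Point min_corner, max_corner; };

// A meta-function mapping a geometry type to its tag.
template <typename Geometry> struct tag;
template <> struct tag<Point> { using type = point_tag; };
template <> struct tag<Box>   { using type = box_tag; };

// Per-kind implementations, selected by the tag's static type.
inline double area_impl(Point const&, point_tag) { return 0.0; }
inline double area_impl(Box const& b, box_tag) {
    return (b.max_corner.x - b.min_corner.x) *
           (b.max_corner.y - b.min_corner.y);
}

// The generic entry point dispatches at compile time -- no virtual calls.
template <typename Geometry>
double area(Geometry const& g) {
    return area_impl(g, typename tag<Geometry>::type{});
}
```

The dispatch is resolved entirely at compile time, which is what lets such a kernel stay generic without any runtime polymorphism cost.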
Most well-known structured data serialization libraries possess a dedicated deserialization routine, which parses persistent data on disk into its in-memory form.
Practice shows that the callback-based approach to asynchronous programming is usually uncomfortable. To simplify writing and maintaining complex asynchronous code, you can use a different approach: coroutines. This significantly reduces the size and complexity of the source code.
"C++ is a programming language for efficiency and performance!" you hear C++ programmers boast about their language of choice. Or similarly "I'm using C++, therefore my program is very efficient!” Whereas we as C++ programmers would love these statement to be an absolute truth, in practice runtime performance proves to be an elusive beast: There are numerous performance pitfalls C++ programmers have to know about and our faith in the compiler is all too often misguided, as it does not magically transform any given code into efficient executables.
Performance is and always has been an important, often crucial, aspect of software development. With the emergence of multi-core systems, performance increases are now often sought through parallelization. However, here I will argue that in many situations even the single-core performance can be improved tremendously if one manages to utilize the full power of modern CPUs.
Using modern C++11 features and functional programming, it is now possible to create universal constructs that separate business logic from common processing algorithms. As a step forward from merely decoupling actions from iteration using the filter, map and reduce operators, we move on to custom functional chains, transducers, and ways to wrap them in the simplest possible syntax.
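A toy sketch of the transducer idea (our own example, not the speaker's library): map and filter steps are expressed as reducer transformers and composed into a single reducing function, using C++14 generic lambdas.

```cpp
#include <numeric>
#include <vector>

// A transducer takes a reducer and returns a new reducer.
auto mapping = [](auto f) {
    return [f](auto reducer) {
        return [f, reducer](auto acc, auto x) { return reducer(acc, f(x)); };
    };
};

auto filtering = [](auto pred) {
    return [pred](auto reducer) {
        return [pred, reducer](auto acc, auto x) {
            return pred(x) ? reducer(acc, x) : acc;
        };
    };
};

// Sum of squares of the even elements: one pass, one composed reducer,
// no intermediate containers.
inline int sum_even_squares(std::vector<int> const& v) {
    auto sum  = [](int acc, int x) { return acc + x; };
    auto step = filtering([](int x) { return x % 2 == 0; })(
                mapping([](int x) { return x * x; })(sum));
    return std::accumulate(v.begin(), v.end(), 0, step);
}
```

Unlike chained filter/map calls on containers, the composed reducer fuses all the steps into a single traversal.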
The first important thing when talking about embedded and IoT applications is to define and classify them. This talk will start by looking at the software architectures and design of embedded applications, with a strong focus on IoT, and at the role that C++ plays and will play there, as it has traditionally been a minor language in this space (dominated by C on the embedded side, and by Ruby, Java, etc. in the cloud/server part of such applications).
There are two features planned for C++17 that are poised to reinvent the language the way lambdas and auto did for C++11: ranges (N4128) and await (N4134).
Ranges are objects that represent a sequence of elements in a similar, but improved, manner compared to iterator pairs.
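The improvement over iterator pairs can be sketched minimally as follows; the names are our own illustration, not the interface proposed in N4128:

```cpp
#include <algorithm>

// Bundle a begin/end iterator pair into a single range object.
template <typename It>
struct IteratorRange {
    It first, last;
    It begin() const { return first; }
    It end() const { return last; }
};

template <typename It>
IteratorRange<It> make_range(It first, It last) { return {first, last}; }

// A range-based algorithm takes one argument instead of a begin/end
// pair, so the two ends can never be accidentally mismatched.
template <typename Range>
void sort_range(Range&& r) { std::sort(r.begin(), r.end()); }
```

Since any type with begin()/end() qualifies, containers themselves can be passed to sort_range directly, which is much of what makes range-based interfaces more concise.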
General-purpose computing on graphics processing units (GPGPU) has in recent years become a widely adopted way of accelerating highly demanding workloads in, e.g., scientific simulations, computer-aided engineering, visualization and data analysis. The number of open-source libraries that use GPUs to mitigate algorithmic bottlenecks exposing a high degree of parallelism is growing almost weekly.
C++14 has been announced as the best thing since sliced bread in terms of simplicity, performance and overall elegance of C++ code. This talk is the story of why and how we decided to migrate one of our older 'modern C++' software libraries -- BSP++, a C++ implementation of the BSP parallel programming model -- to C++14.
Small micro-controllers are a small :) but important subset of embedded systems. The traditional language for programming small micro-controllers is C, but C++ has much to offer beyond C in abstraction power, compile-time computation, and compile-time checks. But constrained resources and real-time requirements make the art of programming a small microcontroller significantly different from programming a larger system like a desktop PC.
Why is the world rushing to add parallelism to base languages when consortia and companies have been trying to fill that space for years? How is the landscape of parallelism changing in the various standards and specifications? I will give an overview as well as a deep dive into what C and C++ are doing to add parallelism, but also into how consortia like OpenMP are pushing forward with the world's first high-level language support for GPGPU/accelerator and SIMD programming.
Pairs of iterators are ubiquitous throughout the C++ library. It is generally accepted that combining such a pair into a single entity, usually termed a Range, delivers more concise and readable code. Defining the precise semantics of such a Range concept proves surprisingly tricky, however. Theoretical considerations conflict with practical ones.
Almost 20 years ago, Matt Austern pointed out a performance problem with STL algorithms when they operate on hierarchically structured (segmented) data, e.g. a deque.
His article also sketches a solution with the help of hierarchy-adaptive algorithms and so-called
The High Performance Computing (HPC) community is facing a technology shift which will result in a performance boost of three orders of magnitude within the next 5 years. This rise in performance will mainly be achieved by increasing the level of concurrency, to the point where a user of those systems needs to accommodate billion-way parallelism.
Multi-core architecture is the present and future way in which the market is addressing the limitations of Moore's law. Multi-core workstations, high-performance computers, GPUs and the focus on hybrid/public cloud technologies for offloading and scaling applications is the direction in which development is heading.
Like it or not, C++ programs aren't exactly quick to compile. It's a fact of life that developers using other languages enjoy faster turnaround times for testing out their ideas. And yet, all is not lost. In this talk, we'll take a look at the ways in which C++ can be treated in a fashion that allows for quickly testing ideas.
For around two decades now, Scott Meyers' Effective C++ series has formed an indispensable resource for even the most experienced C++ developer, and many of us own a copy. Its bite-sized items let you read one small chapter at a time and return to your work, applying what you just learned right away.
One of the most important new features being proposed for standardization in C++17 is Resumable Functions. Resumable functions are a form of coroutine designed to be highly scalable, highly efficient (no overhead), and highly extensible, while still interacting seamlessly with the rest of the C++ language.
In many models of Intel processors that include Intel® Graphics Technology, you can offload a reasonable amount of parallelizable work. The Intel® C++ Compiler provides a feature which enables offloading general-purpose compute kernels to the processor graphics using the Intel® Cilk Plus programming model, which gives a seamless porting experience for C/C++ developers.
The importance of creating easy-to-use APIs is usually undervalued. Easy-to-use and intuitive APIs can significantly help the process of developing larger-scale applications or frameworks. Qt, as one of the most widely used C++ frameworks, has always focused on easy-to-use and intuitive APIs. Understanding the principles behind these APIs and how they are designed can help in creating better and more maintainable applications.
C++ is used in applications where resources are constrained and performance is critical. However, its power in this domain comes from the ability to build large, complex systems in C++. These systems leverage numerous C++ features in order to build and utilize abstractions that make reasoning about these complex systems possible.