Hi everyone!
As Codeforces now supports C++23, it seems to be the right time to discuss some of the particularly interesting features.
Some noteworthy ones that were already mentioned elsewhere include:
- views::zip that maps two ranges into pairs (A[i], B[i]).
- views::enumerate that maps a range into pairs (i, a[i]).
- views::adjacent<k> that maps a range into tuples (a[i], ..., a[i+k-1]).
- views::cartesian_product that maps two ranges into pairs (A[i], B[j]) with all possible i and j.
- Some more specialized views.
- ranges::to<Container> that creates a container out of a view, e.g. to<vector>(views::iota(0, n)).
- ranges::fold_left and ranges::fold_right, range versions of std::accumulate/std::reduce.
- insert_range/append_range/prepend_range/assign_range for containers (not in GCC yet).
- print/println for formatted printing (it seems that standard formatters for ranges are not in GCC yet).
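As a quick illustration, here is a minimal sketch of how a few of these compose (assuming a standard library recent enough to ship them; the data and the transform are made up for the example):

#include <iostream>
#include <ranges>
#include <vector>
using namespace std;

int main() {
    vector<int> a = {3, 1, 4}, b = {1, 5, 9};

    // views::zip: walk two ranges in lockstep.
    for (auto [x, y] : views::zip(a, b))
        cout << x << ' ' << y << '\n';

    // views::enumerate: pairs (index, element).
    for (auto [i, x] : views::enumerate(a))
        cout << i << ": " << x << '\n';

    // ranges::to: materialize a view into a container.
    auto squares = views::iota(0, 5)
                 | views::transform([](int x) { return x * x; })
                 | ranges::to<vector>();
    for (int s : squares)
        cout << s << ' ';
}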
But there are also two features that weren't covered in as much detail as they deserve: deducing this and generators.
Deducing this and recursive lambdas
Assume that you want to write a recursive lambda. What are your options? Naturally, you'd try something like this:
auto fact = [&](int x) {
return x ? x * fact(x - 1) : 1;
};
This is not allowed, because inside the lambda you use fact before its type is deduced. One way to circumvent this is to write function<int(int)> fact = ... instead. While attractive, this makes the calls to fact more expensive, and in certain cases might even lead to TLE, e.g. if you try to use it for a depth-first traversal of a big graph.
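For reference, the std::function workaround would look roughly like this (a sketch based on the factorial above, assuming <functional> is included; it compiles because the type of fact is fixed before the lambda body is used, but every call now goes through a type-erased wrapper):

std::function<int(int)> fact = [&](int x) {
    return x ? x * fact(x - 1) : 1;
};
cout << fact(n) << endl;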
Until C++23, the best practice was to do something like this:
auto fact = [&](auto &&self, int x) -> int {
return x ? x * self(self, x - 1) : 1;
};
cout << fact(fact, n) << endl;
While much more efficient in practice, it looks very ugly. C++23 helps us here, allowing us to do this instead:
auto fact = [&](this auto fact, int x) -> int {
return x ? x * fact(x - 1) : 1;
};
cout << fact(n) << endl;
Just as if it was a proper recursive function!
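For instance, the depth-first traversal mentioned earlier can be written the same way; below is a minimal sketch (the adjacency list, the example edges and the explicit -> void return type are my own additions for illustration):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n = 5;
    vector<vector<int>> adj(n);
    adj[0] = {1, 2}; adj[1] = {3}; adj[2] = {4}; // example edges

    vector<bool> visited(n);
    // Recursive DFS as a lambda with an explicit object parameter.
    auto dfs = [&](this auto dfs, int v) -> void {
        visited[v] = true;
        cout << v << ' ';
        for (int u : adj[v])
            if (!visited[u])
                dfs(u);
    };
    dfs(0);
}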
std::generator
Another interesting feature that I haven't seen mentioned in competitive programming discussions at all is coroutines. Assume that you need to factorize a number. If you rely on Pollard's rho algorithm, your code probably looks as follows:
vector<uint64_t> factors;

void factorize(uint64_t m) {
    if(is_prime(m)) {
        factors.push_back(m);
    } else if(m > 1) {
        auto g = proper_divisor(m);
        factorize(g);
        factorize(m / g);
    }
}
And it's always annoying that to store the result, you have to either keep a global vector, or take the output vector as an argument by reference (and pass it around each time). Ideally you'd want to return a vector, but then you might be afraid of accidentally getting a quadratic runtime, and even besides that you'd need to write extra code to merge the results. Coroutines allow us to rewrite it as follows:
std::generator<uint64_t> factorize(uint64_t m) {
    if(is_prime(m)) {
        co_yield m;
    } else if(m > 1) {
        auto g = proper_divisor(m);
        co_yield std::ranges::elements_of(factorize(g));
        co_yield std::ranges::elements_of(factorize(m / g));
    }
}

for(uint64_t p: factorize(m)) {
    ...
}
In this manner, factorize will return a generator, which is basically a view wrapper around the function above that generates consecutive elements of the range "on the fly", suspending execution of the function between accesses to the resulting range. This way, we avoid the need to store the results of the recursive function somewhere, as well as the need to incorporate external logic or callbacks into the function if we want to do something as soon as we get the next element.
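Since the returned generator is itself an input range, it also composes with the ranges machinery from the beginning of the post; for example, collecting all factors into a vector should (presumably) be a one-liner:

// A sketch: drain the generator straight into a container.
std::vector<uint64_t> factors = factorize(m) | std::ranges::to<std::vector>();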
From a performance perspective, of course, coroutines may be slightly inferior to having a global vector to store the results. For example, this submission to finding strongly connected components takes 278ms and 108 MB of memory, while its global vector version only needs 229ms and 65 MB. Still, I think coroutines are a pretty nice concept to keep in mind and might simplify code or make it better structured in a lot of cases.
It smells like Python.
Absolutely, and I think Python's generator is even easier to use : /
is there a way to make std::print not flush?
If you do, then some stuff will become available from multiple namespaces, and C++ gives an ambiguity error. Such is the case with sort, which is available as std::sort or std::ranges::sort. Same with iota and a lot of other things.

If you don't want to type ranges:: and views:: every time, then use this trick:

This way, in case of ambiguities, the compiler will resolve to the views and ranges library.
I learned this trick from someone's submission, idr.
hey arent you that guy on the monkeytype discord
i am
Modifying the namespace std is strictly undefined behavior in C++. This is a bad idea.
I mean the compiler isn't going to sue us for that.
The compiler is legally allowed to do anything in undefined behavior, so this includes suing us...
Can someone explain why using function<int(int)> fact = ... is more expensive?

std::function is a complex object that can store lambdas, function pointers and functors. If you store a lambda, it allocates on the heap (a significant performance loss). Also, the compiler (and the CPU?) can't reason about which function you are going to call (a performance loss similar to that of function pointers).
Thanks for explaining it. I just tested and found that std::function can be 3x slower than auto&& for just a fibonacci lambda (even slower for more complex ones).
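A minimal benchmark in that spirit might look like the sketch below (my own code, not the one tested in the comment above; the exact numbers will vary with compiler, flags and machine):

#include <chrono>
#include <functional>
#include <iostream>
using namespace std;

int main() {
    const int N = 40; // naive Fibonacci, on the order of 10^8 recursive calls

    // Type-erased recursive lambda.
    function<long long(int)> fib_fn = [&](int x) -> long long {
        return x < 2 ? x : fib_fn(x - 1) + fib_fn(x - 2);
    };

    // C++23 deducing-this recursive lambda.
    auto fib_lambda = [](this auto fib, int x) -> long long {
        return x < 2 ? x : fib(x - 1) + fib(x - 2);
    };

    auto bench = [&](const char *name, auto &&f) {
        auto t0 = chrono::steady_clock::now();
        long long result = f(N);
        chrono::duration<double> dt = chrono::steady_clock::now() - t0;
        cout << name << ": fib(" << N << ") = " << result
             << " in " << dt.count() << "s\n";
    };

    bench("std::function", fib_fn);
    bench("deducing this", fib_lambda);
}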
Pardon?
Well, "global" in the sense that it should be defined outside of the main function's scope and managed externally (or outside the lambda, like in your example). I suppose one other way to circumvent it is for the function to take a callback as an argument and invoke it with the returned values each time this is needed.
Ok, fine... I replaced all the recursive function<>s with the auto &&self monstrosity in my prewritten library. It was a sad decision to make.

If you're fine using C++23, deducing this should be a much nicer alternative :)
But c++23 does not exist everywhere yet, so I am not ready to put it in my templates
A question regarding the generator part: suppose I ran an in-order traversal on a perfect binary tree using this technique (ignore the bugs, if any). If the tree consists of n vertices, would the traversal be O(n) or O(n log n), given that each vertex v would need to propagate through depth(v) calls to reach the first call to "traverse"?
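(The snippet from this comment isn't shown here; such a traversal would presumably look something like the following sketch, with the tree stored as a 1-indexed array where node v has children 2v and 2v+1 — the layout and names are assumptions for illustration.)

#include <generator>
#include <iostream>
#include <ranges>
#include <vector>
using namespace std;

// In-order traversal of a perfect binary tree, yielding values lazily.
generator<int> traverse(const vector<int> &a, int v = 1) {
    if (v < (int)a.size()) {
        co_yield ranges::elements_of(traverse(a, 2 * v));
        co_yield a[v];
        co_yield ranges::elements_of(traverse(a, 2 * v + 1));
    }
}

int main() {
    vector<int> a = {0, 4, 2, 6, 1, 3, 5, 7}; // a[0] unused
    for (int x : traverse(a))
        cout << x << ' ';                     // prints 1 2 3 4 5 6 7
}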
After timing the code below with N = 1e6 and N = 1e8 (see line 6), I believe that it's $$$O(n \log n)$$$. (Compiled with -std=gnu++23, without optimization flags.)

This code used ~310ms on N = 1e6, and more than 8s on N = 1e8.

P.S. If there's anything that I'm doing wrong, please tell me :)
Thanks for the analysis. I got similar results. I also made an iterative version and it was much faster than the recursive one.
While there is a certain overhead to using generators, I don't think it's just depth(v), because e.g. the following code runs quickly enough:

A proper depth(v) overhead would mean that it should be quadratic. But I also don't know how exactly it is optimized out, as running the code above from David9006 in Codeforces invocations indeed gets 108ms on N=1e6, but suddenly around 8s on N=1e7 (not N=1e8).
It's also worth pointing out that elements_of was specifically designed with avoiding the issue of depth(v) calls in mind (see p2168r3), and the intended behavior is for the coroutine to recursively delegate control when it returns elements_of, rather than to copy all elements into the returned view.

I think the underlying issue might simply be that constructing elements_of is very expensive for small subroutines. Compare the following three versions:

But at the same time:

So, elements_of seems to perform terribly when the wrapped coroutine consists of just 1, or generally a small number of, elements. Here's a more direct example:

I don't understand why std::generator isn't just rewritten to a callback argument containing the continuation...
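The referenced comparisons aren't reproduced here, but the two patterns under discussion are roughly the following (a sketch with made-up names; version B wraps every single element into its own nested generator and delegates to it through elements_of, which is exactly the "small coroutine" case that appears to be expensive):

#include <cstdint>
#include <generator>
#include <ranges>

// A nested generator that produces a single value.
std::generator<uint64_t> one(uint64_t x) { co_yield x; }

// Version A: yield every value directly from one coroutine.
std::generator<uint64_t> direct(uint64_t n) {
    for (uint64_t i = 0; i < n; i++)
        co_yield i;
}

// Version B: delegate each single value through elements_of.
std::generator<uint64_t> nested(uint64_t n) {
    for (uint64_t i = 0; i < n; i++)
        co_yield std::ranges::elements_of(one(i));
}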
Here is an even funnier example:
This works in 1921ms, but removing if(false) makes it run in 9233ms.

Also noteworthy: it seems to be Codeforces-specific. Locally they both run much faster for me, and putting N=1e8 runs in under 5s.
Should I use [](this auto&& self) or [](this auto self)? I'm curious about the difference between using && and not using &&.

I benchmarked recursive factorial implementations with a traditional function, this auto/this auto&& lambdas, and auto/auto&& lambdas.
With result:
It seems that the speeds of all these lambdas are nearly the same, and the lambdas are faster than the traditional function (why?).
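The benchmarked variants aren't shown above, but they presumably look roughly like this (a sketch with my own names; with this auto the closure object is copied on every call, while this auto&& binds it by reference, which only starts to matter once the captures are expensive to copy):

#include <cstdint>

// Traditional recursive function.
uint64_t fact_fn(int x) { return x ? x * fact_fn(x - 1) : 1; }

// Deducing this, object parameter taken by value (closure copied on each call).
auto fact_val = [](this auto fact, int x) -> uint64_t {
    return x ? x * fact(x - 1) : 1;
};

// Deducing this, object parameter taken by forwarding reference (no copy).
auto fact_ref = [](this auto &&fact, int x) -> uint64_t {
    return x ? x * fact(x - 1) : 1;
};

// Pre-C++23 style: the lambda receives itself explicitly, called as fact_self(fact_self, n).
auto fact_self = [](auto &&self, int x) -> uint64_t {
    return x ? x * self(self, x - 1) : 1;
};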
I think there is some comprehensive info about why std::function is really bad in nor's blog.