adamant's blog

By adamant, history, 44 hours ago, in English

Hi everyone!

As Codeforces now supports C++23, it seems to be the right time to discuss some of the particularly interesting features.

Some noteworthy ones that were already mentioned elsewhere include:

  • views::zip that maps two ranges into pairs (A[i], B[i]).
  • views::enumerate that maps a range into pairs (i, a[i]).
  • views::adjacent<k> that maps a range into tuples (a[i], ..., a[i+k-1]).
  • views::cartesian_product that maps two ranges into pairs (A[i], B[j]) for all possible i and j.
  • Some more specialized views.
  • ranges::to<Container> that creates a container out of a view, e.g. to<vector>(views::iota(0, n)).
  • ranges::fold_left and ranges::fold_right, range versions of std::accumulate/std::reduce.
  • insert_range/append_range/prepend_range/assign_range for containers (not in GCC yet).
  • print/println for formatted printing (it seems that standard formatters for ranges are not in GCC yet).
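
To get a feel for how these compose, here is a minimal sketch (not from the original post) combining a few of them; it assumes a reasonably recent GCC with -std=c++23:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        vector<int> a = {3, 1, 4}, b = {10, 20, 30};

        // views::zip: iterate two ranges in lockstep
        for (auto [x, y] : views::zip(a, b))
            cout << x << ' ' << y << '\n';

        // views::enumerate: pair every element with its index
        for (auto [i, x] : views::enumerate(a))
            cout << i << ": " << x << '\n';

        // ranges::to: materialize a view into a container
        auto squares = views::iota(1, 6)
                     | views::transform([](int x) { return x * x; })
                     | ranges::to<vector>();

        // ranges::fold_left: range version of std::accumulate
        cout << ranges::fold_left(squares, 0, plus<>{}) << '\n'; // prints 55
    }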

But there are also two features that weren't covered in as much detail as they deserve: deducing this and generators.

Deducing this and recursive lambdas

Assume that you want to write a recursive lambda. What are your options? Naturally, you'd try something like this:

    auto fact = [&](int x) {
        return x ? x * fact(x - 1) : 1;
    };

You will not be allowed to do this, because inside the lambda you use fact before its type is deduced. One way to circumvent it is to write function<int(int)> fact = ... instead. While this is attractive, it makes the calls to fact more expensive, and in certain cases might even lead to TLE, e.g. if you use it for a depth-first traversal of a big graph.

Until C++23, the best practice was to do something like this:

    auto fact = [&](auto &&self, int x) -> int {
        return x ? x * self(self, x - 1) : 1;
    };
    cout << fact(fact, n) << endl;

While much more efficient in practice, it looks very ugly. C++23 helps us here, allowing us to do this instead:

    auto fact = [&](this auto fact, int x) -> int {
        return x ? x * fact(x - 1) : 1;
    };
    cout << fact(n) << endl;

Just as if it were a proper recursive function!

std::generator

Another interesting feature that I haven't seen mentioned in competitive programming discussions at all is coroutines. Assume that you need to factorize a number. If you rely on Pollard's rho algorithm, your code probably looks as follows:

    vector<uint64_t> factors;
    void factorize(uint64_t m) {
        if(is_prime(m)) {
            factors.push_back(m);
        } else if(m > 1) {
            auto g = proper_divisor(m);
            factorize(g);
            factorize(m / g);
        }
    }

And it's always annoying that to store the result, you have to either keep a global vector, or take the output vector as an argument by reference (and pass it around each time). Ideally you'd want to return a vector, but then you might be afraid of accidentally getting a quadratic runtime, and besides that you'd need to write extra code to merge the results. Coroutines allow us to rewrite it as follows:

    std::generator<uint64_t> factorize(uint64_t m) {
        if(is_prime(m)) {
            co_yield m;
        } else if(m > 1) {
            auto g = proper_divisor(m);
            co_yield std::ranges::elements_of(factorize(g));
            co_yield std::ranges::elements_of(factorize(m / g));
        }
    }
    
    for(uint64_t p: factorize(m)) {
        ...
    }

In this manner, factorize returns a generator, which is basically a view-like wrapper around the function above that produces consecutive elements of the range "on the fly", suspending execution of the function between accesses to the resulting range. This way, we avoid the need to store the results of the recursive function somewhere, as well as the need to incorporate external logic or callbacks into the function if we want to do something as soon as we get the next element.
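
For example, the laziness means a consumer can stop early and the rest of the recursion never runs; a hypothetical helper (not from the post):

    // Hypothetical helper: grab some prime factor of m without computing the rest.
    // Returning from the loop destroys the generator, so the remaining recursion
    // inside factorize() is never resumed.
    uint64_t some_prime_factor(uint64_t m) {
        for (uint64_t p : factorize(m))
            return p;
        return m; // m == 1: the generator yields nothing
    }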

From a performance perspective, of course, coroutines may be slightly inferior to having a global vector to store the results. For example, this submission for finding strongly connected components takes 278ms and 108 MB of memory, while its global-vector version only needs 229ms and 65 MB. Still, I think coroutines are a pretty nice concept to keep in mind, and they might simplify code or make it better structured in a lot of cases.


»
40 hours ago, # |

It smells like Python.

  • »
    »
    38 hours ago, # ^ |

    Absolutely, and I think Python's generators are even easier to use :/

»
39 hours ago, # |

Is there a way to make std::print not flush?

»
39 hours ago, # |

If you do

using namespace std;
using namespace std::ranges;
using namespace std::ranges::views;

then some names become available from multiple namespaces, and C++ gives an ambiguity error, for example with sort, which is available as both std::sort and std::ranges::sort. The same goes for iota and a lot of other things.

If you don't want to type ranges:: and views:: every time, then use this trick:

#include <bits/stdc++.h>

using namespace std;

namespace std::ranges::views {
    auto solve() {
        return iota(1, 5) | transform([](int a) { return a + 10; });
    }
};

int main() {
    for (int i : std::ranges::views::solve()) cout << i << " ";
    cout << endl;
}

This way, in case of ambiguities, the compiler will resolve names in favor of the ranges and views libraries.

I learned this trick from someone's submission, I don't remember whose.

»
32 hours ago, # |

Can someone explain why using function<int(int)> fact = ... is more expensive?

  • »
    »
    30 hours ago, # ^ |

    std::function is a complex object that can store lambdas, function pointers and functors. If you store a lambda with non-trivial captures, it may allocate on the heap (a significant performance loss). Also, the compiler (and the CPU) can't reason about which function you are going to call (a performance loss similar to calling through a function pointer).

    • »
      »
      »
      29 hours ago, # ^ |

      Thanks for explaining it. I just tested and found that std::function can be 3x slower than auto&& for just a fibonacci lambda (even slower for more complex ones).
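
      For reference, here is a rough way to reproduce this kind of comparison (a sketch, not the exact test above; exact numbers will of course vary):

      #include <bits/stdc++.h>
      using namespace std;
      using namespace std::chrono;

      int main() {
          const int n = 35;

          // recursion through std::function: type-erased, hard for the compiler to inline
          function<long long(int)> fib_fn = [&](int x) -> long long {
              return x < 2 ? x : fib_fn(x - 1) + fib_fn(x - 2);
          };

          // recursion through deducing this: a plain direct call
          auto fib_lambda = [](this auto fib, int x) -> long long {
              return x < 2 ? x : fib(x - 1) + fib(x - 2);
          };

          auto t0 = steady_clock::now();
          long long r1 = fib_fn(n);
          auto t1 = steady_clock::now();
          long long r2 = fib_lambda(n);
          auto t2 = steady_clock::now();

          cout << r1 << ' ' << r2 << '\n';
          cout << duration_cast<milliseconds>(t1 - t0).count() << "ms vs "
               << duration_cast<milliseconds>(t2 - t1).count() << "ms\n";
      }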

»
26 hours ago, # |

And it's always annoying that to store the result, you have to either keep a global vector, or take output vector as an argument by reference

Pardon?

vector<uint64_t> factorize(uint64_t m) {
    vector<uint64_t> factors;
    auto inner = [&](this auto inner, uint64_t m) -> void {
        if(is_prime(m)) {
            factors.push_back(m);
        } else if(m > 1) {
            auto g = proper_divisor(m);
            inner(g);
            inner(m / g);
        }
    };
    inner(m);
    return factors;
}
  • »
    »
    26 hours ago, # ^ |

    Well, "global" in the sense that it should be defined outside of the main function's scope and managed externally (or outside the lambda, like in your example). I suppose another way to circumvent it is for the function to take a callback as an argument and invoke it with the produced values whenever needed, as sketched below.
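
    A sketch of that callback style, reusing is_prime/proper_divisor from the post (names assumed to exist as there):

    template <class F>
    void factorize(uint64_t m, F&& report) {
        if (is_prime(m)) {
            report(m); // hand each prime factor to the caller immediately
        } else if (m > 1) {
            auto g = proper_divisor(m);
            factorize(g, report);
            factorize(m / g, report);
        }
    }

    // usage:
    // vector<uint64_t> factors;
    // factorize(m, [&](uint64_t p) { factors.push_back(p); });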

»
26 hours ago, # |

Ok, fine... I replaced all the recursive function<>s with the auto &&self monstrosity in my prewritten library. It was a sad decision to make.

»
7 hours ago, # |

A question regarding the generator part: suppose I run an in-order traversal on a perfect binary tree using this technique:

std::generator<Node*> traverse(Node* n) {
    if (n) {
        if (n->left) {
            co_yield std::ranges::elements_of(traverse(n->left));
        }

        co_yield n;

        if (n->right) {
            co_yield std::ranges::elements_of(traverse(n->right));
        }
    }
}

Ignore the bugs if any.

Suppose the tree consists of n vertices: would the traversal be O(n) or O(n log n), given that each vertex v would need to propagate through depth(v) calls to reach the first call to "traverse"?

  • »
    »
    5 hours ago, # ^ |

    After timing the code below with N = 1e6 and N = 1e8 (see line 6), I believe that it's $$$O(n \log n)$$$.

    #include <generator>
    
    using namespace std;
    using i64 = long long;
    
    constexpr int N = 1e6; // or 1e8
    
    std::generator<int> traverse(int n)
    {
        if (n < N) {
            if ((n << 1) < N)
                co_yield std::ranges::elements_of(traverse(n << 1));
    
            co_yield n;
    
            if ((n << 1 | 1) < N)
                co_yield std::ranges::elements_of(traverse(n << 1 | 1));
        }
    }
    
    int main()
    {
        for (int _ : traverse(1))
            ;
    
        return 0;
    }
    

    (Compiled with -std=gnu++23, without optimization flags.)

    This code took ~310ms for N = 1e6 and more than 8s for N = 1e8.

    P.S. If there's anything that I'm doing wrong, please tell me :)

    • »
      »
      »
      4 hours ago, # ^ |

      Thanks for the analysis. I got similar results. I also made an iterative version and it was much faster than the recursive one.

  • »
    »
    4 hours ago, # ^ |

    While there is a certain overhead to using generators, I don't think it's just depth(v), because e.g. the following code runs quickly enough:

    generator<int> traverse(int n) {
        if (n < N) {
            co_yield n;
            co_yield ranges::elements_of(traverse(n + 1));
        }
    }
    

    If each element really had to propagate through depth(v) frames, the snippet above would be quadratic. But I also don't know how exactly it is optimized out, as running the code above from David9006 in Codeforces invocations indeed takes 108ms for N=1e6, but suddenly around 8s for N=1e7 (not N=1e8).

    It's also worth pointing out that elements_of was specifically designed to avoid the issue of depth(v) calls (see p2168r3), and the intended behavior is for the coroutine to recursively delegate control when it yields elements_of, rather than to copy all elements into the returned view.

    • »
      »
      »
      2 hours ago, # ^ |

      I think the underlying issue might simply be that constructing elements_of is very expensive for nested coroutines that yield only a few elements. Compare the following versions:

      843ms, O(n log n)
      155ms, O(n)

      But at the same time:

      1061ms, O(n log n)
      1436ms, O(n)

       So, elements_of seems to perform terribly when the wrapped coroutine yields just one, or generally a small number of, elements. Here's a more direct example:

      1749ms, O(n), N = 1e7
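
       The linked code isn't shown here, but a test isolating this cost might look roughly like the sketch below (my guess at its shape, not the actual submission): every element goes through its own one-element coroutine wrapped in elements_of.

       std::generator<int> single(int x) { co_yield x; }

       std::generator<int> all(int n) {
           for (int i = 0; i < n; i++)
               co_yield std::ranges::elements_of(single(i)); // one elements_of per element
       }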
      • »
        »
        »
        »
        68 minutes ago, # ^ |

        I don't understand why std::generator isn't just rewritten into a callback argument containing the continuation...

        • »
          »
          »
          »
          »
          64 minutes ago, # ^ |

          Here is an even funnier example:

          std::generator<int> traverse(int n) {
              if (n < N) {
                  if(false) for(auto it: traverse(n << 1));
                  else co_yield ranges::elements_of(traverse(n << 1));
                  co_yield n;
                  if(false) for(auto it: traverse(n << 1 | 1));
                  else co_yield ranges::elements_of(traverse(n << 1 | 1));
              }
          }
          

          This works in 1921ms, but removing if(false) makes it run in 9233ms.

          Also noteworthy: it seems to be Codeforces-specific; locally both versions run much faster for me, and even N=1e8 runs in under 5s.