This blog post brought to you by pajenegod, Narut, and the rest of #constant-factor on the AC discord server. If you're interested in discussions like this, feel free to join!
For the longest time, CF has constrained us to the tyranny of 32 bit C++. Not only has that slowed down our long long
operations, it has also constrained us to a measly 32 bit constant factor reduction with bitsets!
Let's take 2 pieces of equivalent code.
For Loop:
int a[MAXN], b[MAXN];
int cnt = 0;
for (int it = 0; it < iters; it++)
    for (int i = 0; i < MAXN; i++)
        if (a[i] & b[i]) cnt++;
Bitset:
bitset<MAXN> a, b;
int cnt = 0;
for (int it = 0; it < iters; it++)
    cnt += (a & b).count();
On Custom Invocation C++17 (32 bit) with MAXN=1e4 and iters=1e5, this boring for loop code runs in 3594 ms, while the equivalent bitset version runs in 104 ms. 3594/104 is approximately 32. Ok, disappointing, but it makes sense.
However, Mike, in his generosity, has recently decided to free us from our chains, and give us 64 bit C++! Just imagine, 64x constant factor improvements vs 32x! However, what happens if we try our previous benchmark?
For Loop (C++17 64 bit): 3736 ms
Bitset (C++17 64 bit): 107 ms
What's going on?
Huh? What gives? Have we been lied to? Where are our improvements? I'm not alone in noticing this; see tribute_to_Ukraine_2022's comment here.
So, what's the problem? The answer, as it so often is, is Windows. On Windows, long is always defined to be 32 bits, regardless of whether you're compiling for a 32 bit or a 64 bit target. And as it turns out, GCC's bitset uses long as its word type! Which means that even though our compiled code can use 64 bit instructions, the implementation of STL bitset is artificially constraining us! >:(
Misuse of Macros
So the STL is implemented in a way that forces us into a 32 bit bitset. Are we screwed? No! As it turns out, before PL people realized how good an idea namespaces were, they gave us the C++ macro and #include systems! Thus, even though you're importing code that you have no control over, you can actually modify its internals through shameless abuse of macros.
For example, say your colleague has heard of something called "encapsulation" or "abstraction", and made the internals of their data structure private so you can't touch them. Obviously, you're smarter than they are, and now you need to modify those internals anyway. You could ask them nicely, or you could just #define private public!
We can use a similar idea here, where we want to replace long with long long. In principle, #define unsigned unsigned long would do the trick, since it turns every unsigned long into an unsigned long long. Unfortunately, the GCC programmers never thought of such a brilliant usage of their library, and on its own this define breaks their internal code. I won't go through the entire process of what we needed to do, so let me present: 64 bit bitsets on Codeforces!
64 bit bitsets!
#include <string>
#include <bits/functexcept.h>
#include <iosfwd>
#include <bits/cxxabi_forced.h>
#include <bits/functional_hash.h>
#pragma push_macro("__SIZEOF_LONG__")
#pragma push_macro("__cplusplus")
#define __SIZEOF_LONG__ __SIZEOF_LONG_LONG__
#define unsigned unsigned long
#define __cplusplus 201102L
#define __builtin_popcountl __builtin_popcountll
#define __builtin_ctzl __builtin_ctzll
#include <bitset>
#pragma pop_macro("__cplusplus")
#pragma pop_macro("__SIZEOF_LONG__")
#undef unsigned
#undef __builtin_popcountl
#undef __builtin_ctzl
#include <bits/stdc++.h>
using namespace std;
signed main() {
    bitset<100> A;
    A[62] = 1;
    cout << A._Find_first() << endl;
}
If we rerun our benchmarks, we see
For Loop (C++17 64 bit): 3736 ms
Actually 64 bit Bitset (C++17 64 bit): 69 ms
Ah... 3736/69 ~= 64. There we go :)
Running the benchmarks with MAXN=1e5 and iters=1e5 makes it more obvious.
Bitset (C++17 64 bit): 908 ms
Actually 64 bit Bitset (C++17 64 bit): 571 ms
Real Problems
Let's take a real problem, Count the Rectangles, with code taken from okwedook here.
Adding our "template" speeds up their solution from 373 ms to 249 ms!
Caveats
Hm... None that I can see :^)
Note: This code is provided with no warranty. If you get WA, TLE, or your code crashes because of this template, don't blame me.