Inspiration
I'm very excited about this blog, as it took me quite a lot of effort and scavenging through the internet to completely grasp this technique (that's probably because I have almost zero knowledge of Linear Algebra, or I'm just plain dumb). So I feel like I genuinely conquered a challenge, and I really want to share it with someone. But there's no way my CP friends circle will believe it; they'll think I'm just trying to show off :P
So here I am, sharing it on CF. I also created a personal blog, so that if I ever feel like sharing something again (not only about CP), I can write a blog there. I added this same post there too; you can read it there if you prefer a dark theme. I'll be pleased to hear any thoughts on the blog, or suggestions on how I can improve it ^_^
Introduction
Since it concerns Linear Algebra, there needs to be a lot of formal stuff going on in the background. But I'm not confident enough in this subject to dare go very deep. So, whenever possible, I'll try to explain everything in intuitive, plain English. Also, this blog might take a while to read through completely, as there are quite a few observations to grasp, and the example problems aren't that easy either. So please be patient and try to go through it all, in several sittings if needed. I believe it'll be worth the time. In any case, I've written the solutions and codes, and provided links to their editorials (if available). I'll provide more details in the solutions tomorrow and put more comments in the codes, since I'm really tired from writing this blog all day.
Now, the problems that can be solved using this technique are actually not that hard to identify. The most common scenario: you're given an array of numbers, and the problem asks for an answer that considers the xor-sums of the numbers over all possible subsets of the array. The technique can also be used in some online-query problems: queries of the first type instruct you to insert numbers into the array (_without removal_; I don't know how to handle deletion of elements), and in between those, queries of the second type ask for answers.
The whole technique can be divided into two main parts; some problems can even be solved using only the first part (don't worry if you don't understand them completely now, I explain them in detail right below):
1. Represent each given number in its binary form and consider it as a vector in the $$$\mathbb{Z}_2^d$$$ vector space, where $$$d$$$ is the maximum possible number of bits. Then, the xor of some of these numbers is equivalent to the addition of the corresponding vectors in the vector space.
2. Somehow relate the answer to the queries of the second type with the basis of the vectors found in Part 1.
PS: Does anyone know any name for this technique? I'm feeling awkward referring to it as 'technique' this many times :P If it's not named yet, how about we name it something?
Part 1: Relating XOR with Vector Addition in $$$\mathbb{Z}_2^d$$$
Let me explain the idea in plain English first, then we'll see what the $$$\mathbb{Z}_2^d$$$ and vector space means. I'm sure most of you have already made this observation by yourselves at some point.
Suppose we're xor-ing the two numbers $$$2$$$ and $$$3.$$$ Let's do it below:
Now, for each corresponding pair of bits in the two numbers, compare the result of their xor with the result of their sum taken modulo $$$2$$$:
Bit no. | First number | Second number | $$$\oplus$$$ | Sum | Sum taken $$$\pmod 2$$$ |
---|---|---|---|---|---|
$$$1$$$st bit | $$$0$$$ | $$$1$$$ | $$$1$$$ | $$$1$$$ | $$$1$$$ |
$$$2$$$nd bit | $$$1$$$ | $$$1$$$ | $$$0$$$ | $$$2$$$ | $$$0$$$ |
Notice the similarity between columns $$$4$$$ and $$$6$$$? So we can see that taking the xor of two numbers is essentially the same as, for each bit position separately, taking the sum of the two corresponding bits modulo $$$2.$$$
Now, consider a cartesian plane with integer coordinates, where the coordinate values can only be $$$0$$$ or $$$1.$$$ If any coordinate exceeds $$$1$$$ or goes below $$$0,$$$ we simply take its value modulo $$$2.$$$
This way, there can only be $$$4$$$ points in this plane: $$$(0, 0), (0, 1), (1, 0), (1, 1).$$$ Writing any other pair of coordinates will refer to one of them in the end, for example, point $$$(3, 2)$$$ is the same point as point $$$(1, 0)$$$ since $$$3 \equiv 1$$$ and $$$2 \equiv 0$$$ modulo $$$2.$$$
In view of this plane, we can represent the number $$$2 = (10)_2$$$ as the point $$$(0, 1),$$$ by setting the first bit of $$$2$$$ as the $$$x$$$ coordinate and the second bit as the $$$y$$$ coordinate in our plane. Refer to this point as $$$P(0, 1).$$$ Then, the position vector of $$$2$$$ will be $$$\overrightarrow{OP}$$$ where $$$O(0, 0)$$$ is the origin. Similarly, the position vector of $$$3$$$ will be $$$\overrightarrow{OQ}$$$ where $$$Q = (1, 1).$$$
An interesting thing happens here: if we add the two position vectors, the corresponding coordinates get added modulo $$$2,$$$ which actually gives us the position vector of the xor of the two numbers. For example, adding the vectors $$$\overrightarrow{OP}$$$ and $$$\overrightarrow{OQ}$$$ we get $$$\overrightarrow{OR},$$$ where $$$R(1, 0)$$$ turns out to be the point corresponding to the xor of $$$2$$$ and $$$3.$$$
This is all there is to it. Transforming xor operations into bitwise addition modulo $$$2$$$ and, in some cases, vector addition in this way can be helpful in some problems. Let's see one such problem. Before that, let me explain in short what vector space and $$$\mathbb{Z}_2^d$$$ meant earlier. I apologize to any Linear Algebra fans, since I don't want to write formal definitions here and make things look harder than they are. I'll explain the idea of these terms the way I picture them in my mind; kindly pardon me for any mistakes and correct me if I'm wrong.
$$$\underline{\text{Vector Space}}$$$: Just a collection of vectors (one that is closed under vector addition and scaling).
$$$\underline{\mathbb{Z_2}}$$$: $$$\mathbb{Z_m}$$$ is the set of remainders upon division by $$$m.$$$ So, $$$\mathbb{Z_2}$$$ is simply the set $$$\{0, 1\},$$$ since these are the only remainders possible when taken modulo $$$2.$$$
$$$\underline{\mathbb{Z_2^d}}$$$: A $$$d$$$-dimensional vector space consisting of all the different position vectors that have $$$d$$$ coordinates, each coordinate being an element of $$$\mathbb{Z_2}.$$$ For example, our custom cartesian plane earlier was two-dimensional, so it was $$$\mathbb{Z_2^2}.$$$ $$$\mathbb{Z_2^3}$$$ would be a small $$$3$$$D space with only $$$2^3 = 8$$$ points, all coordinates taken modulo $$$2.$$$
So, what we've seen is that the xor-operation is equivalent to vector addition in a $$$\mathbb{Z}_2^d$$$ vector space. See how unnecessarily intimidating this simple idea sounds when written in formal math!
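To see this correspondence in code, here is a tiny self-contained check (my own snippet, not from the original post) that xor really is coordinate-wise addition modulo $$$2$$$:

#include <cstdio>

int main() {
    int a = 2, b = 3, d = 2; // the numbers from the example above, viewed as vectors in Z_2^2
    int sum = 0;
    for (int i = 0; i < d; i++) {
        int bit = ((a >> i & 1) + (b >> i & 1)) % 2; // add the i-th coordinates modulo 2
        sum |= bit << i;
    }
    printf("%d %d\n", a ^ b, sum); // prints "1 1": both computations give the same vector
    return 0;
}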
Anyways, the problem:
Problem 1 (Division 2 — C)
Find the number of non-empty subsets, modulo $$$10^9 + 7,$$$ of a given set of size $$$1 \le n \le 10^5$$$ with range of elements $$$1 \le a_i \le 70,$$$ such that the product of its elements is a square number.
Link to the source
If you'd like to solve the problem first, then kindly pause and try it before reading on further.
Since the number of different possible masks was just $$$70$$$ in the previous problem, we were able to use dynamic programming to check all possible xors. But what if the constraint were much bigger, say $$$10^5$$$? That is when we can use Part $$$2$$$ of this technique, which, in some cases, works even when the queries are online.
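For reference, here is a rough sketch of the Part-1-only DP mentioned above. This is my own reconstruction and may differ from the hidden solution code: each value is mapped to the parity mask of its prime factorization over the $$$19$$$ primes below $$$70,$$$ and we count subsets whose combined mask is $$$0,$$$ i.e. whose product is a square.

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;
const int PRIMES[19] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67};

int primeMask(int x) { // parity of each prime exponent of x, packed into a 19-bit mask
    int mask = 0;
    for (int i = 0; i < 19; i++)
        while (x % PRIMES[i] == 0) { x /= PRIMES[i]; mask ^= 1 << i; }
    return mask;
}

int main() {
    int n;
    scanf("%d", &n);
    vector<int> cnt(71, 0);
    for (int i = 0; i < n; i++) { int a; scanf("%d", &a); cnt[a]++; }

    vector<long long> pow2(n + 1, 1);
    for (int i = 1; i <= n; i++) pow2[i] = pow2[i - 1] * 2 % MOD;

    // dp[mask] = number of subsets of the values processed so far whose prime masks xor to mask
    vector<long long> dp(1 << 19, 0), ndp(1 << 19, 0);
    dp[0] = 1;
    for (int v = 1; v <= 70; v++) {
        if (!cnt[v]) continue;
        long long even = pow2[cnt[v] - 1]; // ways to take an even number of copies of v
        long long odd  = pow2[cnt[v] - 1]; // ways to take an odd number of copies of v
        int pm = primeMask(v);
        for (int mask = 0; mask < (1 << 19); mask++)
            ndp[mask] = (dp[mask] * even + dp[mask ^ pm] * odd) % MOD;
        swap(dp, ndp);
    }
    printf("%lld\n", (dp[0] - 1 + MOD) % MOD); // subtract the empty subset
    return 0;
}

With $$$2^{19}$$$ mask states and at most $$$70$$$ distinct values, this is roughly $$$3.7 \cdot 10^7$$$ operations.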
Part 2: Bringing in Vector Basis
We need a couple of definitions now to move forward. All the vectors mentioned in what follows exclude the null vector. I sincerely apologize for being so informal with these definitions.
$$$\underline{\text{Independent Vectors:}}$$$ A set of vectors $$$\vec{v_1}, \vec{v_2}, \ldots, \vec{v_n}$$$ is called independent if none of them can be written as a linear combination of the rest.
$$$\underline{\text{Basis of a Vector Space:}}$$$ A set of vectors is called a basis of a vector space if every vector of that space can be written uniquely as a linear combination of the elements of that basis.
A few important properties of independent vectors and vector bases that we will need later on (I find these pretty intuitive, so I didn't bother reading any formal proofs. Let me know in the comments if you need any help):
1. For a set of independent vectors, we can change any of these vectors by adding to it any linear combination of all of them, and the vectors will still stay independent. What's more fascinating is that the set of vectors in the space representable by some linear combination of this independent set stays exactly the same after the change.
2. Notice that, in the case of the $$$\mathbb{Z}_2^d$$$ vector space, the coefficients in a linear combination must also lie in $$$\mathbb{Z}_2.$$$ This means an element vector is either included in a linear combination or not; there's no in-between.
3. The basis is actually the smallest set such that all other vectors in the vector space are representable by a linear combination of just its element vectors.
4. The basis vectors are independent.
5. For any set with a smaller number of independent vectors than the basis, not all of the vectors in the space will be representable.
6. And there cannot possibly be a larger number of independent vectors in a set than the size of the basis. If $$$d$$$ is the size of the basis of a vector space, then the moment you have $$$d$$$ independent vectors in a set, it becomes a basis. You cannot add another vector to it, since that new vector is already representable using the basis.
7. For a $$$d$$$-dimensional vector space, its basis can have at most $$$d$$$ vector elements.
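A quick concrete example (my own, just for illustration): in $$$\mathbb{Z}_2^3,$$$ the vectors $$$(1,0,0)$$$ and $$$(0,1,0)$$$ are independent but do not form a basis, since $$$(0,0,1)$$$ cannot be written as a combination of them (property $$$5$$$). Adding $$$(0,0,1)$$$ gives a basis of size $$$3 = d$$$ (property $$$7$$$), while $$$(1,1,0)$$$ could not have been added instead, because $$$(1,1,0) = (1,0,0) + (0,1,0)$$$ is already representable.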
With just these few properties, we can experience some awesome solutions to a few hard problems. But first, we need to see how to efficiently find a basis for a set of $$$n$$$ vectors, where each vector is an element of $$$\mathbb{Z}_2^d.$$$ The algorithm is quite awesome <3 And it works in $$$O(n \cdot d).$$$
The Algorithm:
This algorithm extensively uses properties $$$1, 2, 3$$$ and $$$4,$$$ and also the rest in the background. All the vectors here belong to $$$\mathbb{Z}_2^d,$$$ so they are representable by a bitmask of length $$$d.$$$
Suppose at each step, we're taking an input vector $$$\vec{v_i}$$$ and we already have a basis of the previously taken vectors $$$\vec{v_1}, \vec{v_2}, \ldots, \vec{v_{i - 1}},$$$ and now we need to update the basis such that it can also represent the new vector $$$\vec{v_i}.$$$
In order to do that, we first need to check whether $$$\vec{v_i}$$$ is representable using our current basis or not.
If it is, then this basis is still enough and we don't need to do anything. But if it's not, then we just add the vector $$$\vec{v_i}$$$ to the basis.
So the only difficulty that remains is to efficiently check whether the new vector is representable by the basis or not. To facilitate this, we use property $$$1$$$ to slightly modify each new vector before inserting it into the basis, being careful not to break the basis. This way, we have more control over the form of our basis vectors. So here's the plan:
Let $$$f(\vec{v})$$$ be the first position in the vector's binary representation where the bit is set. We make sure that all the basis vectors have different $$$f$$$ values.
Here's how we do it. Initially, there are no vectors in the basis, so we're fine: there are no $$$f$$$ values to collide with each other. Now, suppose we're at the $$$i$$$'th step, and we're checking whether the vector $$$\vec{v_i}$$$ is representable by the basis or not. Since all of our basis vectors have different $$$f$$$ values, take the one with the least $$$f$$$ value among them; let's call this basis vector $$$\vec{b_1}.$$$
If $$$f(\vec{v_i}) < f(\vec{b_1}),$$$ then no matter how we take the linear combination, by property $$$2,$$$ no linear combination of the basis vectors can have a $$$1$$$ at position $$$f(\vec{v_i}).$$$ So $$$\vec{v_i}$$$ will be a new basis vector, and since its $$$f$$$ value is already different from those of the rest of the basis vectors, we can insert it into the set as it is and keep a record of its $$$f$$$ value.
But if $$$f(\vec{v_i}) = f(\vec{b_1}),$$$ then we must subtract $$$\vec{b_1}$$$ from $$$\vec{v_i}$$$ if we want to represent $$$\vec{v_i}$$$ as a linear combination of the basis vectors, since no other basis vector has a $$$1$$$ at position $$$f(\vec{v_i}) = f(\vec{b_1}).$$$ So we subtract $$$\vec{b_1}$$$ from $$$\vec{v_i}$$$ and move on to $$$\vec{b_2}.$$$
Note that, by changing the value of $$$\vec{v_i},$$$ we're not causing any problem, according to property $$$1$$$: $$$\vec{v_i}$$$ and $$$\vec{v_i} - \vec{b_1}$$$ are of the same use to us. If at some later step we find out that $$$\vec{v_i}$$$ is actually not representable by the current basis, we can still just insert its changed value into the basis, since the set of vectors representable by this new basis would have been the same had we inserted the original $$$\vec{v_i}$$$ instead.
If, after iterating through all the basis vectors $$$\vec{b}$$$ and subtracting them from $$$\vec{v_i}$$$ when needed, we still find that $$$\vec{v_i}$$$ is not the null vector, it means that the new, changed $$$\vec{v_i}$$$ has a larger $$$f$$$ value than all the other basis vectors. So we insert it into the basis and keep a record of its $$$f$$$ value.
Here's the implementation, the vectors being represented by bitmasks of length $$$d$$$:
int basis[d]; // basis[i] keeps the mask of the vector whose f value is i
int sz; // Current size of the basis

void insertVector(int mask) {
    for (int i = 0; i < d; i++) {
        if ((mask & 1 << i) == 0) continue; // skip unset bits; the first set bit we reach is f(mask)
        if (!basis[i]) { // no basis vector has f value i, so insert this vector into the basis
            basis[i] = mask;
            ++sz;
            return;
        }
        mask ^= basis[i]; // Otherwise subtract the basis vector from this vector
    }
}
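As a small companion (my own addition, not part of the original code), the same loop without the insertion tests whether a vector is representable by the current basis, which is exactly the check described above:

// Returns true if mask can be written as the xor of some of the current basis vectors,
// i.e. inserting it would not enlarge the basis.
bool isRepresentable(int mask) {
    for (int i = 0; i < d; i++) {
        if ((mask & 1 << i) == 0) continue;
        if (!basis[i]) return false; // a set bit that no basis vector can produce
        mask ^= basis[i];
    }
    return true; // mask has been reduced to the null vector
}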
Let's view some problems now:
Problem 2a
Given a set $$$S$$$ of size $$$1 \le n \le 10^5$$$ with elements $$$0 \le a_i \lt 2^{20}.$$$ Find the number of distinct integers that can be represented using xor over the set of the given elements.
Link to the source
Problem 2b
We have a graph of $$$2^{k}$$$ nodes numbered from $$$0$$$ to $$$2^{k} - 1,$$$ $$$1 \le k \le 30.$$$ Also, we're given $$$1 \le M \le 10^5$$$ integers $$$x_1, x_2, \ldots, x_M$$$ within the range $$$0 \le x_i \le 2^{k} - 1.$$$ In the graph, two vertices $$$u$$$ and $$$v$$$ are connected with an edge iff $$$u \oplus v = x_i$$$ for some $$$i.$$$ Find the number of connected components in the graph.
Link to the source
Link to the editorial and reference code
Problem 3
Given a set $$$S$$$ of size $$$1 \le n \le 10^5$$$ with elements $$$0 \le a_i \lt 2^{20}.$$$ What is the maximum possible xor of the elements of some subset of $$$S?$$$
Link to the source
Problem 4 (1st Hunger Games — S)
We have an empty set $$$S$$$ and we are to perform $$$1 \le n \le 10^6$$$ queries on it. Let $$$X$$$ denote the set of all possible xor-sums of elements from a subset of $$$S.$$$ There are two types of queries.
Type $$$1$$$: Insert an element $$$1 \le k \le 10^9$$$ into the set (if it's already in the set, do nothing)
Type $$$2$$$: Given $$$k,$$$ print the $$$k$$$'th highest number from $$$X.$$$ It's guaranteed that $$$k \le |X|.$$$
Link to the source
Problem 5 (Division 2 — F)
You're given an array $$$0 \le a_i \lt 2^{20}$$$ of length $$$1 \le n \le 10^5.$$$ You have to answer $$$1 \le q \le 10^5$$$ queries.
In each query you'll be given two integers $$$1 \le l \le n$$$ and $$$0 \le x \lt 2^{20}.$$$ Find the number of subsequences of the first $$$l$$$ elements of this array, modulo $$$10^9 + 7,$$$ such that their bitwise-xor sum is $$$x.$$$
Link to the source
Problem 6 (Education Round — G)
You are given an array $$$0 \le a_i \le 10^9$$$ of $$$1 \le n \le 2 \cdot 10^5$$$ integers. You have to find the maximum number of segments this array can be partitioned into, such that -
1. Each element is contained in exactly one segment
2. Each segment contains at least one element
3. There doesn't exist a non-empty subset of segments such that bitwise-xor of the numbers from them is equal to $$$0$$$
Print $$$-1$$$ if no suitable partition exists.
Link to the source
Conclusion
This is my first take on writing tutorial blogs on CF. I hope it'll be of use to the community.
I apologize for my terrible Linear Algebra knowledge. I would write this blog without using any of it if I could. I don't want to spread any misinformation. So please let me know in comments if you find any mistakes/wrong usage of notations.
I plan to write on the Hungarian Algorithm next. There are just so many prerequisites to this algorithm; it'll be an enjoyable challenge to write about. I'd be glad if you could share some resource links in the comments to learn it from, though I already have quite a few.
Wow
I saw some coder using this technique, what is the name of this technique?
I think some people refer to it as Gaussian Elimination modulo $$$2$$$ or bitwise Gaussian Elimination, since we use a similar idea to efficiently find a basis of the $$$\mathbb{Z}_2^d$$$ vector space. But personally, I don't like it much, because Gaussian Elimination is not the main theme of this technique; the main theme is the xor-to-vector-space transition and then finding a basis of that vector space.
If it really doesn't have a common name yet, how does xor-basis sound? Simple and easy to remember. Also perfectly resembles the idea of this technique.
I usually consider it as either —
Addition of vectors over GF(2)/Z(2)
Or just system of linear equations mod 2.
To be very particular about the term, Gaussian elimination is the very concept of having a different pivot (leftmost $$$1$$$) for each row in a matrix. The main highlight here is exactly that: "Let $$$f(v)$$$ be the first position in the vector's binary representation where the bit is set. We make sure that all the basis vectors have different $$$f$$$ values."
But this is all just language, apart from that, thanks a lot. I never tried to learn Gaussian elim because it looked tough ( saw long codes everywhere, and couldn't understand without the correlation with Linear Algebra ). But now I feel it's so easy.
Isn't this really just a weird variant of the Gram-Schmidt procedure to find an orthogonal basis of a subspace?
There is a $$$\mathbb{Z}_2$$$ analogue of the Gram-Schmidt orthogonalization procedure (using the least significant bit of popcnt(x & y) as a $$$\mathbb{Z}_2$$$ analogue of the inner product) to find a basis, but that isn't described here. Over the real numbers this variant of the Gaussian elimination process can be represented as a limit of Gram-Schmidt processes with (non-standard) inner products increasingly skewed to weigh the first coordinate more and more heavily, but I'm not aware of any way to (even "formally") take such a limit over $$$\mathbb{Z}_2$$$. (Of course, one could still use a relative of $$$\mathbb{Z}_2$$$ dual numbers to justify the analogy, but I think that seems rather artificial.)
EDIT: Oops! I clearly misremembered something. This pseudo-inner-product is unsuitable for use with the Gram-Schmidt process over $$$\mathbb{Z}_2^n$$$, as it is degenerate, lacking positive-definiteness for $$$n>1$$$. That type of construction can still be useful in discrete linear algebra, but not in this context.
I'd just say "I used some linear algebra".
Link to the source of the problem 4 is not working :/
You have to join the group first to access the problem. The group name is probably The 1st Hunger Games.
Done.
Sorry, I'll edit the blog to mention about it. This problem is from a CF group. You need to join the group as a spectator first to view it.
Group Link.
More information.
Can you please explain why the answer for problem 5 is $$$2^{l - b}$$$? I couldn't understand it properly.
There are $$$2^{l - b}$$$ subsets of the set containing only the non-basis element vectors. I claimed that, for each of these subsets, if we take the vectors from it, then there will be a unique way to select some more vectors from the basis so that the addition of all these vectors equals $$$x.$$$
To see this, pick some subset from the set of non-basis element vectors and let the xor-sum of its elements be $$$y.$$$ Then we need to pick some elements from the basis so that their xor-sum is $$$y \oplus x,$$$ which can be done in a unique way by the definition of a basis.
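To make the counting concrete, here is a rough sketch (my own code, not the author's; it assumes the queries are processed offline in increasing order of $$$l,$$$ inserting array elements into the basis as $$$l$$$ grows):

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;
const int D = 20;
int basis[D], sz; // same structure as in the blog

void insertVector(int mask) {
    for (int i = 0; i < D; i++) {
        if ((mask & 1 << i) == 0) continue;
        if (!basis[i]) { basis[i] = mask; ++sz; return; }
        mask ^= basis[i];
    }
}

bool representable(int mask) { // same loop, but without modifying the basis
    for (int i = 0; i < D; i++) {
        if ((mask & 1 << i) == 0) continue;
        if (!basis[i]) return false;
        mask ^= basis[i];
    }
    return true;
}

long long power2(long long e) { // 2^e mod MOD
    long long r = 1, b = 2;
    for (; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

long long answerQuery(int l, int x) { // call after the first l array elements have been inserted
    return representable(x) ? power2(l - sz) : 0; // 2^(l - b) subsequences, where b = sz
}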
Thanks, it is clear now.
Yes, it works :) Nicely utilized the constraints.
The following problem uses the concept of Problem 2 in this blog: https://www.codechef.com/COOK106A/problems/XORCMPNT
Seriously speaking, I am finding this topic hard to comprehend (I mean solving XOR/bit-related problems with Gaussian Elimination modulo 2). Do I need a degree in Mathematics for that? XD
Thank you :) Added the problem. Nice problem solaimanope vai ^_^
Thank you very much. Almost everything in this post was completely clear to me. Very good.
Glad it helped! ^_^ It feels very nice to receive appreciative words for my work :)
Btw I wonder, how did you come across this post once it went out of the "recent actions" list?
My friend shared a link.
Thank you so much! It helps me a lot.
But I have a small question about the code for building the basis from n vectors.
You said "basis[i] keeps the index of the vector whose f value is i" , and then you just made "basis[i] = mask;" .
I find it okay, but in my opinion the "mask" means the vector value, instead of the index of that mask.
Perhaps I just have a wrong understanding QAQ. Anyway, thank you!
Sorry, you probably read it from my blog. I fixed it here, but forgot to do so in the blog. Thanks for noting it!
How can I find a and b, if I have a*b and a^b? Thank you.
This is pretty well known...
Thank you a lot! It's the best blog on xor-basis i've seen so far
If we have to count the number of distinct integers that can be obtained using OR over the set of the given elements, can we solve this problem using the Gaussian elimination technique? How can we solve it?
I recently came up with a super easy solution with xor basis for Problem 5: 73340139
Please don't stop writing!
Yes Please. Especially problem 5 :P <3
ROFL! Even I found that one particularly interesting
Lovely explanation. Never thought I'd be using vectors and vector spaces for finding whether a product is a square. Honestly, it still won't come to me at first sight. LOL
Will read the blog again.
For Problem 1, the second DP represents something different than the first DP. dp[at][mask] = # of non-empty subsets using values <= at (from the array) with this mask. Just thought that was unclear.
Also, the poss[at] array can be computed as 2^(count[at] - 1) directly, by the binomial theorem (irrespective of even/odd).
Pardon me, but for an arbitrarily large dimension $$$d$$$, this algorithm actually works in $$$O(n \cdot d^2)$$$, correct? Here you assume that XOR can be performed in $$$O(1)$$$, but that is not generally the case if, say, $$$d = 10^4$$$. Would using bitsets help?
You are right and the actual complexity is just $$$O(n \cdot d^2 / 32)$$$. It's just that $$$d$$$ usually is just binary length of numbers from the input so it's around $$$32$$$ in 90% of cases.
How many distinct numbers can be generated by doing bitwise OR of one or more integers between A and B (inclusive), where $$$1 \le A \le B \le 2^{60}$$$?
As it is a problem from an ongoing contest, can you please help me with this after 1 day? The contest will be over by then. Thanks in advance; I am very curious to know a solution that solves it in optimal time.
very nice tutorial, thank you for your efforts
Thanks for the nice tutorial.
A nice problem that can be solved using this technique.
Thank you for your explanation!
Another problem using this technique: Problem A of atcoder grand contest 045.
Thanks man, I landed on this page searching for explanation for XOR Battle because of your comment.
In problem 5 you said we can answer the queries online. But you have solved it offline, right? Or did I misunderstand the definition of offline queries?
In the code of part 2, it says
mask ^= basis[i]; // Otherwise subtract the basis vector from this vector
Can someone explain how subtracting is synonymous with XOR here? Is this an error?
It is explained earlier in the blog that XOR is related to addition/subtraction in $$$\mathbb{Z}_2^d$$$. Here, you are trying to represent $$$mask$$$ as a sum of some $$$basis[i]$$$. If the representation contains $$$basis[i]$$$, then xor-ing it with $$$mask$$$ will exclude it from $$$mask$$$.
A much-needed blog for today's contest! :P Thanks for sharing!
Your work is really appreciable. Thanks a lot and keep bringing more such content!!
What are you actually storing in basis[i]?
If I insert 7 and 3, then I can represent the vector space with [1,1,1], [1,0,0] as the basis. Am I right or not? Someone please clarify.
Yes. The first number is 111 and it will mark the least significant bit as taken. Then 011 is changed into 100.
DrSwad, -is-this-fft-: in problem 5, is the idea like this: addition in a vector space is closed, and $$$x$$$ should be present in the given vector space. So, to form a subsequence with xor-sum equal to $$$x$$$, from the non-basis vectors we have an xor-sum of the form $$$(K \oplus x)$$$, and since addition is closed, it is possible to form a vector of the form $$$K$$$ from the basis vectors?
Downvote for tags.
Also I didn't understand anything from what you said, can you please clarify?
I am asking about the basic idea behind the number of subsequences whose xor sum is equal to X.
Assume you have these $$$\ell$$$ vectors and suppose $$$b$$$ of them form a basis $$$B$$$. Then all other $$$\ell - b$$$ vectors can be represented as the XOR of some basis vectors. I will write "$$$\operatorname{span} B$$$" to denote the set of all vectors that can be represented as the XOR of some basis vectors, notice that if $$$x, y \in \operatorname{span} B$$$, then also $$$x \oplus y \in \operatorname{span} B$$$.
Suppose we take a vector $$$x \in \operatorname{span} B$$$. Pick any subset of the "other $$$\ell - b$$$ vectors", let their XOR be $$$K$$$, then also $$$K \in \operatorname{span} B$$$. If we wanted to pick some vectors such that the XOR is $$$x$$$, and the XOR of the non-basis vectors we chose is $$$K$$$, then we need to pick such basis vectors that their XOR is $$$K \oplus x$$$, and there is exactly one way to do that (because $$$B$$$ is a basis).
If we want to form $$$x$$$, we have $$$2^{\ell - b}$$$ ways to choose from the "non-basis vectors" and 1 way to choose from the basis vectors, so the answer is $$$2^{\ell - b}$$$.
Got it, thanks.
Sorry, it's closure, not commutative.
OK, it makes a bit more sense now.
Goes to my favorite blogs without doubt
Why is the maximum size of the basis of a $$$d$$$-dimensional vector space at most $$$d$$$?
Each vector in your basis is responsible for one of the $$$d$$$ dimensions (responsible refers to the fact that only that vector has a $$$1$$$ in that dimension and the other vectors have a $$$0$$$). Since your vectors are $$$d$$$-dimensional, there can be at most $$$d$$$ vectors that each take care of different dimensions.
Consider a case where all the numbers (in the space) have a $$$1$$$ in a given dimension. Then we cannot find a basis (with size > 1) in which a single vector is responsible for that dimension (since all the basis vectors would have a $$$1$$$ there), so how can we use at most $$$d$$$ numbers in the basis?
Okay, got it! We can always perform a linear combination to get the same form which you described!
I got goosebumps after reading your solution to problem 2a, just amazing!!
Problem 2a is giving the wrong output for the set s = {3,4,5}: it should give 4, but it is giving 8. Can anyone take a look?
You're probably taking XOR on pairs only. The problem requires you to take XORs on all the subsets. Here are the results I'm getting:
How do we solve problem 1 with the bigger constraints ("But what if the constraint was much bigger, say 10^5")? Can someone please help with the solution?
I assume you already know how to quickly factorize, and that you understand the solution for the small version.
Let's learn how to solve "how many subsets are there with xor sum $$$0$$$" first. Compute prime masks as in the small version, and the answer is simply $$$2^{no.\ of\ non-basis\ elements}$$$. Indeed, for any subset of the non-basis elements, the basis can form exactly the same xor sum, so the number of $$$0$$$-xor subsets is just the number of subsets of the non-basis elements. The blog already described how to build the basis.
Understand up to this part first before proceeding. I am not sure whether there's an OJ problem of "counting $$$0$$$ xor sum subsets" aside from that problem 1.
Now, you may be thinking "there are too many primes to consider". You're right: there are exactly $$$9592$$$ primes, which is a little too many. The catch here is that each number can have at most one "large prime" factor, larger than $$$\sqrt{10^5} \approx 316$$$, and there are only $$$65$$$ small primes (smaller than $$$316$$$). Let's see how we can speed up the algorithm with this observation.
We will build the basis from the most significant bit down (like problem 3). The algorithm for adding a mask into the basis can be seen as follows:
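(The snippet this sentence refers to isn't visible here; presumably it is the blog's insertion loop with the pivot taken as the most significant set bit instead. A guess at its shape, shown for plain int masks even though the masks in this problem are much wider:)

void insertVectorMSB(int mask) {
    for (int i = d - 1; i >= 0; i--) { // scan from the most significant bit down
        if ((mask & 1 << i) == 0) continue;
        if (!basis[i]) { basis[i] = mask; ++sz; return; } // pivot i is free, keep the vector here
        mask ^= basis[i]; // otherwise cancel bit i and continue downwards
    }
}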
Now, since there is at most one large prime per number, there is also at most one set bit among the $$$9527$$$ larger bit positions, and it is also the most significant bit. Since there is at most one such bit, either the vector immediately gets added at that basis position, or it immediately gets xor-ed with the mask at that basis position, after which no large bits remain set. So you can continue doing normal basis building for the $$$65$$$ small bits.
In terms of implementation it should look something like this:
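(The commenter's code isn't shown here, so the following is only my guess at its shape. Each number is assumed to be pre-factored into at most one "large" prime ($$$> 316$$$) plus a $$$65$$$-bit parity mask over the small primes; the factorization itself is omitted.)

#include <bits/stdc++.h>
using namespace std;

map<int, __int128> largeBasis; // pivot = the large prime -> the small-prime part of that basis vector
__int128 smallBasis[65];       // pivot = index of a small prime
bool usedSmall[65];
int basisSize = 0;

void insertSmall(__int128 mask) { // ordinary basis insertion on the 65 small-prime bits
    for (int i = 64; i >= 0; i--) {
        if (!((mask >> i) & 1)) continue;
        if (!usedSmall[i]) { smallBasis[i] = mask; usedSmall[i] = true; ++basisSize; return; }
        mask ^= smallBasis[i];
    }
}

void insertVector(int largePrime, __int128 smallMask) { // largePrime = 0 if the number has none
    if (largePrime) {
        auto it = largeBasis.find(largePrime);
        if (it == largeBasis.end()) { largeBasis[largePrime] = smallMask; ++basisSize; return; }
        smallMask ^= it->second; // the single large bit cancels; only small bits remain
    }
    insertSmall(smallMask);
}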
And the answer is once again $$$2^{no.\ of\ non-basis\ elements}$$$. You need a bitset or something like that to store basis this large.
Do ask if there are some things unclear or I made any mistakes.
Hey! Thank you for such a clear explanation.
1) "We will build the basis from the most significant bit down (like problem 3)" I understand that in this problem, we need to build the basis from the most significant bit down because we could have msb that is greater than > 65 and it wouldn't be algorithmically efficient to build it from lsb (harder to maintain which bits are set in mask). But do we really need to do same for problem 3? I have posted my doubts in comments below : https://codeforces.me/blog/entry/68953?#comment-768724.
2) "You need a bitset or something like that to store basis this large". By large you mean that bitset for 65 small bits, right (and not 9592 bits bitset)? Because we only need to maintain this 65 small bits and one msb separately?
I have a few doubts. Can someone help me with an intuitive reasoning for them?
1) The basis-generating algorithm: is it true that the bit positions at which basis vectors are found will always be the same, regardless of the order in which we insert the vectors? And why? I understand that the basis vectors themselves can be different, but will the bit positions for which basis vectors are found be the same?
2) In problem 2a, "For any linear combination of the basis vectors, we get a different possible xor of some subset. So, the answer would be 2^(size of basis)". But why is that so?
3) In problem 3, why do we need to alter the definition of F(b): "Instead of F being the first position with a set bit, let it be the last position with a set bit"? Can't we just calculate the maximum xor in the same manner as given in the solution (i.e. going from the highest bit to the lowest bit) without changing the definition at all? What does changing the definition achieve, if the basis bit positions we are going to find will be the same (assuming my doubt 1 is true)?
Can someone please help me with above doubts?
Hi. Late answer but I hope you're ok :)
1 and 3: The process of "basis building" here is called Gaussian Elimination. We have an independent set of vectors and we want to build a basis out of it. The main property here is that, we can mix up these vectors (i.e. xor them to each other in ANY way we want), and the span doesn't change. So we apply Gaussian Elimination, i.e. we are inserting all vectors into a matrix, and get it down to row echelon form.
I'm not sure if the span changes at all (I think it doesn't), but you can see how you can't greedily manipulate the smaller bits without disrupting the larger bits on the right case.
And to answer your question to my above comment, yes you are correct :) you just need the $$$65$$$ small bits. In C++ there is __int128, which is large enough.

Yes, just because I tried to find a counter test case and couldn't. What is mind-boggling is that the value of the vectors in the basis can be different and you still can get the XOR of all the possible subsets using these different bases.
You can exploit the fact that basis vectors are independent, so you can't get one of them using a linear combination of the others. That means if you didn't include a vector and then included it, the resulting value will surely change. So, you will have exactly $$$2^{\text{sz}}$$$ different values.
Altering the definition of $$$f$$$ will change the bits of the basis. For example, when you insert $$$3$$$ starting from the LSB you will have the 1st bit in the resulting basis, but when you start from the MSB you will have the 2nd bit only.
For the greedy algorithm to work with an LSB basis, you would need to go from the MSB when forming the maximum XOR. Suppose you are at bit $$$b$$$: you need to search the basis vectors for one that has bit $$$b$$$ set to $$$1$$$. The problem with the LSB basis is that you may find more than one vector with bit $$$b$$$ set to $$$1$$$, and you can't just take any of them; the greedy approach will not work here, and you would need to try all the different possible combinations of basis vectors.
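A minimal sketch of the MSB-pivot variant being described (my own code; the names are mine, not from the blog):

const int D = 20;   // number of bits; in problem 3 the values satisfy a_i < 2^20
int basisMSB[D];    // basisMSB[i] holds a vector whose highest set bit is i, or 0 if there is none

void insertVectorMSB(int mask) {
    for (int i = D - 1; i >= 0; i--) {
        if ((mask >> i & 1) == 0) continue;
        if (!basisMSB[i]) { basisMSB[i] = mask; return; }
        mask ^= basisMSB[i]; // cancel the highest set bit and continue downwards
    }
}

int maxXor() { // greedily take a basis vector whenever it increases the current answer
    int ans = 0;
    for (int i = D - 1; i >= 0; i--)
        if (basisMSB[i] && (ans ^ basisMSB[i]) > ans) ans ^= basisMSB[i];
    return ans;
}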
For problem 3, this approach can also work: after forming the basis with the given n elements using the earlier-mentioned implementation (where f(i) is the first position with a set bit), we can just iterate from 2^21 down to 0, and the first value that can be formed from the basis will be the answer. PS: Correct me if I'm wrong.
Correct, but the problem would still be solvable by the author's solution when the number of bits is bigger, maybe 60 bits instead of 21.
Thank you for such a good tutorial!
Nice blog. up up.
can someone explain the following part in problem 4 : if ((low < k && (mask & 1 << i) == 0) || (low >= k && (mask & 1 << i) > 0)) mask ^= basis[i];
There are low possible elements that can be generated that have the i-th bit on, ignoring the higher bits. Same for the i-th bit off.

For (low < k && (mask & 1 << i) == 0): we need the $$$k$$$-th highest value; there are low elements that have the i-th bit off and $$$k$$$ is greater than that, so we need that bit on. If the current number has that bit off, we need to turn it on. Notice that xoring basis[i] to mask won't affect the bits that were set before, because of the way the $$$f$$$ values are handled.

For (low >= k && (mask & 1 << i) > 0): there are more elements that have the i-th bit off than $$$k$$$, so we need that bit off, by applying mask ^= basis[i].

Got it, thanks a lot.
how so xor
For problem 6, how can we relate the size of the basis with the amount of segments in the optimal division?
I understand that the vector space generated by all the subsets of segments is the same as the one generated by the subsets of prefixes. It's also clear why every non-empty subset of segments in the final division should generate a different xor-sum.
But why is the maximum amount of non-overlapping segments equal to the size of the basis? Would it be possible to reconstruct the answer once we've generated the basis?
Thanks! (and great blog btw, thank you Mr. Swad)
I thought about it the whole day and here is what I came up with: whenever you create a basis, the first number you push into your basis is pref[n], i.e. the last prefix element. Now if the basis has size k, then you will push k-1 more elements into your basis, but no matter how many elements you push into the basis, the last prefix will always be pref[n]; hence the whole array will be divided. Intrinsically, that is the reason why, if all the elements of the array xor to 0, you will never be able to push pref[n] into the basis, and hence you will never be able to divide the whole array into complete segments.
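Putting the discussion above into code, a sketch (my own, assuming the standard prefix-xor basis solution the thread describes) could look like this:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> basis(30, 0); // a_i <= 10^9 < 2^30, MSB pivots
    int sz = 0;
    long long pref = 0;
    for (int i = 0; i < n; i++) {
        long long a;
        scanf("%lld", &a);
        pref ^= a;
        long long cur = pref; // insert the current prefix xor into the basis
        for (int b = 29; b >= 0; b--) {
            if ((cur >> b & 1) == 0) continue;
            if (!basis[b]) { basis[b] = cur; ++sz; break; }
            cur ^= basis[b];
        }
    }
    // If the whole array xors to 0, pref[n] can never enter the basis and no valid partition exists.
    printf("%d\n", pref == 0 ? -1 : sz);
    return 0;
}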
Can someone explain me this one ?
For a set of independent vectors, we can change any of these vectors by adding to it any linear combination of all of them, and the vectors will still stay independent. What's more fascinating is that, the set of vectors in the space representable by some linear combination of this independent set stays exactly the same after the change.
DrSwad sorry for necroposting but how do you prove property 7?
Think about our 3D world, if you use a single vector, you can only represent a line. If you have two independent vectors you can represent a plane (surface). If you have $$$3$$$ independent vectors, you can represent the whole world.
3Blue1Brown YouTube channel has a playlist called "Essence of Linear Algebra", it is awesome and there are lots of animations and visual explanations.
For problem 6, I don't know why, when I tried to insert x into the basis instead of the prefix XOR, the solution was still the same.
Could you please elaborate on how this code works in problem 1?
int tmp = poss[a][0];
poss[a][0] = (poss[a][0] + poss[a][1]) % MOD;
poss[a][1] = (poss[a][1] + tmp) % MOD;