Performance of bitset optimization

Revision en1, by Orn0, 2025-01-15 21:23:19

Hello Codeforcers,

Recently I stumbled upon this AtCoder problem. The naive solution in $$$\mathcal{O}(n^2m)$$$ gets TLE ($$$n \leq 2000$$$, $$$m \leq 2000$$$), and the intended solution uses arrays of bitsets.

I don't understand why it's so much more efficient. The editorial uses bitsets of size $$$1000$$$, so I would expect the overall complexity to be $$$\mathcal{O}(1000nm)$$$, which is pretty close to the naive one. Should I understand that performing $$$\mathrm{XOR}$$$ on a bitset of size $$$n$$$ isn't of complexity $$$\mathcal{O}(n)$$$? (e.g. by operating on 64-bit machine words rather than individual bits?)

I had a hard time understanding the solution, so any similar problems or resources to learn from would be appreciated.
