unalive's blog

By unalive, 3 weeks ago, In English

I am unfortunately not very good at writing code and can barely function without an easy way to debug it. I therefore need a debug template at ICPC, so I spent some time reducing the length of the template I normally use. I think it's pretty short already, but it seems like it can be shortened further. I don't know how to do so, hence this blog.

Some considerations:

  1. I can only use features introduced in C++17 or earlier, as my region is weird.
  2. I need to be able to debug all STL containers, and any nested versions thereof.

Now, if C++20 were allowed, one could simply use the following:

Code

__print() works by recursively breaking x down into its constituent elements and calling __print() on each of them (separating the output of consecutive calls with ,), bottoming out once it reaches a type for which << is already defined.
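
The code itself is collapsed above, so here is a rough, minimal sketch of the C++20 idea (my illustration, not the author's actual template; the names Streamable and Container are invented for the sketch): concepts distinguish "already printable with <<", "iterable", and pairs, and the overloads simply recurse into one another.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Anything that already has operator<< is printed directly.
template <class T>
concept Streamable = requires(ostream &os, const T &x) { os << x; };

// Anything iterable that is not already streamable is treated as a container.
template <class T>
concept Container = (!Streamable<T>) && requires(const T &x) { x.begin(); x.end(); };

void __print(const Streamable auto &x) { cerr << x; }

template <class A, class B>
void __print(const pair<A, B> &p);  // forward declaration for mutual recursion

void __print(const Container auto &c) {
    cerr << '{';
    bool first = true;
    for (const auto &e : c) {
        if (!first) cerr << ", ";
        first = false;
        __print(e);
    }
    cerr << '}';
}

template <class A, class B>
void __print(const pair<A, B> &p) {
    cerr << '(';
    __print(p.first);
    cerr << ", ";
    __print(p.second);
    cerr << ')';
}
```

A variadic debug(...) macro that calls __print() on each argument would sit on top of this.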

Now, what's the problem with making this code compatible with C++17?

The problem is that there doesn't seem to be a short (in terms of code length) way in C++17 to differentiate between pairs and iterable containers.

I found two solutions, neither of which is good at all:

1) Use is_literal_type_v to check if T is a pair
Code

This works for pairs like std::pair<int, float> but not for something like std::pair<int, vector<int>>, which is no longer a literal type (std::vector has a non-trivial destructor). This is a significant loss of functionality, since we can no longer debug things like map<int, vector<int>>, which come up often.
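
For concreteness, here is a minimal sketch of what this option can look like, assuming C++17 (my illustration, not the collapsed code above; streamable is an invented helper, and is_literal_type_v is deprecated in C++17 and removed in C++20):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Detects whether operator<< already exists for T.
template <class T, class = void>
struct streamable : false_type {};
template <class T>
struct streamable<T, void_t<decltype(declval<ostream &>() << declval<const T &>())>> : true_type {};

template <class T>
void __print(const T &x) {
    if constexpr (streamable<T>::value) {
        cerr << x;
    } else if constexpr (is_literal_type_v<T>) {
        // Heuristic: a non-streamable literal type is assumed to be a pair.
        cerr << '(';
        __print(x.first);
        cerr << ", ";
        __print(x.second);
        cerr << ')';
    } else {
        // Everything else is assumed to be iterable. A pair<int, vector<int>>
        // (not a literal type) lands here too, and the range-for fails to compile.
        cerr << '{';
        bool first = true;
        for (const auto &e : x) {
            if (!first) cerr << ", ";
            first = false;
            __print(e);
        }
        cerr << '}';
    }
}
```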

2) Just create a separate templated function for pairs
Code

This is also bad because:

  1. Much longer code.
  2. The two __print() functions call each other, so they now have to live inside a class/struct/namespace (a rough sketch of this structure follows this list).
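
Here is a rough sketch of what this option can look like in C++17 (again my illustration, not the collapsed code above; Dbg and streamable are invented names):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Detects whether operator<< already exists for T.
template <class T, class = void>
struct streamable : false_type {};
template <class T>
struct streamable<T, void_t<decltype(declval<ostream &>() << declval<const T &>())>> : true_type {};

struct Dbg {
    // Pair overload; preferred by partial ordering whenever the argument is a pair.
    template <class A, class B>
    static void __print(const pair<A, B> &p) {
        cerr << '(';
        __print(p.first);
        cerr << ", ";
        __print(p.second);
        cerr << ')';
    }

    // Everything else: print directly if possible, otherwise iterate.
    template <class T>
    static void __print(const T &x) {
        if constexpr (streamable<T>::value) {
            cerr << x;
        } else {
            cerr << '{';
            bool first = true;
            for (const auto &e : x) {
                if (!first) cerr << ", ";
                first = false;
                __print(e);
            }
            cerr << '}';
        }
    }
};
```

Usage would be something like Dbg::__print(x), typically wrapped in a debug macro; putting both overloads inside one struct lets them call each other without forward declarations.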

Can someone help me fix these issues and/or make the code shorter in general? Right now, I think the last one is the best. I can't believe I spent the last 3 hours shortening a piece of code by 5 lines.

For whatever reason, GPT and Claude seem to be unhelpful here. I ran out of access to 4o trying to get it to help me, despite purchasing GPT Plus T_T

By unalive, 2 months ago, In English

Disclaimer: it might have bugs; don't send me death threats if you FST.

I couldn't find a nice dynamic bitset template so I wrote one.

It can be found here.

It has additional functionality compared to std::bitset (you can answer many kinds of range queries on it, for example: "Find the $$$k$$$-th set bit in range $$$[l, r]$$$"); a rough illustration of how such a query can be answered is sketched after the documentation spoiler below.

Some poor documentation
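
Purely for illustration (this is not the author's code), here is how a "find the k-th set bit in [l, r]" query can be answered with popcount, assuming the bitset is stored as an array of 64-bit words:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative only: bit i lives in words[i >> 6]. Returns the position of the
// k-th (1-based) set bit in [l, r], or -1 if there are fewer than k set bits.
long long kth_set_bit(const vector<uint64_t> &words, long long l, long long r, long long k) {
    for (long long w = l >> 6; w <= r >> 6; ++w) {
        uint64_t cur = words[w];
        if (w == l >> 6) cur &= ~0ULL << (l & 63);                 // drop bits below l
        if (w == r >> 6 && (r & 63) != 63)
            cur &= (1ULL << ((r & 63) + 1)) - 1;                   // drop bits above r
        int cnt = __builtin_popcountll(cur);
        if (cnt < k) { k -= cnt; continue; }                       // answer is further right
        while (--k) cur &= cur - 1;                                // strip the k-1 lowest set bits
        return (w << 6) + __builtin_ctzll(cur);                    // index of the k-th one
    }
    return -1;
}
```

A real template would wrap this in a class next to the usual bitwise operations and may use extra structure to avoid the linear scan; the point is just that word-level popcounts make such range queries cheap.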

Efficiency:

Firstly, always use the following pragmas with it:

pragmas

They can reduce runtime by up to 50% (thanks to mr qmk for enlightening me on this).
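
The author's exact list is in the spoiler above; for illustration, the pragma set usually meant in this context looks something like the following (my assumption, not a quote):

```cpp
// Assumed, typical pragma set for bitset-heavy code (the author's exact list is
// in the spoiler above): aggressive optimization plus AVX2/BMI/popcnt so GCC can
// vectorize and speed up the word-wise loops.
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx2,bmi,bmi2,popcnt,lzcnt")
```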

I am too lazy to run any proper benchmarks, but I solved a few problems with it and it was always faster than std::bitset and tr2::dynamic_bitset. Here are some sets of submissions on the same problem with all 3:

1. Using &=, | and >>

  1. My bitset: 284156267
  2. std::bitset: 284156622
  3. tr2::dynamic_bitset: 284156883

Bitset                 Time      Memory
My bitset              765 ms    944 KB
std::bitset            859 ms    1628 KB
tr2::dynamic_bitset    1077 ms   1240 KB

2. Using &=, >>=

Edit: I redid these because the server was apparently under high load at the time of the initial submissions.

  1. My bitset: 284262107
  2. std::bitset: 284277251
  3. tr2::dynamic_bitset: 284267738

Bitset                 Time      Memory
My bitset              343 ms    1124 KB
std::bitset            405 ms    1140 KB
tr2::dynamic_bitset    390 ms    844 KB

So it seems that my bitset is as good as or slightly better than the alternatives in every respect. I have no idea why this is the case, though, as there is nothing in my implementation that seems like it should be particularly faster.

Parting notes:

  1. If you use it and find some bugs, let me know.
  2. If you think it's missing some significant functionality, let me know.

Thanks for reading my blog.

Bitset Waifu

By unalive, 2 months ago, In English

Notice the decline of 3 million last year? OpenAI kidnapped 3 million Chinese kids, and they are serving as the backend for GPT-o1 from a basement.

#FreeChineseKids
