Top Comments
On DilshodbekX: Meme of the day, 35 hours ago
+114

No, this is the meme of the day

+97

Developer of codeforces getting some WA on production

On 1.618034: Jiangly !!, 38 hours ago
+75

Yes. I am bankrupt. Can you offer me some job, Mr. Retard? Thanks.

On deng_wo_ni_xi: What happened……?, 20 hours ago
+57

Users with a Codeforces rating of 1900 or higher can freely edit problem tags, but some immature users (e.g., some primary school students in China who have a rating of 1900 or higher but are only 10 or 11 years old) maliciously abuse this privilege by deleting all tags or adding a large number of irrelevant ones. I think we should implement a version history for problem tags, similar to Zhihu (a Chinese website), and publicly display the tags added by each user. Additionally, we could grant users with a rating above a certain threshold (e.g., 2400) the ability to report malicious tag additions or deletions (with moderation). If a user is reported multiple times, we could ban them from editing tags for one year.

Or, increase the rating threshold to 2400.

Sorry for my poor English.

It would be cool if everyone got their nickname as a title

9 problems, 2 subtasks with one task having a score of 8000, SCARY !!

Another fun experience in this contest was that when I looked at problem H2, it had this notification at the top: "The problem statement has recently been changed. View the changes."

And when I clicked on "View the changes", it showed me that the sample output has changed, even with a nice diff:

Since there was no announcement, my theory is that this was an accidental leak of the reference solution output. What the reference solution seems to do is to just keep moving there and back in the beginning and at the end (probably using the matchings we find in H1), and then each token at some point just moves without stopping from the starting to the ending location. Still not sure how to complete this to a working solution, though :)

Congrats jiangly on becoming jiangly :))

Fun fact: It is still Div. 2 B level problem

Oh ok. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretest" should actually be "weak tests."

On Ali_BBN: Congratulations to jiangly, 37 hours ago
+43

Go and cry.

Just upsolved F2. It is one of the best problems I have seen. On the other hand, I have no clue how the editorial just skips to the following claim:

Let the distance between the $$$i$$$-th block and the nearest preceding 1-block be $$$p$$$. The number of 0s between blocks $$$j$$$ and $$$i$$$ cannot exceed $$$p$$$. There is a restriction: $$$j \geq l_i$$$.

I do get there eventually, but the steps are definitely non-trivial. I decided to write my own solutions for both F1 and F2, explaining the intermediate steps.

F1 Solution
F2 Solution

Any chance finals can be rescheduled to avoid conflict with both European and Asia Pacific ICPC Finals?

Untitled-Export-V4

Tourist -> <-

Congrats jiangly

On too_rusty: Nice, 37 hours ago
+37

world war will wipe out everything before that

On Ali_BBN: Congratulations to jiangly, 36 hours ago
+37

Imagine the number of users in your country being smaller than the number of Iran's grandmasters, and you still talk shit about Iran.

Sorry for the weak pretest in problem B

Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.

Thank you for the contest! The problems were really interesting. Best Codeforces round in a while.

Take a look at the first hack, it is indeed that test so it was added by hacks

B >>>>>>>>>>>>> F1

+27

As a tester, little did I know that I tested a historical round

So who is Jiangqi Dai, jqdai0815, djq_cpp or djq_fpc?

On Ali_BBN: Congratulations to jiangly, 37 hours ago
+25

And everyone hates racists.

On Ali_BBN: Congratulations to jiangly, 37 hours ago
+25

I highly recommend you stop wasting other people's oxygen by doing you know what <3

Can someone explain F2 in detail?

On 1.618034: Jiangly !!, 37 hours ago
+24

absolute cinema this contest has been....Jiangly finally clinched No1...

Thanks to the authors for the amazing problems! I really enjoyed the round!

For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then by induction, among the other vertices there is a matching of size $$$\frac{n + 1}{3} - 1$$$: if it is of color 1 we are done, and otherwise the edges a-c and b-c have to be 1 as well (or we get a 0-color matching). But now we can make another choice for the vertex c, and since a-b is 1 we will still get a-c and b-c to be 1. So every edge from a or b is 1, as well as every edge from c for any c... and finally the whole graph is 1, which is absurd.

I've found this helpful, since knowing the answer gives you the idea of getting one edge from every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.

I loved problem D!

see https://en.algorithmica.org/hpc/compilation/contracts/#arithmetic

tl;dr: bit shifting doesn't work directly for signed integers because of rounding: C++ integer division rounds toward 0 (as always), while an arithmetic right shift rounds toward negative infinity.

Do not use unordered_map in any contest.

Problem C is cool

Screencast of me solving this contest in Rust. No camera as it was blocked by some app

On MikeMirzayanov: MaraTON Challenge 1, 45 hours ago
+20

This competition presented a unique opportunity to explore custom compression techniques tailored to the characteristics of TON blocks. My journey began with an in-depth study of serialization, starting at the root of the cell. From the outset, I noticed inefficiencies in the provided serialization mode 2. This realization marked the beginning of a path toward uncovering optimization opportunities.

Analyzing the provided 01 - Cells.cpp file proved instrumental in understanding the block's underlying structure. At its core, the block was represented as a Directed Acyclic Graph (DAG), where each vertex corresponded to a cell containing a value (hash data) and a type (exotic or general). Recognizing that the DAG structure didn't have to be serialized in its default form, I began exploring how to reconfigure it for better storage efficiency and customized it :)

Initially, the challenge of storing the DAG information seemed daunting. This data, comprising connected cell numbers and their respective counts, accounted for roughly 15% of the total size. However, I observed that sorting the cells by data length revealed an intriguing pattern: the connection counts of adjacent cells in the sorted array were surprisingly continuous. This insight became the cornerstone of my approach to optimizing DAG storage.

To compress the connection data, I adopted a layered strategy: first, I applied RLE to the array of connection counts. This simple yet effective method leveraged the continuity discovered in the sorted data. Then I constructed a Huffman tree to encode the decision points, replacing verbose representations with compact codes.

Although the number of cells rarely exceeded 8000, storing the connection counts required 13-14 bits per entry (14 * 8000 * 4). When multiplied across cells and connections, this quickly became a sizeable portion of the data, and there was room to use Huffman coding. Through this, I achieved significant savings, paving the way for further optimizations.

With the DAG structure compressed, I turned my focus to the cells themselves. To simplify processing, I serialized all cell data into a single sorted sequence (hash data only). This arrangement revealed additional patterns, particularly in how common subarrays appeared across adjacent cells.

I designed a dynamic programming solution. The state variable, dp[i], represented the minimum compression length up to the i-th location, maximizing i - dp[i] independently. At each step, I updated the state using three strategies: direct updating (copy the current data as-is), maximum common subsequence (identify and compress the longest matching subsequence within a fixed window [i - window_size, i)), and LCP (find the best LCP among the latest precnt cells in the LCP arrays).

Through experimentation, I tuned the parameters to maximize efficiency for typical transaction histories, ultimately settling on a window size of 30 and an LCP depth of 3. This method allowed me to compress the hash data efficiently, leveraging both local and global patterns in the dataset.

After completing the custom compression steps, I applied a final layer of improvement. The compressed data was subjected to LZ4, the method suggested in the problem. However, I found that my tailored approach had already eliminated most redundancies, rendering LZ4's impact negligible.

The contest highlighted the importance of understanding the data's structure and tailoring solutions to its specific characteristics. The DAG's redundancies and the unique properties of TON cells required a departure from traditional compression techniques. By focusing on the block's intrinsic patterns, I developed a method that not only compressed the data effectively but also deepened my understanding of specialized compression strategies. https://pastebin.com/65ZUg5ZL

Thank you to the TON Foundation and this community for such an excellent contest. I look forward to applying these insights in future contests and projects.

Thanks for the very nice round!

When trying to solve H2, I realized that my solution for H1 fails on this testcase:

1
4 2
1001
1010
1 2
3 4

Essentially, I was processing the connected components completely independently.

I was not sure if I should resubmit, as we often have pretests equal to systests these days. However, I convinced myself that there should definitely be a case in systests that verifies something that every reference solution needs to have an if for, and resubmitted. It turns out pretests were equal to systests in this problem as well :)

(It seems that all solutions of the top participants on the scoreboard handle this correctly, only 1 out of 12 accepted submissions fails this case)

E: wtf, bruteforce?! Using the facts that a sum of convex functions is convex and that $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ doesn't justify it being E, and I think it would've had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this simple nested loop should be fast when many other loops with the same number of repetitions aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.

Note that you don't need to prove convexity. Think about it as greedy decisions: operations on different $$$i$$$ are independent, and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do from the $$$(k+1)$$$-th on. This gives enough intuition, and then bruteforce goes brrr.

On coderdhanraj: Jiangly Became Tourist?, 34 hours ago
+20

There's just one thing missing...

Spoiler

Problem G, I — amazing! Other problems were great too, I liked everything except A. Thanks for an awesome round!

+20

TheForces orz

As it stands now, it seems that $$$\geq 4$$$ of the top 50 are affected by the clashes... Petr and Egor are both EUC judges (I don't know how stringent the requirements are for being present at the EUC, but it seems at least really unfortunate), while (correct me if I am wrong) tkacper and I are both contestants at the EUC.

On Ali_BBN: Congratulations to jiangly, 36 hours ago
+19

WTF!

+19

I have moved to a country which is exactly 2.5 hours ahead of my home. Luckily for me, this contest happens to fall at my "usual" time back home. I have to participate for old times' sake.

On Ali_BBN: Congratulations to jiangly, 37 hours ago
+18

my contribution: +9 -> -16.

The speed of __builtin_popcount is pretty hardware dependent (and for me, compiler dependent too), so I would not dare to put it through 1e8 loops. This is why I went for the second option below.

That said, if it does not pass, it is easy to change to a pre-computation. So the options are like

  • do the simple thing, -50 and a few minutes if wrong, nothing else if right
  • do the correct thing, down maybe a minute but definitely OK

I think it is a fair and balanced (even interesting) decision. It is very different if you are only allowed 1 submission with no chance to correct it if you TLE.

orz

As a first time tester, I almost had the first comment :(

+17

And I got rank 99 in Round 999 :)

On Rogue_Ronin: Thoughts on DeepSeek R1??, 12 hours ago
+17

I notice that the problems it can solve from your data set are a couple of years old, meaning there's a good chance that the model has those problems in its training data.

Testing it on some newer problems (single shot):

  • Photoshoot For Gorillas (o1 solvable): 302277443 | (AC) (1400)

  • Paint a Strip (not o1 solvable): 302278718 | (WA on test 1) (1200): It admitted in its CoT that it couldn't figure it out, funnily enough

  • Penchick and Desert Rabbit (idk if o1 can solve it): 302279772 | (WA on test 1) (1700): It was confident in its answer this time, looking at the CoT

It looks impressive, and it's nice to see the "behind the scenes" of the COT, but it also has the same flaws as GPT o1, and since o1 is already readily available (paid through ChatGPT, free but janky through Github Models), I'm not sure if this will affect the cheating epidemic to a large degree. It's nice that it's more openly available though, and the ease of access could make it easier for problem setters to test their problems on.

An additional note: for Paint a Strip and Penchick and Desert Rabbit, it took 4 to 5 minutes to respond.

EDIT: Tested it on 2 problems from the USACO December 2024 contest; it could solve It's Mooing Time from Bronze, but not Cake Game from Silver (which was simple enough to be a Bronze question).

Person becomes enlightened version of themself.

So, jiangly will become jiangly

Thanks for this insightful post, it was really helpful and well-written

On 1.618034: Jiangly !!, 37 hours ago
+16

On Ali_BBN: Congratulations to jiangly, 36 hours ago
+16

A stupid 10-year-old kid should be sitting in his house playing with his Barbies instead of coming here and commenting on us.

+16

Why is the monumental codeforces round 1000 at 7am in the definitively best timezone on a weekday??? (did I just piss off a good part of the world? yes and I'm not sorry)

I really wanted to learn about Little John and his galvanized square steel, screws he borrowed from his aunt, and eco-friendly wood veneers

Sadly, I guess I won't be able to.

GLHF to those participating, may bitset waifu bless you.

I agree with you on this part. I was also very hesitant to use __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.

I think these days it's easier to assume the solution is fast enough instead of using the proper idea in many problems. I was thinking N <= 1e6 and M <= 5 would have been a better choice too.

On coderdhanraj: Jiangly Became Tourist?, 33 hours ago
+16

That looks really good

Problem C can be solved (overcomplicated) with 2-sat.

Let's figure out what the array should look like. It should look something like:

$$$0 \ \ 0 \ \ 0 \ \ \texttt{LIAR} \ \ 1 \ \ 1 \ \ 1 \ \ 1 \ \ \texttt{LIAR} \ \ 2 \ \ \texttt{LIAR} \ \ 3 \ \ 3$$$

The algorithm consists of iterating over the array from $$$1$$$ to $$$n$$$. Let's maintain the current variable $$$\texttt{sync}$$$, initially $$$0$$$. An element $$$a_i$$$ is out of sync if it does not equal $$$\texttt{sync}$$$. Naturally, a liar is found nearby and we do $$$\texttt{sync++}$$$. Formally, this means that at least one of $$$i$$$ and $$$(i-1)$$$ is a liar. We can write $$$(p_{i-1} \ \lor \ p_i)$$$. Additionally, we write $$$(\neg p_{i-1} \ \lor \ \neg p_i)$$$, since at least one of them tells the truth. There are a few cases. For example, if $$$a_i > \texttt{sync}+1$$$, then $$$i$$$ is surely a liar. We write down $$$(p_i \ \lor \ p_i)$$$.

A good way to check if a solution exists is to run a standard SCC 2-sat. I personally avoided doing so during the contest and chose to do tedious casework. Afterward, when you are sure at least one valid solution exists, find adjacent clause components. If there are intersections or singular clauses like $$$(p_i \ \lor \ p_i)$$$, the component has exactly $$$1$$$ way to be filled with LIARs. If the component has no intersections, such as in $$$(4,5), (6,7), (8,9)$$$, it has $$$k+1$$$ ways to be filled with LIARs, where $$$k$$$ is the total number of clauses in the component. Let me show an example:

$$$(\textbf{*},\ \cdot \ ), (\textbf{*},\ \cdot\ ), (\textbf{*},\ \cdot\ )$$$
$$$(\textbf{*}, \ \cdot\ ), (\textbf{*},\ \cdot\ ), (\ \cdot\ ,\textbf{*})$$$
$$$(\textbf{*},\ \cdot\ ), (\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*})$$$
$$$(\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*})$$$

Since the components are independent, just multiply the results of each.

Here is my submission: 302126337

Excited to see a contest hosted by my alma mater! Best of luck to all!

Congratulations jiangly for crossing 4000 rating.

If you could not understand the editorial solution for C:

Let dp[i][0] and dp[i][1] denote the number of valid sequences ending at i, where the i-th person is honest or a liar, respectively.

The (i-1)-th and i-th person can be:

  1. honest-honest: the condition is that a[i-1] equals a[i], since both tell the truth.

  2. honest-liar: no condition is needed in this case.

  3. liar-honest: the (i-2)-th person must be honest (liars cannot be adjacent), so the condition is a[i-2] + 1 = a[i].

Overall, dp[n-1][0] + dp[n-1][1] gives us the answer.

On 1.618034: Jiangly !!, 24 hours ago
+15

I think that was his girlfriend!

score distribution ?

I failed on a = {1,5,5,6}

On Ali_BBN: Congratulations to jiangly, 36 hours ago
+14

Congratulations to jiangly!

On 1.618034: Jiangly !!, 35 hours ago
+14

talk like yoda, you do

+13

as a tester, we needed more testers

Perhaps more than half of the top 50 participants will be under 18 :(

he could still be a liar even if what he says is correct

+13

Let's speak English.

I think so.

Putting the code into Compiler Explorer, you can see that the assembly generated when using count() as a boolean has one fewer std::_Rb_tree_increment(std::_Rb_tree_node_base const*) loop.

Meanwhile, the modified one that uses count(a) == 10 has:

        call    std::_Rb_tree_increment(std::_Rb_tree_node_base const*)
        add     rbp, 1
        mov     rdi, rax
        cmp     r12, rax
        jne     .L71
        cmp     r13, r14
        mov     rax, r14
        cmovge  rax, r13
        cmp     rbp, 10
        cmove   r13, rax
        jmp     .L75

which seems to be the body of count(), as it has a cmp rbp, 10 (checking whether the count == 10).

+13

I didn't BOOK MY SEATS for taking classes for USACO (BRONZE,SILVER,GOLD,PLATINUM) our 2025 batch is open now, so I will fail.

Doing CP is the break.

On 1.618034: Jiangly !!, 37 hours ago
+12

Double clutch!

Some highlights of the contest.

Fuck off with condescending statements like "your claims are bogus" or "I'm sorry for you", ok? Nobody appreciates passive aggressiveness. Stay direct and to the point.

The exact number of operations isn't 10^8. My code is $$$O(N 2^M)$$$ with 2e8 loop executions and 7 instructions per loop, like this:

	movl	(%rdx), %esi
	andl	%edi, %esi
	cmpl	%esi, %ecx
	cmovg	%esi, %ecx
	addq	$4, %rdx
	cmpq	%rdx, %r8
	jne	.L115

The code you link has 36 instructions per equivalent loop (not counting labels and no-ops). In addition, one of them on my system (likely different from CF) is call __popcountdi2@PLT. We're at 7e9 for the exact number of operations, a bit over an order of magnitude above the number of loop executions, which is as expected. That's far above even your rule of thumb.

Not all operations are equal. CPI matters, branch prediction matters, avoiding branching with proper assembly matters. AND is one of the fastest, but it's only one among many. Compare, branch, and memory access are the important ones, not AND. Then there's a whole other factor: pipelining.

Correctly predicting that the code you linked would fit in TL is nigh impossible as the exact number of operations is well above even your optimistic rule of thumb and there are some slow operations. Predicting that my code would fit is easier since it's intentionally written to be more constant-optimal, but still unreliable when compared to other problems, and much better to just write and measure. That's what I did.

The issue is that this bruteforce is intended to pass, not that it can pass. Unintended real-time-efficient alternatives to an optimal algorithm happen often enough but this is a completely different situation.

+12

My screencast (in Rust)

Score Distribution ??

CodeTon Round 9

Seems like jiangly 4K

speedforces

So many solutions got FST on B :/

Justice for Java users 🫠

The score distribution will be announced soon. When is 'soon'?

I swear I might just be getting dumber as more contests come and go...

On Ali_BBN: Congratulations to jiangly, 37 hours ago
+10

thank you so much :)))

Interesting round!

Question about the onsite: is it correct that it is okay to just participate in this round and then wait? I couldn't find any registration on the IAEPC website.

+10

Special but after plag check you lose it :(

+10

mine?

+10

70

On doraimota: How To Get More Contribution, 21 hour(s) ago
+10
  1. Not Posting This Blog
On 1.618034: Jiangly !!, 37 hours ago
+9

Stop crying bro

Auto comment: topic has been updated by jqdai0815 (previous revision, new revision, compare).

Thank you for the round! And could you please clarify what is included under 'travel expenses'? Does it cover round-trip flights, accommodation, meals, and other related costs?

On too_rusty: Nice, 36 hours ago
+9

Who knows, maybe with the help of AGI (if one was created), we would be able to participate in div.1 + div.2 contests daily. Then, 10k would be accomplished much faster.

On coderdhanraj: Jiangly Became Tourist?, 21 hour(s) ago
+9

I think everyone should have a distinct rank attributed only to them.

So, say Tourist reaches a 4000+ rating again; his rank will be Tourist.

Say a new LGM achieves a 4000+ rating; their rank will be their username.

Because whoever reaches that rating should have recognition and tribute.

On deng_wo_ni_xi: What happened……?, 19 hours ago
+9

student in China, their rating is greater or equal than 1900, But they're only 10 or 11 years old

W-what..? So am I stupid for enrolling in primary school in China?

On deng_wo_ni_xi: What happened……?, 16 hours ago
+9

I tried to fix the tags, but after I deleted the abnormal tags, they quickly returned.

W

On fractal: IZhO 2025, 2 days ago
+8

Same for me. C also has problems: selecting the side with fewer vertices in the bipartite graph to be the "maximal matching" passes on the tests.

On fractal: IZhO 2025, 2 days ago
+8

Wow, that is interesting. Also, congrats on the win!

On fractal: IZhO 2025, 2 days ago
+8

Thanks!

Auto comment: topic has been updated by TyroWhizz (previous revision, new revision, compare).