So today I heard some news:
Personally I would like these tests to be included in the main tests, but that's not what I want to bring up now. These days I often find cases where the constraints and tests seem unreasonable.
Example 1. 2056C - Palindromic Subsequences consisted of a total of 12 tests. Why was there a need for so many tests, when a single test can include every $$$n$$$? Even if we want a maximal test, one additional test with $$$t=n=100$$$ is enough. The only reason I can think of for it having so many tests is that the Polygon rules allowed it.
Example 2. If you remember this problem: 2034A - King Keykhosrow's Mystery, some naive bruteforce solutions written in C++ with int passed, but the ones that used long long (a few luckily passed) or slower languages couldn't. As stated in https://codeforces.me/blog/entry/136579?#comment-1222858, it turned out that the setters focused only on following the Polygon rules and didn't seriously consider whether a solution doing $$$2 \cdot 10^8$$$ operations should pass or not.
I don't get why 2A problems should keep their constraints and test sizes absolutely tiny when the core part of every viable solution runs in under 10 ms, while they all have something like 4 pretests, which surely takes more judging time than multiplying $$$t$$$ by 10 and using just 2 pretests. Why is it based on the size of the input rather than on the intended solution's expected running time?
I also don't know whether adding a few more tests to a C problem is really that risky. To me, the decision should depend much more on the number of participants, the expected difficulty of the earlier problems, and how likely it is that participants would submit a wrong solution after confirming the examples (so that the number of submissions increases). I'm not sure I agree with this pretests=systests rule (is it even publicly stated? this guide doesn't mention it), but it is still there, and we can use systests if needed.
The rules exist because they should make sense in almost all cases. But in many contests, I find them a poor fit for some problems. It doesn't really make sense to apply a fixed constraint across different problems, because what the intended solution and the not-intended-to-pass solutions rely on can differ heavily from problem to problem. What the tests should focus on, and how many tests are actually needed to judge the solutions properly, also differ. So does the expected running time: the same constraints with the same intended time complexity can lead to anywhere from a mere 100 ms to 1000 ms depending on the constants involved in the solution. Judging can't rely solely on the "limit of variables" and the "number of tests."
These issues come from prioritizing the Polygon rules above everything else. I mean, yes, rules should be followed. But if the rules often make no sense, maybe we should make them less strict, or restate them as guidelines so that we don't have to follow them blindly.
Max test for C is $$$n = 2 \cdot 10^5$$$...
Sorry, I linked to a wrong problem. I was referring to https://codeforces.me/contest/2056/problem/C . Fixed now.