Why?
Weak pretests discourage guessing.
There are many contestants who only have a faint idea of why their solution works at all. The problem style of recent years has contributed to this. If the solution boils down to "check this elegant condition, answer YES or NO based on that", then it is very much possible to solve problems by guessing, especially with full feedback.
Remember 1375C - Element Extermination? The solution is to check if $$$a_1 < a_n$$$. Tons of people under the contest announcement were excited to explain that this is how to solve the problem. Only a few could explain why it worked.
This is also a situation where it is easy for a high-rated author or coordinator to deceive themselves. If you already know such a solution, then it may seem very easy. The implementation is easy and if there is a clean observation you have to check, making that observation seems easy in hindsight. And the large number of solves in the contest "confirms" that it is easy. But in fact, the problem is only easy to guess, not easy to solve.
In the good old days it was not seen as a big violation if pretests were weak. It was normal. Right now weak pretests cause an uproar because people are not expecting it. If weak pretests were the norm, it would not be such an issue.
Make weak pretests the norm again. It will improve the quality of standings. Thanks.
I also miss the times when it was the norm that pretests do not cover everything. In addition to the reasons in the post above, it boosts the overall importance of local testing, which I see as a Good Thing.
I agree, but in that case it's better to judge all submissions after the contest.
No feedback (if that's what you mean) is bad for different reasons; I still want to fail pretests sometimes, or even most of the time. I just want the danger of failing system tests to remain real, for the reasons stated above. If you want to experience no feedback, you can try Topcoder: very different dynamic. If I understand correctly, the Codeforces system started as a way to have the best of both worlds.
I didn't know about that, thanks. But aren't weak pretests the same as no pretests at all? If you can't trust the pretests, you have to think about every case yourself, just as you would with no pretests. I think you should choose either no pretests or strong pretests. Sorry for my bad English.
Congratulations, you now understand what it means "to solve the problem".
That's why I prefer no pretests at all.
I think I should specify that I prefer no pretests (during the contest) with all submissions judged after the contest, not simply no tests at all.
Well, I acknowledge that weak pretests don't necessarily mean bad contests. However, with the current system of hacks, it's complete garbage. Assuming all my solutions pass, this time I failed to win (and lost $128) only because of the luck of the room. I don't want such situations to be the norm.
Might as well judge all solutions after the contest is over. No need for pretests.
I agree that hacking introduces variance as a result of the room system. The other issue I see (which I've written about on numerous occasions) is that it's unclear whether or not pretests will be weak prior to the competition. This means that the optimal strategy is determined by an unobserved decision of the authors: if pretests are weak, the correct strategy is to test your code yourself before submitting, while if pretests are strong, the correct strategy is to submit and go for proof by AC.
Subjectively, I prefer contests without pretests/hacking since looking for edge cases is not fun or interesting to me. However, if Codeforces does want to move towards fewer/weaker pretests, here's an idea that's a bit less radical than it sounds: make all pretests public (e.g. by providing a downloadable zip file containing all pretests). This way, the strength of pretests becomes public information and contestants no longer have to guess whether pretests include e.g. maximal cases, an edge case they're worried about, etc. (For what it's worth, this is essentially the approach taken by TopCoder: each problem has fewer than 10 pretests without multitesting, and those pretests are public.)
I like the idea (it sure has its own drawbacks because of bad actors, but well, the contest is intended for honest participants anyway).
An alternative is to spell out in the problem statement which subset of constraints is covered by pretests, and then the authors cover them to the best of their ability. Unfortunately, this would limit the range of possible problems, and make it generally harder to prepare problems.
Some anecdotal evidence.
In the recent contest, I hacked 7 solutions 50+ minutes before the contest ended.
Out of these, only 2 tried to submit again, both unsuccessfully.
Maybe that's evidence that people generally have no idea why their code works? :P
Perhaps the real reason is that people just forgot that hacking exists. :(
Weak pretests also force contestants to check their solutions before submitting, which makes them stronger.
I agree that weak pretests are not always bad. But in this contest, hacking more solutions gave more points than solving an additional problem, because F2 was worth only 1000 points. It was all about whether you dedicate your time to solving one more problem or to hacking.
I strongly disagree with this claim, because of the following opinions:
Personally, having to guess the strength of the pretests during the coding phase, without seeing them, is nonsense.
If I could see the pretests when locking a problem, if the pretests were just the samples, or if the hacking phase were separated from the coding phase, then I could agree with the claim that "weak pretests are a good thing".
Maybe dynamic scoring for hacks, based on the proportion of FSTs (and/or the number of successful hacks)? At least, hacks on a free Hackforces problem should not be as valuable as hacks against strong pretests.
Successful hacks: $$$S$$$
Unsuccessful hacks: $$$U$$$
$$$D=S-\frac{U}{2}$$$
If $$$D \le 0$$$, the hacker gets $$$100 \times D$$$ points (a penalty);
otherwise, the hacker gets $$$2T \times (1 - 2^{-D})$$$ points, where $$$T=100$$$ or $$$200$$$ per problem.
Could this be a solution? The points are capped at $$$2T$$$, every positively-scored hacker ($$$U < 2S$$$) gets a positive bonus, and a hacker with one successful hack and zero unsuccessful ones gains exactly $$$T$$$ points.
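The scoring proposal above can be sketched in code. This is only a minimal illustration of the commenter's formula; `hack_bonus` is a hypothetical helper name, not part of any real Codeforces scoring system:

```python
# Sketch of the proposed hack-bonus formula from the comment above.
# S = successful hacks, U = unsuccessful hacks, D = S - U/2,
# T = the per-problem hack value (100 or 200 in the proposal).

def hack_bonus(successful: int, unsuccessful: int, T: int = 100) -> float:
    """Return the proposed hack score for one problem."""
    D = successful - unsuccessful / 2
    if D <= 0:
        return 100 * D  # non-positive: a penalty, as in the current system
    return 2 * T * (1 - 2 ** (-D))  # bonus, capped below 2*T as D grows

# One successful hack and no failed ones gains exactly T points:
print(hack_bonus(1, 0, T=100))  # 100.0
```

Note the cap: no matter how many hacks succeed, the bonus stays strictly below $$$2T$$$, so hacking cannot dominate problem-solving on the scoreboard.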
Maybe this idea is good. But then the current system of hacks is definitely bad. For example, today some people made 7+ successful hacks and still did not pass the system tests themselves. So weak pretests do not mean that a person who "passed" the problem understands it better.
But it's the charm of programming contests that you can submit an answer on pure intuition, without any proof. If you think an answer should be accepted only if the participant can prove its correctness, you should take part in math contests instead of Codeforces.
Yeah, you can submit just by intuition and without any proof, and it's totally reasonable that once in a while you will be absolutely screwed by it.
I agree with you. While proof is always championed, instinct is also important.