This weekend, two contests in a row became unrated due to technical issues: one because of a long queue, and one because the server kept returning 500/502/504 errors. While I do not understand the technical details, I am fairly sure these two failures are related.
Similar incidents have happened before. For example, just before Codeforces Round 639 (Div. 2), a technical issue was suspected and the round was rescheduled because of it. However, no one made sure the issue was actually fixed, and when the contest finally took place, it was ruined by a long queue. In both cases, there were known technical issues before the contest started, but the contest went ahead anyway.
I propose that whenever there is good reason to believe a round is likely to be affected by a technical problem (for example, after a contest becomes unrated due to queue/server problems, which suggests the next contest will likely face the same problem), a testing round must take place before the actual round.
Maybe we can also have regularly scheduled testing rounds (such as once every week). These rounds can be hidden from the contest list or deleted entirely after they end, to reduce clutter.
Agree
Maybe we can also have regularly scheduled testing rounds (such as once every week)
This is interesting.
Also, to reduce the load on the server, all other services, such as solving problems from the problemset or watching videos on EDU, could be shut down during the two hours of the contest.
There could be a limit on how many submissions a user can make per minute, so that people don't blindly spam the submit button after making tiny changes that do no good and only lengthen the queue.
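For illustration, here is a minimal sketch of how such a per-user limit could work, as a sliding-window check. Everything in it (the limit of 3 per minute, the `allow_submission` helper) is a made-up example, not how the Codeforces judge actually works:

```python
import time
from collections import defaultdict, deque

# Hypothetical numbers; the real judge's policy, if any, is unknown.
MAX_PER_MINUTE = 3
WINDOW_SECONDS = 60

# user handle -> timestamps of that user's recent submissions
_recent = defaultdict(deque)

def allow_submission(handle: str, now: float = None) -> bool:
    """Sliding-window check: allow at most MAX_PER_MINUTE submissions
    per user within any WINDOW_SECONDS span."""
    now = time.time() if now is None else now
    window = _recent[handle]
    # Drop timestamps that have fallen out of the window.
    while window and now - window[0] >= WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_PER_MINUTE:
        return False  # reject: the user must wait before resubmitting
    window.append(now)
    return True
```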
Not true. In the last contest I coded a problem and, knowing 100% my algorithm was on point, still couldn't find my mistake after two minutes of searching. Only after submitting again did I realize I needed two small changes and an endl. Sometimes there is nothing you can do besides submitting and checking whether it ACs or not.
Yes, I completely agree with you. Since you are an experienced coder, it is easy for you to catch those mistakes in an attempt or two. But there are people who don't have much idea of what they are doing and just try different things like a shot in the dark, hoping something will work out.
A good testing round should be mandatory. A rated round with a sufficiently low rating cap could be used for testing.
Is it possible to "rerun" an old round (simulate resubmitting all of its solutions and queuing them up in the judge) programmatically?
This wouldn't necessarily catch every kind of technical issue, but it could be done without any human intervention (and would produce a similar player/submission count to an actual round).
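A rough sketch of what that replay could look like: `contest.status` is a real, documented Codeforces API method for listing a contest's submissions, but the re-queue step is purely hypothetical, since the judge's internal queue interface is not public:

```python
import requests

def fetch_submissions(contest_id: int):
    """List the submissions of a finished contest via the public
    Codeforces API (for big rounds, page through with from/count)."""
    url = "https://codeforces.com/api/contest.status"
    resp = requests.get(url, params={"contestId": contest_id}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    if data["status"] != "OK":
        raise RuntimeError(data.get("comment", "Codeforces API error"))
    return data["result"]

def requeue_submission(submission_id: int):
    """Stand-in for an internal re-judge hook; the real judge queue is
    not public, so this sketch only logs what it would do."""
    print(f"would re-queue submission {submission_id}")

def replay_round(contest_id: int):
    """Feed every submission of an old round back into the judge queue."""
    for sub in fetch_submissions(contest_id):
        requeue_submission(sub["id"])
```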
That sounds like a great idea! But what about problems that are triggered by large numbers of accesses to the website, not by large numbers of submissions?
That could be simulated as well (maybe renting a large number of machines for a short time, idk exactly). Although things have a tendency to work in testing but not in real life :P
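To make the "large number of accesses" idea concrete, here is a toy load-generation sketch. The target URL and page list are hypothetical (any real test should hit a staging copy, never the live site), and serious load testing would use dedicated tools spread across many rented machines:

```python
import concurrent.futures
import requests

# Hypothetical target: a staging copy of the site, not production.
BASE_URL = "https://staging.example.com"
PAGES = ["/contests", "/problemset", "/api/contest.list"]

def hit(path: str) -> int:
    """Fetch one page and return its HTTP status code."""
    return requests.get(BASE_URL + path, timeout=10).status_code

def run_load(total_requests: int = 1000, workers: int = 50):
    """Fire total_requests GETs from a worker pool and count how many
    come back as server errors (the 500/502/504 family)."""
    paths = (PAGES[i % len(PAGES)] for i in range(total_requests))
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        codes = list(pool.map(hit, paths))
    errors = sum(code >= 500 for code in codes)
    print(f"{errors}/{total_requests} requests returned 5xx")

if __name__ == "__main__":
    run_load()
```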
Yeah, it wouldn't catch everything. On the other hand, I don't think Testing Rounds would simulate the same load as a regular round anyway, because I imagine they would be less popular.
Basically, Testing Rounds sound good for catching edge cases and logistical issues, especially after a system update, an outage, or some other "high-risk" period, but running a testing round every week sounds too much like using human labor for tests that could be automated.
The two rounds had different issues though.
This means another Div4 round!! XD