djm03178's blog

By djm03178, 3 weeks ago, In English

It always makes me sad to see almost-obvious alt accounts in the "congratulations to the winners" section of contest announcement posts, especially in Div. 2 contests where the trusted participant rules do not apply. Of course, some of the "grey participants" may be genuinely honest accounts, and in that case I really want to congratulate them. But because so many of them are likely to be alts, I end up distrusting most of the mentioned handles.

I think the most effective way to prevent alts is to eliminate the motivation to create them in the first place, and winning a lower division is probably one of the strongest motivations. Admittedly, not many people can realistically aim to win, but because of these alts, the people who actually deserve the top places can't reach them at all.

It's sad that the honest winners would get less recognition if we stopped mentioning them, but I'd rather give them the place they deserve in the standings table (which also helps many, many top-100 participants) than watch alts being praised in the official announcement and grow biased into thinking that every winner could be an alt.


By djm03178, 4 weeks ago, In English

UPD: I'm bringing this up again because I think it is a serious issue that has been bothering us for months, and I really want a proper explanation from the admins. Please, MikeMirzayanov, don't just overlook this. It's causing a huge inconvenience for many normal users.


I don't know how long it has been like this, but it seems Codeforces now has a special delay policy just for submission pages.

For example, try opening the pages of submissions 286905356, 286913842, 286925333, 286943910 and 287014608 in 5 separate tabs (one per second). If it's not just me, it will go like this:

  • 1st tab: opens immediately
  • 2nd tab: waits for 10 seconds and then opens
  • 3rd tab: waits for 20 seconds and then opens
  • 4th, 5th tab: immediately goes to the "Oops! Probably Codeforces can't be reached right now or your Internet connection is broken." page

If you open the same submission page multiple times, the only difference is that all of the tabs eventually open, one after another with 10-second delays between them, instead of going to the error page.

I honestly don't see the purpose of this heavy-handed policy. Does opening a few submission pages really hurt the server that much? Of course, I may be a bit biased because it's extra stressful to run into this while trying to hack hundreds of submissions, but it's annoying for other uses, too.

For example, this applies not only to full submission pages but also to the submission view in popup windows. Try clicking a submission number in my submission list: the popup appears immediately. But if you press the '#' button at the top of the popup, you get that 10-second delay. Then if you press the 'Click to see test details' button at the bottom of the page, you get another 10-second delay! Furthermore, if you're trying to hack a submission, opening the hacking page is delayed as well. And if you accidentally close a tab (or a popup) and open it again, you get another 10 seconds of delay.

Why do we need to wait for every consecutive action involving submissions? Shouldn't this only block real spam, such as automatically scraping every submission?


By djm03178, 3 months ago, In English

This is a tutorial on how the Codeforces scoring system works, along with some basic strategies around it. I will only cover rated rounds here; note that unrated contests may use other types of scoring.

Please do share this blog when you see questions about it.

Scoring types

There are two different types of scoring system:

  • Standard (Div. 1 / Div. 2, Div. 1 + Div. 2)
  • Extended ICPC (Educational, Div. 3, Div. 4)

The standard scoring (this is not a formal name, but I don't know if one exists) is Codeforces' own system, where each participant's rank is determined solely by their final score. Extended ICPC, on the other hand, literally uses the ICPC rules for calculating ranks (number of solved problems and penalty), with slight modifications.

Standard

Contests using this scoring system have a pre-determined initial score for each problem. 'Initial' means you get that score if you solve the problem in the $$$0$$$-th minute (before exactly 1 minute has passed). Each problem's score, however, decreases every minute.

Basic formula

Let's start with the formula. In a contest with a duration of $$$d$$$ minutes, the score you get for solving a problem with initial score $$$x$$$ at the $$$t$$$-th minute with $$$w$$$ incorrect submissions is:

$$$\max(0.3 \cdot x, x - \lfloor \cfrac{120x \cdot t}{250d} \rfloor - 50w)$$$
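
For a quick sanity check, here is a tiny snippet that evaluates the formula above directly. It's a minimal sketch: it assumes the usual initial scores (multiples of $$$250$$$, so that $$$0.3 \cdot x$$$ is an exact integer) and is meant to show how the formula behaves rather than to mirror Codeforces' exact implementation.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// Score for a problem with initial score x, solved at minute t with w incorrect
// submissions, in a contest of d minutes, following the formula above.
ll problem_score(ll x, ll t, ll w, ll d)
{
    ll decayed = x - (120 * x * t) / (250 * d) - 50 * w;
    ll floor_score = 3 * x / 10; // 0.3 * x, exact for the usual initial scores
    return max(floor_score, decayed);
}

int main()
{
    // Example: a 1000-point problem solved at 00:30 with one wrong attempt
    // in a 120-minute round: 1000 - 120 - 50 = 830.
    cout << problem_score(1000, 30, 1, 120) << '\n';
}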

There are some things that need detailed explanation here:

  • Each $$$x$$$ is always announced in the announcement blog, at least a few hours before the contest starts. If you see 'Scoring distribution: $$$500$$$ — $$$1000$$$ — ...' in the announcement, it means that problem A will have a score of $$$500$$$, problem B will have a score of $$$1000$$$, and so on.
  • The rate of decrease per minute is determined by both the problem's initial score and the contest duration. With $$$d=120$$$, a $$$500$$$-scored problem loses $$$2$$$ points per minute, a $$$1000$$$-scored problem loses $$$4$$$, a $$$1750$$$-scored problem loses $$$7$$$, and so on. With $$$d=180$$$, a $$$500$$$-scored problem loses $$$4/3$$$ points per minute on average: because of the floor, the total deduction is $$$1$$$ point at $$$t=1$$$, $$$2$$$ points at $$$t=2$$$, but $$$4$$$ points at $$$t=3$$$.
  • $$$t$$$ is the submission time truncated to whole minutes. For example, 00:02:59 counts the same as 00:02:00; both give $$$t=2$$$.
  • $$$w$$$ is the number of incorrect submissions before your first accepted submission. Submissions made after the first 'Accepted' one do not count[1]. It also does NOT include submissions that failed on (pre)test $$$1$$$[2] or got a compilation error. 'Hacked' and 'Skipped' do count as incorrect submissions. Therefore, unless you found an obvious mistake in your 'Pretests passed' submission, you're discouraged from resubmitting: getting another 'Pretests passed' will mark the previous one as 'Skipped' (so $$$w$$$ increases), and $$$t$$$ increases as well. I'll explain more about pretests in the next section.
  • No matter how late you were, or how many wrong submissions you made, the score never goes below $$$0.3 \cdot x$$$ as long as you have at least one 'Accepted' verdict on it.
  • $$$d$$$ is fixed when the contest starts; i.e. it does not change if the round is extended during the contest.

Pretests and system test

During these rounds, your submission is judged only on a subset of tests, called pretests. If you pass all pretests, you get the 'Pretests passed' verdict. This does NOT guarantee that your submission is correct[3], but the standings preliminarily show you as having scored on that problem. After the round, all 'Pretests passed' submissions are rejudged on the full test set; this phase is called system testing. If your submission also passes all tests there, it gets the 'Accepted' verdict and your score becomes final.

Hacks

You're assigned to a room with a few dozen participants when the contest starts. During the contest, you can lock problems on which you got 'Pretests passed' and try hacking other participants' solutions in your room. Hacking means crafting an input on which another submission is likely to get an incorrect verdict (wrong answer, time limit exceeded, etc.). Locking means you cannot resubmit on that problem again. For each successful hacking attempt you get $$$100$$$ points; for each unsuccessful hacking attempt you lose $$$50$$$ points. Other verdicts, such as 'Invalid input', 'Generator crashed', 'Ignored', or 'Unexpected verdict', count as neither. This score is independent of the basic formula, and there is no limit to how many points you can gain or lose from hacks.

After the contest, Candidate Masters and above can try uphacking, but this does NOT affect anything regarding the scores and the verdict will remain as 'Accepted' (along with the hacked mark)[4].

Subtasks

Sometimes a problem has subtasks, typically named like 'C1', 'C2'. They can either be versions where one is strictly easier than the other, or slightly different problems with many similarities. Being subtasks doesn't affect the scoring in any way[5]; they work just the same as two separate problems. Their scores, however, are usually stated in parentheses in the announcement, like $$$(1250 - 1000)$$$, so you can tell that the problem actually consists of two subtasks worth $$$1250$$$ and $$$1000$$$ points.

Strategies

Let's say a problem with score $$$x$$$ takes you $$$t$$$ minutes to solve on its own. Then, with the most simplified version of the formula[6], the optimal strategy is to solve problems in decreasing order of $$$\cfrac{x}{t}$$$. For example, if you need $$$10$$$ minutes for a $$$500$$$-point problem and $$$30$$$ minutes for a $$$1000$$$-point problem, solving the $$$500$$$-point one first is better. If you can solve the $$$1000$$$-point problem in $$$15$$$ minutes, then solving it first is better. In practice, though, a problem worth twice the points usually takes far more than twice the time, so the recommended order is simply to start with the easiest problem and work your way up[7].

However, you may also want to keep a few more things in mind:

  • If you're left with too little time to solve all $$$n$$$ remaining problems but you think you can solve $$$n-1$$$ of them, you can go for the harder (higher-scored) ones.
  • If you don't make a single submission during the round, you're unrated, so another viable strategy is to read the harder problems first and see whether you can solve them. If you're lucky, you may get a huge head start by solving a hard problem very early.
  • For subtasks, sometimes the easier subtask is significantly easier than the harder one, and after solving the easier one the harder one may not be worth much anymore. In that case you can skip the harder one for now and come back to it after solving other, more rewarding problems first.

Extended ICPC

ICPC rules are simpler. There are two factors that determine one's rank:

  • Number of solved problems (the more, the better)
  • Penalty (the less, the better)

The number of solved problems always has higher priority. For example, solving $$$4$$$ problems with $$$1000$$$ penalty is better than solving $$$3$$$ problems with $$$10$$$ penalty. There are no predetermined scores in ICPC rules, and all problems have an equal effect on the rank.

The penalty is the sum of the following formula over all solved problems. An unsolved problem contributes NO penalty, no matter how many attempts you made on it. Let $$$t$$$ be the time in minutes of the first 'Accepted' submission and $$$w$$$ the number of incorrect submissions before it. Then that problem's penalty is simply $$$t + 10w$$$. Note that submissions after the first 'Accepted' one do NOT count. You may notice that the penalty per incorrect submission is $$$10$$$ instead of the $$$20$$$ used in actual ICPC contests; this is because Codeforces rounds are shorter and have fewer problems than real ICPC contests.
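
As a small illustration with hypothetical numbers, here is the whole penalty computation in code:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    // Hypothetical participant: solves at minutes 5, 23 and 71, with 0, 1 and 2
    // incorrect submissions before the first accepted one on each problem.
    vector<pair<int, int>> solved = {{5, 0}, {23, 1}, {71, 2}};
    int penalty = 0;
    for (auto [t, w] : solved)
        penalty += t + 10 * w; // t + 10w per solved problem
    cout << penalty << '\n';   // 5 + 33 + 91 = 129
}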

The best strategy in these rounds is simply to solve from the easiest problem to the hardest. There are no alternatives. However, note that getting hacked or failing the system test on an easier problem is far more fatal than failing on a harder one, because of how the penalty is calculated.

Another thing to note is that in these rounds you can make multiple 'Accepted' submissions, and the previous ones do not get skipped. Therefore, if you have the slightest doubt about your previous submission, you can resubmit as many times as you want.

Open hack

Unlike standard rounds, extended ICPC rounds have no rooms and no in-contest hacks. Instead, there is an additional 12-hour phase after the contest called open hacking. In this phase, anyone (including non-participants) can try to hack any 'Accepted' submission any number of times, with no extra points or penalty involved. When a hacking attempt succeeds, the hacked submission gets the 'Hacked' verdict and the participant's rank is recalculated accordingly: either the problem becomes unsolved (if there is no other 'Accepted' submission), or the hacked submission turns into an incorrect-submission penalty (if the participant has a later 'Accepted' submission), or nothing changes (if they have an earlier 'Accepted' submission).

After this phase, all successful hack tests are added to the main tests and all 'Accepted' submissions are rejudged; only the ones that pass all of these tests remain 'Accepted'.

How to read & use the scoreboard

In both types of rounds we see very similar scoreboards, and they indeed work very similarly. Let's take a look at an example of a standard round first.

Firstly, the '=' column is the participant's total score, which includes both problem scores and hack points. The '*' column shows the number of hacks: '+' means successful hacks and '-' means unsuccessful ones.

For the numbers in the problem columns:

  • Green / blue positive number: This is the score the participant got from the problem. During the contest the numbers are shown in blue, which means the score is preliminary (from a 'Pretests passed' verdict) and not final. Green means it passed system tests and is final. The time of the correct submission is also shown.
  • Negative numbers: They represent the number of wrong submissions made so far by a participant who hasn't gotten an 'Accepted' or 'Pretests passed' verdict yet; they are NOT part of the score. Having "-1" in a problem cell does NOT mean the score is decreased by 1.
    • Grey number: All of that participant's submissions so far have failed on pretests.
    • Bold red number: It means either "failed system test" (FST for short) or "hacked and locked". In both cases the participant cannot resubmit on that problem, so the verdict is final[8].
    • Normal red number: This can only mean "hacked and not locked", i.e. the participant can still resubmit to that problem before the contest ends.

If you double-click a problem cell, you can see that participant's detailed submission history for the problem. This is also how you open the hacking interface. If you want to see all submissions of a participant, double-click the cell containing their name (not shown in this image). On mobile, a single tap works instead.

In extended ICPC rounds, it looks a little different.

The '=' column now means the number of solved problems, and there is an additional column named 'Penalty', which is the penalty explained above. '*' still shows hacks, but they have nothing to do with the scores or ranks in extended ICPC rounds.

For the numbers in the problem columns, we no longer have the scores, but only a '+' or '-' sign with the number of incorrect submissions.

  • '+': This means that the participant has at least one correct submission. The number after this sign is the number of incorrect submissions before their first correct submission; having no number means $$$0$$$ incorrect submissions. The timestamp of the first correct submission is shown here as well.
  • '-': This means that the participant only has incorrect submissions and the following number is the number of incorrect submissions. There are no different indications for 'Hacked' or "failing on pretest / system test"[9].

Double-clicking a cell shows the same detailed submission history. For some reason, it doesn't seem possible to open it on mobile for these rounds.

Notes

[1] Normally there will be only one accepted submission, but you can make multiple of them (I don't think this was intended to happen) if you submit another code before you get 'Pretests passed' on the previous submission.

[2] Even if your submission fails on the example tests, if it's not the very first one (i.e. pretest 2 or later), you still get the incorrect submission penalty.

[3] However, the recent trend is to have strong pretests, so in most cases it will also pass the system test. There are always exceptions, though.

[4] Submissions made after the contest (including virtual participation) ARE affected, though: they get the 'Hacked' verdict.

[5] Usually hacks are allowed only when you solve both subtasks, but that's another thing.

[6] ... that does not consider incorrect submission penalty and lower limit of the score, etc.

[7] This is just the current trend; if the scoring distribution becomes more exponential later, the strategy can change as well.

[8] In virtual contests, however, this mark appears after failing on any test, but the participant can still resubmit.

[9] There are no 'pretests' in extended ICPC rounds; they are just called 'tests', although the system works much like having pretests in the sense that there is still a system test phase.


By djm03178, 3 months ago, In English

It is already well known that using the unordered containers on Codeforces is super dangerous, and it is not recommended to use them at all unless you know how to protect them properly (and even if you do, a regular map or set is more beneficial in most cases). Thanks to Blowing up unordered_map, and how to stop getting hacked on it, the basics of hacking these hash-based structures are easy to access. In this blog, I'm going to go into more detail on how it works, and on ideas for hacking them under various constraints on the variables.

The basics

As mentioned in the linked blog, gcc implements its hash table simply by taking the key modulo certain primes, and these primes vary between different versions of gcc. So basically, the easiest way to make a hack case is to insert numerous distinct multiples of these primes. However, an important thing to note is that there is a whole list of primes, not just one. Although the linked blog mentions one prime for each of two versions, the bucket size actually walks through a list of primes as rehashing keeps occurring, and the mentioned ones are just somewhere in the middle of that list. From now on I'll call these primes evil primes and the multiples of such primes evil values.

First, let's slightly modify the insert_numbers function in the blog. We'll divide the function into the construction part and the testing part, so that we can easily use only the construction part when we actually hack. Note that unordered_map and unordered_set work exactly the same for the key part, so it doesn't matter which one we use.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

vector<ll> make_evils(ll prime, ll n)
{
	vector<ll> ret(n);
	for (ll i = 0; i < n; i++)
		ret[i] = (i + 1) * prime;
	return ret;
}

unordered_set<ll> st;
void test_evils(const vector<ll>& evils)
{
	st = unordered_set<ll>();

	clock_t start = clock();

	for (ll x : evils)
		st.insert(x);

	clock_t end = clock();

	cout << fixed;
	cout.precision(3);
	cout << "Time: " << double(end - start) / CLOCKS_PER_SEC << " seconds" << endl;
}

int main()
{
	vector<ll> evils = make_evils(172933, 100000);
	test_evils(evils);
}

You can run this in the custom test with G++20, and it will simply exceed the time limit. Changing the prime parameter to anything else, or using another version of G++, will make it run very fast.

The reason it is slow is that when the modded key values are the same, the container has to compare the new key with every existing key in that bucket to check that there is no real duplicate. Each insertion therefore takes $$$\mathcal{O}(n)$$$ time, so inserting $$$\mathcal{O}(n)$$$ elements takes $$$\mathcal{O}(n^2)$$$ time in total.

What are the evil primes?

So we now know that $$$172933$$$ is one of the evil primes in G++20. But this isn't enough to hack the unordered containers in many cases, due to the constraint on the key values. Typically, $$$10^9$$$ is the upper limit on the values in many problems, so there can be only $$$5782$$$ distinct multiples of $$$172933$$$ within this limit (excluding $$$0$$$). Let's say the constraint on the sum of the number of elements is $$$2 \cdot 10^5$$$. Can we simply repeat inserting those $$$5782$$$ elements $$$34$$$ times to make it slow?

The answer is no. The table won't be rehashed enough to reach our desired bucket size, so inserting $$$5782$$$ elements takes just linear time. This means we need to work with smaller evil primes. Here is the list of the notable evil primes for each version of G++:

G++14: 823, 1741, 3739, 7517, 15173, 30727, 62233, 126271, 256279
G++17: 709, 1493, 3209, 6427, 12983, 26267, 53201, 107897, 218971
G++20: 541, 1109, 2357, 5087, 10273, 20753, 42043, 85229, 172933

As we can see, each next bucket size is the smallest prime in this list that is more than double the current size. With this list, we can finally make a sequence of evil values within the constraint.
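
If you want to recover this list yourself, a small experiment like the one below (a sketch; run it in the custom test under the G++ version you're targeting) prints every bucket size the container rehashes to as elements are inserted. The later values it prints should match the notable primes listed above for that version.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main()
{
    unordered_set<ll> s;
    size_t buckets = s.bucket_count();
    for (ll i = 1; i <= 200000; i++)
    {
        s.insert(i);
        if (s.bucket_count() != buckets) // a rehash happened on this insertion
        {
            buckets = s.bucket_count();
            cout << buckets << '\n';     // the new bucket size, a prime from the internal list
        }
    }
}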

For example, we can insert $$$48185$$$ distinct multiples of $$$20753$$$ in G++20, and it takes approximately $$$1.3$$$ seconds. If the limit of the number of elements in all test cases is $$$2 \cdot 10^5$$$, we can have $$$4$$$ such tests. This will make it run for around $$$5.2$$$ seconds, which is enough to exceed the time limit for most of the problems.

    for (ll i = 0; i < 4; i++)
    {
        vector<ll> evils = make_evils(20753, 48185);
        test_evils(evils);
    }

But... is it really the worst?

In fact, we can make it even more evil. When we insert $$$48185$$$ multiples of $$$20753$$$, not all $$$48185$$$ insertions are slow. An insertion is slow only while the current bucket size is our desired bucket size; it is not slow before we reach that bucket size or after we move past it. So, to find the sweet spot, we need to know exactly when rehashing occurs.

Let's try these two cases:

    vector<ll> evils = make_evils(42043, 20753);
    for (ll i = 0; i < 100000; i++)
        evils.push_back(42043);
    test_evils(evils);

Output: Time: 0.002 seconds

    vector<ll> evils = make_evils(42043, 20754);
    for (ll i = 0; i < 100000; i++)
        evils.push_back(42043);
    test_evils(evils);

Output: Time: 5.096 seconds

The only difference between these two is that the latter inserts just one more distinct evil value, yet it makes a huge difference in the execution time. Why did this happen? Notice that $$$42043$$$ is the evil prime right after $$$20753$$$. So we can conclude that rehashing happened exactly when the number of inserted elements exceeded the current bucket size, and therefore $$$42043$$$ became the bucket size, i.e. the effective evil prime.

So we can see that if we're using $$$20753$$$ as the evil prime, inserting more than $$$20753$$$ elements no longer has the slowdown effect. Therefore, we can reduce the number of elements per test to $$$20753$$$ and use $$$9$$$ such test cases instead, making it run for around $$$11.7$$$ seconds. To be even more evil, we can fill the remaining element budget with smaller evil values, or just spam other small cases.

    for (ll i = 0; i < 9; i++)
    {
        vector<ll> evils = make_evils(20753, 20753);
        test_evils(evils);
    }

If duplicated keys are allowed

This is something I've yet to fully discover and I need more research on it. But at least there is something I can say now.

We can make it even worse if duplicates are allowed. In the experiments in the above section, after inserting distinct evil values, I kept inserting more of the evil prime. In this case, it kind of worked. But what if you insert other evil values? Check these two versions out:

    vector<ll> evils = make_evils(42043, 20754);
    ll x = evils[evils.size() - 2];
    for (ll i = 0; i < 100000; i++)
        evils.push_back(x);
    test_evils(evils);

Output: Invocation failed [TIME_LIMIT_EXCEEDED]

    vector<ll> evils = make_evils(42043, 20754);
    ll x = evils.back();
    for (ll i = 0; i < 100000; i++)
        evils.push_back(x);
    test_evils(evils);

Output: Time: 0.003 seconds

Surprisingly, using the second-to-last element took far longer than using the very last element. So I think we can assume that it has to do with the insertion order, but I'm not too sure about the exact rule. For example, running this in G++17 with its evil primes shows very different behavior: using the second-to-last element is as fast as using the very last element, while using the first element or something in the middle is far slower. If someone knows about this in detail, please tell me in the comments. Anyway, this opens a huge possibility for making worse tests in problems that allow duplicates, because it means we don't need several tests that each initialize the unordered structure; instead we can keep inserting an evil value in its worst possible state.

When the constraints are tough

(This part is slightly modified for generalization; thanks to pajenegod for pointing this out.)

Sometimes we have a very small range of elements available, such as only up to $$$m \le 10^6$$$. In these cases it's pretty hard to make a proper hack, since we can't insert many distinct elements with the same modded key value. But depending on the problem and the code, it can still be slowed down to some extent.

Suppose the number of elements is $$$n$$$, the problem allows duplicates, and there is an evil prime around $$$\sqrt{m}$$$. This means we can insert $$$\mathcal{O}(\sqrt{m})$$$ elements to reach the desired bucket size. From then on, we can insert the same worst evil value for the remaining $$$\mathcal{O}(n) - \mathcal{O}(\sqrt{m})$$$ elements. Therefore, it takes $$$\mathcal{O}(n \cdot \sqrt{m})$$$ time in total. In practice this isn't slow enough to hack in general, but it depends on the case. For example, you can try analyzing the effort behind this hack: https://codeforces.me/contest/1985/hacks/1049559.

To summarize, the worst time complexity we can achieve with $$$n$$$ elements with duplicates in range $$$[1, m]$$$ is $$$\mathcal{O}(n \cdot \min(n, \sqrt{m}))$$$, as each element can make at most $$$\mathcal{O}(\sqrt{m})$$$ collisions for each insertion. Without duplicates allowed, it is meaningless to insert more than $$$\sqrt{m}$$$ elements. Therefore, when multiple test cases are allowed it is best to insert $$$\mathcal{O}(\sqrt{m})$$$ elements so that it becomes $$$\mathcal{O}(m)$$$ time complexity for each test case, and make $$$\mathcal{O}({n_{sum} \over \sqrt{m}})$$$ such cases to achieve $$$\mathcal{O}(n \cdot \sqrt{m})$$$ in total (but with much smaller constant).
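
As a concrete illustration, here is a sketch of a generator for the $$$m \le 10^6$$$, $$$n = 2 \cdot 10^5$$$ case on G++20, assuming a simple "first $$$n$$$, then the $$$n$$$ values" input format. It uses $$$1109$$$ as the evil prime near $$$\sqrt{m}$$$ and, following the experiment above, pads with the second-to-last distinct multiple; since the exact "worst" duplicate is not fully understood, treat this as a starting point rather than a guaranteed hack.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main()
{
    const ll prime = 1109, m = 1000000, n = 200000;
    vector<ll> vals;
    // ~901 distinct multiples of 1109: enough to push the bucket size to 1109,
    // after which all of them share a single bucket.
    for (ll x = prime; x <= m; x += prime)
        vals.push_back(x);
    // Pad the rest with one repeated evil value (second-to-last, per the G++20 experiment).
    ll worst = vals[vals.size() - 2];
    while ((ll)vals.size() < n)
        vals.push_back(worst);
    cout << n << '\n';
    for (ll v : vals)
        cout << v << ' ';
    cout << '\n';
}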

Another attack: the clear() bug

There has been a long-standing bug in gcc's unordered containers, and it has to do with the clear() method. By the standard, clear() should take time proportional only to the number of elements (i.e. size()), but here it also takes time proportional to the bucket count. If the bucket count were reset properly as well, this would be no issue. Unfortunately, clear() does NOT shrink the bucket array, and this leads to a huge vulnerability. Let's take a look at an example.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

unordered_set<ll> st;
int main()
{
    for (ll i = 0; i < 200000; i++)
        st.insert(i);
    for (ll i = 0; i < 100000; i++)
        st.clear();
}

This simple code takes 5952 ms in G++20!!! As you can see, it simply inserts $$$2 \cdot 10^5$$$ elements into the set and then just calls clear() $$$10^5$$$ times. No hash attack is involved at all. Inserting the elements merely grew the bucket array, and every clear() then iterates over that whole bucket array.

So when the number of test cases is large enough, any code that resets the set with clear() can be hacked by a test that first inserts a lot of distinct elements and then spams the smallest cases possible. If you're the coder and want to avoid this, reset the set by constructing a new object instead, i.e. st = unordered_set<ll>();. I wonder why this bug still hasn't been fixed to this day...
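
For comparison, resetting by assignment instead of clear() in the example above stays fast, because move-assigning a freshly constructed set also discards the oversized bucket array (a small sketch reusing the same global st):

    for (ll i = 0; i < 100000; i++)
        st = unordered_set<ll>(); // replaces st.clear(); runs in a tiny fraction of the time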


By djm03178, 3 months ago, In English

As many people may have noticed, I've done quite a number of TL hacks on 2000G - Call During the Journey. The hacked submissions even include many masters' and grandmasters' solutions, and most of them don't even have suspicious running times on the original tests. I predict many more solutions will fail on system testing, because I've checked only some of the C++20 and C++17 submissions and haven't tried running all the hack cases. So, how did they get time limit exceeded?

They're actually caused by a few different mistakes, and we need different strategies to hack each of them.

For convenience I'll assume that $$$\mathcal{O}(n)$$$ = the number of nodes = the number of edges, because whether the factor is $$$n$$$ or $$$m$$$ varies depending on the implementation, and the number of edges is likely to be at least the number of nodes. Also for convenience (and because this problem is like that) I'll assume the graph is bidirectional, but everything works the same for directed graphs (they're only a bit harder to slow down due to smaller constants).

For reference, you can also read my old article in Korean, but the content may be quite different.

1. Revisiting the same node

This is the most common mistake in Dijkstra implementations. By revisiting, I mean that the code scans all adjacent edges of the same node multiple times. There are a few well-known ways to prevent this:

  • Check whether the distance taken from the priority_queue matches the value currently written in the distance array (see the sketch after this list).
  • Make a visit-check array and mark a node when it is taken from the queue for the first time.
  • Use a std::set instead of a std::priority_queue, and keep at most one entry per node by removing the old element before updating it. (Personally I don't recommend this because... it's a slow set...)
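
For reference, here is a minimal sketch of the first option, not tied to this particular problem: a popped entry is skipped whenever its distance is stale, so each node's adjacency list is scanned at most once.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// adj[u] holds (v, w) pairs of a weighted graph; nodes are 0-based.
vector<ll> dijkstra(int n, const vector<vector<pair<int, ll>>>& adj, int src)
{
    const ll INF = LLONG_MAX / 4;
    vector<ll> dist(n, INF);
    priority_queue<pair<ll, int>, vector<pair<ll, int>>, greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty())
    {
        auto [d, u] = pq.top();
        pq.pop();
        if (d != dist[u]) // stale entry: u was already settled with a smaller distance
            continue;
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v])
            {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

int main()
{
    // Tiny usage example: chain 0 - 1 - 2 with weights 5 and 7.
    vector<vector<pair<int, ll>>> adj(3);
    adj[0].push_back({1, 5}); adj[1].push_back({0, 5});
    adj[1].push_back({2, 7}); adj[2].push_back({1, 7});
    for (ll x : dijkstra(3, adj, 0))
        cout << x << ' ';             // prints: 0 5 12
    cout << '\n';
}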

But if for some reason you did none of these, the problem appears. When we use a priority_queue for Dijkstra's algorithm, we usually push the same node into the queue whenever its distance is updated. This means that in the worst case the queue can hold $$$\mathcal{O}(n)$$$ entries for the same node. That by itself is okay, but what happens if you scan all of that node's edges each time it is popped? The node can have $$$\mathcal{O}(n)$$$ edges, so a total of $$$\mathcal{O}(n^2)$$$ edges get checked.

So the way to hack it is to force the algorithm to update the same node $$$\mathcal{O}(n)$$$ times while that node also has $$$\mathcal{O}(n)$$$ edges. Here is a simple way to do it:

  • $$$i$$$-weighted edges: $$$1$$$ <-> $$$2$$$, $$$1$$$ <-> $$$3$$$, $$$1$$$ <-> $$$4$$$, ..., $$$1$$$ <-> $$$n-2$$$
  • $$$inf -2i$$$-weighted edges: $$$2$$$ <-> $$$n-1$$$, $$$3$$$ <-> $$$n-1$$$, $$$4$$$ <-> $$$n-1$$$, ..., $$$n-2$$$ <-> $$$n-1$$$

where $$$i$$$ is the node number (for nodes other than $$$1$$$ and $$$n-1$$$). This makes nodes $$$2$$$ through $$$n-2$$$ get visited in order, and each of them updates node $$$n-1$$$ with a slightly smaller distance. Therefore, node $$$n-1$$$ is dequeued $$$\mathcal{O}(n)$$$ times, and each time its $$$\mathcal{O}(n)$$$ edges are checked (even though none of them actually improves any node's distance), resulting in $$$\mathcal{O}(n^2)$$$ work in total.

The edge to node $$$n$$$ can be attached anywhere with weight $$$inf$$$; it just needs to be relaxed at the very end.
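
To make this concrete, here is a sketch of a generator for the construction above, assuming a plain weighted-graph format of "$$$n$$$ $$$m$$$" followed by "$$$u$$$ $$$v$$$ $$$w$$$" lines. This is NOT the exact input format of 2000G, and the node count and weight limit are just placeholders, so it would need to be adapted to the actual problem.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main()
{
    const int n = 100000;      // hypothetical node limit
    const ll inf = 1000000000; // close to the hypothetical maximum weight
    vector<array<ll, 3>> edges;
    for (int i = 2; i <= n - 2; i++)
    {
        edges.push_back({1, i, i});               // nodes 2..n-2 get settled in increasing order
        edges.push_back({i, n - 1, inf - 2 * i}); // each of them improves node n-1 by 1
    }
    edges.push_back({n - 1, n, inf});             // node n is only reached at the very end
    cout << n << ' ' << edges.size() << '\n';
    for (auto& e : edges)
        cout << e[0] << ' ' << e[1] << ' ' << e[2] << '\n';
}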

Hacks: https://codeforces.me/contest/2000/hacks/1067650 and https://codeforces.me/contest/2000/hacks/1068253 . Two versions were required because some people started Dijkstra's algorithm from node $$$1$$$ while others started from node $$$n$$$.

2. Visiting furthest nodes first

It's also a common mistake to make the priority queue keep the furthest node at the top instead of the closest one. Although it looks ridiculous, random tests aren't very good at catching this either. You can see that my intended solution 276237524 ran in 890 ms while the reversed-operator solution 276320086 ran in 577 ms on the original tests.

To hack this, we want to build a long chain of nodes and force that chain to be updated many times. Say we have a chain of nodes $$$A$$$ — $$$B$$$ — $$$C$$$ — $$$D$$$ — $$$E$$$ connected by edges of weight $$$1$$$. We also add edges from node $$$1$$$ to each of them, with weights increasing along the chain by a difference of $$$2$$$. Then, with the reversed operator, $$$E$$$ is visited first, but when $$$D$$$ is visited from $$$1$$$ it also revisits $$$E$$$; when $$$C$$$ is visited it revisits both $$$D$$$ and $$$E$$$; visiting $$$B$$$ makes $$$C$$$, $$$D$$$, $$$E$$$ all get revisited; and finally the whole $$$ABCDE$$$ chain is walked again. Generalized, we get a chain of $$$\mathcal{O}(n)$$$ nodes, each visited $$$\mathcal{O}(n)$$$ times, taking $$$\mathcal{O}(n^2)$$$ time in total.

Hack: https://codeforces.me/contest/2000/hacks/1069602

3. Visiting in order of node number

This also happens a lot because many people put std::pairs of $$$(node, distance)$$$ into the priority_queue. Because std::pair's comparator compares the first component first, by default the node with the larger number is popped first. If one used greater<pair<int, int>>, then the smaller node number comes first instead. In either case, it's still hard to hack these with random tests alone. See 276321227 and 276321937.

The basic idea for hacking these is the same as in section 2. This time the order is driven by node numbers instead of distances, while we control the intended visit order so that the other nodes' distances keep getting updated.

Hacks: https://codeforces.me/contest/2000/hacks/1069643 and https://codeforces.me/contest/2000/hacks/1069602 . I used a bit different pattern here, but I guess it's weaker than the other one.

4. Not large enough infinity value

It's also common for the value used to represent infinity to not be large enough. Although this was well covered in this problem, in many cases it's easy to miss. For instance, many graph problems have a much tighter constraint on edge weights than on the possible answer, and if the tests are purely random, the largest answer is very likely to remain tiny compared to the limit. Therefore, there should be at least one test with a linear (path-shaped) graph where every edge has the maximum weight, to check whether the solution's $$$inf$$$ value is large enough.

To problem writers who are planning to make Dijkstra problems...

Dijkstra's algorithm is a basic but very interesting topic, and many people would love to create wonderful problems based on it. However, as shown in this blog, making strong tests against wrong implementations requires careful thought about what can actually go wrong, and relying on a large $$$n$$$ and random tests will most likely fail to prevent them from getting AC. I'd like to urge those who want to set Dijkstra problems to take some time to write various wrong implementations and check whether your tests actually make them fail.


By djm03178, 3 months ago, In English

These days it feels like most contest announcements get heavily downvoted after the round, even when the round itself didn't have any critical issue. Although I myself am not very good at complimenting others and usually like to criticize, I don't think they deserve such hate at all.

As a former problem writer who has hosted 3 rounds, I know how hard it is to come up with good problems and prepare them. The coordinators' job of making sure everything works is even tougher. However, it is impossible to prevent every possible minor issue, and the writers will be highly demotivated if we show hate towards a contest over every single thing we don't like.

Thus, I'm starting a small campaign to show our respect to the authors: pressing the upvote button. It's nothing hard at all, takes only a few seconds, and is completely anonymous. This is the minimum appreciation we can give for their hard work. I saw a blog saying we had many 1k+ upvoted announcements a few years ago, but these days they are gone even though we have far more people who can vote. I want to see them again as well.

To clarify: I'm not saying that we should publicly target each contest and tell people to upvote it. It's more about changing our usual mindset and behavior, and nobody should be forced to join this wave.


By djm03178, 4 months ago, In English

Recently, after almost every Div. 2 contest, I see some blogs/comments saying that ~2C problems are getting easier. What's the evidence? They say these problems get many solves.

No, I think it's the opposite.

The absolute number of solves is not a good measure of difficulty. A not-too-precise but statistically more reliable way to evaluate the difficulty of a 2C problem is to calculate (# of official solves on C) / (# of official solves on A), assuming that most people have always been able to solve 2A.

Let's calculate this for the C problems of the 10 most recent Div. 2-only contests (not counting Educational rounds):

On average it's around ~33.6%.

So, was it harder a year ago? Let's check 10 contests from exactly 1 year ago:

On average it's around ~38.4%.

As you can see, this proportion has actually decreased in recent rounds compared to 1 year ago. Although the difference is not drastic, we can hardly say that 2Cs are "getting easier". Furthermore, with the amount of cheating nowadays, if we removed those solves we could almost surely say that the problems are actually getting harder.

So, is this 'traditionally' the intended difficulty of 2C? Well, let's take a look at the contests 5 years ago:

The difference is more significant here: in these contests, the average is around ~46.7%.

As I wrote in a previous blog, I think recent lower-division rounds are generally too hard, and part of the reason, I think, is that people are scared of too large an absolute number of solves. Having 40 solves on a 2F nowadays isn't the same as having 40 solves on a 2F 5 years ago. Nobody would have argued that a 2F with 10 solves was too easy 5 years ago, but these days people complain that 40 solves on a 2F is way too easy, even though the number of participants has increased roughly 4 times. Similarly, 8k solves on a 2C nowadays isn't the same as 8k solves on a 2C 5 years ago; it's like 2k solves back then, which nobody would have considered too many.

Also, I want to believe that as time goes on, the average skill level of participants increases as well, so if two problems have a similar value of this proportion, the more recent one is likely to be the harder one. We are not getting easier problems than before.


By djm03178, 4 months ago, In English

This is a half-serious blog, so I don't have a strong opinion on this topic and I welcome others' thoughts.

For years I've seen people call some contests 'speedforces', but it seems people have different definitions of it. The most common situation in which I see someone call a contest speedforces is when the contest was, in general, too easy, so they had to solve many problems in a short time. However, I have a slightly different take on it.

Speed is, in almost all cases, very important in CP contests, but there are times when it matters even more than being able to solve harder problems, and I think that's not just when the problems are easy in general. Instead, I call a CF round speedforces for people in a specific rating range when most of them could solve $$$K$$$ problems but not $$$K+1$$$, and this range is very wide. So, under my definition the term is not universal for all participants; it applies only to certain participants.

For example, yesterday's contest, Codeforces Round 960 (Div. 2), can be called speedforces (no offense, I think the contest was cool) for people rated 1800 or above, because solving A-D gave a performance anywhere from 1750 to 2400 depending on how fast you solved them. The fifth problem was quite hard, so being slightly better at solving harder problems didn't help much for most people in this range unless they absolutely outperformed as a Div. 2 participant. Solving A-C, on the other hand, only spanned performances from 1450 to 1750, so participants in that range experienced less of a speedforces effect, and being able to solve a slightly harder problem helped more than just solving fast.

With this definition, one of the things that creates speedforces is the hardest problem being so hard that barely anyone solves it: the whole performance range then has to be divided into $$$N$$$ segments instead of $$$N+1$$$ (counting the 0-solve segment), so on average each segment widens and more people get stuck at each number of solves.

So unlike the other definition I described above, having easy problems doesn't by itself make a round speedforces. It's more that the first $$$K$$$ problems are easy for everyone while the $$$(K+1)$$$-st is relatively much harder, so most people cannot solve it. If the difficulty gaps are even, participants are still distributed evenly into segments based on whether they solved one more hard problem rather than on how fast they solved the easy ones.

I just wanted to bring this up because I've seen some people treat it as a mere complaint born of skill issues rather than something that exists, but I'd say it's a natural phenomenon that comes from a contest's difficulty curve, and therefore a valid observation (whether it's an actual issue or not is another story).


By djm03178, 5 months ago, In English

Hacking has always been a hot issue and many people have mixed feelings about it. While I personally love the concept of hacking, I have to agree that hacking in normal rounds is not very functional nowadays and mostly just causes problems.

  • The concept of rooms is heavily luck-based. Depending on one's luck, there may be anywhere from 0 to several hackable submissions, and it is unfair to hand several hundred free points to whoever lands in a jackpot room. Even for those who submitted a wrong solution that passes pretests, it takes luck to be hacked by someone so that they can realize the solution is wrong and resubmit.
  • Image blurring is not effective. Although the rules prohibit using OCR to convert the images to text, there is no way to know whether someone is doing it. In the old days we could at least assume OCR wouldn't be very accurate, but nowadays OCR is just too good, and only the honest people who don't use it are put at a disadvantage.
  • The recent trend is to have strong pretests. Especially with multi-test problem formats, it is usually very unlikely to find any hackable solution in a single room.
  • It is abused for cheating. As has been brought up many, many times, those who distribute solutions to others don't share their own code; instead, they open other participants' code through the hacking interface and share that. This makes it harder to detect who originally started spreading the solution, and sometimes the original author is mistakenly flagged for sharing their code online.

For these reasons, I propose that we completely remove normal hacks during rounds, and apply post-contest open hacks to all rated rounds, along with a few changes.

  • Assign only a few (1-3) hours to open hacking. I'd like this phase to be concentrated: an active period where people discuss hackable solutions and try to contribute to strengthening the tests. People also wouldn't need to wait another full day for their ratings to be updated.
  • Treat the open hacking phase seriously. Just as during the contest, any issue that occurs in this phase (queue delays, a wrong validator/checker, unexpected verdicts) must prompt immediate action from the writers/coordinators and be announced properly. If an issue heavily affects the phase, extending it must be an option.
  • (Optional) Like we did on very old Educational/Div. 3 rounds, mention the best hackers in the announcement. An example can be found here.

These can all be applied to the current Educational/Div. 3&4 rounds as well. I hope this would also lighten the problemsetters' burden of preparing the strongest tests possible.


By djm03178, 6 months ago, In English

These days it often feels like problems are too hard for their positions, and it's almost never the other way around. I'm not saying these problems are fundamentally bad (it's good to face them to improve our problem-solving skills), but it's not very healthy from a contest's perspective.

Let me bring up some examples:

Example 1. Yesterday's contest, Codeforces Round 945 (Div. 2), had no official full solves. Most Grandmasters, and even LGMs, struggled with the difficulty of problem F. Less than 1% of the official participants were able to solve any problem from the harder half of the contest.

Example 2. In Codeforces Round 944 (Div. 4), the hardest problem, 1971H - ±1, required knowledge of 2-SAT. Only a very few official participants were able to solve it even though the topic is pretty well known, simply because the topic's level itself is too high. I think this is one of the harder topics even for the last problem of a Div. 3, yet it appeared in a Div. 4 contest.

Example 3. 1932G - Moving Platforms. I once created a very similar problem (guess what, its title is literally "teleporting platform") with constraints that do not require extgcd, and even so it was evaluated to be about as hard as a typical 2F. This problem does require extgcd, yet it was a Div. 3 problem, and it got no official solves.

My suggestion is NOT that we should remove harder problems from contests. Instead, I want to ask: why not hold a higher-division contest with these problems? Many Div. $$$n$$$ contests just feel perfect to be held as Div. $$$n-1$$$ contests, especially most Div. 4 and Div. 3 contests. In recent contests, the hardest Div. 4 problems are not easy even for Experts, and if you can solve a whole Div. 3 contest you're most likely near Master level.

Here are a few points from me:

  • From a contest's perspective, problems with little to no official solves are 'wasted' because they can hardly affect the official standings. If the writers prepared 6 problems but only about 1% of the participants can really do anything with the last 3, wouldn't those problems be more enjoyable for higher-level unofficial participants if the round were rated for them too, and wouldn't that also make the competition more interesting?

  • For all we know, even when the hardest problems get a few solves, most of them come from alts, meaning they're still too hard for legitimate official participants. In lower divisions, these solves often come from people whose ratings are low only because they haven't participated in many CF contests.

  • Here's a little history of Div. 2-only contests. Until quite a while ago (around 6 years?), Div. 2-only contests were rated only up to Expert. Then someone pointed out that the harder problems in these contests were quite challenging even for CMs, and that's why CMs started to be included in Div. 2-only contests. But what we have in Div. 2-only contests nowadays are problems that are too hard even for Masters. I don't think this is what we expected when we decided to extend the rated range.

  • We shouldn't be afraid of having a contest that's too 'easy' once in a while. Even in an extreme case, 95% of the participants still won't get a chance to tackle the last problem, so it's still pretty good for most people.

Because these contests are unrated for many high-rated people, they cannot fully enjoy them even when the problems are quite interesting to them. I think the drought of Div. 1 contests, and the comparatively low frequency of Div. 2 contests relative to Div. 2-4 contests overall, call for a more lenient difficulty threshold for the divisions.


By djm03178, history, 9 months ago, In English

I've always loved hacking in CF rounds, especially open hacking and uphacking. Recently I've been focusing on hacking for TLE, because there are plenty of solutions that pass every single randomly generated test but still have a broken worst-case time complexity, and the tests for that can only be found by investigating the code closely.

One of the things I noticed while hacking in recent rounds is that there is a fairly common mistake that many people make, yet most problems don't have tests against it. The conditions for such mistakes are:

  1. The problem must be a multi-test problem (around $$$10^4$$$ or more if possible).
  2. The constraint on the $$$n$$$ variable of a single test should be as large as the constraint on the sum of $$$n$$$ in all tests.
  3. The solution has both of the slow aspects described below, but neither of them is slow enough to get TLE when attacked alone.

Conditions #1 and #2 are very common in recent rounds, so it's not hard to find two or more such problems in every round. Condition #3 is usually found by simply sorting the accepted solutions by execution time.

So, the slow aspects are:

  1. Slow operations repeated for every test case: most commonly, using endl (or alternating cin and cout without tie(0)), initializing a whole array with memset, defining a vector of the maximum possible $$$n$$$ size, etc.
  2. Bad time complexity or bad constant but still fitting in TL.

Now you can see what I'm talking about. Most such problems have tests against #1 or against #2, but not both. #1 is easily attacked by any random test with the maximum number of test cases. Attacking #2 may require more preparation from the writer to come up with specific edge cases, but it is usually done fairly well. However, I have never seen a round where a test was prepared to counter both at once.

For example, let's look at a problem from a recent round: 1923D - Slimes. The limit on the number of test cases is $$$10^4$$$, so condition #1 is satisfied, and we have $$$n \le 3 \cdot 10^5$$$, so condition #2 is satisfied as well.

Now let's take a look at one of the hacks I made for it: https://codeforces.me/contest/1923/hacks/998199. If you look at the submission (247991883), you'll see that the solution was accepted before the hack, even though the original test set already had tests with the maximum number of test cases (tests #2-4) and various maximum-$$$n$$$ tests (tests #8 onward). The maximum-$$$t$$$ test took 1794 ms (test #3) while the maximum-$$$n$$$ test took 639 ms (test #18).

The reason it took long on the maximum-$$$t$$$ test is simple: the code calls con() for every test case, which does some calculations taking $$$O(N)$$$ time, where $$$N$$$ is the maximum possible $$$n$$$. Therefore, just by having $$$10^4$$$ test cases the code performs roughly $$$10^4 \cdot 3 \cdot 10^5$$$ lightweight operations, yet it still fits in the TL. Tests #3 and #4 probably also come close to the limit on the sum of $$$n$$$ over all test cases, but that part doesn't add much, because each individual $$$n$$$ is too small.
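
For illustration, here is a sketch of that anti-pattern (hypothetical code, not the actual hacked submission): the per-test reset is proportional to the global maximum $$$N$$$, so with $$$t = 10^4$$$ tiny tests it zeroes about $$$3 \cdot 10^9$$$ array elements in total while the "real" work stays negligible.

#include <bits/stdc++.h>
using namespace std;

const int N = 300000;
long long pre[N + 1];

void solve()
{
    int n;
    cin >> n;
    memset(pre, 0, sizeof(pre)); // O(N) work on every test case, even when n = 1
    for (int i = 1; i <= n; i++)
    {
        long long a;
        cin >> a;
        pre[i] = pre[i - 1] + a; // the actual O(n) work per test case
    }
    cout << pre[n] << '\n';
}

int main()
{
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--)
        solve();
}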

For the maximum-$$$n$$$ tests I didn't even try to find anything special about the worst case, though judging by the tests, something producing a series of small answers would probably have been more effective.

So, here's what I did: https://codeforces.me/contest/1923/hacks/998199/test. If you look at the generator, it simply makes $$$9999$$$ test cases with $$$n=1$$$ and a single test with $$$n=300000-9999$$$, which wasn't even targeted at this solution's weakness (it was made for another submission earlier). An $$$n=290001$$$ test shouldn't behave much differently from an $$$n=300000$$$ test, but having $$$t=10000$$$ by itself caused a huge slowdown. So you know what happened: Successful hacking attempt.

Similar situations can occur in almost every problem with similar constraints. Therefore, I think it is something future writers should consider: add several tests of this type to such problems. I hope this blog helps strengthen main tests against these solutions, which really should not pass even without my hacks.


By djm03178, history, 21 months ago, In English

This won't be a full solution to these problems but may help a little.

It's easy for anyone to post spoilers, and someone may read them before the managers take care of them, especially in the contest announcement.

So, how about we block writing new comments, at least on the contest announcement, during the contest? I don't see a reason to allow any kind of comment about the contest before it ends. If someone needs to tell the contest managers something, it would be better to guide them to the contest's question feature.


By djm03178, history, 2 years ago, In English

Every time a (rated) round ends, I see the same questions from people who are new to Codeforces. So I made an FAQ that answers these repeated questions. Please link to this blog when you want to answer them.

Why is my rating still not updated yet?

Be patient. It takes a while (a few hours, sometimes even more) to finalize things such as detecting cheaters and confirming test results, and there may be other issues that need investigation. The round won't be unrated unless it is announced that it will be unrated.

My solution was hacked after contest, but why didn't my rank / rating change?

That is called uphacking, and it does NOT affect the standings or the rating changes. The score you gained from the uphacked solution will remain. The hack tests will be added to the tests for practice submissions later, but there won't be any rejudging of already-accepted solutions.

I got a system message that says that my submission coincides with other solutions, but I really didn't cheat.

Don't use online IDEs like ideone in public mode. Other people can read your code! Make sure you're using private (requires signing in) or secret mode.

(Educational & Div. 3/4 rounds) It has been 10 hours since the contest; why is my rating still not updated?

For Educational & Div. 3/4 rounds, there is a 12-hour open hacking phase after the contest in which anyone can read anyone's solutions and try to hack them. Unlike uphacking in standard rounds, successful hacks made during the open hacking phase WILL change the verdict to 'Hacked', and such submissions count as incorrect. Hack tests made during this phase are added to the main tests after the phase ends, all accepted solutions are rejudged on the original tests plus the hack tests, and only the submissions that pass them all remain accepted.

(Educational & Div. 3/4 rounds) So the open hack phase has ended, but my rating still hasn't changed yet?

Again, be patient. They need time to add hack tests and run rejudges. After that, it can take a few more hours again.


Any suggestions to improve this FAQ are welcome.


By djm03178, 3 years ago, In English

I'm glad that we now have each user's unofficial contest history. I'd like something similar to this: can we have such a standings table for each contest, too? Currently there's an option called 'show unofficial', which includes virtual participants, and for some reason practice submissions as well, which aren't meaningful to show there at all.

I think participating during the contest and participating virtually are fundamentally different. We don't know what information virtual participants already had before starting the virtual contest. Even if they didn't actually read the problems, they could have heard of a concept or technique required for them (which would be considered cheating if it happened during the contest), yet we don't consider it cheating for virtual participants. Still, in the unofficial standings table they're treated the same as unofficial participants who solved the problems during the contest.

As someone who has hosted 3 rounds before, I like to look back at those rounds and recall memories of them. But I can't see the unofficial standings in the form they had at that time. It also feels wrong to keep 'losing' my unofficial rank in the unofficial standings of contests that were unrated for me. I assume it's not hard to implement, as we already have it on the personal history page, so I'd like to ask if we can have such a feature soon.


Unrelated, but I'm sure we used to have links to one's place in each contest's standings on a user's contest history page long ago. Does anyone know why they were removed?

Full text and comments »

  • Vote: like
  • +284
  • Vote: dislike

By djm03178, history, 3 years ago, In English

I've noticed that some of the 'easiest' problems in CF rounds are not actually easy for beginners. Indeed, most of these problems:

  • Become really trivial once you notice how to destroy the problem.
  • Require one line of input, and one line of output (probably one or two ifs?), thus take <30s to code.

But in fact, to destroy them, you need to be at a much more advanced level. For beginners, it's really hard to even come up with the idea of looking at the problem in a way other than how it presents itself. So for advanced programmers it's a typical 1-minute-solve problem (because the required code is so short), but for the real target audience it's just a genuinely hard problem. Not because they don't know how to code 2 lines for input and output, but because they can't even start coding.

I think these problems should be designed so that anyone can solve them with enough time invested. They can require more implementation than just input and output (this is a programming contest, after all), but in any case I think the solution should be more straightforward and easily found just by reading the description carefully.

What are your opinions? Please leave a comment below.

Full text and comments »

  • Vote: like
  • +35
  • Vote: dislike

By djm03178, history, 4 years ago, In English
Tutorial is loading...

Solutions

Tutorial is loading...

Solutions - C++ - Java - Python 3

Tutorial is loading...

Solutions - C++ - Java - Python 3

Tutorial is loading...

Solutions - C++ - Java

Tutorial is loading...

Solutions - C++ - Java

Tutorial is loading...

Solutions - C++ - Java - Python 3

Full text and comments »

Tutorial of Codeforces Round 688 (Div. 2)
  • Vote: like
  • +178
  • Vote: dislike

By djm03178, history, 4 years ago, In English

오랜만이에요, 코드포스! (Long time no see, Codeforces!)

I'd like to welcome all of you to Codeforces Round 688 (Div. 2)! The contest will start at Dec/04/2020 16:05 (Moscow time), and it is rated for all participants with ratings under 2100. Note the semi-unusual start time.

You will be given 6 problems and 2 hours and 15 minutes to solve them. The score distribution will be announced soon.

All problems are prepared by me, with a lot of help from the testers, who made me realize that my solutions are dumb.

Thanks to Green55, JooDdae, cs71107, YeongTree, Savior-of-Cross, jh05013, blobugh, 39dll, InfiniteChallenge, Pentagon03, sonjaewon, slah007, jooncco, and kalki411 for testing the round, and especially xiaowuc1 for helping polish English statements as well. I would also like to thank 300iq for round coordination, and MikeMirzayanov for the great Codeforces and Polygon system.

See you in the round!

UPD: The scoring distribution is 500 — 1000 — 1500 — 2000 — 2500 — 3500.

UPD 2: The round is finished. Thanks for your participation! I'm sorry about underestimating the difficulty of problem B, but I hope you still enjoyed the problems! The editorial will be posted in a minute.

UPD 3: The editorial is out!

UPD 4: Congratulations to the winners!

Div. 2

1: caoyizhong

2: Depth_First_Search

3: Misaka23334

4: PleasePreyForMe

5: EzioAuditoreDaFirenze

Unofficial Div. 1

1: Geothermal

2: jiangly

3: neal

4: saketh

5: Pyqe

Full text and comments »

  • Vote: like
  • +757
  • Vote: dislike

By djm03178, history, 5 years ago, In English

I've seen phrases like this in contest announcements many times:

The score distribution is standard: 500 - 1000 - 1500 - 2000 - 2500 - 3000

However, I don't really see this 'standard' distribution being particularly balanced in practice, so I don't get why this uniform distribution is called standard.

The score decreasing rate (i.e., the number of points lost per minute) is proportional to the initial score of the problem. For example, in a 2-hour contest, if you solve a 1000-point problem at 1:59, you get 524 points, which is still higher than solving a 500-point problem at 0:00. But if you solve a 3000-point problem at 1:59, you only get 1572 points. That's much less than solving a 2000- or 2500-point problem quickly enough.
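
To make the decay concrete, here is a minimal Python sketch (assuming the usual Codeforces rule that a problem loses 1/250 of its initial score per minute, floored at 30% of the initial score, and ignoring wrong-submission penalties) that reproduces the numbers above:

    def score(initial: int, minutes: int) -> int:
        # Points for solving a problem `minutes` after the start, assuming a decay
        # of initial/250 per minute with a floor of 30% of the initial score.
        return max(initial - initial * minutes // 250, initial * 3 // 10)

    # The examples above (2-hour contest, solving at 1:59, i.e. minute 119):
    print(score(1000, 119))  # 524: still more than an instant 500-point solve
    print(score(3000, 119))  # 1572: far less than a reasonably fast 2000/2500-point solve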

In most Div. 2 contests, when I look at the standings table, the general tendency of each participant's gained score looks like this: A <<< B < C >= D >= E >= F. This includes my recent contest (yes, I underestimated D a lot).

I think the reason we give more points to a harder problem is to reward participants who manage to solve it. But in practice, a better strategy is almost always to solve the easier problems first, and as a result most people gain fewer points from solving the harder problems. Another side effect is that when you fail system tests, the most critical failure is on B or C, not on the harder problems. (Let's exclude those who read the harder problems first and decide to participate only when they're confident they can solve them fast.)

This is also related to Div. 1 / Div. 2 parallel rounds. Most of these rounds tend to use a score distribution where 2C~2F = 1A~1D + 1000 when the problems are shared. But this means Div. 2 participants are rewarded much less for solving 2E or 2F than Div. 1 participants are for solving 1C or 1D, relative to how much they get for solving 2C (1A) or 2D (1B).

Of course, I didn't account for the penalty from wrong answers, but the average number of tries needed to get AC isn't large enough to change this in general.

In conclusion, I'd like to argue that it's better not to hesitate to set higher scores on harder problems. Let me suggest what I consider a 'balanced' score distribution: 500 — 750 — 1250 — 1750 — 2500 — 3500.

Full text and comments »

  • Vote: like
  • +160
  • Vote: dislike

By djm03178, history, 5 years ago, In English

1304A — Two Rabbits

Tutorial
Solution

1304B — Longest Palindrome

Tutorial
Solution

1304C — Air Conditioner

Tutorial
Solution

1304D — Shortest and Longest LIS

Tutorial
Solution

1304E — 1-Trees and Queries

Tutorial
Solution

1304F1 — Animal Observation (easy version)

Tutorial
Solution

1304F2 — Animal Observation (hard version)

Tutorial
Solution: O(nm log m)
Solution: O(nm)

Full text and comments »

Tutorial of Codeforces Round 620 (Div. 2)
  • Vote: like
  • +205
  • Vote: dislike

By djm03178, history, 5 years ago, In English

안녕하세요, 코드포스! (Hello, Codeforces!)

I'm glad to invite you to Codeforces Round 620 (Div. 2). The contest will start at Feb/15/2020 16:05 (Moscow time), and it is rated for all participants with ratings under 2100.

You will be given 6 problems, one of which has 2 subtasks. The contest duration is 2 hours. The score distribution will be announced later.

All problems are prepared by me, with huge help from the testers in developing great solutions.

I'll be on the community Discord server after the contest to share my thoughts and get feedback about the problems.

Thanks to 79brue, molamola., Savior-of-Cross, evenharder, cs71107, Justice_Hui, rkm0959, chpark1111, imeimi, InfiniteChallenge, gaelim, jh05013 (Good tester), yuto0518, N_jara, aryanc403, SnowGoose, solvemproblr, surung9898, and ko_osaga for testing the round. I would also like to thank 300iq for round coordination, and MikeMirzayanov for the great Codeforces and Polygon system.

Hope you enjoy the problems!

UPD: The scoring distribution is 500 — 1000 — 1500 — 1750 — 2000 — (2000 + 1000)

UPD2: The contest is finished! Thanks so much for your participation! The editorial is here.

UPD3: Congratulations to the winners!

Div. 2

1: Itst_boyfriend

2: COVID-19

3: naive_xay

4: anta.baka

5: ChitandaEru

Unofficial Div. 1

1: wucstdio

2: ksun48

3: jiangly

4: uwi

5: teapotd

Full text and comments »

  • Vote: like
  • +530
  • Vote: dislike

By djm03178, history, 5 years ago, In English

The problems featuring Gildong (B, D, F) are by me, and the ones featuring Amugae (A, C, E) are by nong.

Tutorial is loading...
Tutorial is loading...
Tutorial is loading...
Tutorial is loading...
Tutorial is loading...
Tutorial is loading...

Full text and comments »

Tutorial of Codeforces Round 578 (Div. 2)
  • Vote: like
  • +157
  • Vote: dislike

By djm03178, 5 years ago, In English

안녕하세요, 코드포스! (Hello, Codeforces!)

We're glad to introduce you to Codeforces Round 578 (Div. 2), which will start at Aug/11/2019 15:35 (Moscow time). The round is rated for Div. 2 participants.

You will be given 6 problems and 2 hours to solve them. The score distribution will be announced later.

The problems are prepared by nong and me.

Thanks to pllk, Learner99, Rox, mohammedehab2002, cheetose, jh05013, rkm0959, edenooo, and alex9801 for testing the round. We would also like to specially thank KAN and arsijo for coordinating the round, and of course, MikeMirzayanov for the Codeforces and Polygon platforms.

This is our very first round, so I hope you enjoy it a lot!

UPD: The scoring distribution is 500 — 1000 — 1250 — 2000 — 2000 — 2500

UPD2: The contest is finished. Thanks for joining us! Here's the editorial.

UPD3: Congratulations to the winners!

Div. 2

1: gamegamewillwinioi2021

2: oliviahye

3: cnoi

4: 2om_neek

5: Tropical_maid

Unofficial Div. 1

1: kcm1700

2: uwi

3: square1001

4: kmjp

5: KrK

Full text and comments »

  • Vote: like
  • +583
  • Vote: dislike

By djm03178, 6 years ago, In English

Currently, any successful hack on any problem awards exactly 100 points. For problems A and B this seems quite reasonable, but I find it barely motivating to lock harder problems and try to hack them (unless the pretests are known to be exceptionally weak), since:

  • You risk losing a lot of points on system tests. Even if you realize there are mistakes in your code after locking it, you can't do anything.
  • There are (usually) not many solutions to look into in your room in the first place.
  • Harder problems (usually) require hard techniques and longer implementation, so reading each solution takes a lot of time.
  • Even if you successfully hack, 100 points don't affect your rank too much.

Rewarding more points for hacking solutions to harder problems would make the system more interesting and balanced. My suggestion is to reward 1/10 of the problem's score for a successful hack, i.e., 50 points for hacking a solution to a 500-point problem, and 200 points for hacking a solution to a 2000-point problem.
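
As a minimal Python sketch of the proposed rule (an illustration only, not an existing Codeforces feature):

    def hack_reward(initial_score: int) -> int:
        # Proposed reward: 1/10 of the problem's initial score.
        # (The current system awards a flat 100 points regardless of the problem.)
        return initial_score // 10

    print(hack_reward(500))   # 50
    print(hack_reward(2000))  # 200
    print(hack_reward(3000))  # 300, see the rank example in the reasons below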

Here are some reasons for the suggestion:

  • Hacking solutions to a hard problem requires you to understand the hard problem itself, the high-level logic behind the code, and a complicated implementation.
  • There are usually more pretests for harder problems, so the chance of a wrong solution passing all pretests of a harder problem is already low.
  • Currently, hacks are used mostly on the easier problems. If people were motivated to lock harder problems too, the use of hacks would become more balanced.
  • It may seem that 300 points for hacking one solution is too much. But it implies that the contestant has solved (or at least passed the pretests of) a 3000-point problem. For example, in Codeforces Round #509 (Div. 2), if you solved all problems and had 7700 points, you would be at rank 33. Even after successfully hacking a solution on F, you would only be at rank 25. On the other hand, if you solved only problem A and had 450 points, hacking a solution on A would move your rank from 4410 to 4062.
  • For easier problems, you usually read a lot of solutions written by people new to problem solving. Solutions to harder problems are mostly written by strong coders, and reading them is good practice in itself.
  • It can be seen as an additional reward for passing the pretests of a hard problem.

Of course, this is just my opinion and I know it has a lot of disadvantages as well, but I just wanted to share the idea and imagine how Codeforces contests would look with a different hack system. Any other opinions are appreciated.

Full text and comments »

  • Vote: like
  • +240
  • Vote: dislike