just feel that i'm so stupid that i cannot even win chatGPT :(
divided by rating, united by thoughts
Hacked :)
lol chatgpt is trash
she is much smarter than you already
what's her @
chatgpt.com
Doing God's work 🙏
That is savage
cold af
Great job, but their ratings will still grow even though the submissions were hacked.
They got hacked during the hacking phase of the EDU, which essentially means they FSTed. So no, they did not gain any rating from these submissions.
I mean that they might still get a positive delta.
Coldest Sigma (Fire emoji)
what do u mean
CP is Chess now.
CP not C hea P tho
I am impressed that people even attempted to use ChatGPT on problem F, something that is rated 1400 higher than ChatGPT. ChatGPT would feel flattered, if it could feel.
is this the end?
Meanwhile, when ChatGPT reaches that level, will CP become a dumb thing?? NO, I don't want to hear this... and all three of the above are Indian guys -_-
that's some next level prompting skills for sure
Feel like this is more of an L on the authors' part to not make strong enough tests, where even brute force works.
If you think about it, this might actually be the way to go, since offering full feedback makes it much easier for people who are unskilled at cp to gpt their way through problems.
Does it worsen the experience for other people? Yes, but I’d prefer to have weaker pretests as compared to hundreds of gpt greys above me in the final ranklist.
This actually is a great idea: the authors can feed their problem into GPT and design the test cases in such a manner that this solution passes the pretests. But another problem could arise: people could easily hack these solutions and get points.
Maybe Hacker Cup was right all along...
So, is it acceptable to describe the grey and green as borderline retarded? This is clearly rude, insulting and retarded at the same time.
I don't know much about Codeforces Code of Conduct but it can't be that one can just insult others like that.
Whether the blog author meant to refer to the particular grey and green participants mentioned in the blog or to all people with these ranks, I think it's equally unacceptable and this blog should be edited or deleted.
No it's not. Cheaters are borderline retarded. Newbies and pupils are not.
I disagree. Cheaters are just cheaters. If you prove someone cheats, then you apply whatever rules you have for that.
Anyway, the author is clearly describing all the grey and all the green, not the two cheaters in the post.
womp womp
I'm more sad about the fact that it took me 50 minutes to carefully implement D, although I've got the idea instantly. And o1-preview solves it in less than a minute. Guess the "borderline retarded" goes all the way up to the 1836 rating at least.
The glimmer of hope here is that there are people way above 1804 rating. Meaning AI still can't beat all of us and we have the potential to be better than o1.
Side note I think Educational Rounds should be unrated.
Take it as an unrated contest if you want; Codeforces allows you to do that. Why cry?
Because many people who do educational rounds to get $$$X$$$ rating do not have the capability to get $$$X$$$ rating in regular rounds, including me in the past (namely $$$X=1900$$$). That defeats the point of rating. Knowing some classical tricks does not mean you can solve real problems of the same difficulty.
omg, do u mean that edu rounds are easier?
Their problems are more classical, which means you can usually find similar techniques in other problems or even in books and lectures. To be good at them you need to learn more classical techniques, like binary search, instead of improving your problem-solving mindset.
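As an illustration of the kind of classical technique meant here, a generic binary-search-on-the-answer sketch (the function name and example predicate are made up for illustration):

```python
def first_true(lo, hi, pred):
    """Smallest x in [lo, hi] with pred(x) True; assumes pred is monotone
    (False, ..., False, True, ..., True). Returns hi + 1 if pred is
    False on the whole range."""
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid      # mid works, so the answer is <= mid
        else:
            lo = mid + 1  # mid fails, so the answer is > mid
    return lo if pred(lo) else hi + 1

# Example: the first x with x*x >= 50 is 8.
print(first_true(1, 100, lambda x: x * x >= 50))  # 8
```

The point of the comment stands: once you have seen this template a few times, many educational-round subproblems reduce to writing the right `pred`.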
Again, how does it affect you? Your rating only depends on the contests YOU choose to take rated. Don't do edu rounds and skip the inflation in YOUR rating. Simple fix.
So what's wrong with giving suggestions? Besides, it doesn't affect me either way, because I'm Div. 1 and forcibly unrated in those contests. And it's not about inflation: these educational rounds serve more of an educational purpose than actual competition, so we might want to exclude them from regular ratings.
I guess that's true; my average rating change in the last 4 educational rounds is +81, but I usually lose rating in regular rounds.
Plz don't call me retarded I am trying to improve :(
Don't think so. The author used that word just to further satirize those cheaters. You're not "the grey". You are a grey Newbie who intends to improve on their own.
He meant the "grey" and "green" in the mentioned submissions, not the grey and green as a whole!
No, he meant the grey and the green as a whole. And he made what he meant clear by not including the third cheater (the blue one).
Ah, don't forget A as well. I hacked 7 submissions that run in $$$O(XY)$$$, $$$O(\min(X,Y)^2)$$$, $$$O(X^2+Y^2)$$$, which are obviously trash under the current constraints.
Hacks:
288555420 288553320 288538902 288557947 288607545 288529601 288572580
I don't think these people cheated though... I think they just couldn't think of a better construction...
However, there was also 288542915... this guy fully KNEW the construction, but decided to run some random nonsense loops before outputting it... and the saddest thing is, I couldn't even hack him with the worst case (when the nested loops run to 999 and 1000 respectively, and there are 5000 test cases)...
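For a sense of scale, a back-of-the-envelope count of the worst case described above (nested loops up to 999 and 1000, repeated over 5000 test cases):

```python
# Total inner-loop iterations for the described worst case:
# nested loops of 999 and 1000 iterations, across 5000 test cases.
ops = 999 * 1000 * 5000
print(ops)  # 4995000000, i.e. roughly 5 * 10^9 iterations
```

At a typical $$$10^8$$$ to $$$10^9$$$ simple operations per second, that is seconds of work, which is why it is so surprising that the submission survived the hack; presumably the loop bodies were trivial enough to be optimized away.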
how can i be good at math like u
According to the CF plag system, I'm not good, I'm a cheater
chicken butt.
blue again omg lol
bruh
This post actually inspires a great way to combat cheating using GPT: simply make the pretests weaker so that those brute-force solutions by GPT are allowed to pass pretests. As shown in the previous OpenAI blog on CP, the performance of the model increases quite significantly when the number of allowed submissions increases; in addition, it is known that AI performs worse when the feedback it receives is not 100% accurate (i.e. pretests passed but FST). This really seems like a plausible way to reduce AI's effectiveness while affecting a genuine human solver much less (any competent contestant submitting an $$$O(n^2)$$$ brute force to an $$$n = 10^5$$$ problem should know they'll FST anyway).
The above can be done in multiple ways, e.g. not including a max test in the pretests, which also helps to reduce the pretest judging time. A downside to this is that people can now hack all of these brute force solutions to get a lot of points -- maybe we can redesign the hacking system in some way. I'm sure there are other better methods than this, but this is just a suggestion for a starting ground.
That being said, if CF does want to take this path, it might be beneficial to make an announcement about this, mainly to protect the newer contestants at CF who have been familiar with the strong pretests these days so that they do not get frustrated unexpectedly.
There's another downside to the above example method: people can now submit a brute force to a difficult problem on an alt, lock it, and copy a legitimate solution from the room on their main.
Maybe it's time to reconsider in-contest hacking in the GPT era... But maybe someone can come up with a clever method that preserves the hacking system while still making the above cheat-combating method work.
I don't think very many people want to preserve in-contest hacking.
How about just making pretests weak and not allowing in-contest hacking?
is this water on her nose or something else ?
Yes, maybe it's even possible to create a separate short phase after the coding where people can challenge others' solutions and get points for that. Like, imagine being the top coder in your room just based on hacks. But I don't remember even a single Codeforces round with a format like that.
If there is a concern that participants might hack GPT brute-force solutions, it could be possible to run system tests immediately after the contest and only open up hacking afterward.
MrDindows will strongly disagree with you
I like the idea in principle, but there have been several problems where constant factor is a real issue and having max tests in pretests is our main line of defense to measure those things (I don't want to have to make random max tests and test those in custom invocation for every problem).
For a very recent example, many solutions to 2035F - Tree Operations with the right complexity got TLE in pretests, as that problem requires a low constant implementation.
I don't really think so. You can counter that with local testing, i.e. writing brute-force code to check the correctness of the produced program, and benchmarking to see whether it can run within the time limit. I'm sure a green or a cyan is more than competent enough to do those things. The only exceptions where I think this strategy would fail are problems where generating strong tests is extremely difficult, such as graph problems, but they don't appear often enough to prevent cheaters from still having high performance. Codeforces' best bet is probably to just let them do what they want, because they are going to leave the platform after like 5 contests to get that juicy interview anyway.
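The local-testing workflow described above is standard stress testing. A minimal sketch, using maximum subarray sum as a stand-in problem (the candidate is Kadane's algorithm, checked against a slow reference on random inputs):

```python
import random

def brute(xs):
    # Reference solution: trivially correct but slow, O(n^2).
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n) for j in range(i + 1, n + 1))

def fast(xs):
    # Candidate solution to validate: Kadane's algorithm, O(n).
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

random.seed(0)
for _ in range(1000):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
    assert brute(xs) == fast(xs), xs  # print the failing input on mismatch
print("all tests passed")
```

A disagreement immediately gives a small counterexample to debug; timing the candidate on a max-size random input covers the benchmarking half of the argument.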
GPT is such a noob that it doesn't even know how to calculate complexity