robinyqc's blog

By robinyqc, history, 6 hours ago, In English

GPT-o3 recently achieved a 2700 rating in OpenAI's Codeforces test, which means fewer than 200 active users worldwide have a higher rating (less than a single page on the CF leaderboard).

I used to feel unbothered by this because there were many people with ratings higher than GPT. Back then, my rating was also higher. I knew it would surpass me eventually, but I didn't expect it to happen so quickly—just about a year. Now, I feel slightly panicked: the only advantage I have over GPT is that I'm cheaper (I heard o3 costs a lot to solve a problem, while I can solve them for free, lol). I genuinely enjoy solving problems, but that doesn't stop me from feeling anxious. While I know my passion will keep me engaged in competitive programming, I still want to seek support.

I recalled that Um_nik once wrote a blog titled On AI ruining "solving math problems with computer", which expressed his views on AI. I wonder if he anticipated AI advancing this quickly. Back then, I thought the article made sense, but now, replacing the "1600" in it with "2700" and "Div2A/B" with "Div1C/D" feels really strange and unsettling.

As described in that article, the CF community once again has people "running around yelling 'We will all die, somebody do something about AI.'" I used to think this was just unnecessary panic, but now I feel it's somewhat justified. I might soon become one of them!

These are my current thoughts: mild panic and anxiety. Many users with similar ratings to mine have expressed their own feelings. Kozliklekarsky's comment perfectly captures my sentiments: Is this the real life? Is this just fantasy?

I also have some thoughts about the future. Currently, o3 is prohibitively expensive. If its price drops in the future, people will undoubtedly start using it for competitive programming. (Don't tell me “o3's price will never drop”—it already achieved a 2700 rating, so I can't bring myself to believe it's impossible.)

This has both good and bad implications. On the bright side, GPT's level surpasses mine, which means I can use it as a resource to learn and improve. On the downside, it also means I could use GPT to cheat on CF and achieve a rating higher than my actual skill level. Sure, such a rating would be fake, but we're talking about a top 200 global ranking! Ordinary people have vanity. If I could achieve this ranking, I'd be laughing in my dreams. For an 1800 rating, I believe 85% of people could resist the temptation to cheat. But for a 2700 rating—my dream rating—how many could truly resist?

The influx of high-level accounts would also greatly impact the mindset of top competitors. Moreover, it feels like there's no way to prevent this from happening. For instance, if I asked GPT, “How do I solve this problem?” and then learned its approach and wrote the code myself, how could anyone prove it was AI-assisted? I admit this leans towards conspiracy thinking, but I can't help pondering it.

What are your thoughts on AI's current rating in competitive programming?

+13

»
5 hours ago | +12

I never enjoyed contests nearly as much as I enjoy solving hard problems offline at my own pace, so I wouldn’t really care if contests were to end entirely.

Now, I feel slightly panicked: the only advantage I have over GPT is that I'm cheaper (I heard o3 costs a lot to solve a problem, while I can solve them for free, lol).

Soon, we are going to lose this advantage too, and not just in sport programming but all areas of life.

»
5 hours ago, Rev. 2 | 0

I really wished for the 'AI winter' to come back when o1 first came out, but it's over now. No one can stop this anymore. An LGM-equivalent ChatGPT model will arrive within 2 years, though it'll take some time for humanity to fully appreciate ChatGPT's power. After that, within the next 10 years, everyone will lose their jobs, except for those in manual labor.

»
5 hours ago | 0

How can you say you "recall" that Um_nik once wrote a blog when it was just three months ago? I recall it like I recall yesterday.

»
5 hours ago | 0

I think tourist was the one solving on this AI account.

https://x.com/que_tourist/status/1866705352710033467

»
5 hours ago, Rev. 3 | +13

If anyone can cheat, the incentive for cheating is greatly reduced. A clear example is chess, where top engines are publicly available and far stronger than any human.

A key difference, however, is that chess has frequent in-person contests that ground ratings. For competitive programming to have a future, in-person contests would need to become more frequent. One upside of intelligent AI is that it could make writing problems for in-person contests much easier.

I think another reason many people feel sad about this news is tied to the value they get from being able to solve difficult problems. Right now, if you have an algorithm problem of moderate difficulty, say, under 2000, the best course of action is often to ask someone skilled in competitive programming. But if GPT becomes equivalent to a 2700-rated competitor, that role essentially disappears. The "problem-solving skills" you worked hard to develop become little more than trivia, as anyone could achieve similar results by prompting an AI model.

»
5 hours ago, Rev. 4 | +8

For an 1800 rating, I believe 85% of people could resist the temptation to cheat. But for a 2700 rating—my dream rating—how many could truly resist?

Here's exactly what we should do: create a general perception that using AI in contests is absolutely bad, and condemn those who use it in contests. In everyday life, we prevent ourselves from doing bad things because we know we shouldn't do bad things, not because it's hard to do bad things.

Most of the temptation to use AI comes from the lack of this perception: even though using AI is prohibited now, many people still think "but... it's not really that bad, is it?" This is also the case for alts: we lack the general perception that alts are absolutely bad and should never be made. Almost every day I see people advertising their alts in public channels without feeling guilty at all. It would be crazy to see similar situations with AI.

I don't like all those "It's over" comments, because that's exactly what creates the opposite perception. They make us feel that there's no point in having rules in CP anymore, and thus they increase the temptation to use AI. What we should do is stop ruining the atmosphere ourselves and focus on keeping the community going as people who love CP. Don't give credit to those who break the rules.

As the broken windows theory suggests, we can only hope that serious effort is put into catching AI users, so that this temptation does not spread to more and more people.

  • »
    »
    3 hours ago | 0

    In everyday life, we prevent ourselves from doing bad things because we know we shouldn't do bad things, not because it's hard to do bad things.

    Probably that's actually because doing bad things activates some chain reaction that brings unexpected consequences.