
predator4hack's blog

By predator4hack, history, 2 years ago, In English

OK! Now you have an approach to solving a problem, but wait! Will it always work? Is there a counterexample that breaks the algorithm? These are questions I face constantly nowadays.

Let me elaborate on this with some examples:

Empty Graph: In this problem, the first approach I came up with was to simply assign 10^9 to the k smallest numbers. Of course, this was wrong, but I couldn't find a counterexample myself and had to ask a friend.

Maximize the minimum: One of the approaches that came to my mind was to assign the maximum value of adjacent elements and do it k times. (Though it was a bit easier to find counterexamples for this approach.)

There are two ways (that I know of) to verify a solution:

  1. Prove it mathematically (using induction, contradiction, etc.).
  2. Find a counterexample.

I have talked about this with some people (who are doing great on Codeforces), and most of them said that they usually go with the second approach. How do they find a counterexample? They say, and I quote, "It usually comes with experience", but I'm here to ask whether there is a systematic way we can follow to find counterexamples.


»
2 years ago, # |

For a greedy algorithm, you can use an exchange argument; then you would either get a counterexample or a proof.
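
To make that concrete, here is a minimal sketch in Python of an exchange argument, using the classic weighted-completion-time scheduling problem as a stand-in example (my own illustration, not something from the post above): compare any schedule against the same schedule with two adjacent jobs swapped; either no swap ever helps (a proof) or some swap does (a counterexample to your ordering).

    # Jobs are (time t, weight w); minimize sum of w_i * C_i, where C_i
    # is the completion time. Claim: sorting by t/w ascending is optimal.
    def swap_delta(a, b):
        # Cost change from running b right before a instead of right
        # after it; everything outside the pair is unaffected.
        ta, wa = a
        tb, wb = b
        cost_ab = wa * ta + wb * (ta + tb)   # order (a, b)
        cost_ba = wb * tb + wa * (ta + tb)   # order (b, a)
        return cost_ba - cost_ab

    a, b = (3, 1), (1, 2)    # t/w is 3.0 for a, 0.5 for b
    print(swap_delta(a, b))  # -5: putting b first is strictly better

If no adjacent swap in your greedy order gives a negative delta, the order is optimal; if some swap does, that pair is your counterexample.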

»
2 years ago, # |
Rev. 2

I think "it usually comes with experience" really is how it is. Still, here are some tips that could help. If you have some algorithm and want to find a counterexample:

  • Imagine this is a game. Your friend has this algorithm for a problem, and your job is to hack it. Try to outwit him and find the hole in his algorithm. This helps you let go of the mindset "this is my algorithm, it is good".
  • If you look for counterexamples, try extreme examples. What if all numbers are equal? What if some numbers are really big? What if some of them are really small? What if we make some sort of alternating pattern? Make the tree have many branches. Make the tree one long line. Make the tree one long line with many short branches. Analyze the task: what is its structure? What is the structure of your algorithm? Try to think of easy but extreme examples (see the sketch after this list).
  • A bit more advanced: if you have an algorithm and cannot prove why it works, you usually try to find a counterexample. What if you can't find one? Then flip the direction: try to prove that there is no counterexample. Working on that proof can expose the hole in your algorithm or lead you straight to a counterexample.
  • Last but not least: work on many problems and do maths. Don't just solve problems; understand their solutions and the counterexamples behind them. Imagine you have an algorithm with several steps and one of them seems strange or unneeded: try to remove that step, then find a counterexample that shows why it is in fact needed. Try to think of extreme cases and search for the step in the algorithm that makes them solvable.
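
Since those extreme shapes come up again and again, it can pay to keep them in a reusable generator. A minimal sketch in Python (the function names and exact shapes are just my own illustration of the "easy but extreme" idea above):

    import random

    # Patterned inputs that tend to break greedy/ad-hoc solutions.
    def extreme_arrays(n, big=10**9):   # assumes even n >= 2
        yield [1] * n                             # all numbers equal
        yield [big] * n                           # all numbers huge
        yield [1, big] * (n // 2)                 # alternating pattern
        yield list(range(1, n + 1))               # sorted ascending
        yield list(range(n, 0, -1))               # sorted descending
        yield [random.choice([1, big]) for _ in range(n)]

    # Edge lists for the tree shapes mentioned above.
    def extreme_trees(n):
        yield [(i, i + 1) for i in range(1, n)]   # one long line (path)
        yield [(1, i) for i in range(2, n + 1)]   # many branches (star)
        half = n // 2                             # long line with
        line = [(i, i + 1) for i in range(1, half)]
        legs = [(i - half, i) for i in range(half + 1, n + 1)]
        yield line + legs                         # ...short branches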

I am not sure how much these tips will help you, but those are some of the thought processes I use during or after contests when analyzing problems.

»
2 years ago, # |

I wrote a small paean about the unbearable effectiveness of automated stress testing: mostly that despite writing up to 3 programs per problem, if the generator and known-good-but-smol-n solution are short enough, they're probably worth the initial investment to put an upper bound on how much time you burn on your actual solve (lots of feedback = certainty = maybe speeeeed?).
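
For reference, the whole loop can be this small. A bare-bones sketch in Python, where gen.py (prints a random small test for a given seed), brute.py (the slow known-good solution), and fast.py (the solution under test) are placeholder names for those 3 programs:

    import subprocess

    # Generator -> both solutions -> compare; stop at first mismatch.
    for it in range(10000):
        seed = str(it)
        test = subprocess.run(["python3", "gen.py", seed],
                              capture_output=True, text=True).stdout
        slow = subprocess.run(["python3", "brute.py"], input=test,
                              capture_output=True, text=True).stdout
        fast = subprocess.run(["python3", "fast.py"], input=test,
                              capture_output=True, text=True).stdout
        if slow != fast:
            print("counterexample (seed", seed + "):")
            print(test)
            print("expected:", slow)
            print("got:", fast)
            break
    else:
        print("no mismatch in 10000 runs")

Passing the iteration number as the seed keeps every failure reproducible.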

Then I wrote a less small book on how it can become a dirty crutch.

Buuut if you find you're not developing your intuition as well as you'd like, you might get some mileage out of incorporating quick stress-testing habits: just make sure you are intentional/explicit about what you're testing (OleschY's post: good stuff), and maybe dedicate some extra practice time to coming up with minimal sets/ranges of effective testcases instead of relying on randomized spam and luck as a (hopefully) last resort, since leaning on randomness lets your own intuition get rusty.

The ultimate downside is that you'll notice when setters do the randomized spam pile as their first approach to tests, which may or may not amplify FST salt (if, for instance, there's a mis-weighted problem after one with weak tests).

Actually, the really big downside is that while it'll catch some misread problem statements, it won't catch the uncanny valley misreads (close enough to pass samples but still wrong), and since the minimum time investment is higher, that can be more painful.

There's probably an analogous conversation to be had for just-add-more-dimensions DP as a first step and then reducing from there (because it's more mechanical and straightforward than reasoning your way through why a greedy approach might fail), but that's probably "above my paygrade".
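
(A toy illustration of that add-a-dimension-then-reduce idea, with 0/1 knapsack as a stand-in example of my own: write the mechanical dp[i][w] first, then notice row i only reads row i-1 and drop the dimension.)

    # 2D first: dp[i][w] = best value using the first i items with
    # total weight at most w.
    def knapsack_2d(items, cap):
        n = len(items)
        dp = [[0] * (cap + 1) for _ in range(n + 1)]
        for i, (wt, val) in enumerate(items, start=1):
            for w in range(cap + 1):
                dp[i][w] = dp[i - 1][w]
                if w >= wt:
                    dp[i][w] = max(dp[i][w], dp[i - 1][w - wt] + val)
        return dp[n][cap]

    # Reduced: iterate w downward so dp[w - wt] still holds the previous
    # row's value; the i dimension disappears.
    def knapsack_1d(items, cap):
        dp = [0] * (cap + 1)
        for wt, val in items:
            for w in range(cap, wt - 1, -1):
                dp[w] = max(dp[w], dp[w - wt] + val)
        return dp[cap]

    items = [(3, 4), (4, 5), (2, 3)]  # (weight, value) pairs
    assert knapsack_2d(items, 6) == knapsack_1d(items, 6) == 8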