OK! Now you have an approach to solving a problem, but wait! Will it always work? Is there a counterexample that breaks this algorithm? These are the questions I constantly face these days.
Let me elaborate on this with some examples:
Empty Graph: In this problem, the first approach I came up with was to simply assign 10^9 to the k smallest numbers. Of course, this was wrong, but I couldn't find a counterexample myself and had to ask a friend.
Maximize the minimum: One of the approaches that came to my mind was to assign the maximum value of the adjacent elements and repeat that k times. (Though it was a bit easier to find counterexamples for this approach.)
There are two ways (that I know of) to verify a solution:
- Prove it mathematically (using induction, contradiction, etc.).
- Find a counterexample.
I have talked about this with some people (who are doing great on Codeforces), and most of them said they usually go with the second approach. How do they find a counterexample? They say, and I quote, "It usually comes with experience," but I'm here to ask whether there is a systematic way to find counterexamples.
For greedy algorithms, you can use the exchange argument; then you would either get a counterexample or a proof.
Can you please elaborate? Thanks.
I found Errichto's blog on this topic: Exchange arguments. It gives a nice overview.
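Very roughly, and as a sketch on a made-up toy problem rather than one from this thread: to test a greedy ordering rule, take an arbitrary solution, find an adjacent pair that violates your rule, and compute how the cost changes when you swap them. For example, scheduling jobs with times t_i and weights w_i to minimize the total weighted completion time, with the rule "sort by t_i / w_i ascending":

```latex
% Toy setup (an assumption for illustration, not a problem from this thread):
% jobs with processing times t_i and weights w_i; minimize the sum of w_i * C_i,
% where C_i is the completion time; greedy rule: sort by t_i / w_i ascending.
% Exchange step: in any schedule, pick adjacent jobs with j immediately before i
% such that t_j / w_j > t_i / w_i, and let S be the time at which the pair starts.
\begin{align*}
\text{cost before swap} &= w_j (S + t_j) + w_i (S + t_j + t_i), \\
\text{cost after swap}  &= w_i (S + t_i) + w_j (S + t_i + t_j), \\
\text{before} - \text{after} &= w_i t_j - w_j t_i > 0
  \;\Longleftrightarrow\; \frac{t_j}{w_j} > \frac{t_i}{w_i}.
\end{align*}
```

Since every pair that violates the rule can be swapped to strictly decrease the cost, repeated swaps turn any schedule into the sorted one, which is the proof. If you run the same calculation for your own greedy rule and find a swap that decreases the cost of the greedy order, that swap is exactly the counterexample you were looking for.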
Thank you
I think "It usually comes with experience" is really how it is. I got these tips which probably could help. If you have some algorithm and want to find a counterexample:
I am not sure how much these tips will help you. But those are some thought processes I use during contests or after contests when analyzing problems.
This was a real help
I wrote a small paean about the unbearable effectiveness of automated stress-testing: mostly that, even though it means writing up to 3 programs per problem, if the generator and the known-good-but-smol-n solution are short enough, they're probably worth the initial investment because they put an upper bound on how much time you burn on your actual solve (lots of feedback = certainty = maybe speeeeed?).
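For concreteness, here is a minimal single-file sketch of that kind of harness, with the three programs collapsed into one, on a made-up toy problem (maximum subarray sum) where slow() plays the known-good-but-smol-n brute force and fast() is the solution under test; none of the names or numbers come from an actual problem:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Known-good-but-slow reference: O(n^2) maximum subarray sum.
long long slow(const vector<long long>& a) {
    long long best = LLONG_MIN;
    for (size_t i = 0; i < a.size(); i++) {
        long long sum = 0;
        for (size_t j = i; j < a.size(); j++) {
            sum += a[j];
            best = max(best, sum);
        }
    }
    return best;
}

// Solution under test: Kadane's algorithm.
long long fast(const vector<long long>& a) {
    long long best = LLONG_MIN, cur = 0;
    for (long long x : a) {
        cur = max(x, cur + x);
        best = max(best, cur);
    }
    return best;
}

int main() {
    mt19937 rng(12345);
    for (int iter = 0; iter < 100000; iter++) {
        // Generator: tiny n and tiny values so any failing case stays readable.
        int n = uniform_int_distribution<int>(1, 8)(rng);
        vector<long long> a(n);
        for (auto& x : a) x = uniform_int_distribution<int>(-5, 5)(rng);

        if (slow(a) != fast(a)) {
            cout << "counterexample:";
            for (auto x : a) cout << ' ' << x;
            cout << "\nslow = " << slow(a) << ", fast = " << fast(a) << '\n';
            return 0;
        }
    }
    cout << "no mismatch in 100000 random tests\n";
}
```

Keeping n and the values tiny is the important part: when the harness does find a mismatch, the printed input is small enough to trace by hand.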
Then I wrote a less small book on how it can become a dirty crutch.
Buuut if you find you're not developing your intuition as well as you'd like, you might get some mileage out of incorporating quick stress-testing habits: just make sure you are intentional/explicit about what you're testing (OleschY's post: good stuff), and maybe dedicate some extra practice time to coming up with minimal sets/ranges of effective testcases, keeping randomized spam and luck as a (hopefully) last resort, since leaning on them lets your own intuition get rusty.
One downside is that you'll start noticing when setters use the randomized-spam pile as their first approach to tests, which may or may not amplify FST salt (if, for instance, there's a mis-weighted problem after one with weak tests).
Actually, the really big downside is that while it'll catch some misread problem statements, it won't catch the uncanny valley misreads (close enough to pass samples but still wrong), and since the minimum time investment is higher, that can be more painful.
There's probably an analogous conversation to be had about just-add-more-dimensions DP as a first step and then reducing from there (because it's more mechanical and straightforward than reasoning your way through why a greedy approach might fail), but that's probably "above my pay grade".