This is a question about psychology, not about the best way to program something.
I have often felt that the top contestants seem to prefer writing DP solutions using for-loops and arrays, instead of memoized recursive functions.
Certainly there are times when one or the other method is "right" based on the density of the search space, the depth of recursion, etc. But often it doesn't matter, and in those cases the top programmers overwhelmingly use loop+array. This seems counterintuitive to me, because the loop+array approach often has corner cases to think about regarding array initialization and loop boundaries. Programming with a recursive function, on the other hand, seems to correspond more "naturally" to a mathematical expression of a recursive formula. The base cases are much simpler and correspond more directly to the math of the problem.
For a silly example, take f(n) = { if (n == 0) return 1; else return n*f(n-1); }. This is almost the mathematical way of writing it (like the notation from Wikipedia).
But in no math book would you see f[0] = 1; for (int i = 1; i <= n; i++) { f[i] = i*f[i-1]; }. That loop is not "the math way". Yet the top programmers seem to favor loops.
To test this theory I performed a rigorous scientific experiment. Here is the problem for topcoder TCHS 44 medium: https://community.topcoder.com/stat?c=problem_statement&pm=8250&rd=10795
It has a simple recursive solution. I then looked at ALL the red user submissions in the practice room, 32 total. This screenshot shows that the top 12 all used array+loop. The 13th used memoization (and his solution was faster; I think because even for those guys, saving a few seconds of thought on boundary cases makes a difference).
Of the 32 "red" solutions, only 6 used recursive functions.
We know the top guys are highly mathematically minded, so why don't they favor that style in their programming? Do they start out that way and then turn to array/loops after practicing lots of problems? Or do they write that way from the moment they first encounter recursive solutions, because that is the way their minds work (bottom-up instead of top-down)? Do they hate writing a separate function when you can do everything in main()?
I'd be curious to get anybody's thoughts: if you more naturally write loop+array, why do you do so? And I don't know if it's just "top" programmers who do this or some set of people across all ranks. I mainly study top programmers' solutions to learn, so my sample is obviously biased.
With bottom-up DP, one can easily optimize memory usage for some types of problems. CMIIW.
Yes, with loops you know exactly how much memory is being used; with recursion you have the stack as well as the memo. But still, many times it doesn't matter.
Yes. Sometimes, for example, a two-dimensional dp[i][j] depends only on the previous values dp[i-1][k]. The bottom-up approach will then require only a linear amount of memory. And even if you didn't notice this at first and wrote an O(n^2)-memory algorithm, it can be quickly fixed by replacing
dp[i][j] = f(dp[i-1][k])
with
dp[i%2][j] = f(dp[(i-1)%2][k])
or something like that.
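For concreteness, here is a minimal sketch of that rolling-array trick, using a toy example of my own (counting lattice paths in a grid), not taken from any particular problem:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count lattice paths from (0,0) to (n-1,m-1), moving only right or down.
// Row i reads only row i-1 (plus earlier cells of row i itself), so two
// rows of memory suffice instead of n.
int main() {
    const long long MOD = 1000000007;
    int n = 1000, m = 1000;                     // arbitrary sizes for the demo
    vector<vector<long long>> dp(2, vector<long long>(m, 0));
    dp[0][0] = 1;
    for (int j = 1; j < m; j++) dp[0][j] = dp[0][j - 1];
    for (int i = 1; i < n; i++) {
        dp[i % 2][0] = dp[(i - 1) % 2][0];
        for (int j = 1; j < m; j++)
            dp[i % 2][j] = (dp[(i - 1) % 2][j] + dp[i % 2][j - 1]) % MOD;
    }
    cout << dp[(n - 1) % 2][m - 1] << '\n';     // answer using O(m) memory
}
```

The two rows alternate via i % 2, exactly as in the replacement above.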
Besides uptimist's comment, I think that bottom-up DP is easier to debug.
OK, I will think about that. It's true that I find debugging difficult with top-down recursion. On the other hand, with more boundary conditions in bottom-up, there can be more to debug.
But certainly, looking at array values is easier to debug than recursive nesting. I like this answer!
It's not always true that recursive is more "mathematical". I think using for loops makes it easier to realize the invariant and catch errors, plus you don't have to think about the chance of recursion stack overflow.
Can you maybe explain "realize the invariant" with an example?
Using iterative DP allows you to optimize the running time; it makes it easier to see how the values are computed, and that's the trick to applying DP optimizations (in memory and time). It is very common to see problems where using prefix sums can reduce the running time by a factor of N (to mention an example). And usually the code is shorter.
Edit: Here you have a sample problem 611D - New Year and Ancient Prophecy
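As a toy illustration of the prefix-sum speed-up (my own example, not a solution to the linked problem): counting the ways to climb n stairs taking between 1 and k steps at a time.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naively, dp[i] = dp[i-1] + ... + dp[i-k] costs O(n*k).  Maintaining a
// running prefix sum pre[i] = dp[0] + ... + dp[i] brings it down to O(n).
int main() {
    const long long MOD = 1000000007;
    int n = 1000000, k = 100;                     // arbitrary demo sizes
    vector<long long> dp(n + 1), pre(n + 1);
    dp[0] = 1; pre[0] = 1;
    for (int i = 1; i <= n; i++) {
        long long below = (i - k - 1 >= 0) ? pre[i - k - 1] : 0;
        dp[i] = (pre[i - 1] - below + MOD) % MOD; // sum of dp[i-k..i-1]
        pre[i] = (pre[i - 1] + dp[i]) % MOD;
    }
    cout << dp[n] << '\n';
}
```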
OK but there are enough cases where recursion performs better because it visits fewer states, while arrays will always visit EVERY state.
But efficiency was not my point. My point was about cases where there's no difference in getting the solution accepted; these guys will easily know which method is more efficient, and they will use the better one IF it makes a critical difference.
And the code is not "usually" shorter; I would argue recursion is shorter just as often (and recursion is usually just "boilerplate" code which is the same every time: function header, initialize the memo with memset, check the memo).
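To show what I mean by boilerplate, here is a minimal sketch (the recurrence itself is made up, just to have something to memoize):

```cpp
#include <bits/stdc++.h>
using namespace std;

int memo[5005];

int f(int n) {
    if (n <= 1) return 1;                         // base case, straight from the math
    int &res = memo[n];
    if (res != -1) return res;                    // check the memo
    return res = (f(n - 1) + f(n - 2)) % 1000000007;  // the actual recurrence
}

int main() {
    memset(memo, -1, sizeof memo);                // initialize the memo with memset
    cout << f(5000) << '\n';
}
```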
So far, "easier to debug" makes sense because all the data is captured in the arrays. And there is a comment about "invariants" which I don't understand yet.
Thanks!
Suppose, in the previously mentioned problem, you coded a recursive solution just to see what is happening in the output (which can be useful sometimes), and then, while or after coding it, you notice that prefix sums are useful. You will then need to either code the solution again or copy some of it and fix it. Coding it again is not optimal in competitions with a limited amount of time, and copying it could introduce unexpected bugs. So why not just code the iterative version from the beginning?
And, you can also use the iterative solution to decrease the amount of memory you use.
You don't need to code it again; just make a new prefixSum(n) function and define it as prefixSum(n) = dp(n) + prefixSum(n-1).
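A minimal sketch of that idea, with a made-up dp recurrence just to show the shape:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007;
long long dpMemo[100005], psMemo[100005];
bool dpDone[100005], psDone[100005];

long long prefixSum(int n);                      // forward declaration

long long dp(int n) {
    if (n == 0) return 1;                        // base case
    if (dpDone[n]) return dpMemo[n];
    long long res = prefixSum(n - 1);            // made up: dp(n) = dp(0) + ... + dp(n-1)
    dpDone[n] = true;
    return dpMemo[n] = res;
}

long long prefixSum(int n) {                     // prefixSum(n) = dp(n) + prefixSum(n-1)
    if (n < 0) return 0;
    if (psDone[n]) return psMemo[n];
    long long res = (dp(n) + prefixSum(n - 1)) % MOD;
    psDone[n] = true;
    return psMemo[n] = res;
}

// Note: the recursion depth is O(n), so the stack-size concern raised
// elsewhere in this thread still applies for large n.
int main() { cout << dp(30) << '\n'; }
```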
In almost every case an iterative dynamic programming solution is much faster than recursion, so it's just a habit to write the optimal solution even when a suboptimal one would also work.
It's like when you need a function to check whether a number is prime. I'm sure 99% of people will write a function which checks divisors up to sqrt(N), even if a solution checking up to N would also pass.
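For concreteness, the habitual version looks something like this:

```cpp
// Trial division up to sqrt(n): the "habitual" optimal version,
// even when a plain loop up to n would also pass.
bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}
```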
Talking about your statement that "there are enough cases where recursion performs better because it visits fewer states, while arrays will always visit EVERY state": I don't agree that there are enough such cases. In most problems, on the maximum test, recursion will visit every state, and if that's not the case (which is pretty rare), then be sure that top programmers will write the solution with recursion.
"it's just a habit to write optimal solutions even if suboptimal will also work."
Finally, a good psychological answer. That makes sense. Of course I always check divisors up to sqrt(N); it's no real extra effort. Maybe for some people the bottom-up method is no real extra effort either. (For me, bottom-up is a big extra mental effort to code.)
You are right regarding "fewer states". I cannot think of good examples.
Though I am not top rated.
But I do not know how we can apply data-structure optimizations to recursive DP. I have never faced a TLE with non-recursive DP, while I have when I wrote top-down, so why would I take any chances? The only time I will probably write a recursive implementation is when I cannot understand the topological ordering of the states!
Forget TLE and your ranking for a minute; I'm interested in anybody's answer.
Does it take you the same mental effort to code bottom-up nonrecursive as top-down recursive?
Even though in some cases bottom-up DP requires some extra mental effort, it's better to get used to writing nonrecursive solutions, so that we won't face implementation difficulties when doing so becomes really necessary.
I'd say there are several reasons.
Tradition
People are taught to implement dynamic programming using loops. Then they teach the next generation to implement dynamic programming using loops. Hardly surprising, regardless of whether they are right or not.
Convenience
To write loops instead of memoization, the required additional effort is to topologically sort the states in a loop-friendly way. This effort often pays off.
To begin with, the asymptotic complexity of a bunch of for loops is usually much more obvious than the complexity of a recursive solution. Furthermore, many advanced dynamic programming problems will require additional insights after figuring out the most obvious dynamic programming solution. There are often multiple ways to topologically sort the states, and some of them make optimizations possible — and visible. Some examples are computing prefix sums to speed things up (already mentioned a few times), or even getting rid of one of the parameters.
Forward and Backward Dynamic Programming
Often, we can express the objective function f(s) on state s as some combination of f(t1), f(t2), ..., f(tk), much like a mathematical formula. This can be expressed as backward dynamic programming. At a certain point in time, we want to calculate f(s). Alright, take f(t1), f(t2), ..., f(tk) — either they are already calculated with a loop-based solution, or we use recursion with memoization to calculate them as needed — and combine them in the required way (calculate their sum, maximum, or whatever). Here, we move from state s back to states ti as needed.
However, some solutions are better expressed as forward dynamic programming. At a certain point in time, consider state s. At this point, f(s) is already calculated. We know that some states r1, r2, ..., rp depend on state s. So, right now, we want to update the intermediate calculations in these states using f(s). When we later arrive at, say, r1, it happens only when we already processed all states it depends on, and thus f(r1) is already calculated. One example is Dijkstra's algorithm: whenever we add a vertex s to the tree of shortest paths, we update the minimum distance to each of the vertices r1, r2, ..., rp directly reachable from s.
With this approach, we simply cannot write the mathematical formula like f(s) = G(f(t1), ...) because the dependencies go the other way! Technically, it is still always possible to transform forward DP into backward DP (the entire graph between the states of dynamic programming can be stored, topologically sorted, and visited in the resulting order), but for some problems, forward DP can be more natural and more suitable for optimizations.
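To make the forward direction concrete, here is a minimal sketch (a toy example of my own, not tied to any particular problem): counting the ways to reach a total of n using steps of size 1, 3, or 5. Instead of asking which states dp[i] depends on, we stand at state i, whose value is already final, and push it into every state reachable from it.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long MOD = 1000000007;
    int n = 50;
    vector<int> steps = {1, 3, 5};
    vector<long long> dp(n + 1, 0);
    dp[0] = 1;                                   // one way to be at 0
    for (int i = 0; i <= n; i++) {               // dp[i] is final here...
        if (dp[i] == 0) continue;
        for (int s : steps)                      // ...so push it forward
            if (i + s <= n)
                dp[i + s] = (dp[i + s] + dp[i]) % MOD;
    }
    cout << dp[n] << '\n';
}
```

Each dp[i + s] accumulates its contributions before the outer loop ever reaches index i + s, so by the time we stand on a state, its value is final.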
On a side note, I really like to see such things being discussed. Looking forward to more insightful answers!
I always use forward DP if it's possible. It's similar to BFS and therefore is more natural. It should be easier to think like "OK, I'm standing at dp[i][j], what can I do now?" I think learning DP must start with this approach.
Sometimes forward DP is less convenient, e.g. if DP is combined with prefix sums or two pointers or some other things, but when you face such kind of DP, you usually have enough skill to understand all three techniques (forward, backward, recursion) clearly.
Neat — I very much feel comfortable with BFS and the ever-expanding outward frontier of states. Never thought of applying the same thought process to forward DP. Great way to think of it.
And, the dress is yellow.
I cannot agree that recursion is a more natural and mathematical way to write a solution. Loop-based code looks a lot like an inductive proof: the initialization is the base case, and each loop iteration is the inductive step, using already-established values to establish the next one.
On the contrary, recursion looks like a mathematical formula only until you add memoization.
Now that you explain it — yes, it looks just like an inductive proof. This is amazing.
1) Easy to debug.
2) Loops can be optimized by the compiler, while recursion is the thing compilers are afraid of. Also, recursion needs stack memory, which usually isn't very big (1 MB in C++, as far as I know, and much less in Python/PHP/etc.).
3) Because of the previous fact, loops are faster than recursion in non-functional programming languages (non-functional languages usually don't optimize tail recursion, while functional ones must, because they have no loops in the "classical" form).
4) Memoized recursion needs global variables or work with pointers, and loops do not.
I favor recursive solutions, and this is why:
[1] Of course, because I learned DP the recursive way :) and I think it's really great for beginners, since this way is easy to learn and it teaches more things (backtracking, recursion, memoization, graph-theoretical thinking, etc.).
[2] Finding the direction of the loops is not always easy. For example, try to solve this problem in linear time with loops, and compare your code with the recursive solutions.
Sometimes you need to optimize complicated backtracking code with memoization. It's also hard to find the direction of the loops in that case.
[3] You can always change your recursive code to iterative code if you need to, but some problems can't simply be solved by iteration. So it's clear why we should learn DP the recursive way :)
[4] In my opinion, recursive DP is strong when you are dealing with messy DPs. I think this is true for two reasons: you don't need to care about the direction of the loops, and it's easier to check base cases. Less WA + less time -> more points :)
(I have no idea how CF's auto numbering system behaves..)
Loop directions have always been tough for me. Sometimes I see a nested-loop DP where one loop is going up and another is going down.
And regarding [3], my forward-DP skills are so weak that in some cases, even though I KNOW an array implementation is needed (e.g. the recursion depth is clearly too much), I will STILL write a first solution recursively as a trial run and debug it. Once it works on small test cases, I know the recursive formula is correct and try to convert it to loops. So this is good advice.
Thanks!
I've wondered about this as well. What I've noticed is that some countries like Egypt only use recursive DP (across all the different universities/cities). I also find recursive solutions more "natural", but seeing that all the reds use the iterative style, I'm now more inclined to use it as well. My guess is that when you're dealing with harder problems (maybe involving graphs/complex data structures), an "in-place" DP table solution is faster and more intuitive than passing whatever you're dealing with to a recursive function.
I usually code DPs with recursion. I always find it hard to handle the base cases with the array method. However, it's a LOT easier to optimize the array-based DP. With recursion, you can't use DP tricks like the Knuth-Yao optimization, the convex hull trick, range queries on previous states, memory optimizations (like the one used in the knapsack problem), etc. (as far as I know).
Both methods have their pros and cons for different people. In my opinion, DPs with bitmask states are easier to code as recursion with memoization.
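To sketch what I mean (a toy example of my own, the classic visit-all-cities minimum-cost DP): the state (mask, last) maps directly onto the parameters of a recursive function, which is why this style often feels natural for bitmask DPs.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Cheapest route that starts at city 0 and visits all n cities.
// mask = set of visited cities, last = current city.
const int MAXN = 16;
int n = 4;
long long dist[MAXN][MAXN];
long long memo[1 << MAXN][MAXN];

long long best(int mask, int last) {
    if (mask == (1 << n) - 1) return 0;          // all cities visited
    long long &res = memo[mask][last];
    if (res != -1) return res;                   // check the memo
    res = LLONG_MAX;
    for (int nxt = 0; nxt < n; nxt++)
        if (!(mask >> nxt & 1))                  // nxt not visited yet
            res = min(res, dist[last][nxt] + best(mask | (1 << nxt), nxt));
    return res;
}

int main() {
    long long d[4][4] = {{0,2,9,10},{2,0,6,4},{9,6,0,8},{10,4,8,0}};  // demo distances
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) dist[i][j] = d[i][j];
    memset(memo, -1, sizeof memo);
    cout << best(1, 0) << '\n';                  // start at city 0
}
```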
Haha, the writers of round 349 div. 2 must have read my question, because problem C forced me to use array-based DP instead of recursion due to the constraints. Thanks, writers! :-)
Not really. You can solve it top-down. Here: http://codeforces.me/contest/667/submission/17587445