I mean to ask whether there is any DP problem that can be solved with top-down but not with bottom-up, or vice versa. If so, I would like to see some examples.
I don't have the examples right now, but the following points are actually out there on the internet, so you can just search them up.
In theory, the top-down approach is better because it only visits the necessary states. In practice, if you need to visit all of the states anyway, there will be no noticeable difference. But for some concrete problems the top-down approach ends up visiting very few states, and if you can bound the total number of necessary states, top-down should be preferred.
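For example, here is a toy recurrence (not taken from any particular problem, just an illustration of the point above): with f(n) = f(n/2) + f(n/3) and f(0) = 1, filling a bottom-up table over 0..n is hopeless for n around 10^12, while a memoized top-down version only touches the few hundred states actually reachable by repeated division. A minimal C++ sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Toy recurrence: f(n) = f(n/2) + f(n/3), f(0) = 1.
// A bottom-up table would need n+1 entries, but memoized recursion only
// visits the states reachable from n by repeated integer division.
map<long long, long long> memo;
long long visited = 0;

long long f(long long n) {
    if (n == 0) return 1;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;
    ++visited;
    long long res = f(n / 2) + f(n / 3);
    memo[n] = res;
    return res;
}

int main() {
    long long n = 1'000'000'000'000LL;      // 10^12: far too big for a full table
    cout << f(n) << "\n";
    cout << visited << " states visited\n"; // a few hundred, not 10^12
}
```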
On the other hand, bottom-up DP has one optimization, not for time but for space. Consider a problem whose states have the form `dp[i][j]`, where `dp[i][j]` only depends on `dp[i-1][k]` for some `k`. Then, for the current `i`, you don't need to store the values of `dp[x][y]` for any `x < i-1`. That way, the memory is reduced by a factor of `n`, which is actually necessary for a lot of problems. This trick is called rotation. To apply it, you can use two vectors, one for the previous layer and one for the current layer; after finishing the current layer, just swap the two vectors. The other way is to declare an array of size `dp[2][m]` instead of `dp[n][m]` and, for each `i`, overwrite the current dp values onto `dp[i%2][j]`.
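Here is a minimal sketch of the rotation trick on a made-up example (counting monotone lattice paths in an n x m grid, where dp[i][j] = dp[i-1][j] + dp[i][j-1]); the particular recurrence does not matter, only that row i needs nothing older than row i-1:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Rolling-array ("rotation") sketch: keep two rows instead of n rows.
int main() {
    const long long MOD = 1'000'000'007;
    int n = 1000, m = 1000;                  // hypothetical grid size

    vector<long long> prv(m, 1), cur(m, 0);  // row 0: dp[0][j] = 1 for all j
    for (int i = 1; i < n; ++i) {
        cur[0] = prv[0];                     // column 0: can only come from above
        for (int j = 1; j < m; ++j)
            cur[j] = (prv[j] + cur[j - 1]) % MOD;
        swap(prv, cur);                      // "rotate": current row becomes the previous one
    }
    cout << prv[m - 1] << "\n";              // dp[n-1][m-1], computed with O(m) memory

    // Equivalent alternative: declare dp[2][m] and write row i into dp[i % 2]
    // while reading row i - 1 from dp[(i - 1) % 2].
}
```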
Rotation can also be applied to more than two layers.

Actually, if you have to visit nearly all states, top-down is often 4-8x slower.
I guess one question was: you can add or subtract 1, and also divide by 2, 3, or 5 if divisible; find the minimum number of steps to reduce n to 1.
https://atcoder.jp/contests/agc044/tasks/agc044_a
Another one is the CSES problem where, given n, you can subtract any single digit that is present in n, with n <= 10^18.
https://cses.fi/problemset/task/2174
These tasks have sparse states that depend on n; for this type of question only top-down can be applied. Please correct me if I am wrong.
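For the first problem (the unit-cost variant: +1, -1, or divide by 2, 3, or 5, minimizing the steps from n down to 1), a memoized top-down sketch might look like the following. It assumes that before each division by d it is enough to round n to the nearest multiple of d, downward or upward; note that AGC044A itself has weighted operations, so this only shows the flavour of the approach, not a drop-in solution for that task.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Memoized top-down DP for: minimum number of +1 / -1 / divide-by-{2,3,5}
// steps to turn n into 1. Assumption: before dividing by d we only ever
// round n down or up to the nearest multiple of d.
unordered_map<long long, long long> memo;

long long solve(long long n) {
    if (n == 0) return 1;                    // one +1 step to reach 1
    if (n == 1) return 0;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;

    long long best = n - 1;                  // baseline: subtract 1 all the way down
    for (long long d : {2, 3, 5}) {
        long long r = n % d;
        best = min(best, r + 1 + solve(n / d));                   // round down, then divide
        if (r) best = min(best, (d - r) + 1 + solve(n / d + 1));  // round up, then divide
    }
    return memo[n] = best;
}

int main() {
    long long n;
    cin >> n;
    cout << solve(n) << "\n";
    // Even for n near 10^18 only on the order of 10^4 states get memoized --
    // exactly the "sparse states" situation where top-down shines.
}
```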
I am pretty sure that there exist many problems that give TLE with the top-down approach but get AC with the bottom-up approach.
I guess he is concerned with the class of problems that can be solved at all, rather than with fitting in the time limit.
Often it's easier to make forward transitions in DP (what state can I go to from this one) than backward transitions (from which states can I reach this one). Top-down DP is incompatible with forward transitions.
To add another layer of nuance to this discussion: A big majority of DP that arise in practice can be formulated in two ways: One forward and one backward. For example, to count the paths from $$$u \to v$$$ in a DAG, you can consider paths $$$u \to x$$$ for some $$$x$$$ (the forward DP) or you can consider paths $$$x \to v$$$ for some $$$x$$$ (the backward DP). The choice of which to calculate is distinct from (and only sometimes related to) that of whether you compute the DP in a bottom-up or a top-down (lazy) fashion.
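To make that concrete, here is a minimal sketch for the DAG path-counting example, with a tiny hypothetical graph and made-up names. Both formulations are computed eagerly in topological order here; either one could instead be computed lazily with memoized recursion, which is exactly the sense in which the forward/backward choice and the top-down/bottom-up choice are independent.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counting u -> v paths in a DAG, two formulations of the same answer.
// Assumes vertices 0..n-1 are already listed in topological order.
int main() {
    const long long MOD = 1'000'000'007;
    int n = 5, u = 0, v = 4;                               // hypothetical DAG
    vector<vector<int>> adj = {{1, 2}, {3}, {3}, {4}, {}}; // edges x -> y

    // Forward DP: fwd[x] = number of paths u -> x, pushed along edges in topo order.
    vector<long long> fwd(n, 0);
    fwd[u] = 1;
    for (int x = 0; x < n; ++x)
        for (int y : adj[x])
            fwd[y] = (fwd[y] + fwd[x]) % MOD;

    // Backward DP: bwd[x] = number of paths x -> v, filled in reverse topo order.
    vector<long long> bwd(n, 0);
    bwd[v] = 1;
    for (int x = n - 1; x >= 0; --x)
        for (int y : adj[x])
            bwd[x] = (bwd[x] + bwd[y]) % MOD;

    cout << fwd[v] << " " << bwd[u] << "\n";               // both print the path count (2 here)
}
```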
I think the example navneet.h gave of AGC044A suffices to show some of the subtlety here. The most obvious forward DP (computing the minimum cost to go from $$$0$$$ to $$$x$$$ for various $$$x$$$) is easy to lazily compute, but tricky or unreasonable to eagerly compute bottom-up without either wasting a lot of work or explicitly pre-calculating which values $$$x$$$ will be relevant (which isn't so different from lazily computing). The corresponding reverse DP (computing the minimum cost to go from $$$x$$$ to $$$N$$$ for various $$$x$$$) is easy to calculate bottom-up (i.e. from $$$N$$$) for the values of $$$x$$$ that could be relevant, but tricky to calculate top-down (i.e. from $$$0$$$) because it's not obvious how to compute the forward transitions. (That last difficulty is, incidentally, (the reverse of) the same issue that ivan100sic touched on in his comment.)
Tasks with sparse states are better dealt with top-down, and dense tasks are better dealt with bottom-up; if we stretch these to the limits, we get problems that are practically solvable by only one approach.
And with top-down DP in real life, we can also easily add caching strategies such as LFU and LRU to keep the cache size constant, at the cost of recomputing some parts.
Python also supports this out of the box with `@lru_cache(maxsize=10**5)`.