I recently came across the problem Icy Roads of Nomel from Petrozavodsk Summer 2013, Gennady Korotkevich Contest 1. The problem statement is simple:
You are given an $$$(n+1)\times (m+1)$$$ grid with row costs $$$a_1,a_2,\dots,a_{n+1}$$$ and column costs $$$b_1,b_2,\dots,b_{m+1}$$$. You are at $$$(1,1)$$$ and want to go to $$$(n+1,m+1)$$$ by repeatedly performing one of the two following operations when you are at $$$(x,y)$$$:
- Move to $$$(x,y+1)$$$ with a cost of $$$a_x$$$;
- Move to $$$(x+1,y)$$$ with a cost of $$$b_y$$$.
You want to know the minimum total cost you can achieve. The constraints are $$$1\leq n,m\leq 5\times 10^5$$$, and the costs are up to $$$10^9$$$ in magnitude.
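Just to fix notation: the straightforward $$$O(nm)$$$ dynamic programming (far too slow for these constraints, but handy for checking any candidate solution on small cases) would look something like the following sketch; `min_cost` is just a name I picked, and indices are 0-based:

```python
def min_cost(a, b):
    # a: the n+1 row costs, b: the m+1 column costs (0-based here).
    # dp[x][y] = minimum cost to reach cell (x, y) from (0, 0).
    n1, m1 = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * m1 for _ in range(n1)]
    dp[0][0] = 0
    for x in range(n1):
        for y in range(m1):
            if y + 1 < m1:  # move right, paying the current row's cost a[x]
                dp[x][y + 1] = min(dp[x][y + 1], dp[x][y] + a[x])
            if x + 1 < n1:  # move down, paying the current column's cost b[y]
                dp[x + 1][y] = min(dp[x + 1][y], dp[x][y] + b[y])
    return dp[n1 - 1][m1 - 1]
```

For example, on a $$$2\times 2$$$ grid with $$$a = (1, 5)$$$ and $$$b = (2, 3)$$$, going right first costs $$$a_1 + b_2 = 4$$$ while going down first costs $$$b_1 + a_2 = 7$$$, so `min_cost([1, 5], [2, 3])` returns `4`.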
This does look like Problem 1 and Problem 2, where one usually exploits the convexity of the costs to speed up the dynamic programming. However, here the costs are arbitrary, and there is no convexity to exploit.
From reading some code on the AtCoder mirror (I think), the solution appears to perform some kind of greedy on the two lower convex hulls formed by the points $$$(i,a_i)$$$ and the points $$$(j,b_j)$$$. However, I have no idea how to prove this. Could someone please help? Many thanks in advance!
If you can read Chinese, this is the editorial: https://wcysai.github.io/2019/09/10/Did-you-see-the-convex-hull/.
Thanks! However, this editorial is too hand-wavy and doesn't provide a formal proof. I wonder how to formally prove that it suffices to consider only vertices on the lower convex hull?
I spent about an hour yesterday trying to prove it and managed to do nothing but build intuition, because every path of proof I tried led nowhere. I'd be glad if someone helped with this one.
The two observations are:
When you find the first observation, it may seem sufficient to solve the problem: repeatedly find a minimum intersection, and split into subproblems (from the start to the intersection, and from the intersection to the end). The only issue is when the minimum intersection is the start or the end itself, which is where the second observation helps:
At first, set $$$r = \infty$$$ as above. Now the minimum intersection is $$$(1, 1)$$$. Now imagine slowly decreasing $$$r$$$ until the minimum intersection is no longer unique. This happens when some $$$a_i + ir = a_1 + r$$$ (or, symmetrically, some $$$b_j + jr = b_1 + r$$$), and when it does, you know that there's an optimal solution that goes from $$$(1, 1)$$$ to $$$(1, i)$$$ (or, symmetrically, to some $$$(j, 1)$$$).
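As a small sanity check on that event: solving $$$a_i + ir = a_1 + r$$$ gives $$$r = (a_1 - a_i)/(i - 1)$$$, and the first tie you hit while decreasing $$$r$$$ from $$$+\infty$$$ is the largest such candidate. A sketch, assuming exact rational arithmetic (`first_event` is a hypothetical helper name, not from the editorial; the input list is 0-based, the returned index is 1-based as in the problem):

```python
from fractions import Fraction

def first_event(costs):
    # For each candidate index i >= 2 (1-based), the tie
    # a_i + i*r = a_1 + r happens at r = (a_1 - a_i) / (i - 1).
    # Decreasing r from +infinity hits the largest candidate first.
    best = None
    for k in range(1, len(costs)):          # k = i - 1 in 1-based terms
        r = Fraction(costs[0] - costs[k], k)
        if best is None or r > best[0]:
            best = (r, k + 1)               # (event value of r, 1-based index i)
    return best
```

For instance, with $$$a = (5, 4, 1)$$$ the candidates are $$$r = 1$$$ (for $$$i = 2$$$) and $$$r = 2$$$ (for $$$i = 3$$$), so the first event while decreasing $$$r$$$ is at $$$r = 2$$$ with $$$i = 3$$$.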
So the idea is that as you keep decreasing $$$r$$$, the optimal path persists, and you can build it edge by edge (advancing either horizontally or vertically, depending on the next event).
The final observation is that if you plot the set of points $$$(i, a_i + ir)$$$ in the plane, its lower convex hull consists of the same set of points regardless of $$$r$$$. So instead of "decreasing $$$r$$$ slowly", you can compute the lower hulls of both $$$a$$$ and $$$b$$$ once, and simulate decreasing $$$r$$$ by advancing two pointers over the hulls, ordered by the slopes of the hull edges.
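The invariance holds because $$$(x, y) \mapsto (x, y + rx)$$$ is a shear, which maps lines to lines and preserves which points lie vertically below a chord. A minimal sketch checking this with Andrew's monotone chain (`lower_hull_indices` is a name I made up; 0-based indices, collinear points dropped):

```python
def lower_hull_indices(ys):
    # Andrew's monotone chain over the points (i, ys[i]), left to right;
    # returns the indices of the lower convex hull vertices.
    hull = []  # (x, y) points currently on the hull
    for i, y in enumerate(ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Cross product of (p2 - p1) and (p3 - p1); <= 0 means the
            # last hull point is on or above the chord p1 -> (i, y), so pop it.
            if (x2 - x1) * (y - y1) - (y2 - y1) * (i - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append((i, y))
    return [x for x, _ in hull]

a = [4, 1, 3, 0, 5]
base = lower_hull_indices(a)
# The hull vertex set is unchanged after adding i*r, for any r:
for r in (10, -3):
    assert lower_hull_indices([y + r * i for i, y in enumerate(a)]) == base
```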
This is just so clear! Thanks!