For this problem, I first tried an O(mn) approach (http://ideone.com/rDWvZ), which got TLE on pretests.
Then I searched for a way to use data structures to speed up the process, but didn't find anything that could work. Then I noticed that if I precalculated the answer for some groups, along with the minimum and maximum in those groups, I would have enough information to join those pieces and get answers for arbitrary intervals. With this, I sped up my solution to O(m * sqrt(n)), which should pass in time: http://ideone.com/y2kEV
Right now I'm getting WA on test 4, but the case is too big to debug. Could someone please tell me why it fails?
I did something similar to you, and I also get WA on tests 20 and 21.
The thing is, I found that the block size affects the answer (if I change the block size, the answer also changes; some sizes work well, some don't), but I don't know why. Can anyone help me?
These are my submissions (with many attempts at changing the block size):
1198182 1198177 1198164 1198094
Now I've gotten AC (1202174). The idea was OK, but I failed to notice that I could get l > r in some cases where the query's interval is smaller than sqrt(n).
I also got AC now. Thank you :)
I think this is a classic problem that can be solved with a segment tree.
You need to maintain, in each segment tree node, four values: lmax, rmax, sum, and ans.
lmax: the maximum contiguous sum starting from the left end; rmax: the maximum contiguous sum ending at the right end; sum: the sum of the whole interval; ans: the maximum contiguous sum anywhere in the interval.
Then you can update these four values from the bottom up.
The total complexity is O(M * log N).
1197412 is my accepted submission id.
Well, I've improved it to O(M log N) (1202837) using RMQ. The idea is the same as in the O(M sqrt(N)) solution, but the bucket sizes change.