Hi everyone. Just posting this blog so we can discuss the solutions to the problems of the 2021-2022 ICPC Latin American Regional Programming Contest.
Mirror on beecrowd: https://www.beecrowd.com.br/judge/en/users/contest/649
Mirror on gym: https://codeforces.me/gym/103640
Links to individual problems (PDF files):
K — KIARA is a Recursive Acronym
Happy discussion :)
I'm washed and only tried the mirror for a bit, but here are some sketches.
Just BFS over $$$(x, y, \text{time})$$$ states; the clouds are all gone after at most $$$100$$$ steps, so the state space is small and you can just do whatever.
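A minimal sketch of such a time-expanded BFS, assuming a small grid and a hypothetical `blocked(x, y, t)` predicate for the clouds (the names, grid size, and time cap are placeholders, not the problem's actual input format):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 10, TMAX = 200; // small grid; all clouds are gone well before TMAX

bool blocked(int x, int y, int t) {
    // placeholder: return true if cell (x, y) is covered by a cloud at time t
    return false;
}

int shortestTime(int sx, int sy, int tx, int ty) {
    static bool seen[N][N][TMAX + 1];
    memset(seen, 0, sizeof seen);
    queue<array<int, 3>> q;
    q.push({sx, sy, 0});
    seen[sx][sy][0] = true;
    int dx[] = {1, -1, 0, 0, 0}, dy[] = {0, 0, 1, -1, 0}; // 4 moves + waiting
    while (!q.empty()) {
        auto [x, y, t] = q.front(); q.pop();
        if (x == tx && y == ty) return t;
        if (t == TMAX) continue;
        for (int d = 0; d < 5; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || ny < 0 || nx >= N || ny >= N) continue;
            if (blocked(nx, ny, t + 1) || seen[nx][ny][t + 1]) continue;
            seen[nx][ny][t + 1] = true;
            q.push({nx, ny, t + 1});
        }
    }
    return -1; // target unreachable
}
```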
Work with the prefix sum array $$$s_0, s_1, \dots, s_n$$$, so you add $$$x$$$ to some suffix and the happiness is the number of subsegments $$$s[l..r]$$$ such that $$$l < r$$$ and $$$\min(s[l..r]) = s_l$$$. For each index $$$l$$$ compute the smallest $$$f(l) > l$$$ such that $$$s_{f(l)} < s_l$$$; then the number of segments that start at $$$l$$$ is exactly $$$f(l) - l$$$, unless the suffix you add $$$x$$$ to starts at some $$$i$$$ with $$$l < i \le f(l)$$$. Sweep the starting point from right to left and keep the answer updated; you can do this with a treap.
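The $$$f(l)$$$ values for the whole array can be precomputed in $$$O(n)$$$ with a standard next-smaller-element stack; a sketch of just this static part (function and array names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// f[l] = smallest index > l with s[f[l]] < s[l] ("next smaller element"),
// computed with a monotonic stack; f[l] = n when no such index exists.
vector<int> nextSmaller(const vector<long long>& s) {
    int n = s.size();
    vector<int> f(n, n);
    stack<int> st; // indices whose values are non-decreasing bottom-to-top
    for (int i = 0; i < n; i++) {
        while (!st.empty() && s[i] < s[st.top()]) {
            f[st.top()] = i; // s[i] is the first strictly smaller value
            st.pop();
        }
        st.push(i);
    }
    return f;
}
```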
$$$2^k > 1 + 2 + 4 + \dots + 2^{k - 1}$$$, so $$$A$$$ always gets $$$n$$$ and $$$B$$$ always gets $$$n - 1$$$; then $$$B$$$ can additionally take anything else that is in the same connected component as $$$n - 1$$$ after deleting $$$n$$$ (and nothing else).
Compute the cost of placing each odd-indexed city in each feasible position and then run min cost matching.
Notice that if you start on Monday, Wednesday or Friday, then you get back to the same position after $$$91$$$ days; so just simulate.
Turns out it's impossible if and only if there are two pairs of points with endpoints on the border that are arranged like $$$A, B, A, B$$$, so it reduces to checking whether some pair of intervals interleaves, which is easy.
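The $$$A, B, A, B$$$ condition is just two chords of the border interleaving; assuming the border points have already been mapped to positions along the perimeter, the check might look like this:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Two chords given by border positions (a1, a2) and (b1, b2) interleave
// (pattern A, B, A, B around the border) iff exactly one endpoint of the
// second chord lies strictly between the endpoints of the first.
bool interleave(int a1, int a2, int b1, int b2) {
    if (a1 > a2) swap(a1, a2);
    bool in1 = a1 < b1 && b1 < a2;
    bool in2 = a1 < b2 && b2 < a2;
    return in1 != in2; // one endpoint inside, one outside
}
```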
Just code it lol
Iterate over the number of easygoing people that end up sitting next to another easygoing person. Once you fix this, the entire rest of the process is determined and you can compute the total happiness easily; computing the probability is easy math.
Assuming you start at time $$$0$$$, doing the tasks greedily in increasing order of deadline works. Then you can check in $$$O(1)$$$ whether a certain task can be placed at the start, using some prefix maximums. Then just delete the smallest task you can place and recurse.
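The greedy feasibility test can be sketched as follows, with tasks given as (duration, deadline) pairs; this input format is my assumption, not necessarily the problem's actual one:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Earliest-deadline-first check: starting at time 0, the schedule is feasible
// iff, after sorting by deadline, every task finishes by its deadline.
bool feasible(vector<pair<long long, long long>> tasks) {
    // tasks[i] = {duration, deadline}
    sort(tasks.begin(), tasks.end(),
         [](const auto& a, const auto& b) { return a.second < b.second; });
    long long t = 0;
    for (auto& [dur, deadline] : tasks) {
        t += dur;
        if (t > deadline) return false; // this task misses its deadline
    }
    return true;
}
```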
In problem D, it's possible to maintain the answer using only a segment tree with lazy propagation instead of a treap.
Can you post the code for I, J, and M? I keep getting WA... thanks.
Are the test cases the official ones? I got AC on problem C with an incorrect solution.
My AC code (152805814) answers 3, but the correct answer is 1 (and this other AC submission, 152705102, answers 1).
Edit: And this other test breaks all currently accepted solutions except 152757395:
They might have fixed problem D's cases, because the team that passed it did so with a wrong solution; for the other problems I think the cases are the same.
Thanks for pointing these out, added them to the testset.
I'm sorry for my poor English
Take the prefix sums: for every $$$i$$$ compute $$$sum[i]$$$. The answer is the number of pairs $$$[l, r]$$$ such that $$$sum[l-1] = \min(sum[l-1], \dots, sum[r])$$$. So we compute $$$nxt[i]$$$, the first position with $$$sum[nxt[i]] < sum[i-1]$$$ and $$$nxt[i] \ge i$$$, and from it we get the answer ignoring $$$x$$$.
When we add $$$x$$$ at position $$$p$$$, we affect exactly the $$$i$$$ with $$$nxt[i] \ge p$$$ and $$$i - 1 \le p$$$. So we iterate $$$p$$$ from $$$n$$$ down to $$$1$$$: when we enter $$$nxt[i]$$$ we add the contribution of $$$i$$$, and when we leave $$$i$$$ we remove it. This is already correct when $$$x > 0$$$, because adding $$$x$$$ at $$$p \le nxt[i]$$$ can only move $$$nxt[i]$$$ from left to right.
But when $$$x < 0$$$, $$$nxt[i]$$$ can move from right to left: we have to set $$$nxt[i] = p$$$ for every $$$i$$$ with $$$sum[i-1] > sum[p] + x$$$. I don't know an easy way to do this; I use one segment tree to build the $$$nxt$$$ array and another segment tree to modify it.
Let $$$F[l][r][0/1]$$$ be the minimum cost for the interval $$$[l, r]$$$ when we start from $$$l$$$ / $$$r$$$, e.g. $$$F[l][r][0] = \min_k(\max(F[l][k][1], F[k][r][0]) + D[k] + dist[k]) - dist[l]$$$. As $$$k$$$ goes from $$$l$$$ to $$$r$$$, there exists a $$$p$$$ such that one side of the $$$\max$$$ dominates on each part.
We can binary search for $$$p$$$; then for $$$k \in [p+1, r]$$$ we need the minimum of $$$F[l][k][1] + D[k] + dist[k]$$$.
We can use a segment tree to do this.
Problem G: I don't know the intended solution, I just guessed. For a tree of size $$$x$$$, iterate over its divisors $$$p$$$ and divide the tree into $$$x/p$$$ blocks (there is only one way to do this); each block has some value $$$val$$$, and we count the number of trees $$$y$$$ whose value equals $$$val$$$.
Problem A: I don't know how to solve it; I wish somebody could tell me. I guess the key is that for $$$Ax + By \ge S$$$ the number of distinct pairs $$$(A, B)$$$ is $$$O(n^2)$$$, but I don't know how to handle the condition that the quadrilateral is simple.
Could you explain in more detail what you do when $$$x < 0$$$?
Let $$$r[i] = nxt[i]$$$ and $$$l[i] = i$$$; then the answer involves $$$r[i] - l[i]$$$. For example, take $$$j = 3$$$ with $$$nxt[j] = 5$$$: when we reach $$$i = 4$$$ and find $$$sum[i] + x < sum[j-1]$$$, then $$$nxt[j]$$$ should become $$$4$$$. So we need to modify all $$$j$$$ with $$$sum[j-1] > C$$$ where $$$C = sum[i] + x$$$: we use a segment tree over values to perform the range assignment $$$nxt = i$$$ on $$$[sum[i]+x, \infty)$$$. In the segment tree we record:
$$$cnt[x]$$$ the number of $$$i$$$ in this node
$$$sumr[x]$$$ the sum of $$$nxt$$$ in this node
$$$suml[x]$$$ the sum of $$$i$$$ in this node
my solution https://pastebin.com/RvzgS7n5
I have a question about your proposed solution for problem E. How do we update the segment tree? Because let's say we want the minimum in range $$$[p+1, r]$$$, this minimum depends on a third parameter $$$l$$$ which is outside the range, so if you change $$$l$$$, all the values in the range change.
You can build $$$n + 1$$$ segment trees where you fix $$$l$$$ for each segment tree. Each segment tree has size $$$O(n)$$$ so you have overall $$$O(n^2)$$$ memory in Segment trees.
Oh, my fault: I use $$$O(n)$$$ segment trees, so the memory is $$$O(n^2)$$$. Here is my solution: https://pastebin.com/Wut9fPYZ
Let me just explain it:
$$$F$$$ is the same as I mentioned above
$$$G$$$ and $$$H$$$ are the segment tree
You can distinguish 4 cases and use $$$4 \cdot n$$$ segment trees to update $$$F$$$.
Because of the memory limit, I just use $$$maxm = 8000$$$ instead of $$$maxm = maxn << 2 = 12000$$$, or you can use std::vector.
Btw, since you only want a min/max of a prefix/suffix, you can use a Fenwick tree; the memory will be $$$1/4$$$ of the segment tree's. Or maybe std::set can also solve this problem.
I'm sorry it took me so long to answer.
One more solution
To find the maximum sum, first sort arrays $$$F$$$ and $$$C$$$ in non-decreasing order. When $$$k=N$$$, the answer is $$$\sum_{i=1}^{N}F[i] \times C[i]$$$.
For $$$k<N$$$, sum the pairs $$$F[i] \times C[i]$$$ with the greatest positive product values. If there are fewer than $$$k$$$ such pairs (say $$$r$$$ of them), sum those $$$r$$$ pairs, then take the $$$k-r$$$ remaining elements of $$$F$$$ and $$$C$$$ with the smallest absolute values and multiply them in non-decreasing order. You can do this part in $$$O(n \log n)$$$ using FFT.
To find the minimum sum, multiply by $$$-1$$$ the elements in one of the arrays and do the same as above.
Did you actually implement that solution? I did it using the complex FFT with double precision. But round-off error kills it. To get enough precision you have to use long double (80 bit) floating point. Is that what you did? Or is there some other trick?
Yes, I implemented a solution using FFT with C++ complex of double datatype (8 bytes) and rounding the real part of the answer to the nearest integer.
Most FFT implementations have shit precision issues simply due to the implementation. Read this comment/blog https://codeforces.me/blog/entry/48465?#comment-325890
Thanks. I got my solution to work just by replacing the computation of $$$w^j$$$: instead of computing it by multiplying $$$w$$$ by itself $$$j$$$ times, I precompute its value at the beginning using the complex exponential function.
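For reference, the fix amounts to precomputing the twiddle factors directly from the complex exponential instead of accumulating powers of $$$w$$$ inside the butterfly loop (a sketch; the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Precompute w^j = e^{2*pi*i*j/n} directly for each j: each factor carries
// only one rounding step, instead of j accumulated multiplications.
vector<complex<double>> twiddles(int n) {
    vector<complex<double>> w(n);
    for (int j = 0; j < n; j++)
        w[j] = polar(1.0, 2 * M_PI * j / n);
    return w;
}
```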
Another option to try is Karatsuba, or a Number Theoretic Transform, both of which stay in integers and thus have no precision issues. Our team tested and both pass problem B if implemented efficiently (Karatsuba passed just barely, it is really close to the Time Limit in the mirror judge).
I really liked problem G. Is there a way to encode an undirected, unlabeled tree (i.e. find a unique string/integer sequence to represent it) faster than in $$$O(n \log^2 n)$$$, let's say without hashing?
Yes, you can encode any rooted tree in $$$O(n \log n)$$$ by implementing the AHU algorithm carefully. You can read more about it here. Then, using the code for a rooted tree, it is easy to generalize to an unrooted one, because each tree has at most two centroids: call the algorithm for each centroid and merge both vectors/strings.
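Finding the at most two centroids mentioned above is standard; a sketch (adjacency-list input assumed):

```cpp
#include <bits/stdc++.h>
using namespace std;

// A centroid minimizes the size of the largest component left after removing
// it; equivalently, every remaining component has size <= n / 2.
vector<int> centroids(const vector<vector<int>>& adj) {
    int n = adj.size();
    vector<int> sz(n, 1), res;
    vector<int> order, par(n, -1), stk = {0};
    // iterative preorder from root 0, so parents precede children in `order`
    while (!stk.empty()) {
        int u = stk.back(); stk.pop_back();
        order.push_back(u);
        for (int v : adj[u])
            if (v != par[u]) { par[v] = u; stk.push_back(v); }
    }
    for (int i = n - 1; i > 0; i--) sz[par[order[i]]] += sz[order[i]];
    for (int u = 0; u < n; u++) {
        int biggest = n - sz[u]; // component containing the parent of u
        for (int v : adj[u])
            if (v != par[u]) biggest = max(biggest, sz[v]);
        if (biggest <= n / 2) res.push_back(u);
    }
    return res;
}
```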
It's also possible to get an $$$O(n)$$$ algorithm by using radix sort inside the AHU algorithm. I have never implemented it with radix sort, but I read about it in this comment. That said, if I wanted a linear algorithm, I would prefer hashing, which yields a single integer encoding the tree. It's somewhat similar to hashing a string; I learned it from this wonderful blog and this paper.
However, I've seen your code for problem G and it seems that you are already familiar with a naive AHU algorithm. So, to give you a complete answer, I will try to explain how you can go from $$$O(n \log^2 n)$$$ to $$$O(n \log n)$$$.
If I'm not wrong, what you are doing to encode a rooted tree is the following algorithm:
You are recursively encoding the subtree rooted at node $$$u$$$ as a string as follows.
If $$$u$$$ is a leaf, return "[ ]"
Otherwise, you encode all the children of $$$u$$$ and put each string in a vector. Then you sort the vector and combine all these strings into a code $$$ans$$$ and finally return "[ $$$ans$$$ ]"
The size of the code is $$$O(n)$$$, but the algorithm is $$$O(n^2 \log n)$$$, because for each node you are sorting a vector of strings whose total size can be $$$O(n)$$$. However, for an unrooted tree, since you recurse from the centroid, each level of the recursion has complexity $$$O(n \log n)$$$ and there are $$$O(\log n)$$$ levels, so the overall complexity is $$$O(n \log^2 n)$$$.
Let's optimize it to $$$O(n \log n)$$$ for a rooted tree. The problem is that you are storing the complete code for all the subtrees. Instead of that, we can compute something like a "partial code" based only on the children of the node and not on the whole subtree. However, in this new algorithm, we will return a list of integers instead of a string, and we're going to process level by level, starting from the deepest level.
For each level, each node will have an integer label that represents the order of the codes (i.e. the nodes that have the lexicographically smallest code will have label 1, the next nodes will have label 2 and so on). We will distinguish 2 phases in this algorithm (though they can be implemented in parallel).
Phase 1) Label Assignment
For each level $$$L$$$ (starting from the deepest), traverse all the nodes $$$u$$$ of this level, and for each of them build the sorted vector of the labels of their children. Call this vector $$$childrenCode[u]$$$.
Then sort all the nodes of level $$$L$$$ based on $$$childrenCode[u]$$$
Finally, using this sorted order, assign the labels to the nodes of this level (starting from 1).
Phase 2) Code generation:
We will build a code of size $$$2n - 1$$$ as follows
Set $$$code = []$$$ (empty vector)
For each level (starting from the deepest), traverse all the nodes $$$u$$$ of this level in the sorted order of their labels (from phase 1), and append all the labels of $$$childrenCode[u]$$$ to $$$code$$$. Also append a special integer to separate the "partial codes" of different nodes; we can append 0, as we haven't used this integer.
The code has size $$$2n - 1$$$ because each node appends one 0 (so we have $$$n$$$ zeroes) and each node has its label appended by its parent, except for the root, which has no parent (so $$$n - 1$$$ more integers). The overall complexity is $$$O(n \log n)$$$ because the sum of the sizes of all the vectors that we sort is $$$O(n)$$$, instead of $$$O(n^2)$$$ in the naive algorithm.
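A naive sketch of the label-assignment phase (it sorts vectors of labels with direct comparisons, so by itself it doesn't achieve the $$$O(n \log n)$$$ bound; the data layout — children lists plus nodes grouped by level, deepest level last — is my assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;

// label[u] = canonical integer of the subtree rooted at u, assigned level by
// level as described above; nodesAtLevel[L] lists the nodes at depth L.
vector<int> ahuLabels(int n, const vector<vector<int>>& children,
                      const vector<vector<int>>& nodesAtLevel) {
    vector<int> label(n, 0);
    for (int L = (int)nodesAtLevel.size() - 1; L >= 0; L--) {
        // (sorted child labels, node) for every node on this level
        vector<pair<vector<int>, int>> codes;
        for (int u : nodesAtLevel[L]) {
            vector<int> c;
            for (int v : children[u]) c.push_back(label[v]);
            sort(c.begin(), c.end());
            codes.push_back({c, u});
        }
        sort(codes.begin(), codes.end()); // order nodes by their partial code
        int lab = 1;
        for (size_t i = 0; i < codes.size(); i++) {
            if (i > 0 && codes[i].first != codes[i - 1].first) lab++;
            label[codes[i].second] = lab; // equal codes share a label
        }
    }
    return label;
}
```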
Here is my implementation to encode a rooted tree (for unrooted tree just encode for each centroid). I hope it helps.
Thank you for devoting your time to answer. The algorithm I came up with is pretty much what you described, except that to achieve good complexity, at each node I sort all the strings corresponding to the small subtrees and merge them; I just don't touch the heavy subtree (if there is one, I reuse its string without copying it). I think this trick is called "merge small to large" or something, I'm not sure. Tbh, now that I look at my algorithm, I think a bound tighter than $$$O(n \log^2 n)$$$ may be possible. I can't come up with a tree where the complexity is as bad as $$$n \log^2 n$$$. $$$n \log n$$$ is definitely a lower bound, and I feel like it's the real upper bound as well.
What you said totally makes sense though. We can just give ranks to nodes and compare the ranks of the children; we don't need whole subtrees.
I did the first approach but avoided the $$$n^2\log n$$$ complexity by comparing the strings with hashes as well. I got a pretty bad constant but I think it can be improved.