Hi. Just posting this blog so we can discuss the solutions to the problems of the 2020-2021 ACM-ICPC Latin American Regional Programming Contest.
Mirror on Gym: https://codeforces.me/gym/103185
Links to individual problems:
A — Almost Origami
B — Beautiful Mountains
C — Crisis at the Wedding
D — Dividing Candy
E — Excellent Views
F — Fascinating Partitions
G — Game of Slots
H — Halting Wolf
I — Impenetrable Wall
J — Job Allocator
K — Key Logger
L — Lola's Schedule
M — May I Add a Letter?
N — Non-Integer Donuts
Happy discussion :)
A.
Backtracking with pruning.
You can only build heights of the form $$$\frac{1}{n}$$$, for some natural $$$n$$$.
With heights $$$a^{-1}$$$ and $$$b^{-1}$$$ you can make the height $$$(a + b)^{-1}$$$.
You can see all the created elements as an increasing sequence.
With a backtracking approach, you can generate each new (largest) element from the smaller elements already in the sequence.
(pruning) do not try the same new largest element twice, i.e. with the sequence 1, 2, 3, x, the value x = 4 can be made as 2 + 2 or as 3 + 1, but should only be tried once.
(pruning) the maximum length is 10 for $$$m$$$ between 1 and 100 (hardcoded).
(greedy pruning?) I have not been able to prove this, but the sequence is $$$f_1 = 1$$$, $$$f_i = f_{i-1} + f_{i - a_i}$$$, for some $$$a_i$$$ between $$$1$$$ and $$$i-1$$$.
first code
second code (backtracking / greedy)
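A minimal sketch of this backtracking in Python (the function name is mine). It does iterative deepening on the number of steps and, as in hint 7, only extends a chain with sums that use the previous element; per the Brauer-chain article linked later in the thread, that restriction is known to be safe for every target up to 12508, so in particular for $$$m \le 100$$$:

```python
def shortest_addition_chain(m):
    """Shortest chain 1 = f_1 < f_2 < ... < f_r = m where each
    f_i = f_{i-1} + f_j for some j < i (hint 7's restriction)."""
    best = None

    def dfs(chain, limit):
        nonlocal best
        last = chain[-1]
        if last == m:
            best = list(chain)
            return True
        steps_left = limit - (len(chain) - 1)
        # pruning: even doubling at every remaining step cannot reach m
        if last << steps_left < m:
            return False
        # extend with last + x for each earlier element x, largest first
        for x in reversed(chain):
            s = last + x
            if s <= m:
                chain.append(s)
                if dfs(chain, limit):
                    return True
                chain.pop()
        return False

    limit = 0  # iterative deepening on the number of steps
    while not dfs([1], limit):
        limit += 1
    return best
```

Reaching height $$$\frac{1}{m}$$$ with the fewest folds is exactly finding the shortest addition chain for $$$m$$$, since combining heights $$$a^{-1}$$$ and $$$b^{-1}$$$ gives $$$(a+b)^{-1}$$$.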
I also tried greedy from hint 7 without proofs and it runs significantly faster than without it. It would be interesting either to see if it fails at some point (outside of the bounds of this problem of course), or a proof that it is correct.
Some intuition about the greedy:
Whenever you "create" a number, it will be used to create the following number, i.e., created numbers are used immediately in the chain.
A counter example to the greedy would look like this. Suppose you have the set $$$S$$$ of numbers available. From this set you create numbers $$$A$$$ and $$$B$$$ only from numbers in $$$S$$$, and then get $$$C = A + B$$$.
This is not the only counter-example since chains can be longer, but this is a possibility.
According to https://en.wikipedia.org/wiki/Addition_chain#Brauer_chain (if I understood it correctly), the greedy works for every target up to 12508.
How to solve M?
Maintain the suffix array of the reversed string (prefix array?) online with hashes and binary search (in a set with custom comparator). Now just count the total number of different substrings and subtract the number of substrings that appear exactly once. Both of these things are easy to calculate via the LCP of adjacent suffixes in the suffix array, so you can easily calculate the difference that each update causes in the answer.
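For intuition, here is an offline sketch of the counting part (Python; naive suffix-array construction, names are mine). The answer for a fixed string is the number of distinct substrings minus the number of substrings that occur exactly once; a substring occurs exactly once iff it is a prefix of exactly one suffix, longer than that suffix's LCP with both of its neighbours in the suffix array:

```python
def repeated_substring_count(s):
    # number of distinct substrings of s that occur at least twice
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])  # naive suffix array, sketch only
    lcp = []
    for a, b in zip(sa, sa[1:]):
        k = 0
        while a + k < n and b + k < n and s[a + k] == s[b + k]:
            k += 1
        lcp.append(k)
    # total distinct substrings: all suffix prefixes minus duplicates
    distinct = n * (n + 1) // 2 - sum(lcp)
    # substrings occurring exactly once: prefixes of suffix sa[i] longer than
    # its LCP with both neighbours
    once = 0
    for i in range(n):
        left = lcp[i - 1] if i > 0 else 0
        right = lcp[i] if i < n - 1 else 0
        once += (n - sa[i]) - max(left, right)
    return distinct - once
```

The online solution maintains the suffix array and LCPs of the reversed string under updates, adjusting only the terms of these sums that change.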
Alternatively you can create a trie and build the suffix (prefix?) array of that trie, using the same deterministic doubling suffix array algorithm with power-of-two ancestors.
This is my code using this idea, i.e. no hashing.
I tried to add a few tests to defeat hashing solutions, but of course it is very hard to rule them out at an ICPC.
Edit: wrong solution, sorry. I got AC, but the time complexity is actually bad because of the rollbacks. Thanks to brunomont for the clarification.
I solved it using a suffix automaton: here is my code. To solve it with this structure I had to work out two things:
For the first one, we can use stacks (I used five) to keep enough information to recreate the changes that occurred when inserting the last character, and thus be able to undo them.
For the second one, keep in mind that each different substring is contained in exactly one state of the suffix automaton, and that the number of different substrings contained in a state $$$u$$$ $$$(u \not = 0)$$$ is equal to $$$len(u) - len(link(u))$$$.
Every time we insert a character, a new state is created; let's call it $$$u$$$. If at the end of the insertion $$$link(u) \not = 0$$$, that means the substrings contained in the state $$$link(u)$$$ now occur more than once in the string, so we can add $$$len(link(u)) - len(link(link(u)))$$$ to the answer. However, we only have to add that value the first time $$$link(u)$$$ is assigned as a suffix link. To do this (and also to undo changes) we can add to each state a counter storing how many times that state has been assigned as a suffix link when inserting a character (this counter must also be copied when creating a clone state in the insert algorithm).
About suffix automaton: https://cp-algorithms.com/string/suffix-automaton.html
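To illustrate the $$$len(u) - len(link(u))$$$ identity, here is a minimal, non-rollback suffix automaton in Python (following the cp-algorithms construction linked above) that counts distinct substrings by summing that quantity over all non-initial states:

```python
def distinct_substrings(s):
    # state 0 is the initial state: link = -1, len = 0, no transitions
    link, length, trans = [-1], [0], [{}]
    last = 0
    for ch in s:
        cur = len(length)
        length.append(length[last] + 1)
        link.append(-1)
        trans.append({})
        p = last
        while p != -1 and ch not in trans[p]:
            trans[p][ch] = cur
            p = link[p]
        if p == -1:
            link[cur] = 0
        else:
            q = trans[p][ch]
            if length[p] + 1 == length[q]:
                link[cur] = q
            else:
                clone = len(length)
                length.append(length[p] + 1)
                link.append(link[q])
                trans.append(dict(trans[q]))
                while p != -1 and trans[p].get(ch) == q:
                    trans[p][ch] = clone
                    p = link[p]
                link[q] = clone
                link[cur] = clone
        last = cur
    # every distinct substring lives in exactly one state u != 0,
    # which holds len(u) - len(link(u)) of them
    return sum(length[u] - length[link[u]] for u in range(1, len(length)))
```

The rollback discussion below is about undoing this insertion procedure, which is where the amortized analysis breaks.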
Are you sure about the complexity of your solution? From what I understand, suffix automata have only an amortized $$$O(1)$$$ cost per added character, with a linear worst case. So I don't see how this would not become quadratic when trying to do rollbacks.
I tried your code on the following instance: $$$AAAA....AB-B-B-B-...$$$, and it took more than one minute, which would indicate the quadratic behavior.
I have almost no experience with suffix automata, but without understanding much about the internals, I think you can make the solution work by storing all of the automaton's information in persistent arrays. That way you can go back to the previous version by restoring the old root of each array in $$$O(1)$$$.
There is an extra log factor both for memory and time, but that is probably ok.
I'm not entirely sure I understand how that would work, but I think this instance would break it:
$$$AAAAAA...A$$$
$$$B--B--B--B--...$$$
The idea is that each new $$$B$$$ would take time linear in the current string size, and I don't think memoization/persistence would help in this case.
You are totally right. After thinking a lot about how to do the rollbacks I lost track of the amortized complexity, and in the end I coded something that seemed to work. I submitted it without exhaustive tests, and the AC wrongly convinced me. Now I feel a little bad about getting the AC. Thanks for the clarification!
Is it possible that the time consumed by your solution on removals is $$$O(n)$$$ in the worst case? Since the changes done by adding one character can be up to $$$O(n)$$$ right, and they need to be reversed.
If this is the case, on an adversarial test, your solution is forced to add and remove a character that triggers this behaviour all the time, and it can take up to $$$O(N \cdot Q)$$$.
How to solve G?
I liked the problem, congratulations to the author. Here is the solution.
Step 0: The probability that two cards are equal is way below the required precision, hence we may assume that the $$$2n$$$ cards are distinct.
Step 1: Alice should place her cards in increasing order on the slots.
Let $$$x_i$$$ be the number on the card Alice puts on the $$$i$$$-th slot. Assume that Bob's cards are fixed. Let $$$C(x_1, x_2,\dots, x_n)$$$ be the amount gained by Alice if Bob plays optimally. With a swapping argument (encoded in the following lemma) one can show that $$$C(x_1,x_2,\dots, x_n)$$$ is maximized when the $$$x_i$$$ are increasing.
Lemma: If $$$i<j$$$ and $$$x_i>x_j$$$, then
$$$C(x_1, x_2,\dots, x_i, \dots, x_j, \dots, x_n) \le C(x_1, x_2, \dots, x_{i-1}, x_j, x_{i+1}, \dots, x_{j-1}, x_i, x_{j+1}, \dots, x_n) \,.$$$
proof. Let $$$y_i$$$ be the card put by Bob on the $$$i$$$-th slot in an optimal solution when Alice's cards are $$$x_1, x_2, \dots, x_{i-1}, x_j, x_{i+1}, \dots, x_{j-1}, x_i, x_{j+1}, \dots, x_n$$$. Let us split into $$$3$$$ cases:
Step 2: Bob should place his cards greedily starting from the largest one.
Let $$$x_i$$$ be the card placed by Alice on the $$$i$$$-th slot (thanks to the previous step we can assume that $$$x_i<x_{i+1}$$$ for each $$$i$$$). Let $$$y_1<y_2<\cdots<y_n$$$ be Bob's cards.
Lemma: The optimal way for Bob to put his cards is to repeat the following move until possible (and then place arbitrarily the remaining cards):
proof. It is sufficient to show that there is an optimal solution in which the largest card is placed as described. Let $$$j$$$ be the largest index such that $$$x_j < y_n$$$. Assume that an optimal solution for Bob places $$$y_{\sigma(i)}$$$ on the $$$i$$$-th slot ($$$\sigma$$$ is a permutation). One can prove (considering a small number of cases) that Bob's gain does not decrease if he swaps the positions of $$$y_n$$$ and $$$y_{\sigma(j)}$$$; thus we can assume that $$$y_n$$$ is in the correct position $$$j$$$, as desired.
Step 3: All the possible card orderings are equally likely.
Let $$$c_1<c_2<\dots <c_{2n}$$$ be the values on the $$$2n$$$ cards. Let $$$s_i$$$ be $$$0$$$ if $$$c_i$$$ belongs to Alice and $$$1$$$ if it belongs to Bob. It is clear that Alice's gain depends only on the string $$$s$$$ (since a card's value is used only to compare it with other cards).
It turns out that the $$$\binom{2n}{n}$$$ possible binary arrays $$$s_1,s_2,\dots, s_{2n}$$$ all appear with equal probability. This is very intuitive and quite boring to prove; the idea is that, once we require the cards to be distinct, the random sampling described in the statement is equivalent to
Step 4: Solve the problem given a card ordering.
Assume that we are given the binary string $$$s_1,s_2,\dots s_{2n}$$$ with $$$n$$$ zeroes and $$$n$$$ ones. What is the answer to the problem? Thanks to what we observed in Step 1 and Step 2, we can perform the following algorithm.
We keep the amount gained by Alice in a variable $$$res$$$, initially $$$0$$$. Furthermore, we keep a counter $$$q$$$ of how many of Bob's cards we have encountered and not yet used.
We iterate from $$$i=2n$$$ to $$$0$$$.
At the end of the algorithm, the variable $$$res$$$ contains the amount gained by Alice.
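A sketch of this algorithm in Python (the function name is mine, and I am assuming that "the amount gained by Alice" is the number of slots where her card ends up being the larger one, which is what the greedy in Step 2 suggests; the exact payoff is not restated in this thread). Here $$$s$$$ is indexed by card rank, smallest first:

```python
def alice_gain(s):
    # s[i] = 0 if the i-th smallest card is Alice's, 1 if it is Bob's
    res = 0  # slots Alice wins (assumed payoff)
    q = 0    # Bob's cards seen so far (from the top) and not yet used
    for c in reversed(s):        # i = 2n down to 1
        if c == 1:
            q += 1               # Bob's card: keep it for a later Alice card
        elif q > 0:
            q -= 1               # Bob covers Alice's card with a larger one
        else:
            res += 1             # no larger Bob card remains: Alice wins
    return res, q
```

For example, if Alice holds the two largest of four cards ($$$s = [1,1,0,0]$$$), she wins both slots and Bob's two cards stay unused.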
Step 5: Solve the problem for all possible card orderings via dynamic programming.
It is well-known that, if we want to compute the expected value of a certain algorithm on a family of inputs, dynamic programming is often the solution.
Let $$$din[z][o][q]$$$ be the sum of the amounts gained by Alice over all binary strings $$$s_1,s_2,\dots, s_{z+o}$$$ with $$$z$$$ zeroes and $$$o$$$ ones such that, if we perform the algorithm described in step 4, at the end the number of unused Bob's cards is $$$q$$$. It is easy to check that the value of $$$din[z][o][q]$$$ depends only on the values of $$$din[z-1][o][q+1]$$$, $$$din[z][o-1][q-1]$$$ (and possibly $$$din[z-1][o][0]$$$ if $$$q=0$$$).
The answer to the problem is

$$$\frac{1}{\binom{2n}{n}}\sum_{q=0}^{n} din[n][n][q] \,.$$$
The complexity of this solution is $$$O(n^3)$$$.
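A hypothetical implementation of this DP (Python; under the same assumption as above that the gain is the number of slots Alice wins). Appending the next smaller card to an already-processed set of larger cards gives exactly the stated dependencies: $$$din[z][o-1][q-1]$$$ when the new card is Bob's, $$$din[z-1][o][q+1]$$$ when it is Alice's and Bob covers it, and additionally $$$din[z-1][o][0]$$$ when $$$q=0$$$ and Alice wins it:

```python
from math import comb

def expected_gain(n):
    # din[z][o][q]: total gain over binary strings with z zeros (Alice) and
    # o ones (Bob) among the z+o largest cards, ending Step 4 with q unused
    # Bob cards; cnt[z][o][q]: how many such strings there are
    size = 2 * n + 2
    din = [[[0] * size for _ in range(n + 1)] for _ in range(n + 1)]
    cnt = [[[0] * size for _ in range(n + 1)] for _ in range(n + 1)]
    cnt[0][0][0] = 1
    for z in range(n + 1):
        for o in range(n + 1):
            if z == 0 and o == 0:
                continue
            for q in range(z + o + 1):
                d = c = 0
                if o >= 1 and q >= 1:             # new smallest card is Bob's
                    d += din[z][o - 1][q - 1]
                    c += cnt[z][o - 1][q - 1]
                if z >= 1:                        # new smallest card is Alice's
                    d += din[z - 1][o][q + 1]     # ...and Bob covers it
                    c += cnt[z - 1][o][q + 1]
                    if q == 0:                    # ...and no Bob card is left:
                        d += din[z - 1][o][0] + cnt[z - 1][o][0]  # Alice wins
                        c += cnt[z - 1][o][0]
                din[z][o][q] = d
                cnt[z][o][q] = c
    return sum(din[n][n][q] for q in range(n + 1)) / comb(2 * n, n)
```

For small $$$n$$$ this agrees with directly enumerating all $$$\binom{2n}{n}$$$ strings and running the Step 4 algorithm on each.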
How can I see other people's solutions? pabloskimg
You need to enable coach mode to see all of the submissions in such training contests. You have to be either 1900+ or 2100+ (I don't remember exactly) to enable it in the gym section.
How to solve B?
Consider every possible $$$k\ge 3$$$ such that $$$n \bmod k = 0$$$. For each such $$$k$$$, split the array into contiguous subarrays of size $$$k$$$ and check whether each subarray can be converted into a mountain. For that check, you can binary search over the known values to find the leftmost and rightmost known values of the current subarray, and then do some $$$\mathcal{O}(1)$$$ case work to determine whether the subarray is good.

The final complexity is $$$\displaystyle\sum_{k=3}^n\left\lfloor\frac n k \right\rfloor\cdot \log n = \mathcal{O}\left(n\log^2 n \right)$$$.
Use a sparse table; that improves it to $$$\mathcal{O}(n \log n)$$$.
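As a quick sanity check of the harmonic bound above (Python, illustrative only): the total number of size-$$$k$$$ blocks visited over all $$$k$$$ from $$$3$$$ to $$$n$$$ grows like $$$n \ln n$$$, and each block check then costs an extra $$$\log n$$$ factor for the binary searches.

```python
from math import log

def block_iterations(n):
    # total number of blocks visited when, for each k from 3 to n,
    # the array is split into floor(n / k) blocks of size k
    return sum(n // k for k in range(3, n + 1))

# stays a constant fraction of n * ln(n) as n grows
ratios = [block_iterations(n) / (n * log(n)) for n in (10**3, 10**4, 10**5)]
```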
What's the concept behind J and B, and how does one solve them?
What should be the output for the following test case for problem N?
1
6
2 2 2 2 2 2
How to solve E?
Editorial
Could someone tell me how to solve F? Thanks!
I have a question about problem F: is the maximum cost with $$$k$$$ steps equal to the sum of the $$$k$$$ greatest numbers?