Hello Everyone!
Japanese Olympiad in Informatics Spring Camp 2021 will be held from Mar. 20 to Mar. 23.
There will be four online mirror contests during the camp.
- day1 : Mar. 20, 02:00 — Mar. 21, 00:00 GMT
- day2 : Mar. 21, 02:00 — Mar. 22, 00:00 GMT
- day3 : Mar. 22, 02:00 — Mar. 23, 00:00 GMT
- day4 : Mar. 23, 02:00 — Mar. 24, 00:00 GMT
The duration of the contest is 5 hours. You can choose your start time freely from the specified range. Please note that if you start competing after 19:00 GMT, the competition ends at 00:00 GMT and the duration is less than 5 hours.
There are 3 or 4 problems each day. Problem statements will be provided in both Japanese and English. There might be unusual tasks such as output-only tasks, optimization tasks, reactive tasks, and communication tasks, just like in the International Olympiad in Informatics (IOI).
The contest site and registration form will be announced about 30 minutes before the contest.
Details are available on the contest information page.
We welcome all participants. Good luck and have fun!
UPD : We made the time windows longer.
Is there any reason why the contest window is only 10.5 hours?
Edit: Thank you for extending the windows!
On one hand this looks great, I remember JOI got great quality problems. But on the other — these hours... Would it be possible to make these time windows longer by at least 2 hours or sth (end them 2 hours later to be precise)?
Yeah, two hours later and even two hours earlier would be perfect. Because for us in school, we have to pull an all-nighter, and it starts too late to end before school. :*(
Do we know yet if there will eventually be English editorials for the problems?
Auto comment: topic has been updated by yutaka1999 (previous revision, new revision, compare).
Thanks!
Day 1 seems to be over.
What strategies did people use on the output-only task?
Interesting problems!
How would you solve food court and IOI Fever?
Food court: we do two passes over the operations. In the first pass, we find for every ice cream, what is the index of the person who gets it (with the $$$i$$$th person added to a queue having index $$$i$$$). In the second pass, we get the groups of these people.
For the first pass, maintain in segment trees the total amount of people added to each stall so far, and the current amount of people at each stall. Given a query $$$a\ b$$$, let $$$t$$$ and $$$c$$$ be the total added so far and current number for stall $$$a$$$. Then, if $$$b \leq c$$$, the ice cream goes to the person with index $$$t - (c - b)$$$.
For the second pass, sort the indices of people for every stall. Maintain for every stall, the number of people that must be added to the stall before the total number of people added so far is at least the next index. Whenever you get a range add operation, subtract from the range, then while there is some stall with value at most $$$0$$$, answer its query with the group of the current operation, and increment its value by the difference between the current and next indices.
Code
I thought of something similar during the contest. However, I wasn't sure how to calculate the number of people at a certain stall (c) at any given time. I could only think of calculating it using segment tree beats, but I didn't implement it because I thought it would be too slow. Will segtree beats or smth similar pass, or am I just being stoopid?
Edit: I just glanced at the code you posted, and it seems like segtree beats does indeed pass! I feel silly for not implementing during contest :sob:
I tried segment tree beats and got TLE. Thankfully, you don't need it, since you only query $$$1$$$ index at a time. Storing interval maximum, and value to add to interval is enough.
It's also possible to do it with lazy propagation; the tag is $$$(a, b)$$$, which means that if the value in the segment tree is $$$x$$$, it should be changed to $$$\min(x-a, 0)+b$$$. Sadly that got TLE too; it was about 2x slower than getting the groups of the people, which had an extra log. I didn't know lazy propagation had such a huge constant...
Yes, I got TLE on that too. It seems that JOI has super tight time limits.
IOI Fever: assume person $$$1$$$ starts at $$$(0, 0)$$$. Select the direction person $$$1$$$ moves in. This uniquely determines the directions of every other person: there is only one direction such that there exists some $$$t$$$ such that, moving in that direction, at time $$$t$$$ the person is at most $$$t$$$ units away from the origin.
We will compute for every person the first time that person is infected (multiplied by $$$2$$$ for convenience). This is easy to do in $$$\mathcal{O}(n^2 \log n)$$$. To optimise it, we do the following instead:
For every direction $$$d$$$ of the 8 directions E, SE, S, SW, W, NW, N, NE, make a vector containing every person, sorted by how far along direction $$$d$$$ they are.
Now, write a segment tree that maintains two sets of values $$$a_i$$$ and $$$b_i$$$, and supports queries
Make one such segment tree over each of your $$$8$$$ vectors. Set the values $$$b_i$$$ to how far along the line the person with index $$$i$$$ in the sorted order is, multiplied by $$$-1$$$. Initially set $$$a_i = 0$$$ for the $$$i$$$ corresponding to person $$$1$$$, and $$$a_i = \infty$$$ for all other $$$i$$$.
Now, handle persons in increasing order of time of infection. Assume a person is infected at time $$$t$$$, and is going east. They hit exactly three types of people:
Further, the people they hit after getting infected are exactly the people at least $$$2t$$$ away (in Manhattan distance) on the same NW, W or SW line. This corresponds to a segment tree operation of type $$$2$$$ on some interval, which you can binary search!
To get the person with next smallest time of infection, get the minimum value over the 8 segment trees. Once you handle a person, set all $$$b_i$$$ corresponding to them to infinity, so they do not get selected again.
Code
I had a workaround which helped me avoid the segment tree. Let the last edge of the shortest path to reach $$$v$$$ from a certain direction be $$$u \to v$$$, and let $$$v'$$$ be the previous node in this direction. Then either you cannot go directly from $$$u$$$ to $$$v'$$$ (due to the distance being too short), or the last edge of the shortest path to reach $$$v'$$$ from this direction is $$$u \to v'$$$, and the distance to $$$v'$$$ is shorter than the distance to $$$v$$$ from this direction.
Thus, we can run Dijkstra with $$$8n$$$ nodes (but only $$$\le 3n$$$ will be visited), and when we visit a node $$$v'$$$, we only add the closest node we can reach in each direction ($$$\le 3$$$ edges of the form $$$v' \to x$$$); we should also take care of the above case and add the edges $$$u \to v$$$ suitably.
Still, my code is 200+ lines... So much pain
The key idea is to sweep over the indices of the stalls, not the indices of the queries (which is what the problem naturally suggests).
Solve the problem offline, and maintain a segment tree keyed by query index. Sweep by stall index; for each stall, you can capture all the relevant queries stored in the data structure.
Consider a push as the number $$$+k$$$ and a pop as the number $$$-k$$$. Then the number of people in the queue at time $$$i$$$ is the maximum-sum subsegment that ends exactly at index $$$i$$$. Computing this value is standard; see the sketch below.
With this information, the index of the $$$k$$$-th person can be found by descending down the tree (or by maintaining separate Fenwick trees).
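For reference, a minimal sketch of that standard computation (my own illustration with an assumed representation, not ko_osaga's code): each node stores the sum of its range and the maximum suffix sum; the queue size after operation $$$i$$$ is the suffix value of the prefix of operations up to $$$i$$$. In the real solution this node would live inside a segment tree.

```cpp
#include <bits/stdc++.h>
using namespace std;

// suf = maximum suffix sum; the empty suffix counts as 0, which models pops
// on an already-empty queue doing nothing.
struct Node { long long sum, suf; };
Node combine(const Node& l, const Node& r) {
    return { l.sum + r.sum, max(r.suf, r.sum + l.suf) };
}

int main() {
    vector<long long> ops = {+3, -1, -5, +2};   // push 3, pop 1, pop 5, push 2
    Node acc = {0, 0};
    for (long long v : ops) {
        acc = combine(acc, {v, max(0LL, v)});
        printf("%lld\n", acc.suf);              // queue sizes: 3 2 0 2
    }
}
```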
When will the registration for day 2 start? Also, do I need to re-register for day 2, or use the username and password from the previous day?
I think the 4 days are continuous.
Yes, I just checked. Day 2 has started.
Where can I find Day 1 problems? Are there PDFs or text files available?
I'm really interested in how Shopping could be solved, seems impossible :O
The best I can do is a bit over 8000 bits, by having A output the 9 most significant bits of L and R, and B output a binary representation of the Cartesian tree that has a vertex for every value in the two $$$2^{11}$$$-length blocks containing $$$L$$$ and $$$R$$$, and a single vertex representing the minimum in between (+ its index).
Solution idea which uses ~1k: we have an interactive binary search between Bruno and Anna. Bruno picks some interval [a,b] such that a-1 and b+1 are cheaper than everything in the interval, and Anna responds whether L and R are both contained in the interval. If they aren't, then we can discard the items [a,b]; if they are, we can restrict to this interval. We can find such an interval of size between n/3 and 2n/3.
At the end we do the same as above, sending Cartesian tree.
This approach can be improved somewhat but doesn’t seem like it can get down to 18+300 queries.
Where can I upsolve the problems?
ojuz usually uploads the problems to oj.uz after the test data gets published, but the test data hasn't been published yet. I think they will publish it after all 4 days are over.
How to solve Day2 A (escape route) ?
How tf did all of you guys manage to fit road_construction in the TL xd? I coded $$$O(n \log^2 n)$$$ in three different ways, and after an hour of optimizing it I finally managed to pass the subtask with $$$n \le 10^5$$$, but still a long way to go to 250000 xd
Same here.
My optimizations include:
- rewriting the code to compress coordinates in $$$O(n)$$$ per binary search iteration;
- replacing the segtree with a BIT (which should help to reduce the number of function calls?).
It still failed on 250000 (it took 11.25s on one of my own tests). I'm curious how so many people passed the TL while I couldn't. Was there a way to solve the problem in $$$O(n \log n)$$$ that I didn't know about?
I'm really curious about the $$$n \log^2n$$$ solution because I passed with something like $$$N \sqrt{N}\log{N}$$$, albeit with a lot of time spent optimizing and probably relatively weak tests.
Binary search on the last value. Use an ordered_set to get the roads less than the last value.
Yes, it can be solved in $$$O(n \log n)$$$, though I passed it with two logs.
Can you also solve this in $$$N \log N$$$? They are very similar, but $$$k$$$ is large and you need to output only the $$$k$$$-th distance. I don't know how.
Code in case anyone wants to help me. It's roughly the same solution as the $$$O(n \log n \log x)$$$ one ko_osaga described.
I just used vanilla order_statistic_tree with PBDS and it passed on the first try.
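For anyone curious what the PBDS approach can look like, here is a rough sketch (my own reconstruction with assumed names, not the commenter's code) of the counting step that a binary search over the answer would call, after the 45-degree rotation discussed further down turns Manhattan distance into Chebyshev distance:

```cpp
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;
typedef tree<pair<long long, int>, null_type, less<pair<long long, int>>,
             rb_tree_tag, tree_order_statistics_node_update> ordered_set;

// Counts pairs at Chebyshev distance <= d: slide a window of width d over
// the x-coordinates, and count y-coordinates within [y-d, y+d] using
// order_of_key (the number of elements strictly less than the key).
long long countPairs(vector<pair<long long, long long>>& pts, long long d) {
    sort(pts.begin(), pts.end());
    ordered_set s;
    long long total = 0;
    size_t lo = 0;
    for (size_t i = 0; i < pts.size(); i++) {
        while (pts[lo].first < pts[i].first - d) {
            s.erase({pts[lo].second, (int)lo});
            lo++;
        }
        total += s.order_of_key({pts[i].second + d, INT_MAX})
               - s.order_of_key({pts[i].second - d, INT_MIN});
        s.insert({pts[i].second, (int)i});
    }
    return total;
}
```

Binary searching for the smallest $$$d$$$ with countPairs $$$\ge k$$$ then gives the $$$k$$$-th distance, which matches the $$$O(n \log n \log X)$$$ bound mentioned above.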
Wow, I thought that "maybe I can use this problem in an educational way for learning pbds", but then thought "no way I will pass with this, cause this is probably gonna be much slower" xd
I think mine is $$$n \log n$$$ w/ a persistent segment tree.
Half of the comments are downvoted for no reason, but I don't care about contribution, so can you tell me how to do it in $$$N \log N$$$?
I didn't code it like that (I had a slower solution), but I imagine it could work as follows:
Let's say that for each point, you want to find the closest point in the Manhattan metric. Let's simplify the problem a bit: for each point $$$(x, y)$$$, find the closest point $$$(x', y')$$$ such that $$$x' \geq x$$$ and $$$y' > y$$$. I hope you can easily see that the non-simplified problem reduces to $$$4$$$ runs of the simplified problem, for $$$0^\circ$$$, $$$90^\circ$$$, $$$180^\circ$$$, and $$$270^\circ$$$ rotations of the plane.
You can solve the simplified problem by sweeping from right to left using a segment tree: when reaching a point $$$(x, y)$$$, store $$$x+y$$$ at the position $$$y$$$ of the segment tree, and read from the segment tree the minimum value of $$$x'+y'$$$ over the points $$$(x',y')$$$ such that $$$y' > y$$$; the sought distance is $$$|x' - x| + |y' - y| = (x'+y')-(x+y)$$$, since $$$x' \geq x$$$ and $$$y' > y$$$. (I'm sweeping the coordinate compression under the rug, but it's quite trivial.) This can be easily done in $$$O(n \log n)$$$ time.
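Here is a minimal sketch of just this simplified sweep (assumed I/O and names, my reconstruction rather than the commenter's code), non-persistent, printing for each point the distance to its closest point in the quadrant $$$x' \geq x$$$, $$$y' > y$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> x(n), y(n);
    for (int i = 0; i < n; i++) scanf("%lld %lld", &x[i], &y[i]);

    // coordinate-compress the y values
    vector<long long> ys(y);
    sort(ys.begin(), ys.end());
    ys.erase(unique(ys.begin(), ys.end()), ys.end());
    int m = ys.size();

    const long long INF = LLONG_MAX / 4;
    vector<long long> seg(2 * m, INF);  // min segment tree over y positions
    auto update = [&](int p, long long v) {
        p += m;
        seg[p] = min(seg[p], v);
        for (p >>= 1; p >= 1; p >>= 1) seg[p] = min(seg[2 * p], seg[2 * p + 1]);
    };
    auto query = [&](int l, int r) {    // min over positions [l, r)
        long long res = INF;
        for (l += m, r += m; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = min(res, seg[l++]);
            if (r & 1) res = min(res, seg[--r]);
        }
        return res;
    };

    // sweep from right to left; insert each equal-x group before querying it,
    // so that x' >= x is allowed while y' > y stays strict
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    sort(idx.begin(), idx.end(), [&](int a, int b) { return x[a] > x[b]; });
    vector<long long> best(n, INF);
    for (int i = 0; i < n;) {
        int j = i;
        while (j < n && x[idx[j]] == x[idx[i]]) j++;
        for (int k = i; k < j; k++) {
            int p = lower_bound(ys.begin(), ys.end(), y[idx[k]]) - ys.begin();
            update(p, x[idx[k]] + y[idx[k]]);
        }
        for (int k = i; k < j; k++) {
            int p = lower_bound(ys.begin(), ys.end(), y[idx[k]]) - ys.begin();
            long long q = query(p + 1, m);
            if (q < INF) best[idx[k]] = q - (x[idx[k]] + y[idx[k]]);
        }
        i = j;
    }
    for (int i = 0; i < n; i++)
        printf("%lld\n", best[i] < INF ? best[i] : -1);  // -1: empty quadrant
}
```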
Now, back to the original problem. Do the four runs of the sweep-line algorithm as above, but make the segment trees above persistent. Now, for each point on the plane, we have four persistent snapshots of a segment tree, each created when reaching this point on a sweep-line. In other words, for each point $$$P$$$ and for each of the quadrants of the plane centered at this point, you know the closest point $$$Q$$$ to $$$P$$$ in this quadrant. You can put the distances from $$$P$$$'s to $$$Q$$$'s in a priority queue. When you remove a pair $$$(P, Q)$$$ from the priority queue, you remove $$$Q$$$ from the corresponding snapshot of the segment tree, producing a new snapshot, from which you can read the closest point from $$$P$$$, different than $$$Q$$$, in the quadrant containing $$$Q$$$. Continue till you find $$$k$$$ pairs of points. The total time and space complexity is $$$O((n + k) \log n)$$$.
Actually, since the problem is symmetric, you can notice that you don't need four rotations; only two ($$$0^\circ$$$ and $$$90^\circ$$$) are necessary because the remaining two will produce the same pairs of points as the first two.
My $$$O(n \log n \log X)$$$ solution uses coordinate compression + a Fenwick tree and passed in 4s first try.
Probably, if you want a $$$\log n$$$ solution, you can maintain persistent segment trees for each quadrant. Then for each point, you can capture the relevant neighbors in the segment tree. To visit the top-k candidates for each point, pick some interval, find the maximum, and split the interval at the maximum's occurrence. To do it globally, you can use priority queues.
Could you share your $$$n \log n \log X$$$ solution? It sounds the most similar to mine, so I would probably be able to understand how I could have done it more efficiently.
And thanks for explaining the $$$\log n$$$ solution. It sounds nice in theory; however, it would be at least a few times harder to implement.
Here's my $$$\mathcal{O}(n \log n \log X)$$$ with coordinate compression and fenwick tree. I don't remember how fast it was, but it did pass first try.
Code
Lol I think it's just fast because of breaks in L41
I coded the $$$O(n\log n)$$$ persistent segment tree solution and it still took more than half of TL (6s-ish :|).
My solution was basically: I only needed to use a std::multiset, which was quite surprising. I have no idea what the time complexity of this is, but it passed first try without any optimisations. Do you think the test data was weak?

Could you explain how you rotate the plane by 45 degrees? (I mean, how do the coordinates change?)
Map coordinates $$$(x, y)$$$ to $$$(x + y, x - y)$$$. This is also convenient because it turns Manhattan distance into Chebyshev distance.
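For completeness, the underlying identity (a standard fact, not from the comment above): for points $$$(x_1, y_1)$$$ and $$$(x_2, y_2)$$$, $$$|x_1-x_2| + |y_1-y_2| = \max(|u_1-u_2|, |v_1-v_2|)$$$ where $$$u_i = x_i + y_i$$$ and $$$v_i = x_i - y_i$$$, so after the transform the two coordinates can be handled independently (e.g. with a sliding window on one and a BIT or set on the other).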
I was able to cheat it with $$$O(n^2 \log X)$$$ and some optimizations — https://pastebin.com/UymFvR2j
Ahaha, I spent all the time frustrated, trying to optimize the solution to that problem. In the end my submissions together cleared all the test cases, though I only got 18 points. I am not a very experienced competitor, so I'm curious: don't these tight time limits ruin the fun of a contest for you higher-rated guys?
I think the intended solution is $$$O(n \log n)$$$ with a persistent segment tree (which is what I did).
Subtask 2 is likely $$$O(n \log^2 n)$$$ — the simplest solution involves sorting the points by $$$x$$$-coordinate and then doing a binary search over the length of the $$$k$$$-th distance, where in each iteration we do a binary search for each point to count the points within a distance $$$\le$$$ a given number $$$x$$$ from it. Because of this, I assume an $$$O(n \log^2 n)$$$ overall solution is intended to pass.
Actually, now that I think of it, subtask 2 can also be solved in $$$O(n \log n)$$$ by using two pointers, taking advantage of the fact that the points are already sorted, so now I'm not sure what the intended complexity is either.
The problem Shopping inspired me to write this little poem.
Whether you're red
or you're blue
there's always a problem
you don't even have a clue how to do
A solution for Ancient Machine that gets 100 points:
First, I will describe an algorithm that achieves the maximal number of good removals, provided that we know the values at all indices:
Step 1: Let $$$f$$$ be the first occurrence of $$$X$$$. We remove all indices before $$$f$$$.
Step 2: Let $$$lst$$$ be the last occurrence of $$$Z$$$ so far. Initially, we set $$$lst$$$ to $$$f$$$.
Step 3: Loop over all indices $$$i$$$ from $$$f + 2$$$ to $$$n - 1$$$. If we encounter an occurrence of $$$Z$$$, we remove, in reverse order, all indices from $$$i - 1$$$ down to $$$lst + 1$$$, then remove $$$i$$$ and set $$$lst$$$ to $$$i$$$.
Step 4: Remove the remaining numbers.
It is possible to prove that this algorithm is optimal.
Now, we have to send the position of the first $$$X$$$, as well as all following $$$Z$$$s. This is possible to do with a bitstring of length $$$n$$$. We will try to compress this bitstring.
Notice that, in the aforementioned algorithm, if we have an occurrence of $$$Z$$$ that is immediately before another occurrence of $$$Z$$$, it doesn't matter whether we remove it or not. Therefore, we ignore all such indices.
By doing so, we obtain a bitstring with the property that no two values of $$$1$$$ are adjacent. Therefore, a block of $$$n$$$ bits of this bitstring has only $$$fib(n+2)$$$ possible states, where $$$fib(i)$$$ is the $$$i^{th}$$$ Fibonacci number. Furthermore, it is possible to map each such bitstring to a unique number by a simple dynamic programming.
We divide the bitstring into blocks of $$$T$$$ elements. For each block, the number of bits needed to encode its value is $$$\left\lceil \log_2(fib(T+2)) \right\rceil$$$, so we can estimate the total number of bits as $$$n \times \frac{\left\lceil \log_2(fib(T+2)) \right\rceil}{T}$$$. This number is small enough when $$$T = 93$$$, where the value is about $$$n \times 0.699$$$.
As a side note, as $$$T$$$ increases, the value converges to $$$n \times \log_2{\varphi}$$$, which is about $$$0.694 \times n$$$.
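A minimal sketch of that ranking/unranking DP (my own illustration with assumed names, not the author's code). With unsigned 64-bit arithmetic it works for blocks up to $$$T = 91$$$, since $$$fib(93)$$$ still fits; the $$$T = 93$$$ above would need 128-bit integers.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long u64;

// cnt[k] = number of length-k bitstrings with no two adjacent 1s
//        = fib(k+2): cnt[0] = 1, cnt[1] = 2, cnt[k] = cnt[k-1] + cnt[k-2].
const int T = 63;
u64 cnt[T + 1];

u64 encode(const string& s) {           // s: length T, no "11" substring
    u64 r = 0;
    for (int i = 0; i < T;) {
        if (s[i] == '1') {
            r += cnt[T - i - 1];        // all strings with '0' here come first
            i += 2;                     // the next bit is forced to be '0'
        } else i += 1;
    }
    return r;                           // lexicographic rank, 0 <= r < cnt[T]
}

string decode(u64 r) {
    string s(T, '0');
    for (int i = 0; i < T;) {
        if (r >= cnt[T - i - 1]) {
            r -= cnt[T - i - 1];
            s[i] = '1';
            i += 2;
        } else i += 1;
    }
    return s;
}

int main() {
    cnt[0] = 1, cnt[1] = 2;
    for (int k = 2; k <= T; k++) cnt[k] = cnt[k - 1] + cnt[k - 2];
    string s(T, '0');
    s[0] = s[5] = s[7] = '1';
    assert(decode(encode(s)) == s);     // the mapping round-trips
    printf("%d-bit blocks encode into %d bits\n",
           T, 64 - __builtin_clzll(cnt[T] - 1));  // 63 -> 44, about 0.7 per bit
}
```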
P.S. Sorry for causing a solution leak. I realized this several minutes after I posted it, but CF didn't allow me to delete it anymore.
The contest is still going on.
Oops, sorry.
Nice solution :)
Deleted
Here's a solution that I think is correct:
First, identify the first X, the last Z, and the last Y before the last Z; clearly anything before the first X or after the last Z is useless. Also, identify all occurrences of "YX" as a substring between the first X and the last Y before the last Z. Mark all of these 3+2k characters as "special". Then, Bruno should do 3 passes:
The proof of correctness essentially comes from considering the runs of Ys and non-Ys. Each run of Ys can cause at most one good removal, and we need to pick one character from each run of non-Ys to act as a border of the good removals. For each run of non-Ys, we arbitrarily decide to use its first character (either X or Z). If the first character is Z, then there is a good removal in the left-to-right pass. If the first character is X, then we save it for the second pass.
To compress this data, we can directly transmit the locations of the first X, the last Z, and the last Y before the last Z in about 60 bits. For the rest, we use the same observation that, because we're trying to send a list of non-overlapping substrings of length 2, the number of possible strings is a Fibonacci number, so we can compress by a ratio of $$$\log(\phi) / \log(2) \approx 0.694$$$. I compressed blocks of 250 characters into 174 bits each, but other choices of block size work too.
Almost exactly the same solution as ecnerwala's, but I used only the information about the first X (for part 2, I erased until the leftmost X appears). In addition to checking whether the character is Z or not, I marked only the Zs whose left neighbour is not Z; then the number of states follows the Fibonacci sequence. I enumerated Fibonacci numbers and compared $$$\lceil \log_2 fib_i \rceil$$$ with $$$0.7 i$$$ to find the bucket size (I set it to 63).
Huh? The algorithm works in that case. With the removals, we have
XYXYZZ
XYXZZ — Good removal!
XYZZ
XZZ — Good removal!
XZ
X
_
I also got 100p with the same algorithm, but a dumber interaction protocol. I output the distances between the Z's, packed with an optimal code.
My bad, I misread and therefore misunderstood the way the algorithm worked. Thanks for clearing it out with the example.
What a nice changeup to be able to get 100 points within 18 minutes after two days of doing basically nothing xD (I didn't take serious attempts at first two days, but still)
Most problems have tight time limits :(
Solution for Shopping:
Let $$$X$$$ be the index Bruno knows. Bruno constructs a sequence of intervals $$$(I_n)$$$ as follows:
In short, Bruno repeats adding the larger end to the interval $$$N-1$$$ times, starting from $$$[X, X]$$$. He wants to know the minimal index $$$n$$$ s.t. $$$[L, R] \subseteq I_n$$$, as the answer is the index attaining the minimum of $$$P$$$ in $$$I_n$$$. Though the exact value of $$$n$$$ may not be determined, they can reduce the candidates by binary search. Let $$$[s,t]$$$ be the candidates for $$$n$$$; then Bruno sends the index attaining the minimum in $$$I_s$$$, and the Cartesian tree for the remaining $$$t-s+1$$$ indices. These are enough for Anna to find the answer.
Consider a segment tree for an array of size $$$N$$$. For technical reasons, we ignore nodes with depth $$$>13$$$. Then, Anna sends the deepest node containing $$$[L, R]$$$ in the following way:
After that, Bruno knows some index in $$$[L, R]$$$ (otherwise, the interval is short enough that Bruno can send a Cartesian tree naively.) Then, they repeat binary search as many times as possible, and Bruno sends the Cartesian tree for the remaining indices. This fits in the query limit.
The above solution consists of the following steps:
$$$\log N$$$ bits for Anna and $$$\frac{1}{2} (\log N)^2 + O(\log N)$$$ bits for Bruno are enough for this solution.
When is the final rank list going to be published? We were using this contest for the team selection in our country. Contestants were told to start the contest right after the window opened, so that they couldn't download the problems from fake accounts beforehand. But without the rank list, it's quite hard to verify this. I hope we will be able to see submission times in the final rank list.
Lol, my code started working on the samples 25 seconds before the contest ended. I submitted it with some time left, but it didn't evaluate before the time ended and my access was shut xd. Can I find out in any way the result of my submission xd?
EDIT: OK, I created a second account and submitted from there xd. Sorry for hypothetically spoiling the ranking, but that should not be a big deal. Since the timer was quickly running out, I was checking my code only on the 4th sample close to the end, but it turned out it didn't work even on the 1st one, so I got 0/100 :). But I have already got 100/100 now, and it was really a quick fix :<
Navigation was a really cool problem, my favourite from whole camp I think.
I got a feeling that the whole set of 12 problems was really DS- and implementation-heavy. The problems with the standard format were really not varied and rather boring: bodyguard, reporter, food court, road construction for sure, plus events, fever, and meetings to some extent as well (oh, did I mention 7 out of the 8 "batch" problems xd?). It's good that the communication tasks helped a lot with variety and originality.
How do you get more than 50 points in navigation?
You divide all cells into 9 equivalence classes by remainders modulo 3 in both dimensions. Wherever Bruno stands, he sees exactly one cell from each such class (but he doesn't know which is which).
Let us highlight all cells from an arbitrarily chosen equivalence class and give them a unique number (e.g. 14 in a version where we aim at 14 colors). Then Bruno is able to tell which equivalence classes the cells he sees belong to. We can take 7 of the remaining 8 equivalence classes and say that the i-th of them will hold information about the i-th goal. Let us now focus on some particular cell that holds information about the i-th goal. If it is close to the goal, at most 1 apart in both coordinates, it says in which of the 9 surrounding cells the goal is. If it is further away, then there is a direction in which it is at least 2 cells apart. If we see that cell, then we are at most 1 away from it, so we can safely make a step in that direction, and the next cell will tell us which way to go (4 possible values). Note that we didn't use one equivalence class at all.
In total we use 9 (for the cases when the goal is close) + 4 (for the cases when the goal is far) + 1 (for highlighting one class) = 14 colors, which gives 75 pts.
Getting it down to 12 colors is quite a challenge as well. The main idea is that there exist 2 classes with no goals on them; I highlight both of them with one color and gain 2 colors that way (since only 7 instead of 9 colors are needed for the case when the goal is close). One issue is that telling which cell belongs to which equivalence class based on the highlighted classes is harder, since we have two highlighted cells in our proximity, but it can be uniquely determined because 3 is odd (details left to the reader).
Mind elaborating?
Let's say that we have two of these classes, and call them $$$A$$$ and $$$B$$$. If we are allowed to use one more color, then we can color $$$A$$$ with 12 and $$$B$$$ with 13, and we can always get our relative position to 12, which we used to determine where the information for the $$$i$$$-th goal is. What if we color $$$B$$$ with 12 as well? One may think we can't tell them apart anymore, but that's not true. For exactly one of these classes it is true that the second one is the first one translated by one of the vectors from the set $$$\{(0, 1), (1, -1), (1, 0), (1, 1)\}$$$, so we can actually tell them apart and save one color. In a visual way: no matter which two classes you choose, they will form a set of touching pairs, these pairs will not touch each other, and you can fix one element in each pair for each of the 4 shapes these pairs may form.
What exactly do you do with the remaining $$$11$$$ colors?
Well, it's basically written in my comment from yesterday, but I will try to explain it more clearly maybe.
I distinguish 9 classes and take two of them which don't have any goals on them. I call them A and B. Anna colors them with 12. Whenever Bruno sees 9 cells, he always sees exactly 2 occurrences of 12, and he is able to tell which one Anna called A and which one B (using the method from the previous comment). Anna and Bruno agreed on some way of telling which classes keep information about which party (relative to A). E.g. if you take the square that has a cell of A in its top-left corner, then you can list the cells of that square in lexicographic order and assume that the i-th cell that is not from A or B (there are 7 of them) keeps info about the $$$i$$$-th goal. So Bruno is able to tell which of the $$$9$$$ cells he looks at keeps info about the $$$i$$$-th goal. There are 9 cells around this cell (some of them not seen by Bruno at the moment), but 2 of them are from A and B, so there are only 7 close cells that can hypothetically contain the $$$i$$$-th party. I use 7 colors for the cases when the $$$i$$$-th goal is close to that cell, to denote which of these contains the party, and 4 to denote the direction Bruno should go in if it is further away (note that these directions should be valid no matter where Bruno stands, since there are 9 possibilities for that, but since the goal is far away this can be satisfied).
Here are my codes: https://ideone.com/GzvnZS (Anna.cpp) and https://ideone.com/sJkSF0 (Bruno.cpp), but I wouldn't look at them unless you are desperate :P
Makes sense. So the 85-pt solution below could also be adjusted to do 13 using only 8 of the 9 cells and 12 using all of them.
Index position $$$(y, x)$$$ as $$$3 (y \text{ mod } 3) + (x \text{ mod } 3)$$$.
In positions with index $$$0$$$, store value $$$1$$$. We do not use value $$$1$$$ elsewhere, so this lets Bruno figure out what his $$$x, y$$$ are mod 3.
In positions with index $$$1 \leq j \leq 7$$$, store instructions to reach party $$$j$$$. This information needs to tell every position adjacent to this position what move leads to party $$$j$$$. It turns out $$$13$$$ options are enough: 9 if the party is adjacent to this position, and 4 otherwise. Thus, values up to 14 suffice.
To improve this to values up to 13:
For every party, there is only one position with the same index adjacent to it. Thus, at most 7 of the possible 9 adjacency values are used.
In the 14-value solution, we didn't use index $$$8$$$. Now, store in index 8 some unused adjacency value. Thus, we only need to use 8 adjacency values.
Anna.cpp Bruno.cpp
I definitely agree that bodyguard/worst_reporter/food_court/road_construction were all pretty standard/well-known DS/implementation problems, but that didn't seem like too much (especially for a team selection contest; testing implementation/fundamentals is probably the best way to pick a team).
The other problems were pretty interesting to me. Fever had some really nice observations (afterwards it was kind of like bodyguard, but it was a good implementation challenge), meetings was short and sweet, and events had some interesting ideas. I'm a little surprised you didn't mind escape_route actually, I thought it was a little boring and the time limit was ridiculously tight.
I did not mind escape_route because I didn't take a serious attempt at Day 2. Unfortunately I had only around 2.5 hours for it, and road_construction took basically all of that just to get 32 points. Escape route looked rather implementation-heavy as well, but I didn't give it enough thought to be confident in my opinion of that problem.
How would you solve escape route?
If I didn't state it clearly — I did not spend longer than a minute on this problem, so I have no idea :P
I definitely enjoyed foodcourt / fever / escape_route / meetings2. Fever was implementation oriented, but I don’t mind. And foodcourt is really simple: it’s not that all probs are impl heavy because they have segment trees inside :)
On the other hand, event2 was boring because it was a reiteration of APIO2009B.
Could someone who got AC on the Worst Reporter share their solution?
Let us solve the version with a tree. The medusas are just an annoyance.
This is basically LIS, but on trees. From each subtree you want to have a representation of the full information "what is the lowest cost so that the guy with the lowest rating in this subtree has rating $$$\ge x$$$?". You can think of this as a staircase-type function of the argument $$$x$$$. Of course you can do coordinate compression, so there are only $$$O(n)$$$ valid values of $$$x$$$, not $$$10^9$$$. However, if you look more closely, the result can only change at values of $$$x$$$ that are present in the subtree, so in fact there are only $$$subtree\_size[v]$$$ valid values of $$$x$$$ in the subtree of $$$v$$$. After you choose your favourite way to represent this knowledge, you can apply the "smaller to larger" method to handle merging subtrees.
I chose some messy way, but apparently the cleaner way to represent this function is to keep a map telling us by how much the function changes at each point. Adding two such staircase functions is then very easy, because you just add the two maps point-wise. You also need to support adding a new vertex, but that can be done. Since maps introduce an additional log factor, the whole solution works in $$$O(n \log^2 n)$$$ time.
Is it possible to solve this problem in $$$O(n \log n)$$$ time? I spent the whole contest optimizing $$$O(n \log^2 n)$$$ and couldn't get rid of TLE on one case ;-;
You can merge two BBSTs of sizes $$$m \le n$$$ in $$$O(m \log(n/m))$$$ using treaps or splay trees. This sums up to $$$O(n \log n)$$$ time to merge $$$n$$$ elements instead of $$$O(n \log^2 n)$$$ time. I don't know of a more elementary way though.
I think you can replace BBSTs with segment trees.
There was a CF article about mergeable segment trees. It was by some Chinese user, but I can't find it now. It was very simple to implement; just the analysis required some potentials.
Blog by TLE, I think this is the one you were talking about.
Yeah, I was talking about this. Thanks.
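For reference, the core of the technique from that blog looks roughly like this (a sketch with an assumed node pool and aggregate, not code from this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Dynamic segment trees over the value range, merged node by node. The
// total cost of all merges is O(n log n) amortized, by a potential argument
// on the number of existing nodes (see the linked blog).
const int MAXNODES = 1 << 22;
int lc[MAXNODES], rc[MAXNODES], cnt[MAXNODES], nodes = 1;  // node 0 = empty

int insertVal(int t, int lo, int hi, int pos) {  // add one element at pos
    if (!t) t = nodes++;
    cnt[t]++;
    if (lo < hi) {
        int mid = (lo + hi) / 2;
        if (pos <= mid) lc[t] = insertVal(lc[t], lo, mid, pos);
        else            rc[t] = insertVal(rc[t], mid + 1, hi, pos);
    }
    return t;
}

int merge(int a, int b) {            // destructively merge tree b into tree a
    if (!a || !b) return a | b;      // one side empty: reuse the other
    cnt[a] += cnt[b];                // pull up whatever aggregate you keep
    lc[a] = merge(lc[a], lc[b]);
    rc[a] = merge(rc[a], rc[b]);
    return a;
}
```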
There is another solution using HLD and std::set, which is much easier to implement.
Consider the tree case:
You can build a tree where $$$a[i]$$$ is the father of $$$i$$$. Let $$$dp[i]$$$ mean: considering the subtree of $$$i$$$, with $$$i$$$ selected to be unchanged, the maximum sum of $$$c$$$-values of unchanged people in the subtree of $$$i$$$.
$$$dp[i] = c[i] + \max \sum_{u\in S} dp_{u}$$$, where no two elements of $$$S$$$ are ancestor and descendant, and $$$h[u] \geq h[i]$$$ for all $$$u \in S$$$.
Analyzing the process, you can implement it easily:
If you jump out of the root, the current answer becomes $$$ans = ans + val$$$, and $$$tag[i]$$$ becomes $$$c[i]$$$.
After you add all the nodes, the final answer is $$$\sum c[i] - ans$$$.
A short solution which gives 79 pts:
https://paste.ubuntu.com/p/zqSXYR7XRc/
For the DAG case, you can change it into a tree!
Within an SCC, you can link the nodes from small to big.
The 100pts solution: https://paste.ubuntu.com/p/hGxpz8Z9dT/
Time complexity: $$$O(n \log n)$$$.
Your code clearly looks $$$O(n \log^2 n)$$$: you do a std::set operation for each heavy chain on the path...
Though, I think top trees can optimize this to $$$O(n \log n)$$$.
When visiting an element in a set, you either delete it or end the process, so it is $$$O(n \log n)$$$.
Oh, I see.
How would you combine two staircase-function maps? Would it simply be adding the contents of one map at the same indexes in the other?
If you store the deltas, it's just adding the contents.
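A tiny sketch of what that merge might look like (assumed representation, not code from the thread: the map stores, at each $$$x$$$ where the staircase changes, the size of the change), with the small-to-large swap that keeps the total work at $$$O(n \log^2 n)$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Point-wise addition of two staircase functions stored as delta maps.
// Always iterating over the smaller map gives the "smaller to larger" bound.
void mergeInto(map<long long, long long>& a, map<long long, long long>& b) {
    if (a.size() < b.size()) a.swap(b);
    for (auto& [x, delta] : b) a[x] += delta;
    b.clear();
}
```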
I see, thanks for the clarification and thanks Swistakk for the solution
You basically execute the binary search LIS algorithm on the tree, but do three extra things:
https://codeforces.me/blog/entry/88748?#comment-774036
After making the functional graph and dealing with the cycle at the end, you can just do dp on each tree similarly to CEOI 2019 Tree.
Problem Statements for all 4 days
Thanks for the contests! There was a lot of data structure implementation, but that is just what I like 8)
My favourite problem was shopping, even though I had no chance on that one :D
When will the day 4 scoreboard and the test data be released? (It currently shows only days 1, 2, 3.)
You can at least already upsolve on the contest website if you want.
Day 4 results are now out
How would you solve escape routes and bodyguards?
I barely got 70 points on escape routes in the contest with the following solution, but I hope it helps. Basically, the solution divides the queries into buckets (pairs (start_node, start_time)) such that, from the moment another day starts, the paths start from the same node. To get these buckets, you run Dijkstra backwards from each end of an edge, from the moment it becomes unusable. You end up with some time intervals that are equivalent for each starting node. Now, you just need to run Dijkstra normally for each of these pairs (start_node, start_time) and get the answer for a query based on these. The complexity is $$$O(N \cdot M^2)$$$. I would also be interested in a solution that gets 100 points, if anyone can share it. Code
For 100, I calculated for every pair of vertices $$$s, t$$$, and every edge $$$(s, u)$$$ incident on $$$s$$$, the minimum amount of time it takes to reach $$$t$$$ from $$$s$$$, if we start from $$$s$$$ at the last possible time we can cross edge $$$(s, u)$$$. I also added edges $$$(s, s)$$$ with last possible crossing time $$$0$$$, to calculate the minimum times to reach $$$t$$$ from $$$s$$$ if we start at the start of a day.
Then, for every vertex $$$s$$$, we handle all queries starting from that vertex. We do sweepline, from the last second of the day to the first, maintaining the minimum time it takes to reach every vertex we can reach without waiting for the next day, and adding edges as we become able to cross them.
Whenever an edge becomes crossable, we update minimum times to reach every vertex by the time it takes to reach the vertex through the edge.
Whenever the time decreases past a query, answer the query. Either travel directly to the target within the same day you started, or loop over which vertex you wait for the next day on.
This has complexity $$$\mathcal{O}(n^2 m \log n + n Q)$$$. The $$$\log n$$$ comes from using Dijkstra in the first phase.
Code
As no one has posted a solution for 100 pts on events yet, here is one:
Compress coordinates (now all interval extremes lie on [0,2N[).
For each i in [0,2N[, store the maximum left extreme of an interval ending at or before i (this can be done by sorting all intervals by right extreme, for example). Let this value be f[i].
Create a successor graph with edges i -> f[i].
Using dp, compute for each i in [0,2N[ the largest number of interior-disjoint intervals one can find in [0,i] (this is trivial, as dp[i] = dp[f[i]] + 1).
Notice that now we know how to compute the maximum number of interior-disjoint intervals in [L,R]: start at R and go successively i -> f[i] until the current position is < L.
However, we can do this faster using the dp array and the successor graph, since:
1. The maximal number of interior-disjoint intervals in [L,R] will always be either dp[R]-dp[L]-1 or dp[R]-dp[L].
2. We can check whether x intervals fit in [L,R] by binary fast-forwarding on the successor graph (checking if fast_forward(R,x) >= L).
Thus we check whether x = dp[R]-dp[L] intervals fit in [L,R], and thereby know the maximum number of interior-disjoint intervals in [L,R] in O(log N). (A sketch of this part follows below.)
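A sketch of the fast-forward part under the names above (my reconstruction, not the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// f[i] = maximum left extreme over intervals ending at or before i (-1 if
// none), on the compressed range [0, 2N[. Jumping i -> f[i] greedily packs
// interior-disjoint intervals from the right, so x intervals fit in [L, R]
// iff after x jumps from R we are still at a position >= L.
struct SuccessorGraph {
    int LOG, M;
    vector<vector<int>> up;                      // up[k][i]: after 2^k jumps

    SuccessorGraph(const vector<int>& f) : M(f.size()) {
        LOG = 1;
        while ((1 << LOG) < M) LOG++;
        up.assign(LOG, vector<int>(M, -1));
        up[0] = f;
        for (int k = 1; k < LOG; k++)
            for (int i = 0; i < M; i++)
                if (up[k - 1][i] != -1)
                    up[k][i] = up[k - 1][up[k - 1][i]];
    }
    int fastForward(int i, int x) const {        // -1 if we run out of intervals
        for (int k = 0; k < LOG && i != -1; k++)
            if (x >> k & 1) i = up[k][i];
        return i;
    }
    bool fits(int L, int R, int x) const {       // do x intervals fit in [L, R]?
        int p = fastForward(R, x);
        return p != -1 && p >= L;
    }
};
```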
Back to the original problem: we store the currently free intervals in a set of pairs (initially [0,2N-1]) and the current maximal possible number of intervals (initially dp[2N-1]).
We iterate over the intervals i: 0 -> N-1. For each interval we check, using the set, whether it fits in any free interval. If so, assume the interval is [L,R] and the free interval is [l,r].
We can incorporate [L,R] into our answer iff, after adding it to the answer, we are still able to find a total of K interior-disjoint intervals.
By adding [L,R] to the answer, the current max number of intervals becomes: current max intervals + MaxDisjoint(l,L) + MaxDisjoint(R,r) + 1 - MaxDisjoint(l,r). If this is still >= K, we add [L,R] to the answer, update the set of free intervals, and update the current max number of intervals.
This may seem unintuitive at first, as there may be intervals in [l,L] or [R,r] with indexes < i that therefore may not be used anymore. However, if any such interval (of index j < i) made a difference, it would imply that some set of K interior-disjoint intervals including interval j exists, which is a contradiction, since interval j (with j < i) was not included in the answer when we iterated over it.
This solution has overall complexity O(N log N).
The test data link in the contest page is not working :(
UPD: Now working :)
You can solve all the problems here: https://oj.uz/problems/source/547