E869120's blog

By E869120, 20 months ago, In English

Hello, everyone!

Japanese Olympiad in Informatics (JOI) Spring Camp 2023 will be held from March 19 to March 22, with a separate contest on each of the 4 days.

The duration of each contest is 5 hours. You can choose your start time freely from the specified range. Please note that if you start competing after 19:00 GMT, the contest still ends at 24:00 GMT (which means your duration is less than 5 hours). For example, if you start competing at 22:00 GMT, your duration is 2 hours.

The contest information is as follows; details are available on the contest page.

  • Number of problems each day: 3 or 4
  • Max score for each task: 100 points
  • Style: IOI-style (there may be partial scores)
  • Problem statements: Japanese & English
  • There may be some unusual tasks (e.g. output-only, communication, or reactive tasks), as in the IOI

The registration page and live scoreboard will be announced 10 minutes before the contest.

We welcome all participants. Good luck and have fun!

Tags joi, ioi
  • Vote: I like it
  • +304
  • Vote: I do not like it

»
20 months ago, # |
  Vote: I like it +11 Vote: I do not like it

I don't understand, is there a different contest each day?

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

What are "reactive tasks"?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    According to this IOI paper, reactive tasks are defined as follows: "Reactive tasks: Solutions comprise a single source file of a computer program that is compiled together with an “opponent” library provided by the organizers, and interacts with it according to the specification given in the task description. Such solutions are not allowed to read anything from the standard input, or write anything to the standard output."

    AKA interactive tasks.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I have been waiting for this for a while, so it's great to see this post. Good luck, guys!

»
20 months ago, # |
  Vote: I like it +10 Vote: I do not like it

Are there any live standings?

»
20 months ago, # |
  Vote: I like it +8 Vote: I do not like it

Are we allowed to discuss Day 1?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    How do you solve Passport faster than O(n^2)?

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      It is nearly identical to USACO 2021 December P1.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      It's a subset of Tickets ;-;

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Bruh, I solved Tickets during the contest without any problem, but didn't solve this.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

        There's one observation: at all times, the set of reachable positions forms a contiguous range; therefore you can just calculate the minimum number of tickets/passports needed to reach 1 and N, which is identical to Tickets.

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Can someone explain this in more detail, I can't figure out why my code worked.

»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

What's the point of the last subtask in Festival? I'm not criticizing it (yet?), just curious. It was worth almost nothing, which suggests there shouldn't be much difference between it and the previous subtask. One reason could be cutting off O(n^2) memory, which would seem perfectly fine, but the constraints are quite large and there's absolutely no way my O(n^2) passes within the TL. I spent about 20 minutes trying to optimize it and sped it up notably, but it is still about 5 times too slow.

Btw, for me it was much easier than Passport, even though the standings disagree heavily.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    What I heard from the editorial in the onsite contest hall is that the intended solution speeds up the $$$O(N^2)$$$ DP with a technique called "relaxed convolution", achieving a time complexity of $$$O(N^{1.59})$$$ or $$$O(N \log^2 N)$$$.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +15 Vote: I do not like it

      There's no way this was intended, right? Is there a faster solution without these techniques, or am I just supposed to optimize my $$$N^2$$$ DP more?

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +35 Vote: I do not like it

      What the actual f&#$ xd?

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Shouldn't have wasted an hour on this. This is high-end technology, a.k.a. magic.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I guess relaxed convolution is just D&C and FFT?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    May I ask how to do the O(n^2) dp? I have no idea how to approach this task...

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +16 Vote: I do not like it

      I am just going to sleep now and this is a lengthy explanation, so I would suggest the following. Visualize everything as segments on the OX axis. Try to do the DP that places numbers in order 1, 2, 3, ... Ask yourself the always-important DP question: "what do I need to know about the elements I have already decided?" We also need to know what the optimal greedy looks like (choosing the earliest end recursively).

      You should arrive at four qualitatively different cases of how a vertical line (say, at x=i+0.5) intersects the segment from the bad greedy and the segment from the optimal greedy (one of the four is "I already won and I don't care much"). You will also need to keep track of how many intervals you opened but did not close, distinguishing those that opened before and after the end of the last optimal segment, which gives O(n^3) states. You can get rid of the count of intervals that opened before the end of the last optimal segment by putting in some factorials, which reduces the state count to O(n^2).

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

What's the intended solution to currencies?

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Sort the edges, then find the point in time when the sum of edge weights on the path is at most y. Binary search plus a persistent/2D data structure works.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I see, thanks. I used Mo's algorithm on trees, and it required quite a bit of constant optimization to get 100 points.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Was your complexity $$$O(n^{1.5})$$$ or $$$O(n^{1.5} \cdot \log n)$$$? I got the former and I got 100/100 without any optimizations (can't check whether I was close to TL now though)

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          It was $$$O(n^{1.5})$$$. To be exact, my code has $$$n=2\times 10^5$$$ since I created a new node for every checkpoint (if there are $$$k$$$ checkpoints $$$c_1, c_2, \dots, c_k$$$ from $$$u$$$ to $$$v$$$, then the edges will be $$$u$$$ — $$$c_1$$$ — $$$c_2$$$ — $$$\dots$$$ — $$$c_k$$$ — $$$v$$$). (Please tell me if I'm dumb and could've done something better)

          There are no log factors.

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
            Rev. 2   Vote: I like it 0 Vote: I do not like it

            Makes sense. I did the same stretching of edges. I guess it helped me TL-wise that I contracted all edges with no checkpoints, which gave me a tree on exactly m edges, even though I did that only for my convenience (it's dubious whether that really makes things easier, but well).

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              Interesting idea. I guess my code would've passed easily if I had used the same edge contraction.

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            How do you remove the log factor? I used a Fenwick tree over the checkpoints to quickly query the maximum number of gold coins saved for a given amount of silver coins.

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ |
              Rev. 2   Vote: I like it 0 Vote: I do not like it

              Sort the $$$m$$$ checkpoints according to number of silver coins needed. Let $$$s = \left \lceil \sqrt{m} \right \rceil$$$. For $$$i=0, 1, ..., s-1$$$, maintain the sum of silver coins needed for all the active checkpoints $$$x$$$ satisfying $$$si \le x < s(i+1)$$$.

              By doing so, to answer a query we can simply sweep from the blocks $$$i=0$$$ to $$$s-1$$$, find the first "block" of checkpoints $$$j$$$ that the sum of silver coins needed to pass all blocks from $$$0$$$ to $$$j$$$ exceeds the number of silver coins we have. Then iterate over every single checkpoint inside the block $$$j$$$. (Some details omitted, I assume it's clear enough.)

              This reminds me of the "avoiding logs in Mo's algorithm" technique in this blog, I think it is a similar idea.

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 3   Vote: I like it +8 Vote: I do not like it

      With parallel binary search you can avoid the 2D data structure and replace it with a one-dimensional one with updates. Essentially, this reduces the problem to edge-weight updates and path-distance queries in a tree, which can be done in $$$O(q \log(n))$$$ with an Euler tour, a Fenwick tree, and LCA. With the $$$\log(n)$$$ factor of the parallel binary search, you get an $$$O(n \log^2(n))$$$ solution. I heard there's also an $$$O(n \log(n))$$$ solution to Currencies, which involves a persistent segment tree.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Mo I guess?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    $$$O((n+m+q)\log(n))$$$ with persistent trees I guess.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +26 Vote: I do not like it

    You can solve it in $$$O((m + q)\log(m) + n\log(n))$$$ using persistent segment trees. Normally it's $$$\log(C_{max})$$$, but you can make it $$$\log(m)$$$ by compressing values. The idea is to use persistent segment trees to find the sum and count of elements $$$\leq x$$$ on the path from a node to the root. You can do this by keeping a persistent segment tree at the root and, for every node, creating its segment tree by persistently updating its parent's tree. Then you can find those values for a path as $$$f(a) + f(b) - 2 \cdot f(lca(a, b))$$$. Once you can find that, you can binary search for the maximum price you'll take; this way one query is $$$O(\log^2(m))$$$. However, you can use the binary-search-on-segment-tree trick: you walk down the segment tree as usual, but instead of 1 pointer you keep track of 3.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      True, I solved it like this. They could have used larger constraints.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I don't get how you can "find the sum and count of elements <= x from a node to root". Can we do it without persistent segment tree?

»
20 months ago, # |
  Vote: I like it +5 Vote: I do not like it

Will we be able to upsolve?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I see an "Until analysis ends:" timer on the contest page of day 1. I guess that means upsolving is available.

»
20 months ago, # |
  Vote: I like it +8 Vote: I do not like it

Will there be an editorial after the contest ends?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I think there will be an editorial here (not posted yet, maybe after the contest), as the editorial of JOISC 2022 was posted here.

»
20 months ago, # |
  Vote: I like it +10 Vote: I do not like it

Contest Day 2 is over

The Day 2 contest is finished. Thank you for your participation; you can now discuss the problems.
Day 3 will start 2 hours later (March 21, 02:00 GMT — 24:00 GMT), so good luck and have fun!

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Hi. Will upsolving be open after all 4 days of contests, or can we upsolve somewhere right now?

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Day 1 upsolving: link (login required)

      Similarly, I think upsolving for the other days will be available (around 1 hour?) after the contest ends.

  • »
    »
    19 months ago, # ^ |
      Vote: I like it +4 Vote: I do not like it

    Hi. Is upsolving still open?

»
20 months ago, # |
Rev. 5   Vote: I like it 0 Vote: I do not like it

What’s the intended complexity for Mizuyokan? I only came up with $$$O(n\log^2{L} + Q \log{n} \log^3{L})$$$; it looks similar to IOI Elephants and Wombats.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Is this just using the fact that in a zigzag the big segment in an optimal solution has size $$$\leq \log(L)$$$? And putting that in a segment tree? Maybe there's some way to optimize the merge function in the segment tree, but it seems hard, and you would save at most one log factor.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Yes. I was also thinking that sqrt decomp is also viable.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Could you elaborate more?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    With this complexity you can barely solve one query within the TL, let alone 50000 of them (but that's also the best I came up with). However, Radewoosh told me that we can prove that some kind of greedy works.

    (The first key, but easy, fact is that all small chunks are single elements, which I assume from now on.) Let us assume we are solving a query for the entire sequence. Let $$$g(k)$$$ be the lowest index of a $$$k$$$-th small chunk in any viable solution. Let $$$f(i)$$$ be the highest $$$j<i$$$ such that both $$$j$$$ and $$$i$$$ can be consecutive small chunks. It is obvious that $$$f(g(k+1)) \ge g(k)$$$. But the key observation is that $$$g(k+1)$$$ can actually be set as the lowest $$$i$$$ such that $$$f(i) \ge g(k)$$$. There is a technical exchange argument involving $$$g(k)$$$, $$$f(i)$$$ and $$$i$$$ that goes along the lines of "if $$$g(k)$$$ and $$$i$$$ cannot be put as consecutive small chunks, then you can replace $$$g(k)$$$ with $$$f(i)$$$ and put $$$i$$$ into your greedy optimal sequence", which works because of some inequalities. You need to take some care about what happens around the borders, but that is less interesting, I guess.

    There is still a lot of work to be done, but I guess that was the stopping point for the majority of people

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      So most of this is true, but the explanation skips the fact that the things at the ends of the interval are kind of messy. For example, it's not clear whether we should start the partition with an "up" or with a "down", so just assume that we consider one of these two cases and look at the stuff somewhere in the middle (and then we can define $$$g(k)$$$ as mentioned).

      Further in the solution: it is mentioned that for some $$$v = g(k)$$$ we can define $$$jump(v)$$$ as the lowest $$$i$$$ such that $$$f(i) \geq v$$$. So the solution is to simply follow the function $$$jump(v)$$$ until we reach $$$b$$$ (again a bit messy at the end, but mostly like that). The first thing: only $$$O(\log(A))$$$ values of $$$jump(v)$$$ change when an update is performed (oh, it's important to mention that if $$$f(i)<i-65$$$, then we can just set $$$f(i)=i-65$$$ and the solution will still work), so we can update each of them in total time $$$O(\log^2(A))$$$.

      Now jumping could take $$$O(n)$$$ time, so we need to speed it up. The easiest (imo) way is to calculate $$$fastjump(v)$$$, which basically follows $$$jump(v)$$$ and stops when it reaches $$$v'$$$ such that $$$\lfloor\frac{v}{p}\rfloor \neq \lfloor\frac{v'}{p}\rfloor$$$ (and also remembers the number of jumps) for $$$p$$$ that we choose. If we set $$$p = \sqrt{n}$$$, when the values of $$$jump(v)$$$ change on an interval of length $$$O(\log(A))$$$, $$$fastjump(v)$$$ only change on an interval of length $$$O(\log(A)+\sqrt{n})$$$. With $$$fastjump(v)$$$ we can do multiple jumps at once which makes the complexity correct.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +8 Vote: I do not like it

    Mine is $$$O(n\log(A) + q(\log^2(A)+\sqrt{n}))$$$, where $$$A = 10^9$$$, but I guess we can get rid of $$$\sqrt{n}$$$ part (but it was the easiest one to implement).

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve council from day 2?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Do a bit of case analysis and you end up with M pairs of bitmasks of size N, $$$(A_j, B_j)$$$, which say: if you choose $$$i \in A_j$$$ and $$$j \in B_j$$$, the answer for the pair (i,j) increases by 1. Then you need to do a subset DP over the sets $$$B$$$ to achieve the biggest answer, given that i is contained in some mask among the As. Some care is needed for the i=j case.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      "Some care is needed for the i=j case": can you explain it further?

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        You need to maintain both the biggest answer (with its position) and the second biggest answer during the subset DP. When $$$i=j$$$, you have to use the second biggest one.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Just maintain the second maximum (assuming it has a different j index)

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I know that for each bitmask $$$A_i$$$ ($$$1 \le i \le N$$$), there is a corresponding bitmask $$$B_i$$$, which represents the set of ordinances that have exactly $$$\left \lfloor \frac{N}{2} \right \rfloor$$$ votes excluding the chairperson $$$i$$$. Therefore to find the optimal answer we need to find the $$$j$$$ such that $$$i \ne j$$$ and $$$popcount(B_i$$$ & $$$A_j)$$$ is minimized.

      Did I miss anything?

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        $$$X=B_j\land A_i$$$, then $$$X \subset A_i, B_j$$$, we can first set $$$dp[a]=count(a)$$$ for all subsets of some $$$B_j$$$, and then calculate the dp bottom up.

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Can you elaborate? A problem I found is the "subset of some $$$B_j$$$" includes the empty subset, so if the dp is calculated bottom up then all the values will be zero? How exactly is the dp done?

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            First, you set $$$dp[a]=count(a)$$$ for all subsets of some $$$B_j$$$; then you do the relaxation step for $$$y \subset x$$$: $$$dp[x]=\max(dp[x],dp[y])$$$.

»
20 months ago, # |
  Vote: I like it +8 Vote: I do not like it

How to solve problem Conveyor from day 2?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Group the vertices by depth mod 3, always mark the biggest group, and orient the confirmed edges upward toward the root.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +23 Vote: I do not like it

    Let us group vertices by their depth modulo 3 and take the biggest group. With one query you can learn at least one edge outgoing from each of these vertices (and these edges will be disjoint). Then recurse on all connected components in parallel. That gives $$$\log_{1.5}(n)$$$ queries.

    There is one additional caveat. Ideally, when recursing, we would like to forget that the edges whose directions we already know exist. That can't really be done, but you can orient them all towards the root and never query the root of a remaining connected component.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      The caveat you mentioned isn't really that important: when you are querying over that biggest group (placing the cakes) and you learn which tables the cakes ended up on, you can first check the children of $$$u$$$, and if there are no cakes on any of them, then you know the cake is on the parent of $$$u$$$.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I don't think this will save queries in the worst case, because objects placed on the roots may still go up, so placing objects on the roots is not guaranteed to yield any new information.

        I also think this will make the implementation slightly harder. It is possible for a child of $$$u$$$ to be a root of its own separate component if $$$u$$$ is a leaf in its own component. This means that if you are placing objects on roots, you may have placed an object there too, and so both $$$u$$$ and some children of $$$u$$$ can have objects placed on them, making it harder to determine which object started where. Though it may still be possible, the implementation will probably involve more thought and casework.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +22 Vote: I do not like it

    In fact, a lot of solutions can pass this problem. Here's the standard solution (in my view):

    Before an operation, if a product is put on a table, we say the table is occupied.

    As you can see, if a table is occupied, then after an operation at least one of its edges' directions can be confirmed. In order to finish in 30 operations, we want to occupy as many tables as possible in every single query, but occupied tables also influence each other.

    Also, confirmed edges can be annoying. If we know the direction between (a,b), then the next time we occupy table a, we use the reverse operation to make sure $$$a\leftarrow b$$$. What's more, a and b can't be occupied in the same operation.

    Based on the trivial thoughts above, we can figure out a method to solve a chain in three queries. Now it's time to generalize this method.

    • Root the tree at node 0 and compute every node's depth. Maintain three sets: the first contains all nodes with depth mod 3 = 0, the second all nodes with depth mod 3 = 1, etc.
    • Choose the largest set and occupy all of its nodes in this operation.
    • If every edge of a node is confirmed, delete it from its set. Repeat step 2.

    The correctness of this method is simple: if two nodes are in the same set, they won't influence each other. Each time, no less than $$$\frac 1 3$$$ of the undirected edges are confirmed, so at most $$$\log_{\frac 3 2}n=29$$$ operations are needed.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      There's another solution which is also elegant and easy to code, but I can't prove its correctness.

      It's easy to see that the fewer undirected edges a node has, the less influence it exerts when occupied.

      Maintain how many undirected edges every node has. In every operation, first iterate over the nodes which have only one undirected edge; if such a node hasn't been influenced yet, occupy it and record its influence. Then iterate over the nodes which have two undirected edges...

      If you only iterate over nodes with no more than two undirected edges, it can still pass. But Atomic-Jellyfish said he can hack it :(

  • »
    »
    20 months ago, # ^ |
    Rev. 4   Vote: I like it +8 Vote: I do not like it

    I solved Conveyor in a different way, without even using the knowledge of where each product ended up; it only looks at whether a product remained stationary (meaning all edges were pointing inwards for this vertex).

    It is easy to see that we can confirm the direction of a leaf edge in 1 query by just putting a product on the leaf and checking in the end result whether the product stayed in place or not. In fact, this same trick works for all leaves at the same time.

    After we know the direction of a leaf's edge, we can just as well ignore this edge from now on by pointing it in the right direction.

    So if there are a lot of leaves in our forest in the current step, we are done.

    Otherwise, there must be a lot of degree-2 vertices. Let's query those by placing a product on each degree-2 vertex (though not on two adjacent degree-2 vertices).

    With 3 queries we can test all combinations of directions of its two edges (not 4, because 1 query can be saved since we already know what its answer will be). This takes slightly too many queries, but during the degree-2 process we can also still use our leaf-removing strategy in parallel, and this gives AC.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Is this also about 30 queries?

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Yes, it passed, although it seems not easy to prove (if it even works in the worst case). You can prove some logarithmic bounds on it. A tree with $$$k$$$ leaves has at most $$$k$$$ branching nodes (degree $$$\geq 3$$$). So in either case, if you have $$$k$$$ leaves, then the inequality

        $$$k \geq \text{branching nodes} = n - k - \text{(degree 2 nodes)}$$$ holds. If you choose some cutoff point for $$$k$$$ to switch between the first and second strategy, you can get bounds on how many nodes will be deleted.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    It seems there are many interesting solutions to Conveyor. Here is my solution:

    Firstly, the subtask where the tree is a line can be solved in two queries.

    This can directly be used for the full solution. Take all nodes with degree $$$\leq 2$$$ and group them into connected components. These components will be lines and can be solved in parallel all together using two queries using the method for the previous subtask. Then "remove" all these nodes and all their edges by orienting the edges towards the remaining graph. And repeat.

    It is not hard to show that the number of nodes with degree $$$\leq 2$$$ is always strictly greater than half of the total number of nodes. This gives at most $$$2 \log_2 n$$$ queries. Although that can be greater than $$$30$$$, this is a loose upper bound, and it can be shown that this solution never needs more than $$$30$$$ queries.

    Here is how
»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve 'council'? I don't know how to find the minimum of __builtin_popcount(mask & a[i]) faster than O(4^m), where a[i] is the i-th bitmask of the input. And when/where can I upsolve these problems?

»
20 months ago, # |
  Vote: I like it +16 Vote: I do not like it

How to solve tourism from day 3?

  • »
    »
    20 months ago, # ^ |
    Rev. 6   Vote: I like it +60 Vote: I do not like it

    D&C on $$$1...M$$$: if you have a time interval $$$(l,r)$$$, calculate the answer for all queries containing the middle $$$m$$$. First, build a virtual tree on $$$C_l, ..., C_r$$$, then run a dfs from $$$C_m$$$ on the virtual tree; you get a triple $$$(x,y,w)$$$ for each edge meaning "increase the answer of all queries with $$$l\leq x \land y \leq r$$$ by $$$w$$$" (notice $$$x<m$$$, $$$y>m$$$), and this can be done with a fenwick tree. This gives $$$\mathcal{O}(n\log^2{n})$$$ complexity.

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 4   Vote: I like it 0 Vote: I do not like it

      Looks like you can do exactly the same thing with centroids. Checking for queries containing the centroid is a bit harder but it is comparable to the complexity of virtual trees.

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      There's also an $$$\mathcal{O}(n\sqrt{n})$$$ solution, assuming you know how to insert an arbitrary vertex into a virtual tree in $$$\mathcal{O}(1)$$$.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        assuming you know how to insert a random vertex to a virtual tree in $$$O(1)$$$

        How can this be done?

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          LCA is pretty easy to get, but you would need to access neighbouring DFS order vertices from a subset which is pretty hard to do fast.

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Yeah, I am wondering how to do it too. I wonder if a fast $$$O(m \sqrt{m} \log{m})$$$ can pass, since I passed subtask 5 using one. (Of course, the case is much more complicated for general trees.)

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              Assuming you do lca in O(1) the bottleneck part (finding successor on the dfs order) should be the same.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Very nice :). I had much more complicated solution ;__;

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 2   Vote: I like it +58 Vote: I do not like it

      Imo mine is easier.

      In each vertex of the tree put a value $$$0$$$. Then sweep over $$$i$$$ from $$$1$$$ to $$$M$$$. When at some $$$i$$$, if $$$i>1$$$, change the values on the path from $$$C_{i-1}$$$ to $$$C_i$$$ to be equal to $$$i-1$$$. Then change the value in the vertex $$$C_i$$$ to be equal to $$$i$$$. If there is a query with $$$R=i$$$, the answer is the number of vertices with a value greater than or equal to $$$L$$$.

      How to solve this? Ofc use HLD to change a tree path into $$$O(\log(n))$$$ intervals in an array, then solve the standard problem of "assign a given value x to an interval" and "count the positions with a value greater than or equal to y" (sets and fenwick).

      Imo it's really straightforward. I sweep over $$$i$$$ and for each vertex I remember the highest $$$j$$$ such that for an interval $$$[j,i]$$$ we should count this vertex.

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 3   Vote: I like it 0 Vote: I do not like it

        Very nice.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Very nice and straightforward idea, thanks!

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        "change an interval to be equal to some given x" and "calculate the number of positions with a value greater than or equal to y" (sets and fenwick)". Can you briefly explain how to do this? Thanks.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

I had a solution using Mo's algorithm in $$$O(n\sqrt{q}\log n)$$$ but I couldn't make it fit in the TL :(

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Did you use something like set.lower_bound for dfs order?

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Yes

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          I think inserting and using it++ would be faster.

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            I tried doing something like this :

            code

            but it still didn't work

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +23 Vote: I do not like it

      Using a vEB tree instead of a std::set is sufficient for AC (max time 2.01s).

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Can you share your vEB tree implementation? :p

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Could you share an implementation of it?

And since it's much faster, why doesn't it replace std::set?

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Because it's integer-only and uses more memory, I think.

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          vEB tree only works if the values are integers, and the memory complexity is $$$O(\max A)$$$ unlike std::set with $$$O(N)$$$ regardless of the range of the values.

          Here is an implementation of vEB tree.
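For readers who cannot open the link: a common practical stand-in for a vEB tree is a 64-ary bitmask trie, which supports insert/erase/successor over integers in $$$[0, U)$$$ in $$$O(\log_{64} U)$$$ word operations with $$$O(U)$$$ bits of memory. The sketch below is my own illustration, not the linked implementation.

```cpp
#include <bits/stdc++.h>
using namespace std;

// 64-ary bitmask trie over [0, U): level 0 has one bit per value, and a bit
// at level l+1 says "the corresponding 64-bit word at level l is nonempty".
struct BitTrie {
    int levels;
    vector<vector<uint64_t>> bits;
    BitTrie(uint64_t U) {
        levels = 1;
        for (uint64_t s = (U + 63) / 64; s > 1; s = (s + 63) / 64) levels++;
        bits.resize(levels);
        uint64_t s = U;
        for (int l = 0; l < levels; l++) { s = (s + 63) / 64; bits[l].assign(s, 0); }
    }
    void insert(uint64_t x) {
        for (int l = 0; l < levels; l++) { bits[l][x >> 6] |= 1ULL << (x & 63); x >>= 6; }
    }
    void erase(uint64_t x) {
        for (int l = 0; l < levels; l++) {
            bits[l][x >> 6] &= ~(1ULL << (x & 63));
            if (bits[l][x >> 6]) break;  // word still nonempty, parents stay set
            x >>= 6;
        }
    }
    // Smallest element >= x, or -1 if none exists.
    long long successor(uint64_t x) const {
        for (int l = 0; l < levels; l++) {
            if ((x >> 6) >= bits[l].size()) return -1;
            uint64_t w = bits[l][x >> 6] >> (x & 63);
            if (w) {
                x += __builtin_ctzll(w);
                while (l--)              // descend, taking the minimum each level
                    x = (x << 6) + __builtin_ctzll(bits[l][x]);
                return (long long)x;
            }
            x = (x >> 6) + 1;            // next word, viewed one level up
        }
        return -1;
    }
};
```

Unlike std::set it stores plain integers only, but the successor walk touches a handful of cache lines instead of chasing tree pointers, which is why it tends to be several times faster in problems like this one.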

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
            Rev. 3   Vote: I like it 0 Vote: I do not like it

            I think it's $$$O(\sqrt{A}+N\sqrt[4]{A})$$$

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

Just some questions to make the usage clear:

Is it possible to use custom comparators?

For veb.init(...), is the parameter a bitset of size 2 ^ log maxA?

»
20 months ago, # |
  Vote: I like it -8 Vote: I do not like it

Contest Day3 is Over

The contest Day3 is finished. Thank you for your participation, and you can now discuss the problems.
Day4 will start in 2 hours (March 22, 02:00 GMT — 24:00 GMT), so good luck and have fun!

»
20 months ago, # |
Rev. 2   Vote: I like it -8 Vote: I do not like it

What's the intended solution for Day3 T3 Tourism? I have an $$$O(n\sqrt n)$$$ solution using Mo's algorithm, but it only gets 10 points because of TLE >3

»
20 months ago, # |
  Vote: I like it +19 Vote: I do not like it

Will there be official editorials? If so, when?

»
20 months ago, # |
Rev. 2   Vote: I like it +48 Vote: I do not like it

Finally a good performance and an individual Day 4 win ^_^.

Subtasks in Guard were very nice; there were certainly a few leaps of faith in this problem, and these subtasks validated each of my successive hypotheses. In an ICPC-style binary format I would probably have lost faith at some point and not finished this task. Though the intermediate formulation seems elegant enough (not too trivial and not too difficult) that maybe I would have kept going. The last subtask being worth only 24 points is somewhat underwhelming, though; it was definitely the most demanding one for me (atypical for JOI to undervalue the hardest subtask, I would say). I would have removed the first subtask entirely (not sure what it was for) and moved those 12 points to the last one.

I am curious about the optimal Battle solution. I managed to get $$$n-1$$$ bits (i.e. 42 out of 43), but I did it with some random local search, while 43 is a pretty tight information-theoretic bound.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    How to solve the last subtask of problem guard?

    I passed the first subtask by considering the islands with $$$S_i=2$$$, and realized there should be one more guard between adjacent $$$S_i=2$$$ islands. Not clever enough to come up with the idea of the second subtask though.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

OK, maybe the first subtask has some point that I skipped over, actually. Anyway, I think some of the easier subtasks should be worth a bit less.

Do you already know how to get 76 pts (since you are asking about the last subtask), or should I start from the beginning?

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

I know how to get 76 pts, but I can't actually prove the steps from the second subtask to the fifth. "Submitted and it just passed", said my classmates.

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it +23 Vote: I do not like it

          I can't prove it either as I think I made it clear through mentioning my leaps of faith :). But I had a strong intuition, maybe I even had some proofs, but I couldn't be bothered to organize my thoughts properly, while already believing what the condition is and knowing I can validate it through subtasks.

For other people, I will mention what the answer for a given graph and $$$k=0$$$ is. It's the cost of the MST of the graph where edge $$$a-b$$$ costs $$$S_a+S_b$$$, minus the sum of all $$$S_i$$$, plus the maximum of all $$$S_i$$$ (note that the summands other than the MST do not depend on the graph at all). But I will not explain why.

For nonzero $$$k$$$ you want to remove some $$$k$$$ edges from that MST and replace them with the cheapest possible edges connecting the remaining connected components. These will obviously be edges connecting the global minimum with the minimums of the other components, whose total cost can be rewritten as $$$(k-1)S_{min} + \sum_{C} \min_{v \in C} S_v$$$, where $$$C$$$ ranges over all remaining components (and again, note that the first summand is independent of the graph, so we can forget about it). But you do not know which edges to remove yet.

By meta-intuition we can reach the conclusion "if this problem is solvable at all, it's probably true that the set of edges removed for $$$k+1$$$ is a superset of the edges removed for $$$k$$$", hence we want to determine these solutions one by one. Adding edges is easier than removing them, so let us reverse the timeline: the solution for $$$k=n-1$$$ corresponds to the empty graph, and we add edges from the original MST, getting answers for $$$n-2, n-3, \ldots$$$. We always want to add the edge that increases the cost by the smallest possible amount. By the formula from the previous paragraph, that change is the sum of its endpoints' values minus the larger of the two minimums of the two components its endpoints belong to (because that value stops being a minimum in some component).

How to find that edge efficiently? Each component keeps all edges going out of it, and we maintain them with the standard small-to-large technique. The key problem is that we cannot maintain these costs explicitly. Instead, the costs associated with edges leaving a particular component will not include the "minus minimum" part; they are simply the sums of their endpoints' values. Each component produces a single candidate for the cheapest edge: the outgoing edge with the smallest sum of endpoint values, minus the minimum of that component (note that the minimum on its other end may be bigger, so that cost may be fake). Additionally, we keep a global set of candidates from all components. The best edge is always the best candidate, and all of this is easily updatable. Note that a single edge may appear twice in that set of candidates with two different costs, but everything works out anyway.
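The $$$k=0$$$ claim above translates directly into a small Kruskal sketch. This is only an illustration of the stated formula (the names guardZero and DSU are mine, and correctness rests on the comment's unproven claim), not the full solution for general $$$k$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Plain DSU for Kruskal (no ranks; path compression is enough for a sketch).
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b; return true;
    }
};

// k = 0 formula from the comment: MST cost with edge (a, b) weighing
// S_a + S_b, minus sum of all S_i, plus max of all S_i.
long long guardZero(int n, const vector<long long>& S,
                    vector<array<int, 2>> edges) {
    sort(edges.begin(), edges.end(), [&](const auto& e, const auto& f) {
        return S[e[0]] + S[e[1]] < S[f[0]] + S[f[1]];
    });
    DSU d(n);
    long long mst = 0;
    for (auto& e : edges)
        if (d.unite(e[0], e[1])) mst += S[e[0]] + S[e[1]];
    long long sum = accumulate(S.begin(), S.end(), 0LL);
    long long mx = *max_element(S.begin(), S.end());
    return mst - sum + mx;
}
```

For example, on a path with $$$S = \{1, 2, 3\}$$$ the MST is the whole tree with cost $$$3 + 5 = 8$$$, so the formula evaluates to $$$8 - 6 + 3 = 5$$$.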

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

The trick of reversing the timeline is really cool. Thanks for your solution! Would you like to share how you developed your meta-intuition? ;)

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ |
                Vote: I like it +15 Vote: I do not like it

              That's a very interesting question! A key difference between solving problems in research setting and in competitive programming setting is that for problems in competitive programming we know that they are solvable within given limits and can guide our thinking process based on that information. As such, we can rule out all the approaches that would not lead to efficient solutions. For example, for a brief moment I had an idea that the solution to Guards may have something in common with weighted vertex covers, but I quickly ruled it out since anything connected to vertex covers on general graphs is NP-hard.

We can also get some intuition about their difficulty if we have access to the scoreboard: rule out approaches that seem too hard if a problem has a lot of solves, and ones that seem too easy if a problem was not solved by many.

Sometimes, when I am at an intermediate point of solving a problem and have reduced the original problem to another one (like here with MSTs etc.), I judge whether the reduced problem seems elegant and fun enough. If yes, then my reduction is quite likely fine; if not, I might have made a mistake somewhere, and I recheck my thought process. I do something similar for solutions whose correctness I am not sure about: I can often tell intuitively whether an algorithm "looks like a model solution" and increase my confidence in unproven solutions that do.

Or I may just go with an intuition like "if this problem is solvable at all, it has to be done this way / some claim must be true, because otherwise it would just be too hard", like the one I described for this problem in the previous comment.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

I did each of the first four subtasks one by one, although the idea for the third and fourth subtasks became clear almost immediately after I solved the second.

    I didn't solve $$$N \le 16$$$ in contest, but I assume it was some kind of brute force?

»
20 months ago, # |
Rev. 2   Vote: I like it +21 Vote: I do not like it

Btw I suppose that the vast majority of solutions to Travel were $$$O(n \log n \log R)$$$ (where $$$R$$$ is the range of coordinates), so let me describe a tricky $$$O(n \log R)$$$ one.

First things first: we change direction at most $$$O(\log R)$$$ times, because every two direction changes at least double the covered distance. So we can simulate the process quickly if we manage to group consecutive steps in one direction.

Let us assume that we have covered the interval $$$[a,b]$$$ and are currently at $$$b$$$ (where $$$a$$$ and $$$b$$$ are indices of points), and denote the distance between $$$a$$$ and $$$b$$$ by $$$L$$$. We will definitely take all the segments to the right as long as they are at most $$$L$$$. When we get to the first segment that is at least $$$L$$$, we will either change direction (which can happen at most log times) or take this long segment, which at least doubles our covered interval (compared to the initial $$$[a,b]$$$) and hence can also happen at most log times. So if we can find that segment quickly, we are done. This is easily done with a segment tree, but since I am at the same time stupid and clever, I missed that and did something else. Note that even if that long segment only has length at least $$$L/2$$$, we are still fine, because we increase our covered distance by a factor of 1.5 instead of 2, which is still enough. Because of that, we may group segments by the logarithm of their length and, for each $$$i$$$ and $$$k$$$, precompute the first segment of length at least $$$2^k$$$ to the right of the $$$i$$$-th position (easily done in $$$O(n \log R)$$$), and do the same for going left.

(Actually I was stupid once again and used sets and lower_bound for the last part, but Radewoosh made me realize it can be done in constant time.)

  • »
    »
    20 months ago, # ^ |
    Rev. 4   Vote: I like it 0 Vote: I do not like it

My solution was $$$O(n + q(\log R + \log n))$$$ with a relatively simple implementation; how would a solution be $$$O(n \log n \log R)$$$?

    My idea was to compute right[i] and left[i], the last node you will reach before turning back if you are currently at node i and going right or left respectively.

    To do this, notice for each candidate turning point x[i], you can calculate the interval your starting node must be in to turn at that point (actually it's one before your starting node): WLOG say you're going right, then the distance to the next point at x[i] is x[i+1]-x[i]. So to turn, the node before your starting node must be to the right of x[i] - (x[i+1] - x[i]) = 2 * x[i] - x[i+1]. Similarly, the formula going left is 2 * x[i+1] - x[i].

    Then iterate in the reverse direction of each of the arrays (iterate leftward on right array and vice versa), and keep a stack of pair<turning node, boundary of interval>, popping from the stack when the current point is not in the interval.

Then using this you can compute the distance in $$$O(\log R)$$$ per query, as you change direction at most $$$O(\log R)$$$ times, as in Swistakk's solution (every two direction changes at least double the covered distance). For each direction change the next node is nxt = right[pos + 1] or nxt = left[pos - 1].

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 2   Vote: I like it +3 Vote: I do not like it

Straightforwardly binary search on whether we should jump to the left or to the right.

If we jump to a turning point, then the interval will be twice as long after the jump.

If we jump to a non-turning point, then the interval will be twice as long after the next jump.

      Thus we will jump $$$O(\log R)$$$ times. The time complexity is $$$O(n\log n\log R)$$$.

      And the simple code
      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

I don't understand the code. What are a and s (s is not the same s as in the problem statement)? Is this also computing right[i] and left[i]?

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

$$$x$$$ is $$$S_i$$$ for the query, $$$s$$$ is the answer to the query, and $$$a_i$$$ is $$$X_i$$$ in the problem. Sorry for not explaining the variables. The for-loop decides whether we should jump right (if(...){...}) or left (else{...}).

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

OK, the code was hard to read, but I think I understand now: you can jump to a non-turning point here, but it is still $$$O(\log R)$$$ as the interval keeps doubling.

Actually my solution is probably better than $$$\log R$$$ amortized, as it only jumps to actual turning points, and it only reaches $$$\log R$$$ if the points are exponentially spaced around a single point, like $$$X_i = (-2-\epsilon)^i$$$ (in order of turning points, not in increasing order). I don't know how to prove a better bound though.

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

Ah yes, $$$p$$$ may not be the turning point. But I can still prove that the interval becomes twice as long after every two operations. I will fix the comment.

              As for your solution, if there's a testcase like:

• First $$$O(\log R)$$$ points placed, exponentially spaced from $$$-n$$$ to $$$-10^9$$$.

• Then $$$O(n)$$$ points placed from $$$-n$$$ to $$$n$$$.

• Then $$$O(\log R)$$$ points placed, exponentially spaced from $$$n$$$ to $$$10^9$$$.

              Your solution will run in $$$\Theta(q\log R)$$$ on queries, I think.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

How do you guarantee $$$O(\log R)$$$ direction changes from right[i] and left[i]?

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

It's the same as in Swistakk's solution: we change direction at most $$$O(\log R)$$$ times, because every two direction changes at least double the covered distance. I'll edit to add a bit more detail.

        • »
          »
          »
          »
          »
          20 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          I don't think that using left[i] or right[i] will get us to the next turn in $$$O(1)$$$ steps, do we not need to account for the interval we are covering?

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

No, left[i] and right[i] let us jump to the next turn immediately, and you do not need to account for the interval covered by the jump: the distance jumped is smaller than the distance back to the next-next jump point (turning point $$$\pm 1$$$), so it is impossible to have a turning point in between.

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

I must be missing something; would you not need to do a binary search on the stack you were maintaining?

              • »
                »
                »
                »
                »
                »
                »
                »
                20 months ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

The stack is only used for the precomputation of left[i] and right[i].

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Interesting idea. During the contest I thought you could answer the queries offline using a priority queue and DSU. It looks like you only need to store the logarithm of the covering interval, and maintain $$$X[i]-X[i-1]\leq 2^k$$$ in the DSU. This gives $$$\mathcal{O}(N\alpha{(N)} + Q\log{R})$$$.

»
20 months ago, # |
  Vote: I like it +72 Vote: I do not like it

Contest Day4 is Over

The contest Day4 (final day) is finished. The overall ranking is as follows. Every problem was solved by at least one participant, but no participant got 1200 points.

Again, thank you for your participation, and let's discuss the problem.

Place Username Day1 Day2 Day3 Day4 Total
1st csy2005 300 260 300 277 1137
2nd yzc2005 300 260 300 247 1107
3rd He_Ren 300 260 300 233 1093
4th BurnoutAg7 300 228 300 260 1088
5th mhq 300 228 287 247 1062
6th xtqqwq 251 260 300 238 1049
7th Radewoosh 200 300 300 200 1000
8th Mr_Eight 237 228 300 227 992
9th Irisu 251 228 300 209 988
10th flower 205 228 300 247 980
10th larryzhong 210 228 300 242 980
»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

What's the intended complexity for Cookies? The best I can think of is O(mS * dsu_complexity(S)), but I get TLE.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +45 Vote: I do not like it

I passed with $$$O(n S \log(n) / w)$$$, where $$$w$$$ is the word size, thanks to bitsets. Maybe other people can comment on what they did. This ran in ~0.2s. I reduced the problem to a knapsack-style DP with some extra constraints, and noticed that a bunch of states were unreachable, giving $$$O(n S \log(n))$$$ states. My states were just true or false, so I optimized the transitions with bitsets.

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 2   Vote: I like it +28 Vote: I do not like it

Can you explain why there are only $$$nS\log n$$$ states? My solution has $$$nmS$$$ states and it can't pass.

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 2   Vote: I like it +20 Vote: I do not like it

I ordered the items $$$B_i$$$ from large to small. For each prefix, sum, and number of items I calculated: is it possible to get this sum choosing some subset (possibly with repeats) of $$$k$$$ items of this prefix? This seems like not enough state, as there are also the constraints coming from the $$$A_i$$$; by transitioning cleverly those can be satisfied as well.

Because the $$$B_i$$$ are distinct, for the $$$i$$$'th prefix there can be at most $$$S/(m+1-i)$$$ items picked, so the total number of states is a harmonic sum. I guess my initial complexity isn't totally correct, but all those quantities are at most $$$15000$$$ anyway.
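The bitset optimization can be illustrated on a simplified 0/1 version of the knapsack (my own sketch, not the contest solution; in particular it ignores the $$$A_i$$$ constraints and repeats): reach[k] is a bitset whose bit $$$s$$$ says whether sum $$$s$$$ is achievable using exactly $$$k$$$ items, and taking an item of size $$$b$$$ is a single shift-or, which is where the $$$1/w$$$ factor comes from.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXS = 1 << 10;  // illustrative bound on the sum (hypothetical)

// reach[k] has bit s set iff sum s is achievable using exactly k items,
// each item taken at most once. One shift-or handles w states per word.
vector<bitset<MAXS>> reachableSums(const vector<int>& items, int maxCount) {
    vector<bitset<MAXS>> reach(maxCount + 1);
    reach[0][0] = 1;
    for (int b : items)
        for (int k = maxCount; k >= 1; k--)   // descending k: 0/1 knapsack
            reach[k] |= reach[k - 1] << b;
    return reach;
}
```

With items {3, 5, 7} and maxCount 2, reach[1] has bits 3, 5, 7 set and reach[2] has bits 8, 10, 12 set; the harmonic-sum bound on reachable (prefix, count) pairs then caps how many of these bitsets actually need to be touched.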

»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Am I the only one who can't pass the first test of day 4 battle? (nvm I'm stupid)

»
20 months ago, # |
  Vote: I like it +5 Vote: I do not like it

Where can we upsolve?

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve Battle from Day 4? My brute force only solves it for N=42.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    In order to know we need to sacrifice Tutis to zh0ukangyang

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 4   Vote: I like it 0 Vote: I do not like it

      The $$$N=42$$$ strategy that I have is to find some subset of $$$(49 - N)$$$ squares and values for each $$$(X, Y)$$$ pair such that each pair of $$$(X, Y)$$$ s should have at least one common square with a different value.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +35 Vote: I do not like it

In fact, I posted this blog two days ago. I couldn't find a set of solutions using the diagonal, so I used the first $$$3 \times 4$$$ rectangles.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

From E869120's review:

    First, let's consider embedding the information of $$$(X, Y)$$$ in the $$$8$$$ squares of the diagonal. The method of embedding can be constructed by simulated annealing.
    In this case, if $$$X = Y$$$, $$$42$$$ squares remain, and if $$$X ≠ Y$$$, $$$43$$$ squares remain, so we can achieve $$$N = 42$$$.

    Next, in the case of $$$X = Y$$$, we will embed not only $$$(X, Y)$$$ on the diagonal but also the information of the last character of the string. Then, we need to embed $$$256$$$ types into $$$8$$$ characters, but the method of embedding can be discovered in a realistic time by speeding up the simulated annealing.

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 3   Vote: I like it +10 Vote: I do not like it

Duh, that's exactly what I did/tried to do, but I did not bother to improve my hill climbing to simulated annealing :/

A bit disappointing, but thanks for explaining anyway :p It's a cool problem though.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Hm, somehow I didn't think to encode message bits together with $$$(X, Y)$$$, but it does help with symmetry, because there are $$$2$$$ ways in which the message can be changed if $$$X=Y$$$ and $$$4$$$ if $$$X \neq Y$$$. My $$$N = 42$$$ solution didn't use the diagonal at all, so I thought that maybe I could improve it to $$$N=43$$$ by just running it for longer.

»
20 months ago, # |
  Vote: I like it +6 Vote: I do not like it

Can you please provide the test cases for Day1's Currencies?

»
19 months ago, # |
  Vote: I like it +9 Vote: I do not like it

Now that the analysis mode has ended, will the resources (test data, solutions, etc.) get published?

»
19 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Hello, are there any plans to publish the test cases?

»
19 months ago, # |
  Vote: I like it 0 Vote: I do not like it

It seems that the tasks will be published here from the contest site, but the link is currently broken.

»
19 months ago, # |
Rev. 2   Vote: I like it +27 Vote: I do not like it

It seems that the tasks and test data are now uploaded to AtCoder. You can now upsolve them here.

UPD: tasks and test data are uploaded here.

»
19 months ago, # |
  Vote: I like it +16 Vote: I do not like it

Will the tasks and test data page be fixed?

»
19 months ago, # |
  Vote: I like it +10 Vote: I do not like it

Can you fix the test data page please? E869120

»
17 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Are there any official solutions?

»
16 months ago, # |
  Vote: I like it +18 Vote: I do not like it

Could you please provide any hints regarding the implementation of the interactor for the problems "The Last Battle" and "Belt Conveyor"? Formulating a sound strategy by just looking at the test data seems to be impossible for these two problems. It would be nice if the official stub used in the contest was provided.

»
12 months ago, # |
  Vote: I like it -12 Vote: I do not like it

use your brain pls stp