skavurskaa's blog

By skavurskaa, history, 8 years ago, In English

Today the ACM ICPC 2016 Brazil Sub-Regionals contest was held, featuring 12 problems and more than 600 teams competing to advance to the Latin America Regional Finals, which will take place in November. I created this thread so we can discuss the problems in the contest. The problem set can be viewed at this link (in Portuguese): http://maratona.ime.usp.br/maratona.pdf

A — Simple ad-hoc

B — Dynamic programming on DAG, binary search, square root decomposition

C — Shortest-paths modelling (I have a tutorial on this: http://codeforces.me/blog/entry/45897)

D — Simple maths

E — DP over subsets (bitmask dp)

F — Graphs

G — Range Sum Query 2D

H — Simple palindrome check

I — Greedy algorithm

J — Sweep line

K — DP, geometry

L — DFS to check connected components

My team solved 7 problems (A, C, D, G, H, I, L); we also tried to solve B and J but failed. Btw, we advanced to the regional finals! :D


»
8 years ago, # |

Auto comment: topic has been updated by skavurskaa (previous revision, new revision, compare).

»
8 years ago, # |

F is really simple. Find 2 values for each tree: the length of the longest path consisting of middle nodes, and the length of the path of middle nodes starting from the root. Then, with these 4 values, we can compute the size of the merged tree by fixing the root of one of the trees, or by merging both roots. The idea is that we can only overlap the two trees along one of these middle-node paths.

I'm also pretty sure that E can be solved using DP over subsets, similarly to the DP solution of TSP, after precomputing the sum of all numbers after removing a subset of digits. But my team could not finish it in time.
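For reference, the generic shape of that TSP-style DP over subsets looks like this (a sketch of Held-Karp on a placeholder distance matrix, not a solution to E itself; for E the transition cost would come from the precomputed digit-removal sums):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Held-Karp: dp[mask][i] = min cost of a path that visits exactly the
// cities in 'mask' and ends at city i. O(2^n * n^2) time overall.
int tsp(const vector<vector<int>>& d) {
    int n = d.size();
    const int INF = 1e9;
    vector<vector<int>> dp(1 << n, vector<int>(n, INF));
    dp[1][0] = 0;                       // start the tour at city 0
    for (int mask = 1; mask < (1 << n); ++mask)
        for (int i = 0; i < n; ++i) {
            if (!(mask >> i & 1) || dp[mask][i] == INF) continue;
            for (int j = 0; j < n; ++j)
                if (!(mask >> j & 1))   // extend the path to an unvisited city
                    dp[mask | 1 << j][j] =
                        min(dp[mask | 1 << j][j], dp[mask][i] + d[i][j]);
        }
    int best = INF;                     // close the tour back at city 0
    for (int i = 0; i < n; ++i)
        best = min(best, dp[(1 << n) - 1][i] + d[i][0]);
    return best;
}
```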

Congrats on advancing to the finals : ).

»
8 years ago, # |

Would you mind explaining how you can reduce problem G to a 2D Range Sum Query problem?

  • »
    »
    8 years ago, # ^ |

Let's store two 2D arrays, MB and MW. MB[i][j] equals 1 if there is a black piece at position (i,j) and 0 otherwise; MW is the same but for white pieces. Now let's build an RSQ structure for MB (call it RMB) and an RSQ structure for MW (call it RMW). We can iterate over all positions (i,j), and for each position check all squares that have (i,j) as their top-left corner. There are at most N such squares per position. For each square, we use RMB and RMW to check whether there are black pieces and white pieces in that range. Assuming we use a DP (prefix-sum) RSQ, each query is O(1). So the final complexity is O(N^3), because for each of the N^2 positions we check at most N squares in O(1) each.
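The prefix-sum RSQ described above can be sketched as a small standalone structure (the struct name is mine; build each of RMB and RMW from its 0/1 matrix, and a query result of 0 means the square contains no pieces of that color):

```cpp
#include <vector>
using namespace std;

// 2D range-sum structure via prefix sums: build in O(n*m),
// then query any sub-rectangle in O(1).
struct RSQ2D {
    vector<vector<int>> P;
    RSQ2D(const vector<vector<int>>& M) {
        int n = M.size(), m = M[0].size();
        P.assign(n + 1, vector<int>(m + 1, 0));
        for (int i = 1; i <= n; ++i)
            for (int j = 1; j <= m; ++j)
                P[i][j] = M[i-1][j-1] + P[i-1][j] + P[i][j-1] - P[i-1][j-1];
    }
    // sum over rows r1..r2, columns c1..c2 (0-based, inclusive)
    int query(int r1, int c1, int r2, int c2) const {
        return P[r2+1][c2+1] - P[r1][c2+1] - P[r2+1][c1] + P[r1][c1];
    }
};
```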

»
8 years ago, # |

Where can we see the problems in English?

»
8 years ago, # |

Can someone give me a hint about problem B? I implemented sqrt decomposition with binary search on the answer, but I keep getting TLE :/

Submission: http://codeforces.me/contest/1/submission/20722384

  • »
    »
    8 years ago, # ^ |

The code looks generally sound to me; unfortunately, the only thing I can suggest at the moment is to try to improve your constant factor (it seems the intended solution was something like Mo's algorithm + a BIT, which has the same complexity but maybe runs faster on the test cases they used?)

    My solution actually doesn't have the sqrt(n) factor, but given that sqrt was intended, it shouldn't be necessary to do better.

    • »
      »
      »
      8 years ago, # ^ |

Did you submit your solution using kth order statistics? I remember we talked a little about solving the problem that way, but I noticed that the answer to a query isn't always the median of the interval. Maybe I'm wrong, though.

      • »
        »
        »
        »
        8 years ago, # ^ |

Yes, my solution on URI uses kth order statistics. The answer might not always be the median, but it's always either the median or the first number present in the interval right below the median.

I then adapted my code to support two queries instead of one:

query(l,r,k) — gives the kth number of the interval, same as in the quora link
count(l,r,k) — counts how many numbers in the interval are smaller than or equal to k

Then you should check whether count(l,r,median) or count(l,r,median-1) gives a better probability (if median-1 is better, you also need to know the number right below the median, so you answer query(l, r, count(l,r,median-1))).
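To make the two primitives concrete, here are naive stand-ins (the signatures follow the comment above, but the bodies are mine; a real solution would back them with a wavelet tree or persistent segment tree for O(log n) per call):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// kth smallest value in a[l..r] (1-based k) - naive: sort a copy of the range.
int query(const vector<int>& a, int l, int r, int k) {
    vector<int> t(a.begin() + l, a.begin() + r + 1);
    sort(t.begin(), t.end());
    return t[k - 1];
}

// number of values in a[l..r] that are <= k - naive linear scan.
int count(const vector<int>& a, int l, int r, int k) {
    int c = 0;
    for (int i = l; i <= r; ++i) c += (a[i] <= k);
    return c;
}
```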

  • »
    »
    8 years ago, # ^ |

I don't understand your logic (I only looked briefly at your code), but here is my idea for the problem:

First we have to compute, for each student x, the value M[x]: the minimum student the ball can reach if the teacher drops it to student x. We can do this with DP on the following recurrence:

    M(x) = smallest vertex in the cycle if x belongs to a cycle

    M(x) = min(x,M(next[x])) otherwise

Now that we have this array, we can build a square root decomposition structure over it (let's call the structure SRD). Sort the queries in SRD order, and now we only have to implement the SRD interface (add element, remove element, process query). Let's use a BIT or segment tree to store, for the current interval [L,R], the frequency of each value. The SRD add(x) and remove(x) are then easy to implement: just update the frequency of the value (BIT.insert(M[x],1) or BIT.insert(M[x],-1)). The process-query function is implemented by binary searching over the BIT for the largest x such that BIT.query(1,x) is at most 50% of the size of the interval [L,R]. Then you choose between printing this value and the next one (the first value above the 50% mark): check which one is closer to 50%, and if both are equally close, pick the smaller.

The final complexity should be O(N) for the DP preprocessing and O(N*sqrt(N)*log(N)) for the SRD algorithm to process all queries, which should be enough to pass given the constraints.
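The M[x] preprocessing can be sketched on its own (next[x] is whoever student x passes the ball to; the iterative cycle handling and function names are my own arrangement of the recurrence above):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// M[x] = smallest node reachable from x by following nxt[] forever;
// every node on a cycle gets the minimum node of its cycle.
vector<int> computeM(const vector<int>& nxt) {
    int n = nxt.size();
    vector<int> M(n, -1), state(n, 0);   // 0 = unseen, 1 = on current path, 2 = done
    for (int s = 0; s < n; ++s) {
        if (state[s] != 0) continue;
        vector<int> path;
        int x = s;
        while (state[x] == 0) { state[x] = 1; path.push_back(x); x = nxt[x]; }
        if (state[x] == 1) {             // closed a new cycle: spread its minimum
            int mn = x;
            for (int y = nxt[x]; y != x; y = nxt[y]) mn = min(mn, y);
            int y = x;
            do { M[y] = mn; state[y] = 2; y = nxt[y]; } while (y != x);
        }
        while (!path.empty()) {          // unwind the tail: M(x) = min(x, M(next[x]))
            int y = path.back(); path.pop_back();
            if (state[y] != 2) { M[y] = min(y, M[nxt[y]]); state[y] = 2; }
        }
    }
    return M;
}
```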

    • »
      »
      »
      8 years ago, # ^ |

My algorithm runs in O(N*sqrt(N)*log(N)) too :/

It takes at most 3.5s on one test case on my PC (B_35). I think the only way is to try to improve the constant factor, as ffao said earlier. Sad story...