heavylightdecomp's blog

By heavylightdecomp, history, 22 months ago, In English

Recently, I was solving a problem that involved disjoint ranges, and while stress-testing my solution I found a simple trick to generate such ranges fairly and efficiently.

Let us first examine a set of $$$N = 3$$$ disjoint ranges: [{2,3}, {6,7}, {4,5}]. If we list their endpoints in sorted order, we get a sequence of $$$2 \times N$$$ integers in which adjacent integers pair up to form the ranges: [2,3, 4,5, 6,7].

It follows from this observation that you can generate $$$N$$$ disjoint ranges by first generating $$$2 \times N$$$ unique integers (duplicates would break disjointness), sorting them, and then "matching" adjacent integers together.

Here is some Python code:

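A minimal sketch of the idea (the function name and the endpoint bound max_val are illustrative choices, not fixed by the trick):

```
import random

def random_disjoint_ranges(n, max_val=100):
    """Generate n pairwise disjoint ranges with endpoints in [1, max_val]."""
    # 2*n unique endpoints -- duplicates would break disjointness.
    endpoints = sorted(random.sample(range(1, max_val + 1), 2 * n))
    # Pair adjacent endpoints: (e[0], e[1]), (e[2], e[3]), ...
    return [(endpoints[i], endpoints[i + 1]) for i in range(0, 2 * n, 2)]

print(random_disjoint_ranges(3))  # e.g. [(2, 3), (4, 5), (6, 7)]
```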

As an additional advantage, the output is sorted.


By heavylightdecomp, history, 2 years ago, In English

Hello Codeforces,

Recently I was solving some more problems and came across IPSC 2015 G, Generating Synergy. I think the problem and my alternative solution are pretty cool.

TL;DR: store a monotonic stack of updates with decreasing maxdepth in each segtree node; to answer a query, binary search in $$$\log N$$$ monostacks and take the latest relevant update.


Abridged problem version: You are given a tree of $$$N$$$ nodes, rooted at node $$$1$$$. Initially, the value of all nodes is $$$1$$$. Handle $$$Q$$$ operations of 2 possible types:

  • Update all nodes that are in the subtree of node $$$a$$$ and have depth $$$\le dep_a + l$$$ to value $$$c$$$. Here $$$dep_a$$$ (the depth of node $$$a$$$) is defined as the number of nodes on the path from node $$$a$$$ to the root, so the root has depth $$$1$$$, its direct children have depth $$$2$$$, and so on.
  • Find the value of node $$$a$$$.

We first apply the Euler Tour Technique. The original update operation then becomes: update all positions in $$$[tin_a, tout_a]$$$ whose depth is $$$\le dep_a + l$$$ (in other words, the update has a maxdepth of $$$dep_a + l$$$). The query operation becomes: find the value of position $$$tin_a$$$.
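For reference, a minimal sketch of computing $$$tin$$$, $$$tout$$$ and depths with an iterative DFS (the adjacency-list format and function name are illustrative; only the $$$tin$$$/$$$tout$$$/$$$dep$$$ notation comes from the text above):

```
def euler_tour(n, adj, root=1):
    """Compute 1-based tin/tout positions and depths for a rooted tree.

    adj is a 1-indexed adjacency list; dep[root] = 1 as in the statement.
    """
    tin, tout, dep = [0] * (n + 1), [0] * (n + 1), [0] * (n + 1)
    timer = 0
    stack = [(root, 0, False)]       # iterative DFS: (node, parent, exiting?)
    while stack:
        v, p, exiting = stack.pop()
        if exiting:
            tout[v] = timer          # last position inside v's subtree
            continue
        dep[v] = dep[p] + 1          # dep[0] == 0, so the root gets depth 1
        timer += 1
        tin[v] = timer
        stack.append((v, p, True))
        for u in adj[v]:
            if u != p:
                stack.append((u, v, False))
    return tin, tout, dep

adj = [[], [2], [1, 3], [2]]         # the path 1 - 2 - 3
print(euler_tour(3, adj))            # ([0, 1, 2, 3], [0, 3, 3, 3], [0, 1, 2, 3])
```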

Then, let's try to incorporate a segment tree (after all, this problem is basically Range Update, Point Query). From now on, we refer to nodes of the original tree as "positions" to avoid confusion with nodes of the segment tree. Each segment tree node represents a range of positions, and in each node we store a resizable array (like std::vector) of the updates pushed to it.

For updates, we "push" the update information (maxdepth, value, and time of the update) to the $$$O(\log N)$$$ segment tree nodes that cover $$$[tin_a, tout_a]$$$.

For queries, we are tasked with finding the LRU (latest relevant update) of some position $$$a$$$: the latest update which changes the value of position $$$a$$$, i.e. which has a maxdepth $$$\ge dep_a$$$ and has been pushed to a segment tree node that contains $$$a$$$ (any update that has a chance of changing the value of $$$a$$$ must have been pushed to such a node). There are ~$$$\log N$$$ such nodes.

For each query, you could independently find the LRU of each of the $$$\log N$$$ nodes and then take the latest one: for each node, iterate through all updates pushed to that node, keep those whose maxdepth satisfies the constraint, and take the one with maximum time as that node's LRU. But this approach is too slow :(

However!

When pushing an update to some segtree node, we know that all preexisting updates in that node whose maxdepth is $$$\le$$$ the new update's maxdepth can never be the LRU (this is trivial to prove by contradiction). In other words, the only way a previous update can be the LRU is if it has a strictly greater maxdepth than our current update, so we can delete all updates that don't. Provided we push updates to the back of the node's std::vector, it is also easy to prove by induction that the deletable updates form a suffix of this vector (the invariant is $$$maxdepth_{v_1} > maxdepth_{v_2} > maxdepth_{v_3} > \dots$$$). Another invariant is $$$time_{v_1} < time_{v_2} < time_{v_3} < \dots$$$, because later updates are pushed to the back, after earlier updates.

Basically, in each segment tree node we maintain a monotonic stack of updates, sorted by decreasing maxdepth and increasing time. We push $$$O(Q \log N)$$$ update records in total: $$$O(\log N)$$$ for each of the $$$Q$$$ updates. Since each pushed record is deleted at most once, we also delete $$$O(Q \log N)$$$ records. In total, the time complexity of updates is $$$O(Q \log N)$$$.
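A Python sketch of the push step for a single node's monostack (the post's code uses std::vector; here a plain list stands in for it, and representing each update as a (time, maxdepth, value) tuple is an assumption about the record layout):

```
def push_update(stack, t, maxdepth, value):
    """Append update (t, maxdepth, value) to one segtree node's stack.

    Invariant kept: maxdepth strictly decreasing, time strictly increasing.
    """
    # Older updates whose maxdepth is <= the new one can never be the LRU.
    while stack and stack[-1][1] <= maxdepth:
        stack.pop()
    stack.append((t, maxdepth, value))

node_stack = []
push_update(node_stack, 1, 5, 'a')
push_update(node_stack, 2, 3, 'b')
push_update(node_stack, 3, 4, 'c')   # pops the (2, 3, 'b') record
print(node_stack)                    # [(1, 5, 'a'), (3, 4, 'c')]
```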

The final observation:

With respect to a query position $$$a$$$ and a segtree node $$$x$$$ (whose range contains $$$a$$$), the LRU of $$$x$$$ is the last update in the std::vector of $$$x$$$ with maxdepth $$$\ge dep_{a}$$$. Any update after it is not relevant to $$$a$$$ (its maxdepth is too small), and any update before it is relevant, but not the latest.

Because the std::vector is already sorted by maxdepth, you can binary search it to find the node's LRU. You need to find the LRU of $$$\log N$$$ nodes for each query, so each query is completed in $$$O(\log Q \cdot \log N)$$$, which is $$$O(\log^{2} N)$$$ when $$$Q$$$ and $$$N$$$ are of the same order.
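And a sketch of answering a query, assuming the monostacks of the $$$\log N$$$ nodes covering position $$$a$$$ have already been collected during the segment tree descent (that wiring is omitted here):

```
def latest_relevant_update(stacks, dep_a):
    """Return the latest (time, maxdepth, value) with maxdepth >= dep_a,
    or None if position a was never updated (its value is still 1)."""
    best = None
    for stack in stacks:
        # maxdepths are strictly decreasing, so the entries with
        # maxdepth >= dep_a form a prefix; find the last one of them.
        lo, hi, pos = 0, len(stack) - 1, -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if stack[mid][1] >= dep_a:
                pos, lo = mid, mid + 1
            else:
                hi = mid - 1
        if pos != -1 and (best is None or stack[pos][0] > best[0]):
            best = stack[pos]
    return best

stacks = [[(1, 5, 'a'), (3, 4, 'c')], [(2, 7, 'b')]]
print(latest_relevant_update(stacks, 5))  # (2, 7, 'b')
```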

Our total time complexity is $$$O(Q \log N) + O(Q \log^{2} N)$$$, which is $$$O(Q \log^{2} N)$$$.

P.S: Thank you for reading my 2nd Codeforces blog post. I'm still learning the style of solution blogs around here, so if I overexplained something or didn't explain something well enough, please tell me in the comments. Other feedback is also greatly appreciated. Have a nice day :)

Further reading: the official editorial, which uses a 2D segment tree.


By heavylightdecomp, history, 3 years ago, In English

Hello Codeforces,

Recently I was solving some problems and came across IOI 2012, Crayfish Scrivener. It is quite an interesting problem. I think my solution for 100 points is pretty cool so I decided to post it here. Hopefully it has not been mentioned before :)

For this solution we will be using 1-based indexing.

Firstly, I noticed that actually adding characters to the end of the string seems a little annoying to optimize, so I tried converting it to a more static problem. Notice that this dynamic string can be represented using a static char array: for all subtasks, the number of queries is at most $$$1,000,000$$$ ($$$10^6$$$), so the maximum size of our string is $$$10^6$$$. If we have a char array of size $$$10^6$$$, initialized to some default value, and keep track of the actual "size" of the represented string in an integer, then to "type a letter" we can just update index ($$$size + 1$$$) with the desired letter (a point update) without affecting the correctness of our algorithm.

Actually, this reduction allows us to reframe the problem in a way that is very intuitive to solve. Observe that the $$$Undo$$$ operation just requires us to "copy" some arbitrary previous revision of the char array and make a new revision from it. The $$$TypeLetter$$$ operation just requires us to make a new revision that applies a point update to the current revision. The $$$GetLetter$$$ operation just requires us to read the value at a specified index, which is a point query. So we need a persistent data structure (to access any previous revision quickly) which can also do point updates and point queries quickly.

What data structure supports all this? Persistent segment trees! Thus, the solution is to build a persistent segment tree on the aforementioned char array.
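A minimal Python sketch of this idea (an actual submission would be C++; the node layout and helper names are illustrative, the interface names follow the task, and the array size is shrunk for the demo):

```
class Node:
    __slots__ = ('ch', 'left', 'right')
    def __init__(self, ch='', left=None, right=None):
        self.ch, self.left, self.right = ch, left, right

def build(lo, hi):
    if lo == hi:
        return Node()
    mid = (lo + hi) // 2
    return Node('', build(lo, mid), build(mid + 1, hi))

def update(node, lo, hi, pos, ch):
    """Return the root of a new revision with position pos set to ch."""
    if lo == hi:
        return Node(ch)
    mid = (lo + hi) // 2
    if pos <= mid:
        return Node('', update(node.left, lo, mid, pos, ch), node.right)
    return Node('', node.left, update(node.right, mid + 1, hi, pos, ch))

def query(node, lo, hi, pos):
    while lo != hi:
        mid = (lo + hi) // 2
        if pos <= mid:
            node, hi = node.left, mid
        else:
            node, lo = node.right, mid + 1
    return node.ch

MAXN = 8                               # 10**6 in the real problem
roots, sizes = [build(1, MAXN)], [0]   # state after each command

def TypeLetter(ch):
    roots.append(update(roots[-1], 1, MAXN, sizes[-1] + 1, ch))
    sizes.append(sizes[-1] + 1)

def UndoCommands(u):
    # Revert to the state u commands ago: just reuse an old root, O(1).
    roots.append(roots[-1 - u])
    sizes.append(sizes[-1 - u])

def GetLetter(p):                      # 1-based here; the task's input is 0-based
    return query(roots[-1], 1, MAXN, p)

TypeLetter('a'); TypeLetter('b'); TypeLetter('c')
UndoCommands(2)                        # back to "a"
TypeLetter('d')
print(GetLetter(1), GetLetter(2))      # a d
```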

Its space complexity is $$$O(Q \log N)$$$, where $$$Q$$$ is the number of queries and $$$N$$$ is the size of our char array ($$$10^6$$$ in this case, the number of queries), which is $$$O(Q \log Q)$$$. "Copying" a previous revision is $$$O(1)$$$. Point updates and point queries are $$$O(\log N)$$$, which is $$$O(\log Q)$$$. Our final time complexity is $$$O(Q \log Q)$$$. This is good enough to score 100 points.

Note: We do not use the full power of the segment tree for this problem, as all the char information is stored in the leaf nodes, each of which represents a single index.

Implementation note: Don't forget to convert the 0-based indexing given in the input to 1-based indexing, or implement your solution in 0-based indexing (the idea stays the same, just a few minor changes).

PS: I hope you enjoyed reading through my first Codeforces blog. I spent a lot of time on it and if you have any suggestions or tips for improvement please post them in the comments. Though maybe this is not the most elegant way to solve the problem, this is the first idea I came up with and I feel like I understand it a lot more than the other solution which uses binary jumping (though I think it is pretty cool as well, you can view it here).


