lrvideckis's blog

By lrvideckis, history, 3 months ago, In English

Recently I was learning about how to binary search over a suffix array to solve string matching (specifically: single text, multiple patterns, solved online). Here (1 ("Pattern Query" section), 2, 3) describe how to solve it in O(|p| * log(|s|)) per pattern, but I'll describe how to improve this to O(|p| + log(|s|)). There already exist resources about this online, but I will try to simplify it.
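
As a baseline, here's a minimal sketch (my own, not taken from those links) of the O(|p| * log(|s|)) search: each of the O(log(|s|)) binary search steps compares p against a whole suffix in O(|p|).

#include <bits/stdc++.h>
using namespace std;

// Baseline O(|p| * log(|s|)) search: return the first suffix-array position whose
// suffix is >= p. All suffixes having p as a prefix form a contiguous block starting there.
int lower_bound_matches(const string& s, const vector<int>& sa, const string& p) {
    int lo = 0, hi = (int)sa.size();
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        // compare the first |p| characters of the suffix starting at sa[mid] against p
        if (s.compare(sa[mid], p.size(), p) < 0)
            lo = mid + 1; // suffix < p: the answer is to the right
        else
            hi = mid;     // suffix >= p: this position could be the answer
    }
    return lo;
}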

visualizing suffix array

Let's take the text s="banana", and consider all suffixes (written vertically):

a
n a
a n a
n a n a
a n a n a
b a n a n a
0 1 2 3 4 5

Now let's sort them:

      a
    a n
    n a   a
  a a n   n
  n n a a a
a a a b n n
5 3 1 0 4 2

s's suffix array is [5, 3, 1, 0, 4, 2].

Now take any substring of s (like "an"). Observe there exists a maximal-length subarray of s's suffix array ([3,1]) representing all the suffixes (3 -> "ana", 1 -> "anana") where "an" is a prefix. I like to call this the "subarray of matches", as this subarray represents all the starting indices (in s) of a match.

Let's define a function which calculates this: subarray_of_matches(s[s_l:s_r]) = suffix_array[suffix_array_l:suffix_array_r].

Now observe that subarray_of_matches(s[s_l:s_r+1]) is nested inside subarray_of_matches(s[s_l:s_r]). This is because at every spot where s[s_l:s_r+1] matches, s[s_l:s_r] also matches.

In particular, we can take some suffix (like "anana") and plug all of its prefixes into subarray_of_matches, and we get a sequence of nested subarrays:

subarray_of_matches("a") = [5,3,1]
subarray_of_matches("an") = [3,1]
subarray_of_matches("ana") = [3,1]
subarray_of_matches("anan") = [1]
subarray_of_matches("anana") = [1]

in general, you can visualize it like this:

One more point: consider any subarray of the suffix array, suffix_array[suffix_array_l:suffix_array_r], and let lcp_length = the length of the longest common prefix of these suffixes. Formally: for each i in range [suffix_array_l,suffix_array_r), the strings s[suffix_array[i]:suffix_array[i]+lcp_length] are all equal.

Now consider the set of next letters s[suffix_array[i]+lcp_length]: they appear in sorted order. We can visualize it like:

visualizing the binary search

Given text s, s's suffix array, and query string p, our goal is to calculate the minimum index i such that p is a prefix of s[suffix_array[i]:] (the suffix of s starting at suffix_array[i])

So we can start the binary search as usual with l=0, r=size(s), m=(l+r)/2, and compare p to s[suffix_array[m]:]. If p is less, search lower (r=m;), else search higher (l=m;).

Let's also keep track of the "best" suffix so far, e.g. the suffix which matches the most characters in p. Let's store it as a pair {best_i_so_far,count_matched} with the invariant: p[:count_matched] == s[best_i_so_far:best_i_so_far+count_matched].

So now, depending on whether p[:count_matched+1] is less/greater than s[best_i_so_far:best_i_so_far+count_matched+1], we want to look for the green section which is before/after best_i_so_far. The cases (taken from here) are:

  • the middle red section will not contain the answer because the pattern p didn't match with that letter g, so it still won't match anywhere in that range.

  • the green sections will contain a match which is either better than or the same as the middle red section's

  • the outer red sections won't contain the answer because the LCP is too low

So back to our binary search, we have l,r,m=(l+r)/2.

  • If m lies in either of the red sections, then we can check for this in O(1) using an LCP query, and continue the search "towards" the green section.

  • If m is already in the green section, then set best_i_so_far = suffix_array[m] (valid, since the LCP is at least count_matched) and continue matching p[count_matched:] with s[best_i_so_far+count_matched:], increasing count_matched as characters match (this updates our best match {best_i_so_far,count_matched})

We start comparing characters in p from position count_matched, so each character of p is matched at most once, and we achieve complexity O(log(|s|) + |p|).
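
Here's a rough sketch of the whole search (my own illustration, not the submission below). It assumes a helper lcp_pos(a, b) that returns, in O(1), the length of the longest common prefix of the suffixes at suffix-array positions a and b (for example, via a sparse table over the LCP array):

#include <bits/stdc++.h>
using namespace std;

// O(|p| + log(|s|)) search: return the first suffix-array position whose suffix is >= p.
// lcp_pos(a, b) = LCP length of the suffixes at suffix-array positions a and b, in O(1).
int lower_bound_matches_fast(const string& s, const vector<int>& sa, const string& p,
                             const function<int(int, int)>& lcp_pos) {
    int n = (int)sa.size();
    int best = -1, cnt = 0; // best position so far, and how many characters of p it matches
    int lo = -1, hi = n;    // positions <= lo have suffix < p; positions >= hi have suffix >= p
    while (hi - lo > 1) {
        int m = (lo + hi) / 2;
        bool suffix_less; // is the suffix at sa[m] lexicographically smaller than p?
        if (best != -1 && lcp_pos(best, m) < cnt) {
            // outer red section: sa[m] disagrees with the best suffix before position cnt,
            // so its order relative to p is the same as its order relative to the best suffix
            suffix_less = (m < best);
        } else {
            // green (or middle red) section: sa[m] agrees with p on the first cnt characters,
            // so keep matching from position cnt; characters of p only ever get consumed forward
            int j = cnt;
            while (j < (int)p.size() && sa[m] + j < (int)s.size() && s[sa[m] + j] == p[j]) j++;
            best = m;
            cnt = j;
            if (j == (int)p.size()) suffix_less = false;             // p is a prefix of this suffix
            else if (sa[m] + j == (int)s.size()) suffix_less = true; // the suffix ran out first
            else suffix_less = (s[sa[m] + j] < p[j]);
        }
        if (suffix_less) lo = m;
        else hi = m;
    }
    return hi; // first position whose suffix is >= p; check separately whether it's an actual match
}

On the "banana" example with p = "an", this returns position 1: the start of the subarray of matches [3, 1] from earlier.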

code: https://codeforces.me/edu/course/2/lesson/2/3/practice/contest/269118/submission/277693465


By lrvideckis, history, 9 months ago, In English

I recently read “Still Simpler Static Level Ancestors” by Torben Hagerup, which describes how to process a rooted tree in O(n) time/space to be able to answer online level ancestor queries in O(1). I would like to explain it here. Thank you to camc for proof-reading and giving feedback.

background/warmup

Prerequisites: ladder decomposition: https://codeforces.me/blog/entry/71567?#comment-559299, and <O(n),O(1)> offline level ancestor https://codeforces.me/blog/entry/52062?#comment-360824

First, to define a level ancestor query: For a node u, and integer k, let LA(u, k) = the node k edges “up” from u. Formally a node v such that:

  • v is an ancestor of u
  • distance(u, v) = k (distance here is number of edges)

For example LA(u, 0) = u; LA(u, 1) = u’s parent

Now the slowest part of ladder decomposition is the O(n log n) binary lifting. Everything else is O(n). So the approach will be to swap out the binary lifting part for something else which is O(n).

We can do the following, and it will still be O(n):

  • Store the answers to O(n) level ancestor queries of our choosing (answered offline during the initial build)
  • Normally in ladder decomposition, length(ladder) = 2 * length(vertical path). But we can change this to length(ladder) = c * length(vertical path) for any constant c (of course the smaller the better).

The key observation about ladders: Given any node u and integer k with 0 <= 2*k <= depth[u]: the ladder which contains LA(u, k) also contains LA(u, 2*k). More generally, it contains LA(u, c*k) (as long as c*k <= depth[u]) when we extend the vertical paths by a factor of c.

the magic sequence

Let’s take a detour to the following sequence a(i) = (i & -i) = 1 << __builtin_ctz(i) for i >= 1 https://oeis.org/A006519

1 2 1 4 1 2 1 8 1 2 1 4 1 2 1 16 1 2 1 4 1 2 1 8 1 2 1 4 1 2 1 32 1 2 1 4 1 2 1 …

Observe: every value 2^k shows up first at index 2^k (1-based), and then at every 2^(k+1)-th index afterwards.

Q: Given an index i >= 1 and some value 2^k, I can move left or right. What’s the minimum number of steps I need to travel to reach the nearest index j with a(j) = 2^k?

A: at most 2^k steps. The worst case is when I start at some a(i) = 2^l > 2^k, i.e. exactly in the middle between the previous and next occurrence of 2^k.
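
As a sanity check, here's a tiny brute force (mine, not part of the blog) confirming that claim on a small range:

#include <bits/stdc++.h>
using namespace std;

// Check: for a(i) = (i & -i), the nearest index j with a(j) == 2^k is at most 2^k steps from any i.
int main() {
    for (int k = 0; k <= 8; k++)
        for (int i = 1; i <= 2000; i++) {
            int dist = INT_MAX;
            for (int j = max(1, i - (1 << k)); j <= i + (1 << k); j++)
                if ((j & -j) == (1 << k)) dist = min(dist, abs(i - j));
            assert(dist <= (1 << k)); // the nearest occurrence of 2^k is within 2^k steps
        }
    cout << "claim holds for all tested i and k" << endl;
}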

the algorithm

Let’s do a 2n-1 euler tour; let the i’th node be euler_tour[i]; i >= 1. Let’s calculate an array jump[i] = LA(euler_tour[i], a(i)) offline, upfront.
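
Here's a rough sketch of this build (my own, assuming an adjacency list adj and a root; not the submission below). The trick is that during the DFS, the current root-to-node path doubles as an O(1) offline level ancestor structure:

#include <bits/stdc++.h>
using namespace std;

// Builds the (2n-1)-length euler tour and jump[i] = LA(euler_tour[i], a(i)), a(i) = i & -i.
struct EulerJumps {
    vector<int> euler_tour{-1}, jump{-1}; // dummy entry at index 0 so the tour is 1-based
    vector<int> path;                     // current root-to-node path; path[d] is the depth-d ancestor

    EulerJumps(const vector<vector<int>>& adj, int root) { dfs(adj, root, -1); }

    void dfs(const vector<vector<int>>& adj, int u, int parent) {
        path.push_back(u);
        visit();
        for (int v : adj[u])
            if (v != parent) {
                dfs(adj, v, u);
                visit(); // u reappears after each child, giving 2n-1 entries total
            }
        path.pop_back();
    }

    void visit() {
        int i = (int)euler_tour.size(); // index of this tour entry, i >= 1
        int d = (int)path.size() - 1;   // depth of the current node
        euler_tour.push_back(path[d]);
        jump.push_back(path[max(0, d - (i & -i))]); // clamp to the root when a(i) > depth (a detail the blog leaves open)
    }
};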

how to use the “jump” array to handle queries?

Let node u and integer k be a query. Let i = u’s index in the euler tour (u can show up multiple times; any index will work).


key idea: We want to move either left or right in the euler tour to find some “close”-ish index j with a “big” jump upwards. But not too big: we want to stay in the subtree of LA(u,k). Then we use the ladder containing jump[j] to get to LA(u,k). The rest of the blog will be all the math behind this.


It turns out we want to find closest index j such that 2*a(j) <= k < 4*a(j). Intuition: we move roughly k/2 steps away in the euler tour to get to a node with an upwards jump of size roughly k/2.

Note if we move to j: abs(depth[euler_tour[i]] - depth[euler_tour[j]]) <= abs(i - j) <= a(j)

how to calculate j in O(1)

Note we’re not creating a ladder of length c*a(i) starting from every node, because that would sum to c*(a(1)+a(2)+...+a(2n-1)) = O(n log n). Rather, it’s a vertical path decomposition (the sum of lengths is exactly n), and each vertical path is extended upwards to c times its original length to form a ladder (so the sum of ladder lengths is <= c*n).
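
Coming back to the heading above, here's one way (a sketch of my own; the submission below may differ) to find the nearest such j in O(1) with bit tricks:

#include <bits/stdc++.h>
using namespace std;

// Given tour index i and query distance k >= 2 (smaller k is handled naively),
// return the index j nearest to i with 2*a(j) <= k < 4*a(j), where a(j) = j & -j.
int nearest_j(int i, int k) {
    int p = 1 << (__lg(k) - 1); // the power of two with 2*p <= k < 4*p
    int r = i & (2 * p - 1);    // i mod 2p; indices j with a(j) == p are exactly j = p (mod 2p)
    return r >= p ? i - (r - p) // nearest such index to the left
                  : i + (p - r);// nearest such index to the right
}

One can check that |i - j| <= a(j) here, and (using k <= depth[euler_tour[i]] <= min(i - 1, 2n - 1 - i)) that j stays inside the tour range [1, 2n-1].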

finding smallest c for ladders to be long enough

Let’s prove some bounds on d = depth[euler_tour[j]] - depth[LA(u,k)]

Note 2*a(j) <= k from earlier, and depth[euler_tour[j]] >= depth[u] - a(j) by the previous note, so d >= k - a(j) >= 2*a(j) - a(j) = a(j). Since the jump from euler_tour[j] goes up only a(j) <= d edges, jump[j] stays in the subtree of LA(u,k).

Note k < 4*a(j) from earlier. So d <= a(j) + k < a(j) + 4*a(j) = 5*a(j)

Remember, we can choose a constant c such that length(ladder) = c * length(vertical path). Now let’s figure out the smallest c such that the ladder containing jump[j] will also contain LA(u,k):

if we choose c=5 then length(ladder) = 5 * length(vertical path) >= 5 * a(j) > d. (Here length(vertical path) >= a(j) because euler_tour[j] is a descendant of jump[j] at distance a(j), and a vertical path always continues down to a deepest leaf of the subtree.)

I mean that constant factor kinda sucks :( Well, at least we’ve shown a way to do the almighty, all-powerful theoretical O(n)O(1) level ancestor, all hail to the almighty. If thou is tempted by method of 4 russians, thou shall receive eternal punishment
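
To tie the pieces together, here's a rough sketch of a full query (my own illustration; the array names below, e.g. first_occ, ladder_id, pos_in_ladder, are assumed helper structures built as in the prerequisite blogs, not names from the paper):

#include <bits/stdc++.h>
using namespace std;

// Assumed precomputed:
//   depth[v], par[v]                -- depth and parent of node v
//   first_occ[v]                    -- some euler tour index i with euler_tour[i] == v
//   euler_tour[i], jump[i]          -- as described above
//   ladder_id[v], pos_in_ladder[v]  -- which ladder contains v, and v's position in it (0 = deepest node)
//   ladder[L]                       -- the nodes of ladder L, listed deepest-first
vector<int> depth, par, first_occ, euler_tour, jump, ladder_id, pos_in_ladder;
vector<vector<int>> ladder;

int level_ancestor(int u, int k) {
    if (k == 0) return u;
    if (k == 1) return par[u]; // k < kappa(=2): handled naively
    int i = first_occ[u];
    int p = 1 << (__lg(k) - 1);
    int r = i & (2 * p - 1);
    int j = r >= p ? i - (r - p) : i + (p - r); // nearest j with 2*a(j) <= k < 4*a(j)
    int v = jump[j]; // lies in the subtree of LA(u,k), and its ladder reaches LA(u,k) by the c=5 argument
    int up = depth[v] - (depth[u] - k); // how far above v the answer sits
    return ladder[ladder_id[v]][pos_in_ladder[v] + up];
}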

here’s a submission with everything discussed so far: https://judge.yosupo.jp/submission/194335

a cool thing

Here’s some intuition for why we chose the sequence a(i): It contains arbitrarily large jumps which appear regularly, sparsely. Sound familiar to any algorithm you know? It feels like linear jump pointers https://codeforces.me/blog/entry/74847. Let’s look at the jump sizes for each depth:

1,1,3,1,1,3,7,1,1,3,1,1,3,7,15,1,1,3,...

Map x -> (x+1)/2 and you get https://oeis.org/A182105 . https://oeis.org/A006519 is kinda like the in-order traversal of a complete binary tree, and https://oeis.org/A182105 is like the post-order traversal.

improving the constant factor

Let’s introduce a constant kappa (kappa >= 2).

Instead of calculating jump[i] as LA(euler_tour[i], a(i)), calculate it as LA(euler_tour[i], (kappa-1) * a(i))

Let node u, and integer k be a query. We want to find nearest j such that kappa*a(j) <= k < 2*kappa*a(j)

Then you can bound d the same way as before: d >= k - a(j) >= kappa*a(j) - a(j) = (kappa-1) * a(j), and d <= k + a(j) < 2*kappa*a(j) + a(j) = (2 * kappa + 1) * a(j)

And since the vertical path containing jump[j] has length at least the jump size (kappa-1)*a(j), the ladder reaches LA(u,k) as long as c * (kappa-1) >= 2*kappa + 1, i.e. you can show c >= (2 * kappa + 1) / (kappa - 1) suffices.

The catch is when k < kappa, you need to calculate LA(u,k) naively (or maybe store them).

The initial explanation is for kappa = 2 btw

submission with kappa: https://judge.yosupo.jp/submission/194336


By lrvideckis, history, 10 months ago, In English

I came across this paper

On Finding Lowest Common Ancestors: Simplification and Parallelization, by Baruch Schieber and Uzi Vishkin, April 1987

so naturally I tried to code golf it

lca https://judge.yosupo.jp/submission/188189

rmq https://judge.yosupo.jp/submission/188190

edit: minor golf


By lrvideckis, history, 21 months ago, In English

Hi Codeforces! If you calculate the midpoint like

int get_midpoint(int l, int r) {//[l, r)
	int pow_2 = 1 << __lg(r-l);//bit_floor(unsigned(r-l));
	return min(l + pow_2, r - pow_2/2);
}

then your segment tree requires only $$$2 \times n$$$ memory.
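
For concreteness, here's a minimal sketch of a standard recursive segment tree (children 2*v and 2*v+1) using this midpoint; the array size follows the $$$2 \times n$$$ claim above (this is my own illustration, not code from the post):

#include <bits/stdc++.h>
using namespace std;

struct SegTree { // point update, range sum query over [0, n)
    int n;
    vector<long long> t; // 1-based nodes; with this midpoint rule, indices stay below 2*n

    static int get_midpoint(int l, int r) { // [l, r)
        int pow_2 = 1 << __lg(r - l);
        return min(l + pow_2, r - pow_2 / 2);
    }

    SegTree(const vector<long long>& a) : n(a.size()), t(2 * n) { build(1, 0, n, a); }

    void build(int v, int l, int r, const vector<long long>& a) {
        if (r - l == 1) { t[v] = a[l]; return; }
        int m = get_midpoint(l, r);
        build(2 * v, l, m, a);
        build(2 * v + 1, m, r, a);
        t[v] = t[2 * v] + t[2 * v + 1];
    }

    void update(int v, int l, int r, int pos, long long val) { // set a[pos] = val
        if (r - l == 1) { t[v] = val; return; }
        int m = get_midpoint(l, r);
        if (pos < m) update(2 * v, l, m, pos, val);
        else update(2 * v + 1, m, r, pos, val);
        t[v] = t[2 * v] + t[2 * v + 1];
    }

    long long query(int v, int l, int r, int ql, int qr) { // sum over [ql, qr)
        if (qr <= l || r <= ql) return 0;
        if (ql <= l && r <= qr) return t[v];
        int m = get_midpoint(l, r);
        return query(2 * v, l, m, ql, qr) + query(2 * v + 1, m, r, ql, qr);
    }
};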

test
proof

I was inspired by ecnerwala's in_order_layout https://github.com/ecnerwala/cp-book/blob/master/src/seg_tree.hpp

I'll be waiting for some comment "It's well known in china since 2007" 😂


By lrvideckis, history, 5 years ago, In English

Here are some reasons I believe people do CP:

  • Enjoyment
  • Preparation for job (coding) interviews
  • Preparation for competitions

CP seems to be a low priority activity. For example, school and job responsibilities usually take higher priority. It’s not possible (realistically) to make a living from CP. Thus, people taking part in CP usually have good reasons.

The nihilist viewpoint says CP is just solving contrived made-up problems. Who cares? There’s little/no benefit to society. Unlike CP, programming for a job creates services (value) for people. Unlike CP, Computer Science research pushes the boundaries of knowledge of the field (value). Why spend time on an activity which doesn’t produce comparable value? Again, it seems the people doing CP must have good reasons.

I’m wondering what people’s reasons are for doing CP.
