SPyofgame's blog

By SPyofgame, history, 3 years ago, In English

The Statement

Here is the problem: 1107D. Compression

Given an integer $$$n$$$ and a binary table $$$A[n][n]$$$

We are asked to find the maximum $$$x$$$ such that there exists a binary table $$$B\left[\left \lceil \frac{n}{x} \right \rceil \right]\left[\left \lceil \frac{n}{x} \right \rceil \right]$$$

satisfying $$$B\left[\left \lceil \frac{i}{x} \right \rceil \right]\left[\left \lceil \frac{j}{x} \right \rceil \right] = A[i][j]\ \forall\ 1 \leq i, j \leq n$$$

The solution

My approach is to check, for every $$$v \mid n$$$, whether such a table exists.

Let us call a square $$$(lx, ly)$$$ the subtable $$$A[lx..rx][ly..ry]$$$ where

$$$\begin{cases} 1 \leq lx \leq rx \leq n\\ 1 \leq ly \leq ry \leq n\\ rx - lx + 1 = v\\ ry - ly + 1 = v\\ v\ |\ rx\\ v\ |\ ry\\ \end{cases}$$$

So there are $$$\frac{n}{v} \times \frac{n}{v}$$$ such subtables

The binary table $$$B[][]$$$ exists if and only if every subtable is constant, i.e. $$$A[x][y] = A[u][t]$$$ for every subtable $$$(lx, ly)$$$ and all $$$lx \leq x, u \leq rx$$$, $$$ly \leq y, t \leq ry$$$

My solution simply checks every $$$x \mid n$$$ from $$$n$$$ down to $$$1$$$.

The first $$$x$$$ that satisfies the condition is the answer, so we output it and stop.

The check can be done with a 2D prefix-sum (partial-sum) matrix.
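
For illustration, here is a minimal sketch of such a check (assuming a 1-indexed prefix-sum matrix `pre` over $$$A$$$ has already been built; the names are mine, not from the linked submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// pre[i][j] = number of ones in A[1..i][1..j] (1-indexed prefix sums)
// Returns true if every x-by-x block of A is constant, i.e. B exists for this x.
bool check(int n, int x, const vector<vector<int>> &pre) {
    for (int i = 1; i <= n; i += x)
        for (int j = 1; j <= n; j += x) {
            int ones = pre[i + x - 1][j + x - 1] - pre[i - 1][j + x - 1]
                     - pre[i + x - 1][j - 1] + pre[i - 1][j - 1];
            if (ones != 0 && ones != x * x) return false; // block mixes 0s and 1s
        }
    return true;
}
```

Then iterate the divisors of $$$n$$$ from largest to smallest and output the first $$$x$$$ for which `check` returns true.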

The complexity will be $$$\underset{x|n}{\Large \Sigma} \left(\frac{n}{x} \right)^2 = n^2 \times \underset{x|n}{\Large \Sigma} \frac{1}{x^2} \leq n^2 \times \frac{\pi^2}{6} = O(n^2)$$$

This gives us a simple solution that runs in $$$373$$$ ms.

https://codeforces.me/contest/1107/submission/138112085

My Hacking Question

In the brute-force version, I do the check by comparing each position one by one.

So the complexity seems to be at most $$$O(d(n) \times n^2)$$$ and at least $$$O(n^2)$$$.

It should fail, but it didn't: it passed in $$$1216$$$ ms.

https://codeforces.me/contest/1107/submission/138111659

Can someone construct a hack test for it? I failed to find a test on which it fails.


By SPyofgame, history, 3 years ago, In English

Statement

This question is based on bonus of this problem.

We need to count the non-negative integer triples $$$(a, b, c)$$$ that satisfy $$$0 \leq a + b + c \leq S$$$ and $$$0 \leq a \times b \times c \leq T$$$.

Since the result may be very big, you can either use bignum or take the answer modulo $$$10^9 + 7$$$ for convenience.


Notice that:

  • $$$(0, 0, 1) \neq (0, 1, 0) \neq (1, 0, 0)$$$

Constraint:

  • $$$0 \leq S, T \leq 10^{18}$$$

  • $$$0 \leq a, b, c$$$


No time limit, but expect around 10 seconds.


Memory Limit: 1Gb


Input:

  • A single line contains two non-negative 60-bit integers $$$S$$$ and $$$T$$$ ($$$0 \leq S, T \leq 10^{18}$$$)

Output:

  • Print a single integer: the number of triples satisfying the conditions above

Example:

Example 0
Example 1
Example 2
Example 3
Example 4
Example 5
Example 6
Example 7
Example 8
Example 9
Example 10
Example 11
Example 12
Example 13
Example 14
Example 15

After many hours, the answer for $$$f(10^{18}, 10^{18})$$$ was computed but not confirmed, therefore I won't add it to the examples.




Current Research

When $$$\lfloor \frac{S}{3} \rfloor \times \lfloor \frac{S + 1}{3} \rfloor \times \lfloor \frac{S + 2}{3} \rfloor \leq T$$$ (this is the largest product achievable with $$$a + b + c \leq S$$$), the condition $$$a \times b \times c \leq T$$$ is always satisfied, therefore the result is $$$\frac{(S+1)(S+2)(S+3)}{6}$$$

When $$$T = 0$$$, at least one of $$$a, b, c$$$ must be zero; counting the triples with $$$a + b + c \leq S$$$ and at least one zero coordinate by inclusion-exclusion gives $$$\frac{3S(S+1)}{2} + 1$$$

When $$$S = 0$$$, only one triple is valid: $$$(0, 0, 0)$$$

When $$$S = T$$$, the function behaves like $$$f(S, T) \approx 1.5 S^2$$$ (tested with $$$10^7$$$ integers $$$n \leq 10^{12}$$$)

In terms of $$$T$$$ only, the best currently known algorithm is $$$O(T^{5/9})$$$

In terms of $$$S$$$ only, the best currently known algorithm is $$$O(S^2)$$$ (it can probably be optimized further, but I have not researched that)
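
For small inputs, a sketch of one such $$$O(S^2)$$$ reference counter (my own, without the modulo) could look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count triples (a, b, c) with a + b + c <= S and a * b * c <= T, in O(S^2).
// Only practical for small S; no modulo is applied here.
long long countTriples(long long S, long long T) {
    long long total = 0;
    for (long long a = 0; a <= S; ++a)
        for (long long b = 0; a + b <= S; ++b) {
            long long maxC = S - a - b;            // from the sum constraint
            if (a > 0 && b > 0)                    // product constraint only binds when a * b > 0
                maxC = min(maxC, T / (a * b));
            total += maxC + 1;                     // c ranges over 0 .. maxC
        }
    return total;
}
```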

Sadly, I could not find any papers, articles, integer sequences, or closed formulas for this function.

Math discussion for Proof

Used this Paper

Reference Code

Current best known algorithm: O(T^(5/9)) - Used modulo for large result

Note: the sequence is now OEIS A347221


By SPyofgame, history, 3 years ago, In English

The problem

It is kind of annoying when we just want to solve some problems in Codeforces EDU but have to load a bunch of comments that we do not even want to read at that moment.

Or sometimes you just need to read the editorial and don't want to see the comments.

Or sometimes you just want to see what is in a blog and relearn things from it, not the comments, yet you still have to load all those comments for nothing.


My Suggestions

Why don't we hide all comments by default and only load them when we click on "show all comments" or "show first/latest [x] comments" on any kind of blog?

Or separate the comments into pages where each page has at most X comments, and let us edit the value X in our settings.

Or show only a few highly upvoted comments, and only load all comments when we click "See all".

Or show only a few comments, and load the next 10 comments or deeper reply threads by clicking "See more".

Or, if that is not possible right now, are there any tricks or apps you know of that prevent loading a huge number of comments when you don't need to read them?

(Sorry if there was already a topic about this; I couldn't find it anywhere on Google or Codeforces before writing this blog.)


By SPyofgame, history, 4 years ago, In English





Table of content

Teleporter Description
I. The Problem Describe about the problem
II. Bruteforce Approach Simple way to do the problem
III. Hashing Approach Reduce circular substring comparing complexity
IV. Sqrt Decomposition Divide into parts, solve each part then solve the whole part
V. KMP Approach Magical Linear Booth-KMP Algorithm
VI. Lyndon Factorization Approach Incredible Linear Duval-Lyndon Algorithm
VII. Suffix Array Approach Hard for constructing but simple to find result
VIII. Suffix Automaton Approach Complex Linear Construction but simple to find result
IX. Elimination Tournament Algorithm From initial candidates, eliminate worst candidates until find one final champion


















































I. The Problem











A) The problem:



1) Statement:

As a coder, you will sometimes run into real-life problems about strings. Some strings are simply arrays of characters with two ends. Sometimes, though, you will face circular strings, the ones which wrap around. Let me take you to a biological laboratory (don't worry, I teleport you there free of charge). In the lab you will find many interesting bacteria. How could you detect which ones belong to the same group? Let us simplify the problem: two bacteria are in the same group if their circular encoded-DNA strings are the same. So how can you compare two randomly encoded DNAs? One effective way is to convert all strings into their Lexicographically Minimal String Rotation (minimal acyclic representation) and then hash them for faster comparison.

This brings us to a much simpler problem. Define a right-rotation of a string as moving the leftmost character to the rightmost position. Given a circular DNA string $$$S$$$ of size $$$n$$$, find the minimum number of right-rotations that makes it lexicographically minimal among all rotations.


2) Question:

  • Given a string $$$S$$$ of $$$N$$$ Latin letters

  • A right-rotation of $$$S$$$ is $$$S_2 + S_3 + ... + S_{|S|} + S_1$$$ (where '+' means concatenation)

  • Find the least number of right-rotations that makes $$$S$$$ lexicographically minimal


3) Constraint:
  • $$$S$$$ is a string of lowercase Latin letters (the problem can be extended to an array $$$A$$$ of integers)

  • $$$|S| \leq 10^5$$$ (you can also expand the limit to $$$10^6$$$ or even higher)


4) Example:
Example 1
Example 2
Example 3
Example 4
Example 5
Example 6
Example 7
Example 8
Example 9
Example 10
Example 11
Example 12
Example 13
All Examples As Once










B) Real Life Applications



1) Finger print identification:
  • We can encode a fingerprint into many detailed circular strings. How do we search for such a fingerprint again in a very huge database? Circular comparison using Lyndon factorization is required.

2) Biological genetics:
  • In some cases, we need to know whether two genetic sequences belong to the same bacterium or virus.

3) Games:
  • Well, of course we can apply the algorithm in some language/word-related games










C) Practice Links



1) CSES — Minimal Rotation
  • Require: Output least rotation string with $$$|S| \leq 10^6$$$

2) SPOJ — Minimum Rotations
  • Require: Output least rotation number with $$$|S| \leq 10^5$$$

3) UVA — Glass Beads
  • Require: Queries output least rotation number with $$$|S| \leq 10^4$$$


















































II. Bruteforce Approach











A) Sorting and Searching:



1) Idea:
  • We will take all possible circular substrings of $$$S$$$

  • For every possible rotation of the string, we store it together with the number of rotations needed to obtain it

  • Then we sort the array by the strings (if two are equal, we take the one with the smaller index).

  • The answer is the smallest (lexicographically minimal) string, with the lowest index among equal ones.


2) Complexity:
  • Comparing two strings $$$S$$$ and $$$T$$$ costs $$$O(min(|S|, |T|)) = O(n)$$$ (since $$$|S| = |T| = n$$$)

  • The complexity of sorting the whole array is $$$O(n log n)$$$ comparisons

  • Hence the total complexity is $$$O(n^2 log n)$$$


3) Implementations:
Vector Sorting Bruteforce Solution - O(n^2 log(n)) time - O(n^2) auxiliary space
Map Sorting Bruteforce Solution - O(n^2 log(n)) time - O(n^2) auxiliary space










B) Loop over all rotations:



1) Idea:
  • Why should we store and sort everything anyway? We just need to loop over all rotations, comparing to select the smaller one

2) Implementations:
  • For convenience, you can just duplicate the string: $$$T = S + S$$$

  • Then $$$S$$$ after $$$i$$$ rotations ($$$0 \leq i < n$$$) is the string $$$T[i \dots i + n - 1]$$$

Bruteforce Solution - O(n^2) time - O(n) auxiliary space

3) Optimization:
  • By breaking early as soon as the two strings differ, we can speed the function up a little

  • Instead of making a duplicated string, you can also define $$$s[x] = s[x\ mod\ |s|]$$$, since $$$s$$$ itself is a circular string

  • Instead of taking the modulo, since $$$0 \leq x < 2 * n$$$, we just need to reduce $$$x$$$ by $$$n$$$ whenever it passes $$$n$$$

Optimized Bruteforce Solution - O(n^2) time - O(1) auxiliary space
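
A minimal sketch of such a brute force (my own code; it uses modulo indexing for brevity even though the bullets above suggest avoiding it) might look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Try every rotation and keep the lexicographically smallest one.
// Breaks out of the comparison at the first differing character; O(n^2) worst case.
int leastRotation(const string &s) {
    int n = s.size(), best = 0;
    for (int i = 1; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            char a = s[(i + k) % n], b = s[(best + k) % n];
            if (a != b) { if (a < b) best = i; break; } // ties keep the smaller index
        }
    return best;
}
```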










C) Substring Based Elimination



1) Idea:
  • From the initial set of candidates, keep comparing substrings and eliminating the bigger ones until we get the final champion

2) Algorithm:
  • First, let $$$mn$$$ / $$$mx$$$ be the minimal / maximal character of $$$s$$$

Define: $$$mn = \underset{c \in S}{min}(c)$$$ and $$$mx = \underset{c \in S}{max}(c)$$$

  • Pre-elimination Round: we keep the positions $$$p$$$ with $$$s[p] = mn$$$, since no other position can start the lexicographically minimal circular string

Define: $$$candidate = $$$ { $$$p\ \ |\ \ p \in $$$ { $$$0, 1, \dots, |s| - 1$$$ } $$$ \cap s[p] = mn$$$ }

  • Then we run at most $$$n - 1$$$ rounds, from $$$d = 1$$$ to $$$d = n - 1$$$. For every current candidate, extend its substring by the next character; find the minimal substring and eliminate the unsatisfying ones.

3) Optimization

Optimization: At round $$$d$$$ we find $$$c = $$$ the minimal character among all candidates' $$$d$$$-th next characters, then traverse again and eliminate those whose $$$d$$$-th next character is greater than $$$c$$$

Define: $$$s[x] = s[x\ mod\ |s|]$$$ and $$$c = \underset{p \in\ candidate}{min}(s[p + d])$$$ and $$$next\ candidate = $$$ { $$$p\ \ |\ \ p \in candidate \cap s[p + d] = c$$$ }


4) Example:
  • $$$S = $$$"$$$abaabaaabaababaaabaaababaab$$$"
Pre-elimination - 26 candidates
Round I - 18 candidates
Round II - 9 candidates
Round III - 3 candidates
Round IV - 3 candidates
Round V - 3 candidates
Round VI - 2 candidates
Champion

5) Notice
  • After all eliminations, if there is more than one candidate left, select the one with the lowest index

6) Complexity
  • If the string is a $$$k$$$-times repeated string (each block of size $$$d$$$) or similar, then at least $$$k$$$ candidates remain after $$$d$$$ eliminations, which is about $$$O(k \times d)$$$ in the best cases

  • In the best case, e.g. a string where all characters are distinct, it is linear $$$O(n)$$$

  • In the worst case, e.g. a string where all characters are the same, it is $$$O(n \times n) = O(n^2)$$$


7) Implementations
  • For brevity, let us not show the obviously unoptimized code
Detail Substring Based Elimination - O(n^2) time - O(n) auxiliary space
Noncomment Substring Based Elimination - O(n^2) time - O(n) auxiliary space
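
A compact sketch of the elimination described above (candidates kept in a vector, one pass per round; my own code, not the spoiler's) could be:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Substring Based Elimination: O(n^2) worst case time, O(n) auxiliary space.
int leastRotation(const string &s) {
    int n = s.size();
    char mn = *min_element(s.begin(), s.end());
    vector<int> cand;
    for (int p = 0; p < n; ++p)
        if (s[p] == mn) cand.push_back(p);                  // pre-elimination round
    for (int d = 1; d < n && cand.size() > 1; ++d) {
        char best = CHAR_MAX;
        for (int p : cand) best = min(best, s[(p + d) % n]); // minimal d-th next character
        vector<int> next;
        for (int p : cand)
            if (s[(p + d) % n] == best) next.push_back(p);   // eliminate the others
        cand = next;
    }
    return cand[0];                                          // smallest index among survivors
}
```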


















































III. Hashing Approach











A) Loop over all rotations



1) Bruteforces Algorithm:
  • Iterate over all rotations and compare them to find the smaller one

If they are fully equal, we take the one with the smaller index

Otherwise, find the first position where they differ; whichever rotation has the smaller character there is the smaller one

Bruteforce Approach - O(n^2 log n) time - O(n) auxiliary space

2) Hashing Improvement:
  • We reduce the complexity by using binary search to find the first different position of two strings:

  • Let $$$A$$$ is the circular substring of $$$s$$$ with the starting point $$$l$$$

  • Let $$$B$$$ is the circular substring of $$$s$$$ with the starting point $$$r$$$

  • Let $$$lcp$$$ (longest common prefix) be the last position up to which $$$A$$$ and $$$B$$$ are equal

  • Let $$$t = s + s$$$, for convention of circular string

  • For every $$$p \leq lcp$$$, we have $$$t[l + p - 1] = t[r + p - 1]$$$

  • For every $$$p > lcp$$$, we have $$$t[l + p - 1] \neq t[r + p - 1]$$$

  • Hence we can use binary search to find $$$lcp$$$

  • The fully equal case is $$$lcp = n$$$

  • If they differ, compare $$$t[l + lcp]$$$ with $$$t[r + lcp]$$$
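
To make this concrete, here is a minimal single-hash sketch of the rotation comparison (the constants `BASE` and `MOD` and all names are my own choices, not the spoiler code):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll MOD = 1e9 + 7, BASE = 131;              // arbitrary example constants

struct Hasher {
    vector<ll> h, pw;                            // h[i] = hash of t[0..i-1], pw[i] = BASE^i
    Hasher(const string &t) : h(t.size() + 1, 0), pw(t.size() + 1, 1) {
        for (size_t i = 0; i < t.size(); ++i) {
            h[i + 1] = (h[i] * BASE + t[i]) % MOD;
            pw[i + 1] = pw[i] * BASE % MOD;
        }
    }
    ll get(int l, int len) const {               // hash of t[l .. l + len - 1]
        return ((h[l + len] - h[l] * pw[len]) % MOD + MOD) % MOD;
    }
};

// Compare the rotations of s starting at l and at r (0-indexed), where t = s + s.
// Returns a negative value, 0, or a positive value, like strcmp.
int cmpRotations(const string &t, const Hasher &H, int n, int l, int r) {
    int lo = 0, hi = n;                          // lcp lies in [lo, hi]
    while (lo < hi) {                            // binary search the longest common prefix
        int mid = (lo + hi + 1) / 2;
        if (H.get(l, mid) == H.get(r, mid)) lo = mid; else hi = mid - 1;
    }
    if (lo == n) return 0;                       // the two rotations are fully equal
    return t[l + lo] < t[r + lo] ? -1 : 1;       // compare the first differing character
}
```

Scanning all starting positions and keeping the best one via `cmpRotations` gives the $$$O(n \log n)$$$ solution; a single hash can in principle collide, which is exactly why the multiple-hash variant exists.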


3) Implementations:
Single Hashing Approach - O(n log(MOD)) precalculation - O(n log n) time - O(n) auxiliary space
Multiple Hashing Approach - O(n Sigma(log(MOD))) precalculation - O(n log n * k) time - O(n * k) space

4) Optimization:
  • Hashing has a heavy hidden constant, mostly caused by the modulo operators, but there are some tricks to reduce it

  • Significant improvement: Declare $$$MOD,\ BASE,\ LIM$$$ as const or constexpr

  • In Single Hash, you can rely on unsigned overflow instead of an explicit modulus, which is significantly faster but also dangerous in some cases (especially hacking)

  • Replace vector with pre-declared array

  • Replace recursive power function by iterative one

  • Improve the time needed to calculate modular inverses. You can do it in linear time, but that costs more space and modulo operations, so it is better to do it as below.

Optimized Bruteforce Approach - O(n^2 log n) time - O(n) auxiliary space
Optimized Single Hashing Approach - O(n + log(MOD)) precalculation - O(n log n) time - O(n) space
Optimized Multiple Hashing Approach - O(n Sigma(log((MOD))) precalculation - O(n log n * k) time - O(n * k) space










B) Logarithm Decomposition



1) Algorithm:
  • Let a candidate be a starting position that might lead to the lexicographically minimal string rotation. Hence the initial candidates are $$$0, 1, \dots, |S| - 1$$$

  • Divide the candidate list into parts of size $$$\approx K$$$ (the final part might be smaller). There will be $$$\lceil \frac{N}{K} \rceil$$$ parts.

  • Small iteration: in each part, we find the smallest circular substring of size $$$K$$$; among all equal circular substrings, pick the one with the smallest starting position. Each part produces exactly one candidate

  • Big iteration: among the $$$\lceil \frac{N}{K} \rceil$$$ remaining candidates we find the smallest circular substring of size $$$N$$$; among all equal circular substrings, pick the one with the smallest starting position. This gives the answer


2) Proofs:
  • Let $$$A, B$$$ are the circular substrings start from candidates positions, that $$$|A| = |B|$$$. And let $$$X, Y$$$ are the prefixes of $$$A, B$$$, that $$$|X| = |Y|$$$

  • Since we are comparing in lexicographical order, if $$$X < Y$$$ then it will lead to $$$A < B$$$. Hence by using small iteration we sieved all unoptimal candidates, and reduce $$$N$$$ candidates to $$$\lceil \frac{N}{K} \rceil$$$ candidates only.


3) Complexity:

For the small iteration, there are $$$N$$$ candidates, and each candidate is compared over a length-$$$K$$$ circular substring using hashing. Hence $$$O(N \times log(K))$$$

For the big iteration, there are $$$\lceil \frac{N}{K} \rceil$$$ candidates, and each candidate is compared over a length-$$$N$$$ circular substring using hashing. Hence $$$O(\lceil \frac{N}{K} \rceil \times log(N))$$$

The total time complexity is $$$\approx O(N log(K) + \frac{N}{K} log(N))$$$

For optimal purposes, let $$$K \approx \log{N}$$$; the complexity therefore becomes only $$$O(N log(log N))$$$


4) Notice:

We are comparing strings circularly; you can either use modulo positions or the duplicated string

Be careful about the real size of the string versus the duplicated one


5) Implementations:

For brevity, let us just skip the obviously unoptimized implementations (ugly, complex code not worth showing)

Single Hashing Approach - O(n * log((MOD)) precalculation - O(n log(log n)) time - O(n) space
Multiple Hashing Approach - O(n * Sigma(log((MOD))) precalculation - O(n log(log n) * k) time - O(n * k) space


















































IV. Sqrt Decomposition











1) Divide candidate list into many parts



1) Algorithm:
  • Let a candidate be a starting position that might lead to the lexicographically minimal string rotation. Hence the initial candidates are $$$0, 1, \dots, |S| - 1$$$

  • Divide the candidate list into parts of size $$$\approx K$$$ (the final part might be smaller). There will be $$$\lceil \frac{N}{K} \rceil$$$ parts.

  • Small iteration: in each part, we find the smallest circular substring of size $$$K$$$; among all equal circular substrings, pick the one with the smallest starting position. Each part produces exactly one candidate

  • Big iteration: among the $$$\lceil \frac{N}{K} \rceil$$$ remaining candidates we find the smallest circular substring of size $$$N$$$; among all equal circular substrings, pick the one with the smallest starting position. This gives the answer


2) Proofs:
  • Let $$$A, B$$$ are the circular substrings start from candidates positions, that $$$|A| = |B|$$$. And let $$$X, Y$$$ are the prefixes of $$$A, B$$$, that $$$|X| = |Y|$$$

  • Since we are comparing in lexicographical order, if $$$X < Y$$$ then it will lead to $$$A < B$$$. Hence by using small iteration we sieved all unoptimal candidates, and reduce $$$N$$$ candidates to $$$\lceil \frac{N}{K} \rceil$$$ candidates only.


3) Complexity:
  • For the small iteration, there are $$$N$$$ candidates, and each candidate is compared over a length-$$$K$$$ circular substring. Hence $$$O(N \times K)$$$

  • For the big iteration, there are $$$\lceil \frac{N}{K} \rceil$$$ candidates, and each candidate is compared over a length-$$$N$$$ circular substring. Hence $$$O(\lceil \frac{N}{K} \rceil \times N)$$$

  • The total time complexity is $$$O(N \times (K + \lceil \frac{N}{K} \rceil))$$$

  • For optimal purposes, let $$$K \approx \sqrt{N}$$$; the complexity therefore becomes only $$$O(N \sqrt{N})$$$


4) Notice:
  • We are comparing strings circularly; you can either use modulo positions or the duplicated string

  • Be careful about the real size of the string versus the duplicated one


5) Implementations:

For brevity, let us just skip the obviously unoptimized implementations (ugly, complex code not worth showing)

Detail Sqrt Decomposition Solution - O(N√N) time - O(N) auxiliary space
Noncomment Sqrt Decomposition Solution - O(N√N) time - O(N) auxiliary space
Optimized Sqrt Decomposition Solution - O(N√N) time - O(1) auxiliary space










2) Hash Improvement



1) Idea:
  • By using hash for comparing two known substrings $$$X$$$, $$$Y$$$ of equal length $$$D$$$. We can compare them from $$$O(S) \rightarrow O(log(S))$$$ by finding their LCP — Longest Common Prefix and comparing $$$X[lcp]$$$ with $$$Y[lcp]$$$

2) Complexity:
  • The total time complexity is $$$O(N \times log(K) + \lceil \frac{N}{K} \rceil \times log(N)) \approx O(N \times \sqrt{N})$$$ but significant smaller constant (About 2 or/to 3 times faster). But the trade off is more complex code and more space is taken.

3) Optimization:
  • Like the above Hashing approach

  • Use constant modulo

  • Replace vector with array

  • Replace recursive with iterative

  • Quick calculating inverse modulo


4) Implementations:
Single Hash Solution - O(N + log(MOD)) preprocessing - O(N√N) time - O(N) auxiliary space
Single Hash Solution - O(N + Sigma(log(MOD))) preprocessing - O(N√N * K) time - O(N * K) auxiliary space


















































V. KMP Approach











A) Normal KMP



1) Algorithm:
  • In this problem, using plain KMP to compare all pairs of substrings is not efficient, even if we split the work into queries and optimize it. It gives no improvement compared to the other implementations

  • But if you want to compare two strings by KMP, then here it is

  • We find their first differing position using KMP on the two strings, or KMP on the concatenated string ($$$s$$$ + $$$c$$$ + $$$t$$$), where $$$c$$$ is smaller than any character in $$$s$$$ and $$$t$$$.


2) Implementations:
My Simple KMP Implementations - O(n) time - O(1) auxiliary space
Bruteforces using KMP - O(n^2) time - O(n) auxiliary space










B) Booth Algorithm



1) Idea:
  • Remember how we compared circular substrings using hashing? We found the first differing position by binary search and compared only at that position to know whether one rotation is greater than, equal to or smaller than the other. But that carries a huge hidden constant which might lead to TLE (the optimized version is better, but its constant is still not small). We can take advantage of KMP to find the first difference too, with a much smaller constant of course, but we have to do some clever tricks to avoid its high complexity.

2) Code Explanation:
  • For convenience, let us duplicate the initial string into $$$S + S$$$, for which we can easily compute the prefix function. At every position $$$i > n$$$ we want the first position $$$p$$$ where the initial rotation and the current rotation differ, and then compare the two rotations there. However, this only compares the initial rotation with the current rotation; to handle all rotations we would have to merge all $$$N$$$ circular substrings. But since we only care about the lexicographically minimal rotation, we can immediately discard the worse rotation whenever a better one is found, and from then on treat this better rotation as the new reference. How can we do that? The key point is to cut the relationship of the current prefix-function value with the previous one by assigning the previous value $$$-1$$$, so the prefix function is effectively computed relative to the current best rotation. This drops the complexity from the normal KMP $$$O(n^2)$$$ to Booth-KMP $$$O(n)$$$, with a much smaller constant than the $$$O(n \log n)$$$ hashing approach.

3)* Implementations:
Detail Booth Algorithm - O(n) time - O(n) auxiliary space
Non-comment Booth Algorithm - O(n) time - O(n) auxiliary space
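
For reference, a compact version of Booth's algorithm (the standard formulation; the variable names are mine, not the spoiler code) is sketched below:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Booth's algorithm: index of the lexicographically minimal rotation of s, O(n) time.
int leastRotation(const string &s) {
    string t = s + s;
    int n = t.size(), k = 0;                   // k = start of the current best rotation
    vector<int> f(n, -1);                      // failure function relative to rotation k
    for (int j = 1; j < n; ++j) {
        char c = t[j];
        int i = f[j - k - 1];
        while (i != -1 && c != t[k + i + 1]) {
            if (c < t[k + i + 1]) k = j - i - 1;   // found a better rotation start
            i = f[i];
        }
        if (c != t[k + i + 1]) {               // here i == -1, so t[k + i + 1] == t[k]
            if (c < t[k]) k = j;               // current position starts a better rotation
            f[j - k] = -1;                     // cut the link to the previous rotation
        } else {
            f[j - k] = i + 1;
        }
    }
    return k;
}
```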


















































VI. Lyndon Factorization Approach











A) Approach



1) Definitions:
  • For detail definitions, proves, applications, implementations, you can read Simple Linear and Effectively Duval Algorithm for Lyndon Factorization

  • Lyndon word is nonempty string that is strictly smaller in lexicographic order than all of its rotations.

  • Lyndon factorization is a unique way to split the string into many lyndon words in such a way that the words in the sequence are in non-increasing order. That means we factorize $$$s = s_1 + s_2 + \dots + s_k$$$ where $$$s_1, s_2, \dots, s_k$$$ are lyndon words and in non-increasing order ($$$s1 \geq s2 \geq \dots \geq s_k$$$)

  • From the above definition, we can conclude that the last factor of the Lyndon factorization is the lexicographically smallest suffix of the string.


2) Duval Algorithm:
  • In $$$1983$$$, Duval provided an efficient algorithm for listing the Lyndon words of length at most $$$n$$$ over a given alphabet of size $$$s$$$ in lexicographic order, in $$$O(n)$$$ time

For string $$$S$$$ of length $$$n$$$

For each new word $$$x$$$ from $$$S$$$ that $$$\forall i$$$ we have $$$x[i] = s[i\ mod\ n]$$$ (mean $$$x$$$ is a sub-string of some cyclic strings that shifted from initial $$$S$$$)

While the last symbol of $$$x$$$ is in sorted order, remove it to make a shorter word

Replace the last remained symbol of $$$x$$$ by its successor in sorted order.


3) Examples:
  • $$$S = $$$"$$$abaabaaabaababaaabaaababaab$$$"
Lyndon Factorization

4) Notice:
  • Wait wait wait, are you going to take the head position of last factor of string factorization and it will be the answer ?

  • Nope. Because the string is circular, you must duplicate the string before doing so; otherwise there might be a rotation that starts near the right end and wraps around to the left to form the lexicographically minimal string rotation.


5) Implementations:
Detail Duval Solution - O(n) Time - O(n) Auxiliary Space
None-comment Duval Solution - O(n) Time - O(n) Auxiliary Space

6) Optimization:
  • We don't need to duplicate the string

  • We don't need to continue the algorithm once $$$S_2$$$ reaches size $$$n$$$ (or once the starting point of $$$S_2$$$ is $$$< n \leq$$$ the ending point of $$$S_2$$$)

  • We can skip the cases $$$S_2 = s_x + s_x + \dots + s_x + s_y$$$ and jump directly to the next character

Optimized Duval Solution - O(n) Time - O(1) Auxiliary Space
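
As a concrete reference, a minimal Duval-style sketch of my own (working on the doubled string for simplicity, so it uses $$$O(n)$$$ extra space rather than $$$O(1)$$$) is:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal rotation via the Lyndon/Duval idea on t = s + s, O(n) time.
int leastRotation(const string &s) {
    string t = s + s;
    int n = s.size(), m = t.size();
    int ans = 0, i = 0;
    while (i < n) {                  // i = start of the current factor (a candidate answer)
        ans = i;
        int j = i + 1, k = i;
        while (j < m && t[k] <= t[j]) {
            if (t[k] < t[j]) k = i; else ++k;
            ++j;
        }
        while (i <= k) i += j - k;   // skip all copies of the repeated block at once
    }
    return ans;
}
```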


















































VII. Suffix Array Approach











A) Approach



1) Wrong Idea:
  • Store all suffix of $$$S$$$ (with its index) into an array then sort it (in lexicographical order).

Let $$$S = $$$"$$$baabaa$$$" then there are such suffixes: "$$$a$$$", "$$$aa$$$", "$$$baa$$$", "$$$abaa$$$", "$$$aabaa$$$", "$$$baabaa$$$"

  • Since the first one is the lexicographically smallest among all suffixes, its index must be the answer, right?

2) Prove it wrong:
  • We are dealing with a circular string, and plain suffixes do not carry enough information.

  • Let $$$A, B$$$ be two different suffixes with $$$A < B$$$. If you extend both (circularly) to length $$$n$$$, then $$$A < B$$$ might no longer hold: because of lexicographic order and the suffixes having different lengths, the shorter one is considered "smaller" whenever it is a prefix of the other, which is not what we want for rotations.


3) Correct idea:
  • Sort all suffixes of $$$S + S$$$ with its index

  • Find smallest suffix whose index in $$$0 \dots |s| - 1$$$

  • Compare with other suffixes to find one with smaller index


4) Algorithm:
  • For convention

Let smaller / biggers means in lexicographical order

Let equal means two suffixes have their first $$$n$$$ characters equal (first means prefix of suffix)

Let $$$T = S + S + c$$$ ($$$c$$$ is smaller than any character in $$$S$$$)

  • First we store all suffixes of $$$T$$$ (with its index) into an array then sort it.

  • We can easily find the first suffix $$$X$$$ whose index $$$t$$$ is between ($$$0 \dots n - 1$$$) (those suffixes whose length are at least $$$n$$$)

  • Since there might be some other equal suffixes of size $$$n$$$ whose index is smaller, we compare strings and find it


5) Optimizations:
  • Since all suffixes are sorted and we only need to care smallest suffixes (all among them we select the one with smallest index)

  • Hence we only need to care about consecutive suffixes with $$$X$$$ (they are either bigger than $$$X$$$ or equal to $$$X$$$ but with smaller index)

Let $$$Y$$$ is the current suffix (consecutive to right of $$$X$$$ of course)

If $$$X = Y$$$ then $$$Y$$$ must have smaller index, hence we update the result

Otherwise since $$$Y > X$$$, hence we break


6) Examples:
  • Let $$$SA[]$$$ (Suffix Array) is the array which stores order of suffixes

$$$SA[i] < SA[j]$$$ means suffix start from $$$i$$$ is smaller than suffix start from $$$j$$$

  • Let $$$LCP[]$$$ (Longest Common Prefix) is the array with store the longest common prefix between two consecutive suffixes in the order of $$$SA[]$$$

$$$LCP[x] = k$$$ means suffix start from $$$SA[x]$$$ and $$$SA[x + 1]$$$ have their longest common prefix is $$$k$$$

  • For $$$S = $$$"$$$aaba$$$"
Example

7) Notices:
  • With $$$T = S + S$$$ instead of $$$T = S + S + c$$$ there are still plenty of ways to make the algorithm correct, and even with just $$$T = S$$$ the problem can be solved; but those variants may be more complex, slower, or correct only for this specific problem, so be careful.

  • Smallest lexicographical suffix might not have smallest index


8) Implementations:
  • There are many ways to implement suffix array (I wont go deeper)
Spoiler
  • There are many ways to implement longest common prefix: (I wont go deeper)
Spoiler
  • Here are main implementations
Simple Optimized Implementation - O(n log n) SA - O(n) LCP - O(n log n) total time - O(n) auxiliary space


















































VIII. Suffix Automaton Approach











A) Approach



1) Algorithm:
  • Construct the Suffix Automaton of $$$s + s$$$. The automaton itself contains every string rotation as a path from the initial state

  • In order to get the lexicographically minimal string rotation, we walk from the initial state $$$|s|$$$ steps, always taking the smallest outgoing edge (in lexicographical order). Let the state we end in be $$$X$$$

  • To find its index, we need the first occurrence of the minimal string, which is equivalent to the first occurrence of state $$$X$$$; so we compute the first occurrence $$$f(P)$$$ of every state $$$P$$$

  • Let us not go deeper into the construction here. The least rotation is $$$r = f(X) - |s| + 1$$$


2) Implementations:
Suffix Automaton Solution - O(n log(alphabet)) time - O(n) auxiliary space


















































IX. Elimination Tournament Algorithm



So here are my other approaches to this problem. I don't know whether these algorithms already have names; I just got the idea from soccer-style elimination tournaments and applied it to this problem. By researching and experimenting on my own, I finally arrived at a linear algorithm.

About the algorithm names: if anyone knows a paper describing them, please tag me and I will credit the original authors of these algorithms.

The basic idea is that we have a list of candidates and keep running eliminations until we have the final champion.











A) Dual Elimination



1) Idea:
  • The idea is that from the current candidate list we repeatedly select 2 consecutive candidates, then compare two circular substrings whose length equals the gap between the two candidates

2) Algorithm:
  • Let $$$t = s + s$$$, and the selected candidates are $$$l$$$ and $$$r$$$. For convention, let $$$l < r$$$, then we will compare $$$A$$$ and $$$B$$$, for which $$$A = t[l \dots l+(r-l)-1]$$$ and $$$B = t[r \dots r+(r-l)-1]$$$.

Case 1: $$$A < B$$$, we will select the starting point of $$$A$$$ become the next candidate, it is $$$l$$$

Case 2: $$$A > B$$$, we will select the starting point of $$$B$$$ become the next candidate, it is $$$r$$$

Case 3: $$$A = B$$$, we will select the smaller starting point of $$$A$$$ and $$$B$$$, it is $$$min(l, r)$$$


3) Example:
  • $$$S = $$$"$$$abaabaaabaababaaabaaababaab$$$"
Round I - 27 candidates
Round II - 13 candidates
Round III - 7 candidates
Round IV - 4 candidates
Round V - 2 candidates
Champion

4) Notice:
  • We are comparing circular string !

5) Complexity:
  • Normal Version: Stable Complexity

  • Optimized Version: Linear when characters are unique

  • At the $$$k$$$-th elimination round, the number of remaining participants is about $$$O(\frac{n}{2^k})$$$

  • At the $$$k$$$-th elimination round, each comparison costs $$$O(2^k)$$$

  • Hence the total complexity is $$$O(\overset{\lceil log_2(n) \rceil}{\underset{k = 1}{\Sigma}}(\frac{n}{2^k} \times 2^k))$$$ $$$=$$$ $$$O(n \times \lceil log_2(n) \rceil)$$$ $$$=$$$ $$$O(n\ log\ n)$$$


6) Implementations:
Detail Dual Elimination Algorithm - O(n log n) time - O(n) auxiliary space
Noncomment Dual Elimination Algorithm - O(n log n) time - O(n) auxiliary space
Optimized and Simple Dual Elimination Algorithm - O(n log n) time - O(n) auxiliary space










B) Substring Based Elimination



1) Idea:
  • From the initial set of candidates, keep comparing substrings and eliminating the bigger ones until we get the final champion

2) Algorithm:
  • First, let $$$mn$$$ / $$$mx$$$ be the minimal / maximal character of $$$s$$$

Define: $$$mn = \underset{c \in S}{min}(c)$$$ and $$$mx = \underset{c \in S}{max}(c)$$$

  • Pre-elimination Round: we keep the positions $$$p$$$ with $$$s[p] = mn$$$, since no other position can start the lexicographically minimal circular string

Define: $$$candidate = $$$ { $$$p\ \ |\ \ p \in $$$ { $$$0, 1, \dots, |s| - 1$$$ } $$$ \cap s[p] = mn$$$ }

  • Then we run at most $$$n - 1$$$ rounds, from $$$d = 1$$$ to $$$d = n - 1$$$. For every current candidate, extend its substring by the next character; find the minimal substring and eliminate the unsatisfying ones.

3) Optimization

Optimization: At round $$$d$$$ we find $$$c = $$$ the minimal character among all candidates' $$$d$$$-th next characters, then traverse again and eliminate those whose $$$d$$$-th next character is greater than $$$c$$$

Define: $$$s[x] = s[x\ mod\ |s|]$$$ and $$$c = \underset{p \in\ candidate}{min}(s[p + d])$$$ and $$$next\ candidate = $$$ { $$$p\ \ |\ \ p \in candidate \cap s[p + d] = c$$$ }


4) Example:
  • $$$S = $$$"$$$abaabaaabaababaaabaaababaab$$$"
Pre-elimination - 26 candidates
Round I - 18 candidates
Round II - 9 candidates
Round III - 3 candidates
Round IV - 3 candidates
Round V - 3 candidates
Round VI - 2 candidates
Champion

5) Notice
  • After all eliminations, if there is more than one candidate left, select the one with the lowest index

6) Complexity
  • If the string is a $$$k$$$-times repeated string (each block of size $$$d$$$) or similar, then at least $$$k$$$ candidates remain after $$$d$$$ eliminations, which is about $$$O(k \times d)$$$ in the best cases

  • In the best case, e.g. a string where all characters are distinct, it is linear $$$O(n)$$$

  • In the worst case, e.g. a string where all characters are the same, it is $$$O(n \times n) = O(n^2)$$$


7) Implementations
  • For brevity, let us not show the obviously unoptimized code
Detail Substring Based Elimination - O(n^2) time - O(n) auxiliary space
Noncomment Substring Based Elimination - O(n^2) time - O(n) auxiliary space










C) Elimination and Colliding



1) Idea:
  • This is an improvement of the Substring Based Elimination. It took me a few hours to think through and implement the idea

  • If two substrings collide, then one of them is eliminated


2) Algorithm:
  • At the $$$dth$$$ elimination

  • Let $$$s[p] = s[p\ mod\ |s|]$$$ for convention

  • Let $$$l, r$$$ are the candidates and $$$l < r$$$

  • Let $$$A$$$ is a circular substring of $$$s$$$ and $$$A = s[l \dots l + d - 1]$$$

  • Let $$$B$$$ is a circular substring of $$$s$$$ and $$$B = s[r \dots r + d - 1]$$$

  • If $$$A$$$ and $$$B$$$ collide ($$$l + d - 1 = r$$$), we will select $$$l$$$ and eliminate $$$r$$$ (since $$$l < r$$$)


3) Proof:
  • Let $$$A_1, A_2, \dots, A_k$$$ are consecutive circular substring of $$$s$$$ that satisfy
  • $$$\Sigma(|A_i|) = |S|$$$

  • $$$A_i = s[l_1, r_i]$$$

  • $$$l_{i+1} \equiv r_i + 1$$$ (mod $$$n$$$)

  • Without loss of generality, let $$$A_1$$$ and $$$A_2$$$ be the substrings we are comparing, where $$$l_1, l_2$$$ are the candidates; we then also have $$$l_1 = min(l_i)$$$

  • In lexicographical order (in comparing the strings) to say

If $$$A_1 < A_2$$$ then $$$l_2$$$ will be eliminated

If $$$A_1 > A_2$$$ then $$$l_1$$$ will not be the next candidate

If $$$A_1 = A_2 = \dots = A_k$$$ then $$$min(l_i)$$$ will be the champion. Hence $$$l_1$$$ is

Else there is such $$$p$$$ that $$$A_1 = A_2 = \dots = A_p \neq A_{p+1}$$$

  • There is only the case $$$A_p < A_{p+1} \Rightarrow A_1A_2\dots A_p < A_2A_3 \dots A_{p+1}$$$, hence $$$l_2$$$ will be eliminated

  • The case $$$A_1 = A_2 = \dots = A_p > A_{p+1}$$$ leads to a contradiction: $$$l_1, l_2, \dots, l_p$$$ are all candidates (since $$$A_1 = A_2 = \dots = A_p$$$), but $$$A_{p+1}$$$ is smaller, so $$$l_{p+1}$$$ should have been a candidate instead

  • Let $$$p_1, p_2, \dots, p_k$$$ the candidates and $$$A_1, A_2, \dots, A_k$$$ the candidate circular substrings of $$$s$$$ that start from those position

$$$S = aabaaa\dots$$$ and $$$A_1 = aab$$$, $$$A_2 = aaa$$$ then $$$A_1$$$ is eliminated, $$$p_2 = 3$$$ the next candidate

$$$S = aabaac\dots$$$ and $$$A_1 = aab$$$, $$$A_2 = aac$$$ then $$$A_2$$$ is eliminated, $$$p_1 = 0$$$ the next candidate

$$$S = aabaabaab\dots aab$$$ and $$$A_1 = A_2 = \dots = A_k = aab$$$, then $$$p_1 = 0$$$ the champion

$$$S = aabaabaab\dots aabaac\dots$$$ and $$$A_1 = A_2 = \dots = A_p = aab \neq A_{p+1} = aac$$$, then $$$p_1 = 0$$$ the next candidate

$$$S = aabaabaab\dots aabaaa\dots$$$ and $$$A_1 = A_2 = \dots = A_p = aab \neq A_{p+1} = aaa$$$, then contradiction happened since $$$aaa$$$ should be the candidate instead


4) Example:
  • $$$S = $$$"$$$abaabaaabaababaaabaaababaab$$$"
Pre-elimination - 26 candidates
Round I - 18 candidates
Round II - 6 candidates
Round III - 3 candidates
Round IV - 2 candidates
Round V - 2 candidates
Round VI - 2 candidates
Champion

5) Notice:
  • Collision Detecting is on Circular Substring

6) Complexity:

If the substrings keep growing, they will eventually collide and be eliminated

Trying to extend the substring length $$$k$$$ means the candidate list shrinks: $$$O(k \times \lfloor \frac{n}{k} \rfloor) = O(n)$$$

Trying to maximize the candidate list size $$$k$$$ means the substrings stay short: $$$O(f(x)) = O(k \times (\lfloor \frac{n}{k} \rfloor \times k - k)) + O(f(\lfloor \frac{k}{2} \rfloor)) = O(k\ log\ k)$$$


7) Implementations:
  • For convention, let not use obviously-not-optimized code
Detail Elimination and Colliding - O(n log n) time - O(n) auxiliary space
Elimination and Colliding - O(n log n) time - O(n) auxiliary space










D) Elimination and Merging



1) Idea:
  • This is an improvement of Elimination and Colliding. It took me a few days to think through, prove and implement the idea

  • If many substrings collide, we can eliminate them all at once and jump directly to the next round in which a collision happens

  • From the above proof we also get the property that when $$$p$$$ is one of the candidates, the repeated string starting from $$$p$$$ might also be one of the candidates


2) Algorithm:
  • At the $$$d$$$-th elimination, let $$$p_1, p_2, \dots, p_k$$$ be the candidates and $$$A_1, A_2, \dots, A_k$$$ the circular substrings of $$$s$$$ with $$$A_i = s[p_i \dots p_i + d - 1]$$$. Notice that any two of them either collide ($$$p_i + d - 1 \equiv p_{i+1}$$$ (mod $$$n$$$)) or do not intersect (because intersecting pairs are eliminated at their first collision)

  • Let $$$v$$$ be the maximum number of chained collisions, i.e. there exist consecutive candidates $$$q_1, q_2, \dots, q_v$$$ such that $$$A_{q_1}$$$ collides with $$$A_{q_2}$$$, which collides with $$$\dots$$$, which collides with $$$A_{q_v}$$$. Then $$$A_{q_1}A_{q_2} \dots A_{q_v}$$$ is the best candidate up to the $$$(d \times v)$$$-th elimination.

  • Therefore, instead of eliminating all the weaker candidates one by one, we can just merge them and jump directly to the next elimination in which they collide again


3) Proofs:
  • I think you can prove this based on Elimination and Colliding and Duval Lyndon Approach.

  • Good thing is that this will also prove the linear complexity of the algorithm


4) Example:
  • $$$S = $$$"$$$abaabaaabaababaaabaaababaab$$$"
Pre-elimination - 26 candidates
Round III - 3 candidates
Champion

5) Notice:
  • If the candidate list does not change, jump to the next round, not to the same one

  • The last candidate might merge with the first candidate

  • When merging, you should use the modulo operator for safety

  • Because the code is complex, you can simply skip the pre-elimination and run the rounds from $$$d = 1$$$ to $$$n$$$


6) Complexity:

It seems to be about $$$O(n\ log\ n)$$$, since each round eliminates at least half of the candidates

But actually the complexity is linear $$$O(n)$$$, because every position is touched at most once for preprocessing, twice for comparisons, and three times for merging


7) Implementations:
Detail Elimination and Merging - O(n) time - O(n) auxiliary space
Noncomment Elimination and Merging - O(n) time - O(n) auxiliary space

8) Optimization
  • Since every position $$$p$$$ lies in the range $$$0 \dots 2 \times n - 1$$$, we can precalculate each modulo
Optimized Elimination and Merging - O(n) time - O(n) auxiliary space


By SPyofgame, history, 4 years ago, In English



Table of content

Teleporter Description
I. Lyndon Definitions Definitions of Lyndon word, Lyndon factorization, ...
II. Duval Algorithm Talk about the duval algorithm and how it works
III. Real Life Applications Motivation to learn the algorithm
IV. Programming Applications The code is short and simple to implement
V. My Questions Are there any other applications ?










I. Lyndon Factorization



1) String Concatenation

  • Definition: The operation of joining character strings end-to-end

  • Property I: A non-empty string $$$S$$$ is a concatenation of substrings of itself

  • Property II: A non-empty string $$$S$$$ is unchanged by concatenating the empty string at any position of itself

  • Property III: Let $$$A, B, C$$$ the non-empty string then $$$A + B + C = A + (B + C) = (A + B) + C$$$

  • By convention, let the operator '+' denote string concatenation



2) Lyndon Word

  • Definition: A nonempty string that is strictly smaller in lexicographic order than all of its rotations.

  • Property I: Lyndon word is nonempty and lexicographically strictly smaller than any of its proper suffixes.

  • Property II: Let $$$S, T, Z$$$ be non-empty words. $$$S$$$ is a Lyndon word if $$$S < T\ \ \ \forall\ S = Z + T$$$

  • Property III: A Lyndon word cannot be factorized further. It means the only factor in its factorization is itself.



3) Lyndon Factorization

  • Definition: Split the string into many lyndon words in such a way that the words in the sequence are in lexicographically non-increasing order.

  • Property I: For $$$s = s_1 + s_2 + \dots + s_k$$$ where $$$s_1, s_2, \dots, s_k$$$ are lyndon words and in non-increasing order ($$$s_1 \geq s_2 \geq \dots \geq s_k$$$)

  • Property II: Lyndon factorization is unique.

  • Property III: The last Lyndon Factor is Lexicographically Smallest Suffix of the string



4) Least Starting Position (LSP)

  • Definition: The minimal starting position among the rotations that realize the LMSR.

  • Property I: Let $$$X$$$ be the substring of $$$S$$$ whose starting position $$$p$$$ is the LSP. Then some Lyndon factors in $$$X$$$ have the LSP $$$p$$$

  • Property II: For a $$$K$$$-times repeated string, where each block has size $$$L$$$, there are $$$K$$$ LSPs: $$$0, L, 2L, \dots, (K-1)L$$$

  • Property III: The Circular String start from LSP of given string is Lexicographically Minimal String Rotation











II. Duval Algorithm



  • Exist Time: 1983

  • Duval's algorithm is an efficient algorithm for listing the Lyndon words of length at most $$$n$$$ over a given alphabet of size $$$s$$$ in lexicographic order, in $$$O(n)$$$ time

  • The position of the last Lyndon factor produced by Duval's algorithm gives the minimal cyclic rotation

The pseudo algorithm
Implementation - O(n) Time - O(1) Auxiliary Space
Detail Implementation - O(n) Time - O(1) Auxiliary Space
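
For reference, a minimal sketch of Duval's factorization itself (my own code, returning the factors as strings; for big inputs one would store indices instead) might be:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Duval's algorithm: Lyndon factorization of s in O(n) time.
vector<string> duval(const string &s) {
    int n = s.size(), i = 0;
    vector<string> factorization;
    while (i < n) {
        int j = i + 1, k = i;
        while (j < n && s[k] <= s[j]) {
            if (s[k] < s[j]) k = i;     // the current word is still a (pre-)Lyndon word
            else ++k;                   // the current word is periodic, keep extending
            ++j;
        }
        while (i <= k) {                // output all copies of the found Lyndon word
            factorization.push_back(s.substr(i, j - k));
            i += j - k;
        }
    }
    return factorization;
}
```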










III. Real Life Applications



1) Finger print identification:
  • We can encode a fingerprint into many detailed circular strings. How do we search for such a fingerprint again in a very huge database? Circular comparison using Lyndon factorization is required.

2) Biological genetics:
  • In some cases, we need to know whether two genetic sequences belong to the same bacterium or virus.

3) Games:
  • Well, of course we can apply the algorithm in some language/word-related games










IV. Programming Applications



1) Least rotation to make smallest lexicographical ordered string.


The problem:
  • Given a string $$$S$$$ of size $$$N$$$

  • A right-rotation moves the leftmost character of $$$S$$$ to the rightmost position

  • Find the least number of right-rotations that makes $$$S$$$ the lexicographically smallest string

Important Property: After those right-rotations, the string will be Minimum Acyclic String



The solution:
  • One thing that comes to mind is to use hashing or other string algorithms in $$$O(n\ log\ n)$$$, but they are kind of complex

  • Some other approaches can be found here


  • Bruteforces Solution: Let $$$t = s + s$$$. Then for each substring of size $$$|s|$$$, we compare and find the smallest
Bruteforces Solution - O(n^2) Time - O(n) Auxiliary Space
Optimized Bruteforces Solution - O(n) to O(n^2) Time - O(n) Auxiliary Space

  • Duval Solution: We can apply Lyndon factorization with a small observation
Detail Duval Solution - O(n) Time - O(n) Auxiliary Space
None-comment Duval Solution - O(n) Time - O(n) Auxiliary Space
Optimized Duval Solution - O(n) Time - O(1) Auxiliary Space

Practice Problems:











V. My Question



1) Are there any other programming applications for Lyndon Factorization ?
  • The algorithm is simple to code while running pretty fast thanks to its low complexity. It would be kind of sad if there were only one main application, wouldn't it :(

2) Are there any other problems for Lyndon Factorization ?
  • To remember something better and understand the algorithm more deeply, we need to practice, right :D It would be nice if there were some more problems


By SPyofgame, history, 4 years ago, In English

The problem

So I tried to post a blog, but then I realized that the text inside spoilers leaks outside even when the spoiler is still closed.

I searched Google and Codeforces and tried several ways people suggest, like:

  • Use other browsers

  • Adding a small space, tab, or empty character at the end of summary=""

  • Add small texts near (above / below) the spoiler

  • Add some empty lines under <spoiler > or/and over </spoiler>

  • Use ~~~ instead of ```cpp

But sadly none of them work. Strangely, for some reason the spoilers are sometimes not broken even though they look similar.

Does someone know how to fix it? Thank you <3




UPDATE 0: I finally figured out that if a spoiler's last line is a line starting with -, it breaks all spoilers below it.

UPDATE 1: It is so weird that taking my old blog and just adding or removing a word can still break the spoilers :( And the spoilers are now all broken again.


By SPyofgame, history, 4 years ago, In English


Teleporter: [Previous] | | | [Next]

Table of content

This blog isn't the best, but it is worth spending time to read.

Teleporter Description
I. STATEMENT Taking about the problem
II. EXAMPLE To understand the problem better
III. Solution for small number of element — N How much will you get in each possible subset ?
IV. Solution for small sum of weight — C[i] What is the maximum value possible when your bag is exact $$$W$$$ weight ?
V. Solution for small sum of value — V[i] What is the minimum bag weight possible when it is exact $$$S$$$ sum value ?
VI. Tracing for selected elements Which next state will lead to the best result ?
VII. Other solutions How to solve the problem with special condition ?
VIII. Online Algorithm How to solve the problem when you need to output the result whenever you receive a new item ?
IX. Optimizations and Heuristic How to make the algorithm faster, shorter, simpler, safer, or more memory-saving
X. Debugging Support for when you are in trouble and can't find your bug
XI. Knapsack Variation and Practice Problems In case you need a place to practice or submitting
XII. Blog status The current progress and contributor of this blogs




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

I. STATEMENT

Taking about the problem

Problem: During midnight, a thief breaks into a jewelry shop. There are $$$N$$$ valuable items whose values and weights are known. Item $$$p$$$ can be sold for $$$V_p$$$ money but weighs $$$C_p$$$. There is a bag with an infinite amount of space, meaning that any item can be put into it and its value remains unchanged! But the bag can only hold at most $$$W$$$ weight.

Question: What is the maximum total value $$$V$$$ the thief can steal from the shop?





















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

II. EXAMPLE

To understand the problem better



  • Input
3 8
2 10
4 20
6 30

  • Output
40

  • Explanation:
There are 8 possible cases
{} -> 0 value, 0 weight
{1} -> 10 value, 2 weight
{2} -> 20 value, 4 weight
{3} -> 30 value, 6 weight
{1, 2} -> 30 value, 6 weight
{1, 3} -> 40 value, 8 weight - optimal
{2, 3} -> 50 value, 10 weight - invalid weight
{1, 2, 3} -> 60 value, 12 weight - invalid weight




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

III. Solution for small number of element — N

How much will you get in each possible subset ?



A. Permutation Approach (Bad) — $$$O(n!)$$$ time — $$$O(n)$$$ space
  • For each possible permutation, pick elements until it weight too much

  • The result is the maximum value sum, for which weight sum is not greater than $$$W$$$


Permutation Approach - O(n!) time - O(n) space


B. Bitmasking Approach (Good) — $$$O(2^n \times n)$$$ time — $$$O(n)$$$ space
  • Because the order isn't important, we just need to test every possible subset

  • The result is the maximum value sum, for which weight sum is not greater than $$$W$$$

Bitmasking Approach - O(2^n * n) time - O(n) space
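
A minimal sketch of this subset enumeration (item values in `v`, weights in `c`, capacity `W`; all names are my own) might look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Try every subset of the n items; keep the best total value whose weight fits in W.
// O(2^n * n) time.
long long knapsackBitmask(int n, long long W,
                          const vector<long long> &v, const vector<long long> &c) {
    long long best = 0;
    for (int mask = 0; mask < (1 << n); ++mask) {
        long long sumV = 0, sumC = 0;
        for (int i = 0; i < n; ++i)
            if (mask >> i & 1) { sumV += v[i]; sumC += c[i]; }
        if (sumC <= W) best = max(best, sumV);
    }
    return best;
}
```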


C. Meet-in-the-middle Approach (Better) — $$$O(2^{^{\frac{n}{2}}} \times \frac{n}{2})$$$ time — $$$O(2^{^{\frac{n}{2}}})$$$ space
  • Split the array into two halves $$$L$$$ and $$$R$$$. In each half, we calculate every possible subset, and for each subset we store a pair $$$(value\ sum, weight\ sum)$$$

  • For each subset $$$X(value_X, weight_X) \in L$$$, we need to find a subset $$$Y(value_Y, weight_Y) \in R$$$ with maximum $$$value_Y$$$ such that $$$weight_X + weight_Y \leq W$$$

  • Therefore, we can sort the whole $$$R$$$ set by increasing weight. Let $$$maxval_Y = max(value_E | E \in R, weight_E \leq weight_Y)$$$. Then for each $$$X \in L$$$, we can find its suitable $$$Y$$$ by binary search in $$$O(log\ |R|)$$$ after $$$O(|R|)$$$ precalculation

Meet in the middle approach - O(2^(n/2) * (n/2)) time - O(2^(n/2)) space
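
A sketch of the meet-in-the-middle idea under the same assumptions (arrays `v` and `c`, capacity `W`; my own code) could be:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Meet in the middle: O(2^(n/2) * (n/2)) time, O(2^(n/2)) space.
long long knapsackMITM(long long W, const vector<long long> &v, const vector<long long> &c) {
    int n = v.size(), nl = n / 2, nr = n - nl;
    vector<pair<long long, long long>> R;           // (weight, value) of every right-half subset
    for (int mask = 0; mask < (1 << nr); ++mask) {
        long long sv = 0, sc = 0;
        for (int i = 0; i < nr; ++i)
            if (mask >> i & 1) { sv += v[nl + i]; sc += c[nl + i]; }
        R.push_back({sc, sv});
    }
    sort(R.begin(), R.end());
    for (size_t i = 1; i < R.size(); ++i)           // prefix maxima: best value for weight <= R[i].first
        R[i].second = max(R[i].second, R[i - 1].second);

    long long best = 0;
    for (int mask = 0; mask < (1 << nl); ++mask) {  // enumerate left-half subsets
        long long sv = 0, sc = 0;
        for (int i = 0; i < nl; ++i)
            if (mask >> i & 1) { sv += v[i]; sc += c[i]; }
        if (sc > W) continue;
        // last right-half entry whose weight still fits (R always contains the empty subset)
        int pos = upper_bound(R.begin(), R.end(), make_pair(W - sc, LLONG_MAX)) - R.begin();
        best = max(best, sv + R[pos - 1].second);
    }
    return best;
}
```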




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

IV. Solution for small sum of weight — C[i]

What is the maximum value possible when your bag is exact $$$W$$$ weight ?



A) Recursive Dynamic Programming — $$$O(N \times W)$$$ time — $$$O(N \times W)$$$ space
  • Memorization:

    • f[i][s] = magic(int i, int s) stands for: using items from the $$$i$$$-th onward when the bag already has total weight $$$s$$$, the maximum value obtainable is $$$f[i][s]$$$
    • All $$$f[i][s]$$$ init as $$$-1$$$
  • Base cases

    • If ($$$s > w$$$) then $$$v = -oo$$$ since we use more than what the bag can hold
    • If ($$$i \geq n$$$) then $$$v = 0$$$ since there is no available item, so no weight added into the bag
  • Transition

    • Using the current item: $$$A = magic(i + 1, s + c_i) + v_i$$$ — move to the next item, the weight increases by $$$c_i$$$, the value increases by $$$v_i$$$
    • Not using the current item: $$$B = magic(i + 1, s + 0) + 0$$$ — move to the next item, the weight and the value stay the same
    • We want the maximum value so $$$magic(int\ i, int\ s) = max(A, B)$$$
  • The final result: $$$result = magic(1, 0)$$$ — starting from first item with $$$0$$$ weighted bag

Recursive Approach - O(NW) time - O(NW) space
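
A direct translation of this recursion into code (my own sketch, 0-indexed items and global arrays for brevity, so the answer becomes magic(0, 0)) might be:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long NEG_INF = LLONG_MIN / 4;
int n, w;
vector<int> c;                       // weights
vector<long long> v;                 // values
vector<vector<long long>> f;         // memo table f[i][s], initialized to -1

// Best value obtainable from items i..n-1 when the bag already holds weight s.
long long magic(int i, int s) {
    if (s > w) return NEG_INF;       // over capacity: forbid this branch
    if (i >= n) return 0;            // no items left
    long long &res = f[i][s];
    if (res != -1) return res;
    long long take = magic(i + 1, s + c[i]) + v[i];
    long long skip = magic(i + 1, s);
    return res = max(take, skip);
}

// usage: f.assign(n, vector<long long>(w + 1, -1));  long long answer = magic(0, 0);
```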


B) Iterative Dynamic Programming — $$$O(N \times W)$$$ time — $$$O(N \times W)$$$ space
  • Memorization:

    • f[i][s] stands for: using the first $$$i$$$ items with total weight $$$s$$$, the maximum value is $$$f[i][s]$$$
    • All $$$f[i][s]$$$ init as $$$0$$$ not $$$-1$$$
  • Base cases:

    • $$$\forall x \geq 0, f[0][x] = 0$$$ — using no item, hence return no value
    • $$$\forall x \geq 0, f[x][0] = 0$$$ — having no weight, hence no using item
    • $$$\forall x > 0, y < 0, f[x][y] = -oo$$$ — define it as negative infinity for easier calculation
  • Transition:

    • Using current item, $$$A = \underset{0 \leq t + c_i \leq s}{\underset{j \leq i}{max}}(f[j][t]) + v[i] = \underset{0 \leq t = s - c_i}{\underset{j \leq i}{max}}(f[j][t]) + v[i] = \underset{0 \leq t = s - c_i}{\underset{j = i - 1}{f[j][t]}} + v[i]$$$ maximum value among all previous bags added to current item
    • Not using current item, it will be $$$B = \underset{0 \leq t + 0 \leq s}{\underset{j \leq i}{max}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j \leq i}{max}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j = i - 1}{f[j][t]}} + 0$$$ — move to next item, weight is remained, value is not increased
    • We want the maximum value so $$$f[i][s] = max(A, B) = max(f[i - 1][s], f[i - 1][s - c_i] + v_i)$$$
  • The final result: $$$result = \underset{0 \leq s \leq w}{max}(f[n][s])$$$ — the best value over all feasible total weights

Bad Approach - O(N^2 * W^2) time
Iterative Approach - O(NW) time - O(NW) space
Prefixmax DP Approach - O(NW) time - O(NW) space


C) Iterative Dynamic Programming (Space Optimization) — $$$O(N \times W)$$$ time — $$$O(N + W)$$$ space

A) O(2W) DP space

  • Observe: $$$\forall i > 0, f[i][x]$$$ depends only on $$$f[i - 1]$$$ and $$$f[i]$$$, hence we just need 2 DP rows

  • Define: when we process the $$$p$$$-th element, $$$\underset{x \equiv p (mod 2)}{f[x]}$$$ is the current DP row and $$$\underset{y \equiv p + 1 (mod 2)}{f[y]}$$$ is the previous DP row

  • Transition: $$$f[i][s] = max(f[i - 1][s], f[i - 1][s - c_i] + v_i)$$$, equivalent to $$$f[x][s] = max(f[y][s], f[y][s - c_i] + v_i)$$$

Space Optimization Approach - O(NW) time - O(N + 2W) space


B) O(W) 1D — DP space

  • From the above algorithm, we can change the inner loop
Inner Part
  • Kind of tricky, but we only need one array: after processing each item, $$$f[s]$$$ stands for the maximum value achievable with a bag of weight $$$s$$$ so far.
Inner Part
  • Notice that the inner loop is required to iterate from $$$w$$$ down to $$$c_i$$$. Here is the reason
From c[i] upto w
From w downto c[i]
  • Finally, here is 1D Dynamic Programming Solution
Space Optimization Approach - O(NW) time - O(N + 1W) space
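
A minimal sketch of the final 1D version (my own code; here `f[s]` is interpreted as the best value with total weight at most `s`) could be:

```cpp
#include <bits/stdc++.h>
using namespace std;

// 0/1 knapsack with one 1D array: O(NW) time, O(W) extra space.
long long knapsack1D(int n, int W, const vector<int> &c, const vector<long long> &v) {
    vector<long long> f(W + 1, 0);               // f[s] = best value with weight at most s
    for (int i = 0; i < n; ++i)
        for (int s = W; s >= c[i]; --s)          // iterate downward so item i is used at most once
            f[s] = max(f[s], f[s - c[i]] + v[i]);
    return f[W];
}
```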




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

V. Solution for small sum of value — V[i]

What is the minimum bag weight possible when it is exact $$$S$$$ sum value ?



A) Recursive Dynamic Programming — $$$O(N \times SUM)$$$ time — $$$O(N \times SUM)$$$ space
  • Memoization:

    • $$$f[i][s] = magic(int\ i, int\ s)$$$ stands for: considering items from the $$$i$$$-th onward with remaining value $$$s$$$ to collect, the minimum weight is $$$f[i][s]$$$
    • All $$$f[i][s]$$$ are initialized as $$$-1$$$
  • Base cases

    • If ($$$s < 0$$$) then $$$v = +oo$$$ means we use more value than expected
    • If ($$$i > n$$$ and $$$s \neq 0$$$) then $$$v = +oo$$$ means there is currently no bag of exact $$$s$$$ value
    • If ($$$i > n$$$ and $$$s = 0$$$) then $$$v = 0$$$ means there is actually a bag of exact $$$s$$$ value
  • Transition

    • Using the current item: $$$A = magic(i + 1, s - v_i) + c_i$$$ — move to the next item, the remaining value is reduced by $$$v_i$$$, the weight increases by $$$c_i$$$
    • Not using the current item: $$$B = magic(i + 1, s - 0) + 0$$$ — move to the next item, the remaining value and the weight stay the same
    • We want the minimum weight, so $$$magic(int\ i, int\ s) = min(A, B)$$$
  • The final result: $$$result = \underset{0 \leq s \leq \Sigma(v_i)}{max}(s | magic(1, s) \leq w)$$$ — maximum value whose weight is not greater than $$$W$$$

Recursive Approach - O(NSUM) time - O(NSUM) space
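The code spoiler is collapsed here, so below is a minimal 0-indexed sketch of this recursion, where `sum` denotes $$$\Sigma(v_i)$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long INF = LLONG_MAX / 4;   // the "+oo" from the base cases

int n, w, sum;
vector<int> c, v;
vector<vector<long long>> f;           // memo table, -1 = not computed

// magic(i, s): minimum weight needed to collect exactly s more value using items i..n-1
long long magic(int i, int s) {
    if (s < 0) return INF;                       // used more value than expected
    if (i >= n) return (s == 0 ? 0 : INF);       // out of items: ok only if nothing is left to collect
    long long &memo = f[i][s];
    if (memo != -1) return memo;
    long long A = magic(i + 1, s - v[i]) + c[i]; // take item i
    long long B = magic(i + 1, s);               // skip item i
    return memo = min(A, B);
}

int main() {
    cin >> n >> w;
    c.resize(n); v.resize(n); sum = 0;
    for (int i = 0; i < n; ++i) { cin >> c[i] >> v[i]; sum += v[i]; }
    f.assign(n, vector<long long>(sum + 1, -1));

    long long res = 0;
    for (int s = 0; s <= sum; ++s)
        if (magic(0, s) <= w) res = s;           // largest value reachable within capacity w
    cout << res << '\n';
}
```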


B) Iterative Dynamic Programming — $$$O(N \times SUM)$$$ time — $$$O(N \times SUM)$$$ space
  • Memoization:

    • $$$f[i][s]$$$ stands for the minimum weight achievable using only the first $$$i$$$ items with total value exactly $$$s$$$
    • All $$$f[i][s]$$$ are initialized as $$$+oo$$$, not $$$-1$$$
  • Base cases:

    • $$$f[0][0] = 0$$$ and $$$\forall x > 0, f[0][x] = +oo$$$ — using no item, only total value $$$0$$$ (with weight $$$0$$$) is reachable
    • $$$\forall x \geq 0, f[x][0] = 0$$$ — collecting no value, hence using no item and no weight
    • $$$\forall x > 0, y < 0, f[x][y] = +oo$$$ — defined as positive infinity for easier calculation
  • Transition:

    • Using the current item: $$$A = \underset{0 \leq t + v_i \leq s}{\underset{j \leq i}{min}}(f[j][t]) + c_i = \underset{0 \leq t = s - v_i}{\underset{j \leq i}{min}}(f[j][t]) + c_i = \underset{0 \leq t = s - v_i}{\underset{j = i - 1}{f[j][t]}} + c_i$$$ — the minimum weight among all previous bags, with the current item added
    • Not using the current item: $$$B = \underset{0 \leq t + 0 \leq s}{\underset{j \leq i}{min}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j \leq i}{min}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j = i - 1}{f[j][t]}} + 0$$$ — the value and the weight stay the same
    • We want the minimum weight, so $$$f[i][s] = min(A, B) = min(f[i - 1][s], f[i - 1][s - v_i] + c_i)$$$
  • The final result: $$$result = \underset{0 \leq s \leq \Sigma(v_i)}{max}(s\ |\ f[n][s] \leq w)$$$ — the maximum value whose minimum weight is not greater than $$$W$$$

Iterative Approach - O(NSUM) time - O(NSUM) space


C) Iterative Dynamic Programming (Space Optimization) — $$$O(N \times SUM)$$$ time — $$$O(N + SUM)$$$ space

A) O(2SUM) DP space

  • Observe: $$$\forall i > 0, f[i][x]$$$ depends only on $$$f[i - 1]$$$ and $$$f[i]$$$, hence we only need two DP rows

  • Define: when we process the $$$p$$$-th element, $$$\underset{x \equiv p \pmod 2}{f[x]}$$$ is the current DP row and $$$\underset{y \equiv p + 1 \pmod 2}{f[y]}$$$ is the previous DP row

  • Transition: $$$f[i][s] = min(f[i - 1][s], f[i - 1][s - v_i] + c_i)$$$ becomes $$$f[x][s] = min(f[y][s], f[y][s - v_i] + c_i)$$$

Space Optimization Approach - O(NSUM) time - O(N + 2SUM) space


B) O(SUM) 1D — DP space

  • From the above algorithm, we can change the inner loop
Inner Part
  • Kinda tricky, but we only need one array: after each processed item, $$$f[s]$$$ stands for the minimum weight of a bag with value exactly $$$s$$$ using the items so far.
Inner Part
  • Notice that it is required for the second-inner-loop to iterate from $$$sum$$$ downto $$$v_i$$$. Here is the reason
From v[i] upto sum
From sum downto v[i]
  • Finally, here is 1D Dynamic Programming Solution
Space Optimization Approach - O(NSUM) time - O(N + SUM) space
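A minimal sketch of the final 1D solution for this section, again with 0-indexed arrays:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, w;
    cin >> n >> w;
    vector<int> c(n), v(n);
    int sum = 0;
    for (int i = 0; i < n; ++i) { cin >> c[i] >> v[i]; sum += v[i]; }

    const long long INF = LLONG_MAX / 4;
    // f[s] = minimum weight of a bag whose value is exactly s, over the items so far
    vector<long long> f(sum + 1, INF);
    f[0] = 0;
    for (int i = 0; i < n; ++i)
        for (int s = sum; s >= v[i]; --s)          // downwards so item i is used at most once
            f[s] = min(f[s], f[s - v[i]] + c[i]);

    int res = 0;
    for (int s = 0; s <= sum; ++s)
        if (f[s] <= w) res = s;                    // largest value whose weight fits the bag
    cout << res << '\n';
}
```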




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

VI. Tracing for selected elements

Which next state will lead to the best result ?



A) Solution for small number of element — N

  • A) Permutation Approach: We will update selected elements when we see a better solution
Permutation - O(n!) time - O(n) space

  • B) Bitmasking Approach: We will update bitmask when we see a better solution
O(2^n) time - O(n) space

  • C) Meet-in-the-middle Approach: We will update bitmask when we see a better solution AND ON DP-CALCULATION.
Bitmasking - O(2^(n/2) * (n/2)) time - O(2^(n/2)) space


B) Solution for small sum of weight — C[i]

  • A) Recursive Dynamic Programming: Starting from $$$(i = 0, s = 0)$$$, we already know $$$magic(i,s)$$$ returns the best result, so $$$magic(i + 1, s + 0) + 0$$$ and/or $$$magic(i + 1, s + c[i]) + v[i]$$$ will give the best result of the next state
Trace cases
Recursive DP - O(NW) time - O(NW) space

  • B) Iterative Dynamic Programming:
Prefixmax Iterative DP - O(NW) time - O(NW) space
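The spoilers are collapsed here, so below is a rough sketch (not necessarily the author's prefix-max variant) of building the $$$f[i][s]$$$ table from section IV and walking it backwards to recover which items were chosen:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, w;
    cin >> n >> w;
    vector<int> c(n + 1), v(n + 1);
    for (int i = 1; i <= n; ++i) cin >> c[i] >> v[i];

    // f[i][s] = best value using the first i items with weight at most s
    vector<vector<long long>> f(n + 1, vector<long long>(w + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int s = 0; s <= w; ++s) {
            f[i][s] = f[i - 1][s];                                   // skip item i
            if (s >= c[i]) f[i][s] = max(f[i][s], f[i - 1][s - c[i]] + v[i]); // take item i
        }

    // Walk backwards: item i must have been taken whenever skipping it cannot explain f[i][s]
    vector<int> chosen;
    int s = w;
    for (int i = n; i >= 1; --i)
        if (f[i][s] != f[i - 1][s]) {
            chosen.push_back(i);
            s -= c[i];
        }
    reverse(chosen.begin(), chosen.end());

    cout << f[n][w] << '\n';
    for (int id : chosen) cout << id << ' ';
    cout << '\n';
}
```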

  • C) Iterative Dynamic Programming (Space Optimization):
Explanation
Code


C) Solution for small sum of value — V[i]

  • A) Recursive Dynamic Programming: Starting from $$$(i = 0, s = res)$$$, we already know $$$magic(i,s)$$$ returns the best result, so $$$magic(i + 1, s + 0) + 0$$$ and/or $$$magic(i + 1, s - v[i]) + c[i]$$$ will give the best result of the next state
Trace cases
Recursive DP - O(NSUM) time - O(NSUM) space

  • B) Iterative Dynamic Programming:
Iterative DP - O(NSUM) time - O(NSUM) space

  • C) Iterative Dynamic Programming (Space Optimization):
Explanation
Code




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

VII. Other solutions

How to solve the problem with special condition ?



A) Fractional Knapsack & Greedy Approach

  • On progress...




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

VIII. Online Algorithm

How to solve the problem when you need to output the result whenever you receive a new item ?



A) Solution for small number of element — N

  • A) Permutation Approach: Not really worth using, though it is possible to optimize it

  • B) Bitmasking Approach:
Explanation
Bitmasking Approach - O(2^N * N) time - O(n) space
DP Bitmask Approach - O(2^N) time - O(2^N) space

  • C) Iterating Deque Approach:
Explanation
Deque approach with little optimization - O(min(W, 2^N)) time - O(2^N) space
Deque approach with bad optimization - O(min(W, 2^N * N)) time - O(2^N * N) space

  • D) Recursive Approach:
Explanation
Recursive Approach - O(min(W, 2^N)) time - O(N) space
Recursive Approach with optimization - O(min(W, 2^N / 2^K)) time - O(N) space

  • E) Meet in the middle approach: On the progress...


B) Solution for small sum of weight — C[i]

  • A) Recursive Dynamic Programming:
What has changed from the original?
O(NW) time - O(NW) space

  • B) Iterative Dynamic Programming:
What has changed from the original?
O(NW) time - O(NW) space

  • C) Iterative Dynamic Programming (Space Optimization):
What has changed from the original?
O(NW) time - O(W) space
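As a sketch of this space-optimized online variant, under the assumption that the input is a capacity $$$W$$$ followed by the items one by one, and that the best value must be printed after each new item:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q, w;                      // q incoming items, capacity w (assumed input format)
    cin >> q >> w;

    vector<long long> f(w + 1, 0); // f[s] = best value with weight at most s so far
    while (q--) {
        int ci, vi;
        cin >> ci >> vi;
        for (int s = w; s >= ci; --s)          // usual 0/1 update for the new item
            f[s] = max(f[s], f[s - ci] + vi);
        cout << f[w] << '\n';                  // answer after receiving this item
    }
}
```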


C) Solution for small sum of value — V[i]

  • A) Recursive Dynamic Programming:
What has changed from the original?
O(NSUM) time - O(NSUM) space

  • B) Iterative Dynamic Programming:
What has changed from the original?
O(NSUM) time - O(NSUM) space

  • C) Iterative Dynamic Programming (Space Optimization):
What has changed from the original?
O(NSUM) time - O(NSUM) space




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

IX. Optimizations and Heuristic

How to make the algorithm faster, shorter, simpler, safer, or less memory-hungry



A) Filtering the array

  • 1) Split the items into two types: those whose weight does not exceed $$$W$$$, and the rest
Hint
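The hint itself is collapsed here; as an illustration of the first filter, a tiny helper (the function name `filter_items` is mine, not from the blog) that drops items which can never fit:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Keep only the items that can possibly fit into the bag.
vector<pair<int,int>> filter_items(const vector<pair<int,int>>& items, int w) {
    vector<pair<int,int>> kept;                 // (weight, value) pairs
    for (auto [ci, vi] : items)
        if (ci <= w) kept.push_back({ci, vi}); // an item with ci > w can never be chosen
    return kept;
}
```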

  • 2) Compress the array
Hint




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

X. Debugging

Supports you when you are in trouble and can't find your bug



A) Wrong answer

1) Be careful when the weight sum and the value sum are big — they can cause overflow

Debug

2) Be careful that in the Meet-in-the-middle approach:

  • You have to update the bitmask that has the maximum value.

  • You have to update $$$maxval$$$ and $$$maxmask$$$ before assigning $$$x.maxval$$$, $$$x.maxmask$$$

  • You also have to use them when collecting the result

Wrong
Wrong
Wrong
Debug
Debug

  • 3) Forgetting base cases: in part $$$IV$$$ the DP table is already initialized with $$$0$$$, so you don't need an extra loop to set zeros, while part $$$V$$$ is not like that since you initialize it with $$$+oo$$$
Wrong


B) Time Limit Exceed

1) Global variable $$$\neq$$$ Local variable

  • In the Meet-in-the-middle approach, the solve() function doesn't use the global variable (n); it uses its own $$$n = |c| = |s|$$$.
Debug

2) Forget to use memorization

Wrong
Wrong

3) You might get WA if you have a wrong initialization or leave values uninitialized (random garbage)

Wrong

4) If you want to binary search for the result, remember that you can't do a Prefixmin DP in $$$O(N \times SUM)$$$ the same way as the Prefixmax DP in $$$O(N \times W)$$$

Wrong


C) Memory limit exceed

1) Though Meet-in-the-middle approach is faster than Bitmasking Approach, it requires large amount of space — $$$O(2^{^{\frac{n}{2}}})$$$, which may give you MLE !


2) In some cases you will need space optimization if the limit is too tight !


3) Be careful when tracing the results

Wrong
Fixed


D) Runtime Error

1) Out of bound

Wrong
Wrong

2) Array too big in the main function: declare it globally, with a size slightly bigger than the problem constraint





















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

XI. Knapsack Variation and Practice Problems

In case you need a place to practice or submit



Hint

Note

Hint




Hint

Hint




Hint

Hint

Hint




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

XII. Blog status

The current progress and contributors of this blog



  • Current progress:

    • 1) Online Algorithms
    • 2) Remaining space optimization while tracing for elements
  • In progress:

    • 0) Table of contents & complexity comparison table
    • 1) Online Algorithm
    • 2) Optimizations and Heuristic
    • 3a) Unbounded knapsack
    • 3b) Bounded knapsack
    • 3c) Item limitation knapsack
    • 4a) Knapsack query: maximum value with items in range $$$[L, R]$$$
    • 4b) Knapsack query: maximum value with weight in range $$$[L, R]$$$
    • 4c) Knapsack query: minimum weight with value in range $$$[L, R]$$$
    • 5a) Multiple knapsack bags
    • 5b) Multidimensional items
  • Special thanks to the contributors: SPyofgame, TheScrasse, Lusterdawn, jiangly


Author SPyofgame, history, 4 years ago, in English


Teleporter: [Previous] | | | [Next]

Table of content

Teleporter Description
I. STATEMENT Talking about the problem
II. EXAMPLE To understand the problem better
III. Solution for small number of element — N How much will you get in each possible subset ?
IV. Solution for small sum of weight — C[i] What is the maximum value possible when your bag is exact $$$W$$$ weight ?
V. Solution for small sum of value — V[i] What is the minimum bag weight possible when it is exact $$$S$$$ sum value ?
VI. Tracing for selected elements Which next state will lead to the best result ?
VII. Other solutions How to solve the problem with special condition ?
VIII. Online Algorithm How to solve the problem when you need to output the result whenever you receive a new item ?
IX. Optimizations and Heuristic How to make the algorithm faster, shorter, simpler, safer, or less memory-hungry
X. Debugging Supports you when you are in trouble and can't find your bug
XI. Knapsack Variation and Practice Problems In case you need a place to practice or submit
XII. Blog status The current progress and contributors of this blog




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

I. STATEMENT

Talking about the problem

Problem: During midnight, a thief breaks into a jewelry shop. There are $$$N$$$ valuable items whose value and weight are known. Item $$$p$$$ can be sold for $$$V_p$$$ money but weighs $$$C_p$$$. There is a bag with an infinite amount of space, meaning any item can be put into it while the item's value remains unchanged! However, it can only hold at most $$$W$$$ weight.

Question: What is the maximum total value $$$V$$$ that the thief can steal from the shop?





















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

II. EXAMPLE

To understand the problem better



  • Input
3 8
2 10
4 20
6 30

  • Output
40

  • Explanation:
There are 8 possible cases
{} -> 0 value, 0 weight
{1} -> 10 value, 2 weight
{2} -> 20 value, 4 weight
{3} -> 30 value, 6 weight
{1, 2} -> 30 value, 6 weight
{1, 3} -> 40 value, 8 weight - optimal
{2, 3} -> 50 value, 10 weight - invalid weight
{1, 2, 3} -> 60 value, 12 weight - invalid weight




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

III. Solution for small number of element — N

How much will you get in each possible subset ?



A. Permutation Approach (Bad) — $$$O(n!)$$$ time — $$$O(n)$$$ space
  • For each possible permutation, pick elements in order until the bag would weigh too much

  • The result is the maximum value sum, for which weight sum is not greater than $$$W$$$


Code


B. Bitmasking Approach (Good) — $$$O(2^n)$$$ time — $$$O(n)$$$ space
  • Because the order isn't important, we just need to test every possible subset

  • The result is the maximum value sum, for which weight sum is not greater than $$$W$$$

Code


C. Meet-in-the-middle Approach (Better) — $$$O(2^{n/2})$$$ time — $$$O(2^{n/2})$$$ space
  • Split the array into two halves $$$L$$$ and $$$R$$$. In each half, we will calculate every possible subsets. And in each subset we store a pair of $$$(value\ sum, weight\ sum)$$$

  • For each element $$$X(value_X, weight_X) \in L$$$, we need to find a suitable element $$$Y(value_Y, weight_Y) \in R$$$ maximizing $$$value_Y$$$ while satisfying $$$weight_X + weight_Y \leq W$$$

  • Therefore, we can sort all the $$$R$$$ set by increasing weight. Let $$$maxval_Y = max(value_E | E \in R, weight_E \leq weight_Y)$$$. Then for each $$$X \in L$$$, we can find its suitable $$$Y$$$ by binary search in $$$O(log\ |R|)$$$ with $$$O(|R|)$$$ precalculation

Code
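The code spoiler is collapsed here; below is a minimal sketch of the meet-in-the-middle idea described above (my own rewrite, not necessarily the author's exact code, and without bitmask tracing), practical for roughly $$$n \leq 40$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Enumerate every subset of one half: returns (weight sum, value sum) pairs.
vector<pair<long long,long long>> all_subsets(const vector<pair<int,int>>& half) {
    int k = half.size();
    vector<pair<long long,long long>> res;
    for (int mask = 0; mask < (1 << k); ++mask) {
        long long cs = 0, vs = 0;
        for (int i = 0; i < k; ++i)
            if (mask >> i & 1) { cs += half[i].first; vs += half[i].second; }
        res.push_back({cs, vs});
    }
    return res;
}

int main() {
    int n; long long w;
    cin >> n >> w;
    vector<pair<int,int>> items(n);            // (weight, value)
    for (auto &[ci, vi] : items) cin >> ci >> vi;

    vector<pair<int,int>> L(items.begin(), items.begin() + n / 2);
    vector<pair<int,int>> R(items.begin() + n / 2, items.end());
    auto A = all_subsets(L), B = all_subsets(R);

    // Sort the right half by weight and take prefix maxima of the values.
    sort(B.begin(), B.end());
    vector<long long> maxval(B.size());
    for (size_t i = 0; i < B.size(); ++i)
        maxval[i] = max(B[i].second, i ? maxval[i - 1] : LLONG_MIN);

    long long best = 0;
    for (auto [cs, vs] : A) {
        if (cs > w) continue;
        long long budget = w - cs;             // weight left for the right-half subset
        int lo = 0, hi = (int)B.size() - 1, pos = -1;
        while (lo <= hi) {                     // binary search: heaviest B-subset that still fits
            int mid = (lo + hi) / 2;
            if (B[mid].first <= budget) { pos = mid; lo = mid + 1; }
            else hi = mid - 1;
        }
        if (pos >= 0) best = max(best, vs + maxval[pos]);
    }
    cout << best << '\n';
}
```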




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

IV. Solution for small sum of weight — C[i]

What is the maximum value possible when your bag is exact $$$W$$$ weight ?



A) Recursive Dynamic Programming — $$$O(N \times W)$$$ time — $$$O(N \times W)$$$ space
  • Memoization:

    • $$$f[i][s] = magic(int\ i, int\ s)$$$ stands for: considering items from the $$$i$$$-th onward with the bag already at weight $$$s$$$, the maximum extra value is $$$f[i][s]$$$
    • All $$$f[i][s]$$$ are initialized as $$$-1$$$
  • Base cases

    • If ($$$s > w$$$) then $$$v = -oo$$$ since we use more than what the bag can hold
    • If ($$$i \geq n$$$) then $$$v = 0$$$ since there is no available item, so no weight added into the bag
  • Transition

    • Using the current item: $$$A = magic(i + 1, s + c_i) + v_i$$$ — move to the next item, the weight increases by $$$c_i$$$, the value increases by $$$v_i$$$
    • Not using the current item: $$$B = magic(i + 1, s + 0) + 0$$$ — move to the next item, the weight and the value stay the same
    • We want the maximum value, so $$$magic(int\ i, int\ s) = max(A, B)$$$
  • The final result: $$$result = magic(1, 0)$$$ — start from the first item with an empty ($$$0$$$-weight) bag

Code


B) Iterative Dynamic Programming — $$$O(N \times W)$$$ time — $$$O(N \times W)$$$ space
  • Memoization:

    • $$$f[i][s]$$$ stands for the maximum value achievable using only the first $$$i$$$ items with total weight exactly $$$s$$$
    • All $$$f[i][s]$$$ are initialized as $$$0$$$, not $$$-1$$$
  • Base cases:

    • $$$\forall x \geq 0, f[0][x] = 0$$$ — using no item, hence return no value
    • $$$\forall x \geq 0, f[x][0] = 0$$$ — having no weight, hence no using item
    • $$$\forall x > 0, y < 0, f[x][y] = -oo$$$ — define it as negative infinity for easier calculation
  • Transition:

    • Using the current item: $$$A = \underset{0 \leq t + c_i \leq s}{\underset{j \leq i}{max}}(f[j][t]) + v_i = \underset{0 \leq t = s - c_i}{\underset{j \leq i}{max}}(f[j][t]) + v_i = \underset{0 \leq t = s - c_i}{\underset{j = i - 1}{f[j][t]}} + v_i$$$ — the best among all previous bags, with the current item added
    • Not using the current item: $$$B = \underset{0 \leq t + 0 \leq s}{\underset{j \leq i}{max}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j \leq i}{max}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j = i - 1}{f[j][t]}} + 0$$$ — the weight and the value stay the same
    • We want the maximum value, so $$$f[i][s] = max(A, B) = max(f[i - 1][s], f[i - 1][s - c_i] + v_i)$$$
  • The final result: $$$result = \underset{0 \leq s \leq w}{max}(f[n][s])$$$ — the best value over all weights up to $$$w$$$ after considering all $$$n$$$ items

Bad transistion code - O(N^2 * W^2) time
Normal DP
Prefixmax DP


C) Iterative Dynamic Programming (Space optimization) — $$$O(N \times W)$$$ time — $$$O(N + W)$$$ space

A) O(2W) DP space

  • Observe: $$$\forall i > 0, f[i][x]$$$ depends only on $$$f[i - 1]$$$ and $$$f[i]$$$, hence we only need two DP rows

  • Define: when we process the $$$p$$$-th element, $$$\underset{x \equiv p \pmod 2}{f[x]}$$$ is the current DP row and $$$\underset{y \equiv p + 1 \pmod 2}{f[y]}$$$ is the previous DP row

  • Transition: $$$f[i][s] = max(f[i - 1][s], f[i - 1][s - c_i] + v_i)$$$ becomes $$$f[x][s] = max(f[y][s], f[y][s - c_i] + v_i)$$$

Code
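A minimal sketch of the two-row trick, assuming the simplified transition above:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, w;
    cin >> n >> w;
    vector<int> c(n + 1), v(n + 1);
    for (int i = 1; i <= n; ++i) cin >> c[i] >> v[i];

    // Only two rows are kept: f[i & 1] is the current row, f[(i ^ 1) & 1] the previous one.
    vector<vector<long long>> f(2, vector<long long>(w + 1, 0));
    for (int i = 1; i <= n; ++i) {
        int x = i & 1, y = x ^ 1;              // current / previous row indices
        for (int s = 0; s <= w; ++s) {
            f[x][s] = f[y][s];                 // skip item i
            if (s >= c[i])
                f[x][s] = max(f[x][s], f[y][s - c[i]] + v[i]);  // take item i
        }
    }
    cout << f[n & 1][w] << '\n';
}
```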


B) O(W) 1D — DP space

  • From the above algorithm, we can change the inner loop
Inner Part
  • Kinda tricky, but we only need one array, for each query $$$f[s]$$$ stand for maximum value with bag of weight $$$s$$$ upto that query.
Inner Part
  • Notice that it is required for the second-inner-loop to iterate from $$$w$$$ downto $$$c_i$$$. Here is the reason
From c[i] upto w
From w downto c[i]
  • Finally, here is 1D Dynamic Programming Solution
Code




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

V. Solution for small sum of value — V[i]

What is the minimum bag weight possible when it is exact $$$S$$$ sum value ?



A) Recursive Dynamic Programming — $$$O(N \times SUM)$$$ time — $$$O(N \times SUM)$$$ space
  • Memoization:

    • $$$f[i][s] = magic(int\ i, int\ s)$$$ stands for: considering items from the $$$i$$$-th onward with remaining value $$$s$$$ to collect, the minimum weight is $$$f[i][s]$$$
    • All $$$f[i][s]$$$ are initialized as $$$-1$$$
  • Base cases

    • If ($$$s < 0$$$) then $$$v = +oo$$$ means we use more value than expected
    • If ($$$i > n$$$ and $$$s \neq 0$$$) then $$$v = +oo$$$ means there is currently no bag of exact $$$s$$$ value
    • If ($$$i > n$$$ and $$$s = 0$$$) then $$$v = 0$$$ means there is actually a bag of exact $$$s$$$ value
  • Transition

    • Using the current item: $$$A = magic(i + 1, s - v_i) + c_i$$$ — move to the next item, the remaining value is reduced by $$$v_i$$$, the weight increases by $$$c_i$$$
    • Not using the current item: $$$B = magic(i + 1, s - 0) + 0$$$ — move to the next item, the remaining value and the weight stay the same
    • We want the minimum weight, so $$$magic(int\ i, int\ s) = min(A, B)$$$
  • The final result: $$$result = \underset{0 \leq s \leq \Sigma(v_i)}{max}(s\ |\ magic(1, s) \leq w)$$$ — the maximum value whose minimum weight is not greater than $$$W$$$

Code


B) Iterative Dynamic Programming — $$$O(N \times SUM)$$$ time — $$$O(N \times SUM)$$$ space
  • Memoization:

    • $$$f[i][s]$$$ stands for the minimum weight achievable using only the first $$$i$$$ items with total value exactly $$$s$$$
    • All $$$f[i][s]$$$ are initialized as $$$+oo$$$, not $$$-1$$$
  • Base cases:

    • $$$f[0][0] = 0$$$ and $$$\forall x > 0, f[0][x] = +oo$$$ — using no item, only total value $$$0$$$ (with weight $$$0$$$) is reachable
    • $$$\forall x \geq 0, f[x][0] = 0$$$ — collecting no value, hence using no item and no weight
    • $$$\forall x > 0, y < 0, f[x][y] = +oo$$$ — defined as positive infinity for easier calculation
  • Transition:

    • Using the current item: $$$A = \underset{0 \leq t + v_i \leq s}{\underset{j \leq i}{min}}(f[j][t]) + c_i = \underset{0 \leq t = s - v_i}{\underset{j \leq i}{min}}(f[j][t]) + c_i = \underset{0 \leq t = s - v_i}{\underset{j = i - 1}{f[j][t]}} + c_i$$$ — the minimum weight among all previous bags, with the current item added
    • Not using the current item: $$$B = \underset{0 \leq t + 0 \leq s}{\underset{j \leq i}{min}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j \leq i}{min}}(f[j][t]) + 0 = \underset{0 \leq t = s}{\underset{j = i - 1}{f[j][t]}} + 0$$$ — the value and the weight stay the same
    • We want the minimum weight, so $$$f[i][s] = min(A, B) = min(f[i - 1][s], f[i - 1][s - v_i] + c_i)$$$
  • The final result: $$$result = \underset{0 \leq s \leq \Sigma(v_i)}{max}(s\ |\ f[n][s] \leq w)$$$ — the maximum value whose minimum weight is not greater than $$$W$$$

Code
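A minimal sketch of this iterative table (weights `c[]`, values `v[]`, capacity `w`), assuming the corrected base case $$$f[0][0] = 0$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, w;
    cin >> n >> w;
    vector<int> c(n + 1), v(n + 1);
    int sum = 0;
    for (int i = 1; i <= n; ++i) { cin >> c[i] >> v[i]; sum += v[i]; }

    const long long INF = LLONG_MAX / 4;       // "+oo"
    // f[i][s] = minimum weight using the first i items with total value exactly s
    vector<vector<long long>> f(n + 1, vector<long long>(sum + 1, INF));
    f[0][0] = 0;                               // no item, value 0, weight 0
    for (int i = 1; i <= n; ++i)
        for (int s = 0; s <= sum; ++s) {
            f[i][s] = f[i - 1][s];                                   // skip item i
            if (s >= v[i] && f[i - 1][s - v[i]] < INF)
                f[i][s] = min(f[i][s], f[i - 1][s - v[i]] + c[i]);   // take item i
        }

    int res = 0;
    for (int s = 0; s <= sum; ++s)
        if (f[n][s] <= w) res = s;             // largest value within capacity w
    cout << res << '\n';
}
```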


C) Iterative Dynamic Programming (Space Optimization) — $$$O(N \times SUM)$$$ time — $$$O(N + SUM)$$$ space

A) O(2SUM) DP space

  • Observe: $$$\forall i > 0, f[i][x]$$$ depends only on $$$f[i - 1]$$$ and $$$f[i]$$$, hence we only need two DP rows

  • Define: when we process the $$$p$$$-th element, $$$\underset{x \equiv p \pmod 2}{f[x]}$$$ is the current DP row and $$$\underset{y \equiv p + 1 \pmod 2}{f[y]}$$$ is the previous DP row

  • Transition: $$$f[i][s] = min(f[i - 1][s], f[i - 1][s - v_i] + c_i)$$$ becomes $$$f[x][s] = min(f[y][s], f[y][s - v_i] + c_i)$$$

Code


B) O(SUM) 1D — DP space

  • From the above algorithm, we can change the inner loop
Inner Part
  • Kinda tricky, but we only need one array: after each processed item, $$$f[s]$$$ stands for the minimum weight of a bag with value exactly $$$s$$$ using the items so far.
Inner Part
  • Notice that it is required for the second-inner-loop to iterate from $$$sum$$$ downto $$$v_i$$$. Here is the reason
From v[i] upto sum
From sum downto v[i]
  • Finally, here is 1D Dynamic Programming Solution
Code




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

VI. Tracing for selected elements

Which next state will lead to the best result ?



A) Solution for small number of element — N

  • A) Permutation Approach: We will update selected elements when we see a better solution
Permutation - O(n!) time - O(n) space

  • B) Bitmasking Approach: We will update bitmask when we see a better solution
O(2^n) time - O(n) space

  • C) Meet-in-the-middle Approach: We will update bitmask when we see a better solution AND ON DP-CALCULATION.
Bitmasking - O(2^(n/2)) time - O(2^(n/2)) space


B) Solution for small sum of weight — C[i]

  • A) Recursive Dynamic Programming: Starting from $$$(i = 0, s = 0)$$$, we already know $$$magic(i,s)$$$ returns the best result, so $$$magic(i + 1, s + 0) + 0$$$ and/or $$$magic(i + 1, s + c[i]) + v[i]$$$ will give the best result of the next state
Trace cases
Recursive DP - O(NW) time - O(NW) space

  • B) Iterative Dynamic Programming:
Prefixmax Iterative DP - O(NW) time - O(NW) space

  • C) Iterative Dynamic Programming (Space Optimization):
Explanation
Code


C) Solution for small sum of value — V[i]

  • A) Recursive Dynamic Programming: Starting from $$$(i = 0, s = res)$$$, we already know $$$magic(i,s)$$$ returns the best result, so $$$magic(i + 1, s + 0) + 0$$$ and/or $$$magic(i + 1, s - v[i]) + c[i]$$$ will give the best result of the next state
Trace cases
Recursive DP - O(NSUM) time - O(NSUM) space

  • B) Iterative Dynamic Programming:
Iterative DP - O(NSUM) time - O(NSUM) space

  • C) Iterative Dynamic Programming (Space Optimization):
Explanation
Code




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

VII. Other solutions

How to solve the problem with special condition ?



A) Fractional Knapsack & Greedy Approach

  • On progress...




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

VIII. Online Algorithm

How to solve the problem when you need to output the result whenever you receive a new item ?



A) Solution for small number of element — N

  • On progress...


B) Solution for small sum of weight — C[i]

  • A) Recursive Dynamic Programming:
What has changed from the original?
O(NW) time - O(NW) space

  • B) Iterative Dynamic Programming:
What has changed from the original?
O(NW) time - O(NW) space

  • C) Iterative Dynamic Programming (Space Optimization):
What has changed from the original?
O(NW) time - O(W) space


C) Solution for small sum of value — V[i]

  • A) Recursive Dynamic Programming:
What has changed from the original?
O(NSUM) time - O(NSUM) space

  • B) Iterative Dynamic Programming:
What has changed from the original?
O(NSUM) time - O(NSUM) space

  • C) Iterative Dynamic Programming (Space Optimization):
What has changed from the original?
O(NSUM) time - O(NSUM) space




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

IX. Optimizations and Heuristic

How to make the algorithm faster, shorter, simpler, safer, or less memory-hungry



A) Filtering the array

  • 1) Split the items into two types: those whose weight does not exceed $$$W$$$, and the rest
Hint

  • 2) Compress the array
Hint




















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

X. Debugging

Supports you when you are in trouble and can't find your bug



A) Wrong answer

1) Be careful when the weight sum and the value sum are big — they can cause overflow

Debug

2) Be careful that in the Meet-in-the-middle approach:

  • You have to update the bitmask that has the maximum value.

  • You have to update $$$maxval$$$ and $$$maxmask$$$ before assigning $$$x.maxval$$$, $$$x.maxmask$$$

  • You also have to use them when collecting the result

Wrong
Wrong
Wrong
Debug
Debug

  • 3) Forgetting base cases: in part $$$IV$$$ the DP table is already initialized with $$$0$$$, so you don't need an extra loop to set zeros, while part $$$V$$$ is not like that since you initialize it with $$$+oo$$$
Wrong


B) Time Limit Exceed

1) Global variable $$$\neq$$$ Local variable

  • In the Meet-in-the-middle approach, the solve() function doesn't use the global variable (n); it uses its own $$$n = |c| = |s|$$$.
Debug

2) Forget to use memorization

Wrong
Wrong

3) You might get WA if you have a wrong initialization or leave values uninitialized (random garbage)

Wrong

4) If you want to binary search for the result, remember that you can't do a Prefixmin DP in $$$O(N \times SUM)$$$ the same way as the Prefixmax DP in $$$O(N \times W)$$$

Wrong


C) Memory limit exceed

1) Though the Meet-in-the-middle approach is faster than the Bitmasking approach, it requires a large amount of space — $$$O(2^{^{\frac{n}{2}}})$$$, which may give you MLE !


2) In some cases you will need space optimization if the limit is too tight !


3) Be careful when tracing the results

Wrong
Fixed


D) Runtime Error

1) Out of bound

Wrong
Wrong

2) Array too big in the main function: declare it globally, with a size slightly bigger than the problem constraint





















――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

XI. Knapsack Variation and Practice Problems

In case you need a place to practice or submit



Hint

Note

Hint




























――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

Teleporter: [Previous] | | | [Next]

XII. Blog status

The current progress and contributors of this blog



  • Current progress:

    • 1) Online Algorithms
    • 2) Remaining space optimization while tracing for elements
  • In progress:

    • 0) Table of contents & complexity comparison table
    • 1) Online Algorithm
    • 2) Optimizations and Heuristic
    • 3a) Unbounded knapsack
    • 3b) Bounded knapsack
    • 3c) Item limitation knapsack
    • 4a) Knapsack query: maximum value with items in range $$$[L, R]$$$
    • 4b) Knapsack query: maximum value with weight in range $$$[L, R]$$$
    • 4c) Knapsack query: minimum weight with value in range $$$[L, R]$$$
    • 5a) Multiple knapsack bags
    • 5b) Multidimensional items
  • Special thanks to the contributors: SPyofgame, TheScrasse, Lusterdawn, jiangly


Author SPyofgame, history, 4 years ago, in English

The full statement

Main statement

A pair of characters $$$(L, R)$$$ is called good, or matched, if it satisfies one of the conditions below

  • $$$L =$$$ '(' and $$$R =$$$ ')'
  • $$$L =$$$ '[' and $$$R =$$$ ']'
  • $$$L =$$$ '{' and $$$R =$$$ '}'

Notice that if $$$(L, R)$$$ is good then $$$(R, L)$$$ is not good

Strings can be classified in many ways; one category is the good string. A string $$$S$$$ of size $$$N$$$ is called good if

  • $$$S$$$ is empty (its length $$$N = 0$$$)
  • $$$S = S_1 + S_2 + \dots + S_n$$$, where each $$$S_i$$$ is a good string and the $$$+$$$ symbol means string concatenation
  • $$$S = L + S_x + R$$$, where $$$S_x$$$ is a good string and $$$(L, R)$$$ is a good pair of characters

Given a string $$$S$$$ of size $$$N$$$. We can insert the characters '(', ')', '[', ']', '{', '}' anywhere in $$$S$$$, but we can't replace or remove existing ones.

The question is: what is the minimum number of characters that need to be added to make the string good?

Limitation: $$$N = |S| \leq 500$$$



The dynamic programming solution $$$O(n^3)$$$

Lets $$$F(l, r)$$$ is the answer for substring $$$S[l..r]$$$.

  • If $$$l > r$$$ then the string is empty, hence the answer is $$$F(l, r) = 0$$$
  • If $$$l = r$$$ then we should add one character to match $$$S_l$$$ to make this substring good, hence the answer is $$$F(l, r) = 1$$$
  • We can split into two other substrings $$$S[l..r] = S[l..k] + S[k+1..r]$$$; for each $$$k$$$ we get the candidate $$$F(l, k) + F(k+1, r)$$$, hence $$$F(l, r) = min(F(l, k) + F(k+1, r))$$$
  • Notice that when $$$S_l$$$ matches $$$S_r$$$, $$$F(l, r) = min(F(l + 1, r - 1), min(F(l, k) + F(k+1, r)))$$$

Complexity:

  • $$$F(l, r)$$$ have $$$O(n^2)$$$ states
  • In each substring $$$S[l..r]$$$, we maybe to have a for-loop $$$O(n)$$$
  • Hence the upper bound of the complexity is $$$O(n^3)$$$
Recursive Code
Iterative Code
Full Code
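The code spoilers are collapsed here; a compact sketch of the first $$$O(n^3)$$$ interval DP described above (0-indexed, so $$$f[l][r]$$$ is the answer for $$$S[l..r]$$$) could look like this:

```cpp
#include <bits/stdc++.h>
using namespace std;

bool match(char L, char R) {
    return (L == '(' && R == ')') || (L == '[' && R == ']') || (L == '{' && R == '}');
}

int main() {
    string s;
    cin >> s;
    int n = s.size();
    if (n == 0) { cout << 0 << '\n'; return 0; }

    // f[l][r] = minimum number of characters to insert so that s[l..r] becomes good
    vector<vector<int>> f(n, vector<int>(n, 0));
    for (int len = 1; len <= n; ++len)
        for (int l = 0, r = len - 1; r < n; ++l, ++r) {
            if (l == r) { f[l][r] = 1; continue; }             // a single character needs one partner
            int best = INT_MAX;
            if (match(s[l], s[r]))
                best = (l + 1 <= r - 1) ? f[l + 1][r - 1] : 0; // pair the two ends
            for (int k = l; k < r; ++k)                        // or split into two good parts
                best = min(best, f[l][k] + f[k + 1][r]);
            f[l][r] = best;
        }
    cout << f[0][n - 1] << '\n';
}
```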


The other dynamic programming solution $$$O(n^3)$$$

Base cases:

  • If $$$l > r$$$ then the string is empty, hence $$$F(l, r) = 0$$$
  • If $$$l = r$$$ then we should add one character to match $$$S_l$$$ to make this substring good, hence $$$F(l, r) = 1$$$

Branch and bound cases:

  • If $$$S_l$$$ is a closing bracket, then add an opening bracket before it, hence $$$F(l, r) = F(l + 1, r) + 1$$$
  • If $$$S_r$$$ is an opening bracket, then add a closing bracket after it, hence $$$F(l, r) = F(l, r - 1) + 1$$$
  • If $$$(S_l, S_{l+1})$$$ is good, then just pair them up, hence $$$F(l, r) = F(l + 2, r) + 0$$$
  • If $$$(S_{r-1}, S_r)$$$ is good, then just pair them up, hence $$$F(l, r) = F(l, r - 2) + 0$$$

Main cases:

For each $$$k = l \rightarrow r - 1$$$

  • If $$$S_k$$$ matches $$$S_r$$$, minimize $$$F(l, r)$$$ with $$$F(l, k - 1) + 0 + F(k + 1, r - 1)$$$
  • Else add an opening character at $$$k$$$ to match $$$S_r$$$, and minimize $$$F(l, r)$$$ with $$$F(l, k) + 1 + F(k + 1, r - 1)$$$

Complexity:

  • $$$F(l, r)$$$ have $$$O(n^2)$$$ states
  • In each substring $$$S[l..r]$$$, we maybe to have a for-loop $$$O(n)$$$ or $$$O(1)$$$ for transistion
  • Hence the upper bound complexity is $$$O(n^3)$$$
  • Hence the lower bound complexity is $$$O(n^2)$$$
Recursive Code
Full code
Optimize version


My question

  • If the string $$$S$$$ consists only of '(' and ')', then there is a linear ($$$O(n)$$$) solution (a short sketch is given after the spoiler below)
The solution
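The spoiler is collapsed here; a standard linear counting solution for the single bracket type (not necessarily identical to the author's) is:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum insertions to balance a string made only of '(' and ')'.
int min_insertions(const string& s) {
    int open = 0, need = 0;          // open = currently unmatched '(', need = forced insertions
    for (char ch : s) {
        if (ch == '(') ++open;
        else if (open > 0) --open;   // ')' matched with a previous '('
        else ++need;                 // ')' with nothing to match: insert one '(' before it
    }
    return need + open;              // leftover '(' each need one inserted ')'
}

int main() {
    string s;
    cin >> s;
    cout << min_insertions(s) << '\n';
}
```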
  • Can my algorithm ($$$dp[l][r] = min(dp[l][k] + dp[k + 1][r])$$$) be improved to $$$O(n^2\ polylog)$$$ or lower somehow?

  • I failed to use the Knuth optimization ($$$dp[l][r] = min(dp[l][k] + dp[k][r]) + cost[l][r]$$$) since the required monotonicity condition is not satisfied


Author SPyofgame, history, 4 years ago, in English

The problem

Given $$$n$$$ points $$$(x_1, y_1),\ (x_2, y_2),\dots, (x_n, y_n)$$$

Find the minimum distance over all pairs of points.

The problem: SPOJ



The constraint

  • $$$2 \leq n \leq 50000$$$

  • $$$x_i, y_i \leq 10^6$$$



My question

I was solving a problem that needs the minimum distance between two points. I tried brute force, then coded a divide and conquer algorithm. But then I modified the brute force by adding some branch-and-bound to skip non-optimal cases. Somehow my code got AC and seems to run fast, while I thought it would be slow with a complexity of $$$O(n^2)$$$ with a small constant.

I'm not really sure about the complexity; can someone help me calculate it?

My main part
My full code


Author SPyofgame, history, 4 years ago, in English

The equation

A Linear Diophantine Equation is an equation of the general form:

$$$\underset{i = 1}{\overset{n}{\Sigma}} (a_i \cdot x_i) = N$$$

Where $$$a_i$$$ and $$$N$$$ are given integers and $$$x_i$$$ are unknown integers.



The problem

Given Linear Diophantine Equation of only 2 variables:

$$$ax + by = c$$$

With given integers $$$a, b, c$$$ and unknown integers $$$x, y$$$

Some interesting property

We have to count the number of non-negative integer solutions $$$(x, y)$$$ of the equation (assume these values are under $$$10^9$$$ so that we don't have to deal with overflow).

Can I have a simpler implementation than this? (My algorithm is based on cp-algorithms.)

Recursive extended greatest common divisor
Recursive extended greatest common divisor
Find one solution ax + by = c
Shift solution
Count number solutions of ax + by = c with given range x & range y
Count all nonegative solutions (x, y) satisfy ax + by = c
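The implementation spoilers are collapsed here; a condensed sketch of the same cp-algorithms approach (find one solution with the extended Euclidean algorithm, then count the valid shifts), assuming $$$a, b > 0$$$ and that all intermediate values fit in 64-bit integers:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Returns g = gcd(a, b) and x, y with a*x + b*y = g.
ll ext_gcd(ll a, ll b, ll &x, ll &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    ll x1, y1, g = ext_gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Number of pairs (x, y) with x, y >= 0 and a*x + b*y = c, assuming a, b > 0.
ll count_solutions(ll a, ll b, ll c) {
    ll x0, y0, g = ext_gcd(a, b, x0, y0);
    if (c % g != 0) return 0;
    x0 *= c / g;                       // one particular solution of a*x + b*y = c
    y0 *= c / g;
    ll da = a / g, db = b / g;         // general solution: x = x0 + t*db, y = y0 - t*da
    auto floor_div = [](ll p, ll q) { return p / q - ((p % q != 0) && ((p < 0) != (q < 0))); };
    ll tmin = -floor_div(x0, db);      // smallest t keeping x >= 0 (ceil(-x0 / db))
    ll tmax = floor_div(y0, da);       // largest t keeping y >= 0
    return max(0LL, tmax - tmin + 1);
}

int main() {
    ll a, b, c;
    cin >> a >> b >> c;
    cout << count_solutions(a, b, c) << '\n';
}
```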


Author SPyofgame, history, 4 years ago, in English

Original Problem


You are given a binary table of size $$$n×m$$$. This table consists of symbols $$$0$$$ and $$$1$$$

You can make such operation: select $$$3$$$ different cells that belong to one $$$2×2$$$ square and change the symbols in these cells (change $$$0$$$ to $$$1$$$ and $$$1$$$ to $$$0$$$)

Your task is to make all symbols in the table equal to $$$0$$$. You don't have to minimize the number of operations. (It can be proved that it is always possible.)


And the constraints are

  • $$$2 \leq N, M \leq 100$$$
  • Easy Version: Limited in $$$3 \cdot N \cdot M$$$ operations
  • Hard Version: Limited in $$$1 \cdot N \cdot M$$$ operations
Code solution without minimizing (with comments)


Extended version

But what if I have to minimize the number of operations?

  • Is there an algorithm other than brute force to find the minimum number of operations for this problem?

  • I am wondering if I can use Gaussian elimination (mod 2) or a greedy/DP approach to solve it somehow

  • I wrote an analyzer for small $$$N \cdot M$$$ tables so that you can check too. (Modified a bit, it can answer queries for all $$$N \cdot M$$$ tables, but the complexity is $$$O(2^{n \cdot m})$$$.)

Analyzer
Test with 3x3 tables
With (n, m) = (2, 2) || show_case = true; show_trace = true;
  • And if the ones are connected, here is the analyzer (I will optimize the algorithm later)
Small checker
Observation
Test with 9x9 tables


Author SPyofgame, history, 4 years ago, in English

Original Problem

M-candies-problem. In this version, we need to calculate the number of ways to share exactly $$$K$$$ candies among $$$N$$$ children such that the $$$i$$$-th child doesn't get more than $$$a_i$$$ candies.

And the constraints are

  • $$$1 \leq N \leq 100$$$
  • $$$0 \leq K \leq 10^5$$$
  • $$$0 \leq a_i \leq K$$$
O(n * k^2) solution - Standard DP
O(n * k) solution - Prefixsum DP
O(n * k) solution - Online algo and space optimization
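The solution spoilers are collapsed here; a minimal sketch of the prefix-sum DP idea (assuming the usual modulus $$$10^9 + 7$$$, which may differ from the actual statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;   // assumed modulus; adjust to the actual problem statement

int main() {
    int n; long long k;
    cin >> n >> k;
    vector<long long> a(n);
    for (auto &x : a) cin >> x;

    // dp[j] = ways to hand out exactly j candies to the children processed so far
    vector<long long> dp(k + 1, 0), pre(k + 2, 0);
    dp[0] = 1;
    for (int i = 0; i < n; ++i) {
        // pre[j + 1] = dp[0] + dp[1] + ... + dp[j] over the OLD dp
        for (long long j = 0; j <= k; ++j) pre[j + 1] = (pre[j] + dp[j]) % MOD;
        for (long long j = 0; j <= k; ++j) {
            long long lo = max(0LL, j - a[i]);     // previous total must lie in [j - a[i], j]
            dp[j] = (pre[j + 1] - pre[lo] + MOD) % MOD;
        }
    }
    cout << dp[k] << '\n';
}
```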


Extended Version

But what if the constraints were higher, I mean for such $$$M, a_i \leq 10^{18}$$$ limitation ?

O(1) solution for N = 1
O(1) solution for N = 2
O(1) solution for N = 3
O(1) solution for N = 4

The fully combinatorial codes above are ugly and hard to turn into a simplified formula. I think this problem can be solved for general $$$a_i$$$ and $$$k$$$ in $$$O(n)$$$ or $$$O(n\ polylog\ n)$$$ with combinatorics and/or inclusion-exclusion, but I failed to find such a formula.

Can someone give me a hint ?


Author SPyofgame, history, 4 years ago, in English

Does anyone know a simple website, tool, or VSCode extension that replaces all macros directly in the source code? I tried searching on Google and the VSCode extension marketplace for something like antimacro ..., remove macro ..., replace macro ... but found no results. Thanks for helping ^^


Author SPyofgame, history, 4 years ago, in English

Can someone help me? I didn't download anything or install any extensions these days, but suddenly both my Google Chrome and Mozilla Firefox browsers show Codeforces without the right navigation tabs :(

I really need the navigations for reading recent news on codeforces and some are helpful <3 Thanks


Author SPyofgame, history, 4 years ago, in English

The problem

Let $$$f(x)$$$ be the product of the digits of $$$x$$$. Example: $$$f(123) = 1 * 2 * 3 = 6$$$, $$$f(990) = 9 * 9 * 0 = 0$$$, $$$f(1) = 1$$$

The statement is: given a limit $$$N$$$, count the number of positive $$$x$$$ such that $$$1 \leq x * f(x) \leq N$$$

Example: For $$$N = 20$$$, there are 5 valid numbers $$$1, 2, 3, 4, 11$$$



The limitation

  • Subtask 1: $$$N \leq 10^6$$$
  • Subtask 2: $$$N \leq 10^{10}$$$
  • Subtask 3: $$$N \leq 10^{14}$$$
  • Subtask 4: $$$N \leq 10^{18}$$$



My approach for subtask 1

  • If $$$(x > N)$$$ or $$$(f(x) > N)$$$ then $$$(x * f(x) > N)$$$. So we will only care about $$$x \leq N$$$ that $$$x * f(x) \leq N$$$
Calculating x * f(x) - O(log10(x))
Counting - O(n log10(n))
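The spoilers are collapsed here; a direct sketch of the subtask 1 brute force described above:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Product of the decimal digits of x.
ll digit_product(ll x) {
    ll p = 1;
    for (; x > 0; x /= 10) p *= x % 10;
    return p;
}

int main() {
    ll n;
    cin >> n;
    ll cnt = 0;
    for (ll x = 1; x <= n; ++x) {           // only x <= n can satisfy x * f(x) <= n
        ll f = digit_product(x);
        if (f != 0 && x * f <= n) ++cnt;    // f != 0 guarantees x * f(x) >= 1
    }
    cout << cnt << '\n';
}
```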


My approach for subtask 2

  • If $$$x$$$ contains $$$0$$$ then $$$f(x) = 0 \Rightarrow x \times f(x) < 1$$$. We only care about such $$$x$$$ without $$$0$$$ digit
Building function - O(result + log10(n))


Here is the solution:

Let's take some $$$x$$$ satisfying $$$1 \leq x * f(x) \leq N$$$

We can easily prove that $$$f(x) \leq x$$$, and because $$$x * f(x) \leq N$$$, we have $$$f(x) \leq \sqrt{N}$$$ (notice that $$$x$$$ might be bigger than $$$\sqrt{N}$$$)

Since $$$f(x)$$$ is the product of the digits of $$$x$$$, which are among {$$$1, 2, \dots, 9$$$}, we have $$$f(x) = 2^a \times 3^b \times 5^c \times 7^d$$$

So we can brute-force all possible tuples $$$(a, b, c, d)$$$ satisfying $$$P = 2^a \times 3^b \times 5^c \times 7^d \leq \sqrt{N}$$$. There is only a small number of such tuples (493 tuples for $$$N = 10^9$$$ and 5914 tuples for $$$N = 10^{18}$$$)

Find all possible tuples - O(quartic_root(N))

For each tuple, we need to count the number of $$$x$$$ such that $$$1 \leq x \times f(x) \leq N$$$ and $$$f(x) = P$$$.

  • We have the value $$$P$$$, so $$$\lceil \frac{1}{P} \rceil \leq x \leq \lfloor \frac{N}{P} \rfloor$$$.
  • We have the value $$$f(x) = P$$$, so $$$x$$$ can be made by digits having the product exactly $$$P$$$, so we can do some DP-digit

So now we have to solve this DP-digit problem: Calculate numbers of such $$$x$$$ ($$$L \leq x \leq R$$$) whose $$$f(x) = P$$$



Solving Subproblem

We try to build $$$X$$$ digit by digit. Because $$$X \leq N$$$, we have to build about $$$18$$$ digits.

Lets make a recursive function $$$magic(X, N, p2, p3, p5, p7)$$$

Lets make some definition
Notice that
Ugly precalculation code - O(1)
magic function - O(18*p2*p3*p5*p7)


About the proving stuff

1) $$$\forall x \in \mathbb{N}, f(x) \leq x$$$

  • Proof: $$$x = \overline{\dots dcba} = \dots + d \times 10^3 + c \times 10^2 + b \times 10^1 + a \times 10^0 \geq \dots \times d \times c \times b \times a = f(x)$$$

2) If $$$x$$$ satisfy then $$$f(x) \leq \sqrt{N}$$$ must be satisfied

  • Proof: $$$x \times f(x) \leq N \Rightarrow f(x) \times f(x) \leq N \Rightarrow f(x) \leq \sqrt{N}$$$

3) $$$\exists\ a, b, c, d \in \mathbb{N} \rightarrow f(x) = 2^a \times 3^b \times 5^c \times 7^d$$$

  • Since $$$x = \overline{\dots dcba} \Rightarrow (0 \leq \dots, d, c, b, a \leq 9)$$$ and $$$f(x) = \dots \times d \times c \times b \times a$$$

  • And we also have $$$\forall$$$ digit $$$v$$$ ($$$v \in \mathbb{N}, 0 \leq v \leq 9$$$) $$$\rightarrow \exists\ a, b, c, d \in \mathbb{N} \rightarrow v = 2^a \times 3^b \times 5^c \times 7^d$$$

  • And because $$$f(x)$$$ is the product of digits of $$$x$$$, hence the statement is correct

4) If we know $$$f(x)$$$ we can find such $$$x$$$ satisfy $$$x \in [L, R]$$$

  • Proof: Since $$$f(x)$$$ is created from factors of digits of $$$x$$$, so $$$x$$$ can also be generated using the factors

5) Number of tuples $$$(a, b, c, d)$$$ satisfy $$$P = 2^a \times 3^b \times 5^c \times 7^d \leq \sqrt{N}$$$ is very small

  • Lets $$$O(k(x)) = O(log_2(x) \times log_3(x) \times log_5(x) \times log_7(x))$$$

  • Since each value $$$x$$$ have the upper bound of $$$log_x(\sqrt{N})$$$. So the complexity is about $$$O(log_2(\sqrt{N}) \times log_3(\sqrt{N}) \times log_5(\sqrt{N}) \times log_7(\sqrt{N})) = O(k(\sqrt{N})) \leq O(log_2(\sqrt{N})^4)$$$

  • But actually for $$$R \leq 10^{18}$$$, the complexity is just about $$$O(k(\sqrt[4]{N}))$$$

Weak proof - N = 10^k
Weak proof - N = 2^k
Code


About the complexity:

  • $$$O(h(x)) = O(log_{10}(N))$$$ is number of digits we have to build
  • $$$O(k(x)) = O(log_2(N) \times log_3(N) \times log_5(N) \times log_7(N)) = O(log(N)^4)$$$ is product of all prime digits $$$p$$$ with its maximum power $$$k$$$ satisfy $$$p^k \leq N$$$
  • $$$O(g(x)) = O(k(\sqrt{N}))$$$ is number of such valid tuples, but for $$$1 \leq N \leq 10^{18}$$$ it is about $$$\approx O(k(\sqrt[4]{N})) \leq O(log_2^4{\sqrt[4]{N}})$$$
  • The space complexity is $$$O(SPACE) = O(h(x) \times k(x)) = O(log_2(N) \times log_3(N) \times log_5(N) \times log_7(N) \times log_{10}(N)) = O(log(N)^5)$$$
  • The time complexity is $$$O(TIME) = O(SPACE) + O(g(x) \times k(x)) \approx O(log(N)^4 \times log_2^4{\sqrt[4]{N}})$$$


Other

  • About the real complexity of $$$O(k(x))$$$: can you find a better/tighter upper bound for the number of tuples $$$(a, b, c, d \in \mathbb{N})$$$ with $$$P = 2^a \times 3^b \times 5^c \times 7^d \leq N$$$? My bound seems very far from the real count, and $$$O(log_2(\sqrt{N}))$$$ is closer but only verified for $$$N \leq 10^{18}$$$

  • Thanks for reading and sorry for updating this blog publicly so many times (most is for the proving path and correcting the complexity)


Author SPyofgame, history, 4 years ago, in English

I came across a fun problem, and after trying hard to solve it, I was curious to find a better algorithm, but I couldn't.



The problem is:

There are $$$N$$$ buildings with heights $$$a_1, a_2, \dots, a_n$$$ meters. These days are hard; the heavy rain still hasn't ended. On day $$$d$$$, every building of height $$$h$$$ has only $$$x_d = \lfloor \frac{h}{d} \rfloor$$$ height of usable space left, while the rest is sunk underwater. On day $$$d$$$, all buildings having the same value $$$x_d$$$ are grouped together (including $$$x_d = 0$$$, completely underwater), so that no building with the same value $$$x_d$$$ belongs to another group.



The question is:

Output $$$N$$$ lines: for each size $$$s$$$ from $$$1$$$ to $$$n$$$, what is the earliest day $$$d$$$ that has at least one group of size $$$s$$$? (If there is no suitable day, output -1.)



The constraints are:

  • Subtask 1: $$$n \leq 100$$$ and $$$a_i \leq 2 \cdot 10^5$$$
  • Subtask 2: $$$n \leq 300$$$ and $$$a_i \leq 3 \cdot 10^6$$$
  • Subtask 3: $$$n \leq 300$$$ and $$$a_i \leq 5 \cdot 10^7$$$


The examples are:

Example 1
Example 2
Example 3


My approach to this problem:

Observation: Harmonic Sequence
Subtask 1: A[i] <= 2 * 10^5
Subtask 2: A[i] <= 3 * 10^6
Subtask 3: A[i] <= 5 * 10^7


My question

  • Is there a better algorithm for larger $$$N$$$ ? (upto $$$10^4, 10^6$$$)

  • Is there a better algorithm for larger $$$a_i$$$ ? (upto $$$10^{12}, 10^{16}, 10^{18}$$$)

  • Can I use combinatorics or euclidian algorithm for this problem ?


Author SPyofgame, history, 4 years ago, in English

Original Problem

In this problem, the statement gives you a deque of $$$n$$$ numbers. Two players take turns alternately. In one turn, a player selects either the leftmost or the rightmost element, removes it, and earns $$$x$$$ points where $$$x$$$ is the removed number. They play until the deque is empty. Let $$$X, Y$$$ be the scores of the first and the second player. Find the maximum $$$X - Y$$$ when both play optimally.

We can use dynamic programming to solve it in $$$O(n^2)$$$, improve it to $$$O(n\ polylog(n))$$$ using a data structure like a Treap, or fully optimize it to linear $$$O(n)$$$ with a deque.

Recursive DP - O(n^2)
DP iterative - O(n^2)
Deque - O(n)

Variant Problem

Then, I come to a problem, here is the statement.

There is a cycle of $$$n$$$ ($$$n \leq 10^4$$$) binary numbers $$${a_1, a_2, \dots, a_n}$$$ with $$$a_i \in \{0, 1\}$$$. The first player takes a number, let's say $$$a_p$$$, removes it, and gains $$$a_p$$$ points. The second player takes a number adjacent to the last removed one — either $$$a_{p - 1}$$$ or $$$a_{p + 1}$$$ (notice that $$$a_1$$$ and $$$a_n$$$ are adjacent). They keep playing alternately until no number is left, and both play optimally.

The question: over all games in which the first player starts by selecting the number $$$a_p$$$, $$$p \in [1, n]$$$, in how many of them does the first player end with a higher score than the second player?

Example 1
Example 2
Example 3

I tried to use DP to solve it in $$$O(n^2)$$$, but I don't know how to optimize it with a deque and only came up with an $$$O(n^2)$$$ solution. Can someone suggest a better algorithm?

Dynamic programming - O(n^2)
Deque way - O(n^2)


Author SPyofgame, history, 4 years ago, in English

About the problem

The problem is to calculate the number of subsequences $$${a_1, a_2, \dots, a_k}$$$ of $$${1, 2, \dots, n}$$$ such that $$$a_1 + a_2 + \dots + a_k = n$$$, where $$$k \geq 2$$$ and $$$a_i \in {1, 2, \dots, n}$$$

It is the sequence OEIS A111133



My approach for small n

Let $$$magic(left, last)$$$ be the number of valid subsequences whose remaining sum equals $$$left$$$ and whose next selected element $$$next$$$ lies in the range $$$(last, left]$$$ ($$$next$$$ is strictly greater than the last selected number $$$last$$$ and not greater than the current remaining sum $$$left$$$). The recursion stops when $$$left = 0$$$; then we have found one valid subsequence.

Recursive dp - O(n^3) - small n


My approach for bigger n

Let $$$magic(sum, cur)$$$ be the number of valid subsequences whose selected sum is $$$sum$$$ and whose currently considered element is $$$cur$$$

  • $$$cur$$$ is initialized as $$$1$$$ (the smallest element) and the recursion stops when it is greater than $$$n$$$ (the largest element)

  • $$$sum$$$ is in the range $$$[0, n]$$$; when it equals $$$n$$$ we have found one valid subsequence, so we return $$$1$$$, and if it is greater than $$$n$$$ we stop the recursion

The complexity is still $$$O(n^3)$$$: $$$O(n^2)$$$ states/calculations and $$$O(n)$$$ for the bignum arithmetic.

Recursive dp - O(n^3) - bignum result
Iterative dp - O(n^3) - bignum result


My question

  • Can I solve the problem in $$$O(n^2)$$$ or in $$$O(n^2 \times polylog(n))$$$

  • Can I find the n_th element faster than $$$O(n^3)$$$


Author SPyofgame, history, 4 years ago, in English

Simplest way:

Approach
Brute-force <TLE> - O(n! * n ^ 2) time - O(n) space

Improving the algorithm

Approach
BIT, Binary-search, Dynamic-programming <TLE> - O((n! * n + n) log n) time - O(n) space

Using formula to improve furthur more

Approach
Recursive Dynamic Programming Approach <AC> - O(n * k) time - O(n * k) space
Iterative Dynamic Programming Approach <AC> - O(n * k) time - O(k) space

I solved this problem but I wonder if there is a better way (faster than $$$O(n \times k)$$$)

So my question is "Is there a faster approach for this problem ?"


Author SPyofgame, history, 4 years ago, in English

I read this paper and learned that the binary GCD implementation is proven to be about 2 times faster than the normal GCD implementation.

Binary Iterative GCD Implementation (wikipedia)
Normal Iterative GCD Implementation
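The implementation spoilers are collapsed here; for reference, a sketch of the iterative binary (Stein's) GCD, roughly the version described on Wikipedia:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;

// Iterative binary (Stein's) GCD.
ull binary_gcd(ull a, ull b) {
    if (a == 0) return b;
    if (b == 0) return a;
    int shift = __builtin_ctzll(a | b);   // common factors of 2
    a >>= __builtin_ctzll(a);             // make a odd
    do {
        b >>= __builtin_ctzll(b);         // make b odd
        if (a > b) swap(a, b);
        b -= a;                           // gcd(a, b) = gcd(a, b - a), keeps a odd
    } while (b != 0);
    return a << shift;
}

int main() {
    ull a, b;
    cin >> a >> b;
    cout << binary_gcd(a, b) << '\n';
}
```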

I just wonder whether there is an efficient binary extended GCD implementation, and how fast it can be?


Author SPyofgame, history, 4 years ago, in English

Thanks to Suffix Array Tutorial from Codeforces I could learn easily how to build Suffix Array and solve some problems

I learned the $$$O(n \log^2(n))$$$ solution and optimized it into $$$O(n \log(n))$$$; it is fast and good. In some contests, I saw some div-1 rankers discussing an $$$O(n)$$$ solution. Now I am wondering whether there is a simple but efficient implementation of a suffix array in linear time and space?

From their conversation about the linear solution, I read this paper and this one too, but I am not able to implement it. I know those implementations are hard and above my level, but I am just curious about the implementation.

Thanks for reading, and sorry if this blog wastes your time.


Author SPyofgame, history, 4 years ago, in English
In problem $$$\overbrace{(((a ^ n) ^ {{}^n}) ^ {{}^{{}^{...}}})}^{b\ times\ n} \mod m$$$
  • My approach is to calculate each part $$$((a ^ n) \mod m)$$$, then $$$(((a ^ n) ^ n) \mod m) = ((t ^ n) \mod m)$$$, and so on — $$$O(\log n)$$$ per part and $$$O(b \times \log n)$$$ total time complexity

  • Can I make it faster somehow when $$$b$$$ is large ($$$b \leq 10^{16}$$$) and $$$m$$$ can be a composite number?

In problem $$$\overbrace{a ^ {(n ^ {(n ^ {(...)})})}}^{b\ times\ n} \mod m$$$
  • I only know how to use bignum to calculate $$$n ^ n$$$, then $$$n ^ {n ^ n}$$$, and so on, and finally compute $$$a ^ {...} \mod m$$$, but the total complexity will be huge (since the number of digits of the bignum grows very fast)

  • Can I apply these formula from phi-function:

$$$a ^ {\phi(m)} \equiv 1 \pmod m$$$
$$$a ^ n \equiv a ^ {n \mod \phi(m)} \pmod m$$$
$$$a ^ n \equiv a ^ {\phi(m) + (n \mod \phi(m))} \pmod m$$$
  • Can I make it faster somehow when $$$n$$$ is large ($$$n \leq 10^{16}$$$) and $$$m$$$ can be a composite number?


Author SPyofgame, history, 4 years ago, in English

There is an empty rectangle of size

$$$n \times m$$$

where

$$$1 \leq n, m \leq 10^6$$$

We cover the rectangle $$$k$$$ times, each time from $$$(u1, v1)$$$ to $$$(u2, v2)$$$, where

$$$1 \leq k \leq 400$$$
$$$1 \leq u1 \leq u2 \leq n$$$
$$$1 \leq v1 \leq v2 \leq m$$$

The question is: how many unit cells of the rectangle have been covered?


Author SPyofgame, history, 4 years ago, in English

I found these implementations ^^

Sorry, I am not sure whether there are corner cases or whether I calculated some complexities wrong. If so, please correct me.

I am wondering if there are some other implementations ^^ (you can comment below and I will add to the post)

Implementation 0: Trivial
Implementation 1: Optimized Trivial
Implementation 2: Naive Factorization
Implementation 3: Factorization
Implementation 4: Miller-rabin
Implementation 5: Sieve + Factorization
Implementation 6: Sieve + Miller-rabin + Pollard-rho
Implementation 7: Divisor Sum Sieve


Compilation:

| Implementation | Main Algorithm | Time Complexity | Space Complexity | Coding Space | Coding Complexity |
|---|---|---|---|---|---|
| 0. Trivial | Implementation | O(n) | O(1) | Short | Simple |
| 1. Trivial | Implementation | O(√n) | O(1) | Short | Simple |
| 2. Factorization + Math | Math | O(n ^ 2) ? | O(1) | Normal | Normal |
| 3. Factorization + Math | Factorization | O(n√n) | O(1) | Normal | Normal |
| 4. Factorization + Math | Miller-rabin | O(n * log^5(n)) | O(1) | Long | Normal |
| 5. Factorization + Math | Sieve | O(n) + O(log n) | O(n) | Normal | Normal |
| 6. Factorization + Math | Pollard-rho | O(√n) + O(√(√(n)) polylog n) per query | O(√n) + O(log n / log(log n)) per query | Very Long | Complex |


Planning:

  • Add related blogs

  • Add some more implementations

  • Add tutorial & reasoning for each implementation
