How to solve E,K?
Does anyone have a proved solution for C?
Well, I "proved" that my solution works for up to 10**9 by checking that only numbers divisible by 4 and 9 can yield a sequence of zero Mobius function values with step resp. 4 or 9 of sufficient length.
Does this count as proved? :)
In my solution I had to decide the answer mod 44100 first (tried a few "promising" candidates). Is 36 enough in your solution?
Yes, 36 was enough, then I ran KMP on 36-bit characters.
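For reference, KMP doesn't care about the alphabet at all, so "36-bit characters" can simply be machine integers. A minimal sketch (function and variable names are my own, not from the thread):

```python
def kmp_find(pattern, text):
    """Return the first index where `pattern` occurs in `text`, or -1.
    Works over any comparable alphabet, e.g. lists of wide integers."""
    if not pattern:
        return 0
    # Prefix function (failure links) of the pattern.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, following failure links on mismatch.
    k = 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - len(pattern) + 1
    return -1
```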
Do I understand correctly that if you have 2 remainders modulo 4 (or 9) that generate all zeros (e.g. 0101...01), you just say that there's no answer?
Well, I check only the first remainder modulo 36 that has all necessary zeros. But I think the answer for those cases ends up being -1 indeed.
I'm not sure whether this can be called a proof, but I'm quite surprised one can use the mathematical properties of μ to solve C.
First, I computed the |μ(x)| sequence in an `unsigned long long seq[int(1e9) / 64 + 1]` array. Then, I computed all length-64 substrings of the input string as integers modulo P and stored all occurrences like `pos[s[i..i+63] % P].push_back(i)`. Then I assumed `seq` is random enough that `seq[x] % P` is diverse enough. It turned out that with P = 2000003 the most frequent value of `seq[x] % P` appeared only ~7500 times. Thus, I could filter at most 7500 × 200 feasible positions in total, using the elements in `pos[seq[x] % P]`.
I have a full proof, except for the running time. Let's calculate the function as a bitset. It can be done in time proportional to the sum of inverse prime squares, which runs in about half a second if you optimize the primes 2 and 3 with a bitset. Then we can push the resulting string through a KMP automaton of the given string. That works in about 2 seconds, but can easily be sped up 8 or 16 times using bit compression.
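For reference, the |μ(x)| sequence discussed above is just the squarefree indicator, so it can be sieved by crossing out multiples of p². A plain (non-bit-packed) Python sketch, with names of my own choosing:

```python
def squarefree_mask(n):
    """Return mask where mask[x] = 1 iff |mu(x)| = 1 (x is squarefree), 1 <= x <= n."""
    mask = [1] * (n + 1)
    mask[0] = 0
    p = 2
    while p * p <= n:
        # p need not be prime here: for composite p, multiples of p^2 were
        # already crossed out by a smaller prime square, so this is just
        # harmless extra work in a short sketch.
        for m in range(p * p, n + 1, p * p):
            mask[m] = 0
        p += 1
    return mask
```

The bit-packed version from the comment stores 64 of these flags per `unsigned long long`.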
What is the complexity of J's solution? My O(N log N) solution for some reason was getting TLE on test case 13. What is the expected complexity?
We had a linear solution with suffix tree
Okay.
Linear. After computing suffix tree it's just dfs.
Can you please explain your solution?
Let's calculate for each subtree expected answer if we and our opponent can only choose suffixes in this subtree. Let's recalc it for vertex v when we add its child u. Fix some p and say that with probability p we don't go to u. Then if our opponent chooses suffix in subtree of u, we get p·len(v) + (1 - p)·dpu, and otherwise he gets p·dpv + (1 - p)·len(v). p is optimal when those values are equal.
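Solving that equation for p gives p = (len(v) - dp_u) / (2·len(v) - dp_u - dp_v). A tiny sketch of the merge step, assuming dp_u, dp_v < len(v) so the probability lands in [0, 1] (the function name is mine, not from the solution):

```python
def merge_child(dp_v, dp_u, len_v):
    """Attach child u to vertex v: choose p so that
    p * len_v + (1 - p) * dp_u == p * dp_v + (1 - p) * len_v,
    and return that common value, the new dp of v.
    Assumes dp_u, dp_v < len_v, so the denominator is positive and p is in [0, 1]."""
    p = (len_v - dp_u) / (2 * len_v - dp_u - dp_v)
    return p * len_v + (1 - p) * dp_u
```

For example, with dp_v = 2, dp_u = 3, len(v) = 5 this gives p = 0.4 and both sides of the equation equal 3.8.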
I'm not familiar with suffix trees, but are they required in your solution, or the main idea stays the same even if one chooses to use just suffix arrays? If so, could you, please, give the details of your solution? It seemed so hard to approach to us...
Well, every solution with suffix trees can be rewritten with only suffix arrays, because they are equivalent. But it's usually much harder to think in terms of a suffix array than in terms of a suffix tree (which is just a trie of all suffixes with some vertices compressed), and the code is often very ugly. It's much easier to learn how to write a suffix automaton once and always use it (since the tree of suffix links in the automaton is the suffix tree of the reversed string).
For K our solution was dynamic programming. Let dp[i][j] be the smallest x such that you can acquire level j in the interval [i, x). Now you have the relation dp[i][j] = dp[dp[i][j - 1]][j - 1], with a special case when A[i] == j. That is an O(NM) solution. We haven't proved it, but the sum over all levels of the number of distinct results of the dp looks to be bounded by something like O(N log M). Maybe we were just lucky :D and that is not true. So now, by keeping for each level a vector of pairs {x, y}, meaning that the interval [z, y) is good if and only if z ≤ x, we can answer the queries with some binary searches in O(Q log N).
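If I read the recurrence above correctly, level j is built from two consecutive level-(j-1) intervals. A heavily hedged sketch of filling the table (sentinel conventions and names are my own, and the A[i] == j special case from the comment is omitted):

```python
def build_levels(base, m):
    """base has length n + 2: base[i] is the smallest x such that level 1
    is achievable in [i, x), with base[i] = n + 1 meaning "impossible".
    Indices n and n + 1 are padding so the recurrence closes on itself.
    Returns dp where dp[j - 1] is the row for level j, computed via
    dp[i][j] = dp[dp[i][j - 1]][j - 1]."""
    n = len(base) - 2
    dp = [list(base)]
    for _ in range(2, m + 1):
        prev = dp[-1]
        # Chain two level-(j-1) intervals; min() keeps the index in range.
        cur = [prev[min(prev[i], n + 1)] for i in range(n + 2)]
        dp.append(cur)
    return dp
```

With base = [1, 2, 3, 4, 5, 5] (every single element gives level 1, n = 4), level j needs 2^(j-1) elements, so level 3 starting at 0 first becomes achievable at x = 4.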
It seems that problem 1075 from Timus has weak tests. My solution, which I wrote in 2015, got PE; the code was so bad that I had to rewrite it from scratch. I am so ashamed of my past self now.
In the Timus problem the points could not lie on the sphere, but in today's problem that was allowed. Maybe that is the issue.
Yes, I had issue with that, but such test is in the samples, so I fixed it.
I'm not sure the issue is weak tests. On Timus it is guaranteed that the start and finish are strictly outside the planet, while here they were just outside (possibly on the surface). Luckily, my solution still gets AC.
I tried my best to find an easy but novel geometric task QwQ. I didn't realize it was still an old problem. Sad :(
There is also Timus 1285 for hyperspace.
Maybe I'm too dumb, but is there an intuitive way to check that the optimal path lies in the plane containing s, t and o?
What about B and E? I've spent over 2 hours struggling with each of them and couldn't find a solution. BTW, I really love their statements
for B, it's necessary to find some structure on the set of intervals. Given an interval [l,r], let's say x is a cut point if all elements of [l,x] are bigger than all elements of [x+1,r] or vice versa.
We can decompose [l,r] into sub-intervals in different ways depending on whether or not there's a cut point.
That's about what I've tried, but so many complications came up. First, I couldn't find any decent decomposition for the case with no cut point. Second, even when there is one, it might be better to consider all the cut points and the division they induce. Then the most straightforward thing to do would be multiplying the no-cut solutions, but throughout our analysis this seemed wrong for some reason, probably because of too-blurred definitions. Also, we were doing this for the whole permutation, and I'm not sure how considering the concept for an arbitrary interval [l, r] may help. Could you please give some more specifics?
For the case without cut points: For each element, look at the maximal subinterval (meaning not contained in any other subinterval) which is a contiguous subsequence containing it. This is unique for each element and the intervals obtained are disjoint.
For the case with cut points: You can also have cut points within a block. In the example [4 3 2 1] [7 5 6] [9 8], we can cut [4 3 2 1] into [4][3][2][1].
Alright, so for the second part, that was the reason I eventually gave up on the approach: you sort of want it to not be cuttable so that the smaller ones are to the left (i.e., not cuttable when you consider that the block is actually attached to something on its left). I guess I could try something like the number of equivalence classes with exactly one cut point after the first one and size = 1 + the actual length, with a particular case for the first interval in the division: you basically include what's to the left of the interval as one extra position. Would that work? I'm still unsure how to actually enforce that "exactly one cut point after the first one" condition.
As for the first part, I was able to prove that your choice does induce a correct partition. In that case, the number of equivalence classes would be the product of dp[length], where dp[n] is the answer for n, or 0. It's 0 only when there are too few members of the division (it's nonzero only for n at least 4 or 5 or so), am I right? (Basically, you can do it iff there is a permutation of length equal to the number of members of the division that has no compact segments other than [i, i] and [1, N].)
Yeah, that's right for the case with no cut points.
For the case with cut points, if you assume the blocks are increasing, each of those blocks by itself will either have no cut points, or have cut points with its own blocks decreasing. However, increasing vs. decreasing gives the same equivalence class of segments.
In the example, [4] [3 2 1 7 5 6 9 8] isn't actually a cut.
In fact, you can just take the product of dp[length] over partitions for the case with cut points.
I got AC by adding one more DP: the number of cut-point states such that the first nontrivial block contains 1. It could also be useful that the number of all states and the number of states where the first nontrivial block doesn't contain 1 are equal.
Let's use 0-based indices for the elements, and call the point between element i - 1 and element i "point i". Also, we use half-open intervals (for example, [l, r) corresponds to elements between point l and point r). We denote the answer of the problem as dp[n].
Let's define two types of cut points. We call point x "cut point of the first kind" if both [0, x) and [x, n) are good intervals. This matches [user:ksun48]'s definition. Also, we call point x "cut point of the second kind" if no good interval except for the entire interval crosses this point. In other words, if [l, r) is a good interval, one of the following must hold: (l, r) = (0, n), r ≤ x, or l ≥ x.
Suppose that n ≥ 2. After some work on paper, we can prove the following.
First, suppose that one or more cut points of the first kind exist, and their indices are 0 < i1 < ... < ik < n. Then, the set of good intervals are:
Also, there are no restrictions on the structures of the subintervals (we can handle them completely independently). Thus, the number of different structures of this form is simply dp[i1] × dp[i2 - i1] × ... × dp[n - ik].
If no cut point of the first kind exists, three or more cut points of the second kind exist. Call their indices 0 < i1 < ... < ik < n. In this case, the set of good intervals are:
Again, there are no restrictions on the structures of the subintervals, and the number of different structures of this form is dp[i1] × dp[i2 - i1] × ... × dp[n - ik].
Therefore, we get the following recurrence:
It's easy to reduce it to DP.
Hmm, the description becomes unexpectedly long, but the final implementation is surprisingly short:
E: let's append one extra "sentinel" P to both strings.
Now the question is equivalent to: a situation is losing if we can find one P in the first string and one P in the second string such that the total "balance" before them is 0 (equal number of Vs and Ps in total over both prefixes), and at least one of those Ps is not a sentinel.
Since balance only decreases on Ps, this is equivalent to: one P in the first string and one P in the second string such that the total "balance" before them is <=0, and at least one of those Ps is not a sentinel.
Now we can handle the two strings almost independently: for each string, an O(n^2) DP on suffixes can compute "how many ways there are to replace ?s with Vs and Ps so that the minimum balance before a P is x", and we combine the two DP results using prefix sums to get our answer.
Now we finally need to handle the "at least one of those Ps is not a sentinel" part by subtracting the "both those Ps must be sentinels" situation, which can be done with almost the same independent DPs, but now counting "how many ways there are to replace ?s with Vs and Ps so that the minimum balance before a P is x AND the minimum balance is only achieved on the sentinel P".
Can you explain how to implement this DP in O(n^2)? The obvious solution is O(n^3): dp[position][current_balance][min_balance], which is not fast enough.
DP on suffixes: suppose that for a string A we know, for each x, how many ways there are to replace ?s with Vs and Ps so that the minimum balance before a P is x. Now if we prepend a letter V to A, we get a'[x+1] += a[x], and if we prepend a letter P to A, we get a'[min(x-1, 0)] += a[x].
In other words, if we go from the end, then we can avoid storing current_balance as each P is added with balance 0.
And in the DP for the second part, instead of a'[min(x-1, 0)] += a[x] we do "if x-1 < 0, then a'[x-1] += a[x]" for all Ps except the sentinel.
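If I read the transitions above correctly, the first-part suffix DP can be sketched like this (assuming balance goes +1 on V and -1 on P; the function name is mine):

```python
from collections import defaultdict

def suffix_dp(s, mod=10**9 + 7):
    """For each x, count the ways to replace '?' in s so that the minimum
    balance seen immediately before any P (including the appended sentinel
    P) equals x. Processes s from the end, prepending one letter at a time."""
    # Start from the sentinel P alone: the only P so far is seen at balance 0.
    a = defaultdict(int)
    a[0] = 1
    for c in reversed(s):
        b = defaultdict(int)
        for x, ways in a.items():
            if c in 'V?':  # prepend V: every later pre-P balance rises by 1
                b[x + 1] = (b[x + 1] + ways) % mod
            if c in 'P?':  # prepend P: new P at balance 0, the rest shift by -1
                b[min(x - 1, 0)] = (b[min(x - 1, 0)] + ways) % mod
        a = b
    return dict(a)
```

For example, "VP" (i.e., "VPP" with the sentinel) has pre-P balances 1 and 0, so the DP yields {0: 1}.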
How about the remaining ones? A, G, H?
G: the case when there's only one sub-garland is trivial but has to be checked manually. Don't forget the case n = m = k = 1.
Next, there are obvious conditions to check: the number of Bs >= 2k, n % k == 0, n >= mk. It turns out that's enough. First, let's replace all Cs except for 2k of them with Bs. Now, while we have strictly more than 4 Bs, we can remove exactly k-2 Cs and 2 Bs, where the 2 Bs are either the first two, or the second and third, in sorted order. When there are 4 Bs left, we can also always split (the lengths are not necessarily exactly k, but we can still make them divisible by k).