1793A - Yet Another Promotion was invented and prepared by Ormlis
1793B - Fedya and Array was invented and prepared by TheEvilBird
1793C - Dora and Search was invented by fedoseev.timofey and prepared by vaaven
1793D - Moscow Gorillas was invented and prepared by Gornak40
1793E - Velepin and Marketing was invented and prepared by Tikhon228
1793F - Rebranding was invented by Tikhon228 and prepared by vaaven
1793A - Yet Another Promotion
Let $$$n = (m + 1) \cdot q + r$$$, where $$$0 \le r \le m$$$.
Note that the promotion is worth using if $$$a \cdot m \leq b \cdot (m + 1)$$$. In that case we buy potatoes under the promotion $$$q$$$ times. The remaining potatoes (or all of them, if the promotion is not profitable) can be bought at $$$\min(a, b)$$$ per kilogram.
Then the answer is $$$q \cdot \min(a \cdot m,\ b \cdot (m + 1)) + r \cdot \min(a, b)$$$.
Time complexity: $$$\mathcal{O}(1)$$$
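As a sketch, the formula above can be computed directly (Python; assumes the problem setup from the editorial: the first shop sells at price $$$a$$$ per kg with a "buy $$$m$$$ kg, get $$$1$$$ kg free" promotion, the second at price $$$b$$$ per kg):

```python
def min_cost(n: int, m: int, a: int, b: int) -> int:
    """Minimum cost of n kg: each full block of (m + 1) kg is bought either
    via the promotion (a * m) or plainly (b * (m + 1)), whichever is cheaper;
    the remaining r kg are bought at the cheaper plain price min(a, b)."""
    q, r = divmod(n, m + 1)
    return q * min(a * m, b * (m + 1)) + r * min(a, b)
```
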
1793B - Fedya and Array
Note that local minima and maxima alternate, and there are equally many of each; denote this count by $$$k$$$. Let $$$a_i$$$ be the $$$i$$$-th local maximum and $$$b_i$$$ the $$$i$$$-th local minimum. Without loss of generality, $$$a_i$$$ comes before $$$b_i$$$. To get from $$$a_i$$$ to $$$b_i$$$ we must write out $$$a_i - b_i$$$ numbers, and from $$$b_i$$$ to $$$a_{(i + 1) \bmod k}$$$ we need $$$a_{(i + 1) \bmod k} - b_i$$$ numbers.
Then $$$(a_1 - b_1) + (a_2 - b_1) + (a_2 - b_2) + \ldots + (a_k - b_k) + (a_1 - b_k)$$$
$$$= 2 \cdot (a_1 + a_2 + \ldots + a_k) - 2 \cdot (b_1 + b_2 + \ldots + b_k) = 2 \cdot (A - B) = n$$$
The array $$$[y, y + 1, y + 2, \ldots, x - 1, x, x - 1, x - 2, \ldots, y + 1]$$$ works.
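The construction above is a one-liner (Python sketch; here $$$x$$$ is the single local maximum and $$$y$$$ the single local minimum of the cyclic array, so its length is $$$2 \cdot (x - y)$$$):

```python
def build_array(x: int, y: int) -> list:
    """Build [y, y+1, ..., x, x-1, ..., y+1]: read cyclically, its only
    local maximum is x and its only local minimum is y."""
    return list(range(y, x + 1)) + list(range(x - 1, y, -1))
```
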
1793C - Dora and Search
Suppose we want to check whether the whole array satisfies the required property. If it does, we can output the whole array as the answer. Otherwise, one of the two boundary elements violates the requirement. It follows that all segments containing this violating element are also invalid, because this boundary element remains the minimum/maximum there.
The fact above yields an algorithm: consider a segment $$$[l; r]$$$, initially equal to $$$[1; n]$$$. If $$$a_l = \min(a_{l}, a_{l+1}, \ldots, a_{r})$$$ or $$$a_l = \max(a_l, a_{l + 1}, \ldots, a_r)$$$, move to the segment $$$[l + 1; r]$$$. The analogous reasoning applies to $$$a_r$$$. Thus, after some number of iterations we either obtain the required segment, or reach $$$l = r$$$, in which case the answer is $$$-1$$$.
Total complexity: $$$\mathcal{O}(n\log n)$$$ or $$$\mathcal{O}(n)$$$ depending on the implementation.
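The $$$\mathcal{O}(n)$$$ variant can look like this (a Python sketch; it relies on $$$a$$$ being a permutation of $$$1..n$$$, which lets us track the current minimum and maximum of the remaining segment as two counters, since we only ever remove the current extreme values):

```python
def find_segment(a: list):
    """Return 1-based (l, r) such that a[l] and a[r] are neither the minimum
    nor the maximum of a[l..r], or -1 if no such segment exists.
    Assumes a is a permutation of 1..n."""
    lo, hi = 1, len(a)            # current min and max values of a[l..r]
    l, r = 0, len(a) - 1
    while l < r:
        if a[l] == lo:   l += 1; lo += 1
        elif a[l] == hi: l += 1; hi -= 1
        elif a[r] == lo: r -= 1; lo += 1
        elif a[r] == hi: r -= 1; hi -= 1
        else:
            return (l + 1, r + 1)  # both endpoints are interior values
    return -1
```
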
1793D - Moscow Gorillas
Denote by $$$pos_x$$$ the index of the number $$$x$$$ in the permutation. Segments with $$$\operatorname{MEX}>1$$$ are exactly those with $$$1 \le l \le pos_1 \le r \le n$$$.
Introduce the notation $$$l_x = \min(pos_1, pos_2, \ldots, pos_x)$$$, $$$r_x = \max(pos_1, pos_2, \ldots, pos_x)$$$.
Segments with $$$\operatorname{MEX}>x$$$ have the form $$$1 \le l \le l_x \le r_x \le r \le n$$$. Let us determine the form of segments with $$$\operatorname{MEX}=x+1$$$.
If $$$pos_{x + 1} < l_x$$$, then segments with $$$\operatorname{MEX}=x+1$$$ have the form $$$pos_{x+1}<l \le l_x \le r_x \le r \le n$$$.
If $$$l_x \le pos_{x + 1} \le r_x$$$, then there is no segment with $$$\operatorname{MEX}=x + 1$$$.
If $$$r_x < pos_{x+1}$$$, then segments with $$$\operatorname{MEX}=x+1$$$ have the form $$$1 \le l \le l_x \le r_x \le r < pos_{x+1}$$$.
It remains to intersect the sets of such segments for $$$p$$$ and $$$q$$$, which is straightforward.
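The case analysis above is easy to get wrong, so a quadratic brute force (fine for small $$$n$$$) is handy for validating an implementation. A Python sketch, with my own function names, counting pairs $$$(l, r)$$$ where the two permutations have equal $$$\operatorname{MEX}$$$:

```python
def count_equal_mex(p: list, q: list) -> int:
    """Count pairs (l, r) with MEX(p[l..r]) == MEX(q[l..r]), brute force."""
    def mex(s: set) -> int:
        m = 1
        while m in s:
            m += 1
        return m

    n, total = len(p), 0
    for l in range(n):
        sp, sq = set(), set()
        for r in range(l, n):
            sp.add(p[r]); sq.add(q[r])
            if mex(sp) == mex(sq):
                total += 1
    return total
```
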
1793E - Velepin and Marketing
Sort the people by their required group size. Suppose there is a person $$$i$$$ who is not satisfied while some person $$$j > i$$$ is satisfied. Then we can replace person $$$j$$$ with person $$$i$$$ in $$$j$$$'s group, and the answer does not get worse. It follows that for a fixed $$$k$$$ the set of people we make satisfied can be chosen as some prefix.
Let us also prove that there is an arrangement of groups covering the same prefix in which every group is a contiguous segment. Take any valid partition into groups; each group is then a union of disjoint segments. Take the leftmost such segment. Note that we can swap it rightwards until it reaches the nearest segment of the same group, without breaking anything.
Thus we may look for a solution as a partition of some prefix into valid groups that are segments. We solve this with dynamic programming.
Let $$$dp[i]$$$ be the maximum number of groups into which the $$$i$$$-th prefix can be partitioned so that everyone in it is satisfied (elements beyond the prefix may not be used). Base case: $$$dp[0] = 0$$$ (an empty prefix can be split into at most $$$0$$$ groups). Transition: person $$$i$$$'s group must have size at least $$$a[i]$$$, so $$$dp[i] = \underset{0 \leqslant j \leqslant i - a[i]}{\max} dp[j] + 1$$$. But what if $$$a[i] > i$$$? Then the $$$i$$$-th prefix cannot be fully satisfied, and we set $$$dp[i] = -\infty$$$. This dp can be computed with prefix maxima, so this part of the solution runs in $$$\mathcal{O}(n)$$$.
We said earlier that the answer is some prefix of satisfied people. If a prefix can be partitioned into some number of groups, it can serve as the answer for every $$$k \leqslant dp[i] + n - i$$$ (we partition the prefix according to $$$dp$$$ and put each of the remaining people into a group of their own).
If we cannot make the whole prefix satisfied ($$$dp[i] = -\infty$$$), we have to add people from outside. In that case the maximum number of groups for which the $$$i$$$-th prefix is fully satisfied is $$$n - a[i] + 1$$$.
Note that if some prefix works for $$$k$$$ groups, it also works for $$$k - 1$$$ (merge two groups into one). So for each query we need the largest prefix that works for the given $$$k$$$; this can be found with an array of suffix maxima in $$$\mathcal{O}(q)$$$ total. Overall complexity: $$$\mathcal{O}(n \log n + q)$$$.
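A sketch of the dp and the per-$$$k$$$ answers described above (Python; helper names are mine, groups are assumed non-empty, and the suffix-maximum step is done over an auxiliary `best` array):

```python
def solve(a: list, queries: list) -> list:
    """For each k in queries: the maximum number of satisfied people when
    splitting len(a) people into exactly k non-empty groups, where a[i] is
    person i's required group size.  Follows the editorial's dp."""
    NEG = float('-inf')
    a = sorted(a)
    n = len(a)
    dp = [NEG] * (n + 1)
    dp[0] = 0
    pref = [0] * (n + 1)              # pref[i] = max(dp[0..i])
    for i in range(1, n + 1):
        need = a[i - 1]
        if need <= i:                 # group ending at i has size >= a[i]
            dp[i] = pref[i - need] + 1
        pref[i] = max(pref[i - 1], dp[i])
    # cap[i] = largest k for which the length-i prefix can be fully satisfied
    cap = [0] * (n + 1)
    cap[0] = n
    for i in range(1, n + 1):
        cap[i] = dp[i] + n - i if dp[i] != NEG else n - a[i - 1] + 1
    # best[k] = largest prefix i with cap[i] >= k (suffix maxima over k)
    best = [0] * (n + 2)
    for i in range(n + 1):
        if 1 <= cap[i] <= n:
            best[cap[i]] = max(best[cap[i]], i)
    for k in range(n - 1, 0, -1):
        best[k] = max(best[k], best[k + 1])
    return [best[k] for k in queries]
```
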
1793F - Rebranding
We sweep over the elements from left to right. The core task is to maintain an up-to-date $$$dp[i]$$$: the minimum difference between $$$a_i$$$ and the elements to the right of it that have already been processed. Suppose we have computed $$$dp$$$ correctly for the first $$$r$$$ elements, and let us move to $$$i = r + 1$$$. We show how to update the answer for all $$$j < i$$$ such that $$$a[j] > a[i]$$$; the case $$$a[j] < a[i]$$$ is handled symmetrically.
Take the first element $$$a[j]$$$ to the left of $$$i$$$ such that $$$a[j] > a[i]$$$. Note that for any $$$l < j < i$$$ with $$$a[l] > a[j] > a[i]$$$ we will not update $$$dp[l]$$$, because $$$|a[l] - a[j]| < |a[l] - a[i]|$$$. More generally, we will not update the answer for any $$$l$$$ with $$$|a[l] - a[j]| < |a[l] - a[i]|$$$, that is, whenever $$$a[l] > a[i] + \frac{a[j] - a[i]}{2}$$$. So from now on only values in the range $$$\left[ a[i], a[i] + \frac{a[j] - a[i]}{2}\right]$$$ are of interest.
Notice that the length of this value range has halved, so there will be at most $$$\mathcal{O}(\log n)$$$ such iterations. The rightmost element whose value lies in a given range can be found with a segment tree. The answer for a query $$$(l_i, r_i)$$$ is $$$\underset{l_i \leqslant j < r_i}{\min}\, dp[j]$$$ taken at the moment $$$r_i$$$; this can also be found efficiently with a segment tree. Overall complexity: $$$\mathcal{O}(n \log^2 n + q \log n)$$$.
There is also an $$$\mathcal{O}(n\sqrt{n} + q \log q)$$$ solution that passes all the tests.
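For reference, the same sweep without the segment-tree machinery gives a simple quadratic version that is useful for stress-testing faster solutions (Python sketch; names are mine, queries are 1-based with $$$l < r$$$ and processed offline by right endpoint):

```python
def answer_queries(a: list, queries: list) -> list:
    """For each query (l, r): min |a_i - a_j| over l <= i < j <= r.
    dp[j] holds the minimum difference between a[j] and the already
    processed elements to its right, as in the editorial, but every
    dp[j] is updated naively when a new element is added."""
    n = len(a)
    INF = float('inf')
    dp = [INF] * n
    by_r = [[] for _ in range(n + 1)]
    for idx, (l, r) in enumerate(queries):
        by_r[r].append((l, idx))
    res = [0] * len(queries)
    for r in range(n):                     # element a[r] enters the sweep
        for j in range(r):
            dp[j] = min(dp[j], abs(a[j] - a[r]))
        for l, idx in by_r[r + 1]:         # answer queries ending here
            res[idx] = min(dp[l - 1:r])    # dp of the right endpoint is INF
    return res
```
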
1619E is based on a similar idea to (though easier than) Problem D of this contest. I got the idea for D from that previously solved problem.
is this contest rated?
You want:
Rated:
Unrated:
Edit: the ratings have been updated!!
if copy F: rated else unrated
Though I solved F, I think it should be unrated. But now it is rated.
Did anyone solve F using Mo's algorithm (a specific sqrt decomposition)?
I made some failed attempts using sets with a TC of $$$\mathcal{O}(n\sqrt{n}\log{n})$$$
Yes, check out my submission.
Super fast set for integers and super fast IO were used to pass TL, overall complexity is $$$O(C\cdot (N+Q)\sqrt{N})$$$, where $$$C$$$ is some small constant of fast set.
Wow, that's kinda amazing. Do you have a template / resource for this super fast set and super fast IO, that you can share with me ?
It's actually hard for me to wrap my head around it...
Both templates are in the code, or what do you mean by that? The IO was taken from someone's submission here; the fast set was written by me, based on a 64-ary tree.
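I haven't seen that exact implementation, but a common way to build such a fast integer set is a hierarchy of 64-bit masks. A minimal two-level Python sketch (not the author's actual code; all names are mine):

```python
class FastSet:
    """Two-level 64-ary bitmask set over [0, n): fast insert/erase and
    next/prev queries via bit tricks on machine-word-sized masks."""

    def __init__(self, n: int):
        self.lo = [0] * ((n >> 6) + 1)   # one 64-bit word per block of 64 values
        self.hi = 0                      # summary: bit b set iff lo[b] != 0

    def insert(self, x: int) -> None:
        self.lo[x >> 6] |= 1 << (x & 63)
        self.hi |= 1 << (x >> 6)

    def erase(self, x: int) -> None:
        self.lo[x >> 6] &= ~(1 << (x & 63))
        if self.lo[x >> 6] == 0:
            self.hi &= ~(1 << (x >> 6))

    def next(self, x: int) -> int:
        """Smallest element >= x, or -1 if none."""
        b = x >> 6
        w = self.lo[b] >> (x & 63)
        if w:
            return x + (w & -w).bit_length() - 1
        h = self.hi >> (b + 1)
        if not h:
            return -1
        b += 1 + (h & -h).bit_length() - 1
        return (b << 6) + (self.lo[b] & -self.lo[b]).bit_length() - 1

    def prev(self, x: int) -> int:
        """Largest element <= x, or -1 if none."""
        b = x >> 6
        w = self.lo[b] & ((1 << ((x & 63) + 1)) - 1)
        if w:
            return (b << 6) + w.bit_length() - 1
        h = self.hi & ((1 << b) - 1)
        if not h:
            return -1
        b = h.bit_length() - 1
        return (b << 6) + self.lo[b].bit_length() - 1
```
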
Practice on infoarena and this won't feel that amazing.
What is infoarena dude ?
Could you describe your solution, please?
It's something like this: link
In your Mo's you are not using optimizations; maybe it would have been easier to pass the TL if you used the tricks that make Mo's even faster?
What optimizations are you talking about?
The sorting of the blocks (you have it commented out); also you could have used the best comparator, which I don't remember exactly and will link when I find it.
It is not a usual Mo's where you can move the left and right pointers as you want, so the comparator you're talking about won't work here.
Could someone help identify the mistake in my code for problem D?
193336874
Problem E is a cleverly-designed dp problem with easy implementation and requirement for thinking ability. However, it's ruined by a duplicate problem F, otherwise more contestants could solve it.
first time i got to solve D in contest and i was just 10 seconds away from submitting it :(
same here ;(
Also was super close to submitting D. Bricked B hard :(
Can somebody explain this statement from the solution for problem F: "That is, there will be no more than $$$O(\log n)$$$ such iterations"? Also, I think it should have been $$$|a[l] - a[j]|$$$ instead of "|a[l]isa[j]|", and $$$\left[a[i],\ a[i] + \frac{a[j] - a[i]}{2}\right]$$$ instead of $$$\left[a[i],\ a[i] + \frac{a[j] - a[l]}{2}\right]$$$.
Every time a new $$$i$$$ comes, we try to update all $$$dp_j$$$ with $$$j < i$$$ and $$$a_j > a_i$$$ (the case $$$a_j < a_i$$$ is the same).
We first find the nearest such $$$j$$$ and update it. As we know, elements greater than $$$a_j$$$ are ignored, so the range we now consider is $$$[a_i, a_j]$$$. Moreover, elements greater than the center of this range are ignored as well. So the next $$$a_j$$$ we find lies in $$$\left[a_i, a_i+\frac{a_j-a_i}{2}\right]$$$ (the left half of the old range).
We find the nearest such $$$j$$$ again; this can be done as a maximum query on a weight segment tree. With the new $$$j$$$ we repeat the process. Note that each time we find a new $$$j$$$, the length of the range is halved, so the whole process stops after $$$O(\log v)$$$ steps, where $$$v$$$ is the value range of the array $$$a$$$; here $$$v = n$$$.
And your correction is right. vaaven please check that.
Only got a micro positive delta but luckily became expert from specialist narrowly :)
congratz)
Can sqrt decomposition solve problem F? I got a TLE and don't know how to optimize.
Yes, and it runs really fast.
Can anyone please explain problem B? It is really confusing.
I'll try to elaborate on the 2nd paragraph of E.
From para#1 we know that in the sorted array, it is always possible that all the satisfied people form a prefix of this array. But in this prefix, maybe a group is separated into a set of unconnected segments. Now we'll prove that all these segments from the same group can be connected.
Let's pick two segments from one of the groups, calling the left segment $$$A$$$ and the right one $$$B$$$. We now keep moving $$$A$$$ to the right until it connects with $$$B$$$. While $$$A$$$ moves right, it squeezes other elements to the left. From para#1 we know that throwing an element to the left doesn't make things worse, so don't worry about the squeezed elements. $$$A$$$ stops as soon as it meets $$$B$$$. People in $$$B$$$ are all satisfied, because everyone in the whole prefix is satisfied. The whole array is sorted, so the requirements in $$$A$$$ are not greater than those in $$$B$$$. And since we only move segments without changing their lengths, every group size stays the same. So people in $$$B$$$ stay satisfied, and people in $$$A$$$, whose requirements are lower than those in $$$B$$$, also stay satisfied. Nothing is wrong.
Now we can connect together any two segments from the same group by moving the left one to the right. The remaining task is only to repeat this operation.
Therefore, we have proved that, each group in this prefix can be a continuous segment.
Thank you very much, I finally understand what editorial is trying to say .....
Actually the above explanation is not completely correct and is also misleading, since we cannot always squeeze elements to the left. It took me quite a lot of time to understand, so I will try my best to explain my argument here. First sort the requirement array $$$a$$$: $$$a_1, a_2, \ldots, a_n$$$ in non-decreasing order, i.e. for all $$$1 \le i, j \le n$$$, $$$i < j$$$ implies $$$a_i \le a_j$$$.

Proof of para 1: Suppose we assign book numbers to all indices (that is what we do in the problem) and call this array $$$b$$$, i.e. $$$b_i = x$$$ means we assigned book number $$$x$$$ to the $$$i$$$-th reader. Now consider a case where for some $$$i < j$$$ the $$$i$$$-th reader is not satisfied but the $$$j$$$-th one is. Suppose we assigned book $$$x$$$ to $$$i$$$ and book $$$y$$$ to $$$j$$$ ($$$b_i = x$$$, $$$b_j = y$$$). Let $$$u$$$ denote the number of $$$x$$$'s in $$$b$$$ and $$$v$$$ the number of $$$y$$$'s. From the satisfaction condition we get $$$u < a_i$$$ and $$$v \ge a_j$$$. These two indices contribute $$$1$$$ to the answer: $$$i$$$ is not satisfied, but $$$j$$$ is. Now swap the books assigned to $$$i$$$ and $$$j$$$, i.e. set $$$b_i = y$$$ and $$$b_j = x$$$. Index $$$i$$$ becomes satisfied, since $$$v \ge a_j \ge a_i$$$, and index $$$j$$$ is no longer satisfied, since $$$u < a_i \le a_j$$$. Hence the pair $$$(i, j)$$$ still contributes $$$1$$$ to the answer (the number of satisfied indices), so the answer does not get worse. Therefore we can always make the satisfied indices a prefix.

Proof of para 2: Again consider an assignment of book numbers to all indices, and assume the prefix of length $$$m$$$ is satisfied. Suppose within this prefix a group is split into unconnected segments; take two separated segments of the same group. For simplicity let both segments have length $$$1$$$: if we can bring together two indices of the same group, we are done, since longer segments can be merged index by index. So consider the following scenario:

array $$$a$$$: $$$a_i,\ a_j,\ a_k$$$
array $$$b$$$: $$$x,\ y,\ x$$$

where $$$i < j < k \le m$$$ (all three indices lie in the satisfied prefix) and $$$b_i = x$$$, $$$b_j = y$$$, $$$b_k = x$$$, i.e. indices $$$i$$$ and $$$k$$$ belong to the same group but are separated. Following the notation of the previous proof, let $$$u$$$ be the number of $$$x$$$'s in $$$b$$$ and $$$v$$$ the number of $$$y$$$'s. Since all three indices are satisfied, we have $$$u \ge a_i$$$, $$$v \ge a_j$$$, $$$u \ge a_k$$$. Combining the first and third conditions: $$$u \ge a_k$$$, $$$v \ge a_j$$$, $$$a_i \le a_j \le a_k$$$ (*). Now swap the values of $$$b$$$ at indices $$$i$$$ and $$$j$$$ (to bring the two $$$x$$$'s together; we want to merge all separated segments of a group into one continuous segment):

array $$$a$$$: $$$a_i,\ a_j,\ a_k$$$
array $$$b$$$: $$$y,\ x,\ x$$$

Index $$$k$$$ is unchanged, so it stays satisfied. Index $$$i$$$ stays satisfied by the same argument as in proof 1: $$$v \ge a_j \ge a_i$$$ (from *). Index $$$j$$$ (the trickiest one) is also still satisfied, thanks to the presence of index $$$k > j$$$ with $$$b_k = x$$$: the number of $$$x$$$'s in $$$b$$$ is $$$u \ge a_k \ge a_j$$$ (from *). Hence we can perform the swap without breaking anything, and we can always bring together two indices of the same group by making such swaps from left to right. Done!
I'm sorry but I have to say your text is hard to read... Anyway, I know what you are talking about. That's exactly the same as what I tried to say. Maybe my words such as "move the segment" and "squeeze" caused some misunderstandings. I'll clarify my idea.
"Moving a segment" only moves the arrangement of groups, not the elements themselves. For example, $$$\Big[(1,2),[3,4],(5,6)\Big]\rightarrow\Big[[1],(2,3),[4],(5,6)\Big]$$$ . In fact, this step swaps the arrangement of groups between the two indexes $$$1$$$ and $$$3$$$ . That is the same as your proof.
Anyway, thanks for your discussion.
F can be solved using sqrt decomposition in $$$O((n+q)\sqrt n)$$$ with a small constant.
Notice that a pair $$$l,r$$$ can contribute to the answer only if $$$r-l\leq \sqrt n$$$ or $$$|a_r-a_l|\leq \sqrt n$$$, and there are only $$$O(n\sqrt n)$$$ of them. So we can find the answers offline, sorted by increasing $$$r$$$, and each time we add an element to the right, we update the answers in the segtree, but this will be $$$O((n\sqrt n+q)\log n)$$$, which is too slow.
But we have a trick here, notice that we have $$$O(n\sqrt n)$$$ updates and only $$$O(q)$$$ queries in the segtree, we can trade query time complexity for update time complexity using sqrt decomposition. Therefore the total time complexity is $$$O((n+q)\sqrt n)$$$.
Here is the submission 193306318 (it is really short).
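The update/query trade-off mentioned above can be sketched as a block decomposition supporting $$$O(1)$$$ point `chmin` updates and $$$O(\sqrt{n})$$$ range-minimum queries (a Python sketch; all names are mine):

```python
import math

class BlockMin:
    """Block decomposition for range minimum with O(1) point chmin:
    each update touches only the element and its block's cached minimum,
    while a query scans at most O(sqrt(n)) blocks and boundary elements."""

    def __init__(self, n: int):
        self.B = max(1, math.isqrt(n))
        self.a = [float('inf')] * n
        self.blk = [float('inf')] * (n // self.B + 1)

    def chmin(self, i: int, v: int) -> None:     # O(1) update
        if v < self.a[i]:
            self.a[i] = v
            b = i // self.B
            if v < self.blk[b]:
                self.blk[b] = v

    def query(self, l: int, r: int):             # min over [l, r), O(sqrt n)
        res = float('inf')
        while l < r and l % self.B:              # left boundary elements
            res = min(res, self.a[l]); l += 1
        while l + self.B <= r:                   # whole blocks
            res = min(res, self.blk[l // self.B]); l += self.B
        while l < r:                             # right boundary elements
            res = min(res, self.a[l]); l += 1
        return res
```
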
What do you mean by "can contribute to the answer"? What if the statement is false? How is the answer calculated? Update: you have already precalculated answers for each block of size $$$\sqrt{n}$$$.
Actually it seems that the number of segments that might contribute to the answer is only $$$O(n)$$$ instead of $$$O(n\sqrt{n})$$$. I am not able to prove it, but it does seem to be the case.
In problem B, there isn't any restriction on the difference $$$x - y$$$. But when I try to hack solutions using the case $$$x = 10^9$$$, $$$y = -10^9$$$, the validator says "Invalid Case". Why?
You may say: "It is impossible / incredibly hard to find a solution to that case." That's what I thought when I was trying to solve this problem. Yes, there is a restriction on the sum of $$$n$$$, but that doesn't mean $$$x - y$$$ is also restricted.
Looking forward to your opinions about that.
It is an implicit restriction: due to the limit on $$$n$$$, this case has no valid solution. I agree that this is misleading... maybe they didn't make it clear to avoid giving a hint.
Well, in any valid answer $$$n$$$ is determined by $$$x - y$$$ (namely $$$n = 2 \cdot (x - y)$$$). That means the original restriction on the sum of $$$n$$$'s indirectly restricts the values of $$$x$$$ and $$$y$$$.
It doesn't. Because there isn't anything like "It is guaranteed that the answer exists" in the statement. Or "It is guaranteed that answer exists with n <= 2e5".
I meant it comes from the sketch of the optimal solution, I dont see the problem, is part of being a competitor to check the implicit constraints too and I think it's brilliant how they did it here, it made me doubt a lot
The problem statement says: It is guaranteed that the sum of $$$n$$$ over all test cases does not exceed $$$2 \cdot 10^5$$$
It was proven in the editorial that in any valid solution, $$$n = 2 \cdot (x - y)$$$.
Here, the "gurantee" means that an input is valid only if the sum of $$$n$$$ in the answer will not exceed $$$2 \cdot 10^5$$$. Any input that doesn't satisfy this will be deemed invalid, even if it satisfies the $$$-10^9 \le x, y \le 10^9$$$ requirement.
This can be understood incorrectly, because the word "gurantee" is very ambiguous here. You should read more about it here.
I think it's fine the way they did it, and to some extent it reduced the guessing on the problem; if they had instead said that the sum of $$$x - y$$$ doesn't exceed $$$10^5$$$, the solution would be obvious.
Can anyone tell me why wrong answer is coming for Test 6 in problem D?
Link to Submission
5
2 3 1 4 5
2 3 1 4 5
Why can't the explanations be more elaborate? Aren't they supposed to induce more interest? I always feel these tutorials require a lot of cognitive effort; does anybody feel the same?
I can't find what's wrong with my code for problem F, which uses the same approach as in the editorial. My submission — 194098249. Can anyone please point out the mistake?
Different idea for problem F (that is hopefully easier to find).
Consider a 1d array $$$dp$$$ of size $$$N$$$. Through various iterations, our array $$$dp$$$ keeps changing.
In the first iteration, $$$dp[i]$$$ is the minimum absolute difference between $$$a[i]$$$ and some other element in indices in the range $$$N - 1 \dots i$$$. In the second iteration, $$$dp[i]$$$ is the minimum absolute difference between $$$a[i]$$$ and some other element in indices in the range $$$N - 2 \dots i$$$. More generally, at the $$$x$$$th iteration, $$$dp[i]$$$ is the minimum absolute difference between $$$a[i]$$$ and some other element in indices in the range $$$N - x \dots i$$$.
It is not hard to see that for a query $$$(l, r)$$$, all we need to do is find $$$\min_{l \le x \le r} dp[x]$$$ on the $$$(N - l)$$$th iteration of $$$dp$$$. Since we can process the queries offline, we process them in decreasing order of $$$l$$$, i.e. in increasing order of "iteration".
So now the question becomes how to actually construct $$$dp$$$. At the $$$0$$$th iteration, $$$dp$$$ is simply an array where all elements are infinity (or some extremely large number). So we just need to figure out how to go from the $$$(i-1)$$$th iteration to the $$$i$$$th iteration. What changes? Not much.
Anything within $$$\sqrt{N}$$$ of index $$$i$$$ may change; we can apply these changes manually. What about indices farther than $$$\sqrt{N}$$$ from index $$$i$$$? Only $$$O(\sqrt{N})$$$ of them change. The thing is that $$$dp[x]$$$ gets really small when $$$|x - i| > \sqrt{N}$$$: in fact, when $$$|x - i| > \sqrt{N}$$$, $$$dp[x] < \sqrt{N}$$$. So basically, most of the dp values are quite small (and probably won't change). The only $$$dp$$$ values that may be affected are the ones where $$$a[x]$$$ is really close to the newly added element (in fact, within $$$\sqrt{N}$$$ of it).
So now we know how to update $$$dp$$$ $$$O(\sqrt{N})$$$ times each iteration.
Now, the rest is just answering queries, which we can do once we slap on a segment tree to $$$dp$$$.
194176722
Final Complexity: $$$O(N \sqrt N \log N + Q \log N)$$$
I'm sorry, but you modified $$$dp$$$ $$$O(\sqrt{N})$$$ times on each iteration, and on each modification you need to update it in the segment tree. As a result, the complexity should be $$$O((N\sqrt{N}+Q)\log N)$$$ instead of what you mentioned.
You are right. I updated the post.
vaaven What is the $$$O(n\sqrt{n}+q\log q)$$$ solution? Please tell us.
We can split the queries into two groups: $$$r - l > \sqrt{n}$$$ and $$$r - l \leqslant \sqrt{n}$$$. Let's solve the task processing the queries in order of increasing $$$r_i$$$. The second group of queries is really easy to handle, because after moving $$$r$$$ to $$$r + 1$$$ we only need to recalculate the nearest element for $$$\sqrt{n}$$$ elements. For the first group we can run a similar dp, but over answers rather than elements. This works in $$$\mathcal{O}(n \sqrt n + q\sqrt{n})$$$. To reach $$$\mathcal{O}(n \sqrt n + q \log q)$$$, sort the queries for each $$$r$$$ in decreasing order of $$$l$$$ and note that the answer only decreases; then a moving pointer suffices, and the solution works in $$$\mathcal{O}(n \sqrt n + q \log q)$$$.
This sorting can be done in linear time. First sort the queries by decreasing $$$l$$$ with a counting sort. Then put each query into its $$$r_i$$$ bucket. Since this doesn't change the relative order of queries with the same $$$r_i$$$, the sorting is finished in linear time.
Complexity is $$$O(n\sqrt{n}+q)$$$ now.
ps: Thank you for this solution. I learnt a lot.
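The linear-time query sort described above might look like this (Python sketch; the query and bucket representations are my own choice):

```python
def sort_queries(queries: list, n: int) -> list:
    """Bucket queries (l, r) by r so that, within each bucket, queries
    appear in decreasing order of l.  Counting sort by l keeps it O(n + q)."""
    by_l = [[] for _ in range(n + 1)]          # counting sort by l
    for idx, (l, r) in enumerate(queries):
        by_l[l].append((l, r, idx))
    buckets = [[] for _ in range(n + 1)]       # one bucket per right endpoint
    for l in range(n, 0, -1):                  # visit l in decreasing order
        for q in by_l[l]:
            buckets[q[1]].append(q)            # stable append preserves order
    return buckets
```
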
Yeah, we know about the $$$\mathcal{O}(n\sqrt{n} + q)$$$ solution, but we were too lazy to code a counting sort ¯\_(ツ)_/¯
C: Dora & Search. Why would this $$$O(n \log n)$$$ solution fail on the given constraints?
Code
In the if and else-if sections you should check which condition matches first. The conditions inside the if and else-if are not correct.
Can anyone tell me why WA on tc 9. https://codeforces.me/contest/1793/submission/209140213
I got the idea of D but couldn't trivially find the intersections (while I practiced).
Alternate $$$O(n\log{n} + q)$$$ solution for problem E using binary search (can be optimized to $$$O(n + q)$$$) :
Let satisfiability of person $$$i$$$ be $$$S(i)$$$. Also, sort all the people in increasing order of $$$S(i)$$$.
The first observation is the same as in the editorial: for any $$$k$$$ (the number of groups to partition the people into), there exists at least one optimal partition in which only some prefix of people is satisfied.
Now let's consider trying to partition the given set of people into the maximum number of valid groups.
We can observe that an optimal assignment strategy is to do the following repeatedly:
Starting from the first unassigned person, create the smallest valid group that forms a subsegment of the sorted array (i.e. keep adding people to this group while it remains invalid, and stop once it becomes valid).
At the end we will be left with some suffix of unassigned people, and it's optimal to assign these people to the last group that we made (this way the maximal prefix of this suffix becomes satisfied as the last group is the largest).
Firstly, observe that each group created by this strategy has size equal to the requirement of its last person (adding any fewer or more people would contradict the fact that we create the smallest possible valid group).
Secondly, the proof for groups being continuous segments is mentioned in the editorial.
Lastly, we can easily show via contradiction and exchange arguments that there exists at least one optimal solution in which making only the smallest groups is optimal (consider what happens when we add extra elements to a group).
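The greedy strategy above can be sketched as follows (Python; names are mine, and assigning the leftover suffix to the last group is omitted):

```python
def max_groups(a: list) -> list:
    """Greedily split sorted requirements into the smallest valid groups.
    Returns the created groups as (start, end) index pairs in the sorted
    order; each group's size equals the requirement of its last person."""
    a = sorted(a)
    groups, start = [], 0
    for i, need in enumerate(a):
        if i - start + 1 >= need:      # group [start..i] just became valid
            groups.append((start, i))
            start = i + 1
    return groups
```
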
Now consider binary searching on maximal size $$$x$$$ of prefix of satisfied people we can get for $$$k$$$ groups. It's optimal to use the unsatisfied suffix to occupy as many groups as possible, so that the prefix has to occupy the minimum number of groups. Therefore each person in the suffix should be alone in his group.
Now there are two cases:
Now we can see that some (say $$$a$$$) of the created "groups" from the optimal strategy lie completely within the prefix of size $$$x$$$, and at most one (the last) group may lie partially within this prefix.
Now, if $$$a < k - n + x$$$ then it's obviously impossible for us to satisfy everyone in this prefix as that would be a contradiction to the fact that our assignment strategy was optimal.
Otherwise, if $$$a \geq k - n + x$$$, the optimal thing is to keep the first $$$k - n + x - 1$$$ created groups as separate groups and put all the remaining people in the prefix into the last group. This is optimal because each group's size equals the requirement of its last person, and we want to put the people from the last group (the one lying partially within the prefix $$$x$$$) into a group with the maximum number of people (the rest of the groups lie completely within the prefix and will be satisfied anyway).
Thus we can check for a particular $$$x$$$ (size of prefix) and $$$k$$$(number of groups), if it's possible to make such partition in $$$O(1)$$$ time. For a fixed $$$k$$$, $$$f(x, k)$$$ is obviously boolean monotonic, so we can binary search.
How to optimise this to $$$O(n + q)$$$ you say? Just observe that as $$$k$$$ increases, $$$x$$$ decreases, so we don't even need binary search. Also, we can sort by satisfiability in $$$O(n)$$$ using counting sort, as $$$1 \leq a_i \leq n$$$.
$$$O(n + q)$$$ Implementation: link
A more intuitive version of the halving condition in F is just:
Consider the set of points that are "updatable" by the first condition, this is monotonically increasing (when we consider all a[j] > a[i]). Update the rightmost two points that satisfy the condition. Now:
Let $$$d = a[j] - a[j-1]$$$ (in the monotonic point set).
If d is "small", then we know our search range is [a[i], a[i] + d]
If d is "large", then we know that a[j-1] is small anyways. Since there's no points in between j-1 and j by definition, it's also a candidate for shrinking the valid updating range by
So our new update range is always $$$[a[i],\ \min(a[i] + d,\ a[j-1])]$$$. If you draw a picture, the worst case is at the halfway point, so it shrinks in $$$\log(n)$$$ steps.
The visual intuition for the problem still makes sense in the editorial too, but this is even clearer IMO, and it motivates the solution a lot more I feel.