Problem A
Problem B
Problem C
Problem D
Problem E
Problem F
Problem G
Problem H
Problem I
Problem A authors — antontrygubO_o, Ari
Problem B author — Ari
Problem C author — Kuroni
Problem D author — hugopm
Problem E author — enoone
Problem F author — antontrygubO_o
Problem G author — antontrygubO_o
Problem H authors — Ynoi, dengyaotriangle
Problem I author — adamant
You guys should've read problem I first, smh
86000122 You've got work to do ;)
Very interesting that they have a similar coding style. Probably just a coincidence.
https://codeforces.me/contest/1375/submission/86004194
Some Tips
Monogon adamant
I think the last paragraph of the I editorial assumes that if $$$x$$$ is primitive and $$$x$$$ divides $$$\lVert q\rVert$$$ then $$$\lVert x\rVert$$$ divides $$$\lVert q\rVert$$$. Is this true? (Also, Monogon's solution seems to assume that the transformation is in the form $$$v\to q\cdot v\cdot \overline{q}$$$ for some Hurwitz quaternion $$$q$$$?)
I have not looked through the proofs, but I can confirm that my solution assumes the transformation is in that form, without the factor of $$$1/2$$$.
Yes, it is true because $$$|q| / x = |q| \cdot |x|^{-1} \cdot \bar x$$$. Since $$$|q|/x$$$ is Hurwitz quaternion and $$$\bar x$$$ is irreducible, we obtain that $$$|x|$$$ must divide $$$|q|$$$.
About transformation being $$$q \cdot v \cdot \bar q$$$, I think it's mostly correct. Assume $$$v \mapsto \frac{1}{2} q \cdot v \cdot \bar q$$$, where $$$q=s+ix+jy+kz$$$ is primitive, makes the transformation with integer matrix. It means that among $$$(s,x,y,z)$$$ exactly two numbers are odd and two numbers are even. Without loss of generality let's assume that $$$(s,x)$$$ are odd and $$$(y,z)$$$ are even. Consider quaternion $$$q'=s'+ix'+jy'+kz'$$$, where:
Note that all $$$s'$$$, $$$x'$$$, $$$y'$$$ and $$$z'$$$ are integer here. Matrix form of transform given by $$$v \mapsto q'\cdot v \cdot \bar{q'}$$$ is:
I used wolframalpha to get the matrix. As you may see, it's quite similar to matrix of $$$v \mapsto \frac{1}{2} q \cdot v \cdot \bar q$$$ transform. To actually get that matrix, you have to:
Swapping the second and third rows along with multiplying them by $$$-1$$$ is done via a $$$180^\circ$$$ rotation around the $$$x$$$ axis, which may be represented as a quaternion with unit norm. Multiplying the second column by $$$-1$$$ is equivalent to changing $$$r_2 \mapsto -r_2$$$, but we don't really need it for this problem, as we're allowed to use negative coefficients. The layout is a bit messy here, but I hope it's clear now that considering only $$$v \mapsto q \cdot v \cdot \bar q$$$ is fine, as it's always possible to get here from $$$v \mapsto \frac{1}{2} q \cdot v \cdot \bar q$$$ if you're fine with losing some signs or the relative order of $$$r_1$$$, $$$r_2$$$ and $$$r_3$$$ (which is the case in this problem).
Why is $$$\overline{x}$$$ irreducible? According to Wikipedia "a Hurwitz integer is irreducible if and only if its norm is a prime number" but this isn't the case here?
By irreducible I meant primitive there, sorry.
I think I follow most of the editorial for problem I, except for several statements in the last paragraph:
Why is $$$g$$$ primitive (does it have to do with the gcd of the input being 1? If so, how?)? For the 'it may be proven' statement, how can it be proven and how is it used? Maybe we need to do it repeatedly (needing $$$q$$$ and $$$p$$$ to be primitive as well) to create a unique factorization of $$$\lVert g\rVert$$$ (and therefore also of $$$g$$$ up to units)? Why does $$$q$$$ have to divide $$$g$$$? And lastly, why does $$$\gcd(g,\lVert q\rVert)$$$ have to be equal to $$$q$$$?
Also, I don't see where the assumption mentioned by Benq comes up.
Assuming that all input vectors are in the form $$$k\cdot q\cdot v\cdot \overline{q}$$$, we know that $$$k\cdot q$$$ divides each of them, so $$$k\cdot q$$$ must also divide their gcd $$$g$$$.
If some integer $$$x>1$$$ divides all coefficients of $$$g$$$ then it will divide all coefficients of any multiple of $$$g$$$. Since $$$g$$$ divides all the quaternions given in the input, $$$x$$$ cannot exist (actually, I'm not exactly sure how this works for $$$x=2$$$ since we can have half-integer coefficients).
I think my assumption shows that $$$\gcd(g,\lVert q\rVert)$$$ can't have norm greater than $$$\lVert q\rVert$$$, idk about the unique factorization though.
I hope Benq's comment clarifies a bit why $$$g$$$ is primitive. Indeed, if it's not, then there is some common integer factor $$$x$$$ which would divide every point from the input. $$$x=2$$$ can't work here either, because for all points it holds that $$$s=0$$$, and Hurwitz quaternions have either all coordinates integer or all coordinates half-integer.
Benq's assumption indeed helps in that $$$\gcd(g,|q|)$$$ doesn't have norm greater than $$$|q|$$$. On the other hand, we know that primitive $$$g$$$ has a unique divisor $$$q$$$ with norm $$$|q|$$$ if this norm divides $$$|g|$$$, thus this $$$q$$$ should be the $$$\gcd$$$.
You may read on the uniqueness of factorisation for primitive quaternions in Conway's book On Quaternions and Octonions. You may also read Factorization of Hurwitz Quaternions for detailed proof in generic case.
Here's a simpler mapping from transformations $$$v \mapsto \frac{1}{2} q \cdot v \cdot \bar{q}$$$ to $$$v \mapsto q \cdot v \cdot \bar{q}$$$ that doesn't involve expanding matrices:
Consider any integer rotation $$$v \mapsto \frac{1}{2} q \cdot v \cdot \bar{q}$$$ where $$$q$$$ is primitive. This maps integer vectors to integer vectors, so 2 divides $$$\lVert q \cdot \vec{e_x} \cdot \bar{q} \rVert = \lVert q \rVert^2$$$, and thus 2 divides $$$\lVert q \rVert$$$. Now, consider the factorization $$$q = q' p$$$ where $$$\lVert p \rVert = 2$$$; this exists by the factorization property of quaternions. Then, our rotation is the composition of $$$v \mapsto \frac{1}{2} p \cdot v \cdot \bar{p}$$$ and $$$v \mapsto q' \cdot v \cdot \bar{q'}$$$.
Note that there are exactly $$$24$$$ Hurwitz quaternions with norm $$$2$$$: $$$\pm 1 \pm i$$$ and symmetric with other terms. It's easy to check that the 24 possible transformations $$$v \mapsto \frac{1}{2} p \cdot v \cdot \bar{p}$$$ correspond to the 12 possible rotations of the unit octahedron $$$\pm \vec{e_x}, \pm \vec{e_y}, \pm \vec{e_z}$$$ which "flip the parity" of the faces (remember that $$$\pm p$$$ correspond to the same rotation). (The other 12 rotations are the 24 transformations $$$v \mapsto u \cdot v \cdot \bar{u}$$$ where $$$\lVert u \rVert = 1$$$.) Thus, in our problem, it's fine to discard the $$$v \mapsto \frac{1}{2} p \cdot v \cdot \bar{p}$$$ part of the transformation and use the equivalent $$$v \mapsto q' \cdot v \cdot \bar{q'}$$$.
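If anyone wants to sanity-check the "easy to check" part, here is a small brute-force sketch (my own illustrative code, not tied to any submission): it enumerates all Hurwitz quaternions $$$p$$$ with $$$\lVert p \rVert = 2$$$, applies $$$v \mapsto \frac{1}{2} p \cdot v \cdot \bar{p}$$$ to the basis vectors, and counts distinct rotation matrices. It should report 24 quaternions and 12 distinct rotations.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Quaternion with integer coordinates a + b*i + c*j + d*k.
struct Q { int a, b, c, d; };

Q mul(Q x, Q y) {
    return { x.a*y.a - x.b*y.b - x.c*y.c - x.d*y.d,
             x.a*y.b + x.b*y.a + x.c*y.d - x.d*y.c,
             x.a*y.c - x.b*y.d + x.c*y.a + x.d*y.b,
             x.a*y.d + x.b*y.c - x.c*y.b + x.d*y.a };
}
Q conj(Q x) { return {x.a, -x.b, -x.c, -x.d}; }

int main() {
    set<array<int, 9>> rotations;
    int quaternions = 0;
    for (int a = -1; a <= 1; a++)
    for (int b = -1; b <= 1; b++)
    for (int c = -1; c <= 1; c++)
    for (int d = -1; d <= 1; d++) {
        if (a*a + b*b + c*c + d*d != 2) continue;       // the 24 quaternions of norm 2
        quaternions++;
        Q p = {a, b, c, d};
        Q basis[3] = {{0,1,0,0}, {0,0,1,0}, {0,0,0,1}}; // e_x, e_y, e_z
        array<int, 9> m{};
        for (int t = 0; t < 3; t++) {
            Q r = mul(mul(p, basis[t]), conj(p));       // p * v * conj(p), norm 2 * 1 * 2
            assert(r.a == 0 && r.b % 2 == 0 && r.c % 2 == 0 && r.d % 2 == 0);
            m[3*t] = r.b / 2; m[3*t + 1] = r.c / 2; m[3*t + 2] = r.d / 2;
        }
        rotations.insert(m);                            // p and -p give the same matrix
    }
    cout << quaternions << " quaternions of norm 2, "
         << rotations.size() << " distinct rotations\n"; // expected: 24 and 12
}
```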
In fact, there's a stronger statement for Hurwitz quaternions that $$$2 \mid \lVert q \rVert$$$ iff $$$\forall v: 2 \mid q \cdot v \cdot \bar{q}$$$. We can use this to get a tighter classification of the valid transformations:
Each transformation of the axes $$$\vec{e_x}, \vec{e_y}, \vec{e_z} \mapsto \vec{r_1}, \vec{r_2}, \vec{r_3}$$$ can be represented as a transformation of the form $$$v \mapsto k \frac{q \cdot v \cdot \bar{q}}{n}$$$ for a positive integer $$$k$$$, a primitive Hurwitz quaternion $$$q$$$, and $$$n = 2$$$ if $$$2 \mid \lVert q \rVert$$$ and $$$n = 1$$$ otherwise. Furthermore, these representations are unique up to the sign of $$$q$$$.
Wow, nice observation, thanks!
adamant also asked me about the case when the input coordinates have some nontrivial common factor $$$k^*$$$. It turns out that you can just divide all coordinates by $$$k^*$$$ and solve the reduced problem; equivalently, there exists some optimal solution of the form $$$v \mapsto k^* q v \bar{q}$$$.
Proof: consider any optimal solution $$$a_i = k q v_i \bar{q}$$$, where $$$q$$$ is a primitive Hurwitz quaternion with odd norm (see the characterization above), the $$$v_i$$$ are arbitrary Hurwitz quaternions, and $$$k$$$ is a positive integer. (The norm of this solution is $$$r = k \lVert q \rVert$$$.) Furthermore, among optimal solutions, pick one with maximal $$$k$$$. Obviously, $$$k \mid k^*$$$ because $$$k \mid a_i$$$ for all $$$i$$$. Assume for the sake of contradiction that $$$k < k^*$$$. Pick some prime $$$p \mid \frac{k^*}{k}$$$.
Case 1: $$$p \not\mid \lVert q \rVert$$$. Then, for all $$$i$$$, $$$p \mid \frac{a_i}{k} = q v_i \bar{q} \implies p \mid \bar{q}(q v_i \bar{q})q = \lVert q \rVert^2 v_i \implies p \mid v_i$$$. Thus, we can write $$$a_i = (kp) q \frac{v_i}{p} \bar{q}$$$ for all $$$i$$$, which is a solution with strictly higher norm ($$$r = kp \lVert q \rVert$$$).
Case 2: $$$p \mid \lVert q \rVert$$$. Let $$$q = q'h$$$ where $$$\lVert h \rVert = p$$$ (this exists by Hurwitz quaternion factorization). Then, $$$p \mid \frac{a_i}{k} = q' h v_i \bar{h} \bar{q'}$$$. Now, $$$q'h$$$ and $$$\bar{h}\bar{q'}$$$ are both primitive, and $$$\lVert h \rVert = p$$$, so we actually must have $$$p \mid h v_i \bar{h}$$$. This is an immediate result of Lemma 4.2 of Factorization of Hurwitz Quaternions. Intuitively, if a Hurwitz quaternion is a multiple of $$$p$$$, two "adjacent" terms with norm $$$p$$$ of its prime factorization must multiply to $$$p$$$ times a unit; in this case, that can't occur in $$$q'h$$$ or $$$\bar{h}\bar{q'}$$$ because those are primitive, so it must occur in the middle.
Thus, we can actually write $$$a_i = (kp) q' \frac{h v_i \bar{h}}{p} \bar{q'}$$$. This solution has the same norm as the original ($$$r = kp \lVert q' \rVert = k p\frac{\lVert q \rVert}{p} = k \lVert q \rVert$$$), but it has a larger $$$k$$$, which contradicts our maximality assumption.
Hence, there must exist an optimal solution where $$$k = k^*$$$, so we can solve the problem for $$$a_i' = a_i / k^*$$$ and multiply the solution by $$$k^*$$$.
A quick followup: we can use this proof to almost classify all maximal solutions.
Let the primitive solution from above be $$$a_i = k^* q^* v_i^* \bar{q^*}$$$ (this is unique up to reordering/reflection of the axes, i.e. multiplication of $$$q$$$ by a unit). Then, the same proof shows that any other solution must be of the form $$$a_i = k q v_i \bar{q}$$$ where $$$q = q^* h$$$, $$$k = k^* / \lVert h \rVert$$$, and $$$v_i = \bar{h} v_i^* h / \lVert h \rVert$$$ for some Hurwitz quaternion $$$h$$$.
There are a few constraints on $$$h$$$. Obviously, we need $$$\lVert h \rVert \mid k^*$$$. Also, we want $$$q^* h$$$ to be primitive. The other constraint is harder to satisfy: for all $$$i$$$, we need $$$v_i = \bar{h} v_i^* h / \lVert h \rVert = h^{-1} v_i^* h$$$ to be a Hurwitz quaternion. Equivalently, we need $$$v_i^* h = h v_i$$$ for some choice of $$$v_i$$$. This process of swapping terms is roughly "meta-commutation"; there's more information about meta-commutation at https://arxiv.org/abs/1307.0443. We roughly need $$$h$$$ to be a fixed point of meta-commuting by $$$v_i^*$$$ for all $$$i$$$ (more precisely, we should analyze each prime factor of $$$\lVert h \rVert$$$ separately, but the idea is the same).
If someone could finish this classification and do it algorithmically, we could get an algorithm for the original problem which doesn't involve brute forcing the norm. Maybe this is possible!
Thanks for your help. The pieces start coming together for me now. Some last questions / remarks.
It looks like it is important how exactly the division (which is used inside the gcd) is done: $$$round(b^{-1}a)$$$ works, but $$$round(ab^{-1})$$$ doesn't. How can we decide which one we need? Does it have something to do with $$$q$$$ being a left factor of $$$g$$$?
You say 'for simplicity we will further work with vectors $$$a_i$$$ multiplied by $$$2$$$', but can't this cause $$$g$$$ to be non-primitive, resulting in the divisor of $$$g$$$ with norm $$$\lVert q \rVert$$$ being non-unique?
We don't need to worry about $$$k>1$$$, because then the gcd of input coordinates would be greater than $$$1$$$.
It doesn't matter to us that $$$q$$$ is only unique up to units, since that only changes the order and signs of $$$r$$$.
In your comment regarding the transformation being $$$q \cdot v \cdot \bar q$$$, I think $$$180^\circ$$$ rotation is only multiplying rows with $$$-1$$$, not swapping them. I think swapping is a $$$90^\circ$$$ (or $$$270^\circ$$$) rotation, which also multiplies one of the rows.
I think using $$$s'=\frac{s+x}{2},~ x'=\frac{x-s}{2},~ y'=\frac{y+z}{2},~ z'=\frac{z-y}{2}$$$ is even cleaner. If my calculations are correct, this only multiplies the second row with $$$-1$$$ and then swaps the second and third rows, which I think is just a $$$90^\circ$$$ rotation around the x axis.
$$$90^\circ$$$ rotation around an axis may also be represented as quaternion with unit norm (but with irrational coefficients), so I think your observation is still valid.
Divisibility: You're right that the order matters, and is determined by whether $$$q$$$ is a left- or right- factor. I think we've been pretty wishy-washy with what we mean by divisibility/division. If you look at the link on factorizations, I believe they're more precise with talking about left-module/left-ideal versus right-module/right-ideal, though they're obviously symmetric. Here's one explanation for this problem that's a little more precise: we will try to find the right-gcd of $$$a$$$ and $$$b$$$, i.e. the largest $$$g$$$ so that $$$a = ga'$$$ and $$$b = gb'$$$. Then, in one step of the Euclidean algorithm, we'd like to find $$$a = bq + r$$$ so that $$$r = g (a' - b' q)$$$ is still right-divisible by $$$g$$$. Thus, we want $$$a \approx bq \iff b^{-1}a \approx q$$$. (Update: apparently the convention is that $$$a = ga'$$$ means that $$$a$$$ is "right-divisible" by $$$g$$$.) A small illustrative code sketch of this Euclidean step follows after these replies.
See my comment above for a bit of explanation about the factor of 2. You're right that multiplying by 2 doesn't quite do what we want.
Yup, $$$k > 1$$$ isn't important.
Yeah, we don't really care about uniqueness, but it's nice to know.
I think the cleanest view of the transformation is dividing by $$$1 + i$$$.
It's important to avoid irrational coefficients so that our claims about Hurwitz quaternions hold.
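Following up on the division/rounding question, here is a minimal, self-contained C++ sketch of one Euclidean step and the resulting gcd (an illustration of my own, not anyone's actual submission; the names H, divRound and rightGcd are invented). It stores doubled coordinates so that both integer and half-integer Hurwitz quaternions stay exact, and it rounds $$$b^{-1}a$$$ to the nearest Hurwitz quaternion by trying both the nearest integer point and the nearest half-integer point.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// A Hurwitz quaternion stored via DOUBLED coordinates: the actual value is
// (a + b*i + c*j + d*k) / 2, so all four stored numbers share the same parity
// (all even = integer coordinates, all odd = half-integer coordinates).
struct H { ll a, b, c, d; };

H conj(const H& q) { return {q.a, -q.b, -q.c, -q.d}; }
H sub(const H& x, const H& y) { return {x.a - y.a, x.b - y.b, x.c - y.c, x.d - y.d}; }
bool isZero(const H& q) { return !q.a && !q.b && !q.c && !q.d; }

// Product in doubled coordinates: (x/2)*(y/2) = (x*y)/4, so the doubled result is (x*y)/2.
// The division by 2 is always exact for Hurwitz inputs.
H mul(const H& x, const H& y) {
    ll a = x.a*y.a - x.b*y.b - x.c*y.c - x.d*y.d;
    ll b = x.a*y.b + x.b*y.a + x.c*y.d - x.d*y.c;
    ll c = x.a*y.c - x.b*y.d + x.c*y.a + x.d*y.b;
    ll d = x.a*y.d + x.b*y.c - x.c*y.b + x.d*y.a;
    return {a / 2, b / 2, c / 2, d / 2};
}

// Norm of the actual quaternion (always an ordinary non-negative integer).
ll norm(const H& q) { return (q.a*q.a + q.b*q.b + q.c*q.c + q.d*q.d) / 4; }

// q ~ b^{-1} * a rounded to the nearest Hurwitz quaternion, so that ||a - b*q|| <= ||b|| / 2.
H divRound(const H& a, const H& b) {
    ll N = norm(b);
    H t = mul(conj(b), a);                             // doubled coordinates of conj(b) * a
    long double x[4] = {(long double)t.a / N, (long double)t.b / N,
                        (long double)t.c / N, (long double)t.d / N};
    H lip, hur;                                        // nearest integer / half-integer point
    ll* lp[4] = {&lip.a, &lip.b, &lip.c, &lip.d};
    ll* hp[4] = {&hur.a, &hur.b, &hur.c, &hur.d};
    for (int i = 0; i < 4; i++) {
        *lp[i] = 2 * llroundl(x[i] / 2);               // nearest even doubled coordinate
        *hp[i] = 2 * llroundl((x[i] - 1) / 2) + 1;     // nearest odd doubled coordinate
    }
    auto remNorm = [&](const H& q) { return norm(sub(a, mul(b, q))); };
    return remNorm(lip) <= remNorm(hur) ? lip : hur;   // exact comparison of the remainders
}

// gcd in the sense above: the largest g with a = g*a' and b = g*b'.
H rightGcd(H a, H b) {
    while (!isZero(b)) {
        H r = sub(a, mul(b, divRound(a, b)));          // ||r|| <= ||b|| / 2, so this terminates
        a = b;
        b = r;
    }
    return a;
}
```

The rounding works because the distance from any point of $$$\mathbb{R}^4$$$ to the nearest Hurwitz quaternion is at most $$$1/\sqrt{2}$$$; for huge coordinates one should replace the long double rounding with exact rational rounding, and watch out for overflow in the intermediate products.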
Here's an alternative solution which does not involve caseworking on $$$\lVert q \rVert$$$ at all and is almost polynomial time.
Let $$$g = \operatorname{right-gcd}_i(a_i)$$$.
Consider any valid solution $$$a_i = q v_i \bar{q}$$$. Then, we can make a few observations:
It turns out that these conditions are necessary and sufficient when $$$g$$$ is primitive (as in this problem). The proof of this involves the observation that the $$$a_i$$$ are pure-imaginary, so if $$$h$$$ right-divides $$$a_i$$$, then $$$\bar{h}$$$ left-divides $$$\bar{a_i} = -a_i$$$, as well as some analysis about primitive Hurwitz quaternions.
Thus, the solution is as follows:
For a Hurwitz quaternion $$$q$$$, let $$$\operatorname{gid}(q)$$$ be the greatest integer divisor of $$$q$$$, i.e. the GCD of the coordinates of $$$q$$$. This is easy to evaluate.
Let $$$G$$$ be the maximum norm of the $$$\lVert a_i \rVert$$$. Then, $$$\operatorname{right-gcd}$$$ takes $$$O(\log(G))$$$ time, so the runtime of this algorithm is $$$O(n \log(G) + T_{\operatorname{square-free}}(G))$$$, where $$$T_{\operatorname{square-free}}(n)$$$ is the time to find the square-free component of $$$n$$$. Trial division gives $$$T_{\operatorname{square-free}}(G) = G^{1/3}$$$ runtime, while factoring via Pollard's rho gives $$$T_{\operatorname{square-free}}(G) = G^{1/4}$$$ expected time.
Here's a proof of the solution. It suffices to prove the following:
Claim: [Sufficiency for a single prime] Consider any prime $$$p$$$ and Hurwitz quaternion $$$q$$$ with $$$\lVert q \rVert = p$$$. Also, consider any primitive quaternion $$$g$$$ such that $$$q$$$ right-divides $$$g$$$, and any pure-imaginary Hurwitz quaternion $$$a$$$ such that $$$p^2 \mid \bar{g} a g$$$ and $$$p^2 \mid \lVert a \rVert$$$. Then, $$$q^{-1} a \bar{q}^{-1}$$$ is a Hurwitz quaternion, i.e. $$$p^2 \mid \bar{q} a q$$$.
This claim suffices because we can repeatedly apply the claim and divide out $$$q$$$, i.e. replace $$$a' = q^{-1} a \bar{q}^{-1}$$$ and $$$g' = q^{-1} g$$$. It's easy to verify that the original conditions indeed shrink the maximum possible norm by a factor of $$$p = \lVert q \rVert$$$, so this iteration works.
To prove this claim, we will use the following powerful lemma about prime factorizations of Hurwitz quaternions. (This is Lemma 4.2 from Factorization of Hurwitz Quaternions.)
Lemma: Consider a prime $$$p$$$ and Hurwitz quaternions $$$a$$$, $$$b$$$, $$$c$$$ such that $$$p = \lVert b \rVert$$$. Then, $$$p \mid abc$$$ implies $$$p \mid ab$$$ or $$$p \mid bc$$$ (the converse is obviously true too).
Proof of lemma: Consider any $$$abc$$$ such that $$$p \mid abc$$$. Let $$$g$$$ be the right-gcd of $$$\bar{b}$$$ and $$$c$$$, with $$$\bar{b} = gx$$$ and $$$c = gy$$$. Then, $$$\lVert g \rVert \mid \lVert b \rVert = p$$$.
If $$$\lVert g \rVert = p$$$, then $$$\bar{b} = gx$$$ implies $$$x$$$ is a unit. Thus, we can write $$$bc = bgy = b(\bar{b}x^{-1})y = p \cdot x^{-1}y$$$, so $$$p \mid bc$$$.
Otherwise if $$$\lVert g \rVert = 1$$$, by the extended Euclidean algorithm, we can write $$$\bar{b}s + ct = g$$$ for some $$$s$$$ and $$$t$$$. Thus, $$$ab(\bar{b}s + ct) = abg \implies \lVert b \rVert as + abc t = abg$$$. $$$\lVert b \rVert = p$$$, and $$$p \mid abc$$$, so $$$p \mid abg$$$. $$$g$$$ is a unit, so $$$p \mid ab$$$, as desired.
Thus, we have proven the lemma.
In essence, if you have $$$p \mid h_1h_2h_3\cdots$$$, the part that "generates" the factor of $$$p$$$ cannot jump over any prime factors of norm $$$p$$$; it must come from two "adjacent" ones.
We're now ready to prove our original claim.
Proof of claim: First, let $$$g = qh$$$. Note that $$$p^2 \mid \lVert a \rVert$$$. We will casework on whether $$$p \mid a$$$ or not.
Case 1: $$$p \mid a$$$. Then, note that $$$p^2 \mid \bar{g} a g \implies p \mid \bar{g}\frac{a}{p}g = \bar{h} \bar{q} \frac{a}{p} q h$$$. Then, $$$\bar{h}\bar{q}$$$ is primitive and $$$\lVert \bar{q} \rVert = p$$$, so by the lemma, $$$p \mid \bar{q} \frac{a}{p} q h$$$. Also, $$$qh$$$ is primitive and $$$\lVert q \rVert = p$$$, so applying the lemma again, $$$p \mid \bar{q} \frac{a}{p} q$$$. Thus, $$$p^2 \mid \bar{q} a q$$$ and $$$q^{-1} a \bar{q}^{-1}$$$ is Hurwitz as desired.
Case 2: $$$p \not\mid a$$$. Then, $$$p^2 \mid \lVert a \rVert$$$, so we can write $$$a = xyz$$$ where $$$\lVert x \rVert = \lVert z \rVert = p$$$.
Now, $$$q$$$ right-divides $$$a$$$, so $$$p \mid \bar{q} a = \bar{q} xyz$$$. Once again, by the lemma, $$$\lVert x \rVert = p$$$ and $$$p \not\mid xyz$$$, so $$$p \mid \bar{q} x$$$.
Also, $$$a$$$ is pure imaginary and $$$q$$$ right-divides $$$a$$$, so $$$\bar{q}$$$ left-divides $$$\bar{a} = -a$$$, so $$$\bar{q}$$$ left-divides $$$a$$$. Thus, we have $$$p \mid a \bar{\bar{q}} = xyz q$$$. $$$\lVert z \rVert = p$$$, and $$$p \not\mid xyz$$$, so $$$p \mid zq$$$.
Thus, we can write $$$\bar{q} a q = \bar{q}xyzq = p^2 \cdot \frac{\bar{q}x}{p} y \frac{zq}{p}$$$, so $$$p^2 \mid \bar{q} a q$$$ and $$$q^{-1} a \bar{q}^{-1}$$$ is Hurwitz as desired.
Thus, our proof is complete.
How to define the right-gcd of quaternion? I was confused by it
https://codeforces.me/blog/entry/79731?#comment-658311 has a definition.
By the way it's my meme.
I did not understand it
Mojang added many shit mobs/etc to Minecraft (especially bees) and ruined it. I'm pointing out that Antontrygub's profile is a bee, and . . .
what happened to "doomsday" and tester comments :O
tricky Questions!!
Problem C — summing all values of consecutive differences and checking if it's greater than zero passes the system tests. I don't know whether it's based on some proof and is correct, or it's wrong and bad system tests let it pass. Any help will be appreciated!
link to solution
Not wrong. $$$a_2 - a_1 + a_3 - a_2 + a_4 - a_3 + ... + a_n - a_{n - 1} = a_n - a_1$$$. $$$a_n - a_1 > 0 \leftrightarrow a_n > a_1$$$, which is the same as the tutorial.
thanks
I'm just curious how people arrive at solutions like that when they don't have an idea whether it is correct or not.
Just observed it from the test cases, don't know how that happened. I am just starting with CP.
That's understandable. Although one must be careful with sample test cases. In many problems they are intentionally misleading, as I realized, especially in geometry problems.
Thanks for the advice. I prefer doing things after proving them most of the time, but sometimes I can't prove it and get a solution; then I go for observation.
Sometimes it feels like someone telepathically gives me random hints.
Or Maybe you are John Wick of Coding. #QuesKiller
Your name is quite original.
Because they get encouraged by seeing "Accepted" or "Pretests passed" and also positive rating changes.
But it is true that the people who want to learn and to find out the proofs of problems don't always do that (i.e. submit a solution without proof).
Yeah but how do they even come up with that? Like does your brain just generate random algorithms and you type them up?
https://codeforces.me/contest/1375/submission/85969397 can u please tell me what's going wrong in this??
This permutation can be reduced to a single element, but your code returns "NO". Check your code against it and maybe that will resolve the issue.
1
6
1 2 3 6 5 4
Extremely liked problems C & F, thanks a lot for such a nice trolling!
Could you give an intuition for how you came up with a solution for F?
I solved it myself during practice :P, so I can:
First of all, I noticed that the sample contained "First" as the optimal answer. If the sample is correct, this means that with all possibilities, First can force-win this game, regardless of the moves of the interactor (second player). So I took it as a working hypothesis that the First Player can always win.
It is immediately obvious that just before the winning move of the first player, the sequence looks like $$$a$$$, $$$a+d$$$, $$$a+2d$$$ and the number that the first player outputs is $$$d$$$, and the second player has just updated the pile with $$$a+2d$$$ stones, so he can't do it again. With some tinkering (and a bit of luck), you can find out that the magic number is $$$2c-(a+b)$$$ :)
Thanks for the interesting round and editorial!!! Now I can solve problems E and F.
This is my solution for D, which fails on test 2 because my resultant array is 0 1 2 3 4 5 7 and not 0 1 2 3 4 5 6. If there is any other mistake please inform me... thanks... pretty good contest and kudos to the problem setters.
0653443 -> 0153443 -> 0123443 -> 0123453 -> 0123463 -> 0123465
[deleted]
What's the difference with the solution described in the editorial?
[deleted]
Why so many wrong submissions for C, even by red coders?
It was too simple for a C problem. (ig)
NO, it wasn't too simple. The solution might be 2 lines of code, where you don't even need to store the array values and require only the first and last values to answer the question, but it was tricky in a way that even some of the red coders submitted wrong submissions.
I meant the implementation of it, not the problem itself. Lol i couldn't solve it during contest either.
Thank you very much for this round :p
Nearly 21000 participants registered and only about 8000 solved problem A, so either participants are becoming so conscious about their rating that they don't submit if they didn't solve problem A fast, or the problems were that tough. But I don't think A and B were tough, so it's actually the former.
Wow, that's a much smaller ratio than I expected.
The other factor is that registration is open for days ahead of time, so frequently people (myself included) will Register "just in case" because it's much better to register & not participate than to want to participate but forget to register.
Another shirt color change, nice!!
It also depends on the initial approach: if it goes in the wrong direction initially, you end up taking a lot of time on a question. Like, I thought of a lot of nonsense in C, but the solution is simple. Still, the way I solved C is more time consuming and lengthy compared to the intended solution.
Yes! Div2AB haters don't understand that keeping the first problem easy (or seemingly easy) is key for the right rating correlation.
Another way to look at problem C:
If the largest element in the current array is at position 1, or the smallest is at position n, then it is not possible.
The final logic might be the same, but this one is easy to visualise or interpret: if you have the smallest element at the rightmost spot or the largest at the leftmost spot, you can never get the final array.
Now the largest is n and the smallest is 1; start from either the largest or the smallest and check the above condition.
https://codeforces.me/contest/1375/submission/85979406
Here
The final logic might be the same, but this one is easy to visualise or interpret: if you have the smallest element at the rightmost spot or the largest at the leftmost spot, you can never get the final array.
cnt stores the index of each number (from 1 to n).
Now we know that the numbers go from 1 to n, so we iterate from 1 to n (small to large).
During every iteration, if the current number is at the rightmost spot, then you cannot convert this array; else continue.
I did C in different way:
Anyone else who used this logic??
Here I think.
Ohh yes, thanks for explaining.
It's exactly the same thing: notice that every element cancels out except for the first and the last. But interesting nonetheless.
what is the significance of this line in question B:"exactly k of its neighboring cells have a number greater than 0 written on them."
It means that if a cell has a number k then that implies that exactly k neighbors of that cell are non zero.
It says that out of 4 possible neighbours, there should be only k neighbours which have a number > 0.
Keep making announcements like this; they keep cowards away.
Well, I see that those testers' comments were not lying, though.
A crazy $$$\textbf{D\&C}$$$ solution for A:
What does D&C mean?
Divide and Conquer, I solved A like that after 1 hour of struggling . . .
from which account?
I'm a tester.
Now I can feel how this Round Feedback was compiled from the opinions of testers :p
Thanks for such a wonderful contest! Thank_u
drunk coordinator
Thanks anthoctryhardO_o for this beautiful round!!
Welcome to the new era of AdHocForces
https://codeforces.me/contest/1375/submission/86002430 I am not able to figure out where my logic is going wrong for problem C. Logic: I have stored the first element of every increasing continuous sub-array (as it will be replaced by it only when we apply operations) in a list (say A), except the last element of the array, and then check whether any element of list A is greater than the last element; if not, then the answer is YES, otherwise NO. Please help, anyone! Thank you.
3 6 5 2 1 4
Try this permutation. Answer should be YES while yours give NO.
For problem C, I used a stack, in which if a smaller element is there I will not push a larger element, and if a smaller element comes I push it and try to empty the stack using the larger numbers coming later. Can anyone tell where it fails? My code: 85979051
Using a stack misses some cases; this problem can be handled in an ad hoc way, try this: Link
It's ok to use a stack to solve the problem, but you need a different strategy .
Don't try to empty the whole stack, keep the first one there instead.
85954948
Typical adamant problem)
Hi guys, as a beginner I face a lot of problems understanding solutions. I always thought there could be some creative way of easily explaining a solution, and then I realized the best way is a visual explanation.
So I have prepared visual explanations and solutions for problems A, B and C, as I was able to get them under control.
Links: -
Problem A
Problem B
Problem C
I have just started with this. I hope it will be helpful to a lot of beginners, and I will also improve my explanation skills as we proceed on this new journey.
Hope it helps :) :) Waiting for valuable feedback :) :)
How can the number of neighbours be 0 in B, bro?
In the question it is given that if a cell contains 0 then there is no constraint on it; it will not make the matrix not good. In this context we can say that a cell with 0 can have 0 nonzero neighbours.
https://codeforces.me/contest/1375/submission/86019209
Can you check where i am wrong?
You are setting the value of f1 wrong. We need to check each of the 3 cases separately, i.e. corner > 2, side > 3 and other > 4, while setting f1. You can check the video, in which we check all 3 cases separately; then your code will also be accepted.
yeah got it. Thanks
Wow I got tricked on G :(
I didn't think G was that easy (in fact I thought of this solution 30 secs after I read it), but then instead thought of some crazy DP, and then gave up and didn't really try it...
What is your approach??
It was supposed to be a simple DFS, but I thought it was some hardcore shit DP.
These days I dream of codeforces' contest with problems based on algorithms and data structures. DREAMS.
How to solve D for minimum number of operations?
Your text to link here...
can anyone tell me why it is not accepting
Your link doesn't work, try to use a submission link.
Appreciate the hard work that goes into making a problem. But it would be really great to see more problems related to data structures and algorithms. Right now we mostly find ad-hoc or greedy problems. Most people solve up to C or D, and it would be really interesting to have DSA problems in this range too. I hope you guys agree with this, and that the relevant people see this comment so we can see a change in future contests. I am sorry if my comment is inappropriate.
It depends on your thought process, but I solved C and D with data structures.
I agree with you, but it's rare to see problems that can be solved using only data structures or with the help of any particular algorithm. Using a data structure to solve today's C is overkill.
I think there's many ways to arrive at C's solution, but one way is to simulate with stack, and then one may observe that first element needs to be less than last element.
So actually, the simpler solution is still inspired by data structures. Depending on the person, it may be faster to code the stack solution as opposed to thinking any longer about making any simplifying observations, especially when the stack simulation is obviously correct in one's mind.
Here is my approach for Problem C(using stack) : https://youtu.be/xOFkewgFhjA
Let me know if you find this helpful :)
Understood. Thanks for the video!
I have not seen anyone printing the matrix for B here, so this is what it looks like:
If any part of the input matrix is larger than the above matrix, the result is "NO".
problem C, sum of differences of all consecutive numbers in the array equals (a[n]-a[1]).
[problem:C] 85966335
Can anyone believe today's problem C was just one if-else!!! :D
if(a[0]<a[n-1]) cout << "YES\n"; else cout << "NO\n"; The fuck!!
Yes. That is what a lot of people did.
These days there is a trend of Adhoc problems on codeforces.
We need data structures and algorithms based problems.
Here's another (almost equivalent) solution for C.
It is easy to see that $$$a_1<a_n$$$ is a necessary condition. If $$$a_1>a_n$$$, no matter what we do, for the last two elements (let them be $$$a_i$$$ and $$$a_j$$$), we must have $$$a_1 \leq a_i$$$ and $$$a_j \leq a_n$$$, so $$$a_i>a_j$$$, which is not reducible. To prove its sufficiency, I devised the following algorithm:
Take the smallest element in the permutation. Obviously, it will be $$$1$$$ initially. Let its position in the array be $$$i$$$ (note that it cannot be $$$n$$$ because then $$$a_1<a_n$$$ would not hold). I will use this element to delete every element to the right of $$$i$$$ except $$$a_n$$$. I can do this because by definition $$$a_i$$$ is the smallest so $$$a_i<a_{i+1}$$$ will always hold. Now my array is $$$a_1, a_2,... , a_{i-1}, a_i, a_n$$$. Here I will use $$$a_i<a_n$$$ to delete $$$a_i$$$. So, my array is $$$a_1, a_2,... , a_{i-1}, a_n$$$. Repeat this process until a single element remains.
Can anyone please tell me why this approach is wrong or if the code is wrong. Please tell me I have no clue whatsoever regarding why I am passing the Test Case 1 and not 2. https://codeforces.me/contest/1375/submission/85968254 is the solution link.
86013495 You forgot to check whether $$$a[i][j] > 4$$$
85986603
I am curious about the correctness of my submission in problem D
My approach is to eliminate those duplicate numbers from left to right, and then to apply the algorithm introduced in the editorial, that is, to say that "the number of duplicated numbers" $$$+$$$ "the number of cycles" $$$-$$$ "the number of fixed points" $$$\leq n$$$
The approach might sound ridiculous at first glance, but I can't find any counter case. Even more surprisingly( or obviously?), there are a lot of cases that my algorithm needs exactly $$$2n$$$ operations.
For example, any $$$a_1, ..., a_n$$$ has the form $$$a_1 = a_2 = ... = a_n = n - 1$$$ will achieve the equality, and this is not the only case the equality holds.
For instance, consider the following case where $$$n = 7$$$,
$$$6, 4, 5, 5, 4, 6, 6 \Rightarrow 6, 4, 5, 0, 1, 2, 3$$$ has $$$3$$$ cycles and $$$4$$$ repeated numbers.
So, the problem is: is there any proof of this amazing fact? Or is this just simply wrong?
I don't count cycles of length 1
Let's color duplicate numbers with red, and others with blue. After removing duplicates, red numbers will form an increasing sequence, because after each step MEX increases.
Take one cycle. Let's prove that it cannot consist only of red numbers. Suppose it is. Let $$$i$$$ be the index of the last number in the cycle. Since it is the last number, $$$a[i] < i$$$. Since red numbers form an increasing sequence, and $$$a[i]$$$ and $$$i$$$ are both red, then $$$a[a[i]] < a[i]$$$. And $$$a[a[a[i]]] < a[a[i]]$$$, and so on. So, the first number in the cycle points to the left, which can't be true. Thus, each cycle has at least one blue number in it.
\begin{gather*} n - N_{red} = N_{blue} \geqslant N_{cycles}\\ n \geqslant N_{red} + N_{cycles}\\ N_{red} = N_{duplicates} \Rightarrow N_{duplicates} + N_{cycles} \leqslant n \end{gather*}
Wow, I made it too complicated.
I have never ever seen a problem set like this.
When I saw the tester comments I thought they were joking but now I know they weren't (
The questions are really tricky. I was not able to solve a single problem in the first hour of the contest. After thinking a lot about the problems, I solved the first four by the end of the contest. The logic needed to solve the problems is really tricky and unique.
A simpler explanation for problem E: We need to apply all inversion swaps, so we must keep the other inversions when swapping a pair. We can achieve this by swapping the pair whose corresponding array elements have the smallest difference.
How to efficiently find the pair? See tourist's submission: 85960652. In this submission, it first squeezes the array values to 0..n-1, then only swaps the pair whose corresponding difference is 1. Such a pair always exists.
can you please explain a little bit more?
e.g. In [3, 1, 2], you should swap (3, 2) first. If you swap (3, 1) first, then you get [1, 3, 2]; (1, 2) is not an inversion, while (3, 2) in the original array is an inversion. It is invalid.
e.g. In [3, 2, 1], you can swap (3, 2) first or (2, 1) first. But you cannot swap (3, 1) first.
If we can swap the pair whose corresponding array elements have smallest difference, then we can keep all other inversions after swapping a pair. More precisely, given k and a[k], for all i < k and a[i] > a[k], we should swap the smallest a[i] with a[k] first.
Why is swapping the pair whose corresponding difference is 1 the answer? Please explain a little bit more.
The way I thought about problem E:
The main idea is that, since we must go through all inversions, we should start with the ones that put the last elements of the array in their correct positions, and then going on until the first element is in its place.
But since for the same position we may have to do more than one swap that moves elements there, in which order should we do them? The idea is to move in increasing order of a[i]. This way, after we finish all the swaps that affect a position, we guarantee that we have the largest number there. Since we are moving through the array from the largest to the smallest position, this is correct.
The following code does exactly that:
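(A minimal illustrative sketch of that idea, assuming one test case and 1-based input as in the statement; this is my own reconstruction and may differ from the code being referred to. Note that the pairs can be produced directly, without simulating the swaps.)

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);

    // For every position k from n down to 1, list the inversions (i, k) of the
    // ORIGINAL array in increasing order of a[i] (ties broken by index, which is
    // the same as turning a into a permutation first).
    vector<pair<int, int>> ops;
    for (int k = n; k >= 1; k--) {
        vector<int> idx;
        for (int i = 1; i < k; i++)
            if (a[i] > a[k]) idx.push_back(i);
        sort(idx.begin(), idx.end(), [&](int i, int j) {
            return make_pair(a[i], i) < make_pair(a[j], j);
        });
        for (int i : idx) ops.emplace_back(i, k);
    }

    printf("%d\n", (int)ops.size());
    for (auto& p : ops) printf("%d %d\n", p.first, p.second);
}
```

For each fixed k this is exactly the cyclic-shift argument from the editorial: after those swaps the maximum of the prefix ends up at position k, and the relative order of the first k-1 elements is unchanged.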
I understood it. You keep all inversions by moving in increasing order of a[i]. Your idea is naturally described. Great.
My idea is that if we can apply all inversion swaps, then the array should be non-decreasing, because its number of inversion pairs becomes zero. So I struggled to keep the other inversions when swapping a pair.
Problem D can be solved in O(n). We can calculate the mex of the array only at the beginning, and try to place the elements 0 to n-1 from left to right in the array, while also storing the frequency of every number from 1 to n. Then, depending on which element you are replacing, you have two choices: the number you replaced was smaller than the previous mex, or greater than the previous mex. If the number you replaced was less than the current mex and its frequency was 1, then the new mex will definitely be the replaced number. In this way we have to traverse the whole array only once for calculating the mex. I have not solved it this way, though.
I checked your submissions for problem D. None of them got accepted. Did you solve D using your idea???
Sorry but actually not
For Div2D, I thought of a different approach which sounds logically correct to me, but it gives WA on tc2. Here is the submission 86000887. If someone could give a test case where this fails that would be helpful.
Algorithm Description:

1) Make all elements unique. This can be done by performing the MEX operation on each of the first n-1 elements, taking n-1 operations.

2) Make the last element equal to n. If there is an element already equal to n, perform MEX at that index, and then perform MEX on the last index. If there is no element equal to n, perform MEX on the last index. This guarantees that the elements are unique and the last element is equal to n in at most n+1 operations.

3) Note we have n-1 spots but n numbers to choose from (0 to n-1 inclusive), since the last element has already been fixed. Therefore, at least one number between 0 and n-1 will be excluded from the array. Let us try all possibilities of exclusion. Suppose we are excluding the element 2 from our array. Then, our array needs to look like: 0 1 3 4 ... n. We will find the MEX of the current array, which must be somewhere between 0 and n-1 inclusive. The MEX will fill the corresponding index of the array above, indexing the array from 0 but decrementing the index if MEX is > 2. If MEX is exactly 2, then we will stop the simulation early and check for validity. Otherwise, we perform n-1 operations and then check for validity, for a total of 2n operations. I claim that some exclusion iteration will always produce a valid answer.

I did exactly the same, but I struggled to get AC.
Lines from the editorial
It was not necessary under the given constraints, but one can notice that if mex=n, the current array is a permutation, and that solving a cycle will take size+1 operations. Hence, the described algorithm use in fact at most 1.5n operations, the worst case being [1,0,3,2,5,4,…] when there are a lot of 2-cycles.
How does the author directly arrive at the conclusion that the above algorithm will take at most 1.5n operations? Can someone elaborate?
Consider the case when all numbers from 0 to n-1 are present but not in sorted order, and moreover the array is a derangement (no number is at its appropriate position). If you follow the algorithm in the editorial, it will take you 2 moves to place the first number, then 1 move to place the second number, 2 moves for the third, 1 move for the fourth, and so on, if you are placing the numbers greedily in increasing order. So 1 move for n/2 elements and 2 moves for n/2 elements, which is 1.5n. I think this is how he arrived at the conclusion: solving the worst case with the greedy (best) approach.
Another approach for G:
We can have only one node as the centre node in the star graph. Let's find for each vertex the minimum cost of building the star graph if it was the centre node.
Let's initially root the tree at node $$$1$$$ and define $$$dp[u]$$$ the minimum cost to convert the subtree of node $$$u$$$ to star graph. Therefore, $$$dp[u]$$$ can be calculated as:
here $$$c[u]$$$ denotes children of node $$$u$$$ and $$$c[c[u]]$$$ means grandchildren of node $$$u$$$. Now, $$$dp[1]$$$ stores the minimum cost of building a star graph for node $$$1$$$. Now, we can simply do a dfs on the tree and on the go update the $$$dp$$$ values of the adjacent nodes.
Submission
Is there any way to recompute MEX in less than O(n) time?? Problem D.
You can use unordered_map and set to find mex. I have solved it using this method only. You can see my solution Solution.
Take the boolean array that tells you which numbers are inside the current array, and build a Segment Tree with sum queries over it.
Then you can binary search for the current MEX in $$$O(\log n)$$$ (start at the root node, and whenever the left subtree is not full, go to that subtree). And you also make updates in $$$O(\log n)$$$.
But it's not required to do so in problem D. Just a simple $$$O(n)$$$ approach is completely fine for that one.
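A sketch of that segment-tree idea (illustrative code of mine, assuming values in the range 0..n as in problem D; the names are made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// tree[v] counts how many distinct values in this node's range are currently present.
struct MexTree {
    int n;                 // values 0..n are tracked
    vector<int> cnt;       // multiplicity of every value
    vector<int> tree;

    MexTree(int n) : n(n), cnt(n + 1, 0), tree(4 * (n + 1), 0) {}

    void update(int v, int lo, int hi, int pos, int delta) {
        if (lo == hi) { tree[v] += delta; return; }
        int mid = (lo + hi) / 2;
        if (pos <= mid) update(2 * v, lo, mid, pos, delta);
        else            update(2 * v + 1, mid + 1, hi, pos, delta);
        tree[v] = tree[2 * v] + tree[2 * v + 1];
    }
    void add(int x)    { if (x <= n && ++cnt[x] == 1) update(1, 0, n, x, +1); }
    void remove(int x) { if (x <= n && --cnt[x] == 0) update(1, 0, n, x, -1); }

    // Binary search down the tree: go left whenever the left child is not full.
    int mex() {
        int v = 1, lo = 0, hi = n;
        if (tree[1] == n + 1) return n + 1;   // cannot happen with n elements <= n
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (tree[2 * v] < mid - lo + 1) { v = 2 * v;     hi = mid;     }
            else                            { v = 2 * v + 1; lo = mid + 1; }
        }
        return lo;
    }
};
```

For D you would add() all the initial values once, and on each operation remove() the overwritten value, add() the new one, and read mex() in $$$O(\log n)$$$.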
Looks like I'm cracking the nut with a sledgehammer. @pprockys solution is actually a lot easier.
To explain it: just maintain a set with all numbers from 1 to $$$n+1$$$ that are currently NOT in the array. Then the MEX will simply be the smallest one of these, e.g. *set.begin(). This gives you the MEX in $$$O(1)$$$ time, and updates are also just $$$O(\log n)$$$.

Can someone explain the problem E editorial using an example, say 3, 2, 2? According to the editorial we swap $$$(1,3)$$$, thus we get 2, 2, 3. $$$a_1>a_2$$$ but $$$b_1=b_2$$$.
How should one generally approach a problem like C? Some key observations were needed, and proving that the observations work requires a lot of time. I usually get lost in a dead end in this type of problem.
You can check some edge cases. C is obviously about the smallest/biggest element. What if the smallest/biggest element is leftmost? What if the smallest/biggest element is rightmost?
Even if you do not find the simplest solution, that often leads to at least some solution.
For G, would the solution still be the same if in each operation we only delete the edge connecting a and b and add the edge connecting a and c?
https://codeforces.me/contest/1375/submission/86033992 Getting runtime error, don't know why!
$$$0≤a_i≤n$$$, so the size of "temp" should be n+1
Video for D: Replace MEX
Can someone explain or give an idea on how to solve the problem E Inversion Swapsort? I don't quite understand the editorial above.
Looking forward to AC4
I thought of the tutorial's method for D during the contest, but it wasn't enough.
I didn't know what to do if the mex is exactly n. So, I just went the other way and deviated further from the right method lol.
And now I know that if the mex is n, we can just change any unfixed point, what a nice and meaningful way to handle it! And I also learned how to find the mex in $$$\log(n)$$$ time. Nice round!
antontrygubO_o Could you explain general way how to aproach problems like F? How did you come up with the solution?
You can look at it backwards. The first player wins if there are two equal elements; knowing that, you can think of the triplets (a,b,c) which can lead to the first player winning in one turn. After understanding that, go one step further to 2 turns, and so on. In this particular case you'll reach the conclusion that the first player can win regardless of the starting triplet in 3 turns. Not all problems like this will have the same kind of solution; for some of them you may realize that all the winning positions for the first player share some common property, and there can also be problems where this method will not work at all. Hope this helps.
Here are some observations I found helpful for F.
Consider the last move (the one that player 2 makes that leads to two piles having an equal quantity of stones). This move must have occurred on one of the two smaller piles; placing more stones on the largest pile will never make two piles equal in size. So if we have piles $$$a$$$, $$$b$$$, $$$c$$$, with $$$a<b<c$$$, the last move added $$$x$$$ stones such that $$$a+x=b$$$ or $$$b+x=c$$$. If you add $$$x$$$ to $$$c$$$ then clearly none of the piles are equal in size.
This also implies that $$$b-a=c-b$$$; in other words, $$$a, b, c$$$ form an arithmetic sequence right before the losing turn. Player 2 can't place the stones on $$$c$$$, and placing $$$x=b-a=c-b$$$ stones in either the smallest or middle pile both force a loss.
Our challenge as player 1 reduces down to making the pile sizes an arithmetic sequence (although not necessarily in order, like the piles could be $$$14, 26, 20$$$ and that's still fine) such that player 2 is prohibited from placing stones on the largest pile.
So let's assume we have some initial $$$a<b<c$$$ and pick $$$x$$$ such that whatever pile we add $$$x$$$ to, it becomes the biggest pile (thus prohibiting player 2 from putting more stones on it the next turn). So we want $$$b,c,a+x$$$ to be an arithmetic sequence, which gives us $$$x=2c-b-a$$$. It turns out this would also make $$$a,c,b+x$$$ an arithmetic sequence. The only case this doesn't solve is $$$a,b,c+x$$$ but as you can see in the official solution we can repeat this procedure and be done.
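For concreteness, the arithmetic behind those last two sentences (just expanding them): with $$$x = 2c - a - b$$$ and $$$a < b < c$$$,

$$$$$$(b,\; c,\; a + x) = (b,\; c,\; 2c - b), \qquad (a,\; c,\; b + x) = (a,\; c,\; 2c - a),$$$$$$

both of which are arithmetic sequences (common differences $$$c - b$$$ and $$$c - a$$$), and in both cases the just-increased pile ($$$2c - b$$$ or $$$2c - a$$$) is strictly larger than $$$c$$$, so it is the largest pile, matching the setup described above.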
Granted, this whole approach glosses over the problem of figuring out player 1 has a winning strategy in the first place. The fact that you have 1000 turns and you can pick, for each test case, between player 1 and 2 makes things even trickier...it scared me off of the problem during the contest because I figured the solution would be much trickier (I guess same could be said of most problems this entire contest)
Very well explained
Can someone explain why a simple bubble sort for problem E gives wrong answer? It will only give $$$u, v$$$ with $$$a_u > a_v$$$.
Because the first two conditions of the question are not met; only the third condition is satisfied.
For problem E, could anybody tell me why it's wrong when I just simply swap the biggest number each time? 86083757
I should read F and G first Hmmmmm :((
Problem H segment tree details:
The basic idea is that instead of splitting the nodes by index ranges, we split the nodes by the array value ranges. This allows us to merge child node results with a single operation.
The values stored in each node of the segment tree are the indices whose array values are within some range $$$[lb, rb]$$$ and the map of subset IDs for each possible subsequence of the stored indices. These subsequences can be represented as a pair consisting of the leftmost index and rightmost index of the subsequence. For example, with array $$$3,7,1,5,2,4,6$$$ and range $$$[4,7]$$$, the node stores indices $$$2,4,6,7$$$. One possible subsequence pair would be $$$(4, 7)$$$ representing index sequence $$$4,6,7$$$ and subsequence $$$5,4,6$$$ in the array.
When querying on the range $$$[l_i, r_i]$$$ for some node, we first get the longest subsequence that is within the range. This can be done using lower_bound() and upper_bound() on the stored indices. Then, if that subsequence already has a subset ID registered in the map, we return it. Otherwise, we query the left and right nodes without shrinking the $$$[l_i, r_i]$$$ range (since the nodes are split by array values).

If we were successfully able to get a subset ID from both the left and right nodes, we merge the two into a new subset. This is possible because all the left node array values are less than the right node array values. Once merged, we register the new subset ID in the node's subsequence map. If we only get one subset ID from either the left or right node, we register that instead.
Overall time complexity: $$$O(n \sqrt{q} \log n)$$$ (see proof in the replies)
My submission
The solution is inspiring! But the complexity isn't as low as O(q * log^2 n). In worst case, there could be O(min(n^2, q*n)) index sub-ranges over the course, each costing O(log^2 n) look-ups down the tree.
Thus I think the time should be O(min(n^2 * log^2 n, q*n * log^2 n)). It's just that getting into that worst case is really hard either by random or manual input, so the answer is low enough to pass the 2.2*10^6 limit.
Do you mean $$$min(n^2, q)$$$ ranges? I'm not sure where the extra factor of $$$n$$$ comes from. But I do agree the runtime would be $$$O(n \log n + min(n^2, q) * \log^2 n)$$$ without that extra factor.
The $$$2.2*10^6$$$ limit is for the number of sets, which the editorial proves that it's bounded by $$$2n\sqrt{q}$$$. The runtime isn't related since every query could generate more than one set and the sets take O(1) space.
No I meant time is q*n*logn. Sorry the last sentence of my last reply is wrong. That's for space not related to time. But runtime wise, your qLog^2n is even smaller than the output size n*sqrt(q), e.g. if q=1E4 and n=1E6. That couldn't happen, right?
Ah right time can't be faster than space, my bad :)
Though I'm still confused how the runtime could be $$$O(n q \log^2 n)$$$. Do you mind explaining it further?
Nevermind, I think I have a proof for a more correct runtime of $$$O(n \sqrt{q} \log n)$$$.
Like the editorial, we have $$$n = 2^N$$$ and $$$q = 2^Q$$$. The total runtime would be:
The $$$2^{2i}$$$ term is different from the set count because a full node could still be queried. However, once the parent node is full, then the queries will never reach the child. The extra log factor comes from the index lookups. For $$$i \le Q/2$$$:
And for $$$i > Q/2$$$:
Combine the two:
Perfect, and brilliant! I tried to prove it but didn't succeed :p This now explains why its a fast solution (and BTW tourist did it too 85990887, not sure if he proved it in his mind :D ).
Interestingly, I think this kind of resonates with my memory that in general $$$\sqrt n$$$-decomposition related problems can be solved by binary division algorithms too, like the classic range add-update-query problem which can be solved via both sqrt or segment trees. Not sure if there is a deeper reason behind that :)
Can anyone explain problem A to me, and why the approach in the tutorial is true?
Notice that $$$a_{i + 1} - a_i \ge 0$$$ is equivalent to $$$a_{i + 1} \ge a_i$$$. Similarly $$$a_{i + 1} - a_i \le 0$$$ is equivalent to $$$a_{i + 1} \le a_i$$$. Flip the signs in such a way that $$$a_i \ge 0$$$ for odd $$$i$$$, while $$$a_i \le 0$$$ for even $$$i$$$. Then every difference $$$a_{i+1} - a_i$$$ with even $$$i$$$ is $$$\ge 0$$$ and every difference with odd $$$i$$$ is $$$\le 0$$$, giving at least $$$\frac{n - 1}{2}$$$ of each, as desired.
I just pasted editorial's explanation, because it perfectly answers to your questions("any one explain me problem A , and approach in tutorial why is true?"). Obviously this is not what you wanted to get, but rather what you asked for. You have read the editorial. You don't understand some part of it. Do you expect professor Charles Xavier to come, read your comment and mind, see which part of the editorial was unclear to you and help you? Do you expect someone to come and explain every microscopic thing which could have possibly not been clear to you? Don't be lazy and have some respect towards people who are going to help you. Spend more time writing your questions. Otherwise you won't get any help.
Sorry, but it's clear from my question that I'm a beginner, and I tried and didn't understand the editorial. Besides that, the editorial is almost just 2 lines, and I asked why the approach is true. What do you need me to specify??!!!
why approach is true
Giving at least $$$\frac{n - 1}{2}$$$ of each, as desired.
what do you need me specify
OK, let me be more concrete. Up to which sentence do you understand the editorial? 'Cause ("don't understood the editorial") when you don't specify which part is unclear to you, I can't think of anything but quoting the editorial.
why are you so angry ???!!!!!! , I have not forced anyone to help me, if you do not like my question you can ignore it and thank you for your generosity !
I am not angry :) Grand_Master369, it's not like I don't like your question. I am quite sure that nobody will answer to it. I am trying to help. So, answer, please, up to which sentence did you understand the editorial? Which sentence was unclear?
ok , i'm sorry again
This branch of comments is too long, already. PM me, I'll explain to you.
Could you please tell me why 3 1 2 -> 1 3 2 -> 1 2 3 is not a correct option in problem E? Thanks for your efforts.
I'd strongly suggest, to read the statement once again very attentively before reading the answer. The statement is very clear and unambiguous. Not to understand it, you must either have extremely low English knowledge or put 0 efforts before flooding the comment section with your quite stupid question or be dumb.
$$$a[] = 3, 1, 2$$$
$$$a[1]>a[2], 1<2$$$
$$$a[1]>a[3], 1<3$$$
Thus, you must swap positions $$$(1, 2)$$$ and $$$(1, 3)$$$.
$$$3 1 2$$$ -> $$$1 3 2$$$ -> $$$1 2 3$$$
First you swap positions $$$(1, 2)$$$, then positions $$$(2, 3)$$$.
hey EmilConst thanks for replying
Given the way enoone responded and the way you are responding, I won't point out what's wrong with you; it's just your humbleness that reflects your own true self. You see, I am just a pupil as of now, and people do make mistakes. So if you don't have the patience to deal with "stupid" questions or dumb people like me, then please don't claim that you want to help as you did in the above comments, because if this is what you call helping, believe me, it's more of an inferiority complex and demotivation that you are providing, rather than helping out. Try to "help" less.
U see i am just a pupil
Being pupil is about CP skills. What I was talking about is problem statement reading skill. You can read more about it in Um_nik's post.To understand the examples of this problem, you just needed to understand the statement. To understand that clear, unambiguous statement, you need basic English knowledge, some time spent reading the statement attentively and some brains. In other words not to understand it, you must either have extremely low English knowledge or put 0 efforts before flooding the comment section with your quite stupid question or be dumb. And I think that you didn't think enough before asking a question.
more of an inferiority complex and demotivation
Are you demotivated, because I told you can't read problem statements?
alright ..... was just messing srry
In case anyone needs a detailed explanation (not a video tutorial) for D: Here
Can someone explain how to solve problem E. I tried to understand the editorial but could not get any idea.
Which part didn't you understand? :( :( :(
Actually, all of it: how are they using a permutation for the proof, etc. I am looking for a simple solution which might be easier to understand, and if there isn't any better solution, then maybe someone could explain this editorial in simpler terms.
Here I add some details to the parts of editorial that I think aren't clear enough. If you still have questions, feel free to ask.
In this problem we are only interested in the relative order of $$$a[1],a[2]...,a[n]$$$, not their exact values. $$$a={2,1,3}$$$, $$$b={7,6,8}$$$, $$$c={5,2,9}$$$ are the same for us($$$a[2]<a[1]<a[3]$$$, $$$b[2]<b[1]<b[3]$$$, $$$c[2]<c[1]<c[3]$$$).
Permutation is easier to think of, because there are no equal elements in it.
After using pairs in the order $$$(pos_{a_n+1},n),(pos_{a_n+2},n),...,(pos_n,n)$$$, relative order of the first $$$n-1$$$ numbers $$$a[1],a[2],...,a[n-1]$$$ doesn't change. $$$a[n]=n$$$, so it is in its place. There are no more pairs including nth element. Taking hold of this three facts, we see that we can just "throw away" nth number and recursively the problem for the first $$$n-1$$$ numbers.
I see. This way we can always achieve the sorted array, but how does one come up with such a solution during a contest? I would never have thought of it this way.
It could be something like this. Let's do some swaps and put the maximal number at the end. Then recursively solve the problem for the first $$$n-1$$$ numbers. This is quite a common idea. Now, how could we do this? If we want to recursively move to the first $$$n-1$$$ numbers, then we must have no more pairs including the $$$n$$$th element (otherwise, how could we ignore the $$$n$$$th element and recursively move to the first $$$n-1$$$ ones). Let's look at an example:
$$$a = {2, 8, 3, 8, 1}$$$
We must do the swaps $$$(1, 5), (2, 5), (3, 5), (4, 5)$$$ in some order, so that $$$8$$$ ends up at the end. Let's do it in some way and see what happens to the first $$$n-1$$$ elements.
$$$swap(1, 5)$$$
$$$swap(2, 5)$$$
$$$swap(3, 5)$$$
$$$swap(4, 5)$$$
$$$b =$$$ {$$$1, 2, 8, 3, 8$$$} $$$=$$$ {$$$a[5], a[1], a[2], a[3], a[4]$$$}
We can see that $$$a[5], a[1], a[2], a[3], a[4]$$$ are cyclically shifted. If you do the swaps on paper, you'll see that $$$a_n, a_{i_1}, a_{i_2}, a_{i_3}, a_{i_4}...$$$ are shifted, where $$$i_1, i_2, i_3, i_4...$$$ are the positions we swapped with $$$a_n$$$:
$$$swap(a_{i_1}, a_n)$$$
$$$swap(a_{i_2}, a_n)$$$
$$$swap(a_{i_3}, a_n)$$$
$$$swap(a_{i_4}, a_n)$$$
$$$...$$$
Now let's return to our $$$b =$$$ {$$$1, 2, 8, 3, 8$$$}. There is one inversion in $$$b$$$, which is $$$(3, 4)$$$, but in the original array $$$a$$$ the only inversion was $$$(2, 3)$$$, so we must swap $$$(2, 3)$$$. Thus recursively moving to the first $$$n-1$$$ numbers will be quite problematic. Let's try doing the swaps in such a way that the inversions remain the same. In this subproblem, we see that the exact values of the elements aren't important; their relative ordering is. Thus, thinking about a permutation may be easier. In short, we must swap numbers in increasing order. It is described in the editorial.
Wow, this is really impressive, by the way , thank you for helping me and putting so much effort in writing this comment
I think I am doing the same thing here 86300926 but getting a WA for 7th test case. Can you help?
There are quite a few ways to debug one's code. Read it very attentively. Put assertions and submit. Write a test generator and a checker, and stress test your solution (get a small counter-test). Your first WA7 submission was 1 hour ago. I am quite sure you didn't try all these options. You'd better go try them out before seeking help in the comments section. Debugging is quite an important skill one shouldn't neglect.
mynk322.0 I added stress testing part to your code here.
Thanks, I am clear with my logic and have implemented it correctly. But I think I am misunderstanding the problem. Giving it another read :)
For question E, why can 3 1 2 -> 1 3 2 -> 1 2 3 not be done for the first test case?
In case of $$$a=[3,1,2]$$$ the pairs that form inversion are $$$(1,2)$$$ and $$$(1,3)$$$. In your second step you swap the 2nd and 3rd values of the array, which is not allowed because $$$(2,3)$$$ does not form an inversion in array $$$a$$$.
The solution for F in the editorial is pretty elegant; it would be pretty useful to know how one should think during a contest in order to come up with that solution.
try the example, and find out why the second player lost.
Did anyone else feel this was one of the best written contests? Also, I personally felt that all the problems, even if some unsolvable for me, are very fun to read and very fun to think about.
a much easier solution to problem B is to set the corners of the grid to 2, the edges of the grid to 3, and all other inner elements to 4. it will always work.
print "NO" only if the element you are trying to change is more than what you are trying to change it to.
use this for reference if you need it: my code
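For reference, a small illustrative sketch of that construction (not the linked code; it computes the neighbour count directly instead of hardcoding 2/3/4, which gives the same values whenever n, m >= 2):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Raise every cell to its number of in-grid neighbours (2 for corners, 3 for edges,
// 4 for inner cells); if some cell already exceeds that, the answer is NO.
int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, m;
        scanf("%d %d", &n, &m);
        vector<vector<int>> a(n, vector<int>(m));
        for (auto& row : a) for (int& x : row) scanf("%d", &x);

        bool ok = true;
        vector<vector<int>> b(n, vector<int>(m));
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                int deg = (i > 0) + (i + 1 < n) + (j > 0) + (j + 1 < m);
                b[i][j] = deg;
                if (a[i][j] > deg) ok = false;   // we can only increase cells
            }

        if (!ok) { puts("NO"); continue; }
        puts("YES");
        for (int i = 0; i < n; i++, puts(""))
            for (int j = 0; j < m; j++) printf("%d ", b[i][j]);
    }
}
```

Every cell then has exactly as many nonzero neighbours as its value (all neighbours are at least 2), so the resulting matrix is good.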
That's the same solution as the one described in the editorial.
Can someone tell me why I am getting wrong in problem E in the 7th test case? 86297939 .
Your text to link here... I am getting WA for Problem C. Plz help
Why do you think it should get AC? Have you proven your idea? Your idea is very similar to the one in the editorial, but it's incomplete. Read the editorial.
Problem E — Editorial Solution Coded in a simple and elegant way
https://codeforces.me/contest/1375/submission/86442236
Solving problem E requires two observations:
We have to put the largest element at position n, the second largest element at position n-1.
Since we have to include all inversions in the list, we cannot simply swap it to the last position, as we would lose inversions / list wrong inversions.
In order to prevent losing inversions when placing the largest element at n, we list all inversions with n, i.e. all pairs (i, a[n]) such that i > a[n] and i <= n,
and swap the pairs as listed in the editorial to prevent losing inversions.
Does anyone know what the optimal algorithm (minimum number of increases) for B is? (not the one to solve the problem, but the one to find the minimum number of increases needed)