Problem A. Idea by Igorbunov
Only the following pairs of numbers are possible: $$$(x, x)$$$, $$$(x, 2 \cdot x)$$$, and $$$(x, 3 \cdot x)$$$
Notice that only the following pairs of numbers are possible: $$$(x, x)$$$, $$$(x, 2 \cdot x)$$$, and $$$(x, 3 \cdot x)$$$.
Let $$$d = gcd(a, b)$$$. Now notice that it's impossible that $$$a = k \cdot d$$$ or $$$b = k \cdot d$$$ for some $$$k > 3$$$, because otherwise the $$$lcm$$$ would be at least $$$k \cdot d > 3 \cdot d$$$. Therefore the only possible cases are the pairs listed above and $$$(2 \cdot d, 3 \cdot d)$$$, but in the latter case we have $$$lcm = 6 \cdot d > 3 \cdot d$$$.
The number of the pairs of the first kind is $$$n$$$, of the second kind is $$$2 \cdot \lfloor \frac{n}{2} \rfloor$$$, and of the third kind is $$$2 \cdot \lfloor \frac{n}{3} \rfloor$$$ (the factor $$$2$$$ in the latter two formulae arises from the fact that pairs are ordered). Therefore, the answer to the problem is $$$n + 2 \cdot \left( \lfloor \frac{n}{2} \rfloor + \lfloor \frac{n}{3} \rfloor \right)$$$.
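For completeness, a minimal implementation of this formula (the input format, $$$t$$$ test cases each consisting of a single integer $$$n$$$, is an assumption; check the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        long long n;
        scanf("%lld", &n);
        // ordered pairs: (x, x), plus (x, 2x), (2x, x), (x, 3x), (3x, x)
        printf("%lld\n", n + 2 * (n / 2 + n / 3));
    }
}
```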
Problem B. Idea by FairyWinx
Solve the problem for $$$k = 2$$$.
Solve the problem without the "$$$(r, c)$$$ must be colored" limitation.
Try to paint diagonals
No, you don't need casework
Notice that the answer to the problem is at least $$$\frac{n^2}{k}$$$, because you can split the square into that many non-intersecting rectangles of dimensions $$$1 \times k$$$. So let's try to paint exactly that many cells and see whether it's always possible.
For simplicity, let's first solve the problem without necessarily painting $$$(r, c)$$$. In this case, we're looking for something like a chess coloring, which is a diagonal coloring.
Let's number the diagonals from the "lowest" to the "highest". Notice that every $$$1 \times k$$$ and $$$k \times 1$$$ subrectangle intersects exactly $$$k$$$ consecutive diagonals, so we can paint every $$$k$$$-th diagonal to obtain the required answer: every such subrectangle will contain exactly one painted cell.
To add the $$$(r, c)$$$ requirement back, notice that $$$(r, c)$$$ lies on diagonal number $$$r + c$$$ (if you trace any path from $$$(0, 0)$$$ to $$$(r, c)$$$ with non-decreasing coordinates, going one cell upwards or rightwards increases exactly one of the coordinates by one, and also increases the number of the diagonal by one). Therefore, all we need to do is paint the cells whose coordinates satisfy $$$(x + y) \% k = (r + c) \% k$$$.
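A sketch of the resulting construction (assuming $$$t$$$ test cases with $$$n$$$, $$$k$$$, $$$r$$$, $$$c$$$ per test, $$$1$$$-based cell indices, and '.' for unpainted cells; adjust to the actual I/O format):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, k, r, c;
        scanf("%d %d %d %d", &n, &k, &r, &c);
        // paint exactly the diagonals congruent to (r + c) modulo k
        for (int x = 1; x <= n; x++) {
            string row(n, '.');
            for (int y = 1; y <= n; y++)
                if ((x + y) % k == (r + c) % k)
                    row[y - 1] = 'X';
            printf("%s\n", row.c_str());
        }
    }
}
```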
Problem C. Idea by FairyWinx
We've got a problem if $$$b_{i} > b_{i + 1} + 1$$$
It's alright if $$$a_i = b_i$$$
Programmers aren't mathematicians, you don't need to prove the solution
Firstly, we obviously require $$$a_i \le b_i$$$ to hold for all $$$i$$$. With that out of the way, let's consider non-trivial cases. Also let $$$a_{n+1} = a_1, b_{n+1} = b_1$$$ cyclically.
For each $$$i$$$, we require that either $$$a_i = b_i$$$ or $$$b_i \leq b_{i + 1} + 1$$$ holds. That's because if we increment $$$a_i$$$ at least once, then right before the last increment of $$$a_i$$$ we had $$$a_i = b_i - 1$$$, $$$a_i \le a_{i+1}$$$ (the operation's condition), and $$$a_{i + 1} \le b_{i + 1}$$$; chaining these gives $$$b_i - 1 \le b_{i+1}$$$, i.e. $$$b_i \le b_{i+1} + 1$$$.
Now let's prove these two conditions are enough. Let $$$i$$$ be the index of the minimal element of $$$a$$$ such that $$$a_i < b_i$$$ (i.e. the smallest element that's not ready yet). Notice that in this case we can, in fact, assign $$$a_i := a_i+1$$$: if $$$a_{i+1} < b_{i+1}$$$, then $$$a_i \le a_{i+1}$$$ by the minimality of $$$a_i$$$, and if $$$a_{i+1} = b_{i+1}$$$, then $$$a_i \le b_i - 1 \le b_{i+1} = a_{i+1}$$$, so the operation is allowed either way, and performing it brings us one step closer to the required array. It's easy to continue this proof by induction.
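The resulting check is tiny; a sketch (the input format, $$$t$$$ test cases each with $$$n$$$, then array $$$a$$$, then array $$$b$$$, is an assumption):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n), b(n);
        for (auto &x : a) scanf("%lld", &x);
        for (auto &x : b) scanf("%lld", &x);
        bool ok = true;
        for (int i = 0; i < n; i++) {
            // a_i must not exceed b_i, and if a_i has to grow, b_i <= b_{i+1} + 1 must hold
            if (a[i] > b[i] || (a[i] != b[i] && b[i] > b[(i + 1) % n] + 1))
                ok = false;
        }
        puts(ok ? "YES" : "NO");
    }
}
```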
Problem D. Idea by FairyWinx
Reformulate this problem in terms of a complete binary tree.
It would be strange for the sponsors to change two nodes of the same depth.
The problem can be reformulated as follows. We've got a complete binary tree with $$$2^n$$$ leaves. There's a marked edge from each intermediate node to one of its children. The winner is the leaf reachable from the root via marked edges. Changes modify the outgoing marked edge of a node.
Now it should be fairly obvious that there's no reason to change more than one node per level, because only one node per level matters: the one on the path from the root to the answer node. So the winner only depends on the subset of levels we perform changes on, and vice versa: different subsets always yield different winners.
Sponsors can change exactly $$$i$$$ nodes in $$$\binom{n}{i}$$$ ways. Summing this over $$$i$$$, we get $$$\sum_{i=0}^{\min(n, k)} \binom{n}{i}$$$. Call this number $$$m$$$; it is the number of winners the sponsors choose between. Let's call them candidates for brevity. It's easy to see that $$$m$$$ is the answer to the problem, because a) the sponsors can guarantee the winner is at least $$$m$$$, as, independently of the list of candidate winners "provided" by Madoka, at least one of them must be at least $$$m$$$, and b) Madoka can guarantee the winner is at most $$$m$$$ by first marking the edges arbitrarily, then computing the list of candidate nodes, and only then filling them with the numbers from $$$1$$$ to $$$m$$$ (and the other nodes arbitrarily).
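Computing $$$m$$$ takes a few lines with precomputed factorials; a sketch modulo $$$10^9+7$$$ (the modulus and the single-line "$$$n$$$ $$$k$$$" input are assumptions, check the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll power(ll a, ll e) {                 // binary exponentiation: a^e mod MOD
    ll r = 1; a %= MOD;
    for (; e; e >>= 1, a = a * a % MOD)
        if (e & 1) r = r * a % MOD;
    return r;
}

int main() {
    ll n, k;
    scanf("%lld %lld", &n, &k);
    vector<ll> fact(n + 1, 1);
    for (ll i = 1; i <= n; i++) fact[i] = fact[i - 1] * i % MOD;
    auto C = [&](ll nn, ll rr) {       // binomial coefficient via Fermat inverses
        return fact[nn] * power(fact[rr] * fact[nn - rr] % MOD, MOD - 2) % MOD;
    };
    ll ans = 0;
    for (ll i = 0; i <= min(n, k); i++) ans = (ans + C(n, i)) % MOD;
    printf("%lld\n", ans);
}
```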
Problem E. Idea by FairyWinx
Bruteforce $$$c$$$
$$$gcd(a, b)$$$ divides $$$a + b$$$.
Recall the existence of Euler's totient function.
Let's bruteforce $$$c$$$; then we have $$$gcd(a, b) = gcd(a, a + b) = gcd(a, n - c)$$$. This means that $$$gcd(a, b)$$$ divides $$$n - c$$$, so let's just go through all divisors of $$$n - c$$$. For every proper divisor $$$d$$$ (that is, $$$\frac{n - c}{d} \ge 2$$$, which is forced by $$$a, b \ge 1$$$), the count of pairs $$$(a, b)$$$ satisfying $$$a + b = n - c$$$ and $$$gcd(a, b) = d$$$ is $$$\phi (\frac{n - c}{d})$$$, because we need $$$d$$$ to divide $$$a$$$ and $$$\frac{a}{d}$$$ to be coprime with $$$\frac{n - c}{d}$$$, so that the $$$gcd$$$ is equal to $$$d$$$.
Therefore, the answer to the problem is $$$\sum \mathrm{lcm}(c, d) \cdot \phi\left(\frac{n - c}{d}\right)$$$, where $$$1 \leq c \leq n - 2$$$ and $$$d$$$ runs over the proper divisors of $$$n - c$$$.
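A sketch of evaluating this sum in about $$$\mathcal{O}(n \log n)$$$ operations, iterating over $$$d$$$ and its multiples instead of factoring each $$$n - c$$$ (the modulus $$$10^9 + 7$$$ and the single-integer input are assumptions):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

int main() {
    int n;
    scanf("%d", &n);
    // sieve of Euler's totient function
    vector<int> phi(n + 1);
    iota(phi.begin(), phi.end(), 0);
    for (int i = 2; i <= n; i++)
        if (phi[i] == i)                          // i is prime
            for (int j = i; j <= n; j += i)
                phi[j] -= phi[j] / i;
    ll ans = 0;
    // s = n - c = a + b; d runs over the proper divisors of s (so s >= 2d)
    for (ll d = 1; d <= n; d++)
        for (ll s = 2 * d; s <= n - 1; s += d) {
            ll c = n - s;
            ll l = c / __gcd(c, d) % MOD * d % MOD;   // lcm(c, d) mod p
            ans = (ans + l * phi[s / d]) % MOD;
        }
    printf("%lld\n", ans);
}
```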
Problem F. Idea by TeaTime, tutorial by purplesyringa
Editorialist's note: I didn't submit the solution myself, but I proved it theoretically, aggregated solutions of problemsetters as well as participants, so I'm fairly sure it's correct, but you might want to treat it with more suspicion.
Try graph theory.
Try flow theory.
No, that won't work, try again.
Let's reformulate the problem in terms of graphs. We are given an undirected graph and we are asked to determine edge directions, subject to fixed indegree minus outdegree (hereinafter balance) values for some vertices.
It is tempting to think of this as a flow problem: edges are pipes with capacity 1, vertices are producers or consumers of flow, and vertices with fixed differences produce or consume an exact amount of flow. Except that's not quite an equivalent problem: a maxflow algorithm will find a flow, i.e. an orientation of edges, but it might just ignore some edges if it feels like it.
We need to overcome this somehow by introducing incentive to use all edges. To do this, forget about the "edges are edges, vertices are vertices" idea for a while. Create an imaginary source and add a pipe with capacity 1 to every edge of the original graph. Technically, this is interpreting edges of the original graph as vertices of the flow graph. Non-technically, I like to interpret this like magically spawning flow into the middle of the edge.
Now, the flow appearing in the edge has to actually go somewhere if we want the maxflow algorithm to treat it like a treasure it wants to increase. Let's just add two pipes: from the edge to vertex $$$u_i$$$ and from the edge to vertex $$$v_i$$$, because where else would it go? (technically, any capacity works, but let's use 1 for simplicity) This has a nice side effect of determining the orientation of the edge: if the flow from the edge goes into $$$u_i$$$, it's as if it was oriented from $$$v_i$$$ to $$$u_i$$$, and vice versa.
A small problem is that this changes the semantics of the edge orientation somewhat. In the original problem, $$$u_i \to v_i$$$ incremented $$$v_i$$$ and decremented $$$u_i$$$. In the new formulation, only $$$v_i$$$ is incremented, so we need to transform the requirements $$$a_v$$$ on balances into requirements $$$b_v$$$ on indegrees: $$$b_v = \frac{a_v + \mathrm{deg} v}{2}$$$ (and we need to check that the numerator is even and non-negative, otherwise the answer is NO).
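To see where this formula comes from: if $$$b_v$$$ of the incident edges point into $$$v$$$, then $$$a_v = b_v - (\mathrm{deg}\, v - b_v) = 2 b_v - \mathrm{deg}\, v$$$, which rearranges to $$$b_v = \frac{a_v + \mathrm{deg}\, v}{2}$$$.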
How do we enforce the indegree requirements? For all vertices with $$$s_v = 1$$$, add a pipe from the vertex $$$v$$$ to an imaginary sink with capacity $$$b_v$$$. We expect all these pipes to be satiated.
What about vertices with $$$s_v = 0$$$? Unfortunately, we can't just add a pipe from the vertex $$$v$$$ to an imaginary sink with capacity $$$\infty$$$, because if you have an edge with $$$s_v = 0$$$ and $$$s_u = 1$$$, the maxflow algorithm doesn't have any incentive to let the flow go to $$$u$$$ instead of $$$v$$$, so the pipe from $$$u$$$ to the sink might not get satiated and we might erroneously report a negative answer.
How do we require certain pipes to be satiated? We could theoretically use a push-relabel algorithm, but in this case we can use something much simpler. The sum of capacities of all pipes from the imaginary source is $$$m$$$. We expect them all to be satiated, so this is the total flow we're expecting. The sum of capacities of all pipes to the imaginary sink we want to satiate is $$$\sum_v b_v$$$. Therefore, there's a surplus of flow of exactly $$$\Delta = m - \sum_v b_v$$$ (if it's negative, the answer is NO). So: create an intermediate vertex, add a pipe of capacity $$$\infty$$$ from each vertex with $$$s_v = 0$$$ to this intermediate vertex, and a pipe from this intermediate vertex to the imaginary sink with capacity $$$\Delta$$$. This will stop the algorithm from relying too much on non-fixed vertices.
Run a maxflow algorithm. Check that every pipe from the source to an edge and every pipe from a vertex to the sink is satiated, or, alternatively, the maxflow is exactly $$$m$$$. If this does not hold, the answer is NO. If this holds, the answer is YES and the edge orientation can be restored by checking which of the two pipes from an edge is satiated.
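To make the construction concrete, here is a sketch in terms of the graph reformulation above. The way $$$n$$$, $$$m$$$, $$$s_v$$$, $$$a_v$$$ and the edge list are read, and the way orientations are printed, are placeholders for the actual statement's I/O; the maxflow routine is plain BFS augmenting paths, which the complexity discussion below justifies.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp), O(F * M) on integer capacities.
struct MaxFlow {
    struct Arc { int to; ll cap; };
    vector<Arc> e;
    vector<vector<int>> g;
    MaxFlow(int n) : g(n) {}
    int add_edge(int u, int v, ll c) {            // returns the index of the forward arc
        g[u].push_back(e.size()); e.push_back({v, c});
        g[v].push_back(e.size()); e.push_back({u, 0});
        return (int)e.size() - 2;
    }
    ll run(int s, int t) {
        ll flow = 0;
        while (true) {
            vector<int> pre(g.size(), -1);
            vector<bool> vis(g.size(), false);
            queue<int> q; q.push(s); vis[s] = true;
            while (!q.empty() && !vis[t]) {
                int u = q.front(); q.pop();
                for (int id : g[u])
                    if (e[id].cap > 0 && !vis[e[id].to])
                        vis[e[id].to] = true, pre[e[id].to] = id, q.push(e[id].to);
            }
            if (!vis[t]) break;
            ll aug = LLONG_MAX;
            for (int v = t; v != s; v = e[pre[v] ^ 1].to) aug = min(aug, e[pre[v]].cap);
            for (int v = t; v != s; v = e[pre[v] ^ 1].to) e[pre[v]].cap -= aug, e[pre[v] ^ 1].cap += aug;
            flow += aug;
        }
        return flow;
    }
};

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<int> s(n + 1), a(n + 1), deg(n + 1, 0), eu(m), ev(m);
    for (int i = 1; i <= n; i++) scanf("%d", &s[i]);      // s[i] = 1: balance of vertex i is fixed
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);      // required indegree minus outdegree
    for (int i = 0; i < m; i++) {
        scanf("%d %d", &eu[i], &ev[i]);
        deg[eu[i]]++, deg[ev[i]]++;
    }
    // node ids: 0 = source, 1..n = vertices, n+1..n+m = edges, n+m+1 = intermediate, n+m+2 = sink
    int S = 0, X = n + m + 1, T = n + m + 2;
    MaxFlow f(T + 1);
    vector<int> toU(m);
    for (int i = 0; i < m; i++) {
        f.add_edge(S, n + 1 + i, 1);                      // spawn one unit of flow "inside" edge i
        toU[i] = f.add_edge(n + 1 + i, eu[i], 1);         // ... which must leave through u_i or v_i
        f.add_edge(n + 1 + i, ev[i], 1);
    }
    ll sumB = 0;
    for (int v = 1; v <= n; v++) {
        if (s[v]) {
            if ((a[v] + deg[v]) % 2 || a[v] + deg[v] < 0) { puts("NO"); return 0; }
            ll b = (a[v] + deg[v]) / 2;                   // required indegree b_v
            sumB += b;
            f.add_edge(v, T, b);
        } else {
            f.add_edge(v, X, deg[v]);                     // effectively infinite capacity
        }
    }
    ll delta = m - sumB;                                  // surplus that must exit via non-fixed vertices
    if (delta < 0) { puts("NO"); return 0; }
    f.add_edge(X, T, delta);
    if (f.run(S, T) != m) { puts("NO"); return 0; }       // every edge must be oriented
    puts("YES");
    for (int i = 0; i < m; i++)                           // flow into u_i means orientation v_i -> u_i
        if (f.e[toU[i]].cap == 0) printf("%d %d\n", ev[i], eu[i]);
        else printf("%d %d\n", eu[i], ev[i]);
}
```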
Is this fast? That depends on the algorithm heavily.
Firstly, you can use the Edmonds-Karp algorithm. It works in $$$\mathcal{O}(FM)$$$, where $$$F$$$ is the maxflow and $$$M$$$ is the number of pipes. The former is $$$m$$$ and the latter is $$$n + m$$$, so we've got $$$\mathcal{O}((n + m) m)$$$, which is just fine.
Secondly, you can use Dinic's algorithm, which is typically considered an improved version of Edmonds-Karp's, but is worse in some cases. It improves the number of rounds from $$$F$$$ to $$$\mathcal{O}(N)$$$, where $$$N$$$ is the number of vertices in the network, which doesn't help in this particular problem. In exchange, it sacrifices the complexity of a single phase, increasing it to $$$\mathcal{O}(NM)$$$, which is a disaster waiting to happen.
Lots of people submitted Dinic's instead of Edmonds-Karp's. I don't know why, perhaps they just trained themselves to use Dinic's everywhere and didn't notice the unfunny joke here.
Luckily for them, Dinic's algorithm still works. You might've heard it works in $$$\mathcal{O}(M N^{2/3})$$$ for unit-capacity networks, where $$$M$$$ is the number of pipes and $$$N$$$ is the number of vertices, which translates to $$$\mathcal{O}((n + m) n^{2/3})$$$ in our case, which would be good enough if the analysis held.
Our network is not unit-capacity, but it's easy to see how to make it one. We use non-unit capacities in three cases:
When we enforce the indegree, we add a pipe with capacity $$$b_v$$$ from the vertex to the sink. We can replace it with $$$b_v$$$ pipes of capacity 1. As $$$\sum_v b_v \le m$$$, this will not increase the number of pipes by more than $$$m$$$, so the complexity holds. Due to how Dinic's algorithm works, replacing a pipe with several pipes of the same total capacity does not slow the algorithm down.
Similarly, the pipe from the intermediate vertex to the sink has capacity $$$\Delta \le m$$$, so we could theoretically replace it with $$$\Delta$$$ unit pipes and the complexity would hold.
Finally, when we handle vertices with $$$s_v = 0$$$, we add pipes from such vertices to the intermediate vertex with capacity $$$\infty$$$. However, for each such vertex, the maximum used capacity is actually the count $$$k$$$ of edges incident to $$$v$$$, so the capacity of such pipes could be replaced with $$$k$$$. This would obviously have the same effect, and would not slow Dinic down: even if it tries to push more than $$$k$$$ through the corresponding back edge, it will quickly roll back, which affects the time taken by a single phase (non-asymptotically), but not the count of phases. As the sum of $$$k$$$ over all such vertices is $$$\mathcal{O}(m)$$$, we're fine again.
I have another solution for D.
We will try to count the number of people for whom we can guarantee that they won't win. Let's call it $$$ans$$$.
Then the answer will be $$$2^n - ans$$$.
Let's look at the stupid (brute-force) solution and try to optimize it:
Pretty familiar, right?)
Let's iterate over the values of $$$n$$$ and try to find the number of times $$$k$$$ becomes equal to zero. Let's call this new $$$n$$$ = $$$m$$$. Then we will end up with $$$k$$$ = 0 exactly $$${n - m - 1 \choose k}$$$ times! Why? At first I thought that it had to be $$${n - m \choose k + 1}$$$, but our last move is always fixed! We have to go to $$$solve(n - 1, k - 1)$$$, because we are finding the first time when $$$k$$$ = 0. The logic here is the same as in Pascal's triangle.
My solution: 170668170
Oh wow, this one is clever, nice one!
We can also consider each participant as a binary string. Initially the winner is participant 000...0. Changing an outcome is equivalent to flipping one bit, and each bit flip generates a new binary string, i.e. a new participant can become the winner.
My Video Solution for D.
Thank you for the help. What is the math behind the inverse factorial code you are running? Fermat's little theorem?
Yes, you can read more about it here: https://cp-algorithms.com/algebra/module-inverse.html#finding-the-modular-inverse-using-binary-exponentiation
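For reference, a minimal sketch of the technique from that link: the inverse of $$$a$$$ modulo a prime $$$p$$$ is $$$a^{p-2} \bmod p$$$ by Fermat's little theorem, computed with binary exponentiation (the modulus $$$10^9+7$$$ is just an example):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;   // must be prime for Fermat's little theorem

ll power(ll a, ll e) {    // binary exponentiation: a^e mod MOD
    ll r = 1; a %= MOD;
    for (; e; e >>= 1, a = a * a % MOD)
        if (e & 1) r = r * a % MOD;
    return r;
}

ll inverse(ll a) {        // a^(MOD-2) = a^(-1) mod MOD
    return power(a, MOD - 2);
}

int main() {
    ll fact5 = 120;                        // 5!
    printf("%lld\n", inverse(fact5));      // modular inverse of 5! modulo 1e9+7
}
```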
I didn't understand how you got (n-m-1)C(k) or why you thought it would be (n-m)C(k+1); however, I do understand why (n-m-1)C(k) is right instead of (n-m)C(k+1), because of the last move. Can you please explain the logic behind why the total number of ways to get to 0 is (n-m)C(k+1)?
IMPORTANT ISSUE(S)
D fixed
I really like the way you print 'YES' and 'NO' :)
In problem E, why is the number of pairs (a, b) that are coprime and sum to n equal to phi[n]? I only knew that phi[n] counts the number of integers between 1 and n which are coprime to n.
If $$$a$$$ is relatively prime to $$$n$$$, $$$a$$$ must also be relatively prime to $$$n-a$$$.
The proof is simple: assume $$$n-a$$$ is not relatively prime to $$$a$$$, then that implies some $$$d>1$$$ divides $$$n-a$$$ and $$$a$$$. However, this implies that $$$d$$$ also divides $$$n-a + a = n$$$, so $$$a$$$ is not relatively prime to $$$n$$$, a contradiction.
Given this, the pairs are just $$$(a, n-a)$$$, where $$$a$$$ is relatively prime to $$$n$$$, so there are $$$\varphi(n)$$$ of them.
An easier proof: $$$gcd(a,n)=gcd(a,n-a)$$$. Simply think of how the Euclidean algorithm works.
Why does my solution get memory exceeded for Problem B? https://codeforces.me/contest/1717/submission/170674893
You seem to never update any values in your matrix by using == instead of =.
Two corrections:
You're right, fixed both.
I just published the tutorial for problem F. I had to write it myself instead of translating the problemsetter's editorial because apparently the original editorialist's solution was hacked, he got sad and got drunk. I'm only half kidding. So I took the opportunity to provide more of a chain of thought rather than a concise tutorial. Hope you don't mind.
Excuse me, I have some confusion:
How do we avoid this situation:
A unit of flow goes to the node $$$u_i$$$ where $$$s_{u_i} = 0$$$ instead of going to $$$v_i$$$ where $$$s_{v_i}=1$$$. However, $$$v_i$$$ needs this flow to reach its required indegree $$$b_{v_i}$$$.
You're absolutely right, this was a bug. I saw some code related to that in people's solutions, but didn't realize that's what it was doing. Should be fixed now. Read starting from "What about vertices with $$$s_v = 0$$$?".
:/
Imagine getting drunk, could not be me
thank you for the round, next time keep your math problems to yourself.
good problems to attack my brain.
(though I've only done a virtual participation)
Problem A
gcd(a,b) will be in the range [1, n]; therefore, if gcd(a,b) == 1, then a = 1 and b is a multiple of 1; if gcd(a,b) == 2, then b should be a multiple of 2; for gcd(a,b) == 3, b = 3*a; etc.
where m is an integer.
Programmers aren't mathematicians, you don't need to prove the solution
In D, the answer is:
which can be computed using FFT.
I realized that it is absolutely unnecessary to use FFT here after reading the editorial. I tried to find the generating function for the answer, but it turned out to be just the sum of binomial coefficients.
For all $$$k$$$, $$$[x^k](1+x)^n$$$ is just $$$\binom{n}{k}$$$. Notice that $$$[x^k]F(x)/(1-x)$$$ sums up the coefficients $$$[x^j]F(x)$$$ for $$$j\le k$$$. So the answer is the sum of binomial coefficients.
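Spelled out, with $$$F(x) = (1+x)^n$$$: $$$[x^k]\frac{(1+x)^n}{1-x} = \sum_{j=0}^{k} [x^j](1+x)^n = \sum_{j=0}^{\min(n,k)} \binom{n}{j}$$$, which is exactly the editorial's answer.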
Yes you are right. I didn't realize that.
can someone please explain the solution to problem A in easy steps?
Well, let $$$g = \gcd(a, b)$$$ and $$$a = ga_0$$$ and $$$b = gb_0$$$, where $$$\gcd(a_0, b_0) = 1$$$. So, here we have $$$\mathrm{lcm}(a, b) = \frac{ab}{\gcd(a, b)} = \frac{g^2a_0b_0}{g} = ga_0b_0$$$. Thus, $$$\frac{\mathrm{lcm}(a, b)}{\gcd(a, b)} = a_0b_0 \leq 3$$$. So, here we have $$$3$$$ cases: $$$a_0b_0 = 1$$$, i.e. $$$a = b$$$, giving $$$n$$$ ordered pairs; $$$a_0b_0 = 2$$$, i.e. one number is twice the other, giving $$$2\lfloor \frac{n}{2} \rfloor$$$ ordered pairs; and $$$a_0b_0 = 3$$$, i.e. one number is three times the other, giving $$$2\lfloor \frac{n}{3} \rfloor$$$ ordered pairs.
Adding all of them up, we have the answer as $$$2\lfloor \frac{n}{3} \rfloor + 2 \lfloor \frac{n}{2} \rfloor + n$$$.
In problem E, the answer is:
where $$$\mu(n)$$$ is the Möbius function. This can be computed in $$$O(N \log N)$$$.
should problem A really be rated 800 ?
No. My guess is that it will be rated 900 or 1000.
Alternate way of looking at D
Consider the path from each leaf to the root. The path will see "winning" and "losing" edges. Now associate an $$$n$$$-bit binary number to each leaf. This number will have a $$$1$$$ at position $$$i$$$ if the $$$i^{th}$$$ edge on the path was a winning edge and $$$0$$$ otherwise. Convince yourself that this number is different for each leaf. Now, for a leaf to win, all $$$0$$$s in its number must be changed to $$$1$$$s. So Madoka will assign lower values to leaves whose binary numbers have fewer $$$0$$$s. It is easy to see that the answer is the number of binary strings with at most $$$\min(n, k)$$$ zeroes, which is $$$\sum_{i = 0}^{\min(n,k)} \binom{n}{i}$$$.
Can anyone explain problem B (Madoka and Underground Competitions)? I am struggling to implement it.
This might not be very helpful, but if you know a little bit of recursion, you can take a look at my submission. (Although there are way easier implementations, e.g. this submission.)
Another way to solve E:
$$$\sum lcm(c,\gcd(a,b)) = \sum \frac{c \cdot \gcd(a,b)}{\gcd(a,b,c)} = \sum \frac{(n-a-b) \cdot \gcd(a,b)}{\gcd(a,b,n-a-b)} = \sum \frac{(n-a-b) \cdot \gcd(a,b)}{\gcd(a,b,n)}$$$
We can fix $$$\gcd(a,b) = d$$$ and calculate $$$cnt[d]$$$: the sum of $$$c = n - a - b$$$ over all pairs $$$(a, b)$$$ such that $$$d$$$ divides $$$\gcd(a,b)$$$.
$$$cnt[d] = \displaystyle \sum_{k \ge 2,\ k \cdot d < n} (k-1) \cdot (n - k \cdot d)$$$ (there are $$$k - 1$$$ ordered pairs with $$$a + b = k \cdot d$$$, $$$d \mid a$$$ and $$$d \mid b$$$, each contributing $$$c = n - k \cdot d$$$).
Then we apply inclusion/exclusion so that $$$cnt[d]$$$ accounts only for pairs $$$(a;b)$$$ with $$$\gcd(a,b) = d$$$.
So, the answer will be $$$\displaystyle \sum_{d=1}^{d \le n} { \frac {cnt[d] \cdot d} {\gcd(d,n)}} $$$.
Overall, $$$O(N \log N)$$$.
Could you please share the code for this?
Problem D video editorial with code!
Problem D can be converted as follows:
Given a complete binary tree with $$$2^n$$$ different leaf nodes, and initially $$$2^n-1$$$ portals, one connecting each internal node with its left child node.
Madoka starts at the root node. From one internal node, she can move to its left child node instantly (in zero minutes) via portal. But to move to its right child node, she must walk, and it takes one minute.
Now she wants to know the number of different leaf nodes she can possibly reach within $$$k$$$ minutes.
For example, in a tree with $$$4$$$ leaf nodes and $$$3$$$ portals, Madoka can initially reach one leaf node (the leftmost one) without spending any time. If $$$k=1$$$, she can possibly reach $$$3$$$ different leaf nodes (all but the rightmost one). If $$$k \ge 2$$$, then all $$$4$$$ leaf nodes.
Sorry for posting such an unhelpful and immature comment. Your down votes made me realize that. I shall be careful next time.
As for me, personally I think your comment is helpful. I've been troubled by understanding the problem for a long time. Now I have understood it! Thank you! :)
How does one stop over-complicating the solution? For C, we know that b[i] <= b[i+1] + 1 must hold, but how do we stop there? My mind will want the proof, or at least be pessimistic enough not to let me code the solution, because I would assume it should be more complicated than that. Any tips?
It's good that you are not stopping there, because you should not. Just one more observation is required: you should look at the minimum number in array a.
Lmao, I think problem F can be solved with some cheesy 2-SAT-style random walk. Just keep randomly swapping edges and keep track of the indegree of each node. We know each node v with s[v] == 1 has a certain required indegree and a required outdegree.
If, for all nodes, your current indegree equals the required indegree, you have a valid solution. If, for all nodes, your current indegree equals the required outdegree, you have a valid solution after flipping all the edges. This makes it a lot better than the regular 2-SAT walk, since you have 2 possible hits instead of just one; I think this is why you can go quite a bit under the O(n^2) requirement for regular 2-SAT.
I was able to get an AC while upsolving with just 10^7 iterations (I expected I'd need more). My solution is literally just 100 lines and I was able to code it tipsy.
Useful Link about 2-SAT: https://www.cl.cam.ac.uk/teaching/1920/Probablty/materials/Lecture7.pdf
Submission Link: https://codeforces.me/contest/1717/submission/170763222
This is not the classical 2-SAT algorithm though. The "default" solution to 2-SAT is computing SCC on the graph of implications. What you're doing is much more generic than 2-SAT, it's kind of gradient descent.
Did anyone try some sort of simulated annealing?
I hacked your solution and got "unexpected verdict", which usually means a solution that the author marked as correct failed on the hack. Anyway, I tested your code locally and it failed on the test case (basically, just make a directed path of length $$$10^4$$$).
Oh oof, I guess my AC was just a bit of luck as well as some weak test cases (they probably only put 1 test case with m = 10000)
No, there were some test cases with large $$$m$$$. Your method fails when there's a long path of edges that need to be oriented correctly, and maybe the author's test cases were randomly generated in a way that doesn't create long paths.
I've got a good solution for D.
https://postimg.cc/94p4yLvq
In the image above, I'm basically trying to find out, for each position, how many changes it would take to make the holder of that position win (assuming Madoka always chooses the left one to win). So the last line shows how many changes we would have to make for each node's holder to win: (0, 1, 1, 2, ...).
We can now observe: suppose we have found the answers for the first 2^i positions. What will the changes needed for the next 2^i positions be?
first 2^i: 1, 2, 3, 4, ..., k, ..., 2^i; second 2^i: 1, 2, 3, 4, ..., k, ..., 2^i;
The k-th position of the second half needs one more change than the k-th position of the first half (it is easy to see that).
Now we can observe a pattern here: the Pascal's triangle pattern. If n=1 (N=2^1): 0 changes needed by 1 position, 1 change needed by 1 position; if n=2 (N=2^2): 0 needed by 1, 1 needed by 2, 2 needed by 1; and so on. So it would look like: 1 1; 1 2 1; 1 3 3 1; ...
So the final thing we want to do is just find the n-th row of Pascal's triangle and take its prefix sum up to min(k, n); this is our answer, because we are just counting how many positions the sponsors can make win. Say they can make any position among some k positions win; then the answer would be k, because Madoka would put the k smallest numbers there.
Excuse the poor explanation.
Hey, your explanation is pretty good; I got to the same kind of reasoning during the contest. Could you (or anyone!) elaborate on the idea of calculating the n-th row of Pascal's triangle fast? Based on your solution, I can see that it is based on Fermat's little theorem, but I cannot understand it fully.
Any link to the place where this is explained would be helpful!
Hey, the n-th row is basically the nCr coefficients of the binomial expansion of (1+1)^n.
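To compute a full row quickly, the usual trick is factorials plus modular inverses via Fermat's little theorem; a small sketch (the prime modulus $$$10^9+7$$$ and the example $$$n = 5$$$ are assumptions for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll power(ll a, ll e) {    // binary exponentiation: a^e mod MOD
    ll r = 1; a %= MOD;
    for (; e; e >>= 1, a = a * a % MOD)
        if (e & 1) r = r * a % MOD;
    return r;
}

int main() {
    int n = 5;                                       // example row
    vector<ll> fact(n + 1, 1), inv(n + 1, 1);
    for (int i = 1; i <= n; i++) fact[i] = fact[i - 1] * i % MOD;
    inv[n] = power(fact[n], MOD - 2);                // Fermat: x^(p-2) = x^(-1) mod p
    for (int i = n; i >= 1; i--) inv[i - 1] = inv[i] * i % MOD;
    for (int r = 0; r <= n; r++)                     // C(n, r) = n! / (r! (n-r)!)
        printf("%lld ", fact[n] * inv[r] % MOD * inv[n - r] % MOD);
    printf("\n");                                    // prints 1 5 10 10 5 1
}
```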
I suck at problems like C. Also, how does one get familiar with these types of induction proofs?
Is the complexity analysis in F correct? I know Dinic's complexity reduces to $$$E\sqrt{V}$$$ on special types of unit graphs, but this graph is weighted. Intuitively it feels that the algorithm will be fairly fast, but is there a rigorous proof of complexity in this case?
Damn. I heard the complexity from a friend who heard it from a friend who apparently didn't read the original Karzanov's paper but just read someone's analysis on the internet and misinterpreted it.
Shit happens. Anyway.
Firstly, the complexity is reachable, albeit on a different type of networks. You can read Alexander V. Karzanov, On finding a maximum flow in a network with special structure and some applications to find out more.
Secondly, a slightly worse but still subquadratic complexity holds. I updated the tutorial with the proof.
I think you should either add the Russian version of the analysis for problem F, or say that there is an English version, because in Russian it says that there's no editorial yet and someone might think that there's no editorial at all.
Done.
Can someone please explain problem C's solution in some simple language?
As far as I have understood: first of all, it's obvious that $$$a_i \le b_i$$$ must hold for all $$$i$$$. Then it is necessary that for each element either $$$a_i = b_i$$$ or $$$b_i \le b_{i+1} + 1$$$. Now, given that these conditions are satisfied, we can always make both arrays equal in the following way. If at each step we pick the minimum element such that $$$a_i \ne b_i$$$, we are sure that $$$b_i \le b_{i+1} + 1$$$. So suppose $$$a_i \le a_{i+1}$$$; then we can simply increment $$$a_i$$$ to $$$a_i + 1$$$. The other case, $$$a_i > a_{i+1}$$$, is never possible: if $$$a_{i+1} \ne b_{i+1}$$$ we would have chosen $$$a_{i+1}$$$ as the smallest element, and if $$$a_{i+1} = b_{i+1}$$$, then $$$b_i > a_i > b_{i+1}$$$, which means $$$b_i > b_{i+1} + 1$$$ and contradicts our initial condition. Hence, we will always be able to increment the minimum element until each element reaches its target.
Thanks man! Great explanation. I love the proof.
Can anyone provide an explanation of the solution to problem B?
Let us try to construct the matrix starting from $$$(r,c)$$$. Assume the matrix is as follows when there is a single 'X' on $$$(r,c)$$$ (this is the fifth example, btw). Note that since this mark on $$$(r,c)$$$ cannot affect other rows or columns, we fill a full diagonal with marks. After this step the matrix is as follows.
Now, let's find some other cells we need to fill. After we fill the cells at horizontal distance $$$k$$$ from the diagonal, we can see that they are at vertical distance $$$k$$$ from it as well.
And, of course, these marks did not affect other rows or columns, so we need to construct diagonals based on these cells as well.
And after that we need to construct marks based on the new diagonals, then diagonals based on those marks, and so on. This goes on until the matrix fits the criteria. By induction (same logic as above), we can prove that the diagonals we mark are the one $$$(r,c)$$$ is on and the ones whose horizontal/vertical distance to the original diagonal is a multiple of $$$k$$$. This gives us a formula: $$$r+c \equiv x+y \pmod{k}$$$ (think of the grid as a graph on a plane, really; you'll get my point). Therefore, the proof is finished and you can write relatively simple code based on what we just proved.
Sorry to bother you. Can you elaborate a bit more on how you arrived at the formula $$$r+c \equiv x+y \pmod{k}$$$? I understood everything apart from this particular thing.
Actually, it's fine either way you perceive it; $$$r-c \equiv x-y \pmod{k}$$$ will also get AC. We just want the marks to be placed on some diagonal lines, and we want these diagonal lines to have a (Manhattan) distance that is a multiple of $$$k$$$ from each other.
It would be helpful if the first few test cases always contained edge cases / tricky ones. That way we could upsolve a problem without looking at the tutorial or others' solutions. Consider this submission. I have no idea what went wrong, and I can't even debug it because I don't know the failing case.