1498A - GCD Sum
Video Editorial
Author and Problemsetting: ninja_28
Editorialist: sigma_g
Can you think of the simplest properties that relate a number and its sum of digits?
Note that if $$$X$$$ is a multiple of 3, then both $$$X$$$ as well as the sum of digits of $$$X$$$ are a multiple of 3! Can you put this property to use here?
If $$$X$$$ is a multiple of 3, then $$$\texttt{gcd-sum}(X) \ge 3$$$. Therefore, we are guaranteed that at least every third number will satisfy the constraints required by our problem $$$(\texttt{gcd-sum}(X) > 1)$$$.
Therefore, for the input $$$n$$$, we can simply check which one of $$$n$$$, $$$n+1$$$, and $$$n+2$$$ has its gcd-sum $$$> 1$$$, and print the lowest of them.
Note that you must use long long, as the input integers exceed the range of int.
Moreover, simply outputting $$$\lceil n/3 \rceil \times 3$$$ (rounding $$$n$$$ up to the next multiple of 3) is incorrect, as some non-multiples of three may also have gcd-sum $$$> 1$$$, for example, 26.
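As a quick sanity check of both claims (a minimal Python sketch, not the full solution; `gcd_sum` mirrors the helper used in the reference code):

```python
from math import gcd

def gcd_sum(x):
    # gcd of x and the sum of its digits
    return gcd(x, sum(int(c) for c in str(x)))

# 26 is not a multiple of 3, yet gcd-sum(26) = gcd(26, 8) = 2 > 1
print(gcd_sum(26))  # 2

# every multiple of 3 has gcd-sum >= 3, so the answer lies in {n, n+1, n+2}
n = 31
print(next(m for m in (n, n + 1, n + 2) if gcd_sum(m) > 1))  # 33
```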
#include <bits/stdc++.h>
using namespace std;
long long int gcd_sum(long long int num) {
// returns gcd-sum
long long int tmp = num, digitsum = 0;
while (tmp > 0) {
digitsum += tmp % 10;
tmp /= 10;
}
long long int gcd = __gcd(num, digitsum);
return gcd;
}
int main(void) {
int t;
cin >> t;
while (t--) {
long long int n;
cin >> n;
if (gcd_sum(n) != 1) {
cout << n << endl;
} else if (gcd_sum(n + 1) != 1) {
cout << n + 1 << endl;
} else if (gcd_sum(n + 2) != 1) {
cout << n + 2 << endl;
}
}
return 0;
}
from math import gcd
def valid(x):
return gcd(x, sum([ord(c) - ord('0') for c in str(x)])) != 1
t = int(input())
while t > 0:
t -= 1
n = int(input())
while not valid(n):
n += 1
print(n)
1498B - Box Fitting
Video Editorial
Author and Editorialist: sigma_g
Problemsetting: ninja_28
There can exist multiple optimal packings for a given set of rectangles. However, any of them can be rearranged to follow a specific pattern based on the rectangles' sizes.
Can you show that it is always possible to replace a set of consecutive small blocks with a single large block of the same or larger total size?
We follow a greedy strategy:
- Initialize height of box as 1.
- Initialize current layer size as $$$W$$$.
- Pick the largest available rectangle that can fit into the current layer, and place it there. Repeat until this layer cannot fit any more rectangles.
- If more rectangles remain, increment height by 1 and now repeat the last three steps. Else, output the current value of height.
First, counting-sort the given rectangles by width. Since each width is a power of two and $$$w_i \le 10^6$$$, there can be only 20 distinct rectangle widths, so the following works:
counts = [0 for w in range(0, 20)]
for w in widths:
counts[log2(w)] += 1
The solution can be implemented by iterating $$$N$$$ times. At each iteration, step through the counts array and take the largest available rectangle that fits into the current space. If no rectangle fits, increment the height by 1 and reset the remaining width to $$$W$$$.
This has complexity $$$\mathcal{O}(n\log(w_\text{max}))$$$.
It is also possible to implement the solution with a multiset and upper_bound, for a complexity of $$$\mathcal{O}(n\log(n))$$$.
Store all rectangle sizes in a multiset. At each iteration, use upper_bound to find the largest rectangle that fits into the current space, and place it. If no rectangle fits in this space, reset the space to $$$W$$$, increment the height, and repeat.
It is always possible to replace several smaller blocks with a single larger block if one is available: because all blocks are powers of two, a non-increasing run of smaller powers that sums to at least a larger power always has a prefix summing to exactly that power. Therefore, we can always place the larger block first if it fits.
This elaborate proof isn't actually required for solving the problem; the intuition in the brief proof above is sufficient. This proof is included for correctness purposes only.
Let's first note a property: if $$$a_1 + \ldots + a_n > a_0$$$, where all $$$a_i$$$ are powers of 2, $$$a_1, \ldots, a_n$$$ is a non-increasing sequence, and $$$a_1 \le a_0$$$, then there exists some $$$i$$$ such that $$$a_1 + \ldots + a_i = a_0$$$. Why is this so? This is a common property of powers of two; for example, $$$4+2+2+1+1>8$$$, but the prefix $$$4+2+2=8$$$. Formally: if $$$a_1 = a_0$$$, the property is trivially true. If $$$a_1 < a_0$$$, write $$$a_0 = 2^k a_1$$$ for some $$$k$$$, split $$$a_0$$$ into $$$2^k$$$ parts of size $$$a_1$$$, and, by strong induction, claim the property holds.
Let us now compare some optimal solution and our greedy solution. Before comparing, we first sort all blocks in each layer of the optimal solution in decreasing order. This does not affect the final answer but helps in our comparison. This comparison goes from bottom to top, left to right. Let us look at the first point of difference: when the block placed by us ($$$B_G$$$) is different from the block placed by the optimal solution ($$$B_O$$$). There are three cases.
If $$$B_O > B_G$$$: this is impossible, as our greedy always places the largest possible block; we wouldn't place $$$B_G$$$ there if $$$B_O$$$ also fit. If $$$B_O = B_G$$$: there is nothing to worry about (this isn't a point of difference).
If $$$B_O < B_G$$$: let us assume that the optimal solution places several consecutive small blocks, and not just one $$$B_O$$$. Since the blocks are in decreasing order, none of them would be bigger than $$$B_G$$$. Note that either all of these blocks will sum to less than $$$B_G$$$ or a prefix of them will be exactly equal to $$$B_G$$$. In either case, we can swap them with one single large block $$$B_G$$$ (swapping with a $$$B_G$$$ which was placed in some higher layer in the optimal solution)
Hence, in the last case, we have shown that any optimal solution (an arrangement of blocks) can be rearranged such that each layer fits the largest possible block first. This is also what is done in our greedy strategy. Therefore, with this proof by rearrangement, we conclude that our greedy strategy gives the same minimum height as that of the optimal answer.
There is also a binary search method for this problem. We binary search on the minimum height required. Given a height $$$h$$$, how do we check whether all rectangles fit?
We first preprocess the given array to construct a new array $$$c_i$$$ = the number of rectangles of width $$$2^i$$$. The size of this array is at most 20.
We iterate from the largest width to the smallest. Let the current width be $$$w_i$$$. It fits only $$$\lfloor W / w_i \rfloor$$$ times in one layer, so with height $$$h$$$ the box can fit at most $$$f_i = h \times \lfloor W / w_i \rfloor$$$ such rectangles. Hence, if $$$f_i < c_i$$$, this height is insufficient.
Therefore, we now know that for any $$$i$$$, if $$$f_i < c_i$$$, then the height is insufficient. Do we need more conditions to provably state that the given height is sufficient?
Yes! We also need to check whether the $$$i$$$-th block fits in combination with the $$$(i+1)$$$-th block. That is, when checking whether the $$$i$$$-th block has enough space, we need to account for the space already used by the $$$(i+1)$$$-th block. So, we need to update $$$c_i = c_i + 2\times c_{i+1} + 4\times c_{i+2} + \ldots$$$.
Therefore, we only need to compute the suffix sum $$$c_i$$$ like so and then check the above conditions. Complexity is $$$\mathcal{O}(n+\log(w_{max})\log(n))$$$.
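This check can be sketched in Python as follows (`min_height` is my own name for this sketch; widths are assumed to be powers of two not exceeding $$$2^{19}$$$, as in the constraints):

```python
from math import log2

def min_height(W, widths, PW=20):
    # c[i] = number of rectangles of width 2^i
    c = [0] * PW
    for w in widths:
        c[int(log2(w))] += 1
    # suffix accumulation: c_i += 2*c_{i+1} + 4*c_{i+2} + ...
    # accounts for space already used by wider blocks
    for i in range(PW - 2, -1, -1):
        c[i] += 2 * c[i + 1]

    def valid(h):
        # height h provides h * floor(W / 2^i) slots of width 2^i
        return all(h * (W // (1 << i)) >= c[i] for i in range(PW))

    lo, hi = 1, len(widths)
    while lo < hi:  # binary search for the smallest valid height
        mid = (lo + hi) // 2
        if valid(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_height(16, [1, 2, 8, 4, 8]))  # 2
```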
As we understood in the proof, this solution only works when it's guaranteed that smaller blocks will always exactly sum to any larger block. Therefore, if the blocks are not powers of two, this guarantee does not hold.
The following counterexample suffices:
6 13
6 6 4 4 3 3
As you can see, the smaller blocks are not guaranteed to sum exactly to the larger block (no prefix of $$$4,3,3$$$ sums to $$$6$$$). Our greedy solution gives minimum height $$$3$$$, but the best possible height is $$$2$$$ (first layer: $$$6,4,3$$$; second layer: $$$6,4,3$$$).
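We can verify this counterexample programmatically. The brute force below relies on the fact that the optimal layer count equals the best next-fit result over all orderings (helper names are mine, and this is only a checker, not a solution):

```python
from itertools import permutations

W = 13
blocks = [6, 6, 4, 4, 3, 3]

def greedy_height(W, blocks):
    # always place the largest block that still fits in the current layer
    remaining = sorted(blocks, reverse=True)
    height, space = 1, W
    while remaining:
        pick = next((b for b in remaining if b <= space), None)
        if pick is None:
            height += 1
            space = W
        else:
            remaining.remove(pick)
            space -= pick
    return height

def optimal_height(W, blocks):
    # brute force: next-fit over every ordering attains the optimum,
    # since an optimal packing can be listed layer by layer
    best = len(blocks)
    for perm in permutations(blocks):
        h, space = 1, W
        for b in perm:
            if b > space:
                h += 1
                space = W
            space -= b
        best = min(best, h)
    return best

print(greedy_height(W, blocks), optimal_height(W, blocks))  # 3 2
```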
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, box_width, w;
cin >> n >> box_width;
vector<int> counts(20);
for (int i = 0; i < n; i++) {
cin >> w;
counts[log2(w)]++;
}
int height = 1, space_left = box_width;
for (int iter = 0; iter < n; iter++) {
int largest = -1;
for (int size = 19; size >= 0; size--) {
if (counts[size] and (1 << size) <= space_left) {
largest = size;
break;
}
}
if (largest == -1) {
space_left = box_width;
height++;
for (int size = 19; size >= 0; size--) {
if (counts[size] and (1 << size) <= space_left) {
largest = size;
break;
}
}
}
counts[largest] -= 1;
space_left -= 1 << largest;
}
cout << height << endl;
}
}
from math import log2
def solve_tc():
n, w = list(map(int, input().split()))
widths = list(map(int, input().split()))
counts = [0 for _ in range(20)]
for width in widths:
counts[int(log2(width))] += 1
space = w
height = 1
for it in range(n):
largest = -1
for size, count_width in list(enumerate(counts))[::-1]:
if counts[size] > 0 and (2 ** size) <= space:
largest = size
break
if largest == -1:
space = w
height += 1
for size, count_width in list(enumerate(counts))[::-1]:
if counts[size] > 0 and (2 ** size) <= space:
largest = size
break
assert(largest != -1)
counts[largest] -= 1
space -= 2 ** largest
print(height)
t = int(input())
while t > 0:
t -= 1
solve_tc()
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, w;
int box_width;
cin >> n >> box_width;
multiset<int> st;
for (int i = 0; i < n; i++) {
cin >> w;
st.insert(w);
}
int height = 1, space_left = box_width;
while (!st.empty()) {
auto it = st.upper_bound(space_left);
if (it != st.begin()) {
it--;
int val = *it;
space_left -= val;
st.erase(it);
} else {
space_left = box_width;
height++;
}
}
cout << height << endl;
}
return 0;
}
#include <array>
#include <cassert>
#include <cmath>
#include <iostream>
using namespace std;
#define PW 20
array<int, PW> arr;
int n, w;
bool valid(int height) {
for (int pw = 0; pw < PW; pw++) {
long long units_i_have = 1ll * height * (w / (1 << pw));
if (units_i_have < arr[pw]) return false;
}
return true;
}
int main() {
int t; cin >> t;
while (t--) {
cin >> n >> w;
for (int i = 0; i < PW; i++) arr[i] = 0;
for (int _ = 0; _ < n; _++) {
int w; cin >> w;
arr[log2(w)] += 1;
}
// suffix cumulative count
for (int i = PW - 2; i >= 0; i--) arr[i] += 2 * arr[i + 1];
int beg = 1, end = n, ans = -1;
while (beg <= end) {
int mid = (beg + end) / 2;
if (valid(mid)) {
end = mid - 1;
ans = mid;
} else {
beg = mid + 1;
}
}
assert(ans != -1);
cout << ans << endl;
}
}
1498C - Planar Reflections
Video Editorial
Author, Problemsetter and Editorialist: sigma_g
We can use dynamic programming to store the state of the simulation at a given time. Therefore, we can simulate the entire situation by reusing the dp states.
I will describe the most intuitive solution. Looking at the constraints as well as the required output, we can store a 3-state dp: dp[n][k][2]. Here, dp[n][k][d] is the total number of particles (at the end of the simulation) produced when one particle of decay age $$$k$$$ hits the $$$n$$$-th plane in direction $$$d$$$ ($$$d$$$ is either $$$0$$$ or $$$1$$$, indicating whether the particle is going left or right).
There can be two directions, $$$N$$$ planes, and maximum decay age $$$K$$$. So this dp requires $$$2\times1000\times1000\times4$$$ bytes $$$\approx 8\,\text{MB}$$$, which is well within our memory constraints.
How do we update the DP states? If $$$k = 1$$$, the value of dp[n][1][d] is obviously $$$1$$$ for any $$$n$$$ and $$$d$$$ (as no copies are produced).
So, let us consider a particle $$$P$$$ with $$$k>1$$$. This particle $$$P$$$ produces a copy $$$P'$$$ going in the opposite direction. We can count the particles produced by $$$P'$$$ as dp[n - 1][k - 1][1 - d], as it hits the previous plane with a smaller decay age and in the opposite direction. Moreover, the particle $$$P$$$ itself hits the next plane and continues to produce more particles; that count is dp[n + 1][k][d], as it hits the next plane with the same decay age and the same direction!
Finally, we can combine these two values to get dp[n][k][d].
We can implement this easily as a recursive dp with memoization, or iteratively: we only need to process states in the correct order (for one direction, right to left; for the other, left to right) and update them as required. Look at the implementation for details. The total complexity of this algorithm is $$$\mathcal{O}(nk)$$$.
Obviously, there exist much sleeker solutions which do not use a third state. However, I wanted to describe the simplest idea required to solve the problem.
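The recurrence above can be sketched compactly in Python with memoization (`count_particles` is my own wrapper; this is fine for small inputs, while the iterative version below scales better):

```python
from functools import lru_cache

MOD = 10**9 + 7

def count_particles(n, k):
    # dp over (plane, decay age, direction); mirrors the recurrence above
    @lru_cache(maxsize=None)
    def solve(plane, age, right):
        if age == 1:
            return 1  # no copies are produced
        ans = 2  # the particle itself and its fresh copy
        if right:
            if plane < n:   # P continues to the next plane, same age
                ans += solve(plane + 1, age, True) - 1
            if plane > 1:   # the copy P' goes back with age - 1
                ans += solve(plane - 1, age - 1, False) - 1
        else:
            if plane > 1:
                ans += solve(plane - 1, age, False) - 1
            if plane < n:
                ans += solve(plane + 1, age - 1, True) - 1
        return ans % MOD
    return solve(1, k, True)

print(count_particles(2, 3))  # 4
```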
#include <cstring>
#include <iostream>
#include <vector>
using namespace std;
const int N = 1001;
const int K = 1001;
int n, k;
const int mod = 1e9 + 7;
int dp[N][K][2];
int solve(int curr, int k, int dir) {
if (k == 1) {
return 1;
}
if (dp[curr][k][dir] != -1) {
return dp[curr][k][dir];
}
int ans = 2; // me and my copy
if (dir == 1) {
if (curr < n)
ans += solve(curr + 1, k, dir) - 1;
ans %= mod;
if (curr > 1)
ans += solve(curr - 1, k - 1, 1 - dir) - 1;
ans %= mod;
dp[curr][k][dir] = ans;
} else {
if (curr > 1)
ans += solve(curr - 1, k, dir) - 1;
ans %= mod;
if (curr < n)
ans += solve(curr + 1, k - 1, 1 - dir) - 1;
ans %= mod;
dp[curr][k][dir] = ans;
}
return ans;
}
int main() {
int t;
cin >> t;
while (t--) {
cin >> n >> k;
memset(dp, -1, sizeof(dp));
cout << solve(1, k, 1) << endl;
}
}
mod = int(1e9 + 7)
def iter_solver(N, K):
dp = [[[-1 for _ in range(2)] for __ in range(K + 1)] for ___ in range(N + 1)]
for i in range(1, N + 1):
dp[i][1][0] = dp[i][1][1] = 1
for k in range(2, K + 1):
# forward dir
for n in range(N, 0, -1):
ans = 2
if n < N:
ans += dp[n + 1][k][0] - 1
ans %= mod
if n > 1:
ans += dp[n - 1][k - 1][1] - 1
ans %= mod
dp[n][k][0] = ans
# backward dir
for n in range(1, N + 1):
ans = 2
if n < N:
ans += dp[n + 1][k - 1][0] - 1
ans %= mod
if n > 1:
ans += dp[n - 1][k][1] - 1
ans %= mod
dp[n][k][1] = ans
return dp[1][K][0]
t = int(input())
while t > 0:
t -= 1
n, k = list(map(int, input().split()))
print(iter_solver(n, k))
1498D - Bananas in a Microwave
Video Editorial
Author: shash42
Problemsetting and Editorialist: sigma_g
We have a brute force $$$\mathcal{O}(N\cdot M^2)$$$ solution.
At every timestep $$$t$$$, for each banana $$$b_i$$$ that has already been reached previously, apply this timestep's operation $$$y_t$$$ times on $$$b_i$$$. For all the $$$y_t$$$ bananas reachable from $$$b_i$$$, update their minimum reachability time if they hadn't been reached previously.
Why is this correct? Simply because we are simulating each possible step of the algorithm exactly as it is described. Therefore, we cannot get an answer that's better or worse than that of the optimal solution.
Observe if we can reduce our search space when we visit nodes that have already been visited previously.
Let us take an example. We have some timestep $$$t,x,y = 1,5,10$$$. If we start visiting from $$$k=10$$$, we will visit $$$k=15,20,\ldots,55,60$$$. Let us say that $$$k=40$$$ was an already visited state. Do we now need to continue visiting $$$k=45,\ldots,60$$$ or can we stop our search here?
We can actually stop our search as soon as we reach a previously visited node! Why is this so? Because, within the same iteration, that already-visited node will once again start its own search, and will hit all the points that we were going to hit, and more!
For example, let us say we would reach points $$$a, a\cdot x, a\cdot x^2, \ldots, a\cdot x^{y-1}$$$. Now, assume that $$$a\cdot x^5$$$ had been previously reached; then it is better to stop at $$$a\cdot x^5$$$, as this node itself will reach $$$a\cdot x^5, a\cdot x^6, \ldots, a\cdot x^{y-1}, a\cdot x^{y}, \ldots, a\cdot x^{5+y-1}$$$. Note that the latter range contains the entire remaining suffix of the former range as a prefix! Therefore, in this approach, every node that would have been visited is eventually visited, and gets assigned the minimum timestamp.
We can implement the optimized solution by simply adding an extra if already_visited[k]: break to our inner loop. Yes, really: that's all it takes to solve this question!
Complexity analysis: we can show that each node is visited at most twice per iteration: an unvisited node is reached at most once, whereas a visited node is reached at most twice (once during the main loop and once when searching from the immediately preceding visited node). There are $$$N$$$ iterations, and in each iteration, each of the $$$M$$$ nodes is visited at most twice. Therefore, the total complexity is $$$\mathcal{O}(2NM) = \mathcal{O}(NM)$$$.
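The equivalence of the optimized and brute-force loops can be checked on small instances. Below is a self-contained toy version with the scaling divisor set to 1 for simplicity (`simulate` is my own helper, not the reference solution):

```python
import random

def simulate(M, ops, use_break):
    # ops: list of (t, x, y); returns the earliest timestep each of 1..M is reached
    seen = [True] + [False] * M
    answer = [0] + [-1] * M
    for timestep, (t, x, y) in enumerate(ops, start=1):
        new_seen = seen[:]
        for k in range(M + 1):
            if not seen[k]:
                continue
            curr = k
            for _ in range(y):
                curr = curr + x if t == 1 else curr * x
                if curr > M:
                    break
                if seen[curr]:
                    if use_break:
                        break      # the key optimization
                    continue       # brute force keeps stepping
                new_seen[curr] = True
                if answer[curr] == -1:
                    answer[curr] = timestep
        seen = new_seen
    return answer

random.seed(0)
ops = [(random.choice([1, 2]), random.randint(1, 5), random.randint(1, 5))
       for _ in range(6)]
assert simulate(40, ops, True) == simulate(40, ops, False)
print("optimized and brute force agree on a random instance")
```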
- Integer overflows: $$$x'$$$ does not fit in integer range
- Out of bounds access: simulating the $$$y_t$$$ steps of the algorithm even when $$$k$$$ exceeds $$$M$$$ prematurely
#include <iostream>
#include <vector>
using namespace std;
using LL = long long;
const int DIV = 1e5;
inline LL ceil(LL x, LL y) {
return (x + y - 1) / y;
}
int main() {
int T, M; cin >> T >> M;
vector<bool> is_seen(M + 1, false);
is_seen[0] = true;
vector<int> answer(M + 1, -1);
answer[0] = 0;
for (int timestep = 1; timestep <= T; timestep++) {
auto new_is_seen = is_seen;
int t, y; LL x;
cin >> t >> x >> y;
auto operation = [&] (long long &curr) {
if (t == 1) {
curr = curr + ceil(x, DIV);
} else {
curr = ceil(curr * x, DIV);
}
};
for (int k = 0; k <= M; k++) {
if (is_seen[k]) {
long long curr = k;
operation(curr);
for (int a = 1; a <= y;) {
if (curr > M) break;
if (is_seen[curr]) break;
new_is_seen[curr] = true;
answer[curr] = timestep;
a++;
operation(curr);
}
}
}
is_seen = new_is_seen;
}
for (int i = 1; i <= M; i++)
cout << answer[i] << " ";
cout << endl;
}
import sys
input = lambda: sys.stdin.readline().rstrip()
DIV = int(1e5)
def ceil(x, y):
return (x + y - 1) // y
T, M = list(map(int, input().split()))
is_seen = [0 for _ in range(M + 1)]
is_seen[0] = 1
answer = [-1 for _ in range(M + 1)]
answer[0] = 0
o = 0
for timestep in range(1, T + 1):
t, x, y = list(map(int, input().split()))
    def operation(val):
        if t == 1:
            return val + ceil(x, DIV)
        else:
            return ceil(val * x, DIV)
k = 0
setting = []
while k <= M:
if is_seen[k]:
curr = k
i = 0
while i < y:
i += 1
o += 1
curr = operation(curr)
if curr > M:
break
if is_seen[curr]:
break
setting.append(curr)
answer[curr] = timestep
k += 1
for idx in setting:
is_seen[idx] = 1
sys.stdout.write(" ".join(map(str, answer[1:])))
1498E - Two Houses
Author, Problemsetting and Editorialist: dixitgarg
In this problem we have to output two nodes $$$a$$$ and $$$b$$$ such that there is a path from $$$a$$$ to $$$b$$$ and from $$$b$$$ to $$$a$$$, maximizing the absolute difference of indegrees $$$(|k_a - k_b|)$$$. First, let us think about bireachability alone: how do we find two nodes $$$a$$$ and $$$b$$$ that are reachable from each other, and how can we verify this with the judge? If we ask $$$"? \ a \ b"$$$ (whether there is a path from $$$a$$$ to $$$b$$$) and the judge answers "Yes", we cannot ask further queries. So we should only ask about pairs $$$(a,b)$$$ for which we are already sure there is a path from $$$b$$$ to $$$a$$$. How do we ensure such a path exists?
The answer lies in the fact that the given graph is not an ordinary graph, it is a special one. For every pair of vertices in this graph, there is a directed edge. So this type of graph has some interesting properties which we are going to see now.
Imagine how the compressed SCC graph (the condensation) looks. For every pair of its nodes there is an edge, so it has exactly one source, one sink, and only one possible topological ordering.
Since there is an edge between every pair of vertices of the original graph, there is also an edge between every pair of nodes of the compressed SCC. And we know that if there is an edge from node $$$a$$$ to node $$$b$$$, then $$$a$$$ comes before $$$b$$$ in the topological sort. So for every pair of nodes of the compressed SCC we know which comes first, which yields a unique topological ordering.
Now we'll see one interesting and important property of this graph.
Property: Consider two consecutive strongly connected components in the topological sorting, then all nodes present in the left component will have indegree strictly less than all nodes present in the right component. Here left denotes lower enumeration in the topological sorting.
Consider two nodes $$$u$$$ and $$$v$$$ from the left and right components, respectively. The contribution to the indegree from nodes lying in neither component is the same for both $$$u$$$ and $$$v$$$ (because $$$u$$$ and $$$v$$$ lie in adjacent components), so we can ignore it when comparing. All edges between the two components are directed from the left component to the right one. So $$$v$$$ has indegree at least $$$\text{(size of left component)}$$$, while $$$u$$$ has indegree at most $$$\text{(size of left component)} - 1$$$: no edge is directed from the right component to $$$u$$$, and the maximum is attained when all other nodes of the left component have an edge towards $$$u$$$. Hence the indegree of $$$u$$$ is strictly less than that of $$$v$$$. Since $$$u$$$ and $$$v$$$ were arbitrary, this holds for all such pairs.
Using the above property, we can argue that if the indegree of node $$$a$$$ is greater than the indegree of node $$$b$$$, then $$$a$$$ is reachable from $$$b$$$: node $$$a$$$ lies either in the same SCC as $$$b$$$ or in an SCC of higher enumeration in the topological sorting, and in both cases $$$a$$$ is reachable from $$$b$$$.
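This property is easy to verify empirically on random complete directed graphs (this checker is my own, not part of the editorial):

```python
import random
from itertools import combinations

def random_tournament(n, rng):
    # adj[u][v] = True iff there is an edge u -> v; exactly one direction per pair
    adj = [[False] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < 0.5:
            adj[u][v] = True
        else:
            adj[v][u] = True
    return adj

def reachable(adj, s):
    # DFS from s; returns the boolean reachability vector
    n = len(adj)
    seen = [False] * n
    seen[s] = True
    stack = [s]
    while stack:
        u = stack.pop()
        for v in range(n):
            if adj[u][v] and not seen[v]:
                seen[v] = True
                stack.append(v)
    return seen

rng = random.Random(1)
for _ in range(200):
    n = 6
    adj = random_tournament(n, rng)
    indeg = [sum(adj[u][v] for u in range(n)) for v in range(n)]
    reach = [reachable(adj, s) for s in range(n)]
    for a in range(n):
        for b in range(n):
            if indeg[a] > indeg[b]:
                # higher indegree must be reachable from lower indegree
                assert reach[b][a]
print("property holds on all random tournaments tested")
```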
So we can store all pairs of nodes $$$(a,b)$$$, $$$1 \leq a < b \leq n$$$, in an array and sort it in decreasing order of the absolute difference of indegrees, i.e. $$$|k_a - k_b|$$$. When we pick a pair $$$(a,b)$$$ with $$$indegree[b] > indegree[a]$$$, we are already sure that $$$b$$$ is reachable from $$$a$$$, so we only need to check whether $$$a$$$ is reachable from $$$b$$$: we ask $$$"? \ b \ a"$$$, and if the judge responds "Yes", then $$$a$$$ and $$$b$$$ are reachable from each other. Since we iterate in decreasing order of $$$|k_a - k_b|$$$, the first such pair is optimal. If the judge never outputs "Yes" in the whole process, then no pair of nodes is bireachable, and we output $$$"! \ 0 \ 0"$$$.
Overall Complexity : $$$O(n^2 \log_2 n)$$$
#include <bits/stdc++.h>
using namespace std;
int main() {
int n;
cin >> n;
vector<int> indegree(n);
for (int i = 0; i < n; i++) {
cin >> indegree[i];
}
vector<pair<int, pair<int, int>>> vec;
for (int i = 0; i < n; ++i) {
for (int j = i + 1; j < n; ++j) {
if (indegree[i] > indegree[j])
vec.push_back({indegree[i] - indegree[j], {i, j}});
else
vec.push_back({indegree[j] - indegree[i], {j, i}});
}
}
sort(vec.rbegin(), vec.rend());
for (auto it : vec) {
if (it.first < 0)
break;
int a = it.second.first, b = it.second.second;
cout << "? " << a + 1 << " " << b + 1 << endl;
cout.flush();
string str;
cin >> str;
if (str == "Yes") {
cout << "! " << a + 1 << " " << b + 1 << endl;
cout.flush();
return 0;
}
}
cout << "! 0 0" << endl;
cout.flush();
return 0;
}
import sys
input = lambda: sys.stdin.readline().rstrip()
n = int(input())
indegs = list(map(int, input().split()))
pairs = []
for i in range(n):
for j in range(i + 1, n):
if indegs[i] > indegs[j]:
pairs.append((indegs[i] - indegs[j], (i, j)))
else:
pairs.append((indegs[j] - indegs[i], (j, i)))
pairs = sorted(pairs, reverse=True)
l = len(pairs)
for _, nodes in pairs:
ni, nj = nodes
sys.stdout.write(f"? {ni + 1} {nj + 1}\n")
sys.stdout.flush()
s = input()
if s == "Yes":
sys.stdout.write(f"! {ni + 1} {nj + 1}\n")
exit(0)
sys.stdout.write("! 0 0\n")
1498F - Christmas Game
Author: nikhil_c
Problemsetting and editorialist: sigma_g
By the Sprague-Grundy theorem, we know that the current player has a winning strategy if $$$a_1 \oplus a_2 \oplus \ldots \oplus a_n$$$ (xorsum of sizes of the existing piles) is non-zero. For a proof, read details on CP-algorithms.
Let us classify nodes into odd or even depending on their depth relative to the root. Note that coins on even steps do not affect the game state. Let us see why:
Let us say it's Alice's turn. If Alice moves some coins from an even step to an odd step, then Bob can move exactly those same coins from that odd step back to an even step. After this transition, once again it's Alice's move. In fact, we realize that Bob can "revert" every even->odd move by Alice.
Therefore, if Alice wants to win, she has to play at least one odd->even move. Moves that go from even->odd do not affect the game state at all, as the other player can always play another move that reverts them. Therefore, we can say that any coins present on even steps will not change the game state.
Let us now analyze what happens if some coins move from the odd steps to even steps. We know that any coins on even nodes will not contribute to the game state. In fact, we realize that it does not matter whether these coins were present on the even nodes before the game started or whether they came by on the even nodes during the game. Once they are on an even step, they no longer contribute to the game state.
Hence, we can conclude that moving a coin from odd step to an even step is as good as taking a coin from the odd step and throwing it away.
As we can see, we have effectively reduced the Nim game on tree to a linear nim game where only the odd positioned nodes determine the winning strategy. This is known as the staircase nim. The result is the following: the current player has a winning strategy if xorsum of all values at the odd steps is non-zero.
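The staircase-nim rule can be checked against a brute-force game search on tiny instances (this verifier is mine, not part of the editorial):

```python
from functools import lru_cache

def staircase_nim_win(piles):
    # piles[i] = coins on step i+1; a move shifts 1..piles[i] coins one step down,
    # and coins pushed off step 1 leave the game; a player unable to move loses
    @lru_cache(maxsize=None)
    def win(state):
        for i in range(len(state)):
            for take in range(1, state[i] + 1):
                nxt = list(state)
                nxt[i] -= take
                if i > 0:
                    nxt[i - 1] += take
                if not win(tuple(nxt)):
                    return True  # some move leaves the opponent in a losing state
        return False
    return win(tuple(piles))

for piles in [(1, 0), (1, 1), (2, 1, 2), (0, 3, 0)]:
    odd_xor = 0
    for i, c in enumerate(piles):
        if i % 2 == 0:  # steps 1, 3, ... (odd in 1-indexed numbering)
            odd_xor ^= c
    assert staircase_nim_win(piles) == (odd_xor != 0)
print("staircase-nim rule verified on the small examples")
```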
For general $$$K$$$, we can extend this idea: let $$$d' = \lfloor \frac d K \rfloor$$$, where $$$d$$$ is the (zero-indexed) depth of a node. Exactly the nodes with odd $$$d'$$$ contribute to the final xorsum. Take a moment to digest this.
How do we calculate these xors? At each node $$$x$$$, we store a vector $$$D(x)$$$ of size $$$2K$$$, where $$$D(x)_i$$$ is the xorsum of the values of all nodes in the subtree of $$$x$$$ whose depth relative to $$$x$$$ is congruent to $$$i$$$ modulo $$$2K$$$.
How do we propagate these values in a DFS? A node at depth i relative to a child is at depth i + 1 relative to its parent, so we can simply cycle the vector and update the values. Check the implementation for details.
#include <iostream>
#include <vector>
using namespace std;
const int N = 1e5 + 1;
const int K = 20 + 1;
using VI = vector<unsigned int>;
using VVI = vector<vector<unsigned int>>;
VVI dp(N, VI(2 * K));
VI a(N);
vector<bool> win(N);
int n, k, k2;
void dfs(VVI &adj, int node, int prev) {
dp[node][0] = a[node];
for (auto neigh : adj[node]) {
if (neigh == prev) continue;
dfs(adj, neigh, node);
for (int rem = 1; rem < k2; rem++)
dp[node][rem] ^= dp[neigh][rem - 1];
dp[node][0] ^= dp[neigh][k2 - 1];
}
}
void dfs2(const VVI &adj, const int node, const int prev, const vector<unsigned int> &my_xors) {
vector<unsigned int> final_xors(k2);
for (int i = 0; i < k2; i++)
final_xors[i] = my_xors[i] ^ dp[node][i];
unsigned int odd_layer_xor = 0;
for (int i = k; i < k2; i++)
odd_layer_xor ^= final_xors[i];
win[node] = odd_layer_xor > 0;
for (auto neigh : adj[node]) {
if (neigh == prev) continue;
auto xor_send = final_xors;
// remove all contribution of this subtree
for (int i = 1; i < k2; i++)
xor_send[i] ^= dp[neigh][i - 1];
xor_send[0] ^= dp[neigh][k2 - 1];
// whatever was depth k for me is depth k+1 for my child node
int last = xor_send.back();
for (int i = k2 - 1; i > 0; i--)
xor_send[i] = xor_send[i - 1];
xor_send[0] = last;
dfs2(adj, neigh, node, xor_send);
}
}
int main() {
cin >> n >> k;
k2 = 2 * k;
VVI adj(n + 1);
for (int i = 0; i < n - 1; i++) {
int x, y; cin >> x >> y;
adj[x].push_back(y);
adj[y].push_back(x);
}
for (int i = 1; i <= n; i++) cin >> a[i];
dfs(adj, 1, 0);
dfs2(adj, 1, 0, vector<unsigned int>(k2));
for (int i = 1; i <= n; i++) cout << (win[i] ? 1 : 0) << " ";
return 0;
}
n, k = list(map(int, input().split()))
k2 = 2 * k
dp = [[0 for _ in range(k2)] for _ in range(n + 1)]
adj = [[] for _ in range(n + 1)]
for _ in range(n - 1):
_x, _y = list(map(int, input().split()))
adj[_x].append(_y)
adj[_y].append(_x)
a = [0] + list(map(int, input().split()))
win = [0 for _ in range(n + 1)]
parent = [0 for _ in range(n + 1)]
def dfs(node):
global parent
global dp
stack = [node]
pass_num = [0 for _ in range(n + 1)]
while stack:
node = stack[-1]
pass_num[node] += 1
if pass_num[node] == 1:
for neigh in adj[node]:
if neigh == parent[node]:
continue
parent[neigh] = node
stack.append(neigh)
continue
dp[node][0] = a[node]
stack.pop()
for neigh in adj[node]:
if neigh == parent[node]:
continue
for rem in range(1, k2):
dp[node][rem] ^= dp[neigh][rem - 1]
dp[node][0] ^= dp[neigh][k2 - 1]
def dfs2(node):
global win
stack = [[node, [0 for __ in range(k2)]]]
global dp
while stack:
node, prev_xors = stack[-1]
stack.pop()
final_xors = [x ^ y for x, y in zip(prev_xors, dp[node])]
for neigh in adj[node]:
if neigh == parent[node]:
continue
send_xors = [x for x in final_xors]
for i in range(1, k2):
send_xors[i] ^= dp[neigh][i - 1]
send_xors[0] ^= dp[neigh][k2 - 1]
send_xors = [send_xors[-1]] + send_xors[:-1]
stack.append([neigh, send_xors])
odd_xor = 0
for i in range(k, k2):
odd_xor ^= final_xors[i]
win[node] = 1 if odd_xor > 0 else 0
dfs(1)
dfs2(1)
print(" ".join(list(map(str, win[1:]))))
Auto comment: topic has been updated by ninja_28 (previous revision, new revision, compare).
Excellent problem set! Need more Div2 rounds like these.
Thanks for the lightning-fast and detailed editorial!
One of the best editorials I've ever seen if it's not the best...Congratulations to all the team behind this round !
An SCC is a strongly connected component, but what is a compressed SCC?
I think it is Condensation graph(Every SCC is a node)
What they call the "condensation graph" here.
https://cp-algorithms.com/graph/strongly-connected-components.html
Essentially, compress every SCC to a vertex and look at the edges into and from it.
E can be solved in $$$O(n\log n)$$$ with ZERO queries.
Consider sorting by in degree. For the $$$i$$$th point, one can tell whether it is the last point of a SCC by comparing $$$i(i-1)/2$$$ with the sum of in degree of the first $$$i$$$ points.
So one can find all SCCs just by iterating through the degree array. Thus finding the answer.
Implementation
Now I'm curious: did the authors know about this solution?
Thank you for letting us know! Our previous version of E was suffering from this issue (answering without asking queries). So, we had to change the question to ask for the pair of bi-reachable houses with maximum indegree difference, not just any pair of bi-reachable houses. However, we were not aware that this version also suffers from the same issue.
Sorry, I'm curious: why does comparing i * (i-1)/2 with the sum of indegrees of the first i points determine whether it is the last point of an SCC?
When you process the very last node in an SCC, it means that all of the indegrees processed so far have been accounted for (no future node will ever have a directed edge towards a node at index $$$\le i$$$). So the sum of indegrees must be the same as the number of pairs processed so far, which is $$$\binom{i}{2} = i(i-1)/2$$$.
Another way to think about this is to keep track of a count "how many of the indegrees I have processed so far are due to future nodes", and you know you are in the last node of an SCC when this count becomes zero. In fact, this is what dengyaotriangle does in the submission they linked.
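That observation can be sketched in a few lines (the helper name is mine; it recovers the SCC blocks of the topological order purely from the indegrees):

```python
def scc_blocks_from_indegrees(indegs):
    # sort nodes by indegree; the i-th node (0-indexed) closes an SCC exactly
    # when the first i+1 indegrees account for all C(i+1, 2) internal pairs
    order = sorted(range(len(indegs)), key=lambda v: indegs[v])
    blocks, start, total = [], 0, 0
    for i, v in enumerate(order):
        total += indegs[v]
        if total == (i + 1) * i // 2:
            blocks.append(order[start:i + 1])
            start = i + 1
    return blocks

# tournament on 4 nodes: 0 beats everyone, and {1,2,3} form a 3-cycle
# edges: 0->1, 0->2, 0->3, 1->2, 2->3, 3->1  =>  indegrees [0, 2, 2, 2]
print(scc_blocks_from_indegrees([0, 2, 2, 2]))  # [[0], [1, 2, 3]]
```

The answer then comes from the minimum and maximum indegree within a single block, with zero queries.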
Done.
Yeah. I too had a feeling that it can be done with zero queries. But couldn't come up with this solution. :(
Wonder why the problem setter didn't know this well-known conclusion. :)
For B, why does using a map fail but a frequency array pass? freq arr: 111405356
map: 111388954
You are missing the reference (&) Check this. Basically, you are changing a copy of the value but not the actual value
Can C be solved using iterative dp?
Obviously, 111382908 .
Thanks.
Can you Explain this please??
dp[x][y]: Number of particles that bounce in the x-th plane with y ages passed from the beginning. So, if (y & 1) then all those are bouncing to the left, and to the right otherwise. It is not difficult to notice that if(y & 1)
that leads to
the other case is analogous. Finally, I need to add dp[i][j] to ans for all i < n && j <= k, those are the cases when those particles don't bounce.
I've done it during contest using O(K) memory and it obviously runs in O(NK) time complexity.
Indeed it is kind of messy, during the contest it took me 30 minutes to debug and find out why it doesn't work for inputs with N = 1 or K = 1. (It can probably be improved somehow, but for now I'll handle them as special cases).
Basically what I'm doing here is keeping prefixes and suffixes, where:
Prefix — number of generated items if we start at 1, 2 ... i-1, i, and shoot in the right direction.
Suffix — number of generated items if we start at i, i + 1 ... n-1, n, and shoot in the left direction
Almost anything that can be solved using recursive dp can in principle be solved using iterative dp also.
could you please help me?) this is my solution for problem C and i don't know what's wrong with it. thank you! 111401801
Hey, you seem to be doing subtractions, but without modulo — notice that everything you do must be modulo 1e9+7. Subtractions with modulo — if you wanna do (a-b)%m, you gotta do (a%m-b%m+m)%m I think.
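A minimal sketch of that modular-subtraction rule (the helper name `sub_mod` is mine):

```python
MOD = 10**9 + 7

def sub_mod(a, b, m=MOD):
    # (a - b) mod m: the extra + m keeps the intermediate non-negative,
    # which matters in languages like C++ where % can return a negative
    # result when the dividend is negative
    return ((a % m) - (b % m) + m) % m
```

In Python the plain `(a - b) % m` is already non-negative, but the `+ m` form translates directly to C++.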
thanks! i'll check
sorry, where did you find it?
Check this Submission. I modified your code with my suggestion and it got accepted.
thanks!!!!
Sorry for 4 months late reply, can you explain how your solution works? I am really curious.
The problems were really interesting . Great Work !
Beautiful editorial. I just loved the way they are explained step by step.
In E problem, why this submission passed and this submission failed. I added following condition before asking query in AC submission.
Why is swapping making difference?
A reachable from B does not imply B reachable from A
I guess this is an easier proof for Problem E not requiring discussion of things like SCC and top-sort:
So the solution is: if there exists pair $$$(A, B)$$$ such that $$$k_{A} \geq k_{B}$$$, and we can go from $$$A$$$ to $$$B$$$ (assumption), then we can go from $$$B$$$ to $$$A$$$. We have to prove this.
Case 1: if the edge is from $$$B$$$ to $$$A$$$ : since we can go from $$$A$$$ to $$$B$$$ (assumption), and from this edge we can go from $$$B$$$ to $$$A$$$, so yes.
Case 2: the edge is from $$$A$$$ to $$$B$$$. Now consider all other vertices $$$C$$$. We claim that there exists a $$$C$$$ such that edges $$$(B, C)$$$ and $$$(C, A)$$$ exist.
Proof: we can think of indegrees as a score. Currently we have discovered the edge $$$A$$$ to $$$B$$$, so the score is $$$0:1$$$, yet we need the score of $$$A$$$ to be greater than that of $$$B$$$. For each other vertex $$$C$$$, the only edge combination that favourably increases the score of $$$A$$$ relative to $$$B$$$ is $$$(B, C)$$$ together with $$$(C, A)$$$, so at least one such $$$C$$$ must exist. Hence proved.
Is there any mistake or lack of rigour in this proof?
What if we don't find any pair (A, B) such that $$$K_A \ge K_B$$$ and we can go from A to B ?
Haha, do you see what you have written? If no such pair exists then the answer is NONE, of course, lol. Because for any pair, one of the two indegrees has to be at least the other. And you must find a bi-reachable pair. Lol.
Your proof avoids using SCC and topo-sort by not proving the part that requires them at all. So I would say your proof is incomplete because your proof makes assumptions.
My proof makes no assumptions. The assumption in the bold part is that it is the answer to the query. You can think it is the hypothesis. Given X prove Y. I said that X is an assumption.
I misread the "and" there. My bad :)
I will accept your apology if you give me T-shirt
Sorry but I am not incharge of that :)
It can also be done this way :
First we remove all the nodes with indegree 0 or outdegree 0. (After removing a node, we would again need to check for it and keep removing such nodes).
Now if we have two nodes $$$A$$$ and $$$B$$$ with $$$k_A \geq k_B$$$, then I claim that we can go from $$$B$$$ to $$$A$$$ (yes, without any other assumption).
The number of nodes reachable from $$$B$$$ is at least $$$n - k_B$$$ ($$$B$$$ itself plus its out-neighbours), and
the number of nodes that can reach $$$A$$$ is at least $$$k_A + 1$$$ ($$$A$$$ itself plus its in-neighbours).
If we can't go from $$$B$$$ to $$$A$$$, these two sets are disjoint, so the total number of nodes would be at least $$$n + 1 + k_A - k_B > n$$$, which is not possible. So we can go from $$$B$$$ to $$$A$$$.
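If you want to convince yourself of this counting argument, it can be brute-forced over every tournament on a few vertices (a hedged sketch; all names are mine):

```python
from itertools import product

def reachable_from(adj, s, n):
    # iterative DFS over an adjacency matrix
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in range(n):
            if adj[u][v] and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def check_all_tournaments(n):
    # enumerate every orientation of the complete graph on n vertices and
    # verify: indeg(a) >= indeg(b) implies a is reachable from b
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for orient in product([0, 1], repeat=len(pairs)):
        adj = [[0] * n for _ in range(n)]
        for (i, j), d in zip(pairs, orient):
            if d:
                adj[i][j] = 1
            else:
                adj[j][i] = 1
        indeg = [sum(adj[u][v] for u in range(n)) for v in range(n)]
        for a in range(n):
            for b in range(n):
                if a != b and indeg[a] >= indeg[b]:
                    if a not in reachable_from(adj, b, n):
                        return False
    return True
```

`check_all_tournaments(4)` exhausts all 64 tournaments on four vertices; notably, no node-removal preprocessing is needed for this particular reachability claim.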
I didn't require the assumption to prove that we can go from $$$B$$$ to $$$A$$$. I required it to prove that they form a bidirectional pair. I proved that we can go from $$$B$$$ to $$$A$$$ if $$$k_{A} \geq k_{B}$$$ and then using the assumption I proved bidirectional thing.
Wonderfull editorial with clear explanations
Why can't I hack after system testing?
Because it is not educational nor div 3 round
sometimes there was a HACK 'button' after DIV2 rounds
please. dont downvote me:(
After educational rounds
You can lock your solution and go to your room; then you can see the solutions of people who solved the problem you locked, and when you click on someone's solution, you see a Hack button!
Don't worry too much about contribution, focus on your rating.
Nice editorial, it helped me so much in solving the remaining problems which I was not able to do... Thanks.
In problem E, can anyone please tell why did I receive wrong answer on test 2, I just implemented the same thing using multiset, LINK.
Same has happened with me as well. 111408654 I had used maps
you're not querying in the correct order necessarily. A->B does not imply B->A.
Doesn't matter because in test case 2 the answer is anyway "! 0 0"
I framed D in a slightly different way which might be more straightforward to you if you're so inclined.
Essentially, for each timestep, we want to solve a version of the coin-change DP problem where we have a finite number of coins all of the same value. We want to do this in O(m). If there were an unlimited number of the coin, this is just the coin-change DP problem, and you just iterate left-to-right, storing DP[i] as 1 if you can get there, 0 if you can't.
Since we only have a finite number of coins, we still iterate left-to-right, but instead of storing 1 or 0, we store in DP[i] the minimum number of coins used to reach that value.
To recap, iterate i from 0 to m. if DP[i] != -1 and DP[i] < y, then DP[f(i)] = DP[i] + 1. Where f(i) is applying the function (either addition or multiplication) once.
111392313
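The single-timestep DP described above might look like this for the addition operation (a hedged sketch; the function name and the set-based interface are mine, with step playing the role of the coin value and limit the per-timestep usage cap y):

```python
def step_reachable(reachable, step, limit, m):
    """One timestep of bounded coin change over values 0..m.

    dp[i] = minimum number of coins of value `step` used to reach i
    (or -1 if unreachable); one left-to-right pass is enough because
    any optimal chain extends an already-finalized smaller value.
    """
    dp = [-1] * (m + 1)
    for i in reachable:
        dp[i] = 0
    for i in range(m + 1):
        if dp[i] != -1 and dp[i] < limit and i + step <= m:
            nxt = dp[i] + 1
            if dp[i + step] == -1 or nxt < dp[i + step]:
                dp[i + step] = nxt
    return {i for i in range(m + 1) if dp[i] != -1}
```

For the multiplication operation one would apply f(i) = i * x instead of i + step; the same minimum-coins bookkeeping applies since f(i) > i.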
Hi,
Can you please explain this line,
DP[targ] = max(DP[j]-1, DP[targ]);
Oh yeah, sorry, that must be confusing.
In my writeup I said to store in DP[i] the minimum number of coins we need to use to reach that value.
In my code I actually store in DP[i] the maximum number of coins NOT used in order to reach that value. So if we call the value in my writeup k, then the value in my code is just y - k, and all the reachable DP[i] start at y instead of 0. The reason I did it that way is that taking the max automatically overwrites the existing DP[targ] when DP[targ] == -1, so I don't have to add an explicit if DP[targ] == -1 check.
Finally I reached pupil after 16 contests (including this) :)
Jesus dude! You climbed out of an abyss! Great
C can be done with $$$O(n)$$$ memory too!
Love the comments from grey guys, giving improvements on C problems! Climb out of your hole first
Show some respect.
He is only helping, and you are encouraging him to think that he can attempt above-level questions only when he reaches that level.
Really quick and well explained editorial
can anyone tell whats wrong in my logic for part C
solve(n,k)=1 when n==0 or k==1
and
solve(n,k)=1+solve(0,k-1)+solve(1,k-1)+solve(2,k-1)............solve(n-1,k-1) when n>0&&k>1
my logic for solve(n,k) is as follows:
if present decay age is k and it passes through n planes then it will lead to new particles as follows
particle generated at plane 1 will now pass through no plane leading to increase in solution by solve(0,k-1)
particle generated at plane 2 will now pass through one plane leading to increase in solution by solve(1,k-1)
similarly particle generated at plane i will increase solution by solve(i-1,k-1).
so my formula comes out to be:
solve(n,k)=1+solve(0,k-1)+solve(1,k-1)+solve(2,k-1)............solve(n-1,k-1)
But my logic was giving wrong answer.
Can anyone tell whats wrong with the logic???
https://codeforces.me/contest/1498/submission/111375468 Have a look on this one
Problems similar to F have appeared before. Staircase nim itself is a common concept, and Move the Coins is a problem with K=1 with updates. Here's a more in-depth tutorial about staircase nim and this problem. This idea generalizes really easily as nodes with depths which have different residues mod k are independent (this isn't really mentioned in the editorial even though it's a pretty important fact).
Solution for problem C with only 2 dimensions:
Consider some DP where the state is represented by:
And our answer for the state is the total number of particles that are produced during this state.
So, formally: $$$dp[hits][age] = $$$ number of total particles produced when we shoot a particle with age $$$age$$$ towards $$$hits$$$ planes.
So how to calculate the current state from previously calculated states? It seems tricky at the first glance but it's actually kind of simple.
When we shoot a particle towards $$$hits$$$ planes what happens?
A new particle with age $$$age - 1$$$ gets reflected by the first plane in front of the current particle and it will now face $$$n - hits$$$ planes, which is the state $$$[n - hits][age - 1]$$$.
The same particle goes through the first plane in front of it then continue to face $$$hits - 1$$$ planes (obviously it's age won't change), which is the state $$$[hits - 1][age]$$$
So now we know the transitions from our current state, the recurrence become:
My full submission.
UPD: Of course this can be optimized to get rid of the second dimension since we only need the information from states with $$$age$$$ and $$$age - 1$$$, this improves our space complexity to $$$O(n)$$$, code.
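Under the same idea, here is a hedged Python sketch of the age-by-age rollout with only the previous row kept (the function name is mine; the dp is indexed by the number of planes still in front of the particle):

```python
MOD = 10**9 + 7

def planar_reflections(n, k):
    # prev[hits]: particles eventually produced by one particle of the
    # previous age heading towards `hits` planes; age 1 never copies
    prev = [1] * (n + 1)
    for _age in range(2, k + 1):
        cur = [1] * (n + 1)  # hits == 0: the particle just flies off
        for hits in range(1, n + 1):
            # the particle passes the plane (hits-1 planes left), and the
            # reflected age-1 copy faces the n-hits planes behind it
            cur[hits] = (cur[hits - 1] + prev[n - hits]) % MOD
        prev = cur
    return prev[n]
```

This reproduces the statement's samples, e.g. planar_reflections(2, 3) = 4 and planar_reflections(1, 3) = 2.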
Fixed! :D
Hey can you please explain the order of evaluation you used. I was looping through 'hits' first then 'age' like this:
and got wrong answer and changed it and got correct answer. I am a bit confused.
In this order sometimes you will use the value of the state $$$[n - hits][age - 1]$$$ while it has not been calculated yet.
How can this happen?
When $$$hits$$$ is small enough: $$$n - hits \gt hits$$$, and since you never reached any $$$dp[i][j]$$$ where $$$i \gt hits$$$ and any $$$j$$$ from $$$2$$$ to $$$k$$$ your code will use $$$dp[n - hits][age - 1]$$$ before it even gets calculated.
Hey, you have explained below the reason for not changing the loop order. I got that part, but what if we keep the loop order as stated above in your code?
Notice that the state $$$[n - hits][age - 1]$$$ is calculated before state $$$[hits][age]$$$, because we had already calculated all the states $$$[i][age - 1]$$$ for all $$$i$$$ from $$$1$$$ to $$$n$$$.
Great approach. The key here probably was to understand that the result depends only on the number of planes in front of the particle, not on its direction.
Thanks for sharing
Why Age is K only , I mean a particle can keep oscillating inside planes for longer period.
What is the complexity of this code? It should get TLE.
#include <bits/stdc++.h>
using namespace std;
long long int gcd_sum(long long int num) {
    // returns gcd-sum
    long long int tmp = num, digitsum = 0;
    ...
}
int main(void) {
    int t;
    cin >> t;
    ...
}
It's $$$O(length(n))$$$. You can search for the Euclidean Algorithm for the time complexity of __gcd.
For finding the sum of digits of n: $$$O(length(n))$$$.
For finding the gcd: $$$O(\log length(n))$$$.
The total complexity: $$$O(length(n) + \log length(n))$$$, and then it's $$$O(length(n))$$$.
Here $$$length(n)$$$ is the number of digits in $$$n$$$.
But what about $$$t = 10^4$$$? It did not appear in the complexity.
But $$$O(T length(n))$$$ is still fast enough.
E had 100 ACs in just the last 10 mins damn.
Telegram boy!
Every editorial should be like this. Awesome job.
B was good this time; it required more thinking than just straight implementation.
deleted
1498C - Planar Reflections can also be solved without using DP. Here is my solution 111412008 where i just traverse the given setup in ZigZag fashion.
It is arguably still a DP solution, but this is a nice way to condense it to just O(N) memory.
That's true, but
You don't need a conventional 2/3 state Dynamic Programming approach to solve this
is the point I was trying to make, as the editorial clearly used that concept and it's not easy to understand for a person who doesn't know DP yet. That was the first intuition I got after drawing a simulation.
Solved problem 1498C - Planar Reflections using 2 states only. You don't need the 'direction' state or dimension if you observe the following.
Hitting the $$$i^{th}$$$ wall (/plane) in the opposite direction at any moment is equivalent to hitting $$$n-i+1^{th}$$$ wall in the forward direction at the same moment.
Thus the following short recursive DP function works perfectly, calling it as call(n, k).
My submission 111384789
Also, looks like the spoilers get broken for comments (looks fine on the post). It would be nice if it gets fixed.
Best editorial guys.. Keep up the good work
On problem E I proved my solution on a different approach.
Lemma: Let A and B be nodes such that $$$k[a] <= k[b]$$$. Then, $$$B$$$ is reachable from $$$A$$$.
Proof:
Case 1: (If there is a directed edge from $$$A$$$ to $$$B$$$.)
There is nothing more to be done.
Case 2: (If there is not a directed edge from $$$A$$$ to $$$B$$$.)
Let $$$X_i$$$ be the set of nodes such that there exists a directed edge from every element of $$$X_i$$$ to $$$I$$$. Also, let $$$Y_i$$$ be the set of nodes such that there exists a directed edge from $$$I$$$ to every element of $$$Y_i$$$. Let's prove that there is at least one node that belongs to $$$Y_a$$$ and $$$X_b$$$. We have:
$$$|Y_a| + |X_b| = (n \; - \; 1 - \; k[a]) \; + \; k[b] = \; n \; - \; 1 + \; (k[b] \; - \; k[a]) \;>\; n - 2$$$
Using the Dirichlet principle: as $$$B$$$ does not belong to $$$Y_a$$$ and $$$A$$$ does not belong to $$$X_b$$$, there are only $$$n-2$$$ candidate nodes, while the sum of the sizes of $$$Y_a$$$ and $$$X_b$$$ is bigger than $$$n-2$$$. So they have an element in common; let $$$X$$$ be this element. There is an edge from $$$A$$$ to $$$X$$$ and from $$$X$$$ to $$$B$$$, so we have a path from $$$A$$$ to $$$B$$$, finishing our proof.
I used this and for every pair of nodes, verified if there is a path between $$$B$$$ and $$$A$$$ (assuming that $$$k[a] \; <= \; k[b]$$$).
Nice proof! There's two things I would like to ask about:
Why is the sum $$$|Y_a| + |X_b| \geq n - 2$$$ and not $$$\geq n - 1$$$? (given that $$$k[b] - k[a]$$$ could be at least $$$0$$$).
On this line: "we have that there are n — 2 nodes and the sum of elements of $$$Y_a$$$ and $$$X_b$$$ is bigger than n — 2". Why can we say that it is bigger than $$$n - 2$$$? supposing the inequality you mentioned in 1. is $$$\geq n - 2$$$, would the statement be "bigger or equal to $$$n - 2$$$"? (or "bigger or equal to $$$n - 1$$$" if that's the correct right-hand side of the inequality). In such case, how can show that there'll be at least one element in common?
Thanks!
1: Actually the correct result was supposed to be:
$$$|Y_a| \; + \; |X_b| \; = \; n \; - \; 1 \; + \; (k[b] \; - \; k[a]) \; >= \; n \; - \; 1 \; > \; n \; - \; 2$$$
Because the lemma stated that $$$k[a] <= k[b]$$$. So, the sum of $$$Y_a$$$ and $$$X_b$$$ sizes is strictly bigger than $$$n - 2$$$. Sorry for that, my mistake.
2: I will try to explain it a bit better. Because of the assumption of the case (If there is not a directed edge from $$$A$$$ to $$$B$$$), we have that $$$B$$$ does not belong to $$$Y_a$$$ and $$$A$$$ also does not belong to $$$Y_a$$$. Then, there are at most $$$n - 2$$$ different elements in $$$Y_a$$$. Similarly, there are at most $$$n - 2$$$ different elements in $$$X_b$$$.
Suppose they have no elements in common. Then, we must have at least $$$|Y_a| \; + \; |X_b|$$$ different nodes other than $$$A$$$ and $$$B$$$. But, the graph has at most $$$n - 2$$$ nodes other than $$$A$$$ and $$$B$$$. Then, we get, from the above equation:
$$$n - 2 < |Y_a| + |X_b| <= n - 2$$$
By contradiction, they must have at least a common element.
I hope that helps!
That helps, thanks!
Awesome editorial and problem set. Really appreciate the quick and neat editorials and videos. It shows how much effort you people put into this round. Congrats and keep up the good work.
Love this great problem set for Div2 and the detailed editorial!
we need more editorials like this on the platform
Can anyone please tell me why this solution 111375843 for problem B is wrong. I solved using 2 pointers. Thanks in advance :)
Very nice editorial!
My solution for C was correct, but my code crashed because I was exceeding stack limit on my pc. How can I increase stack size on windows? I'd seen a few blogs, nothing worked so far.
EDIT: It worked, if anyone needs help feel free to DM me.
How to solve the problem C in Python using recursion?
In problem C I made ll dp[1001][1001][2], then I wanted to write ll &ret = dp[i][j][d]; but accidentally I wrote ll &ret = dp[i][j][k];. It got Accepted, but $$$d \le 1$$$ while $$$k \le 1000$$$, so when I call ll &ret = dp[i][j][k]; it's like ll &ret = dp[1000][1000][1000]. When I put a vector instead of the array, I got RTE. Can anyone explain why ll &ret = dp[1000][1000][1000] didn't get RTE or MLE, because the index is large ($$$10^9$$$) and the array is ll dp[1001][1001][2]?
Great editorial! I love how there are hints for the solutions so that you can actually figure it out by yourself. There should be more editorials like this.
Personally, I think C Problem can be solved in linear space in a very concise and "clean" way:
Forgive me if that picture is simply too ugly
:(
We give special treatment to the $$$n=1$$$ or $$$k=1$$$ cases. In the other cases, we start with an array of length $$$n$$$ initialized as 1,0,0,...,0 and calculate the partial sums of that array each step; after every calculation we reverse the array and do the exact same thing, terminating after $$$k$$$ rounds. So the time complexity of this solution is $$$\Theta(nk)$$$, as you can see in my contest submissions :D
Actually, we can prove that there always exists a recursion in shape of $$$a_i=\sum_{\Delta=1}^{n}c_{\Delta}a_{i-\Delta}$$$, in the angle of view which $$$n$$$ is designated and $$$k$$$ varies as a sequence, which can be easily calculated in $$$\Theta(n^3\log k)$$$ time and potentially be calculated in $$$\Theta(n\log n\log k)$$$ time using polynomials and other (highly non-trivial) tricks.
For those who are Chinese, I think there's a website and several problems about this.
Well, I myself am not very good at maths, especially polynomials; additionally, I'm in the humanities track now, so I don't really understand this......
QwQ
ALSO, DON'T EVER TRY TO USE double TO CALCULATE INTEGER AND FRACTION CEILINGS WHEN long long IS SUFFICIENT.
C can also be solved by finding a pattern: we can find how many copies will be produced by all the particles going in the same direction at each step, and their sum is the answer. Sorry, my English is poor.
Code:111393762
:D
We used the same idea to solve C, and your description is easy to understand. :D
omg i think this is one of the best editorials on this platform!
thx
:D
For problem D, we only need one res array (res[i] == -1 means not visited); as in a 0-1 knapsack problem, instead of iterating from 0 to m we iterate from m down to 0, therefore there is no need to keep two visited arrays.
Check my code here: 111428128
that's a pretty neat idea!
just have a little problem: won't this get TLE using 3 iteration loops?
QAQ
See the editorial: in the inner two loops, each element of res will be visited at most twice, so the total complexity is $$$O(2MN)$$$.
For problem B I wrote the same multiset approach, but it failed on pretest 2. Exactly the same approach with a multiset:
multiset<int> st;
for (int i = 1; i <= n; i++) {
    cin >> a[i];
    st.insert(a[i]);
    // mp[a[i]]++;
}
Don't access after deleting, it's undefined behaviour because that iterator is pointing to value which is not present.
Though the reason stated by you doesn't apply. It was failing because of some constant global variable issue. Accessing after (or before) deleting doesn't create an issue as long as the referenced pointer is in memory.
*f stays in memory till the end brace, so we can access it anywhere until then.
You are right it was happening because array size was small.
https://www.cplusplus.com/reference/set/set/erase/
PS: Reason given by Dush1729 is right
I really liked the contest specially problem E. I still haven’t read to tutorial but in my solution I didn’t use top-sort. First I proved for any two vertices in a tournament like a and b if the output degree of a is at least as big as the output degree of b, b is reachable from a. (The proof would be kinda long if I write but it isn’t so hard using induction and u need to know that each tournament graph has a king but it might have an easier solution. Still if u couldn’t and wanted the answer just comment it and I will explain) After this it’s easy. For every two vertices we will push back the difference between k of those in a vector and then sort it from big to small and we check the first which is possible. Here is my code if it will help( I used set though) 111401603
Tutorial is just awesome, need more div2 problems and tutorials like this.
In problem D , Making x long double and using x = x' /(1e5) with inbuilt ceil function is giving me Wrong Answer
However ,
Using x as long long and with the "incline ceil function" given in the editorial is getting accepted .
Can someone please help me out and explain why this is happening ?
There might be precision issues with a fractional value of the multiplier. The integral ceil function in the editorial bypasses any such issues.
thanks
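The integer-only ceiling being discussed can be sketched as follows (a minimal illustration; the function name is mine):

```python
def ceil_div(a, b):
    # exact ceiling of a / b for positive integers, with no floating
    # point involved and hence no precision loss near 2^63
    return (a + b - 1) // b
```

For example, ceil_div(10**18 + 1, 10**5) is 10**13 + 1, while a double cannot even represent 10**18 + 1 exactly, so a floating-point ceil can come out one too small.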
Nice, learnt an important one, but I feel they should have kept this in the sample tests instead of the system tests.
Actually, during the contest I did not read question D properly (entirely my fault). I thought that for the type 2 operation x could be between 1 and 10^5*m instead of between 10^5 and 10^5*m (x / 1e5 >= 1). I only reread it at the end of the contest (7 minutes before the end). Suppose x can be a fractional value as well for the type 2 operation, i.e. 1 <= x <= 10^5 * m. Is there a solution?
Consider the case where the block sizes in problem B are not powers of 2: can anyone think of a way to implement the DP solution then?
Peace be upon you ! I really liked this contest and it's Editorial. Good job ! I'm waiting for more contests from you :)
Can someone please explain the scenario in
when there is no solution? We have proven that the topological sort of the SCCs is unique, and that the indegrees of vertices in those SCCs increase as we go from left to right in the topological-sort enumeration, but is it always possible to topologically sort this graph?
There is no solution when the input graph is acyclic.
It is always possible to do a topological sort of the condensation graph because it is acyclic.
Can you please share the reason the condensed graph is acyclic ?
If some strongly connected components are in a cycle, they wouldn't be called strongly connected components because the entire cycle would make a (subset of a) single strongly connected component.
Nice one. So can we say that for this problem, there is always an answer ?
No, if the input graph is acyclic, there will be no bi-reachable pair of nodes.
Great problemsetting and Excellent tutorial! Thanks to this round :)
From the editorial, can anyone explain why, at the i-th timestamp, every node unvisited up to the (i-1)-th timestamp is visited at most once? Let's assume the operation is t=1 and no is a node unvisited up to the (i-1)-th timestamp. Then no can get visited from no-x, no-2*x, no-3*x, ... (assuming all of them have been visited up to the (i-1)-th step). I am saying this according to the solution given in the editorial. Can anyone please tell me where I am wrong?
According to the solution in the editorial, no (an unvisited node) will only be visited from its closest already-visited node.
The iteration starting from no-3x will stop at no-2x (since it's an already visited node). The iteration starting from no-2x will stop at no-x (since it's an already visited node). So, only the iteration starting from no-x will actually reach no.
Thanks. Totally understood.
I was stuck at problem D....
Because I finished watching Steins;Gate 0 just an hour before the contest.
And that music.... popped up in my head... again... ;__;.
Glad you liked the theme :)
Help needed in Problem C !
When a particle of decay age $$$k$$$ hits the $$$n^{th}$$$ plane, it produces a particle with decay age $$$k-1$$$ directed towards the $$${(n-1)}^{th}$$$ plane (if it exists), which on hitting the $$${(n-1)}^{th}$$$ plane produces a particle of decay age $$$k-2$$$ (if $$$k>2$$$) directed towards the $$$n^{th}$$$ plane, this goes on until the decay age of the produced particles is greater than $$$1$$$.
How the solution in the editorial is able to count such particles with decay age $$$k-2$$$ & the consequently produced particles when a particle of decay age $$$k$$$ hits the $$$n^{th}$$$ plane?
So basically in the dp transition step we are summing up dp[curr+1][k][dir] and dp[curr-1][k-1][1-dir] here.
These two dp states will take care of the particles you are talking about, because a dp state n,k,dir indicates the total number of particles (at the end of the simulation) when one particle of decay age k hits the n-th plane in direction dir.
P.S: If it is still not clear, I would recommend watching the video editorial for this problem, where we have clearly explained how to think of the dp states.
Thanks for the wonderful solution!
Realllllly fast!
I got WA on D because I used ceil() instead of dividing integers :(
I think in D it is written to stop when we first reach visited[ind]=1, but I think that is not required at all. We can do only one operation and stop. Maintain a num array holding the minimum operations required to reach this point at this timestamp. Now suppose we want to do one operation: only if num[ind]+1 <= y can we do it, and then we update the next num and mark the next cell as visited.
Yes, you are right! We found the given implementation cleaner, so we mentioned that one in the editorial.
Additionally, in the editorial, I wanted to highlight the subtle difference between the TLE and the accepted solution, that is why I did not cover this approach.
Can anybody explain problem F a bit more? I understood everything related to staircase NIM but now we have the problem of finding for each node the xorsum of values at nodes that have $$$d' = d/k$$$ odd. How can we do that an maintain the answers for all roots?
Did you find any resources explaining it? I have the same question as you. :<D
Hey, I have figured out this problem.
It is a type of re-rooting DP.
Let's define f[u][i] as the state where the tree is rooted at 1 and we consider only the nodes in the subtree of u whose distance to u, taken modulo 2K, equals i; the value of f[u][i] is the xor of the Grundy values of those nodes.
dp[u][i] is the state where the tree is rooted at u and we consider the nodes whose distance to u modulo 2K equals i; the value of dp[u][i] is again that xor.
Firstly, we can calculate f[u][i] with the following transfer equation in O(n*k) time, where u is the father of v:
f[u][i] ^= f[v][i-1];
Obviously, dp[1] = f[1].
Secondly, if we just consider the relationship between u (the father of v) and v, we find:
dp[v][i] = f[v][i] ^ dp[u][i-1] ^ f[v][i-2];
We can also calculate these by dfs. The time complexity is O(n*k).
Some solutions which passed the system tests are now getting WA on E on later-added test cases. So isn't this a matter of luck for people who submitted wrong code and got accepted? It has happened to me twice in recent contests :( (I didn't code my logic, thinking it would get WA, but another guy submitted it and passed the system tests; it fails on the later-added test cases but he still keeps the points.)
Seems like there is a little typo in the C++ solution of D(in the ceiling function) which is causing compilation error.
Sorry about it! This seems to be a common issue across multiple editorials, for example, the editorial of problem C here.
One of the best editorials I have seen off late. Great work!
What are odd steps and even steps in problem F ?
UPD : Understood after reading this blog
In the Planar Reflections problem, can anyone tell me why ans was initialized to 2 in the recursive C++ approach?
The initialization of 2 corresponds to:
Current particle which will be passing through the current plane.
New particle which is created by strike of current particle at current plane. This new particle will be going in opposite direction.
Thanks man, that made it clear.
I have a doubt in question D. I feel bfs can also solve this question. I want your suggestion on whether bfs works and is it efficient.
Yes you can solve this question using BFS also. I just updated editorial with BFS code, do have a look.
Hi, Can Someone help me understand how to solve Problem E. I don't understand what is SCC and compressed SCC. Can some one guide me what i need to learn in prior to understand the solution?
You need to study graphs properly and then study SCC's from various online resources.
Thank you for guiding ninja_28
Problem C can be much simpler: it can be solved using a 2-state dp. dp(i, j) is the total number of particles generated if a particle with current age j still needs to pass i sheets. As the opposite direction is symmetric, the simple recurrence comes out to be dp(i, j) = dp(i - 1, j) + dp(n - i, j - 1) (n - i because if there are i sheets to be crossed rightwards, there will be n - i sheets for the newly born particle to cross in the opposite direction).
I think we don't need binary search in the "alternate implementation with easier proof" of B (Box Fitting). It is sufficient to find the least height required for every suffix of $$$C_i$$$, and the required height will be the maximum of all these least heights. Here is the solution.
Binary search is used to reduce time complexity to O(nlogn).
At first, with binary search, the time complexity was $$$O(n + \log(W_{max})\log(n))$$$; if we don't do binary search, it becomes $$$O(n + \log(W_{max}))$$$. Please observe carefully: I am talking about the editorial section containing the alternate implementation with easier proof for problem B (Box Fitting).
Unfortunately the Python solution to problem E has been uphacked: https://codeforces.me/contest/1498/submission/111571072
It TLEs with 125000 write/flush?
It has not been uphacked. Test 29 was already in systests. We need to submit the code with Python 3. AC submission. Though, I am not sure why PyPy3 and Python 3 give different I/O rates.
Here's a sort of "meta-solution" I came up with for problem E. It's not really rigorous, but it is simple :)
First, remove all vertices that cannot be in the answer by repeatedly removing every vertex whose indegree is n-1 (updating n after each removal).
Now, consider a pair of vertices A and B whose indegree difference is ID. If you decide to query a pair of vertices C and D whose indegree difference is less than ID, there is a risk that A and B are the answer: if there exists at least one graph that matches the input, and in this graph A and B are mutually reachable, you cannot ignore the possibility that this graph is in fact the one that was given. So in a sense, we are forced to query the pair of vertices whose indegree difference is ID before we can query any pair whose indegree difference is less than ID. And if we are forced to query it and the answer is "Yes", we don't have any option other than to output this pair as the answer, since we don't have any other candidates.
Can someone please explain why I am getting the wrong answer in D if I use this operation function instead of theirs?
Any kind of help is highly appreciated.
There might be precision issues with a fractional value of the multiplier. The integral ceil function in the editorial does not go through floating point at all, and is thus guaranteed to always give the correct ceiling.
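To make the precision point concrete, here is a small Python illustration (the values are made up purely to expose the pitfall; the same issue exists in C++ when the quotient is computed as a `double`):

```python
import math

# Hypothetical values chosen to expose the pitfall: the exact quotient is
# 10**9 + 1e-9, so the true ceiling is 10**9 + 1.
a = 10**18 + 1
b = 10**9

# a / b is evaluated in floating point; 10**18 + 1 is not exactly
# representable as a double, so the quotient rounds to exactly 1e9
# and the ceiling comes out one too small.
float_ceil = math.ceil(a / b)

# Pure integer arithmetic never touches floats and is always exact.
int_ceil = -(-a // b)
# An equivalent integral form for positive b is (a + b - 1) // b.
```

Here `float_ceil` is 10**9 (wrong) while `int_ceil` is 10**9 + 1 (correct).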
I have a problem in B here. I used a multiset and did almost the same thing, but still got TLE (the only difference in my approach is that I used lower_bound instead of upper_bound). Can anyone tell me why this is happening? I am posting the link to my code here: https://codeforces.me/contest/1498/submission/112199870 Thanks in advance!
Can anyone help me figure out why it's giving TLE? Problem B, I used a multiset and lower_bound.
Why downvote my comment? I really faced this issue, but it's sorted now :)
The author's solution in Python for task D does not meet the time limit. :(
It does. All the solutions we have posted in the editorial are verified on Polygon. You have to submit this code in PyPy3.
Best Editorial I have ever seen! Keep it up!
I applied multi-source BFS for problem D, but it's not working :( Wrong answer on test case 12. Would appreciate it if anyone could point out where it is going wrong. 113828973
Why am I getting TLE?
The editorial is excellently framed
The solution given in the editorial gives a runtime error on my local machine for the test case (500 250). A possible reason for this could be the stack size limit (I am not sure about this). I am using Windows with the GCC compiler. Does anyone face the same issue, or have any advice on what I should do?
Yes!! I am also looking for a solution to this problem!! Please tell me if you can fix it!
If you're worried about the correctness of the code depending on the runtime environment, my suggestion is to run the code in Codeforces customtest (https://codeforces.me/problemset/customtest), where the editorial code indeed works fine.
Although I do not use Windows, I can guess that stack size should not be an issue because the solution is iterative (not recursive).
In Div-2 problem C, my code gives no output in Code::Blocks, but it got accepted. Can anyone tell me how I can fix this? My code link
Picture link, if the above is not visible.
Hi, the code that you have shown is not the same as the code in the editorial. Editorial Python code is written iteratively, but your linked code is recursive.
It's the best editorial I have ever seen.
Why am I getting wrong output? I have used the exact same logic as mentioned in the editorial.
Amazing problemset and a set of well-structured editorials, just awesome work, guys!