Idea: EJIC_B_KEDAX, development: EJIC_B_KEDAX.
Consider the remainder of $$$n$$$ modulo $$$3$$$ before the first move. If it is equal to $$$1$$$ or $$$2$$$, then Vanya can make $$$n$$$ divisible by $$$3$$$ with his first move, i.e. he wins. Now let the remainder be $$$0$$$: then Vanya must change the number, after which it is no longer divisible by $$$3$$$. Vova can then undo Vanya's move (apply the opposite operation) and make the number divisible by $$$3$$$ again. This can be repeated for all $$$10$$$ of Vanya's moves, so Vova wins.
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    if (n % 3) {
        cout << "First\n";
    } else {
        cout << "Second\n";
    }
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1899B — 250 Thousand Tons of TNT
Idea: zwezdinv, development: zwezdinv.
Solution #1:
Since $$$k$$$ is a divisor of $$$n$$$, there are $$$O(\sqrt[3]{n})$$$ such $$$k$$$. We can enumerate all such $$$k$$$, compute the required value for each of them in $$$O(n)$$$, and take the maximum. Total complexity — $$$O(n \cdot \sqrt[3]{n})$$$.
Solution #2:
Without using the fact that $$$k$$$ is a divisor of $$$n$$$, we can simply loop over $$$k$$$ and then calculate the values using prefix sums, and at the end check that there are exactly $$$k$$$ elements in each segment. Such a solution works in $$$O(\frac{n}{1} + \frac{n}{2} + \frac{n}{3} + \cdots + \frac{n}{n}) = O(n \log n)$$$.
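As an illustration of Solution #2, here is a minimal prefix-sum sketch (written for this editorial discussion, not the reference code; the implementation below enumerates the divisors and sums each block directly, as in Solution #1):

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<ll> pre(n + 1, 0);              // pre[i] = a[0] + ... + a[i-1]
        for (int i = 0; i < n; ++i) {
            ll x;
            cin >> x;
            pre[i + 1] = pre[i] + x;
        }
        ll ans = 0;
        for (int k = 1; k <= n; ++k) {
            if (n % k != 0) continue;          // every truck must get exactly k boxes
            ll mx = LLONG_MIN, mn = LLONG_MAX;
            for (int i = 0; i < n; i += k) {   // O(n / k) blocks for this k
                ll sum = pre[i + k] - pre[i];
                mx = max(mx, sum);
                mn = min(mn, sum);
            }
            ans = max(ans, mx - mn);
        }
        cout << ans << '\n';
    }
}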
#include<bits/stdc++.h>
using namespace std;
using ll = long long;
#define all(x) x.begin(), x.end()

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];
    ll ans = -1;
    for (int d = 1; d <= n; ++d) {
        if (n % d == 0) {
            ll mx = -1e18, mn = 1e18;
            for (int i = 0; i < n; i += d) {
                ll sm = 0;
                for (int j = i; j < i + d; ++j) {
                    sm += a[j];
                }
                mx = max(mx, sm);
                mn = min(mn, sm);
            }
            ans = max(ans, mx - mn);
        }
    }
    cout << ans << '\n';
}

int32_t main() {
    int t;
    cin >> t;
    while (t--) solve();
}
Idea: meowcneil, development: meowcneil.
There are "bad" positions in the array, i.e., those on which two numbers of the same parity are next to each other. Note that all matching segments cannot contain such positions, in other words, we need to solve the problem of finding a sub segment with maximal sum on some number of non-intersecting sub segments of the array, the boundaries of which are between two neighboring elements of the same parity.
The problem of finding a sub segment with maximal sum can be solved using the classical algorithm with keeping minimal prefix sum on the prefix. The problem can be solved in a single pass over the array by simply dropping the keeped values when we are in a bad position.
Total complexity — $$$O(n)$$$.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
    }
    int ans = a[0];
    int mn = min(0, a[0]), sum = a[0];
    for (int i = 1; i < n; ++i) {
        if (abs(a[i] % 2) == abs(a[i - 1] % 2)) {
            mn = 0;
            sum = 0;
        }
        sum += a[i];
        ans = max(ans, sum - mn);
        mn = min(mn, sum);
    }
    cout << ans << endl;
}

int main() {
    int tc = 1;
    cin >> tc;
    for (int t = 1; t <= tc; t++) {
        solve();
    }
}
1899D — Yarik and Musical Notes
Idea: zwezdinv, development: molney.
The problem requires counting the number of pairs of indices $$$(i, j)$$$ ($$$i < j$$$) such that $$$(2^a)^{(2^b)} = (2^b)^{(2^a)}$$$, where $$$a = b_i, b = b_j$$$. Obviously, when $$$a = b$$$ this equality is satisfied. Let $$$a \neq b$$$, then rewrite the equality: $$$(2^a)^{(2^b)} = (2^b)^{(2^a)} \Leftrightarrow 2^{(a \cdot 2^b)} = 2^{(b \cdot 2^a)} \Leftrightarrow a \cdot 2^b = b \cdot 2^a \Leftrightarrow \frac{a}{b} = \frac{2^a}{2^b}$$$. Obviously, the ratio of $$$a$$$ and $$$b$$$ must be a power of two, otherwise the equality cannot be satisfied, since the right-hand side is a ratio of powers of two. Without loss of generality, suppose that $$$b = a \cdot 2^k$$$ ($$$k > 0$$$), then the equation takes the form $$$\frac{a}{a \cdot 2^k} = \frac{2^a}{2^{a \cdot 2^k}} \Leftrightarrow \frac{1}{2^k} = \frac{1}{2^{(2^k - 1)a}} \Leftrightarrow 2^k = 2^{(2^k - 1)a}$$$. If $$$k = 1$$$, then $$$a = 1$$$, $$$b = 2$$$. If $$$k > 1$$$, then $$$2^k - 1 > k$$$, and so the equality cannot be satisfied.
Thus, the only cases where the equality is satisfied are $$$b_i = b_j$$$ or $$$b_i = 1, b_j = 2$$$ (and vice versa). The number of such pairs can be counted in $$$O(n \log n)$$$.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int& x : a) cin >> x;
    ll ans = 0;
    map<int, int> cnt;
    for (int i = 0; i < n; i++) {
        ans += cnt[a[i]];
        if (a[i] == 1) ans += cnt[2];
        else if (a[i] == 2) ans += cnt[1];
        cnt[a[i]]++;
    }
    cout << ans << "\n";
}

signed main() {
    int t;
    cin >> t;
    while (t--) solve();
}
Idea: Vladosiya, development: Vladosiya.
Consider the position of the first minimum in the array; let it be $$$k$$$. All elements at positions smaller than $$$k$$$ are strictly greater than the minimum, so we must apply the operation to each of them, because otherwise the array will not be sorted. Suppose we have applied operations to all such elements; they have taken some positions after $$$k$$$ (since they are strictly greater than the minimum), i.e. the minimum, which originally stood at position $$$k$$$, is now at the beginning of the array. If we apply the operation to it, it returns to the same position, since it is less than or equal to all elements of the array, i.e. the array does not change.
Thus, once the minimum is at the beginning of the array, it is useless to apply further operations, and every operation applied before that moves an element from the beginning of the array to some position after the position of the first minimum.
Then, if the part of the array after position $$$k$$$ is not sorted, the answer is $$$-1$$$, because it is impossible to change the order of elements in it. Otherwise, the answer is equal to the number of elements standing before the first minimum, since the operation must be applied to each of them and they will end up in the correct places in the right part. Total complexity — $$$O(n)$$$.
def solve():
    n = int(input())
    a = [int(x) for x in input().split()]
    fm = 0
    for i in range(n):
        if a[i] < a[fm]:
            fm = i
    for i in range(fm + 1, n):
        if a[i] < a[i - 1]:
            print(-1)
            return
    print(fm)


for _ in range(int(input())):
    solve()
Idea: EJIC_B_KEDAX, development: EJIC_B_KEDAX, zwezdinv.
This problem can be solved in several similar ways, one of them is given below. First, it is most convenient to take a bamboo — vertices from $$$1$$$ to $$$n$$$ connected in order. Then, we will maintain the following construction. At each moment of time, vertices $$$1$$$ and $$$2$$$ will be connected by an edge, from vertex $$$2$$$ there will be at most two branches, which are sequentially connected vertices (bamboo). Thus, at any given time there will be at most three leaves in the tree, one of which is vertex $$$1$$$.
We will maintain the vertices of the two branches in two arrays. Now let the current number from the query be $$$d$$$. If the distance from one of the leaves to vertex $$$1$$$ is already $$$d$$$, we don't need to perform the operation. Otherwise, let's perform the operation so that the distance from the leaf of, say, the first branch to vertex $$$1$$$ becomes equal to $$$d$$$. If the current distance is greater than $$$d$$$, then we move the extra vertices to the end of the second branch; otherwise we take the necessary ones from the end of the second branch. Thus, after each operation the distance from vertex $$$1$$$ to one of the leaves equals $$$d$$$.
The transformations can be implemented by physically moving the vertices between the two arrays, then the total complexity is $$$O(nq)$$$.
#include<bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> b1, b2;
        for (int i = 0; i < n; ++i) {
            b1.push_back(i);
        }
        b2.push_back(1);
        for (int i = 1; i < n; ++i) {
            cout << i << ' ' << i + 1 << endl;
        }
        int q;
        cin >> q;
        while (q--) {
            int d;
            cin >> d;
            d++;
            if (b1.size() == d) {
                cout << "-1 -1 -1\n";
            } else if (b1.size() < d) {
                d = d - b1.size();
                vector<int> qq(b2.end() - d, b2.end());
                int u = b2[b2.size() - d];
                int v1 = b2[b2.size() - d - 1];
                int v2 = b1.back();
                cout << u + 1 << ' ' << v1 + 1 << ' ' << v2 + 1 << '\n';
                for (int i = 0; i < d; ++i) b2.pop_back();
                for (auto i : qq) b1.push_back(i);
            } else {
                d = b1.size() - d;
                vector<int> qq(b1.end() - d, b1.end());
                int u = b1[b1.size() - d];
                int v1 = b1[b1.size() - d - 1];
                int v2 = b2.back();
                cout << u + 1 << ' ' << v1 + 1 << ' ' << v2 + 1 << '\n';
                for (int i = 0; i < d; ++i) b1.pop_back();
                for (auto i : qq) b2.push_back(i);
            }
        }
    }
}
Idea: EJIC_B_KEDAX, development: Sokol080808.
Let's start a depth-first search from vertex $$$1$$$ and write down the entry and exit times of each vertex. Then the fact that vertex $$$b$$$ is a descendant of vertex $$$a$$$ is equivalent to $$$\mathrm{tin}[a] \leq \mathrm{tin}[b] \leq \mathrm{tout}[b] \leq \mathrm{tout}[a]$$$, where $$$\mathrm{tin}$$$ and $$$\mathrm{tout}$$$ are the entry and exit times, respectively. Now create an array $$$a$$$, where $$$a_i = \mathrm{tin}[p_i]$$$; then the problem reduces to checking that on the segment from $$$l$$$ to $$$r$$$ of array $$$a$$$ there is at least one number belonging to the segment $$$[\mathrm{tin}[x]; \mathrm{tout}[x]]$$$. This can be done, for example, with a Merge Sort Tree, giving a total complexity of $$$O(n \log n + q \log^2 n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define sz(x) (int)x.size()
#define all(x) x.begin(), x.end()

struct SegmentTree {
    int n;
    vector<vector<int>> tree;

    void build(vector<int> &a, int x, int l, int r) {
        if (l + 1 == r) {
            tree[x] = {a[l]};
            return;
        }
        int m = (l + r) / 2;
        build(a, 2 * x + 1, l, m);
        build(a, 2 * x + 2, m, r);
        merge(all(tree[2 * x + 1]), all(tree[2 * x + 2]), back_inserter(tree[x]));
    }

    SegmentTree(vector<int>& a) : n(a.size()) {
        int SIZE = 1 << (__lg(n) + bool(__builtin_popcount(n) - 1));
        tree.resize(2 * SIZE - 1);
        build(a, 0, 0, n);
    }

    int count(int lq, int rq, int mn, int mx, int x, int l, int r) {
        if (rq <= l || r <= lq) return 0;
        if (lq <= l && r <= rq) return lower_bound(all(tree[x]), mx) - lower_bound(all(tree[x]), mn);
        int m = (l + r) / 2;
        int a = count(lq, rq, mn, mx, 2 * x + 1, l, m);
        int b = count(lq, rq, mn, mx, 2 * x + 2, m, r);
        return a + b;
    }

    int count(int lq, int rq, int mn, int mx) {
        return count(lq, rq, mn, mx, 0, 0, n);
    }
};

vector<vector<int>> g;
vector<int> tin, tout;
int timer;

void dfs(int v, int p) {
    tin[v] = timer++;
    for (auto u : g[v]) {
        if (u != p) {
            dfs(u, v);
        }
    }
    tout[v] = timer;
}

void solve() {
    int n, q;
    cin >> n >> q;
    g.assign(n, vector<int>());
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        cin >> u >> v;
        u--; v--;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    timer = 0;
    tin.resize(n);
    tout.resize(n);
    dfs(0, -1);
    vector<int> p(n);
    for (int i = 0; i < n; i++) cin >> p[i];
    vector<int> a(n);
    for (int i = 0; i < n; i++) a[i] = tin[p[i] - 1];
    SegmentTree ST(a);
    for (int i = 0; i < q; i++) {
        int l, r, x;
        cin >> l >> r >> x;
        l--; x--;
        if (ST.count(l, r, tin[x], tout[x])) {
            cout << "YES\n";
        } else {
            cout << "NO\n";
        }
    }
}

int main() {
    int tests;
    cin >> tests;
    while (tests--) {
        solve();
        if (tests > 0) cout << "\n";
    }
    return 0;
}
Will there be an English editorial?
Added, thank you for noticing
The time of the contest was so fu....g much less than it should have been
None of the problems was that interesting, and you can see that in the comments of the announcement. There was no hard algorithmic problem, and I think in problem C you should have explained what a subarray is. Problem F's statement wasn't clear at all.
Cope with it
E was too easy for an E. In my opinion, D > C > B > E > A in decreasing order of difficulty.
I think maybe it's about the problem statements
After not being able to implement B, I just tried C and didn't even bother to read D. But after this comment I read E and yup... this is the correct order of difficulty you mentioned.
Nope, I don't think so. C was the easiest one, just apply a sliding window.
Pretests of C are horribly bad.
So many contestants output 0 if the whole array is negative.
Example code: https://codeforces.me/contest/1899/submission/233111911
Sample Input for Hack:
1
3
-3 -1 -1
NOTE: I am not writing this comment to bury the authors (this round has really good problems and may be the best Div. 3 I have ever seen), but I find it really interesting that the random case generator doesn't produce an example like this.
I don't understand this part -> "Since $$$k$$$ is a divisor of $$$n$$$, there are $$$O(\sqrt[3]{n})$$$ such $$$k$$$". Is there a proof?
I think it's a typo, he meant to say sqrt not cbrt?
I don't know how to prove it mathematically, but it's true for n <= 10^9. I found it in this post: https://codeforces.me/blog/entry/651?locale=ru
Wow! How did you find this? Thanks.
I just typed "асимптотика делителей числа" ("asymptotics of the divisors of a number" in English) in google :)
If you're good at math, try reading this page, there are enough interesting things: en.wikipedia.org/wiki/Divisor_function
link: to the page
Can someone explain what problem F is asking? Can't comprehend it =(
Initially, you choose any tree of n vertices. Then there are q queries. A query is: you are given a distance x; if the distance between some two leaf nodes equals x, then do nothing. If not, then you have to make a change in the tree (only one operation).
The operation is like this: choose any three vertices u, v1, v2 such that there is an edge between u and v1 and there is no edge between u and v2. Then remove the edge between u and v1 and add an edge between u and v2.
Explain what is wrong with my solution 233204396, problem D. I count the number of distinct numbers; using this information I calculate the number of pairs of identical numbers and of pairs (1, 2), add them up, and get WA on the 11th test.
I'm pretty sure you got an overflow on the line ans += cnt1 * cnt2; (cnt1 and cnt2 should be long long int).
Integer overflow. Change the type of the counters to long long. AC submission link: https://codeforces.me/contest/1899/submission/233205784
Video solution in Chinese:
BiliBili
Is there any video solution in English?
I don't know. Maybe someone posted it on youtube
https://www.youtube.com/watch?v=9AMKfmiOQ5Q
Problem F, in the example, 2nd test case: in the third query there already exists a path of distance 3 between two leaves (5-4-2-3), but the answer wasn't -1 -1 -1. Is that why I'm receiving a WA verdict? 233191318
Uh, actually the wrong answer you're getting is for the case when D = 4 (Q = 4). According to your construction, the tree at Q = 4 is (1-2-3-5 & 2-4); here D is not 4. You can make a change instead of printing -1 -1 -1 to preserve the answer for the upcoming query.
Problem G can be solved offline in O(nlogn + qlogn). Answer the queries in decreasing order of the entry time of x. Add the exit time of each vertex v at the corresponding position in the permutation when we process the first query such that tin[x] <= tin[v]. In this way, we can guarantee that we've only added vertices that have a greater entry time than the current x, so we can query for the minimum on a segment.
How do you find the vertices v with tin[v] >= tin[x] for each query of x in O(logn)?
Sorry for necroposting, but just in case someone's wondering how to implement his idea: https://codeforces.me/contest/1899/submission/266137066
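For anyone who wants the idea spelled out in code, here is a rough, self-contained sketch (an illustration of the offline approach described above, not the linked submission): process queries in decreasing order of tin[x], activate position i by storing tout[p_i] in a min segment tree once tin[p_i] >= tin[x], and answer YES iff the minimum on [l, r] is at most tout[x].

#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> g;
vector<int> tin, tout;
int timer_ = 0;

void dfs(int v, int par) {
    tin[v] = timer_++;
    for (int u : g[v])
        if (u != par) dfs(u, v);
    tout[v] = timer_;                        // exclusive exit time, like the editorial's dfs
}

struct MinTree {
    int n;
    vector<int> t;
    MinTree(int n) : n(n), t(4 * n, INT_MAX) {}
    void update(int x, int l, int r, int pos, int val) {
        if (l + 1 == r) { t[x] = val; return; }
        int m = (l + r) / 2;
        if (pos < m) update(2 * x + 1, l, m, pos, val);
        else update(2 * x + 2, m, r, pos, val);
        t[x] = min(t[2 * x + 1], t[2 * x + 2]);
    }
    int query(int x, int l, int r, int ql, int qr) {
        if (qr <= l || r <= ql) return INT_MAX;
        if (ql <= l && r <= qr) return t[x];
        int m = (l + r) / 2;
        return min(query(2 * x + 1, l, m, ql, qr), query(2 * x + 2, m, r, ql, qr));
    }
};

void solve() {
    int n, q;
    cin >> n >> q;
    g.assign(n, {});
    for (int i = 0; i < n - 1; ++i) {
        int u, v;
        cin >> u >> v;
        --u, --v;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    timer_ = 0;
    tin.assign(n, 0);
    tout.assign(n, 0);
    dfs(0, -1);
    vector<int> p(n);
    for (int &x : p) { cin >> x; --x; }
    vector<array<int, 4>> qs(q);             // {l (0-based), r (exclusive), x, id}
    for (int i = 0; i < q; ++i) {
        int l, r, x;
        cin >> l >> r >> x;
        qs[i] = {l - 1, r, x - 1, i};
    }
    sort(qs.begin(), qs.end(), [&](const array<int, 4>& a, const array<int, 4>& b) {
        return tin[a[2]] > tin[b[2]];        // decreasing entry time of x
    });
    vector<int> order(n);                    // positions sorted by decreasing tin[p[i]]
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(), [&](int i, int j) { return tin[p[i]] > tin[p[j]]; });
    MinTree st(n);
    vector<int> verdict(q);
    int ptr = 0;
    for (auto& [l, r, x, id] : qs) {
        while (ptr < n && tin[p[order[ptr]]] >= tin[x]) {
            st.update(0, 0, n, order[ptr], tout[p[order[ptr]]]);
            ++ptr;
        }
        verdict[id] = st.query(0, 0, n, l, r) <= tout[x];
    }
    for (int i = 0; i < q; ++i) cout << (verdict[i] ? "YES" : "NO") << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
}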
Thanks a lot Deconstructor!
Nice problemset. B was tough for me to understand :(
Same, B was giving me a wrong answer for the last two test cases that were in the examples.
Bad problem statements for B and D! Also, I don't like any of the first 5 problems, although it was fun to participate. Best of luck for me and also for the problem setters. Please try to explain some of the test cases.
Which data structure do I need to know to solve G?
A segment tree works, and I guess it cannot be solved with a Fenwick tree, but I am not sure.
https://codeforces.me/contest/1899/submission/233153696
Persistent segment trees
Your solutions are way simpler to understand, thanks a lot! Curious, what's your real handle?
Merge Sort Tree, Persistent Segment Tree or Wavelet Tree
std::set is enough if you use small-to-large merge.
I believe an array is enough if you use Mo's algorithm on trees (we can use the block technique for turning on and off). I solved it using small-to-large and a segment tree during the contest though.
Can you tell me why my solution 233219342 is not giving MLE when I am storing, for every node, the indices of its subtree in a set?
Check the merge function. Isn't the memory consumed O(n^2)? I am new to this method. Can you explain?
Every element is copied only $$$O(\log n)$$$ times, so every element can appear in at most $$$O(\log n)$$$ sets. Thus, the memory is $$$O(n \log n)$$$.
This is the same argument as for why the time complexity is $$$O(n \log n \cdot (\text{time of a single insertion})) = O(n \log^2 n)$$$.
I was thinking of the same approach, i.e. storing indices of all nodes in the subtree.
I'm getting MLE
https://codeforces.me/contest/1899/submission/233311333
also if you could help me with this comment, I'd be really grateful — https://codeforces.me/blog/entry/122407?#comment-1085776
Do you understand why your code is giving MLE?
Yes, for each node x, I'm storing all node indices in the subtree rooted at x.
so in the worst case, in a linear tree, starting from the root we will be storing, n-1(root), n-2, n-3, ... 1( leaf) indices for each node, so this will lead to O(n^2) space.
but this solution — https://codeforces.me/contest/1899/submission/233219342 also seems to be using O(n^2) space and is passing.
Okay, good. So there are two questions you would like an answer to:
Answers:
If you don't understand something, I can explain it.
I've read the doc you shared; I understand how the space complexity appears to be O(n^2) but is actually O(n log n).
I'll try this trick, will check if it'll give me an AC.
Thank you for your time and the resources shared.
Any point-update-range-sum data structure
.
I used the Euler tour technique along with a minimum segment tree: you can check my submission here.
sqrt decomposition
Thank you so much!
G can be solved with small to large too
Can someone tell me why this code for problem B is not working
Take mini to be LLONG_MAX or 1e18 or something, otherwise it overflows.
Thanks man that worked:)
Can anyone tell me why my code is not working? 278019663
A hash table (e.g., std::unordered_map in C++) can help solve problem D with O(n) complexity, but, of course, O(nlogn) is enough to get OK.
Problem b: 2^(a⋅2^b)=2^(b⋅2^a)⇔a⋅2^b=b⋅2^a how did we get the second part from the first part?
Since the exponential function is strictly increasing, the equation 2^x = 2^y is equivalent to x = y.
okay i get it now, but for the last part, shouldn't we get: 1/(2^k) = 1/(2^(a(1-2^k))) rather than: 1/(2^k) = 1/(2^(a(2^k-1))) ?
No, because
$$$\displaystyle\frac{2^a}{2^{a\cdot2^k}}$$$
$$$\displaystyle=\frac{2^a}{2^{a\cdot2^k-a+a}}$$$
$$$\displaystyle=\frac{2^a}{2^{(a\cdot2^k-a)+a}}$$$
$$$\displaystyle=\frac{2^a}{2^{(a\cdot2^k-a)}\cdot2^a}$$$
$$$\displaystyle=\frac{1}{2^{a\cdot2^k-a}}$$$
$$$\displaystyle=\frac{1}{2^{a(2^k-1)}}$$$
okay got that too, thank you all<3:)
thankyou so much it helped a lot
Let's define x = a * 2 ^ b and y = b * 2 ^ a.
We know that 2 ^ x = 2 ^ y, so x = y <=> a * 2 ^ b = b * 2 ^ a, which is what you asked for.
I hope that you understand!
yes! thanks<3
Apply log2 to both sides and you get:
log(a) + b*log(2) = log(b) + a*log(2)
hence
b - log(b) = a - log(a)
This only occurs for (1, 2) or when a = b.
thanks
take log base 2 of both sides, then you will get the expression
F can be solved with an easy O(n) solution
E and F were both extremely easy if you consider that the given test cases gave away the entire solution process.
G can be solved using a normal segment tree or a Fenwick tree by offline query. First, let's solve this the other way around. You don't have to find a descendant; you have to find an ancestor.
Now here, the sum initially doesn't change when you return because we remove the sum for all the children in step 3 before again reaching node 'U'. So we will remove step 3, and to make sure for each query we received any descendant, we will store the initial sum (number of nodes between L and R), and after we iterate all the children for node 'U', we recheck the count, and if there is any descendant, our sum will increase.
Check Code Here
You can check the code if you don't understand the explanation.
For F, you can just connect the first $$$n-1$$$ vertices as a chain and keep the $$$n$$$th node aside. For each query you can just connect the $$$n$$$th node to vertex $$$d_i$$$, and its distance from node $$$1$$$ will be $$$d_i$$$ (node 1 is a leaf node).
Hey can you explain why the code still works if we remove the condition
Because in the question it mentions that for u, v1, v2 there should be an edge between u and v1 and no edge between u and v2, but in the above case v1 and v2 are the same and there is definitely an edge between u and v2, so shouldn't the code be wrong? Here is my submission:
https://codeforces.me/contest/1899/submission/233250281
Testcases are weak I guess
Even then, an answer where v1 and v2 are the same shouldn't be accepted as correct, because that condition is mentioned in the question.
Solved using the same exact idea.
Noice
For D, put the equation $$$\left(2^{x}\right)^{2^{y}}=\left(2^{y}\right)^{2^{x}}$$$ into a graphing calculator; the graph directly shows x=y and (1,2), (2,1). Then write the code.
that's nice!
Why were the problem statements so vague? Is this the first time they are organizing a contest?
Easy sqrt-decomposition online solution of G: 233217422
For problem D, there is a simpler proof, without much symbolic manipulation.
We know that $$$2^a \cdot b = 2^b \cdot a$$$.
Write that as $$$2^a / a = 2^b / b$$$.
The function $$$2^x / x$$$ is increasing for $$$x > 2$$$.
So, the equality can not hold for any numbers greater than $$$2$$$.
Applied the same thought
Why don't you guys just rephrase the problem statements using chatgpt? The prompts were actually painful to understand.
Then it is not actually you who rephrases the problem, which is itself a problem-solving skill; ChatGPT does it for you. I do not do this because I have work ethics.
Maybe because each and every detail is very important and ChatGPT might distort a detail.
Editorial has such a strange solution for F. You can just make a tree which is a straight line from $$$1$$$ to $$$n-1$$$ and attach vertex $$$n$$$ to vertex $$$d_i$$$ for all queries. Both vertex $$$1$$$ and $$$n$$$ are leaves and distance of $$$1$$$ from $$$n$$$ is $$$d_i$$$. Complexity $$$O(n)$$$.
https://codeforces.me/contest/1899/submission/233223051
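A minimal sketch of this chain construction (illustration only; it assumes, like the editorial's code above, that the input is read as n, then q, then the queries, and that every query satisfies 2 <= d <= n-1):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        // chain 1-2-...-n; the leaves are 1 and n, and only vertex n is ever re-hung
        for (int i = 1; i < n; ++i) cout << i << ' ' << i + 1 << '\n';
        int q;
        cin >> q;
        int cur = n - 1;                  // vertex that n is currently attached to
        while (q--) {
            int d;
            cin >> d;                     // 2 <= d <= n-1, so vertex d lies on the chain
            if (cur == d) {
                cout << "-1 -1 -1\n";     // leaves 1 and n are already at distance d
            } else {
                cout << n << ' ' << cur << ' ' << d << '\n';   // move edge (n, cur) to (n, d)
                cur = d;
            }
        }
    }
}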
The problem B editorial is not clear to me. Is it slower to find all divisors of n and iterate over them than to ignore that fact and just iterate over all possible k? I am sorry, I am missing something in the editorial although I solved it right after the contest. I can't see how $$$O(n \cdot \sqrt[3]{n})$$$ compares to $$$O(n \log n)$$$ asymptotically.
Contest unrated? rank not updated
easiest A-F lol
F: At the beginning, let the tree be a chain, with the edge of (n-1) ->(n) constantly moving. It would be easy to set the distance between 1->x as d. https://codeforces.me/contest/1899/submission/233233780
A simpler solution to F, by just maintaining the last node we connected the nth node to: https://codeforces.me/contest/1899/submission/233238323
Problem B: can anyone explain why my code fails?
solution link: https://codeforces.me/contest/1899/submission/233239115
For problem B, how did y'all find out we have to use long long instead of int?
Try put 1e18 instead of INT_MAX and put 0 instead of INT_MIN
.
Because a_i <= 10^9, and if you take the sum of a subarray of some length, say k, you can easily see it can become greater than INT_MAX. Hence you have to use long long.
Since every element can be 1e9, even three elements can lead to sum overflow.
thank you for all of your replies, got it.
Problem 1899F - Alex's whims can be solved more easily, like this:
O(n+q)
Is there a counter to this method? Was quite surprised to see such small constraints.
I did the same thing, and as far as I can see, the solution is completely right. They must've made the constraints low to troll participants.
I literally saw the problem in last 4-5 mins and thought of this solution but I somehow convinced myself I'm wrong (Read: low confidence) and spent the time in understanding the problem statement of B (which also I failed).
Anyway, great performance avighnakc, Congratulations on reaching Expert!!
How did u get this good so quick? Did u practice a lot of $$$C, D$$$ of Div.2s or is it something else?
I did CSES, practiced quite a lot of codeforces div2B’s and C’s and also 1500-1600 rated problems, and up solved some ABCs. Honestly, it’s all about how much effort you put in. I’m not that special per-se, if you grind as much as me, you’ll also improve fast. Good luck, and thanks!
The constraints are low because to check if the solution is correct or not the checker needs O(n^2q) operations minimum. If the constraints were too large it would have taken ages to check each test case.
For problem F, many have written the condition where last_node == current_node and printed -1 -1 -1 in that case, but my code, where I have commented out the same condition, is also accepted.
Submission link: https://codeforces.me/contest/1899/submission/233250281
But it shouldn't be, as the question mentions that for every u, v1, v2 there should be an edge between u and v1 and no edge between u and v2, yet my code outputs the same v1 and v2 in some cases and is still accepted. Am I missing something, or was the question phrased slightly incorrectly?
Can someone explain 'F' ?
Yeah sure. Here I have used the pointer approach. Initially, we consider the tree to be like a long stick, so all the nodes lie in a straight line. Now, we only ever move the end node and no other node, and this solves our problem easily. So initially the tree is something like 1->2->3->4->5->6->7, and in this case we will only move the 7th node. We know that 7 is connected to 6 initially, so the distance between 1 and 7 is 6. If 7 is connected to 3, then the distance between 1 and 7 will be 3. So, whichever node the 7th node is connected to, the distance will be equal to that node's label. Check this implementation now, or see my last submission.
void solve(int xx) {
    int n, q;
    cin >> n >> q;
    rep(i, 2, n + 1) cout << i - 1 << " " << i << endl;
}
Thanks for making it easy.
I'm wondering if successful hacking attempts go into the final test data. I wrote a brute-force DFS in G, which only works when the data is random; it passed the pretests, but I think it's easy to hack when the given tree is a chain. I don't know why it passed the hacks.
I don't know either.
Didn't realise that F was an easy problem....
F was very easy: my code is simpler than the authors'.
https://codeforces.me/contest/1899/submission/233117295
There is a simpler solution for F: 233147982. Plus, G is solvable with just std::set + the small-to-large technique, by storing inverse indices and lower-bounding the left bound of the desired segment.
Why is my submission 233141859 timing out? I've used an unordered_map to solve it in O(n).
https://codeforces.me/blog/entry/62393
Ah, thanks! Frustrating to get the right idea and lose out on something minor like this :'(
at least you've learned something new)
Problem G can be solved offline in $$$O((n+q)\log n)$$$. First, push all queries for node x into a vector. Then just DFS, and for each node store the positions (in the array $$$p$$$) of its descendants in a set. You can merge all sets of its children faster by swapping the two sets so that all elements of the smaller set go into the bigger one. In that way, you can answer all queries $$$[l, r, id]$$$ of x by using lower_bound to check whether there is any element of the set lying between l and r, and update the answer for the id-th query.
This is my submission: 233162558
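A rough, self-contained sketch of this small-to-large idea (an illustration, not the linked submission; the variable names are made up):

#include <bits/stdc++.h>
using namespace std;

int n, q;
vector<vector<int>> g;
vector<vector<array<int, 3>>> qs;      // qs[v] = {l (0-based), r (exclusive), query id}
vector<int> posInP;                    // posInP[v] = index of vertex v in the permutation p
vector<set<int>> st;
vector<int> verdict;

void dfs(int v, int par) {
    st[v].insert(posInP[v]);
    for (int u : g[v]) {
        if (u == par) continue;
        dfs(u, v);
        if (st[u].size() > st[v].size()) swap(st[u], st[v]);   // small-to-large
        st[v].insert(st[u].begin(), st[u].end());
        st[u].clear();
    }
    // st[v] now holds the positions of all vertices of v's subtree
    for (auto& [l, r, id] : qs[v]) {
        auto it = st[v].lower_bound(l);                        // smallest stored position >= l
        verdict[id] = (it != st[v].end() && *it < r);
    }
}

void solve() {
    cin >> n >> q;
    g.assign(n, {});
    for (int i = 0; i < n - 1; ++i) {
        int u, v;
        cin >> u >> v;
        --u, --v;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    posInP.assign(n, 0);
    for (int i = 0; i < n; ++i) {
        int x;
        cin >> x;
        posInP[x - 1] = i;
    }
    qs.assign(n, {});
    verdict.assign(q, 0);
    for (int i = 0; i < q; ++i) {
        int l, r, x;
        cin >> l >> r >> x;
        qs[x - 1].push_back({l - 1, r, i});
    }
    st.assign(n, {});
    dfs(0, -1);
    for (int i = 0; i < q; ++i) cout << (verdict[i] ? "YES" : "NO") << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
}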
For those who feel(like me) that the observations in problem D are hard to make analytically(and at the same time be sure that no edge case was missed), we can use high school calculus to solve this in a more structured and generalized format, which may be useful in some other questions-
After you get the equation a⋅2^b=b⋅2^a, rewrite it as (a.2^-a)=(b.2^-b). This holds when a=b. Furthermore, we ask, "For what distinct integral values of x is f(x)=x.2^-x equal to itself?" We can see this by making a rough plot of f(x). There is only one critical point(x=(1/ln2)~1.4), which is a maximum (we can easily observe this using just the first derivative). So the curve looks like a bell, strictly increasing before 1, peaking between 1 and 2, and strictly decreasing thereafter.
The equality cannot hold on either side of the peak since we have a strict ordering. Therefore the only possible equality is if f(1) matches with some x>=2. Now we can either find that this holds for x=2 by trial and error, or more rigorously...wait for it...binary search. Search on the strictly decreasing array for f(i) i>=2 for the value of f(1), to get x=2. Then we can count the occurrences using standard techniques.
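For reference, the first-derivative computation behind the single critical point mentioned above:
$$$f(x) = x \cdot 2^{-x}, \qquad f'(x) = 2^{-x} - x \ln 2 \cdot 2^{-x} = 2^{-x}\,(1 - x \ln 2)$$$
$$$f'(x) = 0 \iff x = \frac{1}{\ln 2} \approx 1.44$$$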
I mean, honestly, you didn't need to observe that (1, 2) is the only solution. You could just observe that since $$$(2^a)^{(2^b)} = (2^b)^{(2^a)} \Rightarrow \frac{2^a}{a} = \frac{2^b}{b}$$$, you can store the values of $$$\frac{2^{a_i}}{a_i}$$$ in a map and count that way.
Yes, you are right! That makes absolute sense and is a much simpler way to solve this. Even in my original solution, I did not observe that (1,2) is the only solution, instead came up with a way to efficiently find all possible solutions using sorting and two pointers(based on the growth rate of exponents), Although I kept getting WA due to integer overflow(facepalm).
But I feel that without this observation we end up doing more computation than is required(even though asymptotically it is the same) so I just wanted to wrap my head around it.
But how will you store $$$2^{a_i}$$$ when $$$a_i$$$ becomes 1e9?
Check my submission, you can just store the exponent and divide out all factors of 2 from $$$a_i$$$.
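A small sketch of that trick (illustration only): writing $$$a = m \cdot 2^e$$$ with $$$m$$$ odd, the value $$$\frac{2^a}{a} = \frac{2^{a-e}}{m}$$$ is determined exactly by the pair $$$(m, a - e)$$$, which fits in 64-bit integers and can be counted with a map.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        map<pair<ll, ll>, ll> cnt;
        ll ans = 0;
        for (int i = 0; i < n; ++i) {
            ll a;
            cin >> a;
            ll m = a, e = 0;
            while (m % 2 == 0) { m /= 2; ++e; }   // a = m * 2^e with m odd
            pair<ll, ll> key = {m, a - e};        // canonical form of 2^a / a
            ans += cnt[key]++;                    // pairs with an equal earlier value
        }
        cout << ans << "\n";
    }
}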
Why does the line long long lmax = LONG_MIN, lmin = LONG_MAX; give a wrong answer, while long long lmax = -1e18, lmin = 1e18; gives the right one? Why?
It should be LLONG_MIN and LLONG_MAX for long long Reference
I was trying to solve 1899G with a different approach, but getting MLE.
For each node x, I'm keeping track of all nodes in the subtree rooted at x, but I'm not actually storing the child nodes directly, rather their index in P, i.e. if nodes 2 and 4 are in the subtree rooted at 3, I store mem[3] = {idx[2], idx[4]}, where idx is the index of 2 in P (in sorted order).
Then I can answer Q(l, r, x) on mem[x] in log(n) time.
T(n) = O(n log n), S(n) = O(n*n), hence MLE.
When I look at the solution, even in the segment tree we store an array of start times for each segment; wouldn't that also have S(n) = O(n*n)? How does it not give MLE?
use small to large
For problem F, can't I just maintain a chain and keep moving the last node to the d-th node?
yep, I did the same.
Why is O(NlogN) solution of D giving TLE in C++? LMAO Just checked that map is getting accepted but not unordered_map! WTH??? Someone tell the author that this isn't Div1. Also, if they wanted to use such tricks then at least they could have included these test cases during the contest. https://codeforces.me/contest/1899/submission/233145738
std::unordered_map is not $$$O(n \log n)$$$. It's $$$O(n)$$$ on average, and $$$O(n^2)$$$ with many hash collisions. std::map is guaranteed $$$O(n \log n)$$$.
I am still shocked that so many people (including the author) used map for this question. I agree it is my fault for not using a custom hash function for unordered_map, but what is the point of using map when it is slower than unordered_map on average?
Why does it matter that std::unordered_map is faster on average, if it has slower worst-case performance?
People use std::map more often than std::unordered_map because hacks against std::unordered_map are very common on Codeforces. If you use std::map, you cannot get hacked with those types of hacks. You should read this blog for a more in-depth explanation. That blog also contains a way you can still use std::unordered_map and not get hacked.
Also, std::unordered_map isn't even that much faster on average. Even though it is $$$O(1)$$$ per operation and std::map is $$$O(\log n)$$$ per operation, std::unordered_map has a massive constant factor which makes it almost as slow as std::map, even on average (unlike, for example, arrays, which are also $$$O(1)$$$ per operation but much, much faster).
Thanks for the info ^_^
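For reference, a sketch of the kind of randomized hash that blog describes (splitmix64 combined with a time-based constant; the exact code in the blog may differ slightly):

#include <bits/stdc++.h>
using namespace std;

struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        // mixing steps of the splitmix64 generator
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        // time-initialized constant makes the hash different on every run
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

int main() {
    // usage example: an unordered_map keyed by integers that is hard to hack
    unordered_map<long long, int, custom_hash> cnt;
    cnt[42]++;
    cout << cnt[42] << "\n";
}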
Friendship ended with unordered_map, now map is my best friend!
New to C++; could anyone give a reason why I'm getting TLE on problem D: https://codeforces.me/contest/1899/submission/233162158
when I would easily be able to solve this in Python, and it's O(N) normally...?
Someone hacked the solution that uses unordered_map :(
Read this blog for more info https://codeforces.me/blog/entry/62393
Yeah, read this... I feel like when you're at the point where you need to create your own hash function, because the test cases are designed so that the standard library hash function will fail, it's not really CP anymore, just a tedious knowledge prerequisite... If my solution is O(n), it should pass....
Thanks for the link !
You're right. $$$O(n)$$$ solutions should (and do) pass. But your solution is $$$O(n^2)$$$, unfortunately.
I mean, if the hill you want to die on is that hashmap insertion is O(n), more power to you, but I believe pretty much everyone would say that hashmap insertion is O(1) (I invite you to read about amortized operations; are vector insertions O(n)?). If the goal is for hashmaps to be unusable in contests, we should just say so somewhere. No amount of pseudo-randomness is going to be enough if I really want to come up with an example that will destroy your hash function.
Oh well, just another reason why Python is just better than C++, shoulda known better!
Also, just to be clear, no O(n) solution exists for this problem from your point of view, so no, they don't pass!
Hashmap insertions are $$$O(1)$$$ on average with random data, they aren't $$$O(1)$$$ amortized. If some operation is $$$O(1)$$$ amortized, even if a single operation can be $$$O(n)$$$ in the worst case, it is guranteed that $$$n$$$ such operations will work in $$$O(n)$$$ total time. But with many hash collisions $$$n$$$ hashmap insertions with a bad hash work in $$$O(n^2)$$$ $$$\Rightarrow$$$ not amortized $$$O(1)$$$.
In my opinion it's a feature of the language, and the participant is responsible for understanding how their language works.
Maybe it won't prevent it, but it will definitely slow down the creation of such hack test cases. And how are you going to deal with time-initialized randomness when creating those hacks?
Python dictionaries can also be hacked similarly (at least in PyPy), see this. And if Python is supposedly better, why do you even use C++?
$$$O(n)$$$ solutions do exist, like the hashmap solution with a randomized (and time-initialized) hash function, which can't be hacked. Or can you prove me wrong by providing a hack to the custom hash in the blog linked in the original reply?
You're right about amortized; I was just making an analogy (poorly written, I'll admit) to how you could make the same argument for amortized operations, the point being that talking about the worst case is (in some cases, and in my opinion) nonsensical. I don't think anyone would call n insertions into a vector O(n^2), even if the worst case of a single insertion is O(n), because we know it's O(1) amortized (which, while not the same as average, is comparable).
In my opinion it's a feature of the language, and the participant is responsible for understanding how their language works.
What's the point of that though...? All this ends up doing is that everyone has to put some pseudo-random implementation in their template and then never think about it again. It's not about learning how the language works or anything deep; it's literally just another implementation of something that is already in the standard library...
O(n) solutions do exist, like the hashmap solution with a randomized (and time-initialized) hash function, which can't be hacked. Or can you prove me wrong by providing a hack to the custom hash in the blog linked in the original reply?
Give me an implementation and an infinite amount of time, and I will eventually find a test case which, at a particular time, will make you TLE. The complexity of your solution does not change just because finding such a test case is harder, which is what I'm arguing against. A hashmap will always have that O(n) worst case; you can't change that no matter the implementation. Even worse, a pseudo-random implementation means the exact same code that passed a problem easily could TLE at another time; you're just gambling. To me, it's not a super interesting problem to solve, so either we should just say somewhere that hashmaps are never to be used, or we should realise that hacking the standard library implementation of hashmap is not really pertinent to the task of solving the problem...
O(n + q) Solution for F 233329854
Did anyone solve Codeforces Round 909 (Div. 3) G with Treap?
It can be solved by it according to this USACO Guide article on Small-To-Large Merging :
$$$"$$$It's easy to merge two sets of sizes $$$n\ge m$$$ in $$$\mathcal{O}(n+m)$$$ or $$$(m\log n)$$$ time, but sometimes $$$O\left(m\log \left(1+\frac{n}{m}\right)\right)$$$ can be significantly better than both of these. Check "Advanced — Treaps" for more details.$$$"$$$
If anyone got TLE (like me) on problem 1899D - Yarik and Musical Notes, maybe look at this blog post about unordered_map.
Thank you! I was scratching my head over this.
Why are we performing lower_bound - lower_bound instead of upper_bound - lower_bound? In the instance where we have entry times 1, 2, 3 in the merge sort tree and the time in and time out of x are both 2, wouldn't lower_bound - lower_bound return 0 instead of 1?
In problem C, while checking the parity of two consecutive numbers, the first code is accepted and the second code causes an error, just because I use the % operator for checking parity in the "if". Why is that?
First code (Accepted)
Second code (causing an error for the test case n=3, 101 -99 101: it outputs 103 but it should be 101)
The difference between the two pieces of code is the line below...
Please help... why doesn't the second code just assign the value to the variable 'res' in the 'if'?
Accepted code link 233406392
Error code link 233406621
sorry for my comment ...i was not considering abs() ...ignore my comment.
Yeah, unfortunately negative modulo in cpp is weird
Can someone tell me the prerequisites for the last problem? Also, does anyone have similar problems to it?
F but simpler 233608256
During practice, I submitted this solution for problem F.
It got accepted and is much simpler than the editorial's solution and its time complexity is O(q). Submission link: https://codeforces.me/contest/1899/submission/233759099
yes, i did it exactly in the same way.
Linear / O(n + q) solution of problem F 233781812
Can someone explain to me why printf is printing zeroes? This is my submission for Alex's whims, problem F
https://codeforces.me/contest/1899/submission/234314307
I changed printf to cout and it got accepted. Do I need to select a specific compiler when submitting code containing printf? Because it worked fine on my personal computer.
Problem D: Why does this solution give a TLE on test case 21 even though it is an O(n) solution? https://codeforces.me/contest/1899/submission/234901875
Problem D: I got TLE on test case 21... Can anyone tell me why?
int n;
cin >> n;
unordered_map<int, int> mp;
for (int i = 0; i < n; ++i) {
    int x;
    cin >> x;
    mp[x]++;
}
LL ans = 0;
for (auto &val : mp) {
    int x = val.second;
    if (x > 1) {
        ans += 1ll * x * (x - 1) / 2;
    }
}
cout << ans << '\n';
How can you get -1 in E? Can't it always be sorted? I can't understand the editorial.
Can someone help me with problem B? I tried a lot but couldn't find the error in my code... link to my submission: https://codeforces.me/contest/1899/submission/235506545
For anyone who is getting TLE on test case 37, use int instead of long long.
Hello, it would be good if someone could explain to me why my solution is getting MLE and what can be done to solve it! Thank you!
I have 2 main iterations
https://codeforces.me/contest/1899/submission/236010009
(This one I believe is due to the fact that I stored every single permutation so I switched it to a more brute force implementation below)
https://codeforces.me/contest/1899/submission/236019332
(I am not sure why this one gives MLE)
Edit: I had a bug in my code, ignore thank you!
https://codeforces.me/contest/1899/submission/236063262
This one is accepted
https://codeforces.me/contest/1899/submission/236061994
This one gives a tle.
Both codes are the same; it's just that in the accepted one, one value of the dictionary is initialized. The time complexity is the same anyway. Can anyone figure out why this is happening?
Can anyone please tell me the problem with my solution to problem C? I have used a two-pointer approach. My solution
F can be solved in only O(n+q) with this solution. The given editorial is overkill.
From a functional perspective, understanding problem B becomes easier. We have $$$(2^a)^{(2^b)} = (2^b)^{(2^a)} \iff \frac {2^b}{b} = \frac{2^a}{a}$$$. Let $$$f(x) = \frac{2^x}{x}$$$. Clearly, when $$$a = b$$$, the equation holds true. When $$$a \neq b$$$, observing the function's graph, we find that the equation holds only when $$$x_0 = 1$$$ and $$$x_1 = 2$$$
in 1899B , wouldn't the complexity be n*sqrt(n), instead of n*cuberoot(n)?
I have a solution with O(n+q) for Problem F. Solution
F is actually an easy problem, but the problem statement is too difficult to read. It is only asking if we can form a distance d, 2 <= d <= n-1, between two LEAVES. We construct a degenerate tree, which has exactly two leaves (let's index the vertices 1 to n), then we can reconnect the 1st or the nth vertex to any vertex to get distance d (ex: move the nth vertex to the (d+1)th vertex).
vertex)For G question, I found an easy approach of using dfs + minimum Segment tree. You can check out here, https://codeforces.me/contest/1899/submission/257742336 I got this intution because we need minimum intime of node from l to r, which have intime greater than intime of x and smaller than outime of x.
1899A should have mentioned that the first player has to play at least one turn. -_-
.
An easier way to understand and implement F,
https://codeforces.me/contest/1899/submission/266170220
Why is this not used in the main editorial?
what about the 10 moves constraint in the test case where n =1000
F can be solved in O(n+q). First make a linear graph and keep shifting the last vertex so that the distance between the last vertex and vertex 1 is equal to d.
My Submission: https://codeforces.me/contest/1899/submission/280523159