Thanks for participating. We apologize for problem F, which has appeared before. Still, we hope you all enjoy our round!
1705A - Mark the Photographer
Author: MarkBcc168
First, sort $$$h_1 \leq h_2 \leq \dots \leq h_{2n}$$$. There is a very explicit description of which $$$h_i$$$'s work.
What is the optimal arrangement that maximizes the minimum difference across all pairs?
We have a very explicit description of whether the arrangement is possible. Sort the heights so that $$$h_1 \leq h_2 \leq \dots \leq h_{2n}$$$. Then, such an arrangement exists if and only if $$$h_{n+i} - h_i \geq x$$$ for every $$$i = 1, 2, \dots, n$$$.
We present two proofs.
Proof 1 (Direct Proof). Suppose that the arrangement is possible. We will show that for each $$$i$$$, we have $$$h_{n+i}-h_i\geq x$$$.
To do so, note that there are $$$n+1$$$ people (namely those of heights $$$h_i, h_{i+1}, \dots, h_{n+i}$$$) whose heights lie in $$$[h_i, h_{n+i}]$$$. It's impossible for these $$$n+1$$$ people to all be assigned to different columns (because there are only $$$n$$$ columns), so some two of them are assigned to the same column.
However, because these two people have heights in $$$[h_i, h_{n+i}]$$$, the difference in heights between them is at most $$$h_{n+i}-h_i$$$. As the difference must be at least $$$x$$$ in a valid arrangement, we get $$$h_{n+i}-h_i\geq x$$$. Conversely, if $$$h_{n+i}-h_i\geq x$$$ for all $$$i$$$, then placing $$$h_{n+i}$$$ behind $$$h_i$$$ in the $$$i$$$-th column gives a valid arrangement. $$$\blacksquare$$$
Proof 2 (Exchange Argument). First, we look at two columns. Suppose that the $$$i$$$-th column has a person of height $$$a$$$ in front and a person of height $$$b$$$ in the back (so $$$b - a \geq x$$$), while the $$$j$$$-th column has $$$c$$$ in front and $$$d$$$ in the back (so $$$d - c \geq x$$$).
- Assume that $$$b \leq c$$$. Then we can switch $$$b, c$$$, and the arrangement still works since $$$c - a \geq b - a \geq x$$$ and $$$d - b \geq d - c \geq x$$$.
- Similarly, $$$d \leq a$$$ yields a switch of $$$a, d$$$.
Thus, we can keep exchanging until everyone in the back row is at least as tall as everyone in the front row. Hence the back row must consist of $$$h_{n+1}, h_{n+2}, \dots, h_{2n}$$$, while the front row must consist of $$$h_1, h_2, \dots, h_n$$$, in some order.
Now, we look at the same two columns again. Assume that $$$a \geq c$$$ but $$$b \leq d$$$. Then, we can switch $$$b, d$$$, and the arrangement still works because $$$d - a \geq b - a \geq x$$$ and $$$b - c \geq b - a \geq x$$$. Thus, we can keep switching until the back row is in the same order as the front row.
Therefore, we may assume that the back row is $$$h_{n+1}, h_{n+2}, \dots, h_{2n}$$$ and the front row is $$$h_1, h_2, \dots, h_n$$$, with $$$h_i$$$ standing in front of $$$h_{n+i}$$$. This implies the conclusion. $$$\blacksquare$$$
Time Complexity: $$$O(n\log n)$$$ for sorting.
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, x;
    cin >> n >> x;
    vector<int> a(2 * n);
    for (int i = 0; i < 2 * n; ++i)
        cin >> a[i];
    sort(a.begin(), a.end());
    bool ok = true;
    for (int i = 0; i < n; ++i)
        if (a[n + i] - a[i] < x) ok = false;
    cout << (ok ? "YES" : "NO") << "\n";
}

int main() {
    int tt; cin >> tt;
    while (tt--) solve();
}
1705B - Mark the Dust Sweeper
Author: MarkBcc168
The optimal way is to fill all the zero entries first.
Delete the leading zeroes in the array $$$a$$$ (i.e., the first $$$t$$$ entries of $$$a$$$ that are zero) so that now $$$a_1\ne 0$$$. Let $$$k$$$ be the number of $$$0$$$'s in $$$a_1,a_2,\dots,a_{n-1}$$$. The answer is $$$k + (a_1+a_2+\dots+a_{n-1})$$$.
To see why, let Mark fill the holes (rooms with dust level $$$0$$$) first: each such move subtracts $$$1$$$ from the first nonzero room and adds $$$1$$$ to the first zero room after it. This takes $$$k$$$ moves to fill all zeroes in $$$a_1,a_2,\dots,a_{n-1}$$$ and does not change the sum. Then, we can move all dust, from left to right, into the $$$n$$$-th room, taking $$$a_1+a_2+\dots+a_{n-1}$$$ moves.
Finally, we argue that this is the minimum number of moves. To that end, we prove that each move decreases the quantity $$$k + (a_1+a_2+\dots+a_{n-1})$$$ by at most $$$1$$$. We consider two cases.
- If a move has $$$j=n$$$, then it decreases $$$a_1+a_2+\dots+a_{n-1}$$$ by $$$1$$$ but does not decrease $$$k$$$.
- If $$$j\ne n$$$, then the move doesn't decrease $$$a_1+a_2+\dots+a_{n-1}$$$ and decreases $$$k$$$ by at most $$$1$$$.
Thus, we are done. The time complexity is $$$O(n)$$$.
#include <iostream>
#include <vector>
#define ll long long
using namespace std;

void solve() {
    int n; cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i)
        cin >> a[i];
    ll ans = 0;
    int ptr = 0;
    while (ptr < n && a[ptr] == 0)
        ptr++;
    for (int i = ptr; i < n - 1; ++i) {
        ans += a[i];
        if (a[i] == 0) ans++;
    }
    cout << ans << "\n";
}

int main() {
    int tt; cin >> tt;
    while (tt--) solve();
}
1705C - Mark and His Unfinished Essay
Author: MarkBcc168
What's in common between all letters that were copied at the same time?
The answer is the difference between the current position and the position where it came from. That's what you need to store.
By tracking the difference, you can recurse to the previously-copied substring.
This is an implementation problem. After the $$$i$$$-th copy operation, keep track of the beginning point $$$a_i$$$ and the ending point $$$b_i$$$ of the appended string, together with the shift $$$t_i = a_i-l_i$$$, so that for $$$k\in [a_i, b_i]$$$, the $$$k$$$-th letter is the same as the $$$(k-t_i)$$$-th letter. This recurses the position $$$k$$$ down to the smaller position $$$k-t_i$$$, and we keep doing that until we reach the initial string.
Therefore, to answer a query, we iterate over $$$i=c,c-1,\dots,1$$$: if $$$k$$$ is in $$$[a_i,b_i]$$$, subtract $$$t_i$$$ from $$$k$$$. After all operations, $$$k$$$ lands in the initial string, and we output the $$$k$$$-th letter.
The time complexity of this solution is $$$O(cq)$$$. However, less efficient solutions of $$$O((c\log c) q)$$$ (using binary search each time) or $$$O(c^2q)$$$ (by going through all intervals in each iteration) pass as well.
#include <bits/stdc++.h>
#define ll long long
using namespace std;

void solve() {
    int n, c, q; cin >> n >> c >> q;
    string s; cin >> s;
    vector<ll> left(c + 1), right(c + 1), diff(c + 1);
    left[0] = 0;
    right[0] = n;
    for (int i = 1; i <= c; ++i) {
        ll l, r; cin >> l >> r;
        l--; r--;
        left[i] = right[i - 1];
        right[i] = left[i] + (r - l + 1);
        diff[i] = left[i] - l;
    }
    while (q--) {
        ll k; cin >> k;
        k--;
        for (int i = c; i >= 1; --i) {
            if (k < left[i]) continue;
            else k -= diff[i];
        }
        cout << s[k] << "\n";
    }
}

int main() {
    int tt; cin >> tt;
    while (tt--) solve();
}
1705D - Mark and Lightbulbs
Author: MarkBcc168
Look at all the $$$01$$$'s and $$$10$$$'s carefully.
The sum of the number of $$$01$$$'s and $$$10$$$'s is constant.
Consider all positions of $$$01$$$'s and $$$10$$$'s. How does it change in each operation?
As explained in the sample explanations, the operation cannot change the first or the last bit. Thus, if either $$$s_1\ne t_1$$$ or $$$s_n\ne t_n$$$, simply return $$$\texttt{-1}$$$.
Now, the key idea is to consider the binary string $$$\overline{s} = (s_1\oplus s_2)(s_2\oplus s_3) \dots (s_{n-1}\oplus s_n)$$$ of length $$$n-1$$$, where $$$a\oplus b$$$ denotes the XOR of bits $$$a$$$ and $$$b$$$. Then, it's easy to verify that the operation acts on $$$\overline{s}$$$ by swapping two adjacent different bits. For example, $$$s=\texttt{1100}$$$ has $$$\overline{s}=\texttt{010}$$$; flipping $$$s_2$$$ gives $$$\texttt{1000}$$$, whose difference string is $$$\texttt{100}$$$, so the two adjacent differing bits of $$$\overline{s}$$$ got swapped.
Thus, transforming $$$s$$$ into $$$t$$$ is possible if and only if $$$\overline s$$$ and $$$\overline t$$$ have the same number of $$$1$$$'s. Moreover, if $$$a_1,a_2,\dots,a_k$$$ are the positions of $$$1$$$'s in $$$\overline s$$$ and $$$b_1,b_2,\dots,b_k$$$ are the positions of $$$1$$$'s in $$$\overline t$$$, then the minimum number of moves is $$$|a_1-b_1|+|a_2-b_2|+\dots+|a_k-b_k|,$$$
which can be evaluated in $$$O(n)$$$.
This is a well-known fact, but for completeness, here is the proof. Note that the operation is moving $$$1$$$ to left or right by one position. Thus, to achieve that number of moves, simply move the first $$$1$$$ from $$$a_1$$$ to $$$b_1$$$, move the second $$$1$$$ from $$$a_2$$$ to $$$b_2$$$, $$$\ldots$$$, and move the $$$k$$$-th $$$1$$$ from $$$a_k$$$ to $$$b_k$$$. For the lower bound, notice that the $$$i$$$-th $$$1$$$ cannot move past the $$$(i-1)$$$-th or $$$(i+1)$$$-th $$$1$$$. Thus, it takes at least $$$|a_i-b_i|$$$ moves to move the $$$i$$$-th $$$1$$$ from $$$a_i$$$ to $$$b_i$$$. Summing gives the conclusion.
Note that another way to think about this problem is to look at the blocks of $$$1$$$'s and $$$0$$$'s and notice that the number of blocks remains invariant under the operation. This is more or less the same as the above solution.
#include <bits/stdc++.h>
#define ll long long
using namespace std;

void solve() {
    int n; cin >> n;
    string s, t; cin >> s >> t;
    vector<ll> pos_s, pos_t;
    if (s[0] != t[0] || s[n-1] != t[n-1]) {
        cout << -1 << "\n";
        return;
    }
    for (int i = 0; i < n-1; i++) {
        if (s[i] != s[i+1]) pos_s.push_back(i);
        if (t[i] != t[i+1]) pos_t.push_back(i);
    }
    if (pos_s.size() != pos_t.size()) {
        cout << -1 << "\n";
    }
    else {
        int k = pos_s.size();
        ll ans = 0;
        for (int i = 0; i < k; ++i) {
            ans += abs(pos_s[i] - pos_t[i]);
        }
        cout << ans << "\n";
    }
}

int main() {
    int tt; cin >> tt;
    while (tt--) solve();
}
1705E - Mark and Professor Koro
Author: abc241
Find a concise description of the answer first.
Think about powers of two.
The sum $$$2^{a_1}+2^{a_2}+\dots+2^{a_n}$$$ is constant. Show that the answer must be the most significant bit of that.
Use either bitset or lazy segment tree to simulate the addition/subtraction.
The key observation is the following.
Claim: The answer is $$$\lfloor\log_2(2^{a_1}+2^{a_2}+\dots+2^{a_n})\rfloor.$$$
Proof: The upper bound is clear: the operation doesn't change $$$\sum 2^{a_i}$$$, and since the final number $$$\text{ans}$$$ appears on the board, $$$2^{\text{ans}} \leq \sum 2^{a_i}$$$, giving the result.
For the construction, let Mark keep performing the operation until he cannot. At this point, all numbers on the board must be distinct, and $$$\sum 2^{a_i}$$$ is unchanged. Let the current numbers on the board be $$$b_1<b_2<\dots < b_k$$$. Then, $$$2^{b_k} \leq 2^{b_1}+2^{b_2}+\dots+2^{b_k} \leq 2^0+2^1+\dots+2^{b_k} < 2^{b_k+1}.$$$
Thus, Mark can make the final number be $$$b_k = \lfloor\log_2(2^{a_1}+2^{a_2}+\dots+2^{a_n})\rfloor$$$ as desired. $$$\blacksquare$$$
Finally, we need a data structure to maintain $$$\sum 2^{a_i}$$$ and simulate base-$$$2$$$ addition. There are many ways to proceed, including the following:
- Using bitsets, partition the bits into chunks of $$$w$$$ bits ($$$w$$$ between $$$50$$$ and $$$64$$$ is fine). This gives $$$O(n^2/w)$$$ complexity, but its low constant factor makes it pass comfortably.
- Use a lazy segment tree augmented with $$$O(\log n)$$$ binary search. For each bit added, find where the streak of $$$1$$$'s starting at that bit ends, and update accordingly. Similarly, for each bit subtracted, find where the streak of $$$0$$$'s starting at that bit ends, and update accordingly. The total complexity is $$$O(n\log n)$$$.
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
// Super Idol's smile
// is not as sweet as yours
// The August midday sun
// is not as dazzling as you
// Loving you at 105°C
// every drop pure distilled water
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
struct bitset{
    unsigned long long arr[3130]={};
    unsigned long long AF=-1ull;
    void flip(int l,int r){
        arr[l/64]^=(1ull<<(l%64))-1;
        if (r%64==63) arr[r/64]^=AF;
        else arr[r/64]^=(1ull<<(r%64+1))-1;
        l/=64,r/=64;
        if (l==r) return;
        arr[l]^=AF;
        for (int x=l+1;x<r;x++) arr[x]^=AF;
    }
    int get(int i){
        if (arr[i/64]&(1ull<<(i%64))) return 1;
        else return 0;
    }
    int get1(int i){
        //search [i%64,64) on i/64 first
        unsigned long long mask=AF^((1ull<<(i%64))-1);
        i=i/64;
        unsigned long long temp=arr[i]&mask;
        if (temp) return i*64+__builtin_ctzll(temp);
        i++;
        while (true){
            if (arr[i]==0) i++;
            else return i*64+__builtin_ctzll(arr[i]);
        }
    }
    int get0(int i){
        //search [i%64,64) on i/64 first
        unsigned long long mask=AF^((1ull<<(i%64))-1);
        i=i/64;
        unsigned long long temp=(arr[i]^AF)&mask;
        if (temp) return i*64+__builtin_ctzll(temp);
        i++;
        while (true){
            if (arr[i]==AF) i++;
            else return i*64+__builtin_ctzll(arr[i]^AF);
        }
    }
    int gethigh(){
        int i=3129;
        while (true){
            if (arr[i]==0) i--;
            else return i*64+63-__builtin_clzll(arr[i]);
        }
    }
} BS;

int n,q;
int arr[200005];

void add(int i){
    BS.flip(i,BS.get0(i));
}
void del(int i){
    BS.flip(i,BS.get1(i));
}

signed main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    cin.exceptions(ios::badbit | ios::failbit);
    cin>>n>>q;
    rep(x,1,n+1) cin>>arr[x];
    rep(x,1,n+1) add(arr[x]);
    int a,b;
    while (q--){
        cin>>a>>b;
        del(arr[a]);
        arr[a]=b;
        add(arr[a]);
        cout<<BS.gethigh()<<endl;
    }
}
#include<bits/stdc++.h>
using namespace std;
struct LazySeg {
    int l, r;
    int val = 0, tag = 0;
    bool is_lazy = false;
    LazySeg * l_child = NULL, * r_child = NULL;
    LazySeg(int _l, int _r) {
        l = _l;
        r = _r;
        if (r - l > 1) {
            int m = (l + r) / 2;
            l_child = new LazySeg(l, m);
            r_child = new LazySeg(m, r);
        }
    }
    ~LazySeg() {
        delete l_child;
        delete r_child;
    }
    void unlazy() {
        if (!is_lazy) return;
        val = (r - l) * tag;
        if (r - l <= 1) return;
        l_child -> tag = tag;
        l_child -> is_lazy = true;
        r_child -> tag = tag;
        r_child -> is_lazy = true;
        tag = 0;
        is_lazy = false;
    }
    void update(int from, int to, int x) {
        unlazy();
        if (from >= r || l >= to) return;
        if (from <= l && to >= r) {
            tag = x;
            is_lazy = true;
            unlazy();
        } else {
            l_child -> update(from, to, x);
            r_child -> update(from, to, x);
            val = l_child -> val + r_child -> val;
        }
    }
    int query(int from, int to) {
        if (from >= r || l >= to) return 0;
        unlazy();
        if (from <= l && to >= r) return val;
        else {
            if (l_child == NULL) return 0;
            return l_child -> query(from, to) + r_child -> query(from, to);
        }
    }
    //pre = prefix in [l,k)
    int max_right(int k, int pre, int v) {
        unlazy();
        if (r - l == 1) {
            if (val == v) return l;
            else return l - 1;
        }
        l_child -> unlazy();
        int mid = (l + r) / 2;
        if (mid <= k) {
            return r_child -> max_right(k, pre - l_child -> val, v);
        } else if (l_child -> val - pre == v * (mid - k)) {
            //left to mid-1 has all 1's => answer must be >= mid-1
            return r_child -> max_right(mid, 0, v);
        } else {
            return l_child -> max_right(k, pre, v);
        }
    }
    //suff = suffix
    int get_answer() {
        unlazy();
        if (r - l == 1) {
            if (val == 1) return l;
            else return l - 1;
        }
        r_child -> unlazy();
        if (r_child -> val == 0) {
            //[mid to end] are all 0
            return l_child -> get_answer();
        } else {
            return r_child -> get_answer();
        }
    }
};

signed main() {
    ios_base::sync_with_stdio(0);
    cin.tie(NULL);
    cout.tie(NULL);
    int n, q;
    cin >> n >> q;
    LazySeg tr(0, 200100);
    auto add = [ & ](int x) {
        int y = tr.max_right(x, tr.query(0, x), 1) + 1;
        if (y == x) { //no carry; just change 0 to 1
            tr.update(x, x + 1, 1);
        } else { //there is a carry; set the whole block of 1's to 0
            tr.update(x, y, 0);
            tr.update(y, y + 1, 1);
        }
    };
    auto remove = [ & ](int x) {
        int y = tr.max_right(x, tr.query(0, x), 0) + 1;
        if (y == x) {
            tr.update(x, x + 1, 0);
        } else {
            tr.update(x, y, 1);
            tr.update(y, y + 1, 0);
        }
    };
    vector < int > a(n);
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
        add(a[i]);
    }
    while (q--) {
        int k, l; cin >> k >> l;
        k--;
        remove(a[k]); add(l);
        a[k] = l;
        cout << tr.get_answer() << "\n";
    }
}
1705F - Mark and the Online Exam
Author: MarkBcc168
Unfortunately, a harder version of this problem has appeared in a Chinese contest here and here. You can look at their solution here. We thank many contestants who pointed it out.
It's possible to solve this problem without any randomization. See the subsequent hints for how to do so.
Observe that we can take differences between two very close queries to get the number of $$$\texttt{T}$$$'s in a small subsequence.
You can take the difference against a pre-computed query. Apply this to groups of two questions.
There are three possibilities: $$$\texttt{TT}$$$, $$$\texttt{FF}$$$, and $$$\{\texttt{TF}, \texttt{FT}\}$$$. If the third possibility happens, simultaneously figure out whether it is $$$\texttt{TF}$$$ or $$$\texttt{FT}$$$ and answer one more question within the same query.
You will need to precompute the query $$$\texttt{TFTF}\dots\texttt{TF}$$$.
There are many possible approaches, including using randomized algorithms. However, I will present the solution that takes about $$$\tfrac{2n}{3}$$$ queries deterministically.
We pre-query $$$\texttt{TTT...T}$$$ and $$$\texttt{TFTF...TF}$$$. Then, for $$$i=1,2,\dots,\left\lfloor\frac n3\right\rfloor$$$, we take the difference when both the $$$(2i-1)$$$-th and the $$$2i$$$-th questions in $$$\texttt{TTT...T}$$$ are changed to $$$\texttt{F}$$$.
- If the difference is $$$+2$$$, then both answers must be $$$\texttt F$$$.
- If the difference is $$$-2$$$, then both answers must be $$$\texttt T$$$.
- Else, the answers must be $$$\texttt{TF}$$$ or $$$\texttt{FT}$$$ in some order.
Now, here is the key idea: if the last case happens, then we can figure out whether it's $$$\texttt{TF}$$$ or $$$\texttt{FT}$$$, as well as the answer to one more question, in a single query. To do so, compare the previous $$$\texttt{TFTF...TF}$$$ query with a new query that has $$$3$$$ differences: the two questions of the current pair are flipped, and one additional undetermined question is flipped.
(Note: we assume that the third question corresponds to $$$\texttt T$$$ in the query. If it's $$$\texttt F$$$, just change to $$$\texttt T$$$ and proceed analogously.)
There are four possible scenarios.
- If the answers are $$$\texttt{TF}$$$ and $$$\texttt{T}$$$, then the difference is $$$-3$$$.
- If the answers are $$$\texttt{TF}$$$ and $$$\texttt{F}$$$, then the difference is $$$-1$$$.
- If the answers are $$$\texttt{FT}$$$ and $$$\texttt{T}$$$, then the difference is $$$+1$$$.
- If the answers are $$$\texttt{FT}$$$ and $$$\texttt{F}$$$, then the difference is $$$+3$$$.
Therefore, we can distinguish these four scenarios in one go.
Finally, if the first two cases happen, we can easily figure out the answer to one more question in one query (say, by changing that question to $$$\texttt F$$$ and comparing with the $$$\texttt{TT...T}$$$ query). Either way, we deduce the answers to $$$3$$$ questions in $$$2$$$ queries, leading to a solution with about $$$\tfrac{2n}{3}$$$ queries.
Note that this solution can be easily improved to $$$\frac {3n}{5}$$$ on average by randomly shuffling the questions.
#include <bits/stdc++.h>
using namespace std;

int n;

int query(string s){
    cout << s << endl;
    cout.flush();
    int x; cin >> x;
    if(x==n) exit(0);
    return x;
}

int main(){
    cin >> n;
    //query true count
    string all_T(n, 'T'), ans(n, '?');
    int cnt_T = query(all_T);
    //query TF
    string all_TF(n, 'T');
    for(int i=1; i<n; i+=2) all_TF[i] = 'F';
    int cnt_TF = query(all_TF);
    //begin the loop
    int l = 0, r = n-1;
    while(r >= l){
        if(r==l){ //only l is undetermined
            string s(all_T);
            s[l] = 'F';
            int k = query(s);
            if(k > cnt_T){
                ans[l] = 'F';
            }
            else{
                ans[l] = 'T';
            }
            l++; r--;
        }
        else{
            string s(all_T);
            s[l] = 'F'; s[l+1] = 'F';
            int k = query(s) - cnt_T;
            if(k == 2){
                ans[l] = 'F'; ans[l+1] = 'F';
                l += 2;
            }
            else if(k == -2){
                ans[l] = 'T'; ans[l+1] = 'T';
                l += 2;
            }
            else{
                if(r == l+1){ //only l and l+1 left; figure out the order
                    string s(all_T);
                    s[l] = 'F';
                    int k = query(s);
                    if(k > cnt_T){
                        ans[l] = 'F'; ans[l+1] = 'T';
                    }
                    else{
                        ans[l] = 'T'; ans[l+1] = 'F';
                    }
                    l += 2;
                }
                else{ //determine l, l+1, r
                    string s(all_TF);
                    s[l] = 'F'; s[l+1] = 'T';
                    if(s[r] == 'F') s[r] = 'T';
                    else s[r] = 'F';
                    int k = query(s) - cnt_TF;
                    if(k == 3){
                        ans[l] = 'F'; ans[l+1] = 'T'; ans[r] = s[r];
                    }
                    else if(k == 1){
                        ans[l] = 'F'; ans[l+1] = 'T'; ans[r] = all_TF[r];
                    }
                    else if(k == -1){
                        ans[l] = 'T'; ans[l+1] = 'F'; ans[r] = s[r];
                    }
                    else{
                        ans[l] = 'T'; ans[l+1] = 'F'; ans[r] = all_TF[r];
                    }
                    l += 2; r--;
                }
            }
        }
    }
    query(ans);
}
Auto comment: topic has been updated by MarkBcc168 (previous revision, new revision, compare).
Can you continue in the sun time next time?
So fast Editorial...Love it.
The editorial with hints is excellent :)
C was nice
A and B were normal. C was kinda nice. D felt kind of easier than normal.
Why is my solution giving Memory Limit Exceeded? 164324383
$$$c$$$ in that problem can be up to $$$40$$$, so the string can have a length of up to $$$n \cdot 2^{40}$$$. That is because the string can be "duplicated" by simply using $$$1$$$ as $$$l$$$ and the current length as $$$r$$$, so it becomes twice as big each time. But you cannot build a string of that size: it takes too much memory, so you get MLE.
$$$l, r \leq 10^{18}$$$ means $$$r-l$$$ can be up to $$$10^{18}$$$, which is already a huge value, even before taking into account that $$$q$$$ can be on the order of $$$10^4$$$.
For C: https://codeforces.me/contest/1705/submission/164336489 Why does this code give TLE, and how do I fix it? Please tell.
Note that the length of the string can be up to $$$2^{40}\cdot 200000$$$ because the copy operations could be $$$[1,n]$$$, $$$[1,2n]$$$, $$$[1,4n]$$$, $$$\dots$$$, $$$[1,2^{39}n]$$$.
Your solution has a loop from $$$l$$$ to $$$r$$$, whose length can be comparable to the length of the string, so it can never finish in time.
Good Tasks for Codeforces Round #807)
Editorial is perfect.
So fast Editorial!!!!!
you are 607 mouse ?
Problem C was just awesome <3
Thanks for the superfast editorial, interesting round with mArK!
If you are/were getting a WA/RE verdict on problems from this contest, you can get the smallest possible counter example for your submission on cfstress.com. To do that, click on the relevant problem's link below, add your submission ID, and edit the table (or edit compressed parameters) to increase/decrease the constraints.
Disclaimer: There is a quota of 1 request per 24 hours on the free plan.
Hints are always helpful.
Wow, very fast editorial! I like this contest because the problems are easy to understand. Great job!
I think C is very nice
So fast Editorial!!
Question D is so difficult
Amazing tutorial! It made everything easy to understand.
very terrible contest. C easier than B and too easy. F appeared before. E is DS. In F you get the same result when Wrong Answer and when too many queries. Downvoted. why do you all say quick editorial? does it make the contest less bad in any way?
Whether a problem is easy or not depends on personal opinion; DS is good; it's not easy to find out that F appeared before
It is actually very easy, sir.
The tutorial for D says "Moreover, if $$$a_1,a_2,\dots,a_k$$$ are the positions of 1's in s and $$$b_1,b_2,\dots,b_k$$$ are the positions of 1's in t". Shouldn't s and t here be $$$\overline{s}$$$ and $$$\overline{t}$$$?
Great catch! Fixed.
In problem F editorial
should we replace $$$\pm 1$$$ with $$$\pm 2$$$?
By the way, do you know an algorithm that finds all the answers in fewer than $$$2n/3$$$ queries (without randomization)? Just curious.
Fixed, thank you.
The Chinese problem linked above has a much stricter query limit. I haven't checked their solution yet, so I cannot guarantee that it gives a faster deterministic algorithm. Feel free to check it out!
Which topics should I learn to solve problems like C?
There is no specific knowledge required to solve C. You can try reading the hints and tutorial provided above.
OK, I got it. I always solve A and B in Div. 2, but I can't solve any C problems. Could you give me some tips for improving? Thx
the best way to improve at problems like div2C is through problem solving and gaining experience
Practicing more Div. 2 Cs would be OK.
Try to solve them as quickly as you can.
C is just about implementation
I thought the observation that for each query you have to travel back through the (c <= 40) operations was the hardest part.
It was super hard for me. I should do more practice
It's funny how bitset codes pass (in E), but my std::set solution TLE'd until I replaced the sets with Fenwick trees
...which WA'd, but I still desperately claim my solution was correct, as it does the only normal thing that the problem could ask: maintain a huge base two number and add/subtract powers of two to it. Also, apparently std::sets are at least 10 times slower than a fenwick tree, as the fenwick tree solution got ~200ms, and the std::set got >2000ms. Happy summer holiday guys
https://codeforces.me/contest/1705/submission/164283877
This made my day a whole lot better. I will just go and jump off a cliff now. Thank you, stranger
D was really good. I don't know why I find these kinds of problems a bit hard. I wasn't able to come up with the invariant that the sum of the numbers of $$$(01)$$$'s and $$$(10)$$$'s is constant. Can someone suggest a set of problems like $$$D$$$ (where I need to recognize an invariant)?
Same bro :(
It is easy to come up with the solution for D by observing carefully.
Say we have 10001; we can convert it to 11101: the first block of 1's got expanded through the run of 0's, stopping before the last 0.
Say we have 11101; we can do the opposite and convert it to 10001: the first block of 1's got compressed back to its first 1.
This is for left to right; you can observe similarly for right to left. This implies that any block of 1's can always expand (through a run of zeroes, stopping before the last zero) or compress (through its own ones, stopping after the first one). So the -1 case is when the counts of blocks of 1's in s and t differ. The condition that the first and last characters match can also be deduced from this observation, since when expanding [x,y] to [l,r] (l<x, r>y), the elements at l and r never change.
You can now just expand and compress these blocks greedily to make the strings the same.
this was my sol as well.
Same
One perhaps easier solution: if you separate the bit string into maximal substrings containing either only ones or only zeros, you realize that the allowed operations only move the borders between neighboring substrings. Then a solution exists only if the orders of the substrings match, and the minimal number of operations is the sum of the differences in the positions of the substrings' borders.
I think the invariant that the number of chunks of 1 stays the same is easier to be observed.
Same for me :/
After the contest I continued to try and solve D and was surprisingly able to come up with a constructive solution that works, but it was hell to make. As a new expert who hasn't solved that many Ds before I was very pleased to read the editorial and realize that what I did wasn't the expected smart solution.
Here is my code (it isn't pretty, but I will never touch it again): 164362277
I only solved that problem b/c I saw 3+ similar problems in the past and I knew which direction to push my solution in, if that makes any sense.
This could be helpful: https://codeforces.me/blog/entry/85172 (there is one problem with an invariant)
So fast editorial and fantastic contest!
C was kinda hard for me, but I really enjoyed solving it. Great problem
Thanks for the fast tutorial ...
Enjoyed this contest — definitely has a good math olympiad flavor to it (both problems and solutions)!
Does the answer to D overflow int? Passed three pretests, then failed.
The maximum possible answer is on the order of $$$n^2$$$, so yes, there is an overflow, and you have to use long long.
Yup, that was the problem. Felt horrible to lose a question like that. Also, I feel the bitmask solution is overcomplicated. All I did was check that the first and last characters are the same and that the numbers of continuous groups of 1's are the same (since a group of continuous 1's can never split up or merge with another group), then calculate the distance by moving the start and end points of each group in s to the corresponding group in t.
C was bit tricky for me, Good problemset overall. Happy Coding & Happy friday :)
This contest was great, I wish I hadn't had classes until 8:45pm (quite sad). Anyways, that's a rapid editorial.
Thanks for fast tutorial and E was cool.
Does somebody have binary search solution for C?
Not needed
See my submission: https://codeforces.me/contest/1705/submission/164314488
Fun fact: something happened during testing of problem F related to different behaviours of different versions (and 64-bit or not) of g++ and we couldn't figure out the reason behind it, so I'm putting it here to see if anyone can offer some insights into the different behaviours, thanks in advance :D
This submission gets AC:
164341892
I used some randomised heuristics
But when I submit the same code in non 64-bit version, it gets WA:
164341855
But once again when I remove two lines from the header it gets AC again, however, those two lines don't seem to be relevant at all to the algorithm:
164342019
It has to do with unsigned ints, in this case vector.size(). Casting all .size() values to int in the second submission makes it AC.
Here's what I think is happening here:
The first submission passes since you are dealing with 64-bit numbers on both sides, so converting unsigned to signed you get the same number even after overflow, so == works ok. Likewise for the third submission with 32-bit numbers.
For the second submission, you convert a signed integer to an unsigned one in 32-bit (after some possible overflow), which is not the same as the signed int in 64-bit, so == doesn't work as intended. Removing #define int long long fixes this.
Moral of the story: always cast .size() to a signed int to avoid situations like this.
Thanks a lot, that would explain what happened.
[unrelated] why are you removing the legendary line :(
isn't the TL on D too tight?
My code with long long int gives TLE but when I change long long int to int, it passes in 600ms.
My solution is $$$O(n\log^{2}n)$$$. Was this done on purpose, to cut off solutions with this complexity?
Based on our testing, the segment tree has a high enough constant factor that an $$$O(n\log^2 n)$$$ solution is very hard to pass. You should either use bitsets or aim for $$$O(n\log n)$$$. We apologize for the strict TL.
But $$$O(n\log^2 n)$$$ is too much for a problem that can be solved in linear time, so it's quite OK that your solution TLE-d.
I disagree, the $$$O(n\log^{2}n)$$$ solution was more obvious to me. I couldn't come up with that observation.
Can you share your Approach towards problem D?
For E, I would like to read a simpler version of the paragraph starting with "Proof: The upper bound is pretty clear...".
What is the upper bound here? What exactly is decreasing?
This was written when there was the second operation (of deleting numbers). The editorial is updated now. Hope this eliminates your confusion.
I'm very bad with segment trees, so I came up with the following solution for problem E. Let's maintain two sets $$$S$$$ and $$$T$$$ ("positive" and "negative" sets) such that the number on the board equals $$$\sum_{x\in S} 2^x - \sum_{x\in T} 2^x$$$.
The following properties should hold:
The second property is important because it allows us to answer queries by taking the largest element of $$$S$$$ and subtracting $$$1$$$ if the largest element of $$$T$$$ is greater than the second largest element of $$$S$$$. (If the property didn't hold, we might need to deal with something like $$$2^6-2^5-2^4$$$.)
To add $$$2^x$$$, do the following:
Subtraction is entirely analogous. To maintain the second property, just check if it doesn't hold and if so, delete $$$x$$$ from $$$S$$$, $$$x-1$$$ from $$$T$$$, add $$$x-1$$$ to $$$S$$$, and check the property again.
For all of these operations, we can do an amortized analysis by using $$$|S|+|T|$$$ as a potential function. The overall time complexity is $$$O((n+q)\log n)$$$ since we use sets to maintain the maximum.
The implementation of this is very easy compared to the other solutions.
In the complexity, how do you account for the 'recursive' updates? Is there a bound on the number of updates? (a constant factor maybe? Certainly not just $$$n + q$$$ updates)
That's what the amortized analysis handles, because all the extra steps that we do will reduce the value of $$$|S|+|T|$$$. You should be able to find some resources by searching "amortized analysis" and "potential method".
Another approach for D that is perhaps simpler to understand (TC: $$$O(q\cdot n\log n)$$$):
Submission: https://codeforces.me/contest/1705/submission/164342095
Auto comment: topic has been updated by MarkBcc168 (previous revision, new revision, compare).
C is a nice implementation problem. It also gave me a lesson:
BE EXTREMELY CAREFUL using int when you see anything like $$$x \le 10^{18}$$$ or $$$x \le 10^{12}$$$. I literally spent ~1h struggling on C because of this :(
Can anyone please explain why, for problem C, there is a runtime error in my solution. solution
.......got my mistake.(solved)
Coz of integer overflow. You need to use long long. Made the same mistake :")
Nice contest! Thanks for the fast editorial! :)
I followed the editorial solution but I'm getting error for a particular test case that isn't even visible and I can't figure out why. Any help!? Here's my solution: https://codeforces.me/contest/1705/submission/164347601
Take a look at Ticket 15743 from CF Stress for a counter example.
PROBLEM: can someone please explain why this code didn't work? Please!
Take a look at Ticket 15750 from CF Stress for a counter example.
Why is it TLE?
164335406
MarkBcc168 Please help!!
Funnily enough, in E, I never realised that we are just maintaining a binary number. The final algorithm you get is very similar to the editorial's, albeit slower by a constant factor.
Here's what I did, let $$$f_x$$$ be the frequency of $$$x$$$ on the blackboard. Let $$$g_x$$$ be the maximum possible frequency of $$$x$$$ after some operations. It's easy to see that $$$g_x = f_x + \lfloor g_{x - 1} / 2 \rfloor$$$. The answer is simply the largest $$$z$$$ such that $$$g_z$$$ is non-zero.
Now, if we can somehow efficiently maintain $$$g$$$ after $$$f$$$ increases/decreases by $$$1$$$ we'd be done.
Suppose we decrement $$$f_x$$$ by $$$1$$$. Consider what happens to $$$g$$$; let's call it $$$g'$$$ after the change. Obviously $$$g_y' = g_y$$$ for all $$$y < x$$$, and $$$g_x' = g_x - 1$$$. What about the following values? Since $$$g_{x+1} = f_{x+1} + \lfloor g_x / 2 \rfloor$$$, either $$$g_x$$$ is odd, in which case $$$\lfloor g_x' / 2 \rfloor = \lfloor g_x / 2 \rfloor$$$ and $$$g_{x+1}$$$ is unchanged, or $$$g_x$$$ is even, in which case the floor drops by $$$1$$$.
In the former case, we don't need to propagate the changes any further; in the latter case, it's as if we reduced $$$f_{x + 1}$$$ by $$$1$$$ and the process repeats.
In effect, we find the maximal interval $$$[x, r)$$$ such that $$$g_y$$$ is even for all $$$y \in [x, r)$$$ and then reduce $$$g_y$$$ by $$$1$$$ for all $$$y \in [x, r]$$$. Similar thing happens on incrementing $$$f_x$$$.
We can maintain all this information using two lazy segment trees, one storing $$$g_x$$$ and $$$g_x \text{ mod } 2$$$ on the leaves, and their sum in the internal nodes. The first segtree supports range increment, and the second range flip operation.
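To sanity-check the recurrence, here is a naive recomputation of $$$g$$$ (a throwaway sketch with my own names, far too slow to rerun per query at full limits, but handy for stress-testing the segment-tree version):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive O(n) recomputation following g[x] = f[x] + floor(g[x-1] / 2).
// Returns the largest x with g[x] > 0, i.e. the answer to a query.
int maxReachable(const vector<long long>& f) {
    long long carry = 0;            // floor(g[x-1] / 2)
    int best = -1;                  // largest x seen with g[x] > 0
    for (int x = 0; x < (int)f.size(); ++x) {
        long long g = f[x] + carry; // maximum possible count of value x
        if (g > 0) best = x;
        carry = g / 2;
    }
    // values above the array can still be produced while the carry is nonzero
    while (carry > 0) { best += 1; carry /= 2; }
    return best;
}
```

For example, with two copies of $$$2^0$$$ (so `f = {2}`) the best reachable value is $$$2^1$$$.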
Can someone please recommend some problems similar to D?
this problem I think
Problem C: can anyone tell me why my solution is giving a runtime error? [click here](https://codeforces.me/contest/1705/submission/164331309)
From diagnostics: "Probably, the solution is executed with error 'division by zero' on the line 37".
Why is my solution for C throwing a memory limit exceeded error?
I am iterating over the queries and splitting the queries with length > n as subcomponents and evaluating the final answer.
I am essentially generating 40 * 40 * 1000 (q^2 * TC) query ranges for each test case which should still pass the given memory limit right?
I don't get how to convert E to those binary numbers ? I mean i get the part for the sum being constant. But what about the condition of first number in the streak of 1's appearing more than once?
Does anybody have the same idea as me? It got a Memory Limit Exceeded: 164356635
I'm trying to enumerate all the indexes for the lowercase chars, `m['a'..'z'] = [pos1, pos2, pos3, ...]`; when the query gives me a pos, I search for it and return the char. I think the pos array is too large.
Storing, for each element of the final string, the position in the original string it is equivalent to (as you are doing) doesn't change much compared to just constructing the final string: in both cases you store a number of elements equal to the final string's length, which can be up to $$$10^5 \cdot 2^{40}$$$ ($$$n$$$ duplicated $$$c$$$ times), a.k.a. way too much.
So your guess is correct: the position array is getting too big.
Thanks for the superfast editorial, interesting round with mark!
For problem B, I started traversing the array from the second-to-last element (the $$$(n-1)$$$-th for 1-indexing). If the element is 0 and the previous element is nonzero, then that 0 will contribute to reducing the nonzero element, and it will take two steps to bring that 0 back to 0 again (0 to 1 and then 1 to 0), so add 2 to the answer. If the element is nonzero, first decrement it by the number of steps taken by the 0 element (if any) that was next to it. The dust remaining at that index is the actual dust that requires individual steps to be reduced to zero, so add that number to the answer.
Here is the code for the same.
```cpp
#include <bits/stdc++.h>
#define ff(i,n) for(int i=0;i<n;i++)
using namespace std;

void solve() {
    int tc; cin >> tc;
    while (tc--) {
        int n; cin >> n;
        vector<long long> vect(n, 0);
        ff(i, n) { cin >> vect[i]; }
    }
}

int main() { solve(); return 0; }
```
This logic is somehow not working and I am unable to determine the failed test case. Can anyone please help.
One of the best CF rounds in a while!
For B: what if you can't fill all the zeros. for example, the input : 2 0 0 0 3 0 0 0 0 0 6 . In this input, you can't make all the zeros into ones and apply the operation on i and j. You would end up with 1 1 0 0 1 1 1 0 0 0 6. But then you haven't filled all the zero entries have you? You would then have to move the 1s to the right?
After you end up with 1 1 0 0 1 1 1 0 0 0 6, you can move 1->3, to do operation i->j, all elements from i to j-1 must be >=1, but the j-th element could be equal to 0 :)
> be me
> reading editorial
> read "This is a well known fact"
> sure bud
Thanks for the great contest!
Here is my solution to E, which used a Chtholly tree, i.e., using a `set` to maintain all the intervals. Actually, we just need these operations (and there are only two types of values, 0 and 1):

- set an interval
- find the first `a` after position `b`

because:
operate +1 on position x:

- a[x] == 0: set a[x] to 1
- a[x] == 1: find the first 0 after x, say at pos y; then set a[y] to 1 and set a[x..y-1] to 0.

operate -1 on position x:

- a[x] == 1: set a[x] to 0
- a[x] == 0: find the first 1 after x, say at pos y (it always exists); then set a[y] to 0 and set a[x..y-1] to 1.
So we can use a `set` or `map` to maintain the intervals of 1s. Here is my code: https://codeforces.me/contest/1705/submission/164369724
It is written in Rust, but the logic and comments should carry over to any language.
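The case analysis above is exactly binary addition and subtraction with carries. A naive array version (my own sketch, replacing the interval set with plain linear scans) may make the logic clearer:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a[i] is the i-th bit of the number; keep some slack at the back for carries.
void add(vector<int>& a, int x) {       // add 2^x
    if (a[x] == 0) { a[x] = 1; return; }
    int y = x;
    while (a[y] == 1) a[y++] = 0;       // first 0 after x is at y; clear a[x..y-1]
    a[y] = 1;                           // the carry lands on y
}

void sub(vector<int>& a, int x) {       // subtract 2^x (number stays >= 0)
    if (a[x] == 1) { a[x] = 0; return; }
    int y = x;
    while (a[y] == 0) a[y++] = 1;       // first 1 after x is at y; set a[x..y-1]
    a[y] = 0;                           // borrow from y
}
```

The Chtholly-tree version performs the same two scans with $$$O(\log n)$$$ set operations instead of linear walks.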
In problem D: I can't understand why XOR is the key idea. I see how it applies to the 4th test case, but how do we know we weren't just lucky that it works for that case?
In my opinion, after several operations, the total number of adjacent $$$01$$$ and $$$10$$$ pairs will not change.
The problem says that $$$s_{i - 1} \ne s_{i + 1}$$$; for example, "100" will change to "110", and "001" will change to "011".
We can see that the count of differences between adjoining numbers will not change.
In fact, XOR detects whether two adjoining numbers differ, because $$$0 \ XOR\ 1 = 1$$$, $$$1 \ XOR \ 0 = 1$$$, $$$0 \ XOR \ 0 = 0$$$, $$$1 \ XOR \ 1 = 0$$$.
I hope this explanation can help you.
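A quick brute-force check of that invariant (a throwaway sketch, not from the editorial): `boundaries` counts the 1-bits of the XOR string, and one allowed operation never changes it.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of adjacent unequal pairs, i.e. the number of 1s in the XOR string.
int boundaries(const string& s) {
    int c = 0;
    for (size_t i = 0; i + 1 < s.size(); ++i) c += (s[i] != s[i + 1]);
    return c;
}

// One allowed operation: flip s[i] (0 < i < n-1) when s[i-1] != s[i+1].
// Returns false if the operation is not allowed at position i.
bool applyOp(string& s, int i) {
    if (i <= 0 || i + 1 >= (int)s.size() || s[i - 1] == s[i + 1]) return false;
    s[i] ^= '0' ^ '1';  // flip '0' <-> '1'
    return true;
}
```

For example, "0100" has two boundaries; applying the operation at position 2 gives "0110", which still has two.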
For D, I understand how "cool" the solution looks like, but seriously, how can I even approach the XOR thing.
Personally, I don't think I would ever directly land on the XOR thing. The human, logical approach to the solution (and possibly to that XOR realization) is to look for blocks/groups of elements that have some relation to each other or some relevant behaviour. That's a common and effective idea that you can train yourself to pay attention to in some problems, and you can get better at finding/identifying it.
For example, one grouping you might find is that the elements at odd positions can only change the elements at even positions and vice versa. But a more important one in this problem is the groups of ones and zeroes: you might realize that you can expand and contract those as much as you want, and from there you start to get somewhere.
F can be solved in about $$$n/2.08$$$ queries. Here is another solution: blog (in Chinese).
Video Solutions for Problem C and Problem D
F is the same as this
and solution here 20210527
Problem E can also be solved with a set. I store the segments of continuous 1s; for example, 10110111 may be stored as {1, 11, 111}. We also store each segment's left and right boundaries, so when we need to check a position, we can use std::set::lower_bound to find the section that may include that position. Modifying the set is similar to the segment-tree solution, including splitting a section and merging two sections; the total complexity is $$$O(N \log N)$$$.
Can somebody tell me how to think of D's solution? I think it's just a little hard to come up with. Is there anything that leads to the solution? thx.
Bro, I used to be very afraid of this kind of problem. But realize that the main approach is observing and trying. I found the solution this way: 110->100, 100->110. This transform told me nothing, so I tried to write some long strings, such as the strings in the sample cases: 000101 -> 001101 -> 011101 -> 011001 -> 010001 -> 010011. I read this transform from the note under the problem statement. It is really amazing that 000101 can turn into 011101. It means that 0001 can turn into 0111; furthermore, 0001000 can be turned into 0111110. That is, you can freely extend or shrink a run of consecutive ones (1111...) or zeros (000...), but you can't make a run disappear. This leads to a great conclusion: if string s1 starts with 1 and ends with 0, then s1 looks like this: 1..0...1...0...1..0.. Realizing that you can extend or shrink a run (but you can't make any run disappear), if s1 can be transformed into s2, then s2 must look the same way: 1..0...1...0...1..0.., with the same number of runs as s1.
The problem has almost been solved! Please finish the remaining work.
Oh yeah, i got it. It's just observing and trying to find the clues as you are calculating the samples. thx a lot, bro!
I've decided to create my own editorial for the problems I solved (A-D).
Just sort the array; then h[i], where 1 <= i <= n, has to be matched with h[i+n]. So just check whether every such pair satisfies the required height difference.
164266799
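A minimal sketch of the check described above (my own names):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sort the 2n heights; the arrangement exists iff person i can stand in
// front of person i+n for every i, i.e. h[i+n] - h[i] >= x for all 0 <= i < n.
bool canArrange(vector<long long> h, long long x) {
    int n = (int)h.size() / 2;
    sort(h.begin(), h.end());
    for (int i = 0; i < n; ++i)
        if (h[i + n] - h[i] < x) return false;
    return true;
}
```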
If the first n-1 rooms are already clean, print 0. Else, find the first nonzero room i less than n, and store the positions of all the 0's in front of it that are less than n. Then all you have to do is make all the 0's 1's by performing the operation from the closest 0 to the farthest 0. If room i ever becomes 0, move on to the next nonzero room and repeat. Once all the 0's are gone, the answer is just the sum of all elements except element n.
164278651
For each query, you're going back and "unconstructing" the operations. I created an array that stores the lengths of the string after performing no operations, 1 operation, etc. Then you look for the first element in that array greater than or equal to k and reduce k using it. If k ever becomes less than or equal to the original string's length, just print s[k-1]. This process can be done iteratively or recursively, though iteratively was slightly easier for me and is probably faster.
164294136
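The backward walk described above could look roughly like this (hypothetical names; `ops[i] = {l, r}` is assumed to be 1-indexed into the string as it was just before operation i):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Map a 1-indexed position k back through the copy-append operations until
// it falls inside the original string s, then return that character.
char charAt(const string& s, const vector<pair<long long, long long>>& ops,
            long long k) {
    int c = (int)ops.size();
    vector<long long> len(c + 1);   // len[i] = string length after i operations
    len[0] = (long long)s.size();
    for (int i = 0; i < c; ++i)
        len[i + 1] = len[i] + (ops[i].second - ops[i].first + 1);
    for (int i = c - 1; i >= 0; --i)
        if (k > len[i])             // k lies in the part appended by operation i
            k = ops[i].first + (k - len[i] - 1);
    return s[k - 1];
}
```

For example, with s = "mark" and one operation copying [1, 4], the string becomes "markmark" and position 6 maps back to position 2, the 'a'.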
First of all, s[0] = t[0] and s[n-1] = t[n-1] must hold if there is a solution, because those 2 elements can't be changed by the operation. Now let's define a group to be a maximal run of consecutive 1's. The key observation is that a group can be grown or shrunk to any size as long as it doesn't touch any other groups. So if there is to be a solution, the number of groups in s and t must be the same. Now, to count the number of moves, we match group i in s with group i in t for all groups i. The cost to move group i in s to its counterpart in t is the absolute difference of their left sides plus the absolute difference of their right sides. Sum this value over all pairs of groups and you have your answer.
164312198
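One way to code that matching (my own sketch; the boundaries between unequal neighbours play the role of the groups' left and right sides):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of operations to turn s into t, or -1 if impossible.
// The positions i with s[i] != s[i+1] (segment boundaries) each move by one
// per operation and can never merge or split, so we match them in order.
long long minMoves(const string& s, const string& t) {
    if (s.size() != t.size() || s[0] != t[0] || s.back() != t.back()) return -1;
    vector<int> a, b;
    for (size_t i = 0; i + 1 < s.size(); ++i) {
        if (s[i] != s[i + 1]) a.push_back((int)i);
        if (t[i] != t[i + 1]) b.push_back((int)i);
    }
    if (a.size() != b.size()) return -1;  // different number of groups
    long long ans = 0;
    for (size_t i = 0; i < a.size(); ++i) ans += abs(a[i] - b[i]);
    return ans;
}
```

For example, "0100" -> "0110" -> "0010" takes two operations, matching the sum of boundary shifts.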
Our solutions are identical, but I find that my vocabulary is not enough when writing editorials.
Hi, thank you so much for your take on the problems! If you don't mind could you explain the logic behind this line of your code in problem C:
k = operations[i].second - (origlen[i] - k);
Since we're going back in time, right now we know that the index of the kth character must be between the left and right values of operation i. This is true because operation i was the first value we found >= k. What `origlen[i] - k` does is get the distance between the end of the string and the kth character. But since the end of the string is the rth character of the previous string, we can find the same character by going that same distance down from that right value instead.
The editorial with hints is excellent :)
https://codeforces.me/contest/1705/submission/164391480 another solution for E using SegmentTree
@Omar_Mohammad please can you tell what we are storing at one node of segment tree, it would be very helpful.
It's a sum segment tree (each node holds the sum of its segment, and each leaf node holds either 0 or 1) that supports assigning a value to a range [l, r], with some modifications to its implementation to get the first one, the first zero, and the last one in a given range.
https://codeforces.me/contest/1705/submission/164346961
Another solution for E using blocks (sqrt decomposition).
It's $$$O(n\sqrt{n\log n})$$$, and it has successfully passed the system test!
The complexity of using bitsets to solve D is $$$O(n^2/w)$$$. But the maximum n is $$$2 \cdot 10^5$$$, so I don't see how this complexity can pass.
Can anyone explain this?
I can give you the answer that "bitsets are fast lol", but I think that is not very satisfying, so I shall summon sslotin and hopefully he can give a better explanation ;)
Bitsets are fast lol.
But seriously, if you are performing a simple bitwise operation, then, by the virtue of vectorization (processing a block of 256 bits per instruction) and instruction-level parallelism (capacity to execute two such instructions per cycle), $$$w=512$$$, or about $$$512 \times 3.60 \times 10^9 \approx 2 \cdot 10^{12}$$$ bitwise operations per second on CF server, which theoretically even lets you submit a $$$O(n^2)$$$ algorithm for $$$n=10^6$$$.
Thank you very much for your explanation.
The editorial of D is a bit obscure for what it is. A more natural way to make the key observation is to note that the number of "segments" of 1's, alongside the endpoints of the string, is invariant under operations. After checking this condition, simply notice that each operation moves a boundary of some segment by 1 unit (e.g. 011000 -> 011100 expands the right boundary by 1), so repeatedly greedily matching the leftmost segments of both strings gives the answer if we take the absolute differences between their boundaries.
Is there a way that i can access the testcase on which i fail? when i click on my failed submission it shows first few cases of each test but if it is failing the 272nd testcase then can i know what it was?
Actually there is no method to get the testcases.
F can be solved with "Entropy" (https://en.wikipedia.org/wiki/Entropy_(information_theory)) .
You can split the positions into some groups, where each group includes at most B elements.
You find the positions of 'F' in each group independently, and use entropy to find the optimal choice, which can be computed in $$$O(2^{2B})$$$ for each group.
Solution: 164420897 (B = 12, and the number of queries is about 520 for n = 1000).
Awesome approach! I felt that the problem is about something more general, and it was entropy indeed.
Thanks :)
Great Round!!
But why did my submission on B get TLE on test 5?
Submission: https://codeforces.me/contest/1705/submission/164288648
F is a great problem, thank you!
There is an alternate, easier solution (without using the XOR operation) for D, where you store the indices where a change is made in a queue. Then the current element at index i will be (s[i]-48+length)%2, where length is the length of the queue that stores the indices where changes from 0 to 1 can be made, with element values >= i and < n-1.
Here is the code-> https://codeforces.me/contest/1705/submission/170797410