1847:A — The Man who became a God
- Author — PoPularPlusPlus
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ull unsigned long long
#define pb(e) push_back(e)
#define sv(a) sort(a.begin(),a.end())
#define sa(a,n) sort(a,a+n)
#define mp(a,b) make_pair(a,b)
#define all(x) x.begin(),x.end()
void solve(){
int n , k;
cin >> n >> k;
ll arr[n];
for(int i = 0; i < n; i++)cin >> arr[i];
vector<ll> v;
ll sum = 0;
for(int i = 1; i < n; i++){
v.pb(abs(arr[i] - arr[i-1]));
sum += v.back();
}
sort(all(v));
for(int groups = 1; groups < k; groups++){
sum -= v.back();
v.pop_back();
}
cout << sum << '\n';
}
int main(){
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;cin >> t;while(t--)
solve();
return 0;
}
- Author — PoPularPlusPlus
#include <iostream>
#include <vector>
using namespace std;
#define ll long long
#define ull unsigned long long
#define pb(e) push_back(e)
#define sv(a) sort(a.begin(),a.end())
#define sa(a,n) sort(a,a+n)
#define mp(a,b) make_pair(a,b)
#define all(x) x.begin(),x.end()
void solve(){
int n;
cin >> n;
int arr[n];
for(int i = 0; i < n; i++)cin >> arr[i];
int cur = arr[0];
int part = 1;
for(int i = 0; i < n; i++){
cur &= arr[i];
if(cur == 0){
if(i == n-1)break;
part++;
cur = arr[i + 1];
}
}
if(cur != 0)part--;
part = max(part,1);
cout << part << '\n';
}
int main(){
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;cin >> t;while(t--)
solve();
return 0;
}
1847:C-Vampiric Powers, anyone?
- Author — PoPularPlusPlus
Whatever we append at the end of the array can only be the XOR of some contiguous subarray of the original array.
Let's denote by $$$f(u,v)$$$ the XOR of all $$$a_i$$$ such that $$$\min(u,v) \leq i < \max(u,v)$$$. In the first operation you append $$$f(n,i)$$$, i.e. $$$[u_1,v_1)=[n,i)$$$. It can be proven that in the $$$k$$$-th operation $$$f(u_k,v_k) = f(v_{k-1},v_k)$$$, i.e. the appended value is again the XOR of a range.
Suppose we have already taken $$$k$$$ ranges that satisfy this property, and we now add the $$$(k+1)$$$-th range. First we take the $$$k$$$-th range $$$f(u_k,v_k)$$$ and XOR it with the range $$$f(u_{k-1}, v_{k-1})$$$. As $$$[u_k, v_k)$$$ and $$$[u_{k-1}, v_{k-1})$$$ share an endpoint, the result for these ranges is again a range.
For two ranges $$$f(x,y)$$$ and $$$f(y,z)$$$ that share the endpoint $$$y$$$: if the ranges do not overlap, the result is their concatenation, $$$f(x,z)$$$; if they do overlap, the overlapping part cancels out and the result is their difference, which is again $$$f(x,z)$$$.
Now, we maintain a boolean array $$$b$$$ where $$$b_i$$$ is $$$1$$$ if there is some $$$j$$$ such that $$$a_1 \oplus a_2 \oplus \cdots \oplus a_j = i$$$. Initially, $$$b$$$ is all $$$0$$$. We loop $$$j$$$ from $$$1$$$ to $$$n$$$ and, for each $$$k$$$, check whether $$$b_k=1$$$. If it is, then there is some position $$$p < j$$$ such that $$$a_1 \oplus a_2 \oplus \cdots \oplus a_p = k$$$. If we take the XOR of the range $$$(p,j]$$$, it equals $$$k \oplus a_1 \oplus a_2 \oplus \cdots \oplus a_j$$$ (as $$$a_1 \oplus a_2 \oplus \cdots \oplus a_p$$$ gets cancelled). The running prefix XOR $$$a_1 \oplus a_2 \oplus \cdots \oplus a_j$$$ can be stored as we loop. We loop over all possible prefix-XOR values $$$k$$$ (at most $$$2^8$$$ of them) rather than over all prefix positions because $$$n$$$ is large while the values are bounded by $$$2^8$$$.
Time Complexity — $$$O(n \cdot 2^8)$$$.
#include <bits/stdc++.h>
using namespace std;
int main() {
cin.tie(0)->sync_with_stdio(0);
int ntest;
cin >> ntest;
while (ntest--) {
int n;
cin >> n;
vector<int> a(n);
for (auto &i : a)
cin >> i;
int const max_value = 1 << 8;
vector<char> has_pref(max_value);
has_pref[0] = true;
int cur_xor = 0;
int ans = 0;
for (auto i : a) {
cur_xor ^= i;
for (int pref = 0; pref < max_value; ++pref) {
if (has_pref[pref]) {
ans = max(ans, pref ^ cur_xor);
}
}
has_pref[cur_xor] = true;
}
cout << ans << '\n';
}
}
- Author — PoPularPlusPlus
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ull unsigned long long
#define pb(e) push_back(e)
#define sv(a) sort(a.begin(),a.end())
#define sa(a,n) sort(a,a+n)
#define mp(a,b) make_pair(a,b)
#define vf first
#define vs second
#define all(x) x.begin(),x.end()
void solve(){
int n , m , q;
cin >> n >> m >> q;
string st;
cin >> st;
vector<pair<int,int>> ranges;
for(int i = 0; i < m; i++){
int l , r;
cin >> l >> r;
l--;
r--;
ranges.pb(mp(l,r));
}
set<int> s;
for(int i = 0; i < n; i++)s.insert(i);
vector<int> v;
int pos_in_v[n];
memset(pos_in_v,-1,sizeof pos_in_v);
for(int i = 0; i < m; i++){
auto it = s.lower_bound(ranges[i].vf);
vector<int> toerase;
while(it != s.end() && (*it) <= ranges[i].vs){
toerase.pb((*it));
v.pb(toerase.back());
pos_in_v[toerase.back()] = v.size()-1;
it++;
}
while(toerase.size()){
s.erase(toerase.back());
toerase.pop_back();
}
}
int cnt = 0;
for(int i = 0; i < n; i++){
if(st[i] == '1')cnt++;
}
int ans = 0;
for(int i = 0; i < min(cnt , (int)v.size()); i++){
if(st[v[i]] == '0')ans++;
}
while(q--){
int pos;
cin >> pos;
pos--;
if(pos_in_v[pos] != -1 && pos_in_v[pos] < cnt){
if(st[pos] == '0'){
ans--;
}
else ans++;
}
if(st[pos] == '0'){
st[pos] = '1';
cnt++;
if(cnt <= v.size() && st[v[cnt-1]] == '0')ans++;
}
else {
st[pos] = '0';
if(cnt > 0 && cnt <= v.size() && st[v[cnt-1]] == '0')ans--;
cnt--;
}
cout << ans << '\n';
}
}
int main(){
ios_base::sync_with_stdio(false);
cin.tie(NULL);
//int t;cin >> t;while(t--)
solve();
return 0;
}
- Author — StArChAn
#include<bits/stdc++.h>
using namespace std;
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
int n, ans[5005], holy[10][10][10];
map<vector<int>, int> GG;
int area(int x, int y, int z)
{
int ok = max(x, max(y, z));
int sum = x+y+z;
if(2*ok >= sum) return 0;
return sum*(sum-2*x)*(sum-2*y)*(sum-2*z);
}
int query(vector<int> v)
{
sort(v.begin(), v.end());
if(GG.find(v) != GG.end()) return GG[v];
cout << "? " << v[0] << " " << v[1] << " " << v[2] << endl;
int ret; cin >> ret; assert(ret != -1); return GG[v] = ret;
}
void print(bool found = true)
{
if(!found)
{
cout << "! -1" << endl;
exit(0);
}
cout << "! ";
for(int i = 1; i <= n; i++) cout << ans[i] << " ";
cout << endl; exit(0);
}
bool works()
{
for(int i = 1; i <= n; i++)
{
for(int j = i+1; j <= n; j++)
{
for(int k = j+1; k <= n; k++)
{
if(area(ans[i], ans[j], ans[k]) != holy[i][j][k])
return false;
}
}
}
return true;
}
void brute()
{
for(int i = 1; i <= n; i++)
{
for(int j = i+1; j <= n; j++)
{
for(int k = j+1; k <= n; k++)
holy[i][j][k] = query({i, j, k});
}
}
int mask = 1<<(2*n);
int lol = 0;
int workser = -1;
for(int i = 0; i < mask; i++)
{
int copy = i;
for(int k = 1; k <= n; k++)
{
ans[k] = 1+copy%4;
copy/=4;
}
if(works())
{
workser = i;
lol++;
}
}
assert(lol);
if(lol > 1) print(false);
int copy = workser;
for(int k = 1; k <= n; k++)
{
ans[k] = 1+copy%4;
copy/=4;
}
print();
}
signed main()
{
ios_base::sync_with_stdio(false); cin >> n;
if(n <= 8) brute();
map<int, int> nice;
for(int i = 1; i <= 4; i++) nice[area(i, i, i)] = i;
int a, b, X;
{
bool found = false;
for(int i = 1; i <= min(n, 9) && !found; i++)
{
for(int j = i+1; j <= min(n, 9) && !found; j++)
{
for(int k = j+1; k <= min(n, 9) && !found; k++)
{
if(nice[query({i, j, k})])
{
a = i;
b = j;
X = nice[query({i, j, k})];
found = true;
}
}
}
}
}
if(X > 1)
{
map<int, int> ok;
for(int i = 1; i <= 4; i++) ok[area(i, X, X)] = i;
for(int i = 1; i <= n; i++) ans[i] = ((i==a) || (i==b))? X : ok[query({a, b, i})];
print();
return 0;
}
vector<int> non_ones;
for(int i = 1; i <= n; i++)
{
if(i == a || i == b || query({i, a, b}))
ans[i] = 1;
else
non_ones.push_back(i);
if(non_ones.size() == 4) break;
}
if(non_ones.empty()) print();
map<int, int> ok;
for(int i = 2; i <= 4; i++) ok[area(1, i, i)] = i;
int x, y, Z; x = y = -1; Z = 0;
for(int i = 0; i < non_ones.size() && (Z == 0); i++)
{
for(int j = i+1; j < non_ones.size() && (Z == 0); j++)
{
if(ok[query({a, non_ones[i], non_ones[j]})])
{
x = non_ones[i];
y = non_ones[j];
Z = ok[query({a, non_ones[i], non_ones[j]})];
}
}
}
if(x == -1) print(false);
map<int, int> fin; ans[x] = ans[y] = Z;
for(int i = 1; i <= 4; i++) fin[area(Z, Z, i)] = i;
for(int i = 1; i <= n; i++) ans[i] = ans[i]? ans[i] : fin[query({x, y, i})];
print();
return 0;
}
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
const int N = 5005;
int ans[N];
int n;
int dp[5][5][5];
int holy[11][11][11];
int area(int a, int b, int c){
if (a >= b + c) return 0;
if (b >= a + c) return 0;
if (c >= a + b) return 0;
int s = (a + b + c);
return s * (s - 2 * a) * (s - 2 * b) * (s - 2 * c);
}
int query(int a, int b, int c){
cout << "? " << a << " " << b << " " << c << endl;
int ans; cin >> ans;
return ans;
}
void print(bool found = true){
if (!found){
cout << "! -1\n";
exit(0);
} else {
cout << "! ";
for (int i = 1; i <= n; i++) cout << ans[i] << " ";
cout << endl;
exit(0);
}
}
bool works(){
for (int i = 1; i <= n; i++){
for (int j = i + 1; j <= n; j++){
for (int k = j + 1; k <= n; k++){
if (area(ans[i], ans[j], ans[k]) != holy[i][j][k])
return false;
}
}
}
return true;
}
void brutesmall(){
for (int i = 1; i <= n; i++){
for (int j = i + 1; j <= n; j++){
for (int k = j + 1; k <= n; k++){
holy[i][j][k] = query(i, j, k);
}
}
}
int mask = 1 << (2 * n);
int lol = 0;
for (int i = 0; i < mask; i++){
int copy = i;
for (int i = 1; i <= n; i++){
ans[i] = 1 + copy % 4;
copy /= 4;
}
if (works()){
lol++;
}
}
if (lol > 1) print(false);
for (int i = 0; i < mask; i++){
int copy = i;
for (int i = 1; i <= n; i++){
ans[i] = 1 + copy % 4;
copy /= 4;
}
if (works()){
print();
}
}
}
void Solve()
{
cin >> n;
for (int i = 1; i <= 4; i++){
for (int j = 1; j <= 4; j++){
for (int k = 1; k <= 4; k++){
dp[i][j][k] = area(i, j, k);
}
}
}
if (n < 9){
brutesmall();
}
vector <int> tuple(4, -1);
while (true){
if (tuple[0] != -1) break;
int p1 = 1 + RNG() % n;
int p2 = 1 + RNG() % n;
int p3 = 1 + RNG() % n;
if (p1 == p2 || p2 == p3 || p3 == p1) continue;
int ar = query(p1, p2, p3);
for (int j = 1; j <= 4; j++){
if (ar == dp[j][j][j]){
tuple[0] = p1;
tuple[1] = p2;
tuple[2] = p3;
tuple[3] = j;
}
}
}
for (int i = 0; i < 3; i++) ans[tuple[i]] = tuple[3];
if (tuple[3] == 1){
vector <int> nv;
for (int i = 1; i <= n; i++){
if (ans[i] > 0) continue;
int fetch = query(tuple[0], tuple[1], i);
if (fetch == 0) {
nv.push_back(i);
if (nv.size() > 3) break;
} else {
ans[i] = 1;
}
}
if (nv.size() == 0) print();
vector <int> ntuple(3, -1);
for (int i1 = 0; i1 < nv.size(); i1++){
for (int i2 = i1 + 1; i2 < nv.size(); i2++){
int ok = query(tuple[0], nv[i1], nv[i2]);
for (int j = 2; j <= 4; j++){
if (ok == dp[1][j][j]){
ntuple[0] = nv[i1];
ntuple[1] = nv[i2];
ntuple[2] = j;
}
}
}
}
if (ntuple[0] == -1) print(false);
ans[ntuple[0]] = ntuple[2];
ans[ntuple[1]] = ntuple[2];
tuple[0] = ntuple[0];
tuple[1] = ntuple[1];
tuple[2] = -1;
tuple[3] = ntuple[2];
}
for (int i = 1; i <= n; i++){
if (ans[i] > 0) continue;
int fetch = query(tuple[0], tuple[1], i);
for (int j = 1; j <= 4; j++){
if (dp[tuple[3]][tuple[3]][j] == fetch)
ans[i] = j;
}
}
print();
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// cin >> t;
for(int i = 1; i <= t; i++)
{
Solve();
}
return 0;
}
- Idea — astoria
- Prepared by — PoPularPlusPlus
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ull unsigned long long
#define pb(e) push_back(e)
#define sv(a) sort(a.begin(),a.end())
#define sa(a,n) sort(a,a+n)
#define mp(a,b) make_pair(a,b)
#define vf first
#define vs second
#define all(x) x.begin(),x.end()
const int B = 31;
struct item {
ll pos , bit, idx;
};
item make(ll pos , ll bit , ll idx){
item res;
res.pos = pos;
res.bit = bit;
res.idx = idx;
return res;
}
void solve(){
int n;
cin >> n;
int qu;
cin >> qu;
int arr[n];
for(int i = 0; i < n; i++)cin >> arr[i];
if(n == 1){
while(qu--){
int x;
cin >> x;
if(x < arr[0])cout << 1 << '\n';
else cout << -1 << '\n';
}
return;
}
vector<pair<ll,ll>> v;
queue<item> q;
bool vis[n][B];
memset(vis,0,sizeof vis);
for(int i = 0; i < n; i++){
for(int j = 0; j < B; j++){
if((arr[i] & (1 << j)) > 0){
vis[i][j] = 1;
}
}
v.pb(mp(i , arr[i]));
}
for(int i = 0; i < n - 1; i++){
v.pb(mp(n+i,arr[i]|arr[i+1]));
}
for(int j = 0; j < B; j++){
if(vis[n-1][j])q.push(make(n+n-1,j,0));
}
for(int i = 0; i < n - 1; i++){
for(int j = 0; j < B; j++){
if(vis[i][j])q.push(make(n+n+i,j,(i+1)%(n-1)));
}
}
int val[n-1];
for(int i = 0; i < n-1; i++)val[i] = arr[i]|arr[i+1];
while(q.size()){
item it = q.front();
q.pop();
if((val[it.idx] & (1 << it.bit)) > 0){
continue;
}
val[it.idx] += (1 << it.bit);
if(v.back().vf == it.pos){
v.pop_back();
}
v.pb(mp(it.pos , val[it.idx]));
q.push(make(it.pos + n , it.bit , (it.idx + 1)%(n-1)));
}
// first part is done: v[i].first holds positions in ascending order and v[i].second holds the value that position has become
for(int i = 1; i < v.size(); i++){
v[i].vs = max(v[i].vs , v[i-1].vs);
}
while(qu--){
int x;
cin >> x;
ll l = 0 , r = v.size()-1;
ll res = -1;
while(l <= r){
int mid = (l + r)/2;
if(v[mid].vs <= x){
l = mid + 1;
}
else {
r = mid - 1;
res = v[mid].vf+1;
}
}
cout << res << '\n';
}
}
int main(){
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;cin >> t;while(t--)
solve();
return 0;
}
How was this Believer.Code solution accepted? In this solution a vector is initialised with size 1e5 and there are up to 1e4 test cases, so that is about 1e9 operations; how can that run within the 1-second time limit?
I think you linked the wrong solution.
Nope, it's alright, Believer.
n^2 solution got accepted in problem C.
link: 212424847
Can anyone explain why?
Weak test cases probably contributed, but the other contributing factor is that your code was vectorized (Godbolt link). The same code without the pragmas will TLE on test 5: 212505218
Assuming CF's machines have 256-bit registers, your code would get a nearly $$$8\times$$$ speedup (theoretically). You can read more about vectorization here (short version) and here (more in-depth).
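For anyone curious what "vectorized" means in practice, here is a tiny standalone sketch (not the hacked submission itself; the brute-force loop is made up for illustration) of the kind of pragmas that let GCC emit AVX2 SIMD for an $$$O(n^2)$$$ pairwise max-XOR scan:
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx2")
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> pref(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        int x;
        cin >> x;
        pref[i] = pref[i - 1] ^ x; // prefix XORs of the input
    }
    int ans = 0;
    // O(n^2) pairwise scan; the branch-free body with independent iterations
    // is what allows the compiler to auto-vectorize it.
    for (int i = 0; i <= n; i++)
        for (int j = i + 1; j <= n; j++)
            ans = max(ans, pref[i] ^ pref[j]);
    cout << ans << '\n';
    return 0;
}
Without the two pragma lines the same loop typically compiles to scalar code, which is the gap the comment above is pointing at.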
Thank you very much for the explanation.
But still, the test cases or the constraints were weak.
hmm
What algorithm are you talking about? I'll be going through the editorial later, but if there is a way other than the one mentioned in the editorial, please do link it.
Problem C can be reduced to finding the maximum XOR of a subarray, which is a standard problem.
Reducing problem C to finding the maximum XOR of a subarray is not a trivial task (at least it was not for me; when I finally submitted a solution I was still not sure whether it was correct).
I agree, it's not trivial. In fact, I didn't even remotely think of that; I just guessed a way around it (and got negative delta ultimately).
The contest was fair except for the initial queue problem, which has happened before so not that big a deal.
Yes, it was not so easy to see that we have to find the maximum XOR; I figured that out after spending nearly 40 minutes on the problem. But sadly I didn't know the implementation. I just wish I had known it was a standard problem, because it would have made my rating go up.
The community is pretty weird, the guy asked "what algorithm" and I just answered that, in no way did I blame the authors for carelessness.
I think people downvoted because they thought I was critical of the authors.
https://www.geeksforgeeks.org/find-the-maximum-subarray-xor-in-a-given-array/ I saw the same code in a lot of submissions today along with the same comments lol.
I myself copied the code from my old submission in this problem https://cses.fi/problemset/task/1655
But the point of this problem is to transform its several conditions into a standard problem. Can you find that train of thought on Google? I believe the main idea of the transformation is what the author wanted to talk about.
Can you please explain: in C, if we take i=1 and get a new power, then take i=1 again, won't we get the max power? Why will that not work? Please help.
Shouldn't it be 0 if you do so? It's XOR; any number XORed with itself is 0, and by doing so you XOR the 1st through n-th numbers twice each, getting 0 in the end.
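To spell that out with a tiny worked equation (assuming the first operation was done with $$$i = 1$$$, so it appended $$$b = a_1 \oplus a_2 \oplus \cdots \oplus a_n$$$): repeating $$$i = 1$$$ now XORs the whole current array, giving $$$a_1 \oplus a_2 \oplus \cdots \oplus a_n \oplus b = b \oplus b = 0$$$, so the second operation can only append a $$$0$$$.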
Sorry bruh, I can't clearly understand what you mean, but I can share my idea about this problem. (Btw, this problem probably cannot be solved greedily.)
We can see that the new "strength" equals a suffix XOR of the array. Keep in mind that the XOR of two identical numbers equals 0. Now there are two cases.
First, if you choose the suffix starting at k1 and then choose the suffix starting at k2, you get a value equal to the XOR of a[k2] through a[k1-1], because a[k1] through a[n] appear twice and their XOR is 0.
From this conclusion, we can clearly see that we need to find a contiguous subarray whose XOR is as big as possible.
Notice that you can compute prefix XORs, and by the properties of XOR, the XOR of any subarray is the XOR of two prefix XORs.
Choosing 2 numbers among the n prefix XORs to maximize their XOR can be solved with a trie.
Hope this comment can help you.
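In case a concrete reference helps, here is a rough self-contained sketch of the trie idea from the comment above (my own sketch, not the editorial's code; it assumes $$$0 \le a_i < 2^8$$$ as in this problem, so 8 bits per value):
#include <bits/stdc++.h>
using namespace std;

// Binary trie over prefix XORs: for every prefix, query the best partner
// prefix to maximize the XOR of the subarray between them.
const int BITS = 8; // the problem guarantees a_i < 2^8

struct Trie {
    vector<array<int, 2>> nxt;
    Trie() { nxt.push_back({-1, -1}); } // root node
    void insert(int x) {
        int cur = 0;
        for (int b = BITS - 1; b >= 0; b--) {
            int bit = (x >> b) & 1;
            if (nxt[cur][bit] == -1) {
                nxt[cur][bit] = nxt.size();
                nxt.push_back({-1, -1});
            }
            cur = nxt[cur][bit];
        }
    }
    int maxXorWith(int x) { // maximum of x ^ y over all inserted y
        int cur = 0, res = 0;
        for (int b = BITS - 1; b >= 0; b--) {
            int want = ((x >> b) & 1) ^ 1; // prefer the opposite bit
            if (nxt[cur][want] != -1) { res |= 1 << b; cur = nxt[cur][want]; }
            else cur = nxt[cur][want ^ 1];
        }
        return res;
    }
};

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        Trie trie;
        trie.insert(0); // empty prefix
        int pref = 0, ans = 0;
        for (int i = 0; i < n; i++) {
            int x;
            cin >> x;
            pref ^= x;
            ans = max(ans, trie.maxXorWith(pref)); // best subarray ending at i
            trie.insert(pref);
        }
        cout << ans << '\n';
    }
    return 0;
}
Each prefix XOR is queried against all previously inserted prefix XORs, so the best subarray XOR is found in about O(n * BITS) per test.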
I somehow cheesed F. 212418362: my submission computes every element of the (n-1)-sized sequence for each distinct length that is queried, so the complexity is O(n*k), where k is the number of distinct lengths appearing in the input. I don't know whether there is a test that exploits it; random cases can't.
Hacked.
Got AC one more time (212520810) by not storing a number twice.
Hacked
I generated the test with the help of simulated annealing, counting the number of times your calc() function gets called. In this test it is called 1300 times and your solution runs in about 3300 ms, but if I run the generator for longer it will produce stronger tests (it ran for only 5 minutes).
SegmentForces
Exactly!
The contest was pretty good. I somehow couldn't find how to prove the assumption in C during the contest, and got lucky with my intuition there (which was that 2 operations are enough to reach the maximum)
[deleted]
In problem D, can someone please explain "Now, this t can be found using various pbds like set,dsu,segment tree,etc.", because it is the only thing I couldn't figure out?
You need the relative order of the first appearances of each index in string t, and when relative order comes into play, a pbds or a segment tree can be used. Not sure how to use DSU.
For the segment tree, I mapped the first occurrence of each index to a counter variable starting from 1 and built a segment tree of N nodes to maintain range sums.
I was thinking, for every number from 1 to n, of finding the first interval [L,R] it lies in. How to do that in O(n log n)?
I used a set to find it. The set first takes all numbers from 1 to N and then I run lower bound on each L,R segment in order and remove the indexes which lie in [L, R]. I think it's pretty obvious that in this manner we can find the first occurrence for all indexes. An index once marked is removed from the set.
My submission — https://codeforces.me/contest/1847/submission/212454814
I assigned priorities using a little trick: make a jump array, initialised as jump[i] = i+1. Now iterate over the m segments. For the i-th segment (li, ri), start from li; if it is not visited (keep a boolean bitset), assign the current priority to it and increase the current priority by 1; then set li = jump[li] and update jump[li] to max(jump[li], ri+1).
Do this for all segments.
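If it helps, here is a rough, untested sketch of that jump-array trick as I read it (the I/O format and names like prio/jump are just for illustration, not from any actual submission):
#include <bits/stdc++.h>
using namespace std;

// Assign to each index the order in which it is first covered by a segment,
// skipping already-covered stretches via jump[] pointers.
int main() {
    int n, m;
    cin >> n >> m;
    vector<int> jump(n + 2), prio(n + 1, -1);
    for (int i = 0; i <= n + 1; i++) jump[i] = i + 1; // next candidate index
    int cur_priority = 0;
    for (int s = 0; s < m; s++) {
        int l, r;
        cin >> l >> r;
        int i = l;
        while (i <= r) {
            if (prio[i] == -1) prio[i] = cur_priority++; // first time i is covered
            int nxt = jump[i];
            jump[i] = max(jump[i], r + 1); // later visits to i skip past r
            i = nxt;
        }
    }
    for (int i = 1; i <= n; i++) cout << prio[i] << " \n"[i == n];
    return 0;
}
Here prio[i] plays roughly the same role as pos_in_v in the author's solution above.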
In your while loop the worst case is O(m), so the total time complexity of your set-finding part is at least O(m*m); so why is it not TLE'ing?
Do you realize that once the set becomes empty, it's going to be O(1)?
I did it using a lazy segment tree (I saw many others doing it with sets, but I don't know how yet). Let T be an array of length n where T[i] = x means that x is the first segment index such that L[x] <= i <= R[x].
Initialise T to inf (or any value greater than m).
The range update can be done in O(log n) using a lazy segment tree (google it).
I know this is overkill, and I am looking for someone to explain the method using a set. But if you have lazy segment tree code ready beforehand, then it really is easy to implement during the contest without any fuss.
my submission
Thank you guys
We need to know the lowest-indexed segment covering a given index i of the string s, out of the collection of all m segments. Say we have this for index i. When we move to i+1, we simply delete all segments that end at i and insert all segments that begin at i+1. So we iterate over all positions from 0 to n, performing m insert and m delete operations on the set overall, analogous to a line sweep.
https://codeforces.me/contest/1847/submission/212435999
Thanks for this well-phrased explanation( I could understand the idea from reading your comment just once!)
P.S. How did I not see this first ?!! (Maybe because I read lazy segment tree just 2 weeks ago, and when I saw the exact problem, I got sucked in.)
The way I did it was to insert every number up to n into a set. Then for each l and r, I find the lower bound of l; if it exists and is at most r, I remove it from the set. Then I find the lower bound again and keep doing this until either the set is empty or the lower bound exceeds r.
Submission: 212923478
Can anyone tell me how to do this with DSU, please?
I still can't figure it out.
The solution using DSU is actually quite elegant; it should look something like this:
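The snippet itself is not reproduced here, but the standard version of that DSU trick looks roughly like this (a sketch, assuming all we need is the order in which indexes are first covered; parent[i] points to the first index >= i that no earlier segment has covered yet):
#include <bits/stdc++.h>
using namespace std;

// DSU-on-a-line: find the next index that has not been covered yet,
// with path halving so repeated scans over covered stretches stay cheap.
int main() {
    int n, m;
    cin >> n >> m;
    vector<int> parent(n + 2);
    iota(parent.begin(), parent.end(), 0); // parent[i] = i initially
    auto find_next = [&](int x) {          // first uncovered index >= x
        while (parent[x] != x) {
            parent[x] = parent[parent[x]]; // path halving
            x = parent[x];
        }
        return x;
    };
    vector<int> order;                     // indexes in order of first coverage
    for (int s = 0; s < m; s++) {
        int l, r;
        cin >> l >> r;
        for (int i = find_next(l); i <= r; i = find_next(i)) {
            order.push_back(i);            // i is covered for the first time
            parent[i] = i + 1;             // "union" i with the block to its right
        }
    }
    for (int i : order) cout << i << ' ';
    cout << '\n';
    return 0;
}
The order in which indexes are appended to order here matches the order in which the author's set-based code above builds its vector v.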
Thanks! I can see it now! It is quite similar to the approach of https://codeforces.me/edu/course/2/lesson/7/2/practice/contest/289391/problem/B .
Why am I getting a wrong answer on D? Please help. My submission: 212445636
You should debug how you calculate the priorities of indexes. Maybe there are some other issues as well but didn't check all of it.
Input:
Must be:
inf 3 1 2 4 inf
but you find:
inf 3 1 2 inf inf inf inf
got AC... Thanks Man!
I'm pretty sure the time complexity in D is different from the one in the tutorial, as it should include m.
I have the same solution for C (the first submission, before A, as I mentioned) but it TLE'd on pretest 3. Why?
From your code: for(int i = 0; i < maxn * 1e5; i++){ trie[i][0] = trie[i][1] = 0; }
Suppose t == 1e5 -> this needs about 1e5 * maxn * t operations -> TLE.
No, not the trie one. I meant the first submission (before A, as I mentioned).
212482567: I changed the map to a vector and it passed. Don't use maps when the values are not big. Your complexity is like O(n * (1<<8) * log(1<<8)); honestly it should pass with that complexity, but the constant factor of a map is high!
Yeah, I tried changing it to an array and it passed too. Damn, man, I wouldn't have gotten all this penalty...
Why would they not make the TL 2 seconds?
D was a really good problem.
In problem B: if the AND of all the elements is not zero, why can't we divide them into more than one group? Example: 3, 3, 3. According to the editorial the answer should be 1, but we can clearly see the answer could be 3, with the minimum possible AND of each group being 3.
The sum of the ANDs of the contiguous groups should be minimum. If we divide into 3 groups, the sum is 9, whereas if we keep it as one group, the sum is only 3.
In problem B's editorial, it is written that if F(1,n) > 0 then the answer will always be 1, but consider a test case like
n = 4
2, 3, 2, 3
Then we can see F(1,n) = 2 > 0, and we can divide the array into groups, which means the answer should be > 1.
The sum over all the groups should be minimum.
if we divide the array into two groups, the sum would be F(1, 2) + F(3, 4) = 2 + 2 = 4
if we don't divide, then the sum is F(1, n) = 2, which is the minimum possible sum
Can anyone please tell where this code is wrong? https://codeforces.me/contest/1847/submission/212493731
I removed these lines and submitted; it works. I'm surprised you didn't notice. Submission:
But how did it work?
painful E
hartiksalaria, could you explain the solution of problem D?
First, explain to me why, for TC 2, the answer to the 2nd query is 3.
UPD: I finally understand what's going on there, thanks to letsintegreat.
So first observe that you actually only care about the first time each index of s appears in the string t(s), which gives us a compressed version t'(s) where every index is present at most once. Now, to make t(s) lexicographically largest, we try to put as many ones in its prefix as possible.
We can place all the ones we have in our string s into the prefix of t'(s), so t'(s) can have a prefix of length min(len(t'(s)), ones) filled with ones; we just need to replace every zero in that prefix with some one. So the answer for each query is just the number of zeros in that prefix.
I wish I had read the 2nd TC. I was in another dimension; I misunderstood the problem. I thought that we have to swap in string t, but it was s. It is easier than problem C :(. Thank god; otherwise I would never have cared to read the TC.
I agree, the implementation and the idea are not that hard to come up with; it's just that there were so many moving parts in the problem that it was easy to miss a crucial detail. During the contest I had the correct idea, but somehow I assumed I could just ignore every index that was not present in the given m segments, because of which I wasn't able to make sense of the 2nd query of TC 2.
I have no regrets. I am still not able to come up with the idea of giving priorities to indexes :)
We can't help misunderstanding the problem in the first go. But not going through sample test cases is a huge mistake, especially in problems with so many intricacies such as D.
F can also be done in O(N log(max(A[i])) log Q):
We can notice that, from each starting point, only at most ~30 positions are important, i.e. the positions where a new bit is added to the range bitwise OR.
So we only need to look at these ~30N potential answers instead of the n^2 potential answers.
Now, for each potential answer, we get a prefix of queries that can be satisfied, so we can use a difference-array-type technique, for minimums instead of sums.
submission: 212515812
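I am not sure this matches your exact implementation, but here is a generic sketch of just that first observation: enumerating, for every right endpoint, the at most ~30 distinct range-OR values together with a left endpoint achieving each, i.e. the O(N log MAX) candidates (the difference-array-for-minimums part is omitted):
#include <bits/stdc++.h>
using namespace std;

// For each right endpoint r, keep the distinct OR values of subarrays ending
// at r (at most ~log(max a) of them) with the smallest left endpoint for each.
int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    for (auto &x : a) cin >> x;

    vector<pair<long long, int>> cur;        // (OR value, leftmost l), subarrays ending at r-1
    vector<tuple<int, int, long long>> cand; // (l, r, OR of a[l..r]) candidates
    for (int r = 0; r < n; r++) {
        vector<pair<long long, int>> nxt = {{a[r], r}};
        for (auto [v, l] : cur) {
            long long w = v | a[r];
            if (w != nxt.back().first) nxt.push_back({w, l});
            else nxt.back().second = l;      // keep the smaller (leftmost) l
        }
        cur = nxt;
        for (auto [v, l] : cur) cand.emplace_back(l, r, v);
    }
    cout << cand.size() << " candidate (l, r, OR) triples\n";
    return 0;
}
The difference-array step from the comment above would then be run over these candidates.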
Is it just me or do the proofs for C make no sense?
If anyone can dumb it down for me, I would greatly appreciate it.
My consideration during the contest:
Let $$$a_{m+1} = a_i \oplus \cdots \oplus a_m$$$ and $$$a_{m+2} = a_j \oplus \cdots \oplus a_m \oplus a_{m+1}$$$.
Using the first formula to replace $$$a_{m+1}$$$ in the second one (assuming $$$i < j$$$), we get $$$a_{m+2} = a_i \oplus \cdots \oplus a_{j-1}$$$.
Now we discover that $$$a_{m+2}$$$ can actually be the XOR of any contiguous subarray of the original array, and the same holds for the upcoming elements.
skytyf Could you explain how you get $$$a_{m+2}=a_i\oplus\cdots\oplus a_{j-1}$$$ in more detail? I think it is $$$a_{m+2}=a_j\oplus\cdots\oplus a_i\oplus\cdots\oplus a_m$$$, but I don't know how to transform that into yours.
F can be solved in O(n log max(a[i]) + q (log n + log q)) 212559533
Why do I keep getting TLE on test 13? 212785251
Me too, TLE on test 12, bud. It's making me doubt the complexity. 212807276
Proof of C is poorly written.
really very very poor
$$$O(NlogMAX + QlogQ)$$$ solution for F. Uses some dumb counting sort + the observation about $$$logMAX$$$ required bits.
https://codeforces.me/contest/1847/submission/212474580
Hi, I have submitted code for problem D, but it is showing WA. Can someone help me understand why? Here is my submission link: 212934178
Thank you for problem E!
In the end, what worked for me was an adaptive-priorities approach. For each side, maintain a mask of four bits: whether this side can still be 1, 2, 3, or 4. Now, as long as we have questions to ask, pick a triple of masks $$$(m_1, m_2, m_3)$$$, pick random sides with such masks, and ask about them. As feedback, see how many sides $$$u$$$ became uniquely determined after this question. After that, change the priority of $$$(m_1, m_2, m_3)$$$ by $$$u - 1$$$: it does not change if we found one new side, decreases by $$$1$$$ if we found none, and so on. Now, if we always pick a triple of masks with the best current priority, the silly questions are quickly filtered out, and only the promising questions remain.
I also had to remember the asked questions and consider pairs of questions: considering triples alone, my approach could not uniquely assign values for tests like [3, 3, 4, 4]. That's a technical detail though, which can be approached differently.
F can be solved in an easier way if one answers queries offline, sorted in decreasing order of $$$v$$$: 244806887
I don't think there can be a shittier explanation than this for 1847C.