First to solve: rob00
Let's consider $$$2$$$ cases.
- If $$$x \geq y$$$: each second the blender blends $$$\min(y, c)$$$ fruits (where $$$c$$$ is the number of fruits not yet blended). Consequently, the answer is $$$\lceil {\frac{n}{y}} \rceil$$$.
- If $$$x < y$$$: each second the blender blends $$$\min(x, c)$$$ fruits, so analogously the answer is $$$\lceil {\frac{n}{x}} \rceil$$$.
Thus, the final answer is $$$\lceil {\frac{n}{\min(x, y)}} \rceil$$$.
#include <iostream>
using namespace std;

int main() {
    int t = 1;
    cin >> t;
    while (t--) {
        int n, x, y;
        cin >> n >> x >> y;
        x = min(x, y);
        cout << (n + x - 1) / x << endl;
    }
}
First to solve: neal
Note that $$$a_{n-1}$$$ always enters the final answer with a negative sign.
Therefore, we can subtract the sum $$$a_1 + a_2 + \ldots + a_{n-2}$$$ from $$$a_{n-1}$$$, and then subtract the resulting $$$a_{n-1}$$$ from $$$a_n$$$.
Thus, the final value equals $$$a_1 + a_2 + \ldots + a_{n-2} - a_{n-1} + a_n$$$. Nothing larger can be obtained, since $$$a_{n-1}$$$ always contributes negatively.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int mod = 1e9 + 7;

void solve() {
    int n;
    cin >> n;
    ll ans = 0;
    vector<int> a(n);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        ans += a[i];
    }
    // a[n-2] (0-indexed) contributes with a negative sign, so subtract it twice from the total sum
    cout << ans - 2 * a[n - 2] << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
First to solve: Pagode_Paiva
We maintain a string $$$t$$$, initially empty, such that $$$t$$$ occurs in $$$s$$$ as a substring.
We extend $$$t$$$ by one character while its length is less than $$$n$$$.
Perform $$$n$$$ iterations. On each one, query the strings $$$t + 0$$$ and $$$t + 1$$$. If one of them occurs in $$$s$$$ as a substring, append the corresponding character to $$$t$$$ and move on to the next iteration.
If neither of the two strings occurs in $$$s$$$, then $$$t$$$ is a suffix of $$$s$$$. From that iteration on, query the string $$$0 + t$$$. If it occurs in $$$s$$$, prepend $$$0$$$ to $$$t$$$; otherwise prepend $$$1$$$.
In total, we spend 2 queries per iteration, except for one iteration where we spend 3. However, after that iteration we spend only 1 query per iteration, so at most $$$2 \cdot n$$$ queries are needed overall.
#include <iostream>
#include <vector>
#include <string>
#include <array>
using namespace std;

bool ask(string t) {
    cout << "? " << t << endl;  // endl flushes, as required for interactive problems
    int res;
    cin >> res;
    return res;
}

void result(string s) {
    cout << "! " << s << endl;
}

void solve() {
    int n;
    cin >> n;
    string cur;
    // Extend to the right until cur becomes a suffix of s
    while ((int) cur.size() < n) {
        if (ask(cur + "0")) {
            cur += "0";
        } else if (ask(cur + "1")) {
            cur += "1";
        } else {
            break;
        }
    }
    // cur is now a suffix of s; extend to the left with one query per character
    while ((int) cur.size() < n) {
        if (ask("0" + cur)) {
            cur = "0" + cur;
        } else {
            cur = "1" + cur;
        }
    }
    result(cur);
}

int main() {
    int t;
    cin >> t;
    while (t--)
        solve();
}
2013D — Minimize the Difference
First to solve: edogawa_something
First claim: if $$$a_i > a_{i+1}$$$, it is always profitable to apply the operation at position $$$i$$$. Consequently, the final array will be non-decreasing.
Second claim: if the array is non-decreasing, applying operations is not profitable.
We maintain a stack that stores the sorted array. Each stack entry is a pair $$$x, cnt$$$, where $$$x$$$ is a value and $$$cnt$$$ is the number of its occurrences.
When adding $$$a_i$$$ to the stack, we keep the sum $$$sum$$$ of the elements removed from the stack and their count $$$cnt$$$. Initially $$$sum = a_i$$$, $$$cnt = 1$$$. We pop the top element while it is at least $$$\lfloor \frac{sum}{cnt} \rfloor$$$, updating $$$sum$$$ and $$$cnt$$$ accordingly. Then we push the pairs $$$\left( \lfloor \frac{sum}{cnt} \rfloor, cnt - sum \bmod cnt \right)$$$ and $$$\left( \lfloor \frac{sum}{cnt} \rfloor + 1, sum \bmod cnt \right)$$$.
The time complexity is $$$O(n)$$$, since each iteration pushes at most 2 elements onto the stack and each element is popped at most once.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

ll a[200200];
int n;

void solve() {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    // Stack of (value, count) pairs representing the sorted array
    stack<pair<ll, int>> s;
    for (int i = 1; i <= n; i++) {
        ll sum = a[i], cnt = 1;
        // Merge with the top while it is at least the current average
        while (s.size() && s.top().first >= sum / cnt) {
            sum += s.top().first * s.top().second;
            cnt += s.top().second;
            s.pop();
        }
        // Split the merged block into its floor and ceil parts
        s.push({sum / cnt, cnt - sum % cnt});
        if (sum % cnt != 0) {
            s.push({sum / cnt + 1, sum % cnt});
        }
    }
    ll mx = s.top().first;
    while (s.size() > 1) {
        s.pop();
    }
    cout << mx - s.top().first << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
First to solve: meme
Let $$$g$$$ be the greatest common divisor of the array $$$a$$$. Divide every element $$$a_i$$$ by $$$g$$$; at the end, simply multiply the result by $$$g$$$.
Consider the following greedy algorithm. Start with an empty array $$$b$$$ and repeatedly append to $$$b$$$ the element that minimizes the GCD of the resulting $$$b$$$. It can be observed that the GCD reaches 1 within at most 10 iterations. After that, the remaining elements can be appended in any order.
Let $$$A$$$ be the minimum possible GCD of the current prefix of $$$b$$$, and let $$$B$$$ be the GCD in an optimal answer, with $$$A < B$$$. Then we can place the element achieving $$$A$$$ first and then write out the optimal sequence in the same order. The answer does not get worse, since $$$A + \gcd(A, B) \leq B$$$ (indeed, $$$\gcd(A, B)$$$ divides $$$B - A > 0$$$, hence $$$\gcd(A, B) \leq B - A$$$).
Time complexity: $$$O(n \cdot 10)$$$.
#include <bits/stdc++.h>
using namespace std;

int a[200200];
int n;

void solve() {
    cin >> n;
    int g = 0, cur = 0;
    long long ans = 0;
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        g = __gcd(g, a[i]);
    }
    // Normalize by the gcd of the whole array
    for (int i = 0; i < n; i++) {
        a[i] /= g;
    }
    for (int t = 0; t < n; t++) {
        // Greedily pick the element minimizing the gcd of the prefix
        int nc = 1e9;
        for (int i = 0; i < n; i++) {
            nc = min(nc, __gcd(cur, a[i]));
        }
        cur = nc;
        ans += cur;
        if (cur == 1) {
            // Every remaining prefix contributes exactly 1
            ans += n - t - 1;
            break;
        }
    }
    cout << ans * g << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
2013F1 — Game on a Tree (Easy Version)
First to solve: EnofTaiPeople
First, let's understand how the game unfolds. Alice and Bob start moving toward each other along the path from vertex $$$1$$$ to vertex $$$u$$$. At some point, one of the players may turn off into the subtree of a vertex that does not lie on the path $$$(1, u)$$$. After that, both players walk to the farthest vertex available to them.
Denote the path from vertex $$$1$$$ to vertex $$$u$$$ by $$$(p_1, p_2, \dots, p_m)$$$, where $$$p_1 = 1$$$ and $$$p_m = u$$$. Initially Alice stands at vertex $$$p_1$$$ and Bob at vertex $$$p_m$$$.
For every vertex on the path $$$(p_1, p_2, \dots, p_m)$$$, introduce two values:
$$$a_i$$$ — the number of vertices Alice visits if she descends into a subtree of vertex $$$p_i$$$ that does not lie on the path $$$(1, u)$$$;
$$$b_i$$$ — the number of vertices Bob visits if he descends into a subtree of vertex $$$p_i$$$ that likewise does not lie on this path.
Denote the distance to the farthest vertex in the subtree of vertex $$$p_i$$$ by $$$d_{p_i}$$$. Then:
$$$a_i = d_{p_i} + i$$$ — the number of vertices Alice can visit if she descends into a subtree at vertex $$$p_i$$$;
$$$b_i = d_{p_i} + m - i + 1$$$ — the number of vertices Bob can visit if he descends into a subtree at vertex $$$p_i$$$.
Now consider what happens when Alice stands at vertex $$$p_i$$$ and Bob at vertex $$$p_j$$$. If Alice decides to descend into the subtree of $$$p_i$$$, she visits $$$a_i$$$ vertices. Meanwhile, Bob can reach any vertex on the subsegment $$$(p_i, p_{i+1}, \dots, p_j)$$$. It is profitable for Bob to descend at the vertex with the maximum value $$$b_k$$$, where $$$k \in [i+1, j]$$$.
Consequently, it is profitable for Alice to descend into the subtree of $$$p_i$$$ if the condition $$$a_i > \max(b_{i+1}, b_{i+2}, \dots, b_j)$$$ holds. Otherwise, she should move to vertex $$$p_{i+1}$$$.
The situation for Bob is symmetric: he descends into the subtree of $$$p_j$$$ if the analogous condition holds for him.
To find the maximum on the subsegment $$$(p_{i+1}, \dots, p_j)$$$ efficiently, one can use a segment tree or a sparse table. This gives $$$O(\log n)$$$ per query and an overall time complexity of $$$O(n \log n)$$$.
However, one can show that instead of a segment tree or sparse table it suffices to simply scan the vertices of the subsegment and break out of the loop as soon as a larger value is found. This yields an $$$O(n)$$$ solution.
#include <bits/stdc++.h>
using namespace std;
const int maxn = 2e5 + 12;

vector<int> g[maxn];
vector<int> ord;   // vertices of the path, collected bottom-up
int dp[maxn];      // dp[v] = 1 + distance to the farthest vertex in v's off-path subtree
int n, m, k;

bool calc(int v, int p, int f) {
    bool is = 0;
    if (v == f) is = 1;
    dp[v] = 1;
    for (int to : g[v]) {
        if (to == p) {
            continue;
        }
        bool fg = calc(to, v, f);
        is |= fg;
        if (fg == 0) {
            dp[v] = max(dp[v], dp[to] + 1);
        }
    }
    if (is) {
        ord.push_back(v);
    }
    return is;
}

struct segtree {
    int t[maxn * 4];
    void build(int v, int tl, int tr, vector<int>& a) {
        if (tl == tr) {
            t[v] = a[tl];
            return;
        }
        int tm = (tl + tr) >> 1;
        build(v * 2, tl, tm, a);
        build(v * 2 + 1, tm + 1, tr, a);
        t[v] = max(t[v * 2], t[v * 2 + 1]);
    }
    int get(int v, int tl, int tr, int l, int r) {
        if (tr < l || r < tl) return 0;
        if (l <= tl && tr <= r) return t[v];
        int tm = (tl + tr) >> 1;
        return max(get(v * 2, tl, tm, l, r), get(v * 2 + 1, tm + 1, tr, l, r));
    }
} st_a, st_b;

int get(int v) {
    ord.clear();
    calc(1, 0, v);
    int m = ord.size();
    reverse(ord.begin(), ord.end());
    vector<int> a(m), b(m);
    for (int i = 0; i < m; i++) {
        a[i] = dp[ord[i]] + i;
        b[i] = dp[ord[m - i - 1]] + i;
    }
    st_a.build(1, 0, m - 1, a);
    st_b.build(1, 0, m - 1, b);
    for (int i = 0; i < (m + 1) / 2; i++) {
        if (a[i] > st_b.get(1, 0, m - 1, i, m - i - 2)) {
            return 1;  // Alice can descend profitably
        }
        if (b[i] >= st_a.get(1, 0, m - 1, i + 1, m - i - 2)) {
            return 0;  // Bob can descend profitably
        }
    }
    return 0;  // not reached for valid inputs, but avoids falling off the end
}

void solve() {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        g[i].clear();
    }
    for (int i = 1; i < n; i++) {
        int a, b;
        cin >> a >> b;
        g[a].push_back(b);
        g[b].push_back(a);
    }
    int u, v;
    cin >> u >> v;
    ord.clear();
    calc(v, 0, u);
    auto p = ord;
    for (int x : p) {
        if (get(x) == 1) {
            cout << "Alice\n";
        }
        else {
            cout << "Bob\n";
        }
    }
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
#include <bits/stdc++.h>
using namespace std;

int dfs(int v, int p, int to, const vector<vector<int>> &g, vector<int> &max_depth) {
    int ans = 0;
    bool has_to = false;
    for (int i : g[v]) {
        if (i == p) {
            continue;
        }
        int tmp = dfs(i, v, to, g, max_depth);
        if (tmp == -1) {
            has_to = true;
        } else {
            ans = max(ans, tmp + 1);
        }
    }
    if (has_to || v == to) {
        // v lies on the path; record the depth of its off-path subtree
        max_depth.emplace_back(ans);
        return -1;
    } else {
        return ans;
    }
}

int solve(const vector<vector<int>> &g, int to) {
    vector<int> max_depth;
    dfs(0, -1, to, g, max_depth);
    int n = max_depth.size();
    reverse(max_depth.begin(), max_depth.end());
    int first = 0, second = n - 1;
    while (true) {
        {
            // Alice's turn: does descending at `first` beat every option Bob still has?
            int value1 = max_depth[first] + first;
            bool valid = true;
            for (int j = second; j > first; --j) {
                if (value1 <= max_depth[j] + (n - j - 1)) {
                    valid = false;
                    break;
                }
            }
            if (valid) {
                return 0;
            }
            ++first;
            if (first == second) {
                return 1;
            }
        }
        {
            // Bob's turn: the symmetric check
            int value2 = max_depth[second] + (n - second - 1);
            bool valid = true;
            for (int j = first; j < second; ++j) {
                if (value2 < max_depth[j] + j) {
                    valid = false;
                    break;
                }
            }
            if (valid) {
                return 1;
            }
            --second;
            if (first == second) {
                return 0;
            }
        }
    }
}

void solve() {
    int n;
    cin >> n;
    vector<vector<int>> g(n);
    for (int i = 0; i < n - 1; ++i) {
        int a, b;
        cin >> a >> b;
        g[a - 1].push_back(b - 1);
        g[b - 1].push_back(a - 1);
    }
    int s, f;
    cin >> s >> f;
    --s, --f;
    int ans = solve(g, s);
    if (ans == 0) {
        cout << "Alice\n";
    } else {
        cout << "Bob\n";
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--)
        solve();
}
2013F2 — Game on a Tree (Hard Version)
First to solve: rainboy
Read the editorial of the easy version first.
The path $$$(u, v)$$$ can be split into two vertical paths $$$(1, u)$$$ and $$$(1, v)$$$. Below we solve the problem for the vertices on the path $$$(1, u)$$$; the path $$$(1, v)$$$ is handled analogously.
For each vertex on the path $$$(1, u)$$$, introduce two values:
- $$$fa_i$$$ — the first vertex at which Alice wins by descending into a subtree, when Bob starts at vertex $$$p_i$$$.
- $$$fb_i$$$ — the first vertex at which Bob wins by descending into a subtree, when Bob starts at vertex $$$p_i$$$.
Claim: it is profitable for Alice to descend into the subtree of a vertex $$$v$$$ only for some contiguous subsegment of Bob's starting vertices. An analogous claim holds for Bob.
Now, for each of Alice's vertices, we need to determine the segment on which descending is profitable. The left boundary of the segment for vertex $$$p_i$$$ equals $$$2 \cdot i$$$, since Alice always stays on the left half of the path. It is not hard to see that the right boundary can be found by binary search.
Redefine the value $$$b_i$$$, setting it to $$$d_{p_i} - i$$$. To check whether descending into a subtree is profitable for a fixed midpoint $$$mid$$$, the following condition must hold: $$$a_i > \max(b_{i+1}, b_{i+2}, \ldots, b_{mid - i}) + mid$$$.
Denote by $$$(l_j, r_j)$$$ the subsegment on which it is profitable for Alice to descend if Bob starts at vertex $$$p_j$$$. Then the value $$$fa_i$$$ equals the minimum position $$$j$$$ satisfying $$$l_j \leq i \leq r_j$$$; this can be found using a set.
The value $$$fb_i$$$ is computed analogously.
Alice wins at vertex $$$p_i$$$ if $$$fa_i \leq i - fb_i$$$; otherwise Bob wins.
To find subsegment maxima efficiently, we can use a sparse table. Together with the sets and binary searches, this gives a time complexity of $$$O(n \log n)$$$.
#include <bits/stdc++.h>
using namespace std;
const int maxn = 2e5 + 12;

vector<int> g[maxn], del[maxn], add[maxn];
vector<int> ord;
int ans[maxn];
int dp[maxn];
int n, m, k;

bool calc(int v, int p, int f) {
    bool is = 0;
    if (v == f) is = 1;
    dp[v] = 1;
    for (int to : g[v]) {
        if (to == p) {
            continue;
        }
        bool fg = calc(to, v, f);
        is |= fg;
        if (fg == 0) {
            dp[v] = max(dp[v], dp[to] + 1);
        }
    }
    if (is) {
        ord.push_back(v);
    }
    return is;
}

struct sparse {
    int mx[20][200200], lg[200200];
    int n;
    void build(vector<int> &a) {
        n = a.size();
        for (int i = 0; i < n; i++) {
            mx[0][i] = a[i];
        }
        lg[0] = lg[1] = 0;
        for (int i = 2; i <= n; i++) {
            lg[i] = lg[i / 2] + 1;
        }
        for (int k = 1; k < 20; k++) {
            for (int i = 0; i + (1 << k) - 1 < n; i++) {
                mx[k][i] = max(mx[k - 1][i], mx[k - 1][i + (1 << (k - 1))]);
            }
        }
    }
    int get(int l, int r) {
        if (l > r) return -1e9;
        int k = lg[r - l + 1];
        return max(mx[k][l], mx[k][r - (1 << k) + 1]);
    }
} st_a, st_b;

void solve(int v) {
    ord.clear();
    calc(1, 0, v);
    reverse(ord.begin(), ord.end());
    m = ord.size();
    vector<int> a(m + 1), b(m + 1);
    vector<int> fa(m + 1, 1e9), fb(m + 1, -1e9);
    for (int i = 0; i < m; i++) {
        a[i] = dp[ord[i]] + i;
        b[i] = dp[ord[i]] - i;
        del[i].clear();
        add[i].clear();
    }
    st_a.build(a);
    st_b.build(b);
    multiset<int> s;
    for (int i = 1; i < m; i++) {
        // Binary search for the right end of the segment where Alice's descent at p_i wins
        int pos = i;
        for (int l = i + 1, r = m - 1; l <= r;) {
            int mid = (l + r) >> 1;
            if (st_b.get(i + 1, mid) + mid < a[i] - i) {
                pos = mid;
                l = mid + 1;
            }
            else r = mid - 1;
        }
        if (i < pos) {
            add[min(m, 2 * i)].push_back(i);
            del[min(m, pos + i)].push_back(i);
        }
        for (int x : add[i]) {
            s.insert(x);
        }
        if (s.size()) fa[i] = *s.begin();
        for (int x : del[i]) {
            s.erase(s.find(x));
        }
    }
    s.clear();
    for (int i = 0; i <= m; i++) {
        add[i].clear();
        del[i].clear();
    }
    for (int i = 1; i < m; i++) {
        // Symmetric binary search for Bob
        int pos = i;
        for (int l = 1, r = i - 1; l <= r;) {
            int mid = (l + r) >> 1;
            if (st_a.get(mid, i - 1) - mid + 1 <= b[i] + i) {
                pos = mid;
                r = mid - 1;
            }
            else l = mid + 1;
        }
        pos--;
        if (pos >= 0) {
            add[min(m, pos + i)].push_back(i);
            del[min(m, 2 * i - 1)].push_back(i);
        }
        for (int x : add[i]) {
            s.insert(x);
        }
        if (s.size()) fb[i] = *s.rbegin();
        for (int x : del[i]) {
            s.erase(s.find(x));
        }
    }
    for (int i = m - 1; i > 0; i--) {
        b[i] = max(b[i + 1] + 1, dp[ord[i]]);
        if (b[i] >= st_a.get(1, i - 1)) {
            fb[i] = i;
        }
        if (a[0] > max(st_b.get(1, i - 1) + i, b[i])) {
            fa[i] = 0;
        }
        ans[ord[i]] = 0;
        if (fa[i] <= i - fb[i]) {
            ans[ord[i]] = 1;
        }
    }
}

void solve() {
    cin >> n;
    for (int i = 1; i <= n; i++) {
        g[i].clear();
    }
    for (int i = 1; i < n; i++) {
        int a, b;
        cin >> a >> b;
        g[a].push_back(b);
        g[b].push_back(a);
    }
    int u, v;
    cin >> u >> v;
    solve(u), solve(v);
    ord.clear();
    calc(v, 0, u);
    auto p = ord;
    for (int x : p) {
        if (ans[x] == 1) {
            cout << "Alice\n";
        }
        else {
            cout << "Bob\n";
        }
    }
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
Problem E was originally meant to have a non-greedy solution, but this is what we are rich in :)
And in F you need to figure out separately when the first/second player wins if they start at their own vertex, because otherwise it hurts a lot)
I have another solution for $$$E$$$ and I wonder if it is the intended one.
Enumerate $$$a_1$$$. Denote the divisors of $$$a_1$$$ as $$$d_1,d_2,\ldots,d_m$$$. For every divisor $$$d_k$$$ we can check whether there is an $$$a_j$$$ such that $$$\gcd(a_1,a_j)=d_k$$$.
Define $$$dp_{i,j}$$$ as the maximum sum when $$$\gcd(a_1,\ldots,a_i)=j$$$. We only allow transitions $$$dp_{i,j} \rightarrow dp_{i+1,k}$$$ with $$$k<j$$$, so $$$i$$$ stays small ($$$\log(maxa)$$$).
So the complexity is $$$\log(maxa) \cdot m^2$$$ for one $$$a_1$$$.
The maximum number of divisors may be $$$240$$$, but if we tag the already-calculated numbers and skip them, the average $$$m$$$ is around $$$10$$$.
The final complexity is $$$O(n \cdot \log(maxa) \cdot \overline{m^2})$$$.
How do you check if there is a number that will turn $$$N$$$ into one of its divisors $$$D$$$?
I thought of inclusion-exclusion on the prime factors to determine whether there exists a number $$$X$$$ that doesn't include any of the prime factors needed to turn $$$N$$$ into $$$D$$$, but it seems like you didn't use inclusion-exclusion, so how did you do it?
Edit: seems like you only did it for one number.
Let's assume $$$d_1<d_2< \cdots <d_m$$$.
Define $$$cnt1_x$$$ as the number of multiples of $$$x$$$, and $$$cnt2_x$$$ as the number of $$$a_i$$$ such that $$$\gcd(a_1,a_i)=x$$$.
For $$$i=m$$$ down to $$$1$$$, do $$$cnt2_{d_i} \mathrel{+}= cnt1_{d_i}$$$, and then $$$cnt2_{d_j} \mathrel{-}= cnt2_{d_i}$$$ for every $$$d_j$$$ with $$$d_i=k\cdot d_j$$$. The complexity is $$$O(m^2)$$$.
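The descending sweep above can be sketched as follows (the function name and 0-indexing are ours, not the commenter's; `d` holds the divisors of $$$a_1$$$ sorted ascending, and `cnt1[i]` counts the elements whose gcd with $$$a_1$$$ is a multiple of `d[i]`):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the inclusion-exclusion over the divisor lattice described above.
// Returns cnt2, where cnt2[i] = number of elements with gcd(a1, a_j) exactly d[i].
vector<long long> exact_gcd_counts(const vector<long long>& d, const vector<long long>& cnt1) {
    int m = d.size();
    vector<long long> cnt2(m, 0);
    // Process divisors from largest to smallest: once cnt2[i] is final,
    // subtract it from the running count of every proper divisor of d[i].
    for (int i = m - 1; i >= 0; --i) {
        cnt2[i] += cnt1[i];
        for (int j = 0; j < i; ++j)
            if (d[i] % d[j] == 0) cnt2[j] -= cnt2[i];
    }
    return cnt2;
}
```

For example, with $$$a_1 = 12$$$ and gcd values $$$\{6, 4, 2, 12, 3\}$$$ among the other elements, the multiple counts per divisor $$$\{1,2,3,4,6,12\}$$$ are $$$\{5,4,3,2,2,1\}$$$, and the sweep recovers the exact counts $$$\{0,1,1,1,1,1\}$$$.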
Is there an intuitive or formal proof for D? I have a weird one that I'll write up when I have time.
why is cout.flush() not required in question C?
endl flushes
:< could someone explain why cout<<ceil(double(n) / min(x,y)); in A led to WA
There are two reasons this may cause issues.
First (which is mostly the issue here), the output will be printed in scientific notation;
this can be solved by casting the output to long/int.
Another issue, which usually happens with large numbers, is that double may not be precise enough; for example,
here the answer is off by 1 because double is not precise enough. A good solution for both of these issues is to not use double for calculating the ceiling; instead, you can calculate it using this:
$$$\lceil a/b \rceil = \lfloor (a + b - 1) / b \rfloor$$$
Which can be written like this:
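The code itself did not survive in the comment above; a minimal sketch of that integer-only formula (the helper name is our own) could look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Integer ceiling division for positive a, b:
// ceil(a / b) == (a + b - 1) / b under truncating integer division,
// with no floating-point precision or formatting pitfalls.
long long ceil_div(long long a, long long b) {
    return (a + b - 1) / b;
}
```

For instance, `ceil_div(7, 2)` yields `4` and `ceil_div(6, 2)` yields `3`.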
use (long long)ceill(..) instead
My solution for problem C:
First, binary search to find the maximum runs of consecutive ones and zeros. You can binary search over queries such as [0, 1, 11, 111, 1111, ...] for 1, and when the result of the binary search is 0 for 1, that means there isn't any 1 in the password.
Now you can try both possible combinations of the max consecutive ones and max consecutive zeros. For example, say max_consec_one = 3 and max_consec_zero = 4; then either 1110000 or 0000111 has to be a substring.
Continuing the above example, if max_consec_one = 3 and 1110000 is a valid substring, that means to the left of this substring there has to be some number of zeros, and the same logic applies on the right. To find this count of zeros on the left and ones on the right, you can again binary search.
You can repeat the above step until the string t reaches the size n of the original string.
I don't know how to prove that it is fewer than n * 2 queries, but I think it is. I also don't know whether this approach is correct; I think my solution has some sort of implementation mistake. Here's my submission.
Can someone help me figure out whether my approach is incorrect or my implementation?
I spotted a problem in your approach, check out this case:
111010000
here max_consec_one = 3, max_consec_zero = 4, yet neither 1110000 nor 0000111 are substrings of the original string
Thanks. Btw, I upvoted you, but someone else downvoted you; it's kinda funny that you got downvoted for no reason.
Can anyone explain the problem D and E solutions, like why we did what we did, and the intuition behind them?
You should add the binary search and prefix sum solutions to D as well.
As Sol2/Sol3.
Please share the prefix sum solution of it
So in D you literally said "here is the solution" no explanation to the logic or why it works
Waiting for the logic explanation! Someone who understood the solution please explain
can someone explain the solution to problem D
can someone explain the solution to problem D
The solution written in the editorial and the binary search solution of problem D are basically equivalent.
If we always keep the array non-decreasing, then performing operations is not advantageous, which means we have the array we need. So the problem becomes: how can we keep the array non-decreasing while we continuously add elements to it? We can use binary search on the left side of where we add the new element, finding the longest interval that can be averaged using the operation the problem provides. Equivalently, we can use a stack to do the same work, just traversing to the left one element at a time. However, simply traversing one by one has a complexity of $$$O(N)$$$ per element, which cannot be accepted; that's why the editorial records the number of occurrences of duplicate elements.
I don't understand the binary search solution. Can you illustrate more, please?
You can fix the start place at where we add the new element, then binary search the end place between 1 and the start place; an end place is legal under the same condition the editorial states. And you can use prefix sums to calculate an interval sum in $$$O(1)$$$. The binary search is correct in that if an interval has a suffix that cannot be averaged, then the interval cannot be averaged, and this fact ensures the monotonicity that binary search requires.
Can you please provide code for that?
After you do the binary search as you say, the values of the prefix sums may change, and we may need to update them in $$$O(n)$$$, which cannot be accepted. Is that right? Is there something I misunderstand?
You're right, maybe I misunderstood something about the binary search solution. Thinking...
I used binary search to find $$$\alpha$$$, the lowest maximum obtainable, and $$$\beta$$$, the largest minimum obtainable with the operation. Then it can be shown that it is possible to build a solution that reaches both of these values. The answer is then $$$\alpha-\beta$$$.
After taking a long time to read your editorial, I finally managed to roughly understand how the binary search approach works. It seems there are some > or < mistakes, and an 'is' is misspelled as 'it'? Sorry for my poor English, btw.
I thought of the same approach, but then while thinking I couldn't resolve how you modify the previous values from the starting index to the adding index. What I mean is: when we have a non-decreasing monotonic array on our left side, we can use binary search on it to find where the new element fits in, then we can take the sum of the right part of that array using prefix sums and average that part out.
But my question is: even though we can do this, we don't have a method of changing the left portion of the array in O(1). The worst case of changing it will turn out to be O(n), hence the TC will ultimately turn out to be O(n^2), as far as I can tell.
The solution written above states the same thing, but instead of a sorted array they use a stack, where they average out the values and keep a count variable to indicate how many repeated copies exist in the left sorted section. This reduces the TC of the approach by a lot, as they mentioned: "since on each iteration, no more than 2 elements are added to the stack, and each element is removed at most once". Here I can't agree with the second point, that each element is removed at most once! They are inserting more kinds of values than are already present there, and the values might repeat at different insertion indices.
sub link Can someone tell me why this is wrong? It's the same as the editorial, the only difference being that I wrote a simulation for the checking part.
Waited so long for the editorial
In problem C: what if substrings of both forms, t+0 and t+1, occur in the original string?
It doesn't matter, you can just take either of them and proceed until you hit the end of the string (neither t + 0 nor t + 1 is a substring). At that point you have a suffix of the string; then start extending from the left (i.e. 0 + t or 1 + t) until the length of t is n.
Oh, right. Understood. Thanks.
Stating E's time complexity as $$$O(n\log^2 W)$$$ may be better.
For problem D, what is the binary search approach? Minimize the maximum, maximize the minimum, find the largest and smallest elements in each case, and then compare the two differences, whichever is smaller being the answer?
In F1, I was trying a solution with simultaneous BFS (from 1 for Alice, and from u for Bob). It doesn't work, but I don't know why. I would appreciate it if someone could point out the edge case. At each step, Alice goes to EVERY unseen node in her frontier, marking it as 'seen'; after that, the neighbours of each of these nodes become her next frontier. Then Bob does the same thing. If at any point a player's frontier is empty or only has 'seen' nodes, and it is that player's turn, the player loses.
See if this may help you: 282194464. It's a simple backtracking solution using DFS, but I don't know why it doesn't TLE (since backtracking usually has exponential/factorial time complexity).
In this test case, Bob would win, but your algorithm outputs 'Alice' ?
Ah I see, got it, thank you so much
These were also my initial thoughts. Since the previously shared test didn't fail my program, I changed it to make it fail mine; the expected output is still Bob:
Wansur I have a practice submission for problem D which was accepted, but I believe it shouldn't have been, as it has $$$O(n^2)$$$ time complexity. Here's the submission 282154960, and here's a test generator that should make it TLE:
I'm posting this here because there isn't a way to hack my own submission to a past contest.
Hacked it for you!
Thx, mate!
For those of you who don't understand what the tutorial of E is saying:
We want to prove that $$$F(a_1,a_2,\cdots,a_n,x) \ge F(x,a_1,a_2,\cdots,a_n)$$$ where $$$F$$$ is the sum defined in the problem and $$$x$$$ satisfies $$$x < a_1$$$. If this is true, there always exists an optimal solution where the smallest element is moved to the first position. Here is the proof:
$$$F(a_1,a_2,\cdots,a_n,x) = (\color{red}{a_1}) + \gcd(a_1,a_2) + \gcd(a_1,a_2,a_3) + \cdots + \gcd(a_1,a_2,\cdots,a_n,x)$$$
$$$F(x,a_1,a_2,\cdots,a_n) = (\color{red}{x + \gcd(x,a_1)}) + \gcd(x,a_1,a_2) + \gcd(x,a_1,a_2,a_3) + \cdots + \gcd(x,a_1,a_2,\cdots,a_n)$$$
Because $$$x < a_1$$$ and the gcd is always smaller than or equal to the difference of two numbers, we have $$$\gcd(x,a_1) \le a_1 -x \implies x + \gcd(x,a_1) \le a_1$$$.
For the rest of the terms, it is easy to observe that each term below is always less than or equal to the corresponding term above, since $$$x$$$ is added to the gcd calculation.
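The "gcd is at most the difference" step used above can be spelled out as a one-line derivation:

```latex
% For 0 < x < a_1: the gcd divides both x and a_1, hence it divides a_1 - x;
% a positive divisor of a positive number is at most that number.
\gcd(x, a_1) \mid (a_1 - x), \qquad a_1 - x > 0
\;\implies\; \gcd(x, a_1) \le a_1 - x .
```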
"It can be observed that the gcd will reach 1 in at most 10 iterations." can you please explain it for me?
$$$a_1$$$ has at most $$$\log(a_1)$$$ (not necessarily distinct) prime factors. In each iteration, at least one of them should disappear, or the $$$\gcd$$$ is already the minimum possible.
But why does one of them disappear each time? If the remaining elements of the array are all the same but differ from the others, it won't drop even once. And by the way, qingczha's proof for $$$F(x,a_1..a_n)$$$ seems wrong, because it fails for the array 3 2 2 2, but I don't know why.
Because if no element gives a gcd less than the last one, it means all the remaining elements are equal to each other, and you can stop your loop.
At first, we divided every $$$a_i$$$ by $$$g = gcd(a_1,a_2,...,a_n)$$$. This means that the gcd of some prefix of $$$a$$$ will now eventually reach $$$1$$$ instead of $$$g$$$.
We are greedily building our new array; with every new element, it is guaranteed that the new gcd will be strictly less than the previous one (until we reach $$$1$$$, obviously). And since $$$a_i \le 10^5$$$, any $$$a_i$$$ can have at most $$$6$$$ distinct primes in its factorization, so we can safely say that the gcd will be reduced to $$$1$$$ in at most $$$10$$$ iterations.
I know there are at most $$$\log_{2}a_i$$$ not necessarily distinct primes, but why $$$6$$$ distinct primes?
The minimum number that has $$$q$$$ distinct primes is just the product of the smallest $$$q$$$ primes.
$$$2×3×5×7×11×13=30030$$$
$$$2×3×5×7×11×13×17=510510$$$
$$$30030$$$ and $$$510510$$$ are the smallest numbers that have $$$6$$$ and $$$7$$$ distinct primes, respectively. The maximum possible $$$a_i$$$ is $$$10^5$$$, which lies between these two numbers, meaning that any $$$a_i$$$ can have at most $$$6$$$ distinct primes.
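The two products above can be checked directly (the helper name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// The product of the first q primes (the q-th primorial) is the
// smallest number with q distinct prime factors.
long long primorial(int q) {
    vector<long long> primes = {2, 3, 5, 7, 11, 13, 17};
    long long p = 1;
    for (int i = 0; i < q; ++i) p *= primes[i];
    return p;
}
```

`primorial(6)` is `30030 <= 1e5`, while `primorial(7)` is `510510 > 1e5`, which is exactly the bound the comment relies on.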
that's right, thanks bro
Why not in 7 iterations?
Could you tell me how to justify the greedy in the tutorial? That is, how to prove that $$$x,y,a_1,a_2,a_3,\cdots$$$ is no worse than $$$x,a_1,a_2,a_3,\cdots,y$$$ if $$$\gcd(x,y)\le \gcd(x,a_1)$$$?
With the fact I've proven above, we know that it's safe to put the smallest element at the front. Given this, every other element $$$a_i$$$ can be considered as its gcd with $$$x$$$, since any factor that $$$x$$$ doesn't have will be gone in subsequent calculations (because $$$x$$$ is always included in the gcd).
I got it, thx.
Why is it optimal to add the smallest element at the front?
https://codeforces.me/blog/entry/134170?#comment-1200965
Oh my bad, thanks!
"For the rest of the terms, it is easy to observe that the term below is always less than or equal to the corresponding term above since x is added in the calculation.". I can't observe it. Could would explain further
$$$\gcd(x,a_1,a_2) = \gcd(x,\gcd(a_1,a_2)) \le \gcd(a_1,a_2)$$$ because the gcd is a divisor and hence must be smaller than or equal to each of its inputs.
Got it. Thanks
Could someone kindly explain the solution for problem D in a bit more detail? The tutorial is of no use.
Your proof for problem C seems incorrect to me.
In your proof, you mentioned "However, after this iteration, we will make only 1 query". But what if that iteration does not exist? For example, you coincidentally reach a suffix of size $$$n - 1$$$ for $$$t$$$, which may take $$$2(n-1)$$$ queries. And after $$$2$$$ more queries, you realize that your $$$t$$$ has reached the end of $$$s$$$. So you need $$$1$$$ additional query to know whether the first character is '0' or '1'. In total, this is $$$2(n - 1) + 2 + 1 = 2n + 1$$$ queries.
In your reference implementation, this case does not actually happen, but you may need to address it in your proof.
I was thinking the same thing. I tried out a few test cases, and maybe this will answer your question.
Not every operation will cost you 2 queries. Say you start your query with "0": if there is no substring "0", then we are looking at the string "11..11" as the answer, and this will take exactly 2*n queries. On the contrary, one can argue that the answer string can be of the form "011..11"; then too, if you start your query with "0", you only take one query to get that substring and 2 queries for each character in the other "11..11" part.
Basically, your suffix of size n-1 will take 2*(n-1) queries only if its characters are the opposite of the character that you are querying for first (example: answer string "100000", and you query first for t+"1" and then t+"0"). But if the first character of the answer string is the same as the character you are querying first, then you get the whole string and not the suffix (your answer builds as "10..").
I hope I was able to explain!
If the first character costs 2 queries, it means the whole string consists of 1 only (assuming you are querying in the order of 0 and then 1). In this case, exactly 2n queries will be used to figure the string out.

qingczha Yes, it is a good explanation.
Why is problem D "explained" without an actual explanation?
Can someone give a more elaborate solution to problem D? The tutorial doesn't explain anything, imo.
Could someone explain the solution for task D in a bit more elaborate manner? The tutorial is of no use.
In case you still don't understand the tutorial of D:
The key observation here is: if two consecutive elements satisfy $$$a_i > a_{i+1}$$$, it is never worse to "mix them up", i.e. assign $$$a_i := \lfloor{\frac{a_i+a_{i+1}}{2}}\rfloor$$$ and $$$a_{i+1} := \lceil{\frac{a_i+a_{i+1}}{2}} \rceil$$$. For instance, (10,1) -> (5,6) and (8,4) -> (6,6). The reason is that: 1. The answer is not worsened, since the range gets smaller. 2. If in the optimal operation sequence the value of $$$a_i$$$ is transferred to some element of larger index instead of $$$a_{i+1}$$$, it is always valid to transfer it through $$$a_{i+1}$$$ (why?).
what do you mean by transferring ai? can you explain pls
simply $$$a_i \to a_{i}-1$$$ and $$$a_j \to a_{j}+1$$$ as defined in the problem
I got this, but I am not sure how to simulate all of this.
I can only think of an O(n^2) solution where you do it n times to get a sorted array.
I don't get the stack implementation in the editorial :(
Let's say you have a sorted array $$$a,\cdots,a,b,\cdots,b,c,\cdots,c$$$ where $$$a<b<c$$$, and now you want to append a $$$d$$$ and keep it sorted.
Firstly, what the stack stores is an efficient representation of the elements in the current array, i.e. $$$[(a,cnt_a),(b,cnt_b),(c,cnt_c)]$$$. Now to add $$$d$$$, we first mix it up with $$$c$$$. The result can be calculated in $$$O(1)$$$ because we know the sum and the total count. One slightly tricky point is to split the "floor" part and the "ceil" part. For example, if we have $$$(10,7)$$$ (7 tens) and mix it up with a $$$1$$$, we know that the sum is $$$10*7+1 = 71$$$ and the count is $$$7+1=8$$$. The result should be $$$[(8,1),(9,7)]$$$. (Try to figure out how the $$$1$$$ and $$$7$$$ are calculated.)
If mixing up with the currently largest element (i.e. $$$c$$$) would cause the result to be smaller than the second largest element (i.e. $$$b$$$), the first mix-up alone is not sufficient, and we include $$$(b,cnt_b)$$$ in the mix-up as well. We keep including segments until the result satisfies the sortedness.
Ohh, got it, we can derive the 1 and 7 from the sum and the average value.
Thanks a lot!!
Auto-comment: the text was updated by Wansur (previous version, new version, compare).
Problem C can also be solved with O(n + 2*log(n)) queries. Use binary search to find the longest continuous run of 0's using queries. Then we build the right side of the string with 1's and check with queries; if it's not a substring, we replace the 1 with a 0 and continue until we fill the string or we end up with more consecutive 0's than the longest run we found. Next, we use binary search to determine how many of the ending 0's are also part of the confirmed substring from the last query section. This finishes constructing the right side of the string. To find the answer, we now add 1's to the front of the string and query; if it's not a substring, then the answer is to replace that 1 with a 0.
...
Now let us try to reduce the maximum element of the array A.
We can notice that reducing the maximum element of A does not affect the minimum element of A.
Suppose we are reducing mx to mx-y. Let's just assume that it is possible to make mx into (mx-y).
Now we need to prove that when we try to increment the minimum element of A, it does not make the maximum element any bigger; in fact, the maximum element might decrease but never increase.
Suppose we try to increase mn to mn+x; let's just assume that it is possible to make mn into (mn+x).
From the above equation we can clearly see that the element a[i] becomes mn+x and a[i-1] only decreases. So when we increase the minimum mn to mn+x, the maximum element does not get any bigger; in fact, it might decrease but never increase: the ith element becomes mn+x, while the (i-1)th element involved in the operation only decreases. For example, if a[i-1] = mx, then in the above operation a[i-1] will decrease, not increase.
then minimum value = max(0,alpha-beta);
My solution : 282501994
Cool solution, I understood it better than the one in the editorial.
Wait, isn't min going to change when min > mx - y? a[i+1] += (a[i] - (mx-y)); a[i] = mx-y;
initially mx = 18 and mn = 6;
What I mean is: what if mx - y gets smaller than min?
That is not possible. You are saying mx is changed to mx-y with mx-y < mn. Look at the last element a[n]: a[n] >= the minimum (mn), and we are making all elements a[i] <= mx-y. Since a[n] >= mn and the last element can only increase, you cannot bring a[n] down to a value mx-y with mx-y < mn. So the condition you mentioned can never happen. Is this helpful?
Thanks, yes it was helpful, that's indeed a good solution. Bro, what's your real rating?
Perfect explanation, thank you.
This is what I wanted to do, but I had difficulty coming up with the binary search part. Thank you.
Why does the code for F1 solve the problem in O(n)? Thank you!
I have a similar doubt, can you please explain, Wansur?
I have the identical question as well. Can anybody give us some hints? Thank you.
Problem D: I saw a solution that binary searches the upper and lower bounds simultaneously. I tried this and got AC, but I'm very confused about how to prove that these two independent binary searches are correct.
I have the same doubts.
I wrote an editorial about this solution to prove that both bounds can be used simultaneously
Thanks a lot!
In the case of a buffed-up C, how would one get to the answer in fewer than 2*N queries?
Any clues about this?
I wrote a more detailed editorial for problem D, if anyone is interested https://codeforces.me/blog/entry/134292
In problem D, in the author's solution, we push an element onto the stack only if it is larger than the previous one. As a result, the max is at the top of the stack and the min should be at the bottom, so taking the minimum over all stack elements (the first element) should also give the minimum, but it gives WA on tc3. Also, since the tc is large I can't work it out; any idea as to what might be the problem? WA_solution
Thanks in advance.
"wrong answer 1st numbers differ", and the first case is not very large (n = 20). Look at this line: const ll INF = INT_MAX/10;
Thanks a lot, it was a total blunder by me :(
[DELETED]
Could anyone share the DP solution for E? Please enlighten me :))
Can someone explain why this is wrong for F1?
Basically, I tried to find the path from 1 to u. For every path value I found dp(max depth not on path), and then I checked for early exit, if that makes sense.
How do we prove problem E in detail?
I wonder, is there a DP for problem E with the right time complexity? Like wuhudsm, we can define dp[i][j] as the max sum when gcd(a1,…,ai)=j. But it costs at least O(n*n), because enumerating i costs N and each state transition costs N, so it is at least O(N*N).
So can we have a rigorous mathematical proof for the solution of problem B?
What do the $$$A$$$ and $$$B$$$ in the editorial of E actually mean? Are they single elements or sequences?
Are the $$$A$$$ and $$$B$$$ in E's editorial sequences, elements, or the gcd of a sequence?
just elements
Can anyone find the issue with my approach to Problem D?
https://pastecode.io/s/e7awhp5j
problem E is god tier. loved it
I really struggled to understand 2013F2, which I guess makes sense because it's rated 3500 for a reason.
tldr; the idea is: for each node $$$x$$$ that Bob starts at, we can compute the earliest turn at which Alice beats Bob and the earliest turn at which Bob beats Alice. If Alice's "win turn" is <= Bob's "win turn", then Alice wins; otherwise Bob wins.
After studying the code and attempting it myself for several days I'd like to offer a somewhat different and more detailed explanation of F2 that I think is more direct. The $$$(l_j, r_j)$$$ stuff is correct but I think makes the logic look harder than it is.
F1 variant: the game simplifies to the following:
Let's start with the F1 version where we only need to find the winner for a single start vertex. The solution proceeds as follows, let $$$m$$$ be the number of nodes in the $$$1$$$ to $$$u$$$ path:
F2 variant: this one is harder because Bob can now be at any position on the simple path between $$$u$$$ and $$$v$$$. As the editorial says this path is a subset of the union of $$$1..u$$$ and $$$1..v$$$ so we can solve for all nodes on both of those paths, and then iterate over the $$$u$$$ to $$$v$$$ path at the end and print the answers.
The idea is the same as the F1 version, but this time we need to compute Alice and Bob's win turns for each of Bob's positions.
Alice win turns: if Alice turns off the path at $$$i$$$ then she has $$$t_i \equiv h_i + 1$$$ moves remaining. She will beat Bob at
Bob win turns: similar logic to Alice's win turns. The idea is that the Alice win turns are correct, but they assume Bob has not already won. So we do another loop from Bob's perspective to find his earliest win turn at each index; if it is the same or lower, then Bob wins. The idea of finding a win turn for ranges of Bob start positions is the same, but the details are a bit more complicated:
This last point is where the mysterious fa[x] <= x-fb[x] comes from in the sample code (the code uses $$$i$$$ in place of $$$x$$$; I used separate symbols for current position $$$i$$$ versus starting position $$$x$$$ to hopefully reduce confusion).

The code uses an interesting multiset approach for finding the minimum win index. If an index is valid for a range of Bob start positions $$$x1..x2$$$, then $$$i$$$ is put in add[x1] and rem[x2]. These record when to add and remove candidate indices as we iterate over x values. The multiset makes it easy to find the min and max after doing the updates. At x we add all indices in add[x], find the min/max, then process rem[x], because the x1..x2 intervals are inclusive.

Instead of the clever multiset you can use a min segment tree for Alice's win turns and a max segment tree for Bob's win turns (technically Bob's win indices, as discussed above). Same linearithmic runtime, but an easier copy-paste if you have a reference segtree implementation handy (I think copy-paste from a personal repo is acceptable here, but idk for sure).
Hey, for the solution shared for 2013F1 - Game in Tree (Easy Version), the last paragraph mentions that it can be proven that, instead of using a segment tree or sparse table, one can simply iterate through all vertices on the segment and terminate the loop upon finding a greater vertex, and that this yields a solution with time complexity O(n). I am unable to prove this, so I was hoping someone could help me out with it. Thanks!
The more I think about it, the more I think the statement is correct. Here's my proof:
The worst case is if the current player, to find the other player's best number of countermoves with a loop in O(n), has to run that loop more than O(1) times.

Suppose the current player is Alice, and at her current position i she has a side path off the Alice-Bob path with a lot of moves, such that we'll do O(n) iterations to find Bob's best counterplay on i+1..j (Bob is currently at j). That's O(n) iterations for Bob's max, but Alice will only be at position i once.

When we find Bob's best counterplay by iterating over i+1..j, each loop starts with i+1, which has O(n) countermoves, because the i+1..j path is O(n) nodes long, plus whatever nodes hang off node i+1. Suppose Alice has k different locations where she can step off the Alice-Bob path such that we'll do O(n) iterations.

Since each loop over Bob's counterplays starting at i+1 has O(n) moves, for Alice to trigger more than one such full loop she must have access to O(n) moves each time. Therefore Alice must encounter at least O(k*n) nodes on side paths. There are only O(n) nodes total, and therefore k must be O(1).

The above proof can be generalized a bit: say Alice and Bob are d nodes apart, and there are k places where Alice or Bob can step off that trigger O(d) iterations. The loops would take O(k*d) iterations, but they'd also require having O(k*d) = O(n) nodes in total. Therefore k*d is on the order of n, so the total loop time is O(n).

And the proof is basically the same for Bob, except Bob's "counterplays loop" has >= instead of > for Alice.

I am curious about the trivial solution presented for question F1. It's claimed that the time complexity of the trivial loop is $$$O(n)$$$, but I cannot see why. (Though I have to admit that, to me, it really does look like the trivial solution has linear time complexity.)
See my earlier comment for a proof.
The idea of the proof is this: suppose Alice and Bob are separated by O(k) nodes. Then to do O(k) iterations in each of Alice's and Bob's loops, they have to have access to at least k nodes in a subtree off the current path. Therefore, to do O(k^2) iterations there must be O(k) nodes in subtrees at O(k) locations, i.e. O(k^2) total nodes. So the maximum number of iterations done in the loops is bounded by the number of nodes, which is O(n).

In other words, each iteration done in the loop over the other player's turn requires a node in a subtree. Thus the total number of iterations is bounded by the number of nodes in the tree, which is O(n).

Another solution to D:
1) $$$max = max$$$ of all possible suffix average ceil.
2) $$$min = min$$$ of all possible prefix average floor.
3) $$$ans = max - min$$$.
ref: https://codeforces.me/contest/2013/submission/285605752
Could you please explain this solution?
It was difficult for me to explain why the solution fits within 2n queries in problem C, so I slightly modified the approach as follows: in the first query, I ask for two values ("00" and "11"). If both are absent, I can determine that the answer is either 0101... or 1010... Otherwise I proceed with the same method as explained in the solution. This reduces the number of queries by two, so it fits in 2n.
I tried to code E but failed on a hidden test case. Can anyone tell me what's wrong with my solution? This is the code:

~~~~~
int n;
cin >> n;
vector<int> vec(n);
for (int i = 0; i < n; i++) {
    cin >> vec[i];
}
sort(vec.begin(), vec.end());
~~~~~