This was a fun contest, and I solved all the problems except G.
Thoughts
- A: Nice and simple problem A. Not much to say here.
- B: Nice problem on sorting, but it may have been a bit too hard for a problem B. Also, as 4dh4r5h said, it's possible that there was cheating in B.
- C: Not a bad solution idea, but it's very easy to mess up the implementation. That's probably why it was the hardest out of [A...E1].
- D: Nice problem, but it was very similar to a previous one. Good difficulty level for a Div3 D though.
- E1: Very misplaced. Not much else to say here.
- E2: Nice problem with an interesting solution idea.
- F: Nice problem, although I'm not sure another adhoc problem was the best choice for a problem F.
- G: I didn't solve it in time, but according to yash_daga, it's a 2D dp problem.
Solutions
A: Note that the total number of occurrences of $$$\mathtt{A}$$$ and $$$\mathtt{C}$$$ combined should be equal to the number of occurrences of $$$\mathtt{B}$$$. Checking this condition is sufficient to solve the problem.
for _t in range(int(input())):
    s = input()
    if s.count('A') + s.count('C') == s.count('B'):
        print('YES')
    else:
        print('NO')
Another possible solution is to notice that the number of occurrences of $$$\mathtt{B}$$$ will equal exactly half of the length of the string.
for _t in range(int(input())):
    s = input()
    if 2*s.count('B') == len(s):
        print('YES')
    else:
        print('NO')
B: There are several solutions to this problem; I'll present one similar to insertion sort.
Insertion sort works by iterating through the list while maintaining that $$$a_0, a_1, \dots, a_{i-1}$$$ is sorted, and then inserting $$$a_i$$$ into that prefix in its correct place.
Notice that when we insert $$$a_i$$$ in between $$$a_{j-1}$$$ and $$$a_j$$$, for example, it is exactly the same as cyclically shifting the subarray $$$a_j, a_{j+1}, \dots, a_i$$$ to the right once. We want to shift to the left, however, so we can instead shift this subarray $$$i-j$$$ times to the left.
for _t in range(int(input())):
    n = int(input())
    a = [*map(int, input().split())]
    b = []
    for i in range(n):
        # find the first position j with a[j] >= a[i] (j ends at i if there is none)
        for j in range(i+1):
            if a[j] >= a[i]: break
        if j < i:
            # move a[i] to position j, i.e. cyclically shift a[j..i] left i-j times
            a[j:i+1] = [a[i]] + a[j:i]
            b.append((j+1, i+1, i-j))
    print(len(b))
    for (l, r, d) in b:
        print(l, r, d)
C: The main idea of this solution is to try to place ticks wherever possible while staying only inside the blue ($$$*$$$) region, and because ticks can overlap, we can simply mark those places as 'covered'. Then, if the covered region is the same as the blue ($$$*$$$) region, it's possible; otherwise, it's not.
#include <bits/stdc++.h>
using namespace std;
char s[20][20];
bool cov[20][20];
void solve() {
    int N, M, K;
    scanf("%d%d%d", &N, &M, &K);
    for (int i = 0; i < N; ++i) {
        scanf("%20s", s[i]);
        for (int j = 0; j < M; ++j)
            cov[i][j] = false;
    }
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < M; ++j) {
            // try a tick of size k whose arms run from (i, j) and (i, j+2k) down to (i+k, j+k)
            for (int k = K; k < N; ++k) {
                bool can = true;
                for (int l = 0; can && l <= k; ++l) {
                    can &= i+l < N && j+l < M && s[i+l][j+l] == '*';
                    can &= i+k-l < N && j+k+l < M && s[i+k-l][j+k+l] == '*';
                }
                if (!can) continue;
                // the whole tick fits inside the '*' region: mark its cells as covered
                for (int l = 0; l <= k; ++l) {
                    cov[i+l][j+l] = true;
                    cov[i+k-l][j+k+l] = true;
                }
            }
        }
    }
    bool good = true;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < M; ++j) {
            good &= (s[i][j] == '*') == cov[i][j];
        }
    }
    puts(good ? "YES" : "NO");
}
int main() {
    int T; scanf("%d", &T);
    while (T--) solve();
}
D: We want to pair off the two people with the largest remaining sociabilities, and repeat. If we do this naively, the time complexity will be $$$\mathcal{O}(NM)$$$, where $$$M$$$ is the sum of all $$$a_i$$$.
A faster way would be to use a priority queue data structure, such as std::priority_queue in C++, the heapq module in Python, PriorityQueue in Java, or collections::BinaryHeap in Rust. We will store the sociability $$$a_i$$$ and the index $$$i$$$ (note that this order is important; we don't want to sort by index!) in our priority queue. (A short Python sketch using heapq is shown after the C++ code below.)
#include <bits/stdc++.h>
using namespace std;
int a[200005];
pair<int, int> b[400005];
void solve() {
    int n, k = 0; scanf("%d", &n);
    priority_queue<pair<int, int>> pq;
    for (int i = 0; i < n; ++i) {
        scanf("%d", a+i);
        if (a[i]) pq.emplace(a[i], i);
    }
    while (pq.size() >= 2) {
        auto [a1, i1] = pq.top(); pq.pop();
        auto [a2, i2] = pq.top(); pq.pop();
        // push a person back only if they still have sociability left after this talk
        if (a1 > 1) pq.emplace(a1-1, i1);
        if (a2 > 1) pq.emplace(a2-1, i2);
        b[k++] = {i1, i2};
    }
    printf("%d\n", k);
    for (int i = 0; i < k; ++i) {
        const auto [i1, i2] = b[i];
        printf("%d %d\n", i1+1, i2+1);
    }
}
int main() {
    int T; scanf("%d", &T);
    while (T--) solve();
}
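A minimal Python sketch of the same greedy using heapq (a min-heap, so the sociabilities are stored negated). This is just an illustration of the idea above, assuming the same input format; it may need faster input (e.g. sys.stdin) to fit in the time limit.
import heapq

for _t in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    # store (-sociability, index) so the most sociable person is popped first
    pq = [(-x, i) for i, x in enumerate(a) if x > 0]
    heapq.heapify(pq)
    talks = []
    while len(pq) >= 2:
        a1, i1 = heapq.heappop(pq)
        a2, i2 = heapq.heappop(pq)
        talks.append((i1 + 1, i2 + 1))  # output is 1-indexed
        # push a person back only if sociability remains (values are negated)
        if a1 + 1 < 0: heapq.heappush(pq, (a1 + 1, i1))
        if a2 + 1 < 0: heapq.heappush(pq, (a2 + 1, i2))
    print(len(talks))
    for i1, i2 in talks:
        print(i1, i2)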
E1: Whenever you're doing a problem with lexicographical comparisons, remember that lexicographic means greedy.
We greedily put a number $$$a_i$$$ first if it's smaller than $$$\textrm{ans}[0]$$$, and last otherwise. We don't need to worry about elements being equal because we're given that the input is a permutation.
It's also interesting to note that this problem can be solved online, processing each element as soon as it is read. I guess the solution would have been easier to see if this was an interactive problem though.
#include <bits/stdc++.h>
using namespace std;
void solve() {
    int n; scanf("%d", &n);
    deque<int> ans = {0};
    scanf("%d", &ans[0]);
    for (int a, i = 1; i < n; ++i) {
        scanf("%d", &a);
        if (a < ans[0]) {
            ans.push_front(a);
        } else {
            ans.push_back(a);
        }
    }
    for (const int x: ans) {
        printf("%d ", x);
    }
    printf("\n");
}
int main() {
    int T; scanf("%d", &T);
    while (T--) solve();
}
E2: Notice that once we have placed the first $$$i$$$ elements, they will all stay contiguous for the rest of the processing. So we can place $$$a_i$$$ greedily, because whether we put it at the front or the back won't affect the cost of any later decision.
How do we find which is better, though? Notice that this is equivalent to finding the number of $$$a_j$$$ with $$$j<i$$$ such that $$$a_j<a_i$$$ (and, symmetrically, the number with $$$a_j>a_i$$$). We can do this by using an order statistics tree. In C++, this is __gnu_pbds::tree. In other languages, I don't know of an order statistics tree, but a segment tree can be used instead (a sketch of this is shown after the code below).
Note that in the implementation, we must use Tree<pair<int, int>> tr, where tr contains pair{a[i], i}, because otherwise duplicate values would be collapsed.
Let $$$\textrm{bef}$$$ be the number of inversions if we put $$$a_i$$$ at the front of the deque, and $$$\textrm{aft}$$$ if we put it at the back.
- $$$\textrm{bef} = $$$ tr.order_of_key({a, -1})
- $$$\textrm{aft} = $$$ i - tr.order_of_key({a, i})
And we can simply add $$$\min(\textrm{bef}, \textrm{aft})$$$ to our answer at each iteration.
An exercise for the reader: why do we check for {a, -1} when calculating $$$\textrm{bef}$$$?
#include <bits/stdc++.h>
using namespace std;
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
template <typename T> using Tree = tree<T, null_type,
    less<T>, rb_tree_tag, tree_order_statistics_node_update>;
void solve() {
    int n; scanf("%d", &n);
    Tree<pair<int, int>> tr;
    int64_t ans = 0;
    for (int a, i = 0; i < n; ++i) {
        scanf("%d", &a);
        int inv_bef = tr.order_of_key({a, -1});     // earlier elements strictly smaller than a
        int inv_aft = i - tr.order_of_key({a, i});  // earlier elements strictly greater than a
        ans += min(inv_bef, inv_aft);
        tr.insert({a, i});
    }
    printf("%lld\n", ans);
}
int main() {
    int T; scanf("%d", &T);
    while (T--) solve();
}
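For the segment-tree alternative mentioned above, here is a sketch in Python using a Fenwick (binary indexed) tree over coordinate-compressed values, which plays the same role as the order statistics tree; duplicates are handled naturally because the tree stores counts. This is only an illustration of the idea, assuming the same input format, and may need further I/O optimization to be fast enough in Python.
import sys
input = sys.stdin.readline

def solve():
    n = int(input())
    a = list(map(int, input().split()))
    ranks = {v: i + 1 for i, v in enumerate(sorted(set(a)))}  # 1-based compressed values
    m = len(ranks)
    bit = [0] * (m + 1)

    def update(i):  # add one occurrence of compressed value i
        while i <= m:
            bit[i] += 1
            i += i & (-i)

    def query(i):  # number of inserted values with compressed value <= i
        s = 0
        while i > 0:
            s += bit[i]
            i -= i & (-i)
        return s

    ans = 0
    for i, v in enumerate(a):
        r = ranks[v]
        bef = query(r - 1)  # earlier elements strictly smaller
        aft = i - query(r)  # earlier elements strictly greater
        ans += min(bef, aft)
        update(r)
    print(ans)

for _t in range(int(input())):
    solve()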
F: We can treat this as a sort of successor graph problem. Don't worry if you've never heard of this before; it's just a graph where each vertex has exactly one outgoing edge (and, in this case, only one incoming edge).
For each starting position $$$i$$$, we can keep track of the positions it visits.
- If it only visits $$$1$$$'s before it cycles, that means the bitwise AND will be $$$1$$$, so the answer is $$$-1$$$.
- Otherwise, notice that the time it takes before all the visited positions become $$$0$$$ is equal to the longest streak of $$$1$$$'s that we visit. Also note that the longest streak might wrap around from the end of the cycle back to the start, but we can easily remedy this by traversing the cycle twice.
We keep track of the positions we visit and then don't revisit them, using a boolean vis array. Remember to reset globals for each test case!
#include <bits/stdc++.h>
using namespace std;
bool a[1'000'000];
bool vis[1'000'000];
void solve() {
    int n, d; scanf("%d%d", &n, &d);
    for (int b, i = 0; i < n; ++i) {
        scanf("%d", &b);
        a[i] = b;
        vis[i] = false;
    }
    int ans = 0;
    for (int i = 0; i < n; ++i) {
        if (vis[i]) continue;
        int loops = 0, streak = 0, iterations = 0;
        // traverse this cycle twice so that streaks wrapping around the cycle are counted
        for (int j = i; loops < 2 || j != i; ++iterations) {
            if (a[j]) ans = max(ans, ++streak);
            else streak = 0;
            vis[j] = true;
            j = (j + d) % n;
            if (j == i) loops++;
        }
        // the entire cycle consists of 1's, so the AND never becomes 0
        if (streak == iterations) {
            puts("-1");
            return;
        }
    }
    printf("%d\n", ans);
}
int main() {
    int T; scanf("%d", &T);
    while (T--) solve();
}
Bonus 1: How much time does it take for each position to cycle? Express as a function of $$$n$$$ and $$$d$$$.
Bonus 2: Solve this problem for $$$0 \leq a_i \leq 10^9$$$.
G: This solution is from yash_daga's explanation here (you are now obligated to go upvote him).
We can solve this problem with dynamic programming. Let $$$dp[i][j]$$$ denote the minimum length of the covered region to the right of your current position, given that your current position is $$$j$$$ units to the right of the leftmost covered point, after processing the first $$$i$$$ values. The total covered length is then $$$j + dp[i][j]$$$.
Our base case is
- $$$dp[0][j] = 0 \ \ \textrm{ if } j \geq a[0]$$$
- $$$dp[0][j] = \infty \textrm{ if } j < a[0]$$$
(States with $$$j > a[0]$$$ merely overestimate the part to the left of the current point; this is harmless for a minimum and lets the transitions below avoid clamping $$$j$$$ at $$$0$$$.)
Our transitions are:
- $$$dp[i][j-a] = \min(dp[i][j-a],\ dp[i-1][j] + a) \ \textrm{ if } a \leq j$$$ (the segment is placed to the left: the point moves $$$a$$$ units left, so the part to its right grows by $$$a$$$)
- $$$dp[i][j+a] = \min(dp[i][j+a],\ \max(dp[i-1][j]-a,\ 0))$$$ (the segment is placed to the right)
Notice that the answer is at most $$$2\cdot\max(a_i)$$$, so our $$$dp$$$ matrix should have dimensions $$$10000 \times 2000$$$. We can optimize the memory to $$$2 \times 2000$$$, because each layer only depends on the previous one, but it isn't necessary to solve this problem (see the sketch after the code below).
#include <bits/stdc++.h>
using namespace std;
template <typename T, typename U>
constexpr inline bool ckmin(T &t, const U &u) {
    return u<t ? t=u,true : false;
}
constexpr int INF = 2e9;
constexpr int A = 2000;
int dp[10000][A];
void solve() {
    int n; scanf("%d", &n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < A; ++j) {
            dp[i][j] = INF;
        }
    }
    int a; scanf("%d", &a);
    for (int j = a; j < A; ++j) {
        dp[0][j] = 0;
    }
    for (int i = 1; i < n; ++i) {
        scanf("%d", &a);
        for (int j = 0; j < A; ++j) {
            if (dp[i-1][j] == INF) continue;
            if (a <= j) ckmin(dp[i][j-a], a+dp[i-1][j]);           // place segment i to the left
            if (j+a < A) ckmin(dp[i][j+a], max(0, dp[i-1][j]-a));  // place segment i to the right
        }
    }
    int ans = INF;
    for (int j = 0; j < A; ++j)
        ckmin(ans, dp[n-1][j] + j);  // total length = left part (j) + right part (dp)
    printf("%d\n", ans);
}
int main() {
    int T; scanf("%d", &T);
    while (T--) solve();
}
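The $$$2 \times 2000$$$ memory optimization mentioned above only needs to keep the previous layer of the $$$dp$$$. Here is a sketch of it in Python, mirroring the transitions of the C++ code; it is just an illustration of the rolling-array idea (it assumes all $$$a_i$$$ are given on one line, and a direct Python translation like this would likely be too slow to actually pass).
INF = float('inf')
A = 2000  # the answer never exceeds 2 * max(a_i)

for _t in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    prev = [INF] * A
    for j in range(a[0], A):  # base case: dp[0][j] = 0 for j >= a[0]
        prev[j] = 0
    for i in range(1, n):
        cur = [INF] * A
        ai = a[i]
        for j in range(A):
            if prev[j] == INF:
                continue
            if ai <= j:  # place segment i to the left
                cur[j - ai] = min(cur[j - ai], prev[j] + ai)
            if j + ai < A:  # place segment i to the right
                cur[j + ai] = min(cur[j + ai], max(prev[j] - ai, 0))
        prev = cur  # only the previous layer is kept
    print(min(prev[j] + j for j in range(A)))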
Comments
Another one here, for A to F: https://codeforces.me/blog/entry/95352?#comment-844267
In the solution for F, can you please tell why you are putting an upper bound of $$$2$$$ on loops?
If I understand correctly, the loops variable helps in counting the number of times we have looped over the array (the number of times we have passed or reached the starting index).
However, I am confused as to why the upper bound of $$$2$$$ works for the variable.
That just saves us from wrapping the loop in a for (int it = 0; it < 2; ++it).
Thanks for explaining. I completely overlooked this point.
How do you prove that's the solution for D?
I believe there's a proof by contradiction. If I can prove it I'll add it here.
In E2, not many people know this I guess, but instead of using a pair to form a multiset, you can change less<int> to less_equal<int>. That gives the same result. You can check my code for reference.
I didn't want to put that, because it causes problems when you have to remove elements. Using pair<int, int> works in that case as well. You can erase elements like this if you need to:
indexed_set.erase(indexed_set.find_by_order(indexed_set.order_of_key(key)));
You have no idea how badly you can get messed up with less_equal in pbds. It is usual practice not to use less_equal in pbds.
Yes, I have encountered the same problems as described in those comments. But it only has issues with erasing, which has an alternative (erasing by the pointer to it), and with lower_bound, which can be thought of as: lower_bound gives upper_bound and vice versa.
Anyways, it's like how you can use #define int long long because it is convenient, but obviously it is a bad practice and care must be taken. You do you.