Idea: FP7317
Problem Description
You are tasked with finding the minimum number of bandages required to defeat a dragon using a sword. The sword deals:
- 1 damage on the $$$i$$$-th attack if $$$i$$$ is not divisible by 3.
- 2 damage on the $$$i$$$-th attack if $$$i$$$ is divisible by 3.
If your health would drop to $$$0$$$ or below when the dragon attacks you (in other words, your remaining hp is at most the dragon's attack damage $$$n$$$), you must use a bandage first, which restores your hp to its initial value $$$k$$$.
Solution
Since the constraints are small, it suffices to simulate the process attack by attack.
void solve(){
    int h, k, n; // h: dragon's hp, k: your hp, n: damage per dragon attack
    cin >> h >> k >> n;
    int ans = 0, l = k, r = 1; // l: your full hp, r: attack counter
    while (h > 1) {
        r++;
        k -= n; // the dragon attacks you
        if (k <= 0) { // you would not survive without a bandage
            k = l - n; // heal to full, then take the hit
            ans++;
        }
        if (r % 3 == 0) {
            h -= 2;
        } else {
            h--;
        }
    }
    cout << ans << endl;
}
Idea: reyleigh
Problem Statement
OwlBear is attacking a village. The village has $$$n$$$ cannons, the $$$i$$$-th of which deals $$$a_i$$$ damage each second. At the $$$k$$$-th second, the OwlBear is protected from the cannon at index $$$(k-1) \bmod n$$$. Given $$$q$$$ queries, each with a damage threshold $$$h$$$, find the minimum number of seconds required to deal at least $$$h$$$ damage.
Analysis
The key idea is to precompute the prefix sums of the damage dealt in each second, excluding the blocked cannon. Let sum be the total damage all cannons can deal in one second. Then, the damage dealt in the i-th second is sum - a[(i-1) mod n]. We can create a prefix sum array p where p[i] stores the total damage dealt from second 1 to second i.
To answer a query with damage h, we first calculate how many full cycles of n seconds are needed: ans = h / p[n-1], since one full cycle deals p[n-1] damage (in the code, p is 0-indexed). The remaining damage is h -= ans * p[n-1]. Then, we multiply ans by n to convert the full cycles into seconds. If there is still remaining damage (h > 0), we use binary search (specifically lower_bound) on the prefix sum array p to find the minimum number of additional seconds needed.
Solution
- Read n and the damage array a.
- Calculate the total damage sum and create the array p where p[i] = sum - a[i], then accumulate it into prefix sums.
- For each query h:
  - Calculate the number of full cycles: ans = h / p[n-1].
  - Update the remaining damage: h -= ans * p[n-1].
  - Multiply ans by n.
  - If h > 0, use lower_bound on p to find the index of the first value greater than or equal to h, and add index + 1 to ans to get the final answer.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
int main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    int tc;
    cin >> tc;
    while (tc--) {
        int n;
        cin >> n;
        vector<ll> a(n), p(n);
        for (int i = 0; i < n; i++) cin >> a[i];
        ll sum = 0;
        for (int i = 0; i < n; i++) sum += a[i];
        // p[i]: damage dealt in second i+1 (cannon i is blocked)
        for (int i = 0; i < n; i++) p[i] = sum - a[i];
        // prefix sums: p[i] = total damage over the first i+1 seconds
        for (int i = 1; i < n; i++) p[i] += p[i - 1];
        int q;
        cin >> q;
        while (q--) {
            ll h;
            cin >> h;
            ll ans = h / p[n - 1]; // full cycles of n seconds
            h -= ans * p[n - 1];   // damage still needed
            ans *= n;              // cycles -> seconds
            if (h) ans += lower_bound(p.begin(), p.end(), h) - p.begin() + 1;
            cout << ans << '\n';
        }
    }
}
Idea: pramod_17
We can define $$$a_{i,j} = a_{j,i} = i + j$$$ for $$$i \neq j$$$. Then $$$\gcd(a_{i,j}, i + j) = i + j > 1$$$, and since every off-diagonal value appears exactly twice, the XOR of all off-diagonal elements is $$$0$$$.
For $$$i = j$$$, we have $$$i + j = 2i$$$, which is even. So we may set $$$a_{i,i} = 2$$$ for all $$$1 \le i \le n$$$.
Now, if $$$n$$$ is even, all conditions are met.
If $$$n$$$ is odd, the XOR of all elements is $$$2$$$. To cancel that $$$2$$$, we can add $$$2$$$ to any value whose bit $$$1$$$ is unset while keeping the $$$\gcd$$$ condition; one such possibility is adding $$$2$$$ to $$$a_{3,1}$$$, turning $$$4$$$ into $$$6$$$ (this changes the total XOR by $$$4 \oplus 6 = 2$$$, and $$$\gcd(6, 3 + 1) = 2 > 1$$$).
#include <bits/stdc++.h>
using namespace std;
int main(){
    ios_base::sync_with_stdio(false); cin.tie(NULL);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<vector<int>> a(n + 1, vector<int>(n + 1, 0));
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                if (i == j) a[i][j] = 2;  // diagonal: 2 divides 2i
                else a[i][j] = i + j;     // off-diagonal: gcd(i + j, i + j) > 1
            }
        }
        if (n & 1) {
            a[3][1] = 6; // cancel the leftover XOR of 2 from the odd-length diagonal
        }
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) cout << a[i][j] << ' ';
            cout << endl;
        }
    }
    return 0;
}
Idea: sanju77
There are only $$$n$$$ possible different shifts. Let's calculate the cost for each of them in $$$O(1)$$$.
Assume one-based indexing for array.
Let cost[i] represent the cost of the array after i shifts.
Initial cost:
cost[0] = 1*a1 + 2*a2 + ... + n*an
Cost after the first shift:
cost[1] = 1*a2 + 2*a3 + ... + (n-1)*an + n*a1
Rewriting cost[1] in terms of cost[0]:
cost[1] = cost[0] - sum + n*a1
Cost after the second shift:
cost[2] = 1*a3 + 2*a4 + ... + (n-1)*a1 + n*a2
Rewriting cost[2] in terms of cost[1]:
cost[2] = cost[1] - sum + n*a2
The generalized formula for cost[i] in terms of cost[i-1]:
cost[i] = cost[i-1] - sum + n*a[i]
Why is the generalized formula correct?
The generalized formula adjusts the cost for each shift by:
1. Subtracting the total sum of the array (since every element's weight decreases by 1 after the shift).
2. Adding n*a[i] to account for the element a[i], which moves to the last position in the array.
Using this formula, we can compute the cost for all shifts in $$$O(1)$$$ per shift and take the minimum among them, resulting in an overall complexity of $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
int main(){
    long long t;
    cin >> t;
    while (t--) {
        long long n;
        cin >> n;
        vector<long long> a(n);
        long long sum = 0, ans = 0;
        for (long long i = 0; i < n; i++) {
            cin >> a[i];
            sum += a[i];
            ans += (i + 1) * a[i]; // cost[0]
        }
        long long cost = ans;
        for (long long i = 0; i < n; i++) {
            cost = cost - sum + n * a[i]; // cost[i+1] from cost[i]
            ans = min(ans, cost);
        }
        cout << ans << "\n";
    }
    return 0;
}
Idea: wuhudsm
Solution 1:
This algorithm is a randomized algorithm.
We randomly select two fixed points $$$x$$$ and $$$y$$$, then for all $$$z \neq x, y$$$, query "? $$$x$$$ $$$y$$$ $$$z$$$". If the result is 0, it means $$$z$$$ and $$$x$$$ are on the same side of $$$y$$$; otherwise, $$$z$$$ and $$$x$$$ are on different sides of $$$y$$$.
Thus, we use $$$y$$$ as the pivot and partition the elements into two halves. To determine which of them is the actual left half, we need one extra operation, which is $$$O(1)$$$ (this is only required in the first partitioning step; after determining the correct position of $$$y$$$, no more operations of this type are needed).
Once we know the correct position of $$$y$$$, we can recursively solve the left and right halves.
You might have already noticed that this is essentially the process of quicksort. Therefore, the complexity analysis is identical to that of randomized quicksort, giving $$$O(n \log n)$$$ queries in expectation.
Solution 2:
This algorithm is a deterministic algorithm.
If we can determine the value of $$$p[1]$$$, then we can perform a comparison "? $$$p[1]$$$ $$$x$$$ $$$y$$$", where we compare two numbers.
The method to determine $$$p[1]$$$ is given by the following code:
int u = 1, v = 2;
// Invariant: u and v are the positions of the minimum and maximum
// (in some order) among the elements processed so far.
for (int i = 3; i <= n; i++) {
    int res = ask(u, i, v);
    if (!res) { // p[i] is not between p[u] and p[v]: i is a new extreme
        res = ask(u, v, i);
        if (res) v = i; // p[v] lies between p[u] and p[i], so i replaces v
        else u = i;     // otherwise i replaces u
    }
}
cout << "1 " << u << ' ' << v << endl;
int res;
cin >> res;
if (!res) swap(u, v);
After determining $$$p[1]$$$, you can solve the problem using any $$$O(n \log n)$$$ sorting algorithm.
Idea: pramod_17
Note that performing an operation is always optimal as it won't decrease the answer.
At first, let's find the number of good subarrays before applying the operation. For this, the condition is $$$a_l + a_{l+1} + \ldots + a_r \ge k(r-l+1)$$$, which is $$$pre[r] - kr \ge pre[l-1] - k(l-1)$$$. So we can make a new array $$$b$$$ where $$$b_i = pre[i] - ki$$$, and the initial number of good subarrays is equal to the number of pairs $$$i < j$$$ with $$$b_i \le b_j$$$, which can be counted like inversions (the code below uses an order-statistics tree).
Now, let's say we increment $$$a_i$$$ by $$$1$$$ for some $$$i$$$. Then every additional good subarray must satisfy $$$l \le i \le r$$$ (so that $$$b_r$$$ increases by $$$1$$$ while $$$b_{l-1}$$$ does not) and $$$b_r = b_{l-1} - 1$$$. If we know the number of additional good subarrays when the operation is performed at the $$$i$$$-th position, we can easily find it for the $$$(i+1)$$$-th position using a map.
Finally we take maximum over all positions and add it to the initial answer.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
template <typename T>
using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;
template <typename T>
using oms = tree<T, null_type, less_equal<T>, rb_tree_tag, tree_order_statistics_node_update>;
#define ll long long
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int t;
    cin >> t;
    while (t--) {
        ll n, k;
        cin >> n >> k;
        vector<ll> a(n);
        for (int i = 0; i < n; i++) cin >> a[i];
        vector<ll> b(n + 1, 0ll);
        for (int i = 0; i < n; i++) b[i + 1] = b[i] + (a[i] - k);
        // initial answer: pairs x < y with b[x] <= b[y], via an order-statistics multiset
        ll ans = 0;
        oms<ll> inv;
        inv.insert(0);
        for (int i = 0; i < n; i++) {
            ans += inv.order_of_key(b[i + 1] + 1);
            inv.insert(b[i + 1]);
        }
        // val: extra good subarrays when the operation is applied at position i,
        // i.e. pairs x < i <= y with b[y] == b[x] - 1; start with i = 1 (x must be 0)
        ll mx = 0;
        map<ll, ll> left, right;
        for (int i = 0; i < n; i++) right[b[i + 1]]++;
        left[0]++;
        ll val = 0;
        for (int i = 0; i < n; i++) {
            if (b[i + 1] == -1) val++;
        }
        mx = val;
        for (int i = 2; i <= n; i++) {
            // move the split point from i-1 to i: index i-1 crosses from "right" to "left"
            val += right[b[i - 1] - 1];
            val -= left[b[i - 1] + 1];
            left[b[i - 1]]++;
            right[b[i - 1]]--;
            mx = max(mx, val);
        }
        ans += mx;
        cout << ans << '\n';
    }
    return 0;
}