2022A - Bus to Pénjamo
The key to maximizing happiness is to seat family members together as much as possible. If two members of the same family sit in the same row, both are happy and only two seats are used. However, if they are seated separately, each of them can only be happy by getting a whole row to themselves, which wastes seats. Therefore, we prioritize seating family pairs together first.
Once all possible pairs are seated, there may still be some family members left to seat. If a family has an odd number of members, one person is left without a pair. Ideally, we want to seat this person alone in a row so that they are happy. However, if there are no remaining rows where they can sit alone, they will have to share a row with someone from another family, and in that case neither of the two is happy, since they are seated neither with a relative nor alone.
The easiest way to handle the leftover members is to compare the number of remaining rows with the number of unpaired people. If the number of remaining rows is greater than or equal to the number of unpaired people, all unpaired people can sit alone and remain happy. Otherwise, some people have to share a row with a stranger, and only 2 × (remaining rows) − (unpaired people) of them can still sit alone and stay happy.
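As a reference, here is a minimal C++ sketch of this counting. It assumes the usual layout of the input (a number of test cases, then for each test the number of families, the number of rows, and the family sizes); adapt the I/O if the actual format differs.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        long long n, r;
        cin >> n >> r;
        long long pairs = 0, singles = 0;
        for (int i = 0; i < n; i++) {
            long long a;
            cin >> a;
            pairs += a / 2;    // family pairs, each seated together in one row
            singles += a % 2;  // one leftover member per odd-sized family
        }
        long long rem = r - pairs;  // rows still free for the unpaired members
        long long happy;
        if (singles <= rem)
            happy = 2 * pairs + singles;              // every unpaired member sits alone
        else
            happy = 2 * pairs + (2 * rem - singles);  // only 2*rem - singles can sit alone
        cout << happy << "\n";
    }
}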
- Write down key observations and combine them to solve the problem.
2022B - Kar Salesman
Since no customer can buy more than one car of the same model, the minimum number of clients we need is determined by the model with the most cars. Therefore, we need at least $$$\max(a_1, a_2, \ldots, a_n)$$$ clients, because even if a customer buys cars from other models, they cannot exceed this limit for any single model.
Each client can buy up to $$$x$$$ cars from different models. To distribute all the cars among the clients, we also need to consider the total number of cars. Thus, the minimum number of clients needed is at least $$$\left\lceil \frac{a_1 + a_2 + \dots + a_n}{x} \right\rceil$$$. This ensures that all cars can be distributed across the clients, respecting the limit of $$$x$$$ cars per customer.
The actual number of clients required is the maximum of the two values: $$$\max\left(\left\lceil \frac{a_1 + a_2 + \dots + a_n}{x} \right\rceil,\ \max(a_1, a_2, \ldots, a_n)\right)$$$. This gives us a lower bound for the number of clients required, satisfying both constraints (the maximum number of cars of a single model and the total number of cars).
Alternatively, you can apply binary search on the answer.
To demonstrate that this is always sufficient, we can reason that the most efficient strategy is to reduce the car count of the models with the largest numbers first. By doing so, we maximize the benefit of allowing each client to buy up to $$$x$$$ cars from different models.
After distributing the cars, one of two outcomes will arise: 1. All models end up with the same number of remaining cars; this is the situation where the total number of cars is spread evenly, matching the $$$\left\lceil \frac{a_1 + a_2 + \dots + a_n}{x} \right\rceil$$$ bound. 2. At most $$$x$$$ models still have cars remaining, which matches the case where the maximum number of cars of a single model, $$$\max(a_1, a_2, \ldots, a_n)$$$, determines the bound.
Imagine a grid of size $$$w \times x$$$, where $$$w$$$ represents the minimum number of clients needed: $$$w = \max\left(\left\lceil \frac{a_1 + a_2 + \dots + a_n}{x} \right\rceil,\ \max(a_1, a_2, \ldots, a_n)\right)$$$. Now, place the cars in this grid by filling it column by column, from the first model to the $$$n$$$-th model. Each row corresponds to one client. Since $$$w \ge \max(a_1, \ldots, a_n)$$$, the cars of a single model never land on the same row twice, so each client buys cars from different models and no client exceeds the limit of $$$x$$$ cars. Since the total number of cars is less than or equal to the size of the grid, this configuration guarantees that all cars will be sold within $$$w$$$ clients.
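A minimal C++ sketch of this formula (assuming each test case gives $$$n$$$ and $$$x$$$ followed by $$$a_1, \ldots, a_n$$$; 64-bit integers are used because the total can overflow 32 bits):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        long long n, x;
        cin >> n >> x;
        long long total = 0, mx = 0;
        for (int i = 0; i < n; i++) {
            long long a;
            cin >> a;
            total += a;
            mx = max(mx, a);
        }
        // two lower bounds: the most numerous model, and the total number of cars divided by x, rounded up
        cout << max(mx, (total + x - 1) / x) << "\n";
    }
}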
- If you don't know how to solve a problem, try solving small cases to find a pattern.
2022C - Gerrymandering
We will use dynamic programming to keep track of the maximum number of votes Álvaro can secure as we move from column to column (note that there are many ways to implement the DP; we will use the one that is easiest to understand).
An important observation is that if you place a horizontal piece in one row, you also have to place one in the other row to avoid leaving holes.
Let $$$dp[i][j]$$$ represent the maximum number of votes Álvaro can get considering the grid up to the $$$i$$$-th column. The second dimension $$$j$$$ represents the current configuration of filled cells:
- $$$j = 0$$$: all cells up to and including the $$$i$$$-th column are completely filled.
- $$$j = 1$$$: all cells up to and including the $$$i$$$-th column are completely filled, and there is one extra cell filled in the first row of the next column.
- $$$j = 2$$$: all cells up to and including the $$$i$$$-th column are completely filled, and there is one extra cell filled in the second row of the next column.
For simplicity, we will call the following pieces "L", "second L", "third L" and "fourth L", respectively (a * marks a cell of the piece, a . marks an empty cell, and the upper line is the first row):

L:
*.
**

second L:
**
*.

third L:
**
.*

fourth L:
.*
**
- From $$$dp[k][0]$$$ (both rows filled up to column $$$k$$$):
  - You can place a horizontal piece in both rows, leading to $$$dp[k+3][0]$$$.
  - You can place a second L piece, leading to $$$dp[k+1][1]$$$.
  - You can place an L piece, leading to $$$dp[k+1][2]$$$.
- From $$$dp[k][1]$$$ (one extra cell filled in the first row of the next column):
  - You can place a horizontal piece in the first row occupying columns $$$k+2$$$ to $$$k+4$$$ and a horizontal piece in the second row occupying columns $$$k+1$$$ to $$$k+3$$$, leading to $$$dp[k+3][1]$$$.
  - You can place a fourth L, leading to $$$dp[k+2][0]$$$.
- From $$$dp[k][2]$$$ (one extra cell filled in the second row of the next column):
  - You can place a horizontal piece in the first row occupying columns $$$k+1$$$ to $$$k+3$$$ and a horizontal piece in the second row occupying columns $$$k+2$$$ to $$$k+4$$$, leading to $$$dp[k+3][2]$$$.
  - You can place a third L, leading to $$$dp[k+2][0]$$$.
More formally, for each DP state the following transitions are possible, where $$$vot$$$ denotes the number of votes Álvaro gains from the pieces placed in the transition (i.e., the number of new districts in which at least two of the three cells vote for him):

From $$$dp[k][0]$$$:
- $$$dp[k+3][0] = \max(dp[k+3][0], dp[k][0] + vot)$$$
- $$$dp[k+1][1] = \max(dp[k+1][1], dp[k][0] + vot)$$$
- $$$dp[k+1][2] = \max(dp[k+1][2], dp[k][0] + vot)$$$

From $$$dp[k][1]$$$:
- $$$dp[k+3][1] = \max(dp[k+3][1], dp[k][1] + vot)$$$
- $$$dp[k+2][0] = \max(dp[k+2][0], dp[k][1] + vot)$$$

From $$$dp[k][2]$$$:
- $$$dp[k+3][2] = \max(dp[k+3][2], dp[k][2] + vot)$$$
- $$$dp[k+2][0] = \max(dp[k+2][0], dp[k][2] + vot)$$$
To implement the DP solution, you only need to handle the transitions for states $$$dp[i][0]$$$ and $$$dp[i][1]$$$. For $$$dp[i][2]$$$, the transitions are the same as for $$$dp[i][1]$$$, with the rows swapped.
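Below is a sketch in C++ of the DP described above (state 2 is written out explicitly instead of mirroring state 1). It assumes each test case gives $$$n$$$ followed by the two rows of the grid as strings, with 'A' marking the cells that vote for Álvaro; adjust the I/O if the actual format differs.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<string> g(2);
        cin >> g[0] >> g[1];
        // a district of 3 cells is won by Alvaro if at least 2 of them are 'A'
        auto win = [&](int r1, int c1, int r2, int c2, int r3, int c3) {
            return (g[r1][c1] == 'A') + (g[r2][c2] == 'A') + (g[r3][c3] == 'A') >= 2 ? 1 : 0;
        };
        const int NEG = -1e9;
        // dp[i][0]: columns 0..i-1 fully covered
        // dp[i][1]: columns 0..i-1 covered, plus the extra cell (row 0, column i)
        // dp[i][2]: columns 0..i-1 covered, plus the extra cell (row 1, column i)
        vector<array<int, 3>> dp(n + 1, {NEG, NEG, NEG});
        dp[0][0] = 0;
        for (int i = 0; i < n; i++) {
            if (dp[i][0] > NEG) {
                if (i + 3 <= n)   // two horizontal pieces
                    dp[i + 3][0] = max(dp[i + 3][0], dp[i][0]
                        + win(0, i, 0, i + 1, 0, i + 2) + win(1, i, 1, i + 1, 1, i + 2));
                if (i + 1 < n) {  // "second L" and "L"
                    dp[i + 1][1] = max(dp[i + 1][1], dp[i][0] + win(0, i, 1, i, 0, i + 1));
                    dp[i + 1][2] = max(dp[i + 1][2], dp[i][0] + win(0, i, 1, i, 1, i + 1));
                }
            }
            if (dp[i][1] > NEG) {
                if (i + 3 < n)    // shifted horizontal pieces
                    dp[i + 3][1] = max(dp[i + 3][1], dp[i][1]
                        + win(0, i + 1, 0, i + 2, 0, i + 3) + win(1, i, 1, i + 1, 1, i + 2));
                if (i + 1 < n)    // "fourth L"
                    dp[i + 2][0] = max(dp[i + 2][0], dp[i][1] + win(1, i, 0, i + 1, 1, i + 1));
            }
            if (dp[i][2] > NEG) {
                if (i + 3 < n)    // shifted horizontal pieces
                    dp[i + 3][2] = max(dp[i + 3][2], dp[i][2]
                        + win(0, i, 0, i + 1, 0, i + 2) + win(1, i + 1, 1, i + 2, 1, i + 3));
                if (i + 1 < n)    // "third L"
                    dp[i + 2][0] = max(dp[i + 2][0], dp[i][2] + win(0, i, 0, i + 1, 1, i + 1));
            }
        }
        cout << dp[n][0] << "\n";
    }
}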
- Don't begin to implement until all the details are very clear; this will make your implementation much easier.
- Draw pictures to make everything easier to visualize.
2022D1 - Asesino (Easy Version)
2022D2 - Asesino (Hard Version)
2022E1 - Billetes MX (Easy Version)
Consider the extremal cases: what is the answer if the grid is empty? What is the answer if the grid is full? What happens if $$$N = 2$$$?
Observe that if $$$N = 2$$$, the xor of every column is constant.
Can we generalize this idea?
Imagine you have a valid full grid $$$a$$$. For each $$$i$$$, change $$$a[0][i]$$$ to $$$a[0][0]\oplus a[0][i]$$$. Observe that the grid still satisfies the condition!
Using the previous idea we can show that for any valid grid, there must exist two arrays $$$X$$$ and $$$Y$$$, such that $$$a[i][j] = X[i] \oplus Y[j]$$$.
Consider doing the operation described in hint 4 for every row and every column of a full grid that satisfies the condition; that is, for each row and each column, fix the first element and change every value in that row or column by its xor with the first element. We will be left with a grid whose first row and first column are all zeros. But this grid also satisfies the condition! So it must hold that $$$a[i][j] \oplus a[i][0] \oplus a[0][j] \oplus a[0][0] = 0$$$, but three of these values are zero! We can conclude that $$$a[i][j]$$$ must also be zero. This shows that, for any valid full grid, there must exist two arrays $$$X$$$ and $$$Y$$$ such that $$$a[i][j] = X[i] \oplus Y[j]$$$.
Think of each tile of the grid that we know of as imposing a condition between two elements of the arrays $$$X$$$ and $$$Y$$$ (namely $$$X[i] \oplus Y[j] = a[i][j]$$$). For each tile added, we lose one degree of freedom, right? We could make a bunch of substitutions to determine new values of the grid. How can we best model the problem now?
Think about it as a graph where the nodes are rows and columns, and there is an edge between row $$$i$$$ and column $$$j$$$ with weight $$$a[i][j]$$$. Substitutions are now just paths on the graph. If we have a path between the node that represents row $$$i$$$ and column $$$j$$$, the xor of the weights in this path represents the value of $$$a[i][j]$$$. What happens if there's more than one path, and two paths have different values?
Continuing hint 7: we can deduce that if there is a cycle in this graph whose xor of weights is different from $$$0$$$, there is a contradiction, and the arrays $$$X$$$ and $$$Y$$$ can't exist. How can we check if this is the case?
Do a dfs on this graph, maintaining an array $$$p$$$, such that $$$p[u]$$$ is the xor of all edges in the path you took from the root, to node $$$u$$$. Whenever you encounter a back edge between nodes $$$u$$$ and $$$v$$$ with weight $$$w$$$, check if $$$p[u] \oplus p[v] = w$$$.
Hint 9: So let's assume there is no contradiction, i.e., all cycles have xor $$$0$$$. What would be the answer to the problem? We know that if the graph is connected, there exists a path between any two tiles and all the values of the tiles would be determined. So, in how many ways can we make the graph connected?
Answer to hint 9: Say there are $$$K$$$ connected components. We can connect them all using $$$K - 1$$$ edges. For each of these edges, there are $$$2^{30}$$$ possible values it can have. Thus, the number of ways of making the graph connected is $$$\left(2^{30} \right)^{K - 1}$$$.
Why is this the answer to the problem?
This is just the solution. Please read the hints in order to understand why it works and how to derive it.
Precompute an array $$$c$$$, with $$$c[i] = 2^{30\cdot i} \pmod{10^9 + 7}$$$.
Let $$$a$$$ be the 2D array with the known values of the grid. Consider the graph formed by adding an edge between node $$$i$$$ and node $$$j + n$$$ with weight $$$a[i][j]$$$ for every known tile $$$(i, j)$$$ of the grid. Iterate over the nodes from $$$1$$$ to $$$n + m$$$, maintaining an array $$$p$$$ initialized with $$$-1$$$s. If the current node hasn't been visited, we run a dfs from it. We will use the array $$$p$$$ to maintain the running xor for each node during the dfs. If we ever encounter a back edge between nodes $$$u, v$$$ with weight $$$w$$$, we check whether $$$p[u] \oplus p[v] = w$$$. If not, the answer is zero. If this condition always holds, let $$$K$$$ be the number of times we had to start a dfs; $$$K$$$ is also the number of connected components of the graph. The answer is $$$c[K - 1]$$$.
Complexity: $$$\mathcal{O}(n + m + k)$$$.
Code: 285962028
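If you want a self-contained reference besides the linked submission, here is one possible C++ sketch of the approach above. The I/O is an assumption (a single test with $$$n$$$, $$$m$$$, $$$k$$$ on one line, followed by $$$k$$$ lines $$$r\ c\ v$$$ with 1-based indices and no updates); adapt it to the real format. The power $$$\left(2^{30}\right)^{K-1}$$$ is computed directly instead of using the precomputed array $$$c$$$.

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

int main() {
    int n, m, k;
    cin >> n >> m >> k;
    int V = n + m;                         // nodes: rows 0..n-1, columns n..n+m-1
    vector<vector<pair<int, int>>> adj(V); // (neighbour, weight)
    for (int i = 0; i < k; i++) {
        int r, c, v;
        cin >> r >> c >> v;
        adj[r - 1].push_back({n + c - 1, v});
        adj[n + c - 1].push_back({r - 1, v});
    }
    vector<long long> p(V, -1);  // xor of the edge weights on the path from the component's root
    bool ok = true;
    long long components = 0;
    for (int s = 0; s < V; s++) {
        if (p[s] != -1) continue;          // already visited
        components++;
        p[s] = 0;
        stack<int> st;
        st.push(s);
        while (!st.empty()) {              // iterative dfs-style traversal
            int u = st.top(); st.pop();
            for (auto [v, w] : adj[u]) {
                if (p[v] == -1) {
                    p[v] = p[u] ^ w;
                    st.push(v);
                } else if ((p[u] ^ p[v]) != w) {
                    ok = false;            // cycle with non-zero xor: contradiction
                }
            }
        }
    }
    long long ans = 0;
    if (ok) {
        ans = 1;
        long long base = (1LL << 30) % MOD;
        for (long long i = 1; i < components; i++) ans = ans * base % MOD;
    }
    cout << ans << "\n";
}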
2022E2 - Billetes MX (Hard Version)
Please read the solution to E1 beforehand, as well as all the hints.
Through the observations in E1, we can reduce the problem to the following: we have a graph, we add edges, and after each addition we want to determine whether all of its cycles have xor 0, and the number of connected components of the graph.
The edges are never removed, so whenever an added edge creates a cycle whose xor is different from zero, this cycle stays in the graph for all following updates. So we can binary search for the first addition that creates a cycle with non-zero xor, using the same dfs we used in E1. After the first such edge, the answer will always be zero.
Now, for all the additions before that, we must determine how many connected components the graph has at each step. But this is easily solvable with Disjoint Set Union.
Complexity: $$$\mathcal{O}(\log(q)(n + m + k + q) + \alpha(n + m)(q + k))$$$.
Code: 285961673
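For reference, here is one possible sketch of this offline approach. The helper consistent() is just the dfs check from E1 rerun on a prefix of the additions; the input format (one test with $$$n\ m\ k\ q$$$, then the $$$k$$$ fixed cells and the $$$q$$$ assignments, with an answer required after the initial grid and after each assignment) is an assumption, so adapt the I/O as needed.

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

// dfs check from E1: do the first `len` added edges only create cycles with xor 0?
bool consistent(int V, const vector<array<int, 3>> &edges, int len) {
    vector<vector<pair<int, int>>> adj(V);
    for (int i = 0; i < len; i++) {
        adj[edges[i][0]].push_back({edges[i][1], edges[i][2]});
        adj[edges[i][1]].push_back({edges[i][0], edges[i][2]});
    }
    vector<long long> p(V, -1);
    for (int s = 0; s < V; s++) {
        if (p[s] != -1) continue;
        p[s] = 0;
        stack<int> st;
        st.push(s);
        while (!st.empty()) {
            int u = st.top(); st.pop();
            for (auto [v, w] : adj[u]) {
                if (p[v] == -1) { p[v] = p[u] ^ w; st.push(v); }
                else if ((p[u] ^ p[v]) != w) return false;
            }
        }
    }
    return true;
}

int find(vector<int> &par, int x) { return par[x] == x ? x : par[x] = find(par, par[x]); }

int main() {
    int n, m, k, q;
    cin >> n >> m >> k >> q;
    int V = n + m;
    vector<array<int, 3>> edges;           // every fixed cell / assignment as an edge row -> column
    for (int i = 0; i < k + q; i++) {
        int r, c, v;
        cin >> r >> c >> v;
        edges.push_back({r - 1, n + c - 1, v});
    }
    // Binary search the shortest prefix that already contains a cycle with non-zero xor.
    int lo = k, hi = k + q + 1;            // answers are only needed for prefixes k..k+q
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (consistent(V, edges, mid)) lo = mid + 1; else hi = mid;
    }
    int firstBad = hi;                     // from this prefix on, the answer is 0
    // One DSU pass to know the number of connected components after each prefix.
    vector<int> par(V);
    iota(par.begin(), par.end(), 0);
    long long comps = V;
    auto answer = [&](long long K) {       // (2^30)^(K-1) mod 1e9+7
        long long res = 1, base = (1LL << 30) % MOD;
        for (long long e = K - 1; e > 0; e >>= 1, base = base * base % MOD)
            if (e & 1) res = res * base % MOD;
        return res;
    };
    for (int i = 0; i <= k + q; i++) {
        if (i > 0) {                       // merge the endpoints of edge i-1
            int a = find(par, edges[i - 1][0]), b = find(par, edges[i - 1][1]);
            if (a != b) { par[a] = b; comps--; }
        }
        if (i >= k)                        // an answer is required after this prefix
            cout << (i >= firstBad ? 0 : answer(comps)) << "\n";
    }
}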
We will answer the queries online. Remember that if the graph only contains cycles with xor $$$0$$$, the xor of a path between a pair of nodes is unique. We'll use this to our advantage. Let $$$W(u, v)$$$ be the unique value of the xor of a path between nodes $$$u$$$ and $$$v$$$.
Let's modify a DSU: drop the path compression and define an array $$$p$$$ that maintains the following invariant:
For every node $$$u$$$ in a component with root $$$r$$$, $$$W(u, r)$$$ equals the xor of $$$p[x]$$$ for all ancestors $$$x$$$ of $$$u$$$ in our dsu (we also consider $$$u$$$ an ancestor of itself).
Whenever we add an edge with weight $$$w$$$ between two nodes $$$u$$$ and $$$v$$$ that lie in different components, we consider the roots $$$U$$$ and $$$V$$$ of their respective components. Without loss of generality, assume $$$U$$$'s component has at least as many elements as $$$V$$$'s. We add an edge with weight $$$W(u, U) \oplus W(v, V) \oplus w$$$ between $$$V$$$ and $$$U$$$, and make $$$U$$$ the root of the new component.
This last step is the small-to-large optimization, which ensures that the height of our trees is logarithmic.
With this data structure, we can maintain the number of connected components like in a usual DSU, and whenever an edge $$$(u, v)$$$ with weight $$$w$$$ is added and $$$u$$$ and $$$v$$$ belong to the same component, we can obtain the value of $$$W(u, v)$$$ in $$$\mathcal{O}(\log(n))$$$ and check whether it is equal to $$$w$$$.
This idea is similar to the data structure described here, and is useful in other contexts.
Complexity: $$$\mathcal{O}(\log(q + k)(n + m + k + q))$$$.
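Here is a sketch of such a weighted DSU in C++ (union by size only, no path compression). The struct and its names are only illustrative, not taken from any official code.

#include <bits/stdc++.h>
using namespace std;

// p[u] is the xor-weight of the edge from u to its parent (0 for a root), so
// W(u, root) is the xor of p[] along the path from u up to the root of its
// component, which is exactly the invariant described above.
struct XorDSU {
    vector<int> parent, sz;
    vector<long long> p;
    long long components;
    XorDSU(int n) : parent(n), sz(n, 1), p(n, 0), components(n) {
        iota(parent.begin(), parent.end(), 0);
    }
    // Returns {root of u, W(u, root)}; O(log n) because the trees have logarithmic height.
    pair<int, long long> find(int u) {
        long long x = 0;
        while (parent[u] != u) { x ^= p[u]; u = parent[u]; }
        return {u, x};
    }
    // Adds the edge (u, v) with weight w. Returns false if it closes a cycle
    // whose xor is different from zero, i.e. the grid becomes infeasible.
    bool unite(int u, int v, long long w) {
        auto [ru, xu] = find(u);
        auto [rv, xv] = find(v);
        if (ru == rv) return (xu ^ xv) == w;          // W(u, v) must already equal w
        if (sz[ru] < sz[rv]) { swap(ru, rv); swap(xu, xv); }
        // Hang the smaller root rv under ru; p[rv] is chosen so that
        // W(v, ru) = W(v, rv) ^ p[rv] equals W(u, ru) ^ w, preserving the invariant.
        parent[rv] = ru;
        p[rv] = xu ^ xv ^ w;
        sz[ru] += sz[rv];
        components--;
        return true;
    }
};

int main() {
    // Tiny demo: 2 rows and 2 columns give 4 nodes (rows 0..1, columns 2..3).
    // Fixing a[1][1] = 5 adds edge (0, 2); fixing a[1][2] = 3 adds edge (0, 3).
    XorDSU dsu(4);
    bool ok = dsu.unite(0, 2, 5) && dsu.unite(0, 3, 3);
    cout << ok << " " << dsu.components << "\n";  // prints "1 2"
}

In the actual solution you would keep one such structure over the $$$n + m$$$ nodes and call unite for every known cell and every assignment. After each step the answer is $$$0$$$ if some unite call has already returned false, and $$$\left(2^{30}\right)^{\text{components} - 1} \bmod (10^9 + 7)$$$ otherwise.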
In our Olympiad, we had $$$v_i \le 1$$$ instead of $$$v_i \le 2^{30}$$$, and we also had a final subtask where tiles could change values. How would you solve this?
We have three different solutions for this problem, one of them found during the Olympiad! I'll let the contestant who found it post it if he wants to.