SummerSky's blog

By SummerSky, 3 years ago, In English

Recently, I came across three problems that all require a segment tree with interval updates (lazy propagation). I did not solve any of them on my own, but their solutions taught me a lot. I would like to share some of my thoughts about the ideas behind these three problems. If you are learning segment trees as well, I hope this helps a little.

Thought 1. At first sight it might seem that the problem needs a huge number of interval updates, but it turns out that O(N) or O(NlogN) of them are enough.

1) Problem https://codeforces.me/contest/438/problem/D. This problem asks to replace every element of an interval by its remainder modulo some integer P, while the query asks for the sum of a given interval. I really got stuck here: I thought that, in the worst case, we have to touch every element, and thus lazy propagation could never work. However, there exists a wonderful trick! For any positive integers x and p, if p <= x, then x%p < x/2 holds, meaning that any element can only be reduced a logarithmic number of times before it reaches zero. As there are O(N) elements, and each update in the segment tree costs O(logN), the total complexity is O(NlogNlogX), where X is the maximum value.
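
The pruning trick above can be sketched as follows: a minimal segment tree that additionally keeps the maximum of every node (the structure and names are my own, not the reference solution):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Range "mod p" with a pruning segment tree: every node stores both the sum
// and the maximum of its range. Since x % p < x / 2 whenever p <= x, each
// element can only shrink a logarithmic number of times, so descending to
// the leaves only while mx >= p keeps the total work bounded.
struct ModSegTree {
    int n;
    vector<long long> sum, mx;
    ModSegTree(const vector<long long>& a) : n(a.size()), sum(4 * n), mx(4 * n) {
        build(1, 0, n - 1, a);
    }
    void build(int v, int l, int r, const vector<long long>& a) {
        if (l == r) { sum[v] = mx[v] = a[l]; return; }
        int m = (l + r) / 2;
        build(2 * v, l, m, a);
        build(2 * v + 1, m + 1, r, a);
        pull(v);
    }
    void pull(int v) {
        sum[v] = sum[2 * v] + sum[2 * v + 1];
        mx[v] = max(mx[2 * v], mx[2 * v + 1]);
    }
    void rangeMod(int v, int l, int r, int ql, int qr, long long p) {
        if (qr < l || r < ql || mx[v] < p) return;   // pruning: nothing here can change
        if (l == r) { sum[v] %= p; mx[v] = sum[v]; return; }
        int m = (l + r) / 2;
        rangeMod(2 * v, l, m, ql, qr, p);
        rangeMod(2 * v + 1, m + 1, r, ql, qr, p);
        pull(v);
    }
    long long querySum(int v, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) return sum[v];
        int m = (l + r) / 2;
        return querySum(2 * v, l, m, ql, qr) + querySum(2 * v + 1, m + 1, r, ql, qr);
    }
    void rangeMod(int l, int r, long long p) { rangeMod(1, 0, n - 1, l, r, p); }
    long long querySum(int l, int r) { return querySum(1, 0, n - 1, l, r); }
};
```

The recursion into a node stops as soon as mx[v] < p, which is exactly what makes the amortised bound work.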

2) Problem https://codeforces.me/contest/445/problem/E. In this problem, again, I was very confused at first, since each modification might seem to require a huge number of interval updates, and I could not figure out how many updates it actually needs. However, the key idea is amortized analysis! Any segment that is fully or partly covered contributes O(logN) complexity. Next, if a segment is fully covered, it "disappears" from then on and contributes no further complexity, while for a partly covered segment, the covered part "disappears" as well and the remaining part "forms" a new segment. Therefore, each "update" operation creates at most three new segments, so the total number of updates is O(N + 3M), which turns out to give linear complexity!

Thought 2. When updating an interval, brute force updates every element inside the interval, while lazy propagation saves complexity whenever the "update information" can be represented by a constant number of variables.

1) Problem http://codeforces.me/contest/447/problem/E. This problem requires adding a Fibonacci sequence to a given interval. I was quite familiar with adding a constant to an interval, where we use one extra variable to store that constant, i.e., lazy propagation. But this was the only pattern I knew, and I thought it was the only one, until this problem taught me a lesson about how lazy propagation really works. The Fibonacci sequence has the feature that the n-th element can be represented as a weighted sum of the first and second elements, and the same holds for its prefix sums. Therefore, we can store two variables, the first and second elements, for any interval, which is enough to implement lazy propagation. With the same idea, we can also deal with other sequences, such as arithmetic and geometric progressions.
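
A quick check of the closure property that makes the lazy tag work: two numbers determine the whole tag, and both the n-th term and the prefix sum can be recovered from them (the helper names are illustrative):

```cpp
// s(1) = f1, s(2) = f2, s(n) = s(n-1) + s(n-2); then
//   s(n) = F(n-2)*f1 + F(n-1)*f2   (F is the usual Fibonacci, F(0) = 0, F(1) = 1)
//   s(1) + ... + s(n) = s(n+2) - f2
long long F(int n) {
    long long a = 0, b = 1;
    for (int i = 0; i < n; i++) { long long c = a + b; a = b; b = c; }
    return a;
}
long long term(long long f1, long long f2, int n) {   // n-th element, n >= 1
    if (n == 1) return f1;
    return F(n - 2) * f1 + F(n - 1) * f2;
}
long long prefixSum(long long f1, long long f2, int n) {
    return term(f1, f2, n + 2) - f2;                  // the identity above
}
```

For instance, the sequence 2, 5, 7, 12 has term(2, 5, 4) = 12 and prefixSum(2, 5, 4) = 26, so pushing the pair (f1, f2) down to children is enough to answer sum queries.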

Finally, due to limited space, most of the details of the solutions have been omitted, and some parts may not be described clearly. So, if you are interested in this topic, have any suggestions, or would like to discuss the solutions, I will be very glad to do so :D


By SummerSky, 4 years ago, In English

It has really been a tough time for me to complete another 100 virtual contests (if you are interested, you could take a look at My advance and my thought, after completing about 100 virtual rounds, which I wrote several years ago). At one point, I stopped competitive programming for a long time, and when I decided to start again, I found that I had forgotten most of the advanced algorithms, data structures and tricks. I really suffered a lot when I resumed practicing, but anyway, that is in the past now, and I finally struggled through it.

Although I really feel that I have improved a lot, the fact is that my rating still goes down. I thought this would frustrate me, but I find that I no longer focus on rating as much as before; instead, I have started to enjoy competitive programming itself: solving problems, learning new techniques, and so on. Of course, obtaining a higher rating is still my target, but keeping up practice and learning is more important. I don't know whether someday I will completely give up competitive programming, but until that day comes, I will try my best to hold on.

There is also a little story that I would like to share here. After reading my first post (see the link above), a programmer sent me a message saying that my post had encouraged him a lot and that he would practice hard. As far as I remember, he was a pupil (or a specialist) at that time; however, last year, when I happened to read that message again, I found that he had become a master! Wow, so amazing! Well, I want to say that this time I got encouraged by you, since this makes me believe again that 'practice makes perfect'.

Finally, as an expert, what I can do is quite limited, but if you are struggling with some problem and find that my solution got accepted, you are welcome to discuss it with me.


By SummerSky, 5 years ago, In English

I started practicing from round 1 and have now completed about 150 rounds. Of course, I did not solve all of the problems, since some of them are too difficult for me...

Although the tutorials are very good, I feel that sometimes they are still a little difficult for beginners like me to fully understand, especially for many div1 problems. On the other hand, since the problems are quite interesting and there are many clever ideas and techniques behind them, I think discussion helps a lot in obtaining a deeper understanding.

Therefore, if anyone is also trying to solve problems round by round, you are welcome to discuss them with me, especially the problems that I have submitted and got accepted (after all, I don't know how to solve the ones I have never submitted...)

I would like to share the ideas and techniques involved in any problem that I have solved. I hope everyone can reach 'red' someday :D


By SummerSky, 6 years ago, In English

215A - Bicycle Chain

Exhaustive enumeration.

215B - Olympic Medal

According to the definition, we have (r1^2 - r2^2) * p1 / (r2^2 * p2) = A, which gives r2 = r1 * sqrt(p1 / (p1 + A * p2)). Thus, we should find the maximum r1 and p1, and the minimum p2.
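
In code, the resulting formula (derived from the mass ratio A = m_out / m_in of an outer ring of density p1 around an inner disk of density p2; the helper name is mine) is simply:

```cpp
#include <cmath>

// Solve (r1^2 - r2^2) * p1 = A * r2^2 * p2 for the inner radius r2.
// r2 grows with r1 and p1 and shrinks with p2, hence the greedy choice above.
double innerRadius(double r1, double p1, double p2, double A) {
    return r1 * std::sqrt(p1 / (p1 + A * p2));
}
```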

215C - Crosses

At first, note that the rectangles can only have sizes (2a + 1) × (2b + 1) and (2c + 1) × (2d + 1), where a, b, c, d start from zero. Then, there are two cases that we should take into consideration.

1) One rectangle is completely contained in the other one. Without loss of generality, we assume that a ≥ c and b ≥ d. For a rectangle of size (2a + 1) × (2b + 1), there are (n - (2a + 1) + 1) × (m - (2b + 1) + 1) different positions to put it on the board. The inner rectangle can have (a + 1) × (b + 1) different sizes. As we can swap the two rectangles, there are in fact 2 × (a + 1) × (b + 1) feasible answers. Furthermore, when a = c and b = d, the "swap" gives the same answer, so we have 2 × (a + 1) × (b + 1) - 1 different answers. Taking the positions into consideration, the final answer is (n - (2a + 1) + 1) × (m - (2b + 1) + 1) × (2 × (a + 1) × (b + 1) - 1).

2) The two rectangles form a "crossing". We enumerate a, b, c and calculate d from the target area size s. The remaining work is similar to case 1).

215D - Hot Days

The main idea is a greedy algorithm, and as each interval is independent, we only need to focus on a single interval.

1) ti < Ti: At first, note that if there is more than one bus satisfying mj + ti > Ti, where mj denotes the number of students in the j-th bus, we can always achieve a better result by moving all the "extra" students into one single bus. For instance, if there are two buses with mj + ti > Ti and mk + ti > Ti, the compensation fees are xi(mj + mk), while if we move mk + ti - Ti students to the j-th bus, the fees are reduced to xi(mj + mk + ti - Ti). Therefore, there is either exactly one bus with mj + ti > Ti or no such bus at all. We can compute the maximum Y satisfying m > Y(Ti - ti); Y means that if the number of buses is at most Y, some bus must be "hot", so every bus should take Ti - ti students except one single bus taking all the remaining students. For any y ≤ Y, the total cost can be written as (m - (y - 1)(Ti - ti))xi + y × costi, which is linear in y. Thus, its minimum is obtained either at y = 1 or y = Y. For any y ≥ Y + 1, the total cost is y × costi, whose minimum is (Y + 1)costi. Therefore, the global minimum value is min(m × xi + costi, (m - (Y - 1)(Ti - ti))xi + Y × costi, (Y + 1)costi).
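
My reading of this case analysis can be condensed into a small function (a sketch with hypothetical parameter names; here ymin corresponds to Y + 1 in the notation above, so the candidates checked are y = 1, y = Y and y = Y + 1):

```cpp
#include <algorithm>

// Minimum cost of one region: y buses cost y * cost; any "hot" bus
// (more than T - t students) incurs x per student aboard it.
long long regionCost(long long m, long long t, long long T, long long x, long long cost) {
    if (t >= T) return m * x + cost;            // everyone is hot anyway; one bus suffices
    long long cap = T - t;                      // students a bus may carry without heat
    long long ymin = (m + cap - 1) / cap;       // fewest buses with nobody hot (= Y + 1)
    long long best = ymin * cost;
    if (ymin > 1) {
        best = std::min(best, m * x + cost);    // y = 1: a single, overloaded bus
        // y = ymin - 1 (= Y): ymin - 2 full "cool" buses plus one hot bus
        best = std::min(best, (m - (ymin - 2) * cap) * x + (ymin - 1) * cost);
    }
    return best;
}
```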

2) ti ≥ Ti: in this case, the total cost is always m × xi + y × costi, and thus the minimum value is m × xi + costi.

215E - Periodical Numbers

The main framework is the "prefix" idea, i.e., we define a function f(x) that gives the number of feasible integers less than or equal to x. The required answer is then f(r) - f(l - 1).

Suppose that we write x in binary form, with length len. For instance, len = 3 when x = 7. Then, every feasible answer must have a binary length ilen ≤ len, which gives the following two cases.

1) ilen < len: any periodic integer y of this length satisfies the requirements, and thus contributes a feasible answer, since y < x always holds. For a given ilen, we consider all its divisors. For a divisor d, the binary sequence should have a period of length d. As the leading digit is always 1, we have 2^(d-1) such integers. However, for any divisor d' > d of ilen with d' a multiple of d (i.e., ilen % d' = 0 and d' % d = 0), all these 2^(d-1) sequences are counted again, since any sequence with period d also has period d'. Therefore, we should subtract these 2^(d-1) sequences when we calculate for d'.

2) ilen = len: different from case 1), only the periodic integers less than or equal to x should be counted. We still consider each divisor d of ilen and take the leftmost d binary digits of x, written as a1a2...ad, where each ai is 0 or 1. Note that a1 = 1 always holds, so every smaller d-digit prefix with leading 1 yields a feasible answer when repeated; there are v of them, where v is the value of the binary string a2a3...ad. Be careful to check whether the prefix a1a2...ad itself serves as an extra answer: we simply repeat a1a2...ad to length ilen and check whether the result is less than or equal to x. If yes, we in fact have v + 1 feasible answers. As in case 1), we should subtract this count (v or v + 1) when we compute for a multiple d'.


By SummerSky, 7 years ago, In English

214A - System of Equations

Enumerate all the feasible values of a and b.

214B - Hometask

The target integers must satisfy the following two conditions: the last digit is zero, and the sum of digits is a multiple of three.

Therefore, there must be at least one digit 0. Then, we calculate the sum of all the digits and its remainder modulo three. If the sum is a multiple of three (the remainder is zero), then all digits should be output in decreasing order. If the remainder is one, we first try to eliminate the single smallest digit with remainder one, i.e., one of 1, 4, 7; if there is no such digit, we eliminate the two smallest digits with remainder two, i.e., from among 2, 5, 8. Note that at least one of these two options must succeed, since otherwise the digit sum could not have remainder one. The case where the remainder is two is handled in a similar manner.
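
A sketch of this greedy (digit handling only; the helper name is mine, and an empty string means no valid number exists):

```cpp
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

// Build the largest multiple of 30 from the given digits: require a 0, sort
// descending, then drop the cheapest digits needed to make the sum divisible by 3.
string maxMultipleOf30(vector<int> digits) {
    if (find(digits.begin(), digits.end(), 0) == digits.end()) return "";
    sort(digits.rbegin(), digits.rend());               // descending order
    int rem = 0;
    for (int d : digits) rem = (rem + d) % 3;
    auto dropOne = [&](int r) {                          // drop smallest digit with digit%3 == r
        for (int i = (int)digits.size() - 1; i >= 0; i--)
            if (digits[i] % 3 == r) { digits.erase(digits.begin() + i); return true; }
        return false;
    };
    if (rem != 0) {
        if (!dropOne(rem)) {                             // else drop two complementary digits
            if (!dropOne(3 - rem) || !dropOne(3 - rem)) return "";
        }
    }
    if (digits[0] == 0) return "0";                      // only zeros remain
    string s;
    for (int d : digits) s += char('0' + d);
    return s;
}
```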

214C - Game

The main idea is a greedy algorithm. We can divide the "optimal answer" into three cases according to which computer we start with. Thus, we try starting with each of the three computers, and the optimal answer must be among the results.

For each trial, the remaining work resembles topological sorting. We keep completing jobs at the current computer until none remain, and then move to the next computer in the order 1 → 2 → 3 → 1 → 2... as the first choice; if this is infeasible, we move in the order 3 → 2 → 1 → 3 → 2....

214D - Numbers

I learned this from the tutorial; the main idea is to use dp[len][d] to denote the number of ways to build an integer of length len using digits d, d + 1, ..., 9. One can check the recursive formula shown there, which is quite clear and intuitive.

214E - Relay Race

Notice that the original problem is equivalent to the following: we start from the upper-left cell and select two paths to the bottom-right cell while maximizing the total points. In other words, we can imagine two players starting at the upper-left cell and moving to the bottom-right cell at the same time.

If we denote the current positions of player 1 and player 2 as (x1, y1) and (x2, y2), then after one step, player 1 will be at either (x1 + 1, y1) or (x1, y1 + 1). Note that in both cases the sum becomes x1 + y1 + 1, and player 2 behaves similarly. Therefore, we can use dp with a state tuple (d, x, y): dp[d][x][y] denotes the maximum number of points obtainable when both players are on the d-th anti-diagonal, with player 1 at the x-th cell and player 2 at the y-th cell of that diagonal. The recursive formula is straightforward, but be careful that it differs slightly between d ≤ n and d > n.
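
A compact sketch of this dp, rolling over the diagonals (the indices and names are my own; a cell shared by both players is counted once, and the d ≤ n / d > n asymmetry is handled implicitly by the index ranges):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Two walkers move from (0,0) to (n-1,n-1) of a square grid g.
// dp[x][y]: best total when walker 1 is in row x and walker 2 in row y of
// the current anti-diagonal d (their columns are then d-x and d-y).
long long twoPathMaxSum(const vector<vector<long long>>& g) {
    int n = g.size();
    const long long NEG = -1e18;
    vector<vector<long long>> dp(n, vector<long long>(n, NEG));
    dp[0][0] = g[0][0];
    for (int d = 1; d <= 2 * n - 2; d++) {
        vector<vector<long long>> ndp(n, vector<long long>(n, NEG));
        for (int x = max(0, d - n + 1); x <= min(d, n - 1); x++)
            for (int y = max(0, d - n + 1); y <= min(d, n - 1); y++) {
                long long best = NEG;
                for (int dx = 0; dx <= 1; dx++)      // 1 = this walker moved down
                    for (int dy = 0; dy <= 1; dy++) {
                        int px = x - dx, py = y - dy;        // rows on diagonal d-1
                        if (px < 0 || py < 0 || px > d - 1 || py > d - 1) continue;
                        if (d - 1 - px >= n || d - 1 - py >= n) continue;
                        best = max(best, dp[px][py]);
                    }
                if (best == NEG) continue;
                long long add = g[x][d - x] + (x == y ? 0 : g[y][d - y]);
                ndp[x][y] = best + add;
            }
        dp = ndp;
    }
    return dp[n - 1][n - 1];
}
```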


By SummerSky, 7 years ago, In English

208A - Dubstep

A straightforward string implementation problem.

208B - Solitaire

The main idea is dfs with memoization, which can also be viewed as an exhaustive tree search with pruning.

We can use dp[a][b][c][d] to denote the state, where a is the number of remaining cards and b, c, d encode the "results" of the rightmost three cards, i.e., their values and suits. Note that these four dimensions are sufficient to define a state: no matter how we reach the state, all a - 3 cards other than the current rightmost three keep their initial values and suits. Therefore, given a, b, c, d, the remaining game always leads to the same result. Based on this, we run dfs to compute the final answer, memoizing the results of intermediate states and returning them directly whenever a state is visited again.

208C - Police Station

The framework is the Floyd algorithm. In the conventional Floyd algorithm, we calculate the minimum distance dis[u][v]. As an extra "bonus", we can also compute cnt[u][v], the number of different paths that achieve this minimum distance. When an intermediate node k is checked to update dis[u][v]: if dis[u][v] > dis[u][k] + dis[k][v], we set dis[u][v] = dis[u][k] + dis[k][v] and cnt[u][v] = cnt[u][k] × cnt[k][v]; if dis[u][v] = dis[u][k] + dis[k][v], we set cnt[u][v] = cnt[u][v] + cnt[u][k] × cnt[k][v]. The term cnt[u][k] × cnt[k][v] means that we have cnt[u][k] ways to reach node k and cnt[k][v] ways to leave node k, a straightforward combination.

Finally, we check all intermediate nodes k other than node 1 and node n such that dis[1][n] = dis[1][k] + dis[k][n], and maximize 2 × cnt[1][k] × cnt[k][n] / cnt[1][n]. Note that for k = 1 or n, the value is always cnt[1][1] × cnt[1][n] / cnt[1][n] = 1. Intermediate nodes get a coefficient 2, since an intermediate node contributes two edges to every shortest path passing through it.
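
The counting part can be sketched as follows (adjacency-matrix style; diagonal entries are dis = 0 and cnt = 1, and k = u or k = v is skipped so the diagonal does not double-count):

```cpp
#include <vector>
using namespace std;

const long long INF = 1e18;

// Floyd-Warshall extended with shortest-path counting. Inputs: dis[u][v] is
// the edge weight (INF if absent, 0 on the diagonal); cnt[u][v] is 1 for an
// edge or diagonal entry and 0 otherwise. Afterwards cnt[u][v] is the number
// of distinct shortest u -> v paths.
void floydWithCount(vector<vector<long long>>& dis, vector<vector<long long>>& cnt) {
    int n = dis.size();
    for (int k = 0; k < n; k++)
        for (int u = 0; u < n; u++)
            for (int v = 0; v < n; v++) {
                if (u == k || v == k) continue;
                if (dis[u][k] == INF || dis[k][v] == INF) continue;
                long long through = dis[u][k] + dis[k][v];
                if (through < dis[u][v]) {
                    dis[u][v] = through;
                    cnt[u][v] = cnt[u][k] * cnt[k][v];   // ways in x ways out
                } else if (through == dis[u][v]) {
                    cnt[u][v] += cnt[u][k] * cnt[k][v];
                }
            }
}
```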

208D - Prizes, Prizes, more Prizes

Straightforward implementation.

208E - Blood Cousins

I think this is a very nice problem for one to gain more understanding about what dfs can do.

If one is familiar with the classical LCA (lowest common ancestor) problem, recall that there is a very clever algorithm based on the "timestamp" idea. For this problem, we run a dfs and maintain several arrays storing important information: depth[u], the depth of node u; timestamp[u], the time at which node u is visited during the dfs; revtimestamp[t], the node visited at time t; lefttimeborder[u] and righttimeborder[u], the earliest and latest timestamps within the subtree of u; and samedepth[d], the timestamps of all nodes at depth d.

Next, we should solve two subproblems.

The first one is to find the p-th ancestor of a given node v. Note that the target node must be included in samedepth[depth[v] - p], or more precisely, its timestamp is included there. Moreover, the timestamp of the p-th ancestor is the largest one that is smaller than timestamp[v], since an ancestor is always entered before its descendants. Thus, we can use binary search to find its timestamp and then use revtimestamp to get its node index, denoted as pu.

The second subproblem is to find the number of nodes that share this p-th ancestor. These nodes must be included in samedepth[depth[v]], and their timestamps must fall into the interval [lefttimeborder[pu], righttimeborder[pu]]. Therefore, we can use binary search again to find the answer.
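
Both subproblems rest on the same machinery, which can be sketched like this (iterative dfs; the struct and names are mine):

```cpp
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

// Entry time tin[u], exit time tout[u] (exclusive), and byDepth[d]: the tin
// values of all depth-d nodes in increasing order. Depth-d nodes inside the
// subtree of u are exactly the tin values in [tin[u], tout[u] - 1].
struct TimestampTree {
    vector<int> tin, tout, depth;
    vector<vector<int>> byDepth;
    TimestampTree(const vector<vector<int>>& children, int root) {
        int n = children.size(), timer = 0;
        tin.assign(n, 0); tout.assign(n, 0); depth.assign(n, 0);
        byDepth.assign(n, {});
        vector<pair<int, int>> st = {{root, 0}};      // (node, next child index)
        tin[root] = timer++;
        byDepth[0].push_back(tin[root]);
        while (!st.empty()) {
            auto& [u, i] = st.back();
            if (i < (int)children[u].size()) {
                int v = children[u][i++];
                depth[v] = depth[u] + 1;
                tin[v] = timer++;
                byDepth[depth[v]].push_back(tin[v]);
                st.push_back({v, 0});
            } else {
                tout[u] = timer;
                st.pop_back();
            }
        }
    }
    // number of depth-d nodes inside the subtree of u, via binary search
    int countAtDepth(int u, int d) {
        const auto& v = byDepth[d];
        return upper_bound(v.begin(), v.end(), tout[u] - 1)
             - lower_bound(v.begin(), v.end(), tin[u]);
    }
};
```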

In summary, this is an excellent problem which sheds light on the power of dfs and the timestamp idea.


By SummerSky, 7 years ago, In English

205A - Little Elephant and Rozdil

The key point is to determine whether the minimum value occurs multiple times.

205B - Little Elephant and Sorting

If a[i] ≤ a[i + 1], we do nothing; otherwise, we should increase all a[j], j ≥ i + 1, together until a[i + 1] equals a[i], which contributes a[i] - a[i + 1] to the final answer. Meanwhile, we don't need to actually perform the increasing operations: only the difference between neighboring elements matters, and this difference does not change even if we really increase the values.
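
In code this observation collapses to a single pass (the name is illustrative):

```cpp
#include <vector>
using namespace std;

// Only pairwise differences matter: each descent a[i] > a[i+1] costs a[i] - a[i+1].
long long minMoves(const vector<long long>& a) {
    long long ans = 0;
    for (size_t i = 0; i + 1 < a.size(); i++)
        if (a[i] > a[i + 1]) ans += a[i] - a[i + 1];
    return ans;
}
```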

205C - Little Elephant and Interval

A general idea behind this problem is that when asked to answer a query on an interval [l, r], it is often more convenient to build a function f(x) that gives the result for [1, x]; the answer to the query is then f(r) - f(l - 1), just like the prefix-sum idea.
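
As an illustration of the prefix idea on this problem (counting numbers in [1, x] whose first digit equals the last digit; the closed form is my own derivation, so treat it as a sketch):

```cpp
// f(x) = how many numbers in [1, x] have equal first and last digits;
// a query [l, r] is then answered as f(r) - f(l - 1).
long long countFirstEqLast(long long x) {
    if (x <= 0) return 0;
    if (x < 10) return x;                 // 1..9 all qualify
    long long p = x / 10, fp = p;
    while (fp >= 10) fp /= 10;            // first digit of the prefix p
    // 9 one-digit answers; one answer per prefix 1..p-1; and prefix p itself
    // works when appending its first digit does not exceed x
    return 9 + (p - 1) + (fp <= x % 10 ? 1 : 0);
}
```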

205D - Little Elephant and Cards

Note that each card contributes at most two different colors, so there are at most 2n colors. This implies that at most four colors can have a total count of at least n / 2. Therefore, the idea is summarized as follows.

At first, we compress the colors into the range [1, 2·10^5]. Then, we use a "hash table" to count the number of cards of each color (note that if a card has the same color on both sides, we increase the corresponding counter only once, not twice). Next, we find all the colors whose counters are at least n / 2 (at most four of them). Finally, we try each of these colors and calculate the minimum number of flips that we need.

205E - Little Elephant and Furik and Rubik

We first give a simple example to reveal the key idea behind this problem. Suppose that we have two substrings "ABC" and "ADC". According to the rules, we obtain two points, since both "A" and "C" occupy the same positions. From another perspective, we can redistribute the two points to "A" and "C" respectively: letter "A" contributes one point and letter "C" contributes one point.

From the above arguments, we can calculate the contribution of each letter "A", "B", ..., "Z" independently and then add them together, which also gives the correct answer. We take letter "A" as a simple example.

Suppose that letter "A" appears at position i in the first string and position j in the second string. Then, there are min(i + 1, j + 1) × min(n - i, n - j) substring pairs that have "A" aligned at the same position, and they contribute min(i + 1, j + 1) × min(n - i, n - j) points to the final answer (indices start from zero).

Next, we store the positions at which "A" appears in the first and second string in arrays a and b, of lengths la and lb, respectively. A trivial solution is to enumerate all pairs and compute min(i + 1, j + 1) × min(n - i, n - j). However, the trick to reduce the complexity is that for any j > i, min(i + 1, j + 1) × min(n - i, n - j) is always equal to (i + 1) × (n - j). To take advantage of this observation, we adopt the two-pointers idea, with p1 and p2 pointing into arrays a and b, respectively. As long as b[p2] ≤ a[p1], we move p2 forward, and it contributes (b[p2] + 1) × suffixa[p1] to the final answer, where suffixa[p1] = (n - a[p1]) + (n - a[p1 + 1]) + ... + (n - a[la - 1]) can be viewed as a suffix sum. When we reach b[p2] > a[p1], an extra value of (a[p1] + 1) × suffixb[p2] is added to the final answer, where suffixb[p2] = (n - b[p2]) + ... + (n - b[lb - 1]). Then, we move p1 forward until b[p2] ≤ a[p1] again, and each such move contributes (a[p1] + 1) × suffixb[p2] to the final answer.


By SummerSky, 7 years ago, In English

203A - Two Problems

A simple simulation problem, but be careful that one could solve 0, 1, or 2 problems.

203B - Game on Paper

Once a square becomes black, we check all the 3 × 3 areas containing it (there are in fact 9 such areas) to see whether any of them is completely black. The complexity is O(M).

203C - Photographer

A straightforward greedy algorithm solves this problem.

203D - Hit Ball

We can consider each of the three dimensions independently. At first, we calculate when the ball hits the door according to vy. Then, we calculate the total distance that it travels along the x and y dimensions, respectively, assuming that no reflection occurs. It can be observed that for the x dimension, the distance has a cycle of length 2a, while for the y dimension, the cycle has length 2b. Therefore, we "eliminate" a whole number of cycles from the total distance, and it is then simple to determine the final position from the "residual" distance.
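
The cycle-elimination step can be sketched as a small folding helper (the name is mine; walls at 0 and a, perfect reflections):

```cpp
#include <cmath>

// "Unfolding" reflections: moving a total distance dist from coordinate start
// between walls at 0 and a is periodic with period 2a, so fold the residual
// distance back into [0, a].
double fold(double start, double dist, double a) {
    double r = std::fmod(start + dist, 2 * a);
    if (r < 0) r += 2 * a;          // fmod keeps the sign of its argument
    return r <= a ? r : 2 * a - r;
}
```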


By SummerSky, 7 years ago, In English

202A - LLPS

It is sufficient to output all occurrences of the alphabetically largest letter. The reason is that if the optimal palindromic subsequence contained several different letters, we could delete all letters other than the largest one and obtain a better palindrome.
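
The whole solution fits in two lines (the helper name is mine):

```cpp
#include <algorithm>
#include <string>
using namespace std;

// All copies of the largest letter already form the lexicographically
// largest palindromic subsequence.
string llps(const string& s) {
    char mx = *max_element(s.begin(), s.end());
    return string(count(s.begin(), s.end(), mx), mx);
}
```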

202B - Brand New Easy Problem

The solution is straightforward: we consider all the n! permutation patterns and calculate the similarity for each.

202C - Clear Symmetry

Inspired by the tutorial, it suffices to consider only odd n.

For odd n, we divide the whole figure into several areas. Let m = (n - 1) / 2, and suppose that we draw a horizontal line crossing the (m + 1)-th row and a vertical line crossing the (m + 1)-th column. One can see that if there are a black squares in rows 1..m and columns 1..m, then by symmetry there are 4a such black squares in total. If there are b1 black squares in row m + 1 and columns 1..m, then there are 2b1 black squares in this row. Similarly, if there are b2 black squares in column m + 1 and rows 1..m, then there are 2b2 black squares in this column. Finally, there may be d = 0 or d = 1 black squares in the central cell at row m + 1, column m + 1.

Next, we discuss based on the following two cases.

1) m is even: one can see that we can have any 4a + 2b1 + 2b2 + d black squares, where a = 0, 1, ..., m / 2, b1, b2 = 0, 1, ..., m / 2 and d = 0, 1;

2) m is odd: if d = 1, then for b1, b2 = 0, 1, ..., (m - 1) / 2, we can obtain any 4a + 2b1 + 2b2 + 1 black squares; if d = 0, then for b1, b2 = 0, 1, ..., (m + 1) / 2, we can obtain any 4a + 2b1 + 2b2 black squares, and likewise for b1, b2 = 0, 1, ..., (m - 1) / 2.

Finally, for the given x, we enumerate n from 1 upward until there exists at least one solution to 4a + 2b1 + 2b2 + d = x according to the above cases.

202D - Guess That Car!

The trick lies in the formula c × d^2. Note that d^2 = dx^2 + dy^2, and thus the X and Y dimensions are independent if we treat dx^2 and dy^2 as separate terms, which resembles the Manhattan distance. Thus, we can determine the optimal X and Y separately; for simplicity, we only discuss how to calculate the optimal X (Y is similar).

First, for each column, we compute the sum of the elements over all rows. Next, we enumerate each feasible position, calculate its cost, and finally select the optimal one as position X.

202E - Fragile Bridges

The main idea is dp. For each node i, we use leftnoback[i] to denote the maximum points we can obtain if we start from node i and move left without returning. In contrast, leftback[i] denotes the maximum points we can obtain if we move left and return back to node i (similar definitions, rightnoback[i] and rightback[i], apply for moving right).

Then, we enumerate each position i again as the starting node, and we have the following feasible operations:

1) move to left and do not return to node i. This gives a value leftnoback[i];

2) move to left, return back to node i, then move to right, returning or not. This gives leftback[i] + max(rightnoback[i], rightback[i]);

3) move to right and do not return to node i. This gives a value rightnoback[i];

4) move to right, return back to node i, then move to left, returning or not. This gives rightback[i] + max(leftback[i], leftnoback[i]).

From all the above cases, we find out the globally maximum value.

Here are the details of how to compute leftback[i] and leftnoback[i] (the "right" versions are similar).

We use x to denote the initial points of the bridge between node i and its left neighbor i - 1 (each crossing scores one point and uses up one point of durability). If x = 1, then leftback[i] = 0, since we can never return back once we go to node i - 1. If x > 1, then leftback[i] = leftback[i - 1] + 2·(x / 2), where "/" is integer division, since we can make x / 2 round trips over this bridge. For leftnoback[i], we have leftnoback[i] = 1 + 2·((x - 1) / 2) + max(leftback[i - 1], leftnoback[i - 1]).
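
A sketch of the "left" dp exactly as described, assuming each crossing scores one point (array names follow the text):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// x[i] = durability of the bridge between platforms i-1 and i (x[0] unused).
// leftback[i]:   best score going left from i and returning to i
// leftnoback[i]: best score going left from i without returning
void leftDp(const vector<long long>& x,
            vector<long long>& leftback, vector<long long>& leftnoback) {
    int n = x.size();
    leftback.assign(n, 0);
    leftnoback.assign(n, 0);
    for (int i = 1; i < n; i++) {
        if (x[i] > 1)                       // need >= 2 crossings to go and come back
            leftback[i] = leftback[i - 1] + x[i] / 2 * 2;
        // cross an odd number of times <= x[i], then continue from i-1
        leftnoback[i] = 1 + (x[i] - 1) / 2 * 2
                      + max(leftback[i - 1], leftnoback[i - 1]);
    }
}
```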


By SummerSky, 7 years ago, In English

200B - Drinks

Straightforward implementation.

200C - Football Championship

Instead of figuring out an explicit formula for the required answer, we can simply use two loops for an exhaustive search.

The first loop enumerates the value d = X - Y, from d = 0 to d = 100 (100 is large enough to cover all cases), and the second loop enumerates Y = 0, 1, ..., 100. In the inner loop, we compute the scores, goals scored and goals conceded for each team, and sort the teams by the given rules. Once we find that the team is in the top 2, we can immediately output the current Y and X = d + Y as the answer.

200D - Programming Language

The main idea is exhaustive enumeration; be careful when dealing with the strings.


By SummerSky, 7 years ago, In English

199A - Hexadecimal's theorem

We use f[n] to denote the n-th Fibonacci number, and thus f[n] = f[n - 1] + f[n - 2] = f[n - 2] + f[n - 3] + f[n - 2], a sum of three Fibonacci numbers.
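
A tiny check of this identity (f[n] = 2·f[n - 2] + f[n - 3]):

```cpp
// Standard Fibonacci with fib(0) = 0, fib(1) = 1.
long long fib(int n) {
    long long a = 0, b = 1;
    while (n-- > 0) { long long c = a + b; a = b; b = c; }
    return a;
}
```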

199B - Special Olympics

There are four circles, and we check each of them to see whether it forms a "full" circle or not. We use r to denote the circle currently under consideration, and R1 and R2 (with R1 < R2) to denote the two other circles that form a ring. Then, circle r forms a "full" one if any of the following three conditions is satisfied.

1) r is completely outside R2;

2) r is completely inside R1;

3) R2 is completely inside r.

199C - About Bacteria

One solution is to write out f(n, x) explicitly, construct an inequality from it, and compute the answer.

Another solution: suppose we substitute x = 1 into the function and calculate ai = f(i, 1). Then a0, a1, a2, ..., an forms an increasing sequence with ai+1 = k·ai + b. Assume that ai ≤ t < ai+1; then we need n - i steps to obtain a value that is no less than an = z. The reason is simple, since the function is increasing. For an intuitive understanding, we can generate another sequence b0, b1, ..., bm, where b0 = t and bi+1 = k·bi + b, and one can check that ai ≤ b0 < ai+1 ≤ b1 < ai+2 ≤ b2, and so on. Therefore, we only need to check which segment t falls into.
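
This second argument can be simulated directly (a sketch; under the real constraints the product val * k needs overflow care):

```cpp
// Grow a_i = f(i, 1) starting from one bacterium; the answer is n minus the
// number of steps whose value is still <= t.
long long stepsNeeded(long long k, long long b, long long n, long long t) {
    long long val = 1, i = 0;
    while (i < n && val * k + b <= t) { val = val * k + b; i++; }
    return n - i;
}
```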

199D - Jumping on Walls

This is in fact a shortest path problem. We can simply adopt the SPFA (shortest path faster algorithm) framework and calculate the minimum distance (in fact, the minimum time) from the starting point to every other position. Be careful that when we update the minimum distance, we should take the current "water level" into account as an extra condition, i.e., we can only update the minimum distance if the corresponding positions are still safe. The player escapes successfully if he can reach any position higher than n.

199E - Delivering Carcinogen

The main idea is to binary search on the time, and the remaining work is to determine whether one player can catch the other within the time t that we are currently testing.

We first compute, for player 1, the position that he can reach after time t. Then, we check whether the second player can reach that position in time, which leads to the following problem: given two points A and B strictly outside a circle, what is the shortest distance between them that avoids the circle?

If the straight segment connecting them has no common points with the circle, then the distance is just the length of this segment. Otherwise, we draw the tangent lines from A and B to the circle, with tangent points a and b, respectively. Then, the answer is the length of Aa, plus the length of Bb, plus the length of the arc between a and b.


By SummerSky, 7 years ago, In English

197A - Plate Game

The trick is that if the first player can put down a circle at all, he wins. The strategy is to put the first circle exactly at the center, and as long as the second player can place a circle, the first player simply places another one at the centrally symmetric position. Thus, the problem reduces to determining whether the first circle can be placed on the table or not.

197B - Limit

For the numerator P(x) = a0·x^n + a1·x^(n-1) + ... + an, it can be written as x^n·(a0 + a1/x + ... + an/x^n) (the denominator Q(x) is similar). Thus, as x → ∞, P(x)/Q(x) behaves like (a0/b0)·x^(n-m), and we can further reduce the term x^(n-m) based on the sign of n - m.

197C - Lexicographically Maximum Subsequence

We should first select all the "z". Denoting the position of the last "z" as iz, we should further select all the "y" with indices larger than iz. Supposing that the last "y" is at position iy, we then select all the "x" with indices larger than iy. This process is repeated until we reach the end of the given string. If some letter does not appear in the remaining part, we simply skip it.
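
Equivalently, scanning from the right and keeping every character that is at least the maximum seen so far produces the same subsequence:

```cpp
#include <string>
using namespace std;

// Keep suffix maxima: a character belongs to the answer iff nothing
// strictly larger appears to its right.
string maxSubsequence(const string& s) {
    string rev;
    char best = 0;
    for (int i = (int)s.size() - 1; i >= 0; i--)
        if (s[i] >= best) { best = s[i]; rev += s[i]; }
    return string(rev.rbegin(), rev.rend());
}
```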

197D - Infinite Maze

The main observation is that, starting from "S", the answer is yes if and only if we can reach two different points (x1, y1) and (x2, y2) with x1 % n = x2 % n and y1 % m = y2 % m (see the proof in the tutorial).

I can only prove the "if" part. We can construct a scheme to go infinitely far away: draw a line from "S" to (x1, y1) and from "S" to (x2, y2). Although these may not be straight lines (in fact they are paths), we can still treat them as "straight", so they form an angle. Next, (x2, y2) also serves as the (x1, y1) of some other "S*", and "S*" has its own (x2, y2). Repeating this step, the copies form a pattern that extends infinitely.

As for the implementation, we can just use simple bfs and adopt some extra indicators to tell to which board the current (x1, y1) belongs.

196E - Opening Portals

I follow the tutorials.

At first, we run a “parallel” Dijkstra-like algorithm to find, for each node, the distance to its nearest portal node. This algorithm can be implemented based on SPFA (shortest path faster algorithm; one can find a large number of materials on the internet). Specifically, we use dis[i] to denote the distance from node i to its nearest portal node, and use pre[i] to denote this nearest portal node. These two arrays should be updated within the SPFA framework.

Next, we try to construct a new graph which only describes the “distance” among portal nodes. We can enumerate each edge given in the original graph, and for each edge connecting two nodes u and v, we find the corresponding two portal nodes pre[u] and pre[v]. We add a virtual edge between pre[u] and pre[v], with weight of dis[u] + w + dis[v], where w denotes the weight of edge between u and v. After constructing the new graph, we can run Kruskal's algorithm to find the MST.

However, I still have two questions...

1) How to prove that the shortest path (virtual path) between any two portal nodes is always included in the new graph? We only know that for some two portal nodes u and v, the shortest path must go through some edge in the original graph. This means that if we want to find this shortest path, there must exist at least one edge having u and v as the nearest portal nodes of its endpoints, but how to guarantee this...

2) I think this is a “steiner tree” problem (see 152E - Garden), and if I am correct, this is NP-hard and it is weird to have a polynomial solution. Where am I wrong...


By SummerSky, 7 years ago, In English

195A - Let's Watch Football

We can solve this by building an x-y coordinate system, where the horizontal coordinate denotes the time and the vertical one denotes the quantity that we have downloaded or watched. One can simply draw two lines to describe the “download behavior” and the “watch behavior”. We should find the minimum t so that the “download behavior” line is always above the other line.

195B - After Training

A straightforward implementation problem.

195C - Try and Catch

The main framework is to adopt a “stack” to match every “try” with the correct “catch” (just like the classical “bracket matching problem”). Then, we find the “nearest” try-catch pair that has the same “exception name”. Be careful with the string parsing.

195D - Analyzing Polyline

First note that we can omit any y = kx + b with k = 0, as they have no effect on the result (they just move the whole polyline in the vertical direction). Then, we deal with each function according to the sign of k, i.e., we divide them into two types, one containing positive k while the other containing negative k.

Next, we implement a “duplicate elements removing” operation, for positive k and negative k, respectively. Suppose that we have two functions y = k1x + b1 and y = k2x + b2, and they satisfy k2 = c × k1, b2 = c × b1, where c is a positive constant. This means that these two functions have the same “turning point”, and we can add them together to form a new single function y = (k1 + k2)x + (b1 + b2) while having no effect on the final result.

After the above operations, we can count the number of different “turning points”, and this is just the final answer.
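Assuming the polyline is the sum of terms max(k·x + b, 0) as in the original statement, the slope change at every root -b/k is strictly positive, so each distinct root yields one angle; counting distinct roots as exact fractions (which also merges the proportional functions described above, since they share the same root) can be sketched as:

```python
from fractions import Fraction

def count_angles(lines):
    # lines: iterable of (k, b) pairs; functions with k = 0 are ignored
    roots = {Fraction(-b, k) for k, b in lines if k != 0}
    return len(roots)
```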

195E - Building Forest

The main idea is disjoint set union (dsu), with path compression (I am not sure of this name, but the target is to avoid the degradation from a tree to a single link with quite large depth). One can search this on the internet.

For this problem, we should also maintain an extra array to store the distance from some node u to its current root node v. One can check the codes shown in the tutorials.


By SummerSky, 7 years ago, In English

194A - Exams

Suppose that we have m exams with value 2. Then, we have 2m + 3(n - m) ≤ k ≤ 2m + 5(n - m). As the problem guarantees that there is at least one reasonable answer, we have m ≥ 3n - k. Therefore, the answer should be max(3n - k, 0).

An intuitive understanding of the above result is that we can first assign 3 to all the n exams. If 3n ≤ k, it means that we can distribute the extra k - 3n among some exams and we have zero 2s. On the other hand, if 3n > k, we have to “set” at least 3n - k exams from 3 down to 2.
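The resulting one-liner (a trivial sketch):

```python
def min_twos(n: int, k: int) -> int:
    # Give every exam a 3 first; each unit of shortfall forces one 2.
    return max(3 * n - k, 0)
```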

194B - Square

By some simple induction, one can see that we should find the minimum positive integer k so that k(n + 1) is a multiple of 4n. The result is that k = 4n / gcd(n + 1, 4n).
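The gcd computation, with the divisibility requirement from above, can be sketched as:

```python
from math import gcd

def min_k(n: int) -> int:
    # smallest positive k such that k*(n+1) is divisible by 4*n
    return 4 * n // gcd(n + 1, 4 * n)
```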

194C - Cutting Figure

Let the number of “#” be k. If k ≤ 2, the answer is “impossible” (check the problem description). Otherwise, we check whether there is at least one cut vertex. If yes, the answer is obviously 1, since we can remove the cut vertex. If no, it suffices to remove two points to break the connectivity (this result seems quite intuitive, and one can check the tutorials for a proof). We can use Tarjan's algorithm to find the cut vertices if there are any.

194D - Xor

The trick is that any k consecutive “xor” operations are equivalent to either a single (k is odd) or zero (k is even) “xor” operation.

Thus, we can use dfs to solve this. We build a dfs function with inputs res and flag, where res denotes the total number of operations that we can still implement, and flag denotes whether the last operation was a single (flag = 1) or zero (flag = 0) “xor” operation. Whenever we enter the dfs function, we first set all the remaining res operations as “xor”, calculate the value, and update the maximum answer. Then, we check the value of flag, and if flag = 0, we can implement one single “xor” operation and recursively call this dfs function with res - 1 and flag = 1. No matter what the value of flag is, we should always call dfs once again with res - 1 and flag = 0. Be careful that, as usually done in dfs with backtracking, we should record the “state” before we call dfs again, and restore the state after the call.

194E - Hamming Distance

I learned from the tutorials.

Suppose that we write the first sequence s1[n] in the first row, the second sequence s2[n] in the second row, and so on, which finally forms a matrix (this is straightforward). Each column can be viewed as a sequence of length 4, consisting of 0 and 1. We first give an intuitive example to show how the tutorial works. For a column (1 0 1 1), i.e., s1[i] = 1, s2[i] = 0, s3[i] = 1, s4[i] = 1, after six “xor” operations (taking the row pairs in order (1,2), (1,3), (1,4), (2,3), (2,4), (3,4)), it gives (1 0 0 1 1 0), i.e., s1[i] xor s2[i] = 1, s1[i] xor s3[i] = 0, and so on. Similarly, for a column (0 1 1 0), it gives (1 1 0 0 1 1). Based on the above examples, we can see that a column can be any of 2^4 = 16 different sequences, which correspond to 16 “xor operation sequences”. Thus, as the final matrix has 4 rows and m columns, each column must be one of the 16 sequences, and we can further build another matrix with 6 rows and m columns, where the i-th column is just the “xor operation sequence” of the i-th column in the 4 × m matrix. Next, suppose that the number of occurrences of each “xor operation sequence” in the 6 × m matrix is x1, x2, ..., x16. Then, we can build an equation Ax = b, where A is a 6 × 16 matrix whose columns are the 16 “xor operation sequences”, x is just the vector (x1, x2, ..., x16), and b is a vector of length 6, just the six values given in the problem. Now, the problem is reduced to solving such a linear equation with 16 variables and 6 equations.

Nevertheless, some of these 16 “xor operation sequences” are the same. For instance, one can check that both (0 1 1 0) and (1 0 0 1) give (1 1 0 0 1 1). Therefore, some of the variables can be combined into one single variable. After some simple computation (or writing extra code to accomplish this), the original equation can be further reduced to 7 variables and 6 equations. To solve this, we can enumerate the value of any one of the 7 variables, while solving for the remaining 6 by Gaussian elimination, and find the optimal answer.


By SummerSky, 7 years ago, In English

192A - Funky Numbers

My solution is to first calculate all the integers that have a form of k(k + 1)/2 while not exceeding 10^9. Then, for the given integer n, we can enumerate the first term from the previously obtained results and check whether the difference is also one of the computed integers, which can be implemented with binary search. The total complexity is thus O(NlogN), where N is the number of precomputed values.
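A sketch of this precompute-then-binary-search idea (function name is my own; answers follow the problem's YES/NO convention):

```python
from bisect import bisect_left

def is_funky(n: int, limit: int = 10**9) -> str:
    # all triangular numbers k*(k+1)/2 up to the limit
    tri, k = [], 1
    while k * (k + 1) // 2 <= limit:
        tri.append(k * (k + 1) // 2)
        k += 1
    for t in tri:                      # enumerate the first summand
        if t >= n:
            break
        rest = n - t
        i = bisect_left(tri, rest)     # binary search for the second
        if i < len(tri) and tri[i] == rest:
            return "YES"
    return "NO"
```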

192B - Walking in the Rain

We can test the threshold from small values to large ones. For each threshold, we check whether the first and last elements are both still “accessible” (event-1), and further check whether there are two consecutive elements, except for the first and last ones, that are not “accessible” (event-2). We find the maximum threshold that event-1 is true and event-2 is false.

192C - Dynasty Puzzles

The basic idea is dp. We use dp[i][j] to denote the maximum length that we can obtain while using all the given strings, with starting letter “i” and ending letter “j”. The recursive formula is as follows:

When we meet a new string s, with starting letter “k” and ending letter “j”, we should update all dp[i][j] with i = 0, 1, ..., 25, according to dp[i][j] = max(dp[i][j], dp[i][k] + length(s)), as long as dp[i][k] > 0 (this means that we have found some combination of strings that starts with letter “i” and ends with letter “k”). Besides, string s can also serve as the first string, and thus we should further update dp[k][j] = max(dp[k][j], length(s)).

Finally, we should find out the maximum value of dp[i][i] as the answer.
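A sketch of this 26 × 26 dp; processing the names in the given order keeps the chronology constraint (lowercase input is assumed):

```python
def max_dynasty(names):
    # dp[i][j]: longest chain seen so far that starts with letter i
    # and ends with letter j (0 means impossible)
    dp = [[0] * 26 for _ in range(26)]
    for s in names:
        a, b = ord(s[0]) - 97, ord(s[-1]) - 97
        for i in range(26):
            if dp[i][a] > 0:                      # extend an existing chain
                dp[i][b] = max(dp[i][b], dp[i][a] + len(s))
        dp[a][b] = max(dp[a][b], len(s))          # or start a new chain
    return max(dp[i][i] for i in range(26))
```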

192D - Demonstration

Notice that the last element does not affect the result. We first sort the other n - 1 elements in a decreasing order, and compute the sum of the first k terms as ks. If ks ≤ b, it means that we have no choice but to select the last element. The reason is that they have enough money to reject any choice unless we select the last one.

On the other hand, if ks > b, it implies that we have a chance to obtain a better result. At first, as ks = a[1] + a[2] + ... + a[k] > b (index starts from “1”), it means that we can select any one of their original indices (remember that they have been sorted and thus we should find their “true” indices) as the final result, since b - (a[1] + a[2] + ... + a[j - 1] + a[j + 1] + ... + a[k]) < a[j] holds for any j and our final choice can not be rejected. Besides, we also have a chance to select any one of the other n - k - 1 (the last one is excluded) indices as our final result. It is obvious that we should first try a[1], a[2], ..., a[k - 1] so that the administration has the least money with a value of b - (ks - a[k]) to reject our final choice. Therefore, we can test the other n - k - 1 indices and check whether their cost is larger than b - (ks - a[k]), and if yes, then we can select it.

We keep updating the “best” result according to the above steps and finally output the answer.

192E - Fools and Roads

A classical LCA problem.

At first, we should figure out how to calculate the LCA of any given two nodes with complexity of order log N. There are several classical algorithms to solve this, and as one can find a large number of materials talking about them, we omit the details here. As for me, I used dfs to build a “timestamp array” and implemented RMQ to calculate the LCA.

Next, we use dis[u] to denote the number of times the edges from u to the root node have been visited (quite similar to the prefix-sum idea). For instance, dis[u] = 2 means that every edge from u to the root node has been visited two times.

For each query with given nodes u and v, we should increase dis[u] and dis[v] by one, respectively, while decreasing dis[LCA(u, v)] by two. One can draw a simple graph and check this with paper and pen.


By SummerSky, 7 years ago, In English

190A - Vasya and the Bus

Three cases are sufficient to solve this problem.

1) m = n = 0, the answer is “0 0”;

2) n = 0, m > 0, the answer is “Impossible”;

3) none of above, the answer is n + max(m - n, 0) and n + m - 1.

190B - Surrounded

Without loss of generality, we assume that radius r1 > r2 and the distance between their centers is d. The problem can be solved based on the following cases.

1) the two circles have at least one common point: the answer is r = 0, since the point can be put at any one of the common point(s).

2) one circle is completely outside the other: the answer is r = (d - r1 - r2) / 2.

3) one circle is completely inside the other: the answer is r = (r1 - r2 - d) / 2.

Besides, one should carefully deal with the precision problem. For instance, when we try to check whether d < r1 + r2 holds or not, we should compare (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2) < (r1 + r2) * (r1 + r2), since only integers are involved here, which gives a strictly precise result.

190C - STL

We can enumerate the type (“pair” or “int”) in the given order one by one, while using a “stack” s to maintain the current “state”.

In detail, when we meet a “pair”, we output “pair<” and push an integer 2 onto the stack. The reason is that if this is a reasonable sequence, then “pair” means a new start and it must become “pair<”. We push 2 onto the stack to indicate that we need to wait for another two “types” to complete the current “pair”.

On the other hand, when we meet an “int”, we check whether the “stack” is empty or not. If yes, it means that there must be only one “int” (which also implies that n = 1) and zero “pair”s, and we only output “int” (note that this case might also occur if it is not a reasonable sequence, for instance “pair int int int”, however we have other methods to tell this, which will be mentioned later). If no, then we take out the top integer of the “stack”. If it is 2, it means that the current “int” serves as the first part of the last “pair” which is still not completed. Thus, we should output “int,” and set the top integer to 1, as we have obtained the first part and only need to wait for the second one. If the top integer is 1, it means that the current “int” serves as the second part of the last “pair”. Thus, we should output “int>” and pop out the top integer of the “stack”. Next, notice that the last “pair” has been completed and it may serve as the first part of some other “pair”. Hence, we should further check the top integer of the “stack”, and if it is 1, we output “>” to complete the current “pair” and pop out the top integer. We repeat the above process until the “stack” is empty or the top integer is 2, and if the latter case occurs, we should output “,” and set the top integer to 1.

After finishing the above process, the sequence is reasonable if the following two conditions are fulfilled. The stack is empty and the number of “pair” is strictly one less than that of “int”.
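The stack procedure above can be sketched as follows (returning None for an invalid sequence instead of printing an error message is my own convention):

```python
def build_type(tokens):
    """tokens: list of 'pair'/'int' words; returns the formatted type,
    or None if the sequence is not a valid type."""
    out = []
    need = []        # for each still-open pair: children still missing
    done = False     # a complete type has already been formed
    for w in tokens:
        if done:
            return None              # leftover tokens after a complete type
        if w == "pair":
            out.append("pair<")
            need.append(2)
        else:                        # w == "int"
            out.append("int")
            # an int completes a child; propagate completions upward
            while need:
                need[-1] -= 1
                if need[-1] == 1:
                    out.append(",")  # pair still waits for its second child
                    break
                out.append(">")      # pair just received its second child
                need.pop()
            if not need:
                done = True
    return "".join(out) if done else None
```

The while loop is exactly the “keep closing completed pairs” step described above.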

190D - Non-Secret Cypher

A classical “two pointers” problem. We use one pointer s to point to the starting point while using the second pointer t (t > s) to point to the minimum position so that there exists at least one integer which has appeared exactly k times, within the interval [s, t - 1]. We move these two pointers to find out all such intervals, and for each of them, it contributes n + 1 - t (index starts from zero) to the final answer.

The remaining work is to determine whether some interval [s, t - 1] meets the requirements or not. We can use a segment tree which maintains, over a range of values [l, r], the maximum number of appearances, to accomplish this. Do not forget to compress all the input data to a smaller range so that the segment tree is feasible. When we move s one step forward, we decrease the number of appearances of integer a[s] by one in the segment tree and update the affected nodes. When we move t one step further, we implement a similar update. The value stored in the root node is just the maximum number of appearances of any integer within the current sliding window.
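Since the query here is only “does some value reach k occurrences in the window?”, a plain hash map of counts can stand in for the segment tree; a sketch of the two-pointer counting:

```python
from collections import defaultdict

def count_subarrays(a, k):
    """Number of subarrays containing some value at least k times
    (hash map of counts instead of the segment tree in the text)."""
    n = len(a)
    cnt = defaultdict(int)
    ans = 0
    t = 0          # current window is a[s:t]
    hits = 0       # how many distinct values currently reach count k
    for s in range(n):
        while t < n and hits == 0:
            cnt[a[t]] += 1
            if cnt[a[t]] == k:
                hits += 1
            t += 1
        if hits > 0:
            # a[s:t] is the shortest qualifying window starting at s,
            # so every extension of it qualifies as well
            ans += n - t + 1
        cnt[a[s]] -= 1
        if cnt[a[s]] == k - 1:
            hits -= 1
    return ans
```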

190E - Counter Attack

I followed the idea mentioned in tutorials.

At first, we insert all the node indices into a “set”. Then, we adopt three loops to solve the problem.

loop 1: as long as the “set” is not empty, we always take out the first element and push it to a “queue”. Then we erase it from the “set” and go to loop 2;

loop 2: as long as the “queue” is not empty, we take out the first element u (remember to pop it out from the “queue”) and go to loop 3;

loop 3: we enumerate every element that is still in the “set” and use binary search to check whether it is connected to u. If no, we push this element to “queue”, and erase it from the “set” (it seems to be a good idea if we store all these elements first and then erase them when we return back to loop 2).

I think the above algorithm has complexity of order O((m + n)log n) in the worst case, which can be computed based on amortized analysis.


By SummerSky, In English

189A - Cut Ribbon

This problem asks to solve an equation xa + yb + zc = n. We can enumerate x and y but calculate z directly according to zc = n - xa - yb, which gives an algorithm with complexity O(n^2).
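A sketch of this double loop (returning -1 when no decomposition exists is my own convention):

```python
def max_pieces(n: int, a: int, b: int, c: int) -> int:
    # enumerate counts of pieces of length a and b; the rest must be c
    best = -1
    for x in range(n // a + 1):
        for y in range((n - x * a) // b + 1):
            rest = n - x * a - y * b
            if rest % c == 0:
                best = max(best, x + y + rest // c)
    return best
```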

189B - Counting Rhombi

The key idea is to enumerate all the feasible center points. For each center point with coordinate (x, y), it contributes min(x, w - x) × min(y, h - y) to the final result.
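The center-point enumeration can be sketched as:

```python
def count_rhombi(w: int, h: int) -> int:
    # each lattice center (x, y) strictly inside the w x h rectangle
    # contributes min(x, w - x) * min(y, h - y) rhombi
    return sum(min(x, w - x) * min(y, h - y)
               for x in range(1, w) for y in range(1, h))
```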

189C - Permutations

We adopt two pointers p1, p2, to point to the position of array a and array b, respectively. Before moving on to the general algorithm, we start with a simple example to obtain some intuitive understanding.

At first, we try to put the value a[i] = b[0] at position 0. If a[0] = b[0], we do nothing but try to put the value a[i] = b[1] at position 1. We continue this until the first a[i] ≠ b[i] is met, and without loss of generality, we assume that a[0] ≠ b[0]. To accomplish this, we should take out all the elements in array a[n] with indices larger than or equal to i (remember to count the number of operations), and then we can put a[i] = b[0] at position 0. Well, what should we do with the other elements that have been taken out? In fact, they “have been” put to correct positions. Although our algorithm cannot tell their correct positions right now, as we may take out more elements from the end and thus move their positions by some offset, we still have full “observation” and we can completely determine their current correct positions by considering the “future”.

Therefore, we have the following general algorithm. At first p1 = p2 = 0, and we check whether a[p1] = b[p2]. If yes, we increase both p1 and p2 by one. Otherwise we check whether the element a[i] = b[p2] has been taken out or not. If yes, we increase only p2 by one; otherwise we take out more elements until a[i] = b[p2] is met, and then increase p2 by one. We count the number of operations during this process.

189D - AlgoRace

For each type of car, we adopt the Floyd–Warshall algorithm to calculate the minimum time it costs to go from city i to city j, denoted as t[i][j]. Then, we use dp[i][j][k] to denote the minimum time it costs to go from city i to city j with no more than k changes. The recursive formula is dp[i][j][k] = min over j' ≠ j of (dp[i][j'][k - 1] + t[j'][j]). Note that it is sufficient to calculate k up to n, since we have at most n cities, and the optimal path must be a simple path with no loops. Thus, remember to update k = min(k, n) for the originally given k.

189E - Weak Memory

The main idea is to use binary search to find the optimal q, since if there exists some q' that satisfies the requirements, then any q ≥ q' must be a feasible answer as well.

For any q that we are testing, we should check whether we can reach the ending point t from the starting point s or not. If yes, we further decrease q; otherwise we try to increase it and test the next value. For each test, we can implement SPFA (shortest path faster algorithm) to compute the shortest path distance from any point i to s, denoted as dis[i]; its complexity is usually close to O(E) in practice, where E is the number of edges, although the worst case is higher. However, we should modify the calculation slightly when we meet a special “volunteer point”. Once we have reached such a point with index j, we should set dis[j] = 0, since the distance is “cleared” at such points. Finally, if dis[t] ≤ q, then the current q satisfies the requirements and we should further decrease it; otherwise we increase it for the next test.


By SummerSky, 7 years ago, In English

186A - Comparing Strings

These two strings must have the same length. Then, we compare them letter by letter and find all the indices of different letters. The number of differences must be exactly two, and we check whether the two positions satisfy a simple swapping rule.

186B - Growing Mushrooms

The problem requires extremely precise results and thus we should try to only use integers to handle all the calculation.

At first, we determine for each player whether he should use a first or b first. For convenience, we assume that a > b, and we should check which of the two usage orders gives the larger total height. As both candidate heights can be written as fractions with the same denominator (the percentages introduce a factor of 100), it is sufficient to compare their numerators to determine which one is larger. Similarly, sorting should also be implemented in this manner, i.e., we only compare the numerators. Finally, outputting the final height is equivalent to the problem of printing the result of x / 100 for some positive integer x, with precise decimal places. We can convert x into a string and insert “.” at the correct position.

186C - Plant

I know two methods to solve this problem.

1) After some observation, one can find that for the n-th figure, the number is 2^{n-1}(2^n + 1).

2) We use a_n and b_n to denote the number of upward and downward triangles, respectively, in the n-th figure. The recursive formulas are a_n = 3a_{n-1} + b_{n-1} and b_n = 3b_{n-1} + a_{n-1}. By eliminating b from these two equations, we have a_{n+1} - 6a_n + 8a_{n-1} = 0. This type of recursive equation has a solution of the pattern a_n = Ax^n + By^n, where A, B are constants while x, y are the roots of the equation z^2 - 6z + 8 = 0 (this is the standard characteristic-equation method for linear recurrences). Therefore, we have x = 2, y = 4, and from a_1 = 2A + 4B = 3 and a_0 = A + B = 1 one can compute the final result.
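From a_1 = 2A + 4B = 3 and a_0 = A + B = 1 we get A = B = 1/2, so a_n = (2^n + 4^n)/2. A sketch of evaluating this modulo 10^9 + 7 (the modulus is my assumption about the required output format):

```python
MOD = 10**9 + 7

def upward_triangles(n: int) -> int:
    # a_n = (2**n + 4**n) / 2, evaluated with a modular inverse of 2
    inv2 = pow(2, MOD - 2, MOD)
    return (pow(2, n, MOD) + pow(4, n, MOD)) * inv2 % MOD
```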

186D - Mushroom Scientists

This is a mathematical problem and we solve it by using Lagrange Multiplier method.

At first note that the optimal answer must satisfy x + y + z = S since if x + y + z < S, we can always increase x, y or z by S - (x + y + z) and we will obtain a better result.

According to the Lagrange multiplier method, we maximize a·ln x + b·ln y + c·ln z subject to x + y + z = S; taking the partial derivatives gives the system a/x = b/y = c/z = λ. The remaining work is to solve this equation. The conclusion is that x = aS/(a + b + c), y = bS/(a + b + c), z = cS/(a + b + c). If a = b = c = 0, the answer can be x = 0, y = 0, z = S.

186E - Clever Fat Rat

We can consider that all the oats in the first layer are sorted just in the given order, and when they fall down to the deeper layers, they still keep the same order. From this perspective, we can say that the first layer divides the given n oats into n segments, the second layer divides them into n - 1 segments, the third layer divides them into n - 2 segments, and so on.

Based on the above arguments, we can use dp[i][j][l][r] to denote the maximum weight that can “appear” in layer i, scale j, if all the oats within the interval [l, r] fall into this scale (or segment). The recursive formula is dp[i][j][l][r] = max over m in {l - 1, l, l + 1, ..., r} of (dp[i - 1][j][l][m] + dp[i - 1][j + 1][m + 1][r]), but note that if dp[i][j][l][r] < w[i][j], then dp[i][j][l][r] should be set to zero. The reason is that no oats can fall down further from this scale, so it is equivalent to saying that there is no oat left in this scale.

It seems that there is something wrong with the test case...

185D - Visit of the Great

I solve this problem based on tutorials.

We can prove that the common divisor of these terms is either 1 or 2 (the proof in tutorials is quite clear). But this does not work when p = 2. For p = 2, the answer is straightforward: for odd k, k^{2^l} + 1 is even and the LCM must be even, which gives an answer 0, while for even k, k^{2^l} + 1 is odd and the LCM must be odd, giving an answer equal to 1. For the following arguments, we only focus on p > 2.

If k is an even number, then every k^{2^i} + 1 is an odd integer and all the terms must be pairwise co-prime. Thus, the LCM is simply their product. When k is an odd number, every k^{2^i} + 1 is even and thus the LCM is the product divided by 2^{r - l}.

The main remaining issue is to compute the product, which can be represented as Q = (k^{2^l} + 1)(k^{2^{l+1}} + 1)(k^{2^{l+2}} + 1)...(k^{2^r} + 1); by telescoping with (x - 1)(x + 1) = x^2 - 1 we get (k^{2^l} - 1)Q = k^{2^{r+1}} - 1. Thus, Q = (k^{2^{r+1}} - 1) / (k^{2^l} - 1), and we should compute the inverse of k^{2^l} - 1 modulo p. According to Fermat's little theorem, if 2^l % (p - 1) = t and k is not a multiple of p, then k^{2^l} % p = k^t % p. Even if k is a multiple of p, the result is still correct since k^t % p also gives zero. After obtaining r0 = (k^{2^l} - 1) % p, we can calculate its inverse by using Fermat's theorem again. Be careful that r0 might be zero (it has no inverse), and for this case, it means k^{2^l} ≡ 1 (mod p), and thus k^{2^l} + 1 ≡ 2 (mod p) and for any j ≥ l, we have k^{2^j} ≡ 1 (mod p). Therefore, the product is equal to 2^{r - l + 1} % p.

Finally, I have to say that this is a really difficult and complicated mathematical problem, involving quite many details and corner cases.


By SummerSky, 7 years ago, In English

182B - Vasya's Calendar

Straightforward implementation.

182C - Optimal Sum

Let us first try to find some “insight” to this problem. Suppose that the optimal set of positions at which we should reverse the sign of the integer, is (i1, i2, ..., im). One can observe that im - i1 ≤ len must hold, i.e., the leftmost and rightmost reversed positions must fall into some interval of length len. The reason is that there must exist one (maybe multiple) interval of length len that gives the optimal answer, and thus any reversed position that falls out of this interval makes no sense. Therefore, it is safe to modify k as k = min(k, len).

Another observation is that if we are going to reverse k signs, we will either reverse k negative integers or k positive integers. The reason is obvious since a positive integer and a negative integer “cancel” each other. Moreover, we must select the k integers with the maximum absolute values and reverse them.

Based on the above arguments, we can simply enumerate every interval of length len (like a sliding window), and try to obtain a larger value by reversing not more than k signs. For simplicity, we only consider reversing k maximum negative integers (“maximum” means maximum absolute value), since reversing k maximum positive integers is similar.

We can treat every interval of length equal to len as a query. By using “query square root decomposition”, the complexity can be reduced from the trivial O(N^2) to O(N^1.5). There are a large number of materials talking about this on the internet.

The remaining work is how to store and update the maximum k integers. I read some of the accepted codes and learned the following technique. We maintain two “set”s, maxk and candidatek. The first one stores the maximum k integers while the second stores the other integers that are not among the maximum k but belong to the current interval. Whenever the sliding window is moved one step further, we should deal with inserting and deleting elements in maxk and candidatek. One should carefully handle the details involved here.

182D - Common Divisors

Recall that even for integers up to 10^6, the number of divisors is only about several hundred. Therefore, we can calculate all the divisors of each string length in advance, and check which divisors are “string divisors”. Next, we find the longest common prefix of the two strings, and denote its length as len. Finally, we count all the common “string divisors” that are not larger than len.

182E - Wooden Fence

We extend each board to two new boards if it is not a square, by swapping its length and width, while they still have the same “index” (it is required that no neighboring boards should have the same index). In other words, we create two new boards but eliminate the old one. Then, we use dp[i][j] to denote the number of ways to build a fence of length j and the last board has index i. As you may see, the idea is dp but we compute it in a forward manner. From dp[i][j], we should find out all the feasible boards that can be built next to the board with index i, and then move to some dp[k][j + a[k]], where a[k] is the length of board k.


By SummerSky, 7 years ago, In English

180A - Defragmentation

We can adopt an array to store the segments of each file. Each element is in fact a “pair” which contains two values. The first one is the index of the file while the second one is the index of the segment within the file. Then, the problem is reduced to sorting the originally given array in increasing order with all the empty cells appended at the end.

For an element at position i, we should move it to the target position j. In order to do this, we should move the element at position j to an empty cell and then move that element from position i to j (if i = j, nothing is done). One can see that we implement at most two operations for each element, and thus the total number of operations will not exceed 2(n - 1).

180B - Divisibility Rules

Let us investigate some characteristics of each type. For convenience, we write an integer as t = a_m·b^m + a_{m-1}·b^{m-1} + ... + a_0·b^0, where b is the base. Note that it is sufficient to focus on r = b % d instead of b, since we have b = kd + r, and thus b^m = qd + r^m. One can see that t = q'd + a_m·r^m + a_{m-1}·r^{m-1} + ... + a_0·r^0, and the first term does not affect the result. Therefore, we focus on t = a_m·r^m + a_{m-1}·r^{m-1} + ... + a_0·r^0 for the following arguments.

2-type: Suppose that it is enough to check the last m digits. This means that for r^m, r^{m+1}, r^{m+2}, ..., every term must be divisible by d. Note that if r^m is divisible by d, then any r^{m+δ} is also divisible by d, and thus it is sufficient to check whether r^m has d as a divisor. In general, we find all the prime divisors of r and d, respectively. There exists some r^m that has d as its divisor if and only if r has all the prime divisors that d has. Suppose that both r and d have a common prime divisor p; more specifically, the multiplicity of p in r is a while in d it is b. For instance, 12 contains two 2s while 48 contains four 2s. To find a minimum m, we should guarantee that a × m ≥ b holds for every prime divisor. For instance, 12 = 2^2 × 3 and 48 = 2^4 × 3: for prime divisor 2, we need 2m ≥ 4, while for prime divisor 3, we need m ≥ 1; thus m = 2, i.e., 12^2 is divisible by 48.

3-type: Suppose that s = a_m + a_{m-1} + ... + a_0 is divisible by d. Then, t - s must also be divisible by d. Note that t - s = a_m×(r^m - 1) + a_{m-1}×(r^{m-1} - 1) + ... + a_1×(r - 1), and r^m - 1 = (r - 1)(r^{m-1} + r^{m-2} + ... + 1) = q(r - 1). Thus, t - s = Q(r - 1), and to guarantee that this is divisible by d for arbitrary digits, we need d to divide r - 1; as r < d, this means r - 1 = 0, i.e., r = 1.

11-type: The idea is similar to 3-type. Suppose that the alternating sum s = a_1 - a_0 + a_3 - a_2 + ... + a_{2m+1} - a_{2m} is divisible by d, and consider s + t = a_{2m+1}×(r^{2m+1} + 1) + a_{2m}×(r^{2m} - 1) + ... + a_1×(r + 1) + a_0×(r^0 - 1). Note that r^{2m+1} + 1 = (r + 1)(r^{2m} - r^{2m-1} + ... + 1) = q(r + 1), and similarly r^{2m} - 1 = p(r + 1). Thus, s + t can be written as s + t = Q(r + 1), which implies that r + 1 must be a multiple of d. As r < d, this in fact means r + 1 = d.

6-type: We can write d = p × q, where gcd(p, q) = 1 (gcd is short for greatest common divisor). If both (b, p) and (b, q) correspond to a “non-7-type”, then the current (b, d) is a 6-type.

Be careful that r might equal zero; in that case, it is a 2-type and we only need the last digit. Besides, when d = 2, both r = 1 and r + 1 = d hold at the same time, and as the problem requires, we should assign it 3-type rather than 11-type. We can precompute the answers for all pairs (b, d) and output the answer corresponding to each query.

180C - Letter

We use p[i] to denote the number of uppercase letters among indices (1, 2, ..., i) (the prefix idea). Then, we enumerate the border position which separates the last uppercase letter from the first lowercase letter, and calculate the number of modifications we need to make. Using p[i], this number can be computed in O(1) for each border.
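The border enumeration with prefix sums can be sketched as follows (minChanges is a name I made up):

```cpp
#include <cctype>
#include <string>
#include <vector>

// Minimum changes to reach the form UPPER...UPPERlower...lower: try every
// border i (first i letters uppercase) using prefix counts of uppercase.
int minChanges(const std::string& word) {
    int n = word.size();
    std::vector<int> p(n + 1, 0);             // p[i] = uppercase among first i
    for (int i = 0; i < n; ++i)
        p[i + 1] = p[i] + (std::isupper((unsigned char)word[i]) ? 1 : 0);
    int best = n;
    for (int i = 0; i <= n; ++i) {
        int cost = (i - p[i])                 // lowercase in prefix: flip up
                 + (p[n] - p[i]);             // uppercase in suffix: flip down
        if (cost < best) best = cost;
    }
    return best;
}
```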

180D - Name

We denote the first and second strings as s and t, respectively. To find the minimum permutation of s satisfying s > t, we need to find the maximum index i such that s[i] > t[i] while s[j] = t[j] for j < i; for j > i, we should output the remaining letters of s in the order “a, b, c, ..., z”.

We need the prefix idea again, for instance, pt[i][26 + 1] to denote the total number of each letter “a, b, c, ..., z” appearing at positions j ≤ i. Here we use 26 + 1 rather than 26, and “a” is mapped to 1 rather than 0. The reason is that string t may be shorter than s, and to handle this case correctly, we should append letters “mapped to” 0 to the end of t (one can check the comparison rule mentioned in the problem statement, which deals with different string lengths).

Therefore, we enumerate indices from right to left, and for each position i we check whether we can put a letter with value t[i] + 1 at index i while all the letters at indices j < i stay exactly the same as t[j]. This can be done with complexity O(26) by using pt[i][26 + 1]. After we find a feasible position, we output the same letters as t before this index, output t[i] + 1 at index i, and then output the “unused” letters in the order “a, b, c, ..., z”.

180E - Cubes

We compute for each color the maximum number that can be achieved.

For each color, we find all of its consecutive intervals [l, r] (referred to as segments), and store the left and right borders as pairs, in increasing order of l. Then, this problem can be solved by the “two pointers” technique, i.e., we adopt two pointers h1 and h2 that point to the leftmost and rightmost segments of the current window, respectively, and we try to combine these segments together. The reasoning is as follows.

Suppose that the optimal answer for this color is obtained by combining segments i, i + 1, ..., j. Then, this answer will also be found by setting h1 = i and h2 = j, since the window always covers this range as long as it is feasible.

Therefore, we move h1 and h2 as in the usual two pointers technique, and compute the answer.
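Under my reading of the cost model (merging a window of same-color segments costs deleting the cubes in the gaps between them, with at most k deletions allowed), the two-pointer window can be sketched as:

```cpp
#include <utility>
#include <vector>

// segs: disjoint [l, r] intervals of one color, sorted by l. A window
// [h1, h2] of segments is feasible while the total gap inside it (cubes
// of other colors that must be deleted) does not exceed k.
long long bestForColor(const std::vector<std::pair<int,int>>& segs, long long k) {
    long long best = 0, gaps = 0, len = 0;
    int h1 = 0;
    for (int h2 = 0; h2 < (int)segs.size(); ++h2) {
        len += segs[h2].second - segs[h2].first + 1;
        if (h2 > 0) gaps += segs[h2].first - segs[h2 - 1].second - 1;
        while (gaps > k) {                    // shrink the window from the left
            len -= segs[h1].second - segs[h1].first + 1;
            gaps -= segs[h1 + 1].first - segs[h1].second - 1;
            ++h1;
        }
        if (len > best) best = len;
    }
    return best;
}
```

The answer to the whole problem would then be the maximum of bestForColor over all colors.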

180F - Mathematical Analysis Rocks!

Let us focus on position i. At the first step it is i; at the second step it is p[i]; at the third step it is p[p[i]]; and at the fourth step it becomes p[p[p[i]]]. As a[i] = p[p[i]] and b[i] = p[p[p[i]]], we have b[i] = p[a[i]], i.e., p[a[i]] = b[i]. Since a is a permutation, this assigns every position of p, and thus we can recover p.
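The reconstruction is a one-liner per position; a sketch with 1-based permutations (recover is my own name):

```cpp
#include <vector>

// Given a[i] = p[p[i]] and b[i] = p[p[p[i]]] (1-based, index 0 unused),
// we have p[a[i]] = b[i]; since a is a permutation, every position of p
// is assigned exactly once.
std::vector<int> recover(const std::vector<int>& a, const std::vector<int>& b) {
    int n = (int)a.size() - 1;
    std::vector<int> p(n + 1, 0);
    for (int i = 1; i <= n; ++i)
        p[a[i]] = b[i];
    return p;
}
```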


By SummerSky, 7 years ago, In English

175A - Robot Bicorn Attack

We can enumerate all feasible divisions of the given sequence into three parts. For each part, we check whether it has leading zeros (be careful that “0” itself does not have leading zeros) and whether it exceeds 10^6.

175B - Plane of Tanks: Pro

Straightforward implementation solves this problem. It is suggested that, when asked to compare a / b and c / d, one should equivalently compare a × d with b × c to avoid floating-point issues.

175C - Geometry Horse

The main idea is a greedy algorithm. It is obvious that smaller points should be assigned smaller factors, and thus we should sort one array in increasing order of points and another array in increasing order of factors, multiply them pairwise and add the results together.

There are two issues we should take care of. One is that the number of factors may be less than the total number of figures. In this case, we should append extra factors equal to t + 1 to the end of the sorted factor array, so that every figure has a corresponding factor. The second issue is that the factor array is given in the form of a “prefix sum”: p[i] denotes the total amount needed to increase the factor value from 1 to i + 1, rather than the amount needed to go from factor i to i + 1.

175D - Plane of Tanks: Duel

At first, we need two tables, one for “us” and one for our rival, written as dp1[i][j] and dp2[i][j], denoting the probability that after the opposite player's i-th shot, the defensive side still has hp equal to j. These two tables can be computed with techniques similar to Pascal's triangle. In fact, both tables have infinite size, since it is possible that no shot pierces the armor, and thus both players can survive as many rounds as they “want”. Nevertheless, it is sufficient to calculate the tables up to some finite round. The reason is that, as long as the probability of piercing the opposite player is nonzero, the probability of surviving to the m-th round decays geometrically in m, and it can be neglected compared with the required precision 10^{-4} once m exceeds some threshold. Tutorials suggest that this value should be 5000, while I think that 2000 is enough (perhaps I am wrong)...

Then, we decompose by the “first event”: if we win, we win after our first shot, our second shot, our third shot, and so on. These events are mutually exclusive, and thus we can compute each of their probabilities independently and finally add the results together. To win after the i-th shot, we calculate the number of the rival's shots before our i-th shot, and the probability that we survive them can be obtained directly from the precomputed table. Furthermore, we should also multiply by the probability that the rival survives our (i - 1)-th shot but has hp = 0 after our i-th shot, which can also be queried directly from our table.

This problem inspires me that sometimes, instead of simulating two players alternatively, it is more convenient to focus on one player's behavior and check what happens after his move.


By SummerSky, 7 years ago, In English

168A - Wizards and Demonstration

A straightforward mathematical problem; it is better to avoid floating-point numbers entirely.

168B - Wizards and Minimal Spell

When we get a string starting with “#”, we should directly output it and go to the next line (in C++, “cout << endl;”); otherwise, we output it with all spaces deleted and stay on the current line (no “cout << endl;”). We can adopt an “indicator” flag = 0, 1, where flag = 1 means that the previous strings did not start with “#” and we are still in the middle of an output line. Note that when we meet a string starting with “#”, we should first output “endl” if flag = 1, to finish the joined line. Do not forget that when we reach the end of the input, we should output an extra “endl” if flag = 1.

168C - Wizards and Trolleybuses

First let us see what happens if a bus could keep accelerating (with acceleration a) until it reaches the terminal at distance d. It would arrive with speed V = sqrt(2ad). Thus, buses whose maximum speed satisfies v_i ≥ V reach the terminal at time t_i + sqrt(2d/a), where t_i denotes the waiting time before departure. The other buses, with v_i < V, first reach speed v_i (taking time v_i/a and covering distance v_i^2/(2a)) and then keep this speed until arriving at the terminal, which gives total time t_i + v_i/a + (d - v_i^2/(2a))/v_i.

Next, when dealing with the i-th bus, if its arrival time is later than that of the (i - 1)-th one, it arrives at its “original” time; otherwise, it catches up with the previous bus and arrives at the same time as that bus.
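Assuming the kinematics above (uniform acceleration a, track length d) and the no-overtaking rule, the arrival times can be computed as:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// t[i]: departure time, v[i]: maximum speed, a: acceleration, d: distance.
// A bus cannot overtake the one in front, so its arrival time is the max
// of its own free-running time and the previous bus's arrival time.
std::vector<double> arrivals(const std::vector<double>& t,
                             const std::vector<double>& v,
                             double a, double d) {
    double V = std::sqrt(2.0 * a * d);   // speed if acceleration never stops
    std::vector<double> res(t.size());
    for (int i = 0; i < (int)t.size(); ++i) {
        double own = (v[i] >= V)
            ? t[i] + std::sqrt(2.0 * d / a)
            : t[i] + v[i] / a + (d - v[i] * v[i] / (2.0 * a)) / v[i];
        res[i] = (i == 0) ? own : std::max(own, res[i - 1]);
    }
    return res;
}
```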

168D - Wizards and Huge Prize

Let us try to decompose the original problem into smaller ones step by step.

To win at least l tours, we should win l, or l + 1, or l + 2,..., or n tours. Thus, the total probability is the sum of probability to win exactly x tours (x = l, l + 1, ...n), and the original problem is reduced to computing the probability of winning exactly x tours. To win exactly x tours, we can win y (y = 0, 1, 2, ..., x) tours with prize (without bags) and x - y tours with bags (without prizes) while we should further guarantee that we have enough capacity to take back all the prizes. Thus, we have obtained two subproblems, and the first one is to calculate the probability of winning s tours with prizes and the second one is to compute the probability of winning t tours with bags while achieving a capacity c.

For the first subproblem, we can use dp1[i][j] to denote the probability of winning j tours among the first i tours, which can be calculated like Pascal's triangle. For the second subproblem, we use dp2[i][j][k] to denote the probability of winning j tours among the first i tours while achieving capacity k, which can also be calculated like Pascal's triangle. Note that the maximum capacity can be very large (about 200 × 200) and it is infeasible to compute all of them. Nevertheless, it is sufficient to compute k up to 200. The reason is that there are at most 200 prizes, and instead of calculating the probability of successfully taking all the prizes back, we can compute the probability that the prizes cannot be taken back (therefore, 200 is enough for k) and then subtract this from the overall probability, like “set A and its complement S - A” (S is the complete set). Therefore, we need a third table dp3[i][j], denoting the probability of winning j tours among the first i tours (this is set S), which is also computed like Pascal's triangle.

168E - Wizards and Numbers

Without loss of generality, we assume that a > b, and denote (a, b) as a state. The main framework is based on dfs. To check whether (a, b) is a winning or losing state, we can first test the result of (b, a%b). If (b, a%b) is a losing state, then it is obvious that we should directly play a%b and thus leave the rival a losing state. If (b, a%b) is a winning state, then we must play some a - b^k so that after the rival plays several moves, he “has to take” us to the state (b, a%b), and thus we win. Now we focus on the analysis of the latter case, i.e., (b, a%b) is a winning state.

As a > b, we can write a = kb + r, where k is the quotient and r is the remainder. One can check that if (b, a%b) is a winning state, then (a = b + r, b) is definitely a losing state, since both a - b and a%b lead to the state (b, a%b). Based on this observation, we can further deduce that (a = 2b + r, b) is a winning state, as we can play a - b and direct the rival to the losing state (a = b + r, b). Similarly, we can determine the result of any state (a = kb + r, b), and in fact there are two general cases (in the following arguments, we may omit the second term b when describing a state).

1) b is an odd number: the conclusion is that an odd k leads to a losing state while an even k results in a winning state, i.e., we have a state sequence “LWLWLW...”. We can prove this by induction. The base case k = 1 is obvious. Now assume that this holds from k = 1 to k = n - 1. If k = n - 1 is odd, then k = n is even and must be a winning state, since playing a - b directs the rival to a losing state. If k = n - 1 is even, the available subtractions are a - b^p = nb + r - b^p = (n - b^{p-1})b + r. Note that b^{p-1} is always odd (b is odd), and thus n - b^{p-1} is always even, which is a winning state; the move a%b also leads to the winning state (b, a%b). Therefore, we can only move to winning states, and thus k = n is a losing state.

2) b is an even number: the conclusion is that the state sequence has a period of length b + 1, and every subsequence from k = n(b + 1) + 1 to k = n(b + 1) + b + 1 has the pattern “LWLWLW...LWLWW”. More specifically, for k = n(b + 1) + t with t = 1, 2, ..., b, the state is winning if t is even and losing if t is odd, while for t = b + 1 it is also a winning state. The proof is again based on induction. The base case covers 0×(b + 1) + 1, 0×(b + 1) + 2, ..., 0×(b + 1) + b + 1, and one can prove it after some simple deduction (0×(b + 1) + b + 1 is a winning state, as we can play a - b^2 = (b + 1)b + r - b^2 = b + r and thus move to the state (a = b + r, b), which is losing). Now, assume that the conclusion holds up to (n - 1)(b + 1) + b + 1, and we are going to prove it for n(b + 1) + t, t = 1, 2, ..., b, b + 1.

For any odd number t ≤ b, we can play a - b^p = (n(b + 1) + t)b + r - b^p = (n(b + 1) + t - b^{p-1})b + r, where p ≥ 1. Note that b^{p-1} = (b + 1 - 1)^{p-1} = q(b + 1) + (-1)^{p-1} (binomial expansion), and thus a - b^p = ((n - q)(b + 1) + t - (-1)^{p-1})b + r. Here k' = (n - q)(b + 1) + t - (-1)^{p-1}; one can see that n - q ≤ n, and t - (-1)^{p-1} is always even. According to our assumption, this is a winning state, and it implies that n(b + 1) + t is a losing state for odd t, since we can only move to winning states.

For any even number t, it is straightforward that it is a winning state since we can move to t - 1, which is a losing state as proved above.

For t = b + 1, we can play a - b^2 = (n(b + 1) + b + 1)b + r - b^2 = (n(b + 1) + 1)b + r. Here k' = n(b + 1) + 1, which is a losing state as proved above, and thus t = b + 1 is a winning state.


By SummerSky, 7 years ago, In English

166A - Rank List

Sort the array in the required order, and find out all the terms that have exactly the same data as the k-th one.

166C - Median

In fact, we can keep inserting x into the array until we find that the median is equal to x.

Why does this work? One can see that if the median is less than x, then we should insert more integers that are larger than x; otherwise, we should insert more integers that are less than x. Nevertheless, x itself can be viewed as an integer that is both “less than” and “larger than” x.

It can be proved that inserting at most n + 1 copies of x is always sufficient, and thus the complexity is O(N^2 log N).
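The simulation is tiny; a sketch assuming the problem's median definition (the element at 1-based position floor((n + 1)/2) after sorting):

```cpp
#include <algorithm>
#include <vector>

// Append copies of x until the median (the element at 1-based position
// floor((n + 1) / 2) after sorting) equals x; return the number appended.
int insertionsNeeded(std::vector<int> a, int x) {
    int count = 0;
    while (true) {
        std::sort(a.begin(), a.end());
        if (a[(a.size() - 1) / 2] == x) return count;
        a.push_back(x);
        ++count;
    }
}
```

For instance, for the array (10, 20, 30) and x = 10, one insertion makes the median 10.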

166D - Shoe Store

My code is about 400 lines long....

The basic idea is dp. We use dp[i][j] to denote the maximum profit we can obtain after the j-th customer has made his decision (all customers are sorted in increasing order of their foot size l).

In detail, i takes values 0, 1, 2, 3, which is in fact a bitmask. Note that when the j-th customer has foot size l[j] (we write l for short), he can buy shoes of size s = l or s = l + 1. We use 00 to denote that after the j-th customer has made his decision (he buys one pair of shoes or nothing), the shoes of sizes s = l and s = l + 1 are both unsold; 01 denotes that the shoes of size s = l are unsold while those of size s = l + 1 are sold; similarly, 10 denotes that the shoes of size s = l are sold while those of size s = l + 1 are not; finally, 11 denotes that both are sold.

Note that dp[i][j] only depends on dp[i][j - 1]. However, the recursive formula should be deduced for three cases.

1) l[j] = l[j - 1] + 1: for each state of dp[i][j - 1], i.e., 00, 01, 10, 11, the last bit refers to shoes of the same size as the first bit of the states of dp[i][j]. This means that, for instance, 01 cannot be transferred to states 00 and 01, since the shoes of size l[j] were already sold after the (j - 1)-th customer made his decision.

2) l[j] = l[j - 1]: the states of dp[i][j - 1] and dp[i][j] denote exactly the shoes of the same size. This means that, for instance, 01 can not be transferred to states 10 and 00 (similar reasons as above).

3) l[j] > l[j - 1] + 1: this means that the states of the j-th customer do not depend on those of dp[i][j - 1].

In case that the shoes of size l do not exist in the given input, we can still consider that this pair of shoes “exist” with zero cost.

166E - Tetrahedron

We use fa[n], fb[n], fc[n] and fd[n] to denote the total number of ways to stay at nodes a, b, c and d after n steps, when we first start at node d. The recursive formula for each function is similar, and they all have forms of fa[n] = fb[n - 1] + fc[n - 1] + fd[n - 1]. Thus, we can adopt an array dp[4][n] to calculate and store all the results.

To further reduce the space complexity, note that dp[i][n] only depends on values of dp[j][n - 1], and thus it is sufficient to use dp[4][2] to store the values at the last step, and update values at the current step.
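The rolling-array dp reads as follows (waysBackToD is my name; the answer is taken modulo 10^9 + 7, as required by the problem):

```cpp
// Number of n-step walks d -> ... -> d on the tetrahedron, modulo 1e9+7.
// Only the previous step's four values are kept (the dp[4][2] idea above).
const long long MOD = 1000000007LL;

long long waysBackToD(int n) {
    long long cur[4] = {0, 0, 0, 1};          // start at node d (index 3)
    for (int step = 0; step < n; ++step) {
        long long nxt[4];
        for (int v = 0; v < 4; ++v) {
            nxt[v] = 0;
            for (int u = 0; u < 4; ++u)       // reachable from the other three
                if (u != v) nxt[v] = (nxt[v] + cur[u]) % MOD;
        }
        for (int v = 0; v < 4; ++v) cur[v] = nxt[v];
    }
    return cur[3];
}
```

For n = 2 there are 3 walks (d-a-d, d-b-d, d-c-d), matching the recurrence.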


By SummerSky, 7 years ago, In English

165A - Supercentral Point

Straightforward implementation with exhaustive enumeration.

165B - Burning Midnight Oil

The main idea is binary search, since if v is a feasible answer, then any v' > v is also feasible. Moreover, we can precompute k^p, and it is sufficient to consider p up to 40, since the problem guarantees that the maximum value of n is 10^9. Do not worry that k^p will overflow, as we will never use a value which exceeds v. The remaining work is to adopt binary search to find the minimum feasible v.
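A sketch of the binary search (total and minV are names I made up); in this form total() simply stops once the term reaches zero, so no precomputed power table is needed:

```cpp
// Pages written with first-hour volume v and divisor k (k >= 2, as the
// problem guarantees): v + v/k + v/k^2 + ... with integer division,
// summed until the term reaches zero.
long long total(long long v, long long k) {
    long long sum = 0;
    while (v > 0) { sum += v; v /= k; }
    return sum;
}

// Feasibility is monotone in v, so binary search finds the minimum v
// with total(v, k) >= n; v = n always suffices, which bounds the search.
long long minV(long long n, long long k) {
    long long lo = 1, hi = n;
    while (lo < hi) {
        long long mid = (lo + hi) / 2;
        if (total(mid, k) >= n) hi = mid;
        else lo = mid + 1;
    }
    return lo;
}
```

For example, n = 7, k = 2 gives v = 4 (4 + 2 + 1 = 7).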

165C - Another Problem on Strings

We can solve this problem based on two cases.

1) k = 0: we find all maximal consecutive intervals [l, r] that contain only 0s. Each such interval contributes t(t + 1)/2 to the final answer, where t = r - l + 1 (the number of its non-empty subintervals).

2) k > 0: we find each interval [l, r] that contains exactly k 1s, with a[l] = a[r] = 1. Then, we move left from position l and find the smallest index l' such that a[l'], ..., a[l - 1] are all 0s (l' = l if a[l - 1] = 1). Similarly, we move right from r and find the largest index r' such that a[r + 1], ..., a[r'] are all 0s. Then, this interval contributes (l - l' + 1) × (r' - r + 1) to the final answer, since the left endpoint may be any of l', ..., l and the right endpoint any of r, ..., r'.

The total complexity is O(N) according to amortized analysis.
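As a cross-check, the same counts can be obtained with the standard prefix-count technique (an alternative to the interval argument above, not necessarily the tutorial's method):

```cpp
#include <string>
#include <vector>

// Count substrings of a 0/1 string with exactly k ones: each prefix with
// c ones pairs with every earlier prefix having c - k ones.
long long countSubstrings(const std::string& s, int k) {
    int n = s.size();
    std::vector<long long> cnt(n + 1, 0);
    cnt[0] = 1;                               // the empty prefix
    long long ones = 0, ans = 0;
    for (int i = 0; i < n; ++i) {
        ones += s[i] - '0';
        if (ones - k >= 0) ans += cnt[ones - k];
        ++cnt[ones];
    }
    return ans;
}
```

For instance, "1010" has 6 substrings containing exactly one ‘1’.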

165D - Beard Graph

At first, we should realize that the whole graph is either a single link (chain), or a tree in which there exists a unique root node such that, after deleting this node, the tree decomposes into several independent single links.

We can build a segment tree to deal with the modification of the color of some edge, and the query of the distance between two nodes. In order to do this, we have to assign new indices to each edge. Without loss of generality, we start from the root and adopt dfs for each of the “single link”. For each link, we assign both the edges and nodes with indices 1,2,3,... until all the nodes have been visited. After these operations, we can find that for each single link, the indices of edges and nodes belonging to it have consecutive numbers. As for the segment tree, each node contains its managing range, i.e., the edges that it covers, and the number of white edges belonging to this range.

When the color of some edge is changed, it is equivalent to modifying a single element in the segment tree. For instance, when an edge becomes white, we increase by one the counter of each segment-tree node that covers this edge. When we are asked for the distance between two nodes a and b, we should consider two cases based on whether they belong to the same single link or not, which can be conveniently determined by using a disjoint set union. Besides, for each node, we should also compute the distance to the root node, dis[a], in advance. If a and b belong to the same single link, we can directly query the segment tree to check whether there exists at least one white edge between them, since they have consecutive indices in the segment tree, and the answer is |dis[a] - dis[b]|. If they belong to different single links, we should check whether there is at least one white edge from node a to the root node, and from b to the root node, independently. To achieve this, we should find the index of the root's child that belongs to the same single link as node a (or b), and query this range in the segment tree. The answer is dis[a] + dis[b].

165E - Compatible Numbers

I read tutorials and found that the solution is based on dp.

We start with a simple example. Suppose that we are asked to find an integer x whose bitwise AND with 6 is zero. We write 6 in binary form as (00...0110), where the total number of digits is 22 (note that 4 × 10^6 = 4 × (10^3)^2 < 4 × 1024^2 = 2^22, and thus 22 digits are enough to represent all the integers required by this problem). We can see that (11...111001) can serve as a potential answer. However, we can find many more feasible sequences, such as (11...110001), (11...110000) and so on. One can check that the correct sequences must have zeros at the second and third positions, counted from right to left, while the values at the other positions never matter.

We define that a sequence can be reduced to another sequence by converting some single 1 to 0. For instance, (01110) can be reduced to (01100), (01010), (00110). Thus, given an integer x, we represent it in binary form and invert every digit (0 to 1 and 1 to 0), obtaining a new integer x'. Then, it is sufficient to check whether x' can be reduced to some integer that appears in the given array.

We can use a bitmask-based dp to calculate, for each integer, whether it can be reduced to some integer that appears in the given sequence.
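The reduction dp can be written over all masks; a down-scaled sketch (buildDp is my name, and bits is a parameter instead of the fixed 22):

```cpp
#include <vector>

// dp[mask] = some array element that is a submask of mask, or -1.
// Since x AND y == 0 exactly when y is a submask of ~x, an element
// compatible with x is dp[~x & full].
std::vector<int> buildDp(const std::vector<int>& a, int bits) {
    int full = (1 << bits) - 1;
    std::vector<int> dp(full + 1, -1);
    for (int i = 0; i < (int)a.size(); ++i)
        dp[a[i]] = a[i];                  // every element is its own submask
    for (int mask = 0; mask <= full; ++mask) {
        if (dp[mask] != -1) continue;
        for (int b = 0; b < bits; ++b)    // inherit from mask with one bit cleared
            if ((mask >> b) & 1 && dp[mask ^ (1 << b)] != -1) {
                dp[mask] = dp[mask ^ (1 << b)];
                break;
            }
    }
    return dp;
}
```

For each a[i], the answer (a compatible partner or -1) is then dp[(~a[i]) & full], with bits = 22 in the actual problem.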


By SummerSky, 7 years ago, In English

160A - Twins

We sort the array in decreasing order, and calculate the prefix sums p[i]. Our target is to find the minimum index i such that p[i] × 2 is strictly larger than the total sum.
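The whole solution fits in a few lines (minCoins is a name I made up):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Greedy: take coins from the largest down until the taken sum strictly
// exceeds the remaining sum, i.e., 2 * taken > total.
int minCoins(std::vector<int> coins) {
    std::sort(coins.begin(), coins.end(), std::greater<int>());
    long long total = 0;
    for (int i = 0; i < (int)coins.size(); ++i) total += coins[i];
    long long taken = 0;
    int cnt = 0;
    for (int i = 0; i < (int)coins.size(); ++i) {
        taken += coins[i];
        ++cnt;
        if (2 * taken > total) break;
    }
    return cnt;
}
```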

160B - Unlucky Ticket

After dividing the given sequence into the first half and second half, we sort both of them in an increasing order, and compare the elements with the same indices, one by one. We should check whether the first half has strictly larger or smaller elements than the second half, at each position.

160C - Find Pair

The general idea is to first sort the given array in an increasing order, denoted as (a1, a2, ..., an). Then, the required sequence should be (a1, a1), (a1, a2), ...(a1, an), (a2, a1), (a2, a2), ...(a2, an), (a3, a1), ...(an, an). For instance, for (1, 2, 3), we have (1, 1), (1, 2), (1, 3),(2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3). However this is not correct if we have elements with the same value. For instance, for (1, 2, 2), it should be (1, 1), (1, 2), (1, 2),(2, 1), (2, 1), (2, 2), (2, 2), (2, 2), (2, 2).

Nevertheless, the above idea still works after some slight modification. Note that the first element can always be determined by calculating k / n (index starts from 0), no matter whether we have same values or not. Suppose that the first element is ai. Then, we find the number of elements that are strictly smaller than ai, denoted as s, and also the total number of elements that are equal to ai, denoted as t. Now, the problem is reduced to finding out the (k - s × n)-th pair in all the t × n pairs, which have ai as the first element. Moreover, the second element of these t × n pairs must be ordered as a1, a1, ...a1, a2, a2, ..., a2, a3, ...an, i.e., any aj has been repeated t times. Thus, it is straightforward to compute the (k - s × n)-th element in this sequence.

160D - Edges in MST

This problem involves several classic techniques, such as disjoint set union, Kruskal's algorithm for finding a minimum spanning tree (MST), Tarjan's algorithm for finding bridges in a graph, and so on.

Suppose that we have obtained an MST. We should note that it is possible to replace some edges with other ones of the same weight, but definitely impossible to replace an edge with another one of different weight, since we would either obtain a smaller MST (contradicting our assumption) or a larger one (which is not an MST!).

Therefore, we can sort the edges in increasing order of weight and adopt the framework of Kruskal's algorithm, but deal with all edges of the same weight at the same time. Suppose that edges (ei, ei + 1, ..., ej) have the same weight; we enumerate from ei to ej, and we will meet the following two cases.

1) ek connects two nodes that already belong to the same connected component (remember that disjoint set union is used in Kruskal's algorithm); then it is definitely not included in any MST. The reason is that this edge must form a cycle with some other edges of smaller weight, and if it were included in some MST, we could immediately replace it with one of those edges and obtain a smaller MST.

2) ek connects two different connected components. Then, it is possible that this edge must be included in every MST, since it might be the unique edge connecting these two components, in which case it is also referred to as a bridge. On the other hand, it might be included only in some MSTs, since there may be other edges of the same weight that also connect these two components.

Thus, we consider the connected components as “nodes”, build a “graph” out of the current batch of equal-weight edges, and adopt Tarjan's algorithm to find all the bridges. However, this “graph” may contain multiple edges, for instance, when several edges connect the same two components. Be careful to handle these multiple edges when using Tarjan's algorithm.

160E - Buses and People

We construct a segment tree to solve this problem (do not forget to compress the original data into a smaller range so that the segment tree has a feasible size). Each node “manages” a time interval [l, r], and stores the index of the bus which falls into the current time interval and reaches the farthest distance (and also stores this distance).

Next, we sort buses and passengers all together in increasing order of their starting positions, and enumerate the sorted result one by one. By sorting them together, we guarantee that whenever we process a passenger, all buses that start at or before his position have already been inserted into the segment tree, and only such buses can take him to his destination. When we meet a bus, we insert it into the segment tree, i.e., we visit the nodes from top to bottom and update the farthest distance (and bus index) stored at each node. When we meet a passenger, we query the segment tree to see whether there exists a bus that can take him to his destination.

I think several issues should be clarified when we try to “query” the segment tree.

1) I think the query complexity is not O(log N); instead, it may be quite large. Thus, we might need to adopt some “pruning” techniques to reduce the search space.

2) Whenever we find that the farthest distance at the current node is strictly less than the passenger's destination, we should immediately stop the search, since no bus in this subtree can take the passenger to his destination. Otherwise, whenever we reach a leaf node, we directly return the bus index stored there, since the correct bus has been found. Otherwise, we first try to visit the right child node, and if it does not return a valid bus index, we continue with the left child node (of course, the precondition is that the passenger's time falls into the time interval of the left child node).
