Geothermal's blog

By Geothermal, history, 5 years ago, In English

Hi all!

I'm proud to announce Competitive Programming Academy, my new programming competition training community.

Though the broad objective of CPA is to help people grow as competitive programmers, there are three specific goals I hope to achieve with this program:

  1. To provide a programming competition community focused specifically on training and growth. There are many existing communities that work well for discussing programming contests, but I think having one emphasizing training will make it easier for people to ask questions without fear of being ignored (or downvoted). Of course, the other advantage of asking questions there is that I'll be around to answer them. Additionally, I hope that having a community to train with will help people who don't have other competitive programmers around them locally find the motivation to grow.
  2. To help people create training plans and find helpful resources to use while practicing. While I'm not convinced there's a huge difference between the various training platforms (e.g. I don't think it matters much whether you solve problems on Codeforces versus AtCoder versus Codechef versus Kattis, as long as you solve problems at all), identifying these resources may help complete beginners, and for everyone else, I'm hoping to minimize the time trainees spend thinking about how to train so that they can maximize the time they actually spend training.
  3. To communicate the intuition behind challenging problems. While most successful competitive programmers train individually (and this approach is generally effective), I think there's one question most people have trouble answering on their own or with editorials while working on challenging problems: how would I have thought of that? I think the most important way I (or a coach/mentor/etc in general) can help with the training process is by helping trainees build the intuition needed to solve competition problems.

In particular, I should note that because I don't want to restrict CPA to a smaller size, it is not a one-on-one training program--for the most part, I won't be able to select specific problems for each individual person in the program, for example. If enough people would find it useful, I'm open to creating some sort of community-based mentoring structure, matching mentors with groups of trainees for a more personal touch. (My instinct, however, is that this is probably not the way to go because finding active and effective mentors would be challenging and because a similar concept has been tried in the past and failed, so I plan to implement it only if much of the community is fairly convinced that this would be useful.)

Currently, the plans include a set of chats for general discussion and Q&A, as well as moderated discussion sessions after Codeforces rounds or group virtual contests. I'm starting small because I'd like to get a sense for the program's scale and what people are looking for before expanding too much--in the near future, I'm hoping to add additional services. Some ideas I've been bouncing around include the aforementioned mentoring program, lectures (with accompanying problemsets) on specific topics that come up regularly in contests, and a problem-of-the-day system.

Competitive Programming Academy is hosted on Discord--join the server at https://discord.gg/VxVnGHu. Please read the announcements in #first-time, fill out the linked form, and you'll be added soon afterwards. Please make sure to read the rules and guidelines carefully. To ensure a high quality of discourse, violations of these rules will result in mutes and/or bans from the server.

Looking forward to seeing many of you there!


By Geothermal, history, 5 years ago, In English

Hi all!

I'm currently considering opening up a competitive programming training program of some sort. The details are definitely still in the works, though, and I wanted to discuss some preliminary ideas with the community to get feedback on what would be most helpful (and on whether there would be any interest in this).

I'm interested in starting a training program for three main reasons:

  • To make competitive programming more accessible to those without access to resources like college courses or training camps

  • To provide a space in which questions are encouraged and actively answered (especially since I think this generally isn't the case in the community at large)

  • To help competitive programmers build problem-solving skills and intuition, which I think aren't sufficiently emphasized by most contest training resources

I think one of the most frustrating things about being a new (or even intermediate) competitive programmer is that it's generally pretty difficult to get help from people more experienced than you. My hope is that creating a community with the explicit intent of training competitive programmers less experienced than myself will alleviate this problem, and perhaps will help establish a norm of helping newcomers within the overall competitive programming community.

However, I'm not sure exactly what such a program would entail, so I'd love to get some input from the community on what would be most helpful. I would most likely organize this through a Discord server, which would include places to ask for help with problems, to discuss competitive programming in general, and to discuss how best to train. That said, I think it'd probably be best to incorporate some element of active training into the program--some preliminary ideas I've considered include:

  • Regularly published problemsets, including tasks of varying difficulties. Most likely, I'd publish relatively short problemsets a few times a week (something like one problem at each of three overall levels, with each trainee attempting one of the problems). For the next few days, those interested could work on solutions and message me for hints and feedback, and afterwards, I'd post my solutions, with a particular focus on outlining how I came up with my overall approach.

  • Topic lectures introducing important algorithms and discussing several problems applying them. These would probably be most useful for people with some experience (e.g. stably rated above 1400-1500), since below that there aren't really too many specific topics to cover.

  • Scheduled community virtual contests, with discussion sessions afterwards to talk through the problems and discuss ideas, especially with a focus on the intuition underlying the solutions.

  • For a smaller group of people with more competitive programming experience, I'd be open to providing personal assistance with training, problem/topic selection, overall contest strategy advice, etc.

  • If enough strong competitors are willing to get involved, it might make sense to have some sort of mentor-trainee program in which newcomers are matched with more experienced programmers in order to get help and advice.

The target audience for this would depend on who's interested--I think that at least in theory, I'd be able to work with anyone up to around (and including) the 2200s. My main goal would probably be to help novices have their questions answered and come up with a good general training plan while working more personally with more advanced competitors to focus specifically on building up intuition and shoring up weak areas.

Of course, all of this is very vague, since the above isn't any sort of finalized plan. At this point, though, I have a couple of requests:

  1. Post below if you would be at all interested in this. It'd be helpful for me to figure out how many people would want to be involved as I decide whether starting this up would be worth the effort.
  2. If you have any thoughts on which (if any) of the above ideas you'd find helpful, or any other forms of training that would be useful to you, please share them--in the event I do decide to make this happen, I'd like to know what I can do to be of as much assistance as possible.

Thanks to everyone who read this far--if you have any questions or comments, please be sure to post them below!


By Geothermal, history, 5 years ago, In English

A — Can't Wait for Holiday

The only approach here is the trivial one--compare the input to each of the possible seven values and print the appropriate answer for each of them. One reasonably fast way to implement this is to create an array $$$A$$$ containing the seven inputs, with "SUN" in position zero and "SAT" in position six. Then, we compare $$$S$$$ to each value in the array and, upon finding the value $$$i$$$ such that $$$A[i] = S$$$, we print $$$7-i$$$. This is valid because the next Sunday after the day represented by our input would come in position seven.
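
For concreteness, here's a minimal C++ sketch of this approach (an illustration, not the linked submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    string s;
    cin >> s;
    // A[0] = "SUN", ..., A[6] = "SAT", matching the indexing described above.
    string a[7] = {"SUN", "MON", "TUE", "WED", "THU", "FRI", "SAT"};
    for (int i = 0; i < 7; i++)
        if (a[i] == s) cout << 7 - i << endl;
}
```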

Runtime: $$$O(1)$$$. Click here for my submission.


B — ROT N

Again, the only real solution here is to directly implement the approach given in the problem statement. However, character arithmetic gives us a fairly nice way to do so. For each character in $$$S$$$, we can compute the position of the corresponding letter in the alphabet by taking $$$S[i]$$$ minus the character 'A'. From here, we can add $$$N$$$ to this value and take the result modulo $$$26$$$ to find the offset corresponding to our new letter upon ciphering. Then, we can add 'A' to the offset and set $$$S[i]$$$ to the result to update $$$S$$$ accordingly. Finally, we simply print $$$S$$$.
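
A short sketch of the character arithmetic (input order assumed to be $$$N$$$, then $$$S$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    string s;
    cin >> n >> s;
    for (char &c : s)
        c = 'A' + (c - 'A' + n) % 26; // shift within 'A'..'Z', wrapping modulo 26
    cout << s << endl;
}
```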

Runtime: $$$O(N)$$$. Click here for my submission.


C — Buy an Integer

Observe that we should buy a number with the greatest possible number of digits, since any number is strictly greater than any other number with fewer digits. We iterate over the possible numbers of digits in decreasing order and print our answer as soon as we find a number of digits at which we can afford some number.

Given that we want to buy an $$$N$$$-digit number, the cheapest one is $$$10^{N-1}$$$, which will cost $$$10^{N-1}A + NB$$$. If this is greater than $$$X$$$, we must try a lower value of $$$N$$$. If not, let's solve for the maximum number we can buy. Suppose this number is $$$M$$$. Then, we must have $$$MA + NB \leq X$$$, so $$$MA \leq X - NB$$$, so $$$M \leq \frac{X - NB}{A}$$$. Thus, any $$$N$$$-digit number that is at most $$$\frac{X - NB}{A}$$$ will be cheap enough. Keep in mind that since we've assumed we're buying an $$$N$$$-digit number, $$$M$$$ must be at most $$$10^N - 1$$$, even if $$$\frac{X - NB}{A}$$$ is $$$10^N$$$ or greater. (For example, if $$$A = 1$$$, $$$B = 20$$$, and $$$X = 40$$$, then we clearly cannot buy any number with at least two digits, and when we consider one digit, we find $$$\frac{X - NB}{A} = 20$$$. However, since we're only considering one-digit numbers, the greatest possible answer is $$$9$$$.)

From here, we can iterate over all possible values of $$$N$$$ in decreasing order and print the answer corresponding to the first sufficiently cheap value of $$$N$$$. It's probably a good idea to check separately whether we can buy $$$10^9$$$, since this case works a little differently because we can't buy any larger $$$10$$$-digit numbers.
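
A compact sketch of this idea; instead of stopping at the first workable digit count, it takes the maximum over all of them, which is equivalent (and it caps the answer at $$$10^9$$$ to handle the edge case above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long a, b, x;
    cin >> a >> b >> x;
    long long ans = 0, pow10 = 1; // pow10 = 10^(n-1), the cheapest n-digit number
    for (long long n = 1; n <= 10; n++) {
        // If even the cheapest n-digit number is affordable...
        if (pow10 * a + n * b <= x) {
            // ...the best n-digit purchase is the largest m with m*a + n*b <= x,
            // capped at the largest n-digit value and at 10^9, the largest integer sold.
            long long m = min((x - n * b) / a, pow10 * 10 - 1);
            ans = max(ans, min(m, (long long)1e9));
        }
        pow10 *= 10;
    }
    cout << ans << endl;
}
```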

Runtime: $$$O(\log MX)$$$, where $$$MX$$$ is the greatest integer sold. Click here for my submission.


D — Coloring Edges on Tree

We claim $$$K$$$ is the maximum degree of any vertex. Our construction is fairly simple. Root the tree at vertex $$$1$$$ and perform a DFS. Then, when we reach a node $$$v$$$, color the edges to its children with any set of colors from $$$1$$$ to $$$K$$$ not already used to color the edge from $$$v$$$ to its parent, if $$$v$$$ isn't the root. This will always be possible because we know by our choice of $$$K$$$ that there will be at least $$$\textrm{deg} \, v$$$ colors to choose from, so we can use the colors $$$1$$$ through $$$\textrm{deg} \, v$$$ for this, possibly excluding the one already used for the edge from $$$v$$$ to $$$v$$$'s parent. In practice, my solution iterates through the colors starting with $$$1$$$ and skipping the one already used if necessary.

Proving that no answer lower than this $$$K$$$ is possible is fairly trivial. Suppose for contradiction that some vertex $$$v$$$ has $$$\textrm{deg} \, v$$$ greater than the number of colors used. Then, by the Pigeonhole Principle, two of the edges incident to $$$v$$$ would have to share a color, which is not allowed. Therefore, the number of colors must be at least $$$\textrm{deg} \, v$$$ for every vertex in the graph, which implies that our solution is optimal.
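
A sketch of the DFS construction (recursive; edge colors are reported in input order):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<pair<int,int>>> adj; // adj[v] = list of (neighbor, edge index)
vector<int> color;                 // color[e] = color assigned to edge e

void dfs(int v, int parent, int parentColor) {
    int c = 0;
    for (auto [u, e] : adj[v]) {
        if (u == parent) continue;
        ++c;
        if (c == parentColor) ++c; // skip the color on the edge to our parent
        color[e] = c;
        dfs(u, v, c);
    }
}

int main() {
    int n;
    cin >> n;
    adj.assign(n + 1, {});
    color.assign(n - 1, 0);
    for (int i = 0; i < n - 1; i++) {
        int a, b;
        cin >> a >> b;
        adj[a].push_back({b, i});
        adj[b].push_back({a, i});
    }
    dfs(1, 0, 0);
    cout << *max_element(color.begin(), color.end()) << "\n"; // K = max degree
    for (int i = 0; i < n - 1; i++) cout << color[i] << "\n";
}
```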

Runtime: $$$O(N)$$$. Click here for my submission.


E — Rem of Sum is Num

First, note that the required conditions cannot hold for a sequence of at least $$$K$$$ elements, since any value $$$K$$$ or greater cannot be the remainder when our sum is divided by $$$K$$$. Otherwise, we can write the given equation for the range $$$i$$$ through $$$j$$$ as

$$$\sum_{k = i}^j (A_k - 1) \equiv 0 \pmod{K}.$$$

We thus decrease all values in the array by $$$1$$$, reducing our problem to counting subarrays summing to $$$0 \pmod{K}$$$ with length less than $$$K$$$. This can be solved using prefix sums and a sliding window. First, ignore the length condition. Then, the number of valid sequences ending at $$$j$$$ is the number of earlier positions $$$i$$$ such that

$$$\sum_{k = 0}^i (A_k - 1) \equiv \sum_{k = 0}^j (A_k - 1),$$$

working modulo $$$K$$$. We can iterate over all prefixes of the array and maintain a map from prefix sums modulo $$$K$$$ to the number of times they've occurred so far to keep count of this. Then, to bring back the condition on the length of our array, we can remove the sum that occurred $$$K$$$ positions earlier from our map before incrementing the answer at each position so that at any given time, all subarrays of length $$$K$$$ or greater are not considered as options.
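
A sketch of the prefix-sum map with the sliding window (prefix index $$$j$$$ pairs with earlier prefixes $$$i$$$ satisfying $$$j - i < K$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, k;
    cin >> n >> k;
    vector<long long> a(n);
    for (auto &v : a) cin >> v;

    // pre[i] = (A_1 - 1) + ... + (A_i - 1), taken modulo K.
    vector<long long> pre(n + 1, 0);
    for (int i = 0; i < n; i++) pre[i + 1] = (pre[i] + a[i] - 1) % k;

    map<long long, long long> cnt; // residue -> count among recent prefixes
    long long ans = 0;
    for (long long j = 0; j <= n; j++) {
        // Drop the prefix K positions back so all counted subarrays have length < K.
        if (j - k >= 0) cnt[pre[j - k]]--;
        ans += cnt[pre[j]]; // each matching earlier prefix is one valid subarray
        cnt[pre[j]]++;
    }
    cout << ans << endl;
}
```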

Runtime: $$$O(N \log N)$$$. Click here for my submission.


F — Sugoroku

We employ dynamic programming. Let $$$dp[i]$$$ be the minimum number of moves to reach cell $$$N$$$ from cell $$$i$$$, and let $$$nxt[i]$$$ be the least possible next move from position $$$i$$$ that makes it possible to achieve this minimum. We can compute $$$dp[i]$$$ for all positions except the game-over cells as one plus the minimum of $$$dp[i+j]$$$ over all $$$j$$$ from $$$1$$$ to $$$M$$$, and $$$nxt[i]$$$ is the least $$$j$$$ that achieves this minimum. (For simplicity, we let $$$dp[i]$$$ be infinity for all game-over cells.)

Of course, implementing this as given would take $$$O(NM)$$$ time, so we need to be faster about it. There are lots of ways to do this--for example, we could use a segment tree. However, my solution employs a priority queue. The queue will store pairs $$$(dp[i], i)$$$ and will be sorted such that the lowest element is that with the lowest value of $$$dp[i]$$$, with ties broken by selecting the lowest value of $$$i$$$ (in order to satisfy lexicographic minimality), since at any given point we should transition to the smallest possible next point. Then, we iterate over all positions in decreasing order of $$$i$$$. To compute $$$dp[i]$$$, we first check the top element $$$(dp[j], j)$$$ from the queue and, while this top element has $$$j > i+M$$$, pop it from the queue, since we can no longer make a move that will get us far enough to reach this position. Then, if any elements remain in the queue, we can set $$$dp[i]$$$ to $$$dp[j] + 1$$$ and $$$nxt[i]$$$ to $$$j-i$$$. If no elements remain in the queue, it is impossible to reach the end of the board from position $$$i$$$.

From here, we can print $$$-1$$$ if $$$dp[0]$$$ is infinity. Otherwise, we can reconstruct and print the answer using the $$$nxt$$$ array.
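
A sketch of the priority-queue DP (a rough illustration, not the linked submission):

```cpp
#include <bits/stdc++.h>
using namespace std;
const int INF = 1e9;

int main() {
    int n, m;
    string s;
    cin >> n >> m >> s; // s[i] == '1' marks a game-over cell

    vector<int> dp(n + 1, INF), nxt(n + 1, -1);
    // Min-heap of (dp[j], j): lowest dp first, ties broken by lowest j.
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
    dp[n] = 0;
    pq.push({0, n});

    for (int i = n - 1; i >= 0; i--) {
        if (s[i] == '1') continue; // game over: dp[i] stays infinite
        // Positions more than M away are out of reach now and forever after.
        while (!pq.empty() && pq.top().second > i + m) pq.pop();
        if (pq.empty()) { cout << -1 << endl; return 0; } // nothing reachable
        auto [d, j] = pq.top();
        dp[i] = d + 1;
        nxt[i] = j - i;
        pq.push({dp[i], i});
    }

    for (int i = 0; i != n; i += nxt[i]) cout << nxt[i] << " ";
    cout << endl;
}
```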

Runtime: $$$O(N \log N)$$$. Click here for my submission.


As always, questions and comments are welcome below.


By Geothermal, history, 5 years ago, In English

A — Circle

The area of a circle with radius $$$r$$$ is $$$r^2 \pi$$$, while the area of a circle with radius $$$1$$$ is $$$\pi$$$. The answer is thus $$$\frac{r^2 \pi}{\pi} = r^2$$$, so we can simply print $$$r \cdot r$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — Echo

First, the answer is obviously no if $$$N$$$ is odd. Otherwise, we can directly check whether the strings created by the first and last halves of $$$S$$$ are the same. One way to do this is to iterate over all positions $$$i$$$ from $$$0$$$ to $$$\frac{N}{2}-1$$$ (remembering that the positions in a string are zero-indexed) and check whether position $$$i$$$ stores the same character as position $$$i + \frac{N}{2}$$$. If this isn't the case for some $$$i$$$, the answer is no. If, on the other hand, this holds for all $$$i$$$, the answer is yes.

Runtime: $$$O(N)$$$. Click here for my submission.


C — Average Length

The key to this problem is the C++ STL next_permutation function. (Some other languages may have similar methods; some do not.) If you aren't familiar with this function, running next_permutation(vec.begin(), vec.end()) rearranges the elements in the vector vec into the next lexicographically higher permutation, returning true if this was possible or false if vec is already in its lexicographically highest possible arrangement (which occurs when it is sorted in decreasing order). The function runs in $$$O(N)$$$.

We can use this function to generate all possible permutations of $$$N$$$ elements in $$$O(N \cdot N!)$$$. Then, for each permutation, we can easily compute the distance between each consecutive pair of points and add them to the answer. Finally, we divide the answer by $$$N!$$$ to get our average and print it.

If you're using a language that doesn't support a function like this: first of all, you should probably learn C++, but secondly, there are also other ways to generate the permutations. One fairly simple way to do this is recursive backtracking, storing an array containing the elements used so far in our current permutation and iterating over all possible remaining elements. This can be implemented efficiently, but even an extremely inefficient implementation runs in $$$O(N^3 \cdot N!)$$$, which, given $$$N \leq 8$$$, should run in time.
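
For reference, a sketch of the next_permutation approach:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<double> x(n), y(n);
    for (int i = 0; i < n; i++) cin >> x[i] >> y[i];

    vector<int> p(n);
    iota(p.begin(), p.end(), 0); // start from the identity permutation
    double total = 0;
    long long cnt = 0;
    do {
        for (int i = 0; i + 1 < n; i++)
            total += hypot(x[p[i]] - x[p[i + 1]], y[p[i]] - y[p[i + 1]]);
        cnt++; // cnt ends at N!
    } while (next_permutation(p.begin(), p.end()));

    printf("%.10f\n", total / cnt);
}
```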

Runtime: $$$O(N \cdot N!)$$$. Click here for my submission.

Fun fact: There's also an $$$O(N^2)$$$ solution to this problem relying on a slightly more clever insight. Note that element $$$i$$$ will appear immediately before element $$$j$$$ in $$$(N-1)!$$$ of our permutations, since there are $$$N-1$$$ ways to position $$$i$$$ and $$$(N-2)!$$$ ways to arrange the remaining $$$N-2$$$ elements. Thus, we can add the distance from point $$$i$$$ to point $$$j$$$ multiplied by $$$\frac{1}{N}$$$ to our answer, since it will appear in $$$\frac{1}{N}$$$ of our permutations. The total sum over all ordered pairs $$$(i, j)$$$ with $$$i \neq j$$$ is our answer.


D — Knight

First, note that $$$X+Y$$$ must be a multiple of three for any paths to exist, since both of our moves increase the sum of our coordinates by $$$3$$$. If this does not hold, the answer is zero. Then, we must take $$$K = \frac{X+Y}{3}$$$ moves. Suppose that we take $$$N$$$ moves that increase our coordinates by $$$(2, 1)$$$ and $$$K-N$$$ moves that increase our position by $$$(1, 2)$$$. Then, in order to reach $$$X$$$ as our x-coordinate, we must have that $$$2N + K-N = X$$$, so $$$N = X - K$$$, and thus we take $$$K - N = 2K - X$$$ moves that increase our position by $$$(1, 2)$$$. Note that if $$$X-K$$$ or $$$2K-X$$$ is less than zero, we obviously cannot make a negative number of moves in one direction, so our answer is zero. (This will occur if one of $$$X$$$ and $$$Y$$$ is more than double the other.)

Now, note that the order of our moves doesn't affect our final outcome. Thus, we need to make $$$K$$$ moves, of which $$$N$$$ are one type and $$$K-N$$$ are the other. The number of ways to do this is simply $$$\dbinom{K}{N}$$$, or, in terms of our original variables, $$$\dbinom{\frac{X+Y}{3}}{\frac{2X-Y}{3}}$$$, which is our answer. The choose function can be computed in $$$O(X+Y)$$$.
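
A sketch of this computation, assuming (as in the original problem statement) that the answer is requested modulo $$$10^9 + 7$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll power(ll b, ll e) { // fast modular exponentiation, used for Fermat inverses
    ll r = 1;
    for (b %= MOD; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    ll x, y;
    cin >> x >> y;
    if ((x + y) % 3 != 0) { cout << 0 << endl; return 0; }
    ll k = (x + y) / 3, n = x - k; // n moves of (2,1) and k-n moves of (1,2)
    if (n < 0 || k - n < 0) { cout << 0 << endl; return 0; }

    // C(k, n) mod p via factorials and a modular inverse.
    ll num = 1, den = 1;
    for (ll i = 1; i <= k; i++) num = num * i % MOD;
    for (ll i = 1; i <= n; i++) den = den * i % MOD;
    for (ll i = 1; i <= k - n; i++) den = den * i % MOD;
    cout << num * power(den, MOD - 2) % MOD << endl;
}
```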

Runtime: $$$O(X+Y)$$$. Click here for my submission. The logic I used is slightly different from what was described here, but it is ultimately equivalent to this solution.


E — All-you-can-eat

We employ dynamic programming. One key insight here is that the timing condition is equivalent to eating some dishes that take a total of no longer than $$$T-1$$$ minutes, followed by one final dish whose time constraint doesn't matter.

Let $$$dp[i][j][k]$$$ be the greatest happiness we can achieve if we've eaten dishes taking a total of $$$i$$$ minutes to consume and we've taken some subset of the first $$$j$$$ dishes. $$$k$$$ is $$$1$$$ if we've already taken a dish and ignored its time constraint and $$$0$$$ if not. Then, we can transition from $$$dp[i][j][k]$$$ to $$$dp[i+A_{j+1}][j+1][k]$$$ or, if $$$k = 0$$$, to $$$dp[i][j+1][1]$$$, adding $$$B_{j+1}$$$ happiness in either case (dish $$$j+1$$$ is eaten within the time budget in the first transition and designated as the final, unconstrained dish in the second). Additionally, since we can waste time, skip a dish, or decline to use the unconstrained slot at no cost, we can also transition to $$$dp[i+1][j][k]$$$, $$$dp[i][j+1][k]$$$, or (when $$$k = 0$$$) $$$dp[i][j][1]$$$ with no happiness increase. Of course, each $$$dp$$$ value should be the maximum possible over all ways to transition to it.

From here, our answer is $$$dp[T-1][N][1].$$$
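
A sketch of this DP, processing the dishes one at a time so that only the time and flag dimensions need to be stored:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n, t;
    cin >> n >> t;
    vector<ll> a(n), b(n);
    for (int i = 0; i < n; i++) cin >> a[i] >> b[i];

    // dp[i][k] = best happiness with i minutes of constrained eating so far,
    // where k records whether the unconstrained final dish is already used.
    const ll NEG = -1e18;
    vector<array<ll,2>> dp(t, {NEG, NEG});
    dp[0][0] = 0;
    for (int j = 0; j < n; j++) {
        auto ndp = dp;
        for (int i = 0; i < t; i++)
            for (int k = 0; k < 2; k++) {
                if (dp[i][k] == NEG) continue;
                if (i + a[j] < t) // eat dish j within the T-1 minute budget
                    ndp[i + a[j]][k] = max(ndp[i + a[j]][k], dp[i][k] + b[j]);
                if (k == 0)       // or make dish j the final, unconstrained dish
                    ndp[i][1] = max(ndp[i][1], dp[i][0] + b[j]);
            }
        dp = ndp;
    }
    ll ans = 0;
    for (int i = 0; i < t; i++) ans = max({ans, dp[i][0], dp[i][1]});
    cout << ans << endl;
}
```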

Runtime: $$$O(N^2)$$$, as we have $$$O(N^2)$$$ states and $$$O(1)$$$ transitions in our DP table. Click here for my submission. Note that I used a 2D DP table storing rows corresponding to $$$i$$$ and $$$k$$$. My loop iterating over $$$x$$$ corresponds to $$$j$$$ in this table.


F — Laminate

Again, we use dynamic programming. Let $$$dp[i][j]$$$ be the minimum cost of painting the first $$$i$$$ columns such that we've changed the heights of $$$j$$$ of them and column $$$i$$$ itself has not had its height changed.

Let's figure out the transitions. Suppose we want to reach $$$dp[i][j]$$$, and the last $$$k$$$ columns before column $$$i$$$ will have their heights changed. In this case, our last painted column is $$$i-k-1$$$, so our previous state was $$$dp[i-k-1][j-k]$$$. Now, if the height of column $$$i$$$ is less than or equal to the height of $$$i-k-1$$$, we can set the height of all adjusted columns in the middle to the height of column $$$i$$$ and then paint all of these columns using some of the same strokes we used for column $$$i-k-1$$$, so the cost of this transition is zero. Otherwise, we can use $$$H_{i-k-1}$$$ strokes to start painting column $$$i$$$ but will need $$$H_i - H_{i-k-1}$$$ more to finish, so this is the cost of our transition.

There are several ways to compute the final answer from this DP table, though the constraint that column $$$i$$$ must not have had its height modified is bothersome, since the best solution might involve modifying the last column's height. Probably the easiest way to get around this is to add an extra column with height $$$0$$$ at the end--given our cost scheme, this will never increase the cost, nor will it ever be advantageous to change this column's height. Then, our answer is $$$dp[N][K].$$$
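
A sketch with the indexing shifted so that virtual zero-height columns sit at positions $$$0$$$ and $$$N+1$$$; to be safe, it takes the best value over at most $$$K$$$ changes at the end:

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;

int main() {
    int n, k;
    cin >> n >> k;
    vector<long long> h(n + 2, 0); // h[0] and h[n+1] are virtual zero-height columns
    for (int i = 1; i <= n; i++) cin >> h[i];

    // dp[i][j] = min strokes for everything up to column i, with j heights
    // changed so far and column i itself unchanged.
    vector<vector<long long>> dp(n + 2, vector<long long>(k + 1, INF));
    dp[0][0] = 0;
    for (int i = 1; i <= n + 1; i++)
        for (int j = 0; j <= k; j++)
            for (int c = 0; c <= j && i - c - 1 >= 0; c++) {
                int p = i - c - 1; // previous unchanged column; c changed ones between
                if (dp[p][j - c] == INF) continue;
                long long cost = max(0LL, h[i] - h[p]); // new strokes for column i
                dp[i][j] = min(dp[i][j], dp[p][j - c] + cost);
            }
    cout << *min_element(dp[n + 1].begin(), dp[n + 1].end()) << endl;
}
```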

Runtime: $$$O(N^2K)$$$, since each of our states has $$$O(N)$$$ transitions. Click here for my submission.


Feel free to post below if you have any questions or comments.


By Geothermal, history, 5 years ago, In English

Now that the first North American ICPC regionals are complete, I'm starting a list of teams attending the North American Championships next February.

To start things off, the list will include team names and ranks. However, please comment below if you know any of the members of the advancing teams--I'll fill in their Codeforces handles if they have them or their names if not. (To avoid leaking sensitive personal information, please do not provide someone's Codeforces handle if their profile does not include their real name and you haven't received permission from them to post their username.)

Additionally, if you notice any errors, let me know. In particular, I'm gathering a list of schools by taking the top teams at each regional, but I may need to adjust this list if any teams choose not to accept their NAC invitations.


East Central

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | University of Waterloo | Waterloo Gold | FatalEagle | Reyna | wleung_bvg
2 | Purdue University | Purdue RE | Kuroni | strongoier | Monogon
3 | Carnegie Mellon University | CMU2 | Gilwall | PlasmaVortex | WhaleVomit
4 | University of Toronto | UofT Royal Blue | Deemo | platers | teochaban
5 | University of Michigan | Victors | Kognition | ramchandra | xukailun0316
6 | Case Western Reserve University | CWRU White | Hung Vu | Huy Nguyen | Trung Nguyen

Greater New York

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | Cornell University | Tank Cadets | aaaaajack | Chilli | daggertye
2 | Columbia University | Columbia-1 | taorunz | xzj1997 | zhj
3 | New York University | NYUCIMS-TEAM1 | Andy Polizzotto | Muge Chen | Zhen Li
4 | Princeton University | Princetn-1 | joshkol1 | roosephu | Shunyu Yao

Mid-Atlantic USA

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | Swarthmore College | cout << 1/0 << endl; | Geothermal | nitrousoxide | silxi
2 | Duke University | Castle Jack | Yuheng Xu | Liang Lyu | Muthu Arivoli
3 | University of Maryland | Yellow | | |
4 | North Carolina State University | Punctuation Blender | Duncan Page | Thomas Barnette | Yosuke Mizutani
5 | Virginia Tech | 46 is a Prime Number | atakyn | SpargelTarzan | NinjaDoggy
6 | University of North Carolina at Chapel Hill | UNC-Alternate | | |

Mid-Central

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | University of Illinois at Urbana-Champaign | UIUC — A | I_love_simplespy | Suzukaze | yhchang3
2 | University of Chicago | Fire Enchanted | | |
3 | Truman State University | TruWarriorss | | |
4 | Washington University in St. Louis | Hashirama | | |
5 | Rose-Hulman Institute of Technology | RHIT Fighting Engineers — Black | | |
6 | University of Kentucky | uky-blue | | |

North Central

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | University of Wisconsin-Madison | Model Solution | bvd | RobeZH | top34051
2 | University of Nebraska-Lincoln | Poincare | maxnguyen | Cuong Viet Than | Kim-Hao Duong Nguyen
3 | Iowa State University | Oak Nation Armada | | |
4 | Milwaukee School of Engineering | Javanaughts | Jacob Huebner | Kiel Dowdle | Nicholas Johnson
5 | Kansas State University | KSU Lynx | | |
6 | Dakota State University | Dunder Mifflin Scranton | | |
7 | South Dakota School of Mines and Technology | Blue | gdhofer | Brian Brunner | Mangesh Sakordekar

Northeastern

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | Massachusetts Institute of Technology | MIT NULL | Benq | C_SUNSHINE | sqrtdecompton
2 | Harvard | Harvard | ekzhang | Franklyn_W | pbt17
3 | Brown University | Blueno Bears | | |
4 | McGill University | Bees | | |
5 | Northeastern University | Husky Alpha | | |

Pacific Northwest

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | U British Columbia | University of British California | .rem | davidberard | ehnryx
2 | UC Berkeley | Berkeley Blue | Doriath | eyg | suchir
3 | U Washington | Combo | GoogleBot | milmillin | Nonthakit Chaiwong
4 | Stanford | Stanford Cardinal | andyshih12 | helios1111 | miagkov
5 | UBCO | UBC O(1) | IvanCarvalho | keyvankhademi | Andrew Kiggins

Rocky Mountain

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | University of Utah | Utah Hydrogen | Igor Durovic | Oliver Flatt | Sam Zachary
2 | University of Alberta | Alberta Green | IanDeHaan | Noah Gergel | Jacob Skitsko
3 | University of Calgary | Emerald | BENYAM1N | Thomas Vu | Charlie Zheng
4 | University of Lethbridge | University of Lethbridge WA | Anemone | JSwidinsky | rmaccrimmon

South Central

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | University of Texas at Austin | UT Orange | LamoreauxAJ | vamaddur | B0r0m1r
2 | University of Texas at Dallas | Comets 1 | | |
3 | Texas A&M University | Unordered Cartographers | Durumu | hoke_t | nathanmandell
4 | Rice University | I see PC | | |

Southeastern USA

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | Georgia Institute of Technology | Georgia Tech 5 | akaiNeko | animeshf | xyz2606
2 | University of Central Florida | UCF Kamino | Ahmad | Harpae | SecondThread
3 | Emory University | M||E | ChatC | Rudy1444 | Chenxi Xu
4 | University of Florida | UF Fire | | |
5 | Florida Institute of Technology | Panthers A | | |

Southern California

Rank | University | Team Name | Member 1 | Member 2 | Member 3
1 | California Institute of Technology | Baltech B | | |
2 | University of Southern California | USC Trojans | AsleepAdhyyan | deSigntheClutch | TianyiChen
3 | University of California San Diego | UCSD J | mjguru | NibNalin | zucker42
4 | University of California Los Angeles | UCLA Bruins | | |

Comment below if you notice any missing or incorrect information.


By Geothermal, history, 5 years ago, In English

A — 9x9

We can simply directly implement the procedure given in the problem. Print $$$AB$$$ if $$$A$$$ and $$$B$$$ are less than $$$10$$$ and $$$-1$$$ otherwise. One particularly fast way to do this is to use the ternary operator, which takes a boolean expression and two values as inputs and returns the first value if the expression is true and the second value otherwise. In C++, this looks like

$$$\texttt{A < 10 && B < 10 ? A*B : -1}$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — 81

Since we're only considering pairs of numbers from one through nine, we can simply iterate over every pair of numbers from one to nine and check if each pair multiplies to $$$N$$$. As soon as we find such a pair, we output Yes and exit the program. If we reach the end of the loop, we can then print No, since we've checked every possible pair and found that none of them work.

Notice that we could also iterate over all numbers from $$$1$$$ to $$$9$$$, and for each number $$$i$$$, check if $$$N$$$ is divisible by $$$i$$$ and, if so, whether $$$N / i$$$ is between $$$1$$$ and $$$9$$$. However, this would take longer to implement, and with so few values to check, the above approach is efficient enough anyway. This is a good example of a problem where it's better to quickly implement a trivial solution than to waste time coming up with a more efficient idea.

Runtime: Technically $$$O(1)$$$, though our loop will iterate $$$81$$$ times (or $$$45$$$, if you only iterate over pairs $$$(i, j)$$$ with $$$j \geq i$$$). Either way, it will run very quickly. Click here for my submission.


C — Walk in Multiplication Table

Note that we can walk to any pair $$$(i, j)$$$ in $$$i+j-2$$$ steps. Thus, we simply need to find the pair $$$(i, j)$$$ with $$$ij = N$$$ that minimizes $$$i+j$$$. Luckily, we can enumerate all possible pairs $$$(i, j)$$$ and check them in $$$O(\sqrt{N})$$$. To do this, without loss of generality, we can assume $$$i \leq j$$$, and we must have $$$i \leq \sqrt{N}$$$ since $$$N = ij \geq i^2$$$, so $$$\sqrt{N}^2 \geq i^2$$$, so $$$\sqrt{N} \geq i$$$. Thus, we can iterate over all possible values of $$$i$$$ from $$$1$$$ to $$$\sqrt{N}$$$ and, among all working pairs $$$(i, j)$$$, we pick the lowest value of $$$i+j-2$$$ and print it.
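
The whole solution is a few lines; a sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n;
    cin >> n;
    long long best = LLONG_MAX;
    // Every factor pair is (i, N/i) for some divisor i <= sqrt(N).
    for (long long i = 1; i * i <= n; i++)
        if (n % i == 0) best = min(best, i + n / i - 2);
    cout << best << endl;
}
```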

Runtime: $$$O(\sqrt{N})$$$. Click here for my submission.


D — Water Bottle

First, note that when we take an $$$A$$$-by-$$$B$$$ rectangular cross-section, tilting it counterclockwise at an angle of $$$\theta$$$, the water will be at the height of the top-left corner. (Draw the picture yourself if it's hard to visualize this in your head.) Then, we want $$$A$$$ times the area within this rectangle at or below this height to equal $$$X$$$.

We perform casework on whether the water bottle is at least half full.

Suppose that it is not half-full. Then, examining the figure, we can observe the wet area will be a right triangle with one side of length $$$B$$$ and adjacent angle $$$90^\circ - \theta$$$. We can thus express the area of this triangle as $$$\frac{B^2 \tan(90^\circ - \theta)}{2}$$$, and the total volume of the water will be $$$\frac{AB^2 \tan(90^\circ - \theta)}{2}$$$. Thus, we have $$$X = \frac{AB^2 \tan(90^\circ - \theta)}{2}$$$, which gives $$$\tan(90^\circ - \theta) = \frac{2X}{AB^2}$$$, so $$$90^\circ - \theta = \arctan \frac{2X}{AB^2}$$$ and $$$\theta = 90^\circ - \arctan \frac{2X}{AB^2}$$$.

Now, suppose that the bottle is at least half-full. Then, we can observe that the part of the rectangle not filled with water is a right triangle with one side of length $$$A$$$ and adjacent angle $$$\theta$$$. This triangle has area $$$\frac{A^2 \tan \theta}{2}$$$, so the volume of the box not filled with water is $$$\frac{A^3 \tan \theta}{2}$$$. Therefore, we have $$$A^2B - X = \frac{A^3 \tan \theta}{2}$$$, so $$$\tan \theta = \frac{2A^2B - 2X}{A^3}$$$, so $$$\theta = \arctan \frac{2A^2B - 2X}{A^3}.$$$

Now, we can simply implement these formulas. Make sure to convert from radians to degrees if your programming language uses radians by default! You can do this by multiplying by $$$\frac{180^\circ}{\pi}$$$. (If your programming language doesn't have $$$\pi$$$ built-in as a constant, you can compute it as $$$2 \arcsin 1$$$.)
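
A sketch of both cases (degrees are produced by converting from radians at the end):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    double a, b, x;
    cin >> a >> b >> x;
    const double PI = 2 * asin(1.0);
    double theta;
    if (x >= a * a * b / 2) // at least half full: the empty corner is a triangle
        theta = atan((2 * a * a * b - 2 * x) / (a * a * a));
    else                    // less than half full: the water itself is a triangle
        theta = PI / 2 - atan(2 * x / (a * b * b));
    printf("%.10f\n", theta * 180 / PI);
}
```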

Runtime: $$$O(1)$$$. Click here for my submission.


E — Gluttony

First, an intuitive observation: we should assign the fastest eaters to the most challenging foods. One way to prove this is to note that if we have two eaters with constants $$$a \leq b$$$ and two foods with difficulties $$$c \leq d$$$, we can see that $$$max(ac, bd) = bd \geq max(ad, bc)$$$, so we are better off assigning the eater with the lower constant to the food with the higher constant. We can apply this exchange argument repeatedly to prove that this ordering--fastest eaters with the highest-difficulty foods--is optimal.

So, let's sort the eaters in increasing order and the foods in decreasing order. Then, we binary search for the answer, noting that this is valid because if we can finish within some amount of time, then we can also finish within any greater amount of time. To check if some value $$$T$$$ could be the answer, note that in order to make the maximum eating time equal to $$$T$$$, we need to train all eaters for whom $$$A[i] F[i] > T$$$ until $$$A[i] = \lfloor \frac{T}{F[i]} \rfloor$$$. We can thus compute the number of training sessions required in $$$O(N)$$$. An answer is valid if at most $$$K$$$ training sessions are needed. We can then use binary search to find the least possible value of $$$T$$$ using $$$O(\log A_i F_i)$$$ of these queries, since the most time we could possibly take is $$$O(A_i F_i)$$$.
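
A sketch of the sort-and-binary-search solution (the search bound assumes $$$A_i, F_i \leq 10^6$$$, as in the original constraints):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll n, k;
    cin >> n >> k;
    vector<ll> a(n), f(n);
    for (auto &v : a) cin >> v;
    for (auto &v : f) cin >> v;
    sort(a.begin(), a.end());              // fastest eaters first...
    sort(f.begin(), f.end(), greater<>()); // ...paired with the hardest foods

    // Can we get every a[i] * f[i] down to at most t with <= K sessions?
    auto ok = [&](ll t) {
        ll need = 0;
        for (int i = 0; i < n; i++)
            if (a[i] * f[i] > t) need += a[i] - t / f[i]; // train down to floor(t/f[i])
        return need <= k;
    };

    ll lo = 0, hi = (ll)1e12; // largest possible time under the assumed constraints
    while (lo < hi) {
        ll mid = (lo + hi) / 2;
        if (ok(mid)) hi = mid;
        else lo = mid + 1;
    }
    cout << lo << endl;
}
```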

Runtime: $$$O(N \log A_i F_i)$$$. Click here for my submission.


F — Fork in the Road

Let's first compute three auxiliary arrays. First, we compute an array $$$deg$$$ such that $$$deg[i]$$$ is the number of edges coming from node $$$i$$$. Then, compute an array $$$P$$$ such that $$$P[i]$$$ is the probability that we reach node $$$i$$$ at some point in the process. To do this, we can initialize $$$P[0]$$$ (using zero-based indexing for the nodes, of course) to $$$1$$$ and all other $$$P[i]$$$ to $$$0$$$. Then, process the $$$P[i]$$$ in increasing order, and for each edge from $$$i$$$ to $$$j$$$, add $$$\frac{P[i]}{deg[i]}$$$ to $$$P[j]$$$.

Finally, we want an array $$$E$$$ such that $$$E[i]$$$ is the expected value of the number of steps it will take to reach node $$$N-1$$$ from node $$$i$$$. We can write $$$E[i]$$$ in terms of higher values of $$$i$$$ as follows, where the summation is over all vertices that can be reached from $$$i$$$:

$$$E[i] = 1 + \frac{1}{deg[i]} \sum_{j} E[j]$$$

From here, we can initialize $$$E[N-1]$$$ to $$$0$$$ and then compute the $$$E[i]$$$ values in decreasing order. Note that the computation of all of these arrays is $$$O(N^2)$$$.

Now, it should be easy to see that the answer, if we don't remove any passages, is $$$E[0]$$$. What if we remove some passage, though?

Let's say we remove a passage from $$$i$$$ to $$$j$$$. To compute the new value of $$$E[0]$$$, we compute $$$E'[i]$$$, the value of $$$E[i]$$$ if we didn't have access to this passage, and add $$$(E'[i] - E[i]) P[i]$$$ to $$$E[0]$$$. The intuition here is that this will capture all of the resulting change because if we don't reach $$$i$$$, then this will have no effect on us, but if we do reach $$$i$$$, which happens with probability $$$P[i]$$$, our expected value changes by the addition of $$$E'[i] - E[i]$$$.

Now, how can we compute $$$E'[i]$$$? One way to do this would be to simply reevaluate the formula given above, subtracting one from $$$deg[i]$$$ and considering only the other nodes. This would run in $$$O(N^3)$$$, but with a two-second time limit and a good constant factor (since there can only be about $$$\frac{N^2}{2}$$$ edges and any given vertex, on average, can only have about $$$\frac{N}{2}$$$ as its degree, so we'll take $$$\frac{N^3}{4}$$$ operations in total), this will probably be fast enough.

However, we'll present a solution that computes $$$E'[i]$$$ in $$$O(1)$$$, for an $$$O(N^2)$$$ solution in total. We're essentially trying to compute

$$$1 + \frac{1}{deg[i] - 1} \sum_{j} E[j].$$$

This time, the summation is over only the values of $$$j$$$ that are still adjacent to $$$i$$$ after the removal. First, let's find this summation. To do so, subtract $$$1$$$ from $$$E[i]$$$ and multiply it by $$$deg[i]$$$, which gives you the summation from the original $$$E[i]$$$. Then, subtract the $$$E$$$ value corresponding to the removed passage from the summation, giving you the new summation. Now we can multiply it by $$$\frac{1}{deg[i] -1}$$$ and add $$$1$$$, at which point we're finished. Then, we can use the value of $$$E'[i]$$$ as described above to compute our answer given the removal of a certain passage.

We can then compute the answer for every possible passage removal and print the best one.
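
A sketch of the whole $$$O(N^2)$$$ solution, assuming (per the problem) that every passage leads from a lower-numbered room to a higher-numbered one and that removing nothing is also an option:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<vector<int>> adj(n);
    for (int i = 0; i < m; i++) {
        int s, t;
        cin >> s >> t;
        adj[s - 1].push_back(t - 1);
    }

    // P[i] = probability of ever visiting i; E[i] = expected steps from i to n-1.
    vector<double> P(n, 0), E(n, 0);
    P[0] = 1;
    for (int i = 0; i < n; i++)
        for (int j : adj[i]) P[j] += P[i] / adj[i].size();
    for (int i = n - 2; i >= 0; i--) {
        for (int j : adj[i]) E[i] += E[j];
        E[i] = E[i] / adj[i].size() + 1;
    }

    double ans = E[0]; // the option of removing nothing
    for (int i = 0; i < n; i++) {
        int d = adj[i].size();
        if (d < 2) continue;         // removing the only exit is not allowed
        double sum = (E[i] - 1) * d; // sum of E[j] over i's successors
        for (int j : adj[i]) {
            double Eprime = (sum - E[j]) / (d - 1) + 1; // E'[i] without edge i->j
            ans = min(ans, E[0] + (Eprime - E[i]) * P[i]);
        }
    }
    printf("%.10f\n", ans);
}
```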

Runtime: $$$O(N^2)$$$. Click here for my submission.


By Geothermal, history, 5 years ago, In English

A — Weather Prediction

Store the three weather states in an array. Then, identify the element of the array equal to the input value, and print the next element in the array.

Time complexity: $$$O(1)$$$. Click here for my submission.


B — Tap Dance

We can employ brute force here, checking each element of the string individually and printing No as soon as we find one that doesn't satisfy the given condition. Of course, if all elements of the string comply with the requirements, the answer is Yes.

To implement the solution faster, note that the given conditions are equivalent to $$$\texttt{S[i] != 'L'}$$$ when $$$i$$$ is odd and $$$\texttt{S[i] != 'R'}$$$ when $$$i$$$ is even, so you can implement each of them using just one condition instead of three.

Time complexity: $$$O(N)$$$. Click here for my submission.


C — Attack Survival

Let $$$Q$$$ be the total number of questions asked, and suppose player $$$i$$$ answers $$$C_i$$$ of them correctly. Then, they lose $$$Q - C_i$$$ points from the other players' correct answers, leaving them with $$$K - (Q - C_i) = K - Q + C_i$$$ points in the end. They survive exactly when this final score is positive, which occurs when $$$K - Q + C_i > 0$$$. This is equivalent to $$$C_i > Q - K$$$.

We can thus compute the value of $$$Q-K$$$ and the values of $$$C_i$$$ for each $$$i$$$. Then, for each $$$i$$$, print Yes if $$$C_i > Q-K$$$ and No otherwise.

Time complexity: $$$O(N)$$$. Click here for my submission.


D — Powerful Discount Tickets

We'll employ a greedy approach: at each point in this process, apply a ticket to the item for which this ticket will decrease our cost the most. Because each subsequent ticket we apply to any given item will have less value than the last ticket we used on the same item, this approach is always optimal.

To implement this strategy, we'll employ a priority queue, containing a single entry for each item and sorted by the amount a ticket would save us if used on that item. $$$M$$$ times, we pick the most valuable use of a ticket and subtract the savings from our total cost. Then, we compute the amount the next ticket will save us on the current item and insert this into the priority queue. (To save implementation time, you could alternatively just insert every possible ticket for each item into the priority queue immediately--since each item's price will become zero after the use of at most thirty tickets, this won't be too time-consuming. However, this does degrade the performance to $$$O(N \log^2 N)$$$.)
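
A sketch of the heap solution; since the savings $$$p - \lfloor p/2 \rfloor$$$ grow with the price $$$p$$$, keying the heap by current price is equivalent to keying it by savings:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    priority_queue<long long> pq; // max-heap of current prices
    long long total = 0;
    for (int i = 0; i < n; i++) {
        long long a;
        cin >> a;
        pq.push(a);
        total += a;
    }
    while (m--) {
        long long p = pq.top(); // best ticket: halve the largest current price
        pq.pop();
        total -= p - p / 2;
        pq.push(p / 2);
    }
    cout << total << endl;
}
```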

Time complexity: $$$O((N+M) \log N)$$$. The logarithmic factor is due to our priority queue operations. Click here for my submission.


E — Who Says a Pun?

To start, let's hash every substring of $$$S$$$. (Be careful to do this in $$$O(N^2)$$$, not $$$O(N^3)$$$, by using the shorter hashes to help compute the long ones.) Then, we can iterate over each possible value $$$L$$$ of the answer and use brute force to check if two substrings of length $$$L$$$ satisfy our condition. To do so, we maintain a set containing hashes we've encountered so far and iterate over all positions of the string. At each position, we add the hash of the appropriate length ending at this position to our set and check if the hash starting at this position is in our set already. If so, $$$L$$$ is indeed a possible answer.

This has complexity $$$O(N^2 \log N)$$$, since we have $$$N$$$ values for $$$L$$$, $$$O(N)$$$ hashes at each length, and our set operations take $$$O(\log N)$$$. This is a little on the slow end, so to speed up our program, we use binary search rather than iterating over every possible $$$L$$$. It is easy to see that if we can achieve some $$$L$$$, we can also find an example for any lower value of $$$L$$$ (simply take prefixes of the length $$$L$$$ substring), so we can binary search on the answer, which speeds up the time complexity after hashing to $$$O(N \log^2 N)$$$.
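
A sketch of a close variant: it binary searches as above but derives each length's hashes from rolling prefix hashes rather than storing all $$$O(N^2)$$$ substring hashes up front. A single arbitrary base is used here; double hashing would be safer against collisions:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;

int main() {
    int n;
    string s;
    cin >> n >> s;
    const ull B = 1000003; // arbitrary base; arithmetic wraps modulo 2^64
    vector<ull> h(n + 1, 0), pw(n + 1, 1);
    for (int i = 0; i < n; i++) {
        h[i + 1] = h[i] * B + s[i];
        pw[i + 1] = pw[i] * B;
    }
    auto get = [&](int i, int len) { return h[i + len] - h[i] * pw[len]; };

    // Do two non-overlapping substrings of this length match?
    auto ok = [&](int len) {
        unordered_set<ull> seen;
        for (int start = len; start + len <= n; start++) {
            seen.insert(get(start - len, len)); // candidates ending before 'start'
            if (seen.count(get(start, len))) return true;
        }
        return false;
    };

    int lo = 0, hi = n / 2; // longer substrings cannot avoid overlapping
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (ok(mid)) lo = mid;
        else hi = mid - 1;
    }
    cout << lo << endl;
}
```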

Time complexity: $$$O(N^2)$$$, due to the hashing step. Click here for my submission.


F — Xor Sum 3

Let's start with an observation: we really only care about the bits that occur an even number of times in the overall set. If a bit occurs an odd number of times in the overall set, it will occur an odd number of times in one of our two subsets and an even number of times in the other, so it will be counted once in our overall sum. On the other hand, a bit occurring an even number of times in the overall set could occur an even number of times in each subset, counting it zero times, or an odd number of times in each subset, counting it once.

Let's remove every bit that occurs an odd number of times in our overall set from all elements in the data. Now, we're left only with the bits occurring an even number of times. Now, we want to find the maximum XOR-sum of any subset of the processed data. This is because any bits (that show up an even number of times in total) occurring in the XOR-sum of one of our subsets will also occur in the XOR-sum of the other, while any bits that don't occur in the XOR-sum of one subset won't occur in the other. Afterwards, we can double this XOR-sum (since each bit in it occurs in both subsets) and add back the values of the bits showing an odd number of times to get our final answer.

We've thus essentially reduced this problem to identifying the maximum XOR-sum of any subset of our data. We can solve this problem by computing the linear basis of our data and iterating over the values it contains in decreasing order of significance, adding each value if it increases our sum. More details on using linear bases to solve XOR problems can be found in this article--I used the author's solution to Problem 3 to finish this problem.
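
A sketch of the full solution, assuming values below $$$2^{60}$$$ as in the original constraints; the basis insertion here is the standard Gaussian-style routine rather than the exact code from the linked article:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n;
    cin >> n;
    vector<ll> a(n);
    ll odd = 0; // XOR of everything = the bits occurring an odd number of times
    for (auto &x : a) {
        cin >> x;
        odd ^= x;
    }

    // Linear basis of the values with the odd-count bits masked out.
    ll basis[60] = {};
    for (ll x : a) {
        x &= ~odd;
        for (int b = 59; b >= 0 && x; b--) {
            if (!(x >> b & 1)) continue;
            if (!basis[b]) { basis[b] = x; x = 0; }
            else x ^= basis[b];
        }
    }

    // Greedily build the maximum subset XOR from the high bits down.
    ll best = 0;
    for (int b = 59; b >= 0; b--) best = max(best, best ^ basis[b]);
    cout << 2 * best + odd << endl;
}
```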

Time complexity: $$$O(N \log A_i)$$$. Click here for my submission. Again, substantial credit goes to DrSwad, as I used code from his article to compute the maximum subset XOR after processing the data.


As usual, feel free to leave comments or discuss alternate solutions below!


By Geothermal, history, 5 years ago, In English

A — Password

We have $$$N$$$ choices for each of the three characters in our password. This gives us a total of $$$N \cdot N \cdot N = N^3$$$ possible passwords. We can thus print $$$N \cdot N \cdot N$$$ as our answer.

Runtime: $$$O(1)$$$. Click here for my submission.


B — Buffet

We can solve this with a brute-force simulation, simply implementing the procedure given in the problem. Iterate over the dishes in the order specified by array $$$A$$$, and when we eat dish $$$i$$$, add $$$B[i]$$$ to our answer, plus $$$C[i-1]$$$ if we just ate dish $$$i-1$$$.

Note that since we'll eat every dish once, we could also just add the sum of array $$$B$$$ to our answer as we read it in, iterating over $$$A$$$ only to add values from $$$C$$$ where necessary. My solution implements this approach.

Runtime: $$$O(N)$$$. Click here for my submission.


C — Maximal Value

We're given a condition on $$$B_i$$$, but since we want to find the best possible array $$$A$$$, let's try to rephrase it as a condition on $$$A_i$$$. We claim that the given condition is equivalent to $$$A_i \leq min(B_i, B_{i-1})$$$, given that both of those values exist. (If $$$i=0$$$, $$$B_{i-1}$$$ won't exist, so we just have $$$A_0 \leq B_0$$$. Likewise, since $$$B_{N-1}$$$ doesn't exist, we have that $$$A_{N-1} \leq B_{N-2}$$$, since the $$$B_{N-1}$$$ term won't come into play.)

This might be relatively intuitive, but let's prove it formally. We'll show that our condition on $$$A_i$$$ is both necessary and sufficient for the condition on $$$B_i$$$ to be true. Let's start by showing necessity. Suppose that our condition does not hold. If $$$A_i > B_i$$$, then this contradicts the fact that $$$B_i \geq max(A_i, A_{i+1})$$$, and similarly, if $$$A_i > B_{i-1}$$$, then we cannot have that $$$B_{i-1} \geq max(A_{i-1}, A_i)$$$. Therefore, if this condition is not true, we cannot have the condition on $$$B_i$$$ given in the problem, showing that the $$$A_i$$$ condition is necessary.

Now, we'll prove sufficiency. If $$$B_i \geq max(A_i, A_{i+1})$$$, we must have that $$$B_i \geq A_i$$$ and $$$B_i \geq A_{i+1}$$$. Both of these are implied by our $$$A_i$$$ condition, so this condition is sufficient--that is, as long as our $$$A_i$$$ condition is true, our $$$B_i$$$ condition is also true.

We have thus shown that the $$$A_i$$$ condition is both necessary and sufficient for the $$$B_i$$$ condition to hold, so the two are equivalent.

Now, the problem is relatively easy. To maximize the sum of the array $$$A$$$, we want each $$$A_i$$$ to be as large as possible, so we simply set each $$$A_i$$$ to $$$min(B_i, B_{i-1})$$$. Then, our answer is the sum of these values.

Runtime: $$$O(N)$$$. Click here for my submission.


D — Face Produces Unhappiness

We make two key observations. (Note that for simplicity, we'll refer to the number of happy people as the answer.)

  1. The answer will be at most $$$N-1$$$.
  2. Each of our $$$K$$$ moves can increase the answer by at most $$$2$$$.

The intuition behind these observations isn't too complicated, but the proofs are fairly annoying. The main interesting part of this problem is coming up with the algorithm which generates optimal moves. If you're not interested in the proofs, you can skip the following section.


Claim 1 is relatively easy to prove. Suppose that all $$$N$$$ people are happy. Then, all $$$N$$$ people must be looking at someone else facing the same direction, which implies that all must be facing the same direction. However, then, a person on one of the ends won't be looking at anyone, so that person won't be happy, a contradiction.

Claim 2 is a little more complicated. First, note that there are only four people whose happiness could be affected by any of our moves: the two people on either end of the move and the two people outside of the move standing next to the people inside of it. It's fairly easy to see that everyone else outside of the move won't be affected--they'll be looking at the same person as they were before the move. Additionally, the people inside the move and not on the end will be looking at the same person after the move, and those two people will both have swapped the directions in which they're looking, so the happiness of this group also won't change.

Now, let's discuss the remaining four people. First, we'll prove that we can't increase the answer by more than two by showing that there cannot be more than two of these people who were unhappy before the move but happy after it.

The full proof is long and relatively uninteresting, but the basic idea is casework: for each way to pick three of these four people, we can show that it's impossible for all three to have been unhappy before the move but happy after it. Since no move can turn three (or four) unhappy people happy, no move increases the answer by more than two.


Now that we've proven that an increase of two per move is optimal, we'll give an algorithm that achieves this maximum. Suppose that the input starts with an L. (The same logic applies if it starts with an R.) Then, with each move, take a block of consecutive R's (with L's on each side of the block) and flip it. Flipping any of these blocks except for one on the end of the string will increase our answer by two, so we should flip these ones first. Finally, if we have any moves left over and there's a block of R's on the end, we flip that block, giving us the maximum answer of $$$N-1$$$.

To make this approach clearer, let's go over the second sample test case. We have the string LRRLRLRRLRLLR and can make up to three moves. We'll thus flip the first block of R's to get LLLLRLRRLRLLR. Likewise, we'll flip the second and third blocks, to get a final string of LLLLLLLLLRLLR. This gives the desired answer of nine happy people. If we had any more moves, we would flip the second-to-last block of R's, to get LLLLLLLLLLLLR and an answer of eleven, and with one more move after that, we would flip the final R to get LLLLLLLLLLLLL, which gives an answer of twelve.

This gives us a fairly simple implementation. First, count the answer in the given string. Then, for each move, we can increase the answer by two as long as it doesn't go over $$$N-1$$$.
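
The implementation reduces to a couple of lines; a sketch (each adjacent same-direction pair contributes exactly one happy person):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, k;
    string s;
    cin >> n >> k >> s;
    long long happy = 0;
    for (int i = 0; i + 1 < n; i++)
        if (s[i] == s[i + 1]) happy++; // one happy person per equal adjacent pair
    cout << min(n - 1, happy + 2 * k) << endl;
}
```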

Runtime: $$$O(N)$$$. Click here for my submission.


E — Second Sum

We iterate over the numbers in decreasing order. For each number, we count how many subarrays contain exactly one of the numbers we've seen previously. We then multiply that by the current number and add it to our answer.

Now, we'll explain how to do this more precisely. Let's let $$$a$$$ and $$$b$$$ be the last and second-to-last positions of greater numbers in the set before the current number. Similarly, let $$$x$$$ and $$$y$$$ be the first and second positions of greater numbers after the current number. Then, there are two ways our subarray could contain exactly one larger number: it could contain position $$$a$$$, but not positions $$$b$$$ or $$$x$$$, or it could contain position $$$x$$$, but not positions $$$a$$$ or $$$y$$$. Thus, if our current number's position is $$$i$$$, there are $$$(a-b)(x-i) + (y-x)(i-a)$$$ subarrays containing $$$i$$$ and exactly one greater position. (The first term accounts for subarrays containing $$$i$$$ and $$$a$$$ and the second accounts for subarrays containing $$$i$$$ and $$$x$$$.)

To query for $$$a$$$, $$$b$$$, $$$x$$$, and $$$y$$$, we can use a set containing all the positions we've seen so far. The one remaining detail is to figure out what to do if $$$a, b, x,$$$ or $$$y$$$ doesn't exist (because there aren't two lower or higher positions of greater numbers). It turns out that if $$$a$$$ or $$$b$$$ doesn't exist, we can set it to $$$-1$$$, and if $$$x$$$ or $$$y$$$ doesn't exist, we can set it to $$$N$$$. (The intuition here is that if there are no such positions within the array, the points at which our subarrays become invalid occur when we leave the array--that is, one position before the start of the array and one position after the end.) The rest of the solution works the same as before.
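
A sketch of the full counting procedure; the extra set entries act as sentinels that are then clamped to $$$-1$$$ and $$$N$$$ as described:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n;
    cin >> n;
    vector<int> pos(n + 1);
    for (int i = 0; i < n; i++) {
        int p;
        cin >> p;
        pos[p] = i; // position of each value in the permutation
    }

    set<int> seen = {-2, -1, n, n + 1}; // sentinels for missing neighbors
    ll ans = 0;
    for (int v = n; v >= 1; v--) {
        int i = pos[v];
        auto it = seen.lower_bound(i);
        ll x = min(*it, n), y = min(*next(it), n);               // next two greater positions
        ll a = max(*prev(it), -1), b = max(*prev(prev(it)), -1); // previous two
        ans += (ll)v * ((a - b) * (x - i) + (y - x) * (i - a));
        seen.insert(i);
    }
    cout << ans << endl;
}
```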

Runtime: $$$O(N \log N)$$$. Click here for my submission.


F — Many Slimes

We employ a greedy approach. For obvious reasons, our initial slime should be the largest one (as otherwise, we will never be able to create the largest value). Then, when each slime reproduces, make it create the largest possible slime we still need to build. (Of course, this new slime must be smaller than our current slime.) If any of our slimes cannot create any new slimes when required to do so, the answer is no. If we can build all the slimes in this manner, the answer is yes.

The proof that this approach is correct is fairly simple. Larger slimes can do anything smaller slimes can do and then some--therefore, if we have the choice between creating a large slime and a smaller slime, we are never worse off when we simply create the large slime.

We can implement this by storing the slimes in a multiset and using C++'s upper_bound function to query for the largest possible slime for a given slime to create.
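
A sketch of the greedy, using multiset::lower_bound and stepping back one element to find the largest still-needed value below the parent (equivalent to the upper_bound query mentioned above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    multiset<int> need;
    for (int i = 0; i < (1 << n); i++) {
        int s;
        cin >> s;
        need.insert(s);
    }

    vector<int> have = {*need.rbegin()}; // start with the single largest slime
    need.erase(prev(need.end()));

    for (int round = 0; round < n; round++) {
        vector<int> born;
        // Larger parents pick first, each taking the largest needed value
        // strictly smaller than itself.
        for (int parent : have) {
            auto it = need.lower_bound(parent);
            if (it == need.begin()) { cout << "No" << endl; return 0; }
            born.push_back(*--it);
            need.erase(it);
        }
        have.insert(have.end(), born.begin(), born.end());
        sort(have.rbegin(), have.rend()); // keep parents in decreasing order
    }
    cout << "Yes" << endl;
}
```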

Runtime: $$$O(N \cdot 2^N)$$$. (The multiset operations make this $$$O(2^N \log 2^N)$$$, which is equivalent to the given complexity.) Click here for my submission.

As an aside, I think this problem was substantially easier than E, and proving that the solution works was probably easier than doing the same for D.


Please feel free to share questions or comments below.


By Geothermal, history, 5 years ago, In English

A — Tenki

This is a fairly simple task--just iterate over all positions in the strings and count the ones where $$$S[i] = T[i]$$$.

Runtime: $$$O(|S|)$$$. Click here for my submission.


B — Power Socket

Note that we already have one socket, so we need to fill $$$B-1$$$ more sockets. Additionally, each new outlet we add will take up one socket and add $$$A$$$ sockets, resulting in a net addition of $$$A-1$$$ sockets. We thus need to create at least $$$B-1$$$ sockets where each outlet gives us $$$A-1$$$ sockets, making our answer $$$\lceil \frac{B-1}{A-1} \rceil$$$. Since most programming languages use floor division, we can also express this as $$$\lfloor \frac{A+B-3}{A-1} \rfloor$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


C — Lower

We essentially use two pointers. Maintain the starting position at zero. Then, while the next position isn't higher than the current one, we make that move. If the next position is higher than the current one, we need to reset the starting position to the current position. The answer is then the maximum number of consecutive moves we make without resetting the starting position.

You can also think of this as simulating the given process and picking the best answer, but only from starting positions that are higher than the previous position. The reason this gives us the best possible answer is that if we have a starting position lower than the position before it, we could get one extra move by starting from the previous position instead, so we should never start at a position that is not higher than its previous position.
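
A sketch of this scan (the answer is the longest streak of non-increasing steps):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> h(n);
    for (auto &v : h) cin >> v;

    int best = 0, run = 0; // run = moves made since the last restart
    for (int i = 0; i + 1 < n; i++) {
        if (h[i + 1] <= h[i]) run++; // we may step to a non-higher square
        else run = 0;                // next square is higher: restart there
        best = max(best, run);
    }
    cout << best << endl;
}
```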

Runtime: $$$O(N)$$$. Click here for my submission.


D — ModSum

Notice that the result when a number is taken mod $$$X$$$ can be at most $$$X-1$$$. Therefore, since we're summing results mod $$$1$$$, $$$2$$$, $$$3$$$, and so on, up to $$$N$$$, our answer can be at most $$$0 + 1 + 2 + \cdots + N-1 = \frac{N(N-1)}{2}$$$.

Now, let's prove that we can always achieve this answer. Take $$$1$$$ mod $$$2$$$, $$$2$$$ mod $$$3$$$, and so on, up to $$$N-1$$$ mod $$$N$$$. Then, take $$$N$$$ mod $$$1$$$, which adds zero to the sum. Therefore, this approach gives us an answer of $$$1 + 2 + 3 + \cdots + N-1 = \frac{N(N-1)}{2}$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


E — League

Let's rephrase this as a graph problem. We essentially have $$$\dbinom{N}{2}$$$ matches we need to play, and we are given a set of criteria that each indicate that some match must be played before some other match. We can express the matches as vertices on a graph and the given requirements as directed edges. An edge from one match to another means that the first match must be played before the second.

Phrased like this, the problem seems very similar to topological sorting (given the set of requirements that one vertex must come before another). And indeed, this motivates a similar solution. Maintain the set of matches with no incoming edges. Then, while this set is non-empty, increase the day by one and delete these matches from the graph, removing their outgoing edges as well. Finally, replace the set of matches with no incoming edges by adding any matches whose last incoming edges were removed. (To avoid checking each individual match every time, we can maintain the indegrees of each match and then simply check whether a match now has indegree zero each time we remove one of its incoming edges.)

After the set is empty, we must have either processed every match or encountered a cycle. In the latter case, the answer is $$$-1$$$, as this indicates that there is no order in which to play the matches. In the former case, we can now output the number of days we computed.

Runtime: $$$O(N^2)$$$. Click here for my submission.
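
Here is a sketch of this topological-sort-style process in C++ (an illustrative rewrite, not my linked submission; it assumes the input is $$$N$$$ followed by each player's opponent list, and for simplicity it indexes matches by player pair, sizing the arrays at $$$N^2$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    int total = n * (n - 1) / 2;              // number of matches to schedule
    auto id = [&](int x, int y) {             // index of the match between x and y
        if (x > y) swap(x, y);
        return x * n + y;                     // sparse ids; arrays sized n*n
    };
    vector<vector<int>> a(n, vector<int>(n - 1));
    for (auto &row : a)
        for (auto &x : row) { cin >> x; x--; }
    vector<vector<int>> adj(n * n);
    vector<int> indeg(n * n, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j + 1 < n - 1; j++) {
            // player i's j-th match must happen before their (j+1)-th match
            adj[id(i, a[i][j])].push_back(id(i, a[i][j + 1]));
            indeg[id(i, a[i][j + 1])]++;
        }
    vector<int> cur;                          // matches currently playable
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (indeg[id(i, j)] == 0) cur.push_back(id(i, j));
    int days = 0, done = 0;
    while (!cur.empty()) {                    // each round of the loop is one day
        days++;
        done += cur.size();
        vector<int> nxt;
        for (int v : cur)
            for (int w : adj[v])
                if (--indeg[w] == 0) nxt.push_back(w);
        cur = nxt;
    }
    cout << (done == total ? days : -1) << endl;
}
```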


F — Engines

First, I'll present the (rather sketchy) solution I used in-contest. Then, I'll discuss the somewhat simpler approach that was most likely intended by the problemsetters.

My solution used a pruned complete search. The actual idea is fairly simple--the challenging part is showing that it will run in time.

First, let's split into four cases. In two of the cases, we'll be attempting to maximize the x-coordinate; in the other two we'll try to minimize it. Similarly, in two of the cases, we'll attempt to maximize the y-coordinate, and in the other two we'll try to minimize it. (The motivation for these cases is that we maximize our end result by either maximizing or minimizing the two coordinates.) The rest of the solution will be tailored to the case in which we attempt to maximize both coordinates--the other cases can be made identical by multiplying the appropriate values in the data by $$$-1$$$.

Now, let's make a key observation. If, after considering some number of the engines, we could reach two points $$$(a, b)$$$ and $$$(c, d)$$$, with $$$a \leq c$$$ and $$$b \leq d$$$, we shouldn't consider $$$(a, b)$$$ in the future, as no matter what moves we make later on, $$$(c, d)$$$ will always do a better job of maximizing $$$x$$$ and $$$y$$$.

Let's suppose we're adding the engines one by one. Given a set of candidate points we've reached so far, our new set of candidates is the original set plus the points in the original set when the current engine is added to them. Then, we can prune this data to delete any redundant nodes as outlined in our observation.

Finally, after we process every engine, we can iterate over all points in our candidate set and pick the one that gives us the best answer. We'll then print the best answer from all four of our cases.

We should be able to intuitively see that this is correct--each case is guaranteed to print the correct answer if that answer lies in its quadrant. However, the tricky part is bounding its time complexity. One fairly simple bound is $$$O(N^2 \, maxX)$$$, as we consider $$$N$$$ engines, and at any point our set may contain at most $$$O(N \, maxX)$$$ points because it will contain at most one point for each X-coordinate. However, in practice, we actually do much better than this, as it's rare that we'll actually have anywhere near $$$N \, maxX$$$ points. My solution had an unnecessary $$$\log N$$$ factor in the big-O expression (though this made it somewhat faster in practice--I sorted the candidate points rather than iterating over every possible position), but it passed in about 900 ms. It's possible that some particularly tough test case might cause this submission to TLE, but such a case would have to be very contrived.

Click here for my submission.


Now, let's discuss a simpler and more efficient solution. The key observation is that when we order the engines by the angles their vectors make with the x-axis, the answer will include vectors in some contiguous range. A proof of this fact is at this link. Then, the solution is fairly easy to implement--we can iterate over all possible ranges and compute the answers for each of them, taking the maximum as our result. When implemented appropriately, this is an $$$O(N^2)$$$ approach. Credit for this solution goes to silxi, as he was the one who showed it to me after we both finished the contest.
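
Here's a sketch of that $$$O(N^2)$$$ approach in C++ (my own illustrative rewrite of the idea, not silxi's code; it assumes the input is $$$N$$$ followed by the $$$N$$$ vectors, and indexes mod $$$N$$$ so that ranges may wrap around the sorted order):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<pair<long long, long long>> v(n);
    for (auto &p : v) cin >> p.first >> p.second;
    // sort engines by the angle their vectors make with the positive x-axis
    sort(v.begin(), v.end(), [](const pair<long long, long long> &a,
                                const pair<long long, long long> &b) {
        return atan2((double)a.second, (double)a.first)
             < atan2((double)b.second, (double)b.first);
    });
    double best = 0;
    // the optimal subset is contiguous in angle order; try every range
    for (int i = 0; i < n; i++) {
        long long x = 0, y = 0;
        for (int len = 1; len <= n; len++) {
            x += v[(i + len - 1) % n].first;
            y += v[(i + len - 1) % n].second;
            best = max(best, sqrt((double)(x * x + y * y)));
        }
    }
    printf("%.15f\n", best);
}
```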


As always, feel free to post questions or comments below.


By Geothermal, history, 5 years ago, In English

Although AtCoder seems to be getting back to posting English editorials, I'm continuing to release some myself in case having the extra explanations helps some people.

(Sidenote: Some of the provided submissions come from after the contest itself, because I had to leave midway through, before I got the chance to debug my E or write my F.)


A — +-X

Just use your language's max function. As $$$A$$$ and $$$B$$$ are small, we don't need to worry about integer overflow.

Runtime: $$$O(1).$$$ Click here for my submission.


B — One Clue

Let's try to find the minimum and maximum values in the answer. It's fairly easy to see that all integers between them will also be in the set.

To find the minimum possible position of a black stone, we place the $$$K$$$ black stones such that $$$X$$$ is at the rightmost position. The minimum position is then $$$X-K+1$$$, as we have $$$K-1$$$ black stones to the left of $$$X$$$. Likewise, if we place $$$K-1$$$ black stones to the right of $$$X$$$, the maximum position is $$$X+K-1$$$.

We should thus print all integers from $$$X-K+1$$$ to $$$X+K-1$$$.

Runtime: $$$O(K)$$$. Click here for my submission.


C — Green Bin

We take advantage of the small length of the strings. Represent the strings as character arrays. Then, note that two strings can be rearranged to become equal if and only if, after sorting their character arrays, the arrays are equal.

We read in the strings and sort their character arrays. We then build an array out of the character arrays and sort that array. From there, we know that all equal character arrays will form a consecutive subsegment of the new array.

Each set of $$$K$$$ equal strings adds $$$\binom{K}{2}$$$ to our answer. So, we now simply need to find the lengths of each of the subsegments containing identical arrays.

Counting sets of equal elements in an array is a relatively standard problem. We can use two pointers: maintain the starting position of our current segment, and when we find a position $$$i$$$ whose array differs from the one at the starting position, we have a set of $$$i - start$$$ equal strings, contributing $$$\binom{i-start}{2}$$$ to the answer; position $$$i$$$ then becomes the new starting position.

Runtime: $$$O(N |S| \log N)$$$, to account for the sorting (since comparing two of the character arrays takes $$$O(|S|)$$$ time). Click here for my submission.
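
As an aside, a map makes the counting especially concise. Here's a C++ sketch that accumulates the pair count directly rather than running the two-pointer pass (an alternative to my actual submission, with the same asymptotic complexity; input assumed to be $$$N$$$ followed by the strings):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    map<string, long long> cnt;    // canonical (sorted) form -> occurrences so far
    long long ans = 0;
    for (int i = 0; i < n; i++) {
        string s;
        cin >> s;
        sort(s.begin(), s.end());  // anagrams share a single sorted form
        ans += cnt[s]++;           // each earlier copy pairs with this one
    }
    cout << ans << endl;
}
```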


D — Summer Vacation

Note that on day $$$i$$$, it's pointless to do any jobs that take more than $$$M-i$$$ days to pay out. We use this lemma to build a greedy strategy.

Let's pick our jobs for the days in reverse order. Sort the jobs in increasing order of payout time. We keep a priority queue containing the values of the jobs we can do. Then, iterate over the days $$$i$$$ from $$$M-1$$$ to $$$0$$$. Add the jobs that take $$$M-i$$$ days to the priority queue (by maintaining our position in the array, we can do this in $$$O(N)$$$ across all values of $$$i$$$), then pick the most valuable job in the priority queue (if the queue isn't empty, of course) and do it on day $$$i$$$. We're essentially picking the most valuable job we can do on day $$$i$$$ that we haven't already reserved for a future day.

The idea here is that on each of these days, there's no reason to wait to do the most valuable job we can--we're no better off doing a more valuable job on an earlier day rather than doing it on our current day, and in fact we may be worse off because some other jobs we want to do might become possible in that time.

Runtime: $$$O((N+M) \log N)$$$. The logarithm factor is due to our priority queue operations. Click here for my submission.
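
A C++ sketch of this greedy (an illustrative rewrite, not my linked submission; it assumes the input is $$$N$$$ and $$$M$$$ followed by $$$N$$$ pairs of payout delay and value, and it counts days by how many remain rather than by index):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<pair<int, int>> jobs(n);          // (days until payout, value)
    for (auto &j : jobs) cin >> j.first >> j.second;
    sort(jobs.begin(), jobs.end());          // increasing payout delay
    priority_queue<int> pq;                  // values of jobs usable today
    long long ans = 0;
    int pos = 0;
    // a job with delay d pays in time exactly when at least d days remain
    for (int daysLeft = 1; daysLeft <= m; daysLeft++) {
        while (pos < n && jobs[pos].first <= daysLeft) pq.push(jobs[pos++].second);
        if (!pq.empty()) { ans += pq.top(); pq.pop(); }
    }
    cout << ans << endl;
}
```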


E — Coins Respawn

Note that when we travel along edge $$$i$$$, we gain $$$C_i$$$ coins, but eventually, $$$P$$$ coins will be subtracted from our final score. Therefore, we effectively have increased our score by $$$C_i - P$$$.

Afterwards, we have a weighted directed graph (possibly with both positive and negative weights), and we want to find the longest path from vertex $$$1$$$ to vertex $$$N$$$ on this graph. If we multiply all of the edge weights by $$$-1$$$, this becomes a shortest path problem, and it can be solved in $$$O(NM)$$$ with the Bellman-Ford Algorithm.

The trick here is to handle the case in which the answer does not exist properly. Bellman-Ford is equipped to detect negative cycles (by determining whether processing the edges one more time would still make our path better, even if the algorithm already terminated), but the catch is that we can only use a negative cycle if it is on some path from $$$1$$$ to $$$N$$$. We therefore run a BFS (a DFS would work too) on the graph with all its edges reversed, starting with vertex $$$N$$$, to determine which vertices can reach $$$N$$$. Then, before confirming that an edge creates a negative cycle, we must check that it has been found both in our Bellman-Ford process (which means it can be reached from $$$1$$$) and in our BFS (which means it can reach $$$N$$$).

If this doesn't give us a usable negative cycle, we can simply print the given answer--that is, $$$-1$$$ times the shortest path--or zero, if it's greater.

Runtime: $$$O(NM).$$$ Click here for my submission.
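
Here's a sketch of the reversed-BFS plus Bellman-Ford combination in C++ (an illustrative rewrite rather than my linked submission; input assumed to be $$$N$$$, $$$M$$$, $$$P$$$ followed by the $$$M$$$ edges):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, m, p;
    cin >> n >> m >> p;
    vector<array<long long, 3>> edges(m);
    vector<vector<int>> radj(n);              // reversed edges, for reachability
    for (auto &e : edges) {
        long long a, b, c;
        cin >> a >> b >> c;
        e = {a - 1, b - 1, p - c};            // negate (c - p): now a shortest path problem
        radj[b - 1].push_back(a - 1);
    }
    // BFS from N on the reversed graph: which vertices can reach N?
    vector<bool> reachN(n, false);
    queue<int> q;
    reachN[n - 1] = true;
    q.push(n - 1);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int w : radj[v])
            if (!reachN[w]) { reachN[w] = true; q.push(w); }
    }
    const long long INF = LLONG_MAX / 4;
    vector<long long> dist(n, INF);
    dist[0] = 0;
    for (int pass = 0; pass + 1 < n; pass++)  // standard N-1 Bellman-Ford passes
        for (auto &[a, b, w] : edges)
            if (dist[a] < INF && dist[a] + w < dist[b]) dist[b] = dist[a] + w;
    // one extra pass: a still-relaxable edge whose head can reach N
    // certifies a negative cycle lying on some 1 -> N path
    for (auto &[a, b, w] : edges)
        if (dist[a] < INF && dist[a] + w < dist[b] && reachN[b]) {
            cout << -1 << endl;
            return 0;
        }
    cout << max(0LL, -dist[n - 1]) << endl;
}
```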


F — Polynomial Construction

There are several solutions to this problem. The one presented here actually does not depend on the fact that all values of the polynomial are $$$0$$$ or $$$1$$$. However, I think it's still one of the simpler solutions, and it relies on a relatively accessible idea.

The basic idea is to employ finite differences. If you aren't familiar with this concept, I recommend this article to learn about the topic. While the article works with real-valued polynomials, the key theorems turn out to be true when applied over the integers modulo $$$P$$$.

We start by computing all finite differences of this polynomial in $$$O(P^2)$$$. Sequence zero is simply the input data; sequence one is the differences of consecutive values in the input array, and so on, until we have sequence $$$P-1$$$, which only has one element. Let $$$D_i$$$ be the first element in sequence $$$i$$$.

Our key lemma is that $$$f$$$ is equal to:

$$$\sum_{n = 0}^{P-1} \frac{D_n}{n!} \left( \prod_{m = 0}^{n-1} (x-m) \right).$$$

Of course, all arithmetic is modulo $$$P$$$. Division is represented by multiplication by a modular inverse.

This is a similar concept to the second theorem presented in the Proof of Theorem section in the linked article. The proof is fairly lengthy and is thus omitted here, but I encourage you to take a look at it if you're interested. It's a complex formula, but it's relatively easy to evaluate, and it should be fairly tractable with a solid understanding of finite differences.

The naive implementation of this algorithm could be $$$O(P^3)$$$ due to the need to evaluate $$$P$$$ different inner products. However, we take advantage of the fact that the products are relatively similar. We evaluate the sum starting at $$$n=P-1$$$ and moving downwards. At each step, we multiply the existing polynomial by $$$x-n$$$ (doable in $$$O(P)$$$ since this polynomial only has two terms) and then add $$$\frac{D_n}{n!}$$$ to the result. The result after dealing with $$$n=0$$$ is our answer.

Runtime: $$$O(P^2)$$$. Click here for my submission.
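
For reference, here's a sketch of the finite-difference build in C++ (an illustrative rewrite, not my linked submission; it assumes the input is $$$P$$$ followed by the $$$P$$$ values, and prints the coefficients in increasing order of degree):

```cpp
#include <bits/stdc++.h>
using namespace std;

long long P;

long long pw(long long b, long long e) {      // fast modular exponentiation
    long long r = 1; b %= P;
    while (e) { if (e & 1) r = r * b % P; b = b * b % P; e >>= 1; }
    return r;
}

int main() {
    cin >> P;
    vector<long long> a(P);
    for (auto &x : a) cin >> x;

    // D[n] = first element of the n-th finite difference sequence
    vector<long long> D(P);
    for (long long n = 0; n < P; n++) {
        D[n] = a[0];
        for (long long i = 0; i + 1 < P - n; i++)
            a[i] = ((a[i + 1] - a[i]) % P + P) % P;
    }

    vector<long long> fact(P, 1);
    for (long long n = 1; n < P; n++) fact[n] = fact[n - 1] * n % P;

    // Horner-style assembly described above: for n = P-1 down to 0,
    // multiply the running polynomial by (x - n), then add D_n / n!
    vector<long long> f(P, 0);                // f[i] = coefficient of x^i
    for (long long n = P - 1; n >= 0; n--) {
        long long negN = (P - n) % P;         // -n mod P
        for (long long i = P - 1; i >= 1; i--)
            f[i] = (f[i - 1] + f[i] * negN) % P;
        f[0] = f[0] * negN % P;
        f[0] = (f[0] + D[n] * pw(fact[n], P - 2)) % P;  // Fermat inverse for n!
    }
    for (long long i = 0; i < P; i++) cout << f[i] << " \n"[i == P - 1];
}
```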


As usual, feel free to comment below if you have any questions.


By Geothermal, history, 5 years ago, In English

A — Transfer

First, we compute the resulting amount of water in Cup 1. Observe that this is equal to $$$\min(A, B+C)$$$. Let this value be $$$D$$$. Then, the amount transferred from Cup 2 to Cup 1 is simply $$$D - B$$$, so the amount remaining in Cup 2 is $$$C - D + B$$$.

Time complexity: $$$O(1)$$$. Click here for my submission.


B — Uneven Numbers

There are several efficient ways to approach this problem. One is to simply to count the one-, three-, and five-digit numbers less than or equal to $$$N$$$. However, because of the small maximum on $$$N$$$, a more naive approach works. Simply iterate over every value from $$$1$$$ to $$$N$$$, count its digits, and add one to the answer if the count is odd. (We can count a number's digits by repeatedly adding one to the count and dividing the number by ten until it reaches zero.)

Time complexity: $$$O(N \log N)$$$. Note that $$$O(\log N)$$$ is possible with the more efficient approach. Click here for my submission.


C — Build Stairs

We employ a greedy strategy. Process the elements in increasing order, and decrease the current element whenever we can do so without making it less than the last element. (The first element should thus always be decreased.) If at any point the current element must be less than the last element, no matter what we do, the answer is no.

The reason this strategy is optimal is that decreasing a number makes it easier (or at least as easy) to deal with the next element in the list: any value the next element could have taken still works when the previous number is lower, and decreasing the previous number in fact expands the range of values available to the next element. (For example, if we have an element $$$3$$$, the next element must be at least $$$3$$$; if we decrease it to $$$2$$$, all values that are at least $$$3$$$ still work, but $$$2$$$ now works as well.)

Time complexity: $$$O(N)$$$. Click here for my submission.
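
A minimal C++ sketch of this greedy (illustrative only; input assumed to be $$$N$$$ followed by the heights, which are at least $$$1$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> h(n);
    for (auto &x : h) cin >> x;
    long long prev = 0;                  // heights are >= 1, so 0 is a safe floor
    for (int i = 0; i < n; i++) {
        if (h[i] - 1 >= prev) h[i]--;    // decrease whenever the order survives
        if (h[i] < prev) { cout << "No" << endl; return 0; }
        prev = h[i];
    }
    cout << "Yes" << endl;
}
```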


D — Gathering Children

We first note that we can break up the given string into components consisting of a string of R's followed by a string of L's. Observe that no child will ever leave its component.

We thus solve the problem for each component. Within each component, a child on the left side will continually move to the right, or a child on the right side will continually move to the left, until it reaches the middle two characters, RL. At this point, the child will move between these two positions until the end of the process. With $$$10^{100}$$$ steps, we can see that all children will reach these two positions.

How can we determine how many children will land on the R and how many will land on the L? Note that one of these two characters must be in an odd position (based on its position in the original string) and the other must be in an even position. Because $$$10^{100}$$$ is even and each move changes the parity of a child's position, each child will end up in whichever position has the same parity as its original position. That means that whichever of R or L is on an odd position will get all the children in the component starting in odd positions, while the other will get all the children in the component starting in even positions.

We can implement this by using a two pointers-style technique to track the components.

Time complexity: $$$O(N)$$$. Click here for my submission.


E — Max GCD

First, can we substantially limit the possible answers to this problem? It turns out that we can: our $$$K$$$ moves keep the sum of our elements constant, so the answer must be a factor of our sum. Since our sum is at most $$$5 \cdot 10^8$$$, we can check every value less than or equal to the square root of the sum to compute a list of these factors. Now, we can simply check, for each of these factors, whether it is possible to use our $$$K$$$ operations to make all items in the list divisible by this factor. We then take the maximum of the factors for which this is possible.

To perform a check on a factor $$$F$$$, start by creating a multiset storing the values of the array modulo $$$F$$$. We'll need to decrease some of the values in the multiset and increase others so that all are equal to an element of the list $$$-F, 0, F, 2F,$$$ etc.

The key lemma is as follows: there will always exist a way to turn the smallest $$$X$$$ elements of the multiset, for some $$$X$$$, into $$$0$$$, and the remaining elements into $$$F$$$, in some number of moves. Moreover, this will be the smallest possible number of moves we could use.

In case you're interested, I'll give a proof that this is true after the solution. (The proof is relatively convoluted and hard to follow, which is why I'm emphasizing the rest of the solution first.)

Iterate over the multiset in increasing order. Maintain the number of subtractions required to decrease the elements we've seen so far to $$$0$$$ and the number of additions required to increase the elements we haven't seen yet to $$$F$$$. When these costs are equal, that's the minimum number of moves we need to achieve the factor $$$F$$$. If this minimum is at most $$$K$$$, $$$F$$$ is a possible answer. We can then simply print the maximum value $$$F$$$.

This concludes the solution--for completeness, a proof of the lemma is included below.

To prove existence, note that the sum of the values in the multiset must be $$$0$$$ mod $$$F$$$. Call an element "high" if we'll be raising it to $$$F$$$ and "low" if we'll be lowering it to $$$0$$$. Let the "cost" of an element be the number of additions or subtractions necessary to raise or lower it to its desired value. Call the "high cost" the sum of the costs of all high elements and the "low cost" the sum of the costs of all low elements. Note that the cost of a high element $$$i$$$ is $$$F - i$$$ and the cost of a low element $$$i$$$ is $$$i$$$.

Start with all elements being high. We maintain the difference of the high cost and the low cost. Initially, the high cost is the sum of $$$F - i$$$ over all elements. Since $$$F$$$ is $$$0$$$ mod $$$F$$$ and the sum of all $$$i$$$'s is $$$0$$$ mod $$$F$$$, this sum will be $$$0$$$ mod $$$F$$$. Express it as $$$kF$$$, for some integer $$$k$$$. Note that $$$0 \leq k \leq N$$$. The low cost, meanwhile, will be $$$0$$$, because no elements are currently low.

Suppose that we switch an element $$$i$$$ from high to low. In this case, the high cost will decrease by $$$F - i$$$ and the low cost will increase by $$$i$$$. Therefore, the difference will decrease by $$$(F-i) + i = F$$$. Therefore, if we switch the $$$k$$$ lowest elements from high to low, the high cost will be the same as the low cost. This means that it will take the same numbers of subtractions to deal with the low elements and additions to deal with the high elements, meaning that these subtractions and these additions can be our moves.

We have thus proven that this is doable. Now, we prove that it is optimal. First, note that if we're going to set all elements of the multiset to either $$$0$$$ or $$$F$$$, we minimize our costs by making the least values our low elements and the greatest values our high elements. Moreover, we can observe that it will always be inefficient to set any element to a value other than $$$0$$$ or $$$F$$$, as since our values are all between $$$0$$$ and $$$F-1$$$, moving to one of these other values will pass $$$0$$$ or $$$F$$$ and thus will cost us moves without getting any elements closer to $$$0$$$ mod $$$F$$$.

Time complexity: $$$O(\sqrt{S} + N \log N F(S))$$$, where $$$S$$$ is the sum of the $$$A_i$$$ and $$$F(S)$$$ is the maximum number of factors of any number less than $$$S$$$. While the exact value is hard to compute, it's not hard to see that with $$$N = 500$$$, $$$F(S)$$$ will be small enough to easily pass. Click here for my submission.
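
Here's a C++ sketch of the factor enumeration plus the balance-point check (an illustrative rewrite, not my linked submission; input assumed to be $$$N$$$ and $$$K$$$ followed by the array):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, k;
    cin >> n >> k;
    vector<long long> a(n);
    long long sum = 0;
    for (auto &x : a) { cin >> x; sum += x; }

    // can every element be made divisible by f using at most k unit moves?
    auto ok = [&](long long f) {
        vector<long long> r(n);
        for (int i = 0; i < (int)n; i++) r[i] = a[i] % f;
        sort(r.begin(), r.end());
        // lower the smallest remainders to 0 and raise the rest to f;
        // scan for the split point where subtractions equal additions
        long long down = 0, up = 0;
        for (long long x : r) up += f - x;
        for (int i = 0; i < (int)n && down != up; i++) {
            down += r[i];
            up -= f - r[i];
        }
        return down == up && down <= k;
    };

    long long best = 1;
    for (long long f = 1; f * f <= sum; f++) {
        if (sum % f != 0) continue;       // only factors of the sum are candidates
        if (ok(f)) best = max(best, f);
        if (ok(sum / f)) best = max(best, sum / f);
    }
    cout << best << endl;
}
```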


F — Enclosed Points

For each point, we count the number of subsets whose bounding rectangles enclose this point. Our answer is the sum of these values.

First, perform coordinate compression, mapping the X and Y-coordinates to integers from $$$0$$$ to $$$N-1$$$. Then, iterate over the points in increasing order of $$$X$$$. Additionally, maintain a segment tree, where each node represents a Y-coordinate and is $$$0$$$ if we have not seen this coordinate yet but $$$1$$$ if we have.

We solve the problem for each point using the Principle of Inclusion/Exclusion. First, the total number of subsets is $$$2^N$$$, as we can either include or exclude each point. However, we must subtract the ones whose rectangles don't contain the current point. For a rectangle not to contain the current point, all of the points in its subset must be above, below, to the left, or to the right of our current point. For each of these cases, let $$$P$$$ be the number of points satisfying its criterion. Then, we subtract $$$2^P$$$ to delete these cases.

However, we've subtracted out some cases twice. In particular, if all points are to the upper-left, upper-right, lower-left, or lower-right of our current point, we've subtracted it out twice (for example, the upper-left case was subtracted in the upper and left cases above), so we need to add $$$2^P$$$ back to account for each of these cases, where $$$P$$$ is the number of points satisfying each criterion.

The only case in which all points in a subset satisfy three or more of our subtraction criteria is the empty set (as no point can be both above and below our current point or both to the left and the right of this point). We added the empty set once in our original count, subtracted it four times, and then added it back four times in our most recent cases. We want it to count zero times, so we need to finally subtract the empty set once to get our final answer for the point.

Now, we need to compute the number of points satisfying these eight criteria. The first four are easy. A point $$$(X, Y)$$$, where $$$X$$$ and $$$Y$$$ are coordinates from $$$0$$$ to $$$N-1$$$, has $$$X$$$ points to its left and $$$N-X-1$$$ points to its right. Similarly, this point has $$$Y$$$ points below it and $$$N-Y-1$$$ points above it.

The remaining four criteria require us to use our segment tree. In particular, the number of lower-left points is equal to $$$query(0, Y)$$$ on our segment tree, as this stores the number of points to the left with Y-coordinates from $$$0$$$ to $$$Y$$$. Similarly, the number of upper-left points is equal to $$$query(Y, N-1)$$$ (or alternatively, simply the total number of left points minus the number of lower-left points). We can then compute the lower-right points by subtracting the lower-left points from the total lower points, and likewise, we can compute the upper-right points by subtracting the upper-left points from the total upper points.

Once we have these counts, we can use binary exponentiation to compute $$$2^P$$$ in $$$O(\log P)$$$ for each $$$P$$$. After we update the answer, we can update the segment tree at $$$Y$$$ to prepare for our next point.

Time complexity: $$$O(N \log N)$$$. Click here for my submission.


By Geothermal, history, 5 years ago, In English

A — Harmony

Without loss of generality, let $$$A < B$$$. Then, we have three cases:

  • $$$K$$$ is less than $$$A$$$. This gives $$$A - K = B - K$$$, which gives $$$A = B$$$, which is false.
  • $$$K$$$ is greater than $$$B$$$. This gives $$$K - A = K - B$$$, which is also false.
  • $$$K$$$ is between $$$A$$$ and $$$B$$$. This gives $$$K - A = B - K$$$, which gives $$$2K = A+B$$$.

Thus, we must have $$$2K=A+B$$$. If $$$A+B$$$ is odd, there is thus no solution. If $$$A+B$$$ is even, our answer is $$$\frac{A+B}{2}$$$. It is easy to verify that this number is indeed between $$$A$$$ and $$$B$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — 0 or 1 Swap

Let $$$K$$$ be the number of positions $$$i$$$ at which $$$p_i \neq i$$$ (using 1-indexing). If $$$K = 0$$$, the answer is yes, as we can simply leave the permutation as is. If $$$K = 2$$$, the answer is also yes: swap the two misplaced elements. (Notice that we can never have $$$K = 1$$$, as if any element is put in the wrong position, the element that was meant to be in that position must also be misplaced.)

If $$$K > 2$$$, the answer is no: our swap can only affect two elements and can thus only correct at most two misplacements. We can thus simply compute $$$K$$$ and check whether it is at most two to get our answer.

Runtime: $$$O(N)$$$. Click here for my submission.


C — City Savers

We employ a greedy strategy. Notice that the first hero is the only one who can attack monsters in the first city. Therefore, the first hero should attack as many monsters in the first city as possible, only moving on to the second city after all the monsters in the first city are eradicated. Then, after the first hero attacks, the second hero is now the only remaining hero who can attack the monsters in the second city, so the second hero should prioritize second-city monsters over third-city monsters.

This logic extends to the following strategy. Process the heroes in increasing order of $$$i$$$. Let each hero start by attacking as many monsters as possible in city $$$i$$$. Then, if they can still kill more monsters after all the monsters in city $$$i$$$ are defeated, our hero can use their remaining energy to attack the monsters in city $$$i+1$$$.

Runtime: $$$O(N)$$$. Click here for my submission.


D — Digits Parade

Let $$$dp[i][j]$$$ be the number of ways to create an $$$i$$$-digit number consistent with the first $$$i$$$ digits of the given pattern and congruent to $$$j$$$ modulo $$$13$$$. As our base case, $$$dp[0][i] = 0$$$ for $$$i$$$ from $$$1$$$ to $$$12$$$, and $$$dp[0][0] = 1$$$ (as our length-zero number has value zero and thus is zero mod $$$13$$$.)

Notice that appending a digit $$$k$$$ to the end of a number that's $$$j$$$ mod $$$13$$$ gives a number that's congruent to $$$10j+k$$$ mod $$$13$$$. We use this fact to perform our transitions. For every state $$$dp[i][j]$$$ with $$$i < N$$$, iterate over the possible values of $$$k$$$. (If $$$S[i] = \, ?$$$, there will be ten choices for $$$k$$$, and otherwise there will only be one choice.) Then, we add $$$dp[i][j]$$$ to $$$dp[i+1][(10j+k) \, \% \, 13].$$$

To get our final answer, we can simply print $$$dp[N][5].$$$

Runtime: $$$O(N).$$$ Notice that this hides a sizable constant factor, with $$$13$$$ states in every row of our DP table and up to $$$10$$$ transitions for each state, but our algorithm is still easily fast enough. Click here for my submission.
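
A compact C++ sketch of this DP (an illustrative rewrite, not my linked submission; it assumes the input is just the string $$$S$$$ and that answers are reported modulo $$$10^9+7$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

int main() {
    string s;
    cin >> s;
    // dp[j] = number of ways for the processed prefix to be j mod 13
    vector<long long> dp(13, 0);
    dp[0] = 1;                       // the empty prefix has value 0
    for (char c : s) {
        vector<long long> nxt(13, 0);
        for (int j = 0; j < 13; j++) {
            if (dp[j] == 0) continue;
            for (int k = 0; k <= 9; k++) {
                if (c != '?' && c - '0' != k) continue;  // digit is fixed
                int to = (10 * j + k) % 13;              // append digit k
                nxt[to] = (nxt[to] + dp[j]) % MOD;
            }
        }
        dp = nxt;
    }
    cout << dp[5] << endl;
}
```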


E — Golf

Let's assume $$$X$$$ and $$$Y$$$ are at least zero, for simplicity. (If $$$X$$$ and/or $$$Y$$$ is negative, we simply multiply the appropriate values in our output by $$$-1$$$.)

Let $$$N$$$ be the minimum number of moves we need. Then, the total Manhattan-distance across all of our moves is $$$NK$$$. Additionally, in the end, we must move $$$X+Y$$$ further in positive directions than in negative directions. Therefore, if $$$A$$$ and $$$B$$$ are the total distances we move in positive directions and negative directions, we must have $$$A+B = NK$$$ and $$$A-B = X+Y$$$. Solving gives $$$A = \frac{NK + X + Y}{2}$$$ and $$$B = \frac{NK - X - Y}{2}$$$. Since we can only move to integer points, these must both be nonnegative integers.

We use these facts to make the following observations.

  • If $$$K$$$ is even and $$$X+Y$$$ is odd, there is no solution. This is because $$$NK + X + Y$$$ will be the sum of an even and an odd number, and will thus be odd, so dividing it by two won't produce an integer.
  • We must have that $$$NK \geq X+Y$$$ and that $$$NK = X+Y$$$ mod $$$2$$$. Both of these are implied by the fact that $$$B$$$ is a nonnegative integer. The former also should be intuitive: if we move less than $$$X+Y$$$ distance, there is no way we can get to $$$(X, Y)$$$.
  • If $$$N = 1$$$, then we must have $$$X+Y = K$$$, as we can only move to a point with $$$X+Y = K$$$ in a single move.

We claim that the first observation is the only case in which the answer is $$$-1$$$ and the latter two observations are the only restrictions on $$$N$$$. Therefore, the answer is the least $$$N$$$ that satisfies the latter two observations' requirements. To prove this, we simply give the construction, which is necessary to finish anyway.

We start by dealing with our negative movement. While $$$B$$$ remains greater than $$$K$$$, simply move $$$K$$$ in the negative direction of whichever coordinate is currently closer to our goal. Once $$$B$$$ is at most $$$K$$$, we move $$$B$$$ in the negative direction of the coordinate closer to the goal, as before, and $$$K - B$$$ in the positive direction of the other coordinate.

Afterwards, we have moved exactly $$$B$$$ units in the negative direction, so we know the rest of our movement must be in the positive direction. We can therefore simply increase $$$x$$$ by $$$K$$$ until it reaches $$$X$$$, and then switch to increasing $$$y$$$ until it reaches $$$Y$$$.

You may be asking yourself why in our first step, we need to move in whichever direction is currently closer to the goal. The reason is that this protects us from forcing ourselves into unnecessary waste. As an example, suppose that $$$X=8$$$, $$$Y=0$$$, and $$$K=5$$$. We can compute by the above rules that $$$N = 2$$$, $$$A = 9$$$, and $$$B = 1$$$. Since we're currently at $$$(0, 0)$$$, we have distance $$$8$$$ from the goal in the $$$X$$$ direction and distance $$$0$$$ from the goal in the $$$Y$$$ direction. Our algorithm tells us to take our negative step in the $$$Y$$$ direction while using the rest of our movement to increase $$$X$$$. This takes us to $$$(4, -1)$$$. Our next move is then to $$$(8, 0)$$$, as desired. If we took our negative step in the $$$X$$$ direction, we would get to $$$(-1, 4)$$$ or $$$(-1, -4)$$$, which leads to wasted movement in the $$$Y$$$ direction. There is no one-step path from either of these points to $$$(8, 0)$$$.

Runtime: $$$O(N)$$$. Our limiting factor is the need to reconstruct our steps. We can bound $$$N$$$ as $$$O(X+Y)$$$, so this is fast enough. Click here for my submission.


F — Strings of Eternity

For simplicity, we let $$$N = |S|$$$ and $$$M = |T|$$$.

Let's try to understand the setup here. We can reformulate the problem: we want to find the greatest $$$i$$$ for which the concatenation of $$$i$$$ copies of $$$T$$$ appears somewhere in an infinite concatenation of $$$S$$$.

One key observation is that it never matters which copy of $$$S$$$ we're in: we can define our state simply as the position of $$$S$$$ we're in after adding a copy of $$$T$$$ to our concatenation.

This motivates the following first step. Concatenate copies of $$$S$$$ until the resulting string has length at least $$$N+M$$$. Then, using any efficient string matching algorithm (I successfully used Rabin-Karp), determine which of the $$$N$$$ positions in $$$S$$$ could be the start of a copy of $$$T$$$.

We now reformulate this as a graph problem. Notice that if we start a copy of $$$T$$$ in position $$$i$$$ on $$$S$$$, the next copy of $$$T$$$ would start at position $$$(i+M) \, \% \, N$$$. We build a directed graph where each node represents one of the $$$N$$$ positions in $$$S$$$. If position $$$i$$$ could be the start of a copy of $$$T$$$, we add an edge from $$$i$$$ to $$$(i + M) \, \% \, N$$$.

The answer is now the length of the longest path in this graph, as traversing an edge represents adding a copy of $$$T$$$. This is a fairly standard problem. We can solve it using a similar algorithm to topological sort. Maintain a queue of vertices we are currently able to process, and store arrays maintaining the lengths of the longest paths ending at each node and whether each node has been visited. To start, consider every vertex with indegree zero. Add these vertices to the queues and set the lengths of their longest paths to zero.

When we visit a node, mark this node as visited, then subtract one from the indegree of the vertex at the end of this vertex's edge (if the vertex does have an edge). Then, set the longest path to this next vertex as the maximum of its previous value or one plus the longest path to our starting vertex. If the indegree of the next vertex is now zero, add it to the queue.

At the end of our process, the length of the longest path is infinity if and only if some vertex has never been visited (as this occurs exactly when our graph contains a cycle). If every vertex has been visited, our answer is the highest value in our distance array.

Runtime: $$$O(N+M).$$$ Click here for my submission.


If you have any questions, please feel free to post them below!


By Geothermal, history, 5 years ago, In English

As many of you are aware, some users abuse Codeforces' virtual participation feature in order to submit prewritten solutions immediately when the contest starts so as to appear at the top of the contest leaderboards. A few examples are here, here, and here.

Of course, preventing cheating by virtual participants is rather difficult, especially because while these three are relatively obvious cases, other forms of dishonest behavior are just as plausible but are harder to detect. For example, someone who looked at the comments on a round's announcement before participating virtually and thus learned that a particularly valuable problem was relatively easy to solve would have an unfair advantage (as they would be able to earn more points by knowing to attempt this problem early into the contest), but one that would be rather difficult to detect.

Luckily, all Codeforces scoreboards include a checkbox towards the top-right of the page through which users can select whether to include unofficial scores in the rankings. This works great for rounds rated for all users (i.e. combined Div 1/Div 2 Rounds and rounds with separate contests for the two divisions). However, for rounds run only for Division 2 or Division 3, we run into a problem. There is no way to make the standings include contestants who competed live while over the rating cap without also including virtual participants from after the contest. I personally enjoy participating in these rounds to compete with the other unofficial contestants, to practice quick problem-solving, and to solve the harder problems, which are often interesting and challenging even though they're designed for a lower rating cap. Therefore, it's somewhat frustrating that it's difficult to access the standings as they were at the end of the contest (i.e. including only live participants) a while later.

It turns out that there's an easy solution already implemented. On Educational Round scoreboards, an option at the top-left allows users to select whether to view only Division 2 users or to show users from both divisions (separately from the option to show or hide virtual participants). I propose that Codeforces extend this feature to all rounds rated only for Division 2 or for Division 3. In other words, Div. 2 only and Div. 3 only rounds would have the same option to show or hide participants above the contest's rating cap independent of whether virtual participants are hidden.

Please feel free to share your thoughts below. While this is a relatively small problem, it does impact a large number of users: for example, 1557 competitors above the 1600 rating cap registered for the last Division 3 round alone. Moreover, I'm optimistic that this wouldn't be overly difficult to fix, as the necessary infrastructure already exists and is in active use. Implementing this proposal would simply be a matter of extending it to more rounds.


By Geothermal, history, 5 years ago, In English

Hi all! As I've done several times in the past, I've written up an unofficial set of solutions to Codeforces Round 575 so that some solutions will be available before system testing completes. If my code gets hacked, I'll update the appropriate problem's solution as soon as I can.


A — Three Piles of Candies

We claim that the answer is always $$$\lfloor \frac{a+b+c}{2} \rfloor$$$. We clearly can't do any better: if Alice and Bob could each get more candies than this, then in total, they would have more than $$$a+b+c$$$ candies, which is impossible. To prove that this is always doable, let Alice take the smallest pile and Bob take the second-smallest pile. Then, distribute enough candies from the largest pile so that Alice has as many candies as Bob (which is always possible because the largest pile must contain more candies than the difference between the other two piles). Then, distribute the remaining candies from the large pile as evenly as possible.

Runtime: $$$O(1)$$$ per query. Click here for my submission.


B — Odd Sum Segments

Let's first figure out when there is an answer. Notice that the K subarrays must each have a sum of 1 mod 2, so the sum of the entire array must be congruent to K mod 2. Moreover, because every odd subarray must contain at least one odd element, there must be at least K odd elements in the array. If these conditions aren't satisfied, the answer will be No.

We claim that the answer is Yes for all other cases. To construct a solution, we output the indices of the first K-1 odd elements, followed by N. The first K-1 subarrays contain exactly one odd element each, so their sums will be odd, and because the sum of the entire array is congruent to K mod 2, the sum of the final subarray is congruent to K-(K-1) = 1 mod 2, so it will have an odd sum, as desired.

Runtime: $$$O(N)$$$ per query. Click here for my submission.


C — Robot Breakout

Notice that for each robot individually, disabling action one means that the resulting $$$X$$$ cannot be lower than the robot's $$$X$$$, because the robot cannot move towards a lower value of $$$X$$$. Disabling action two means that the resulting $$$Y$$$ cannot be higher than the robot's $$$Y$$$, for similar reasons. Actions three and four have similar effects.

We thus maintain the maximum and minimum possible values for $$$X$$$ and $$$Y$$$. Initially, both maxima are $$$100,000$$$ and both minima are $$$-100,000$$$, in order to satisfy the output constraints. Then, as we process each robot, for each disabled action, we update the appropriate maximum or minimum. If the maximum $$$X$$$ is less than the minimum $$$X$$$ or the maximum $$$Y$$$ is less than the minimum $$$Y$$$, the answer is $$$0$$$. If not, we can output any valid point in the given range.

Runtime: $$$O(N)$$$ per query. Click here for my submission.


D — RGB Substring

For the smaller version, D1, we can consider every length-K substring of S individually. For each substring, we take three cases: the substring could start with an R, a G, or a B. Then, the rest of the values of the substring are fixed: any R must be followed by a G, any G must be followed by a B, and any B must be followed by an R. To compute the cost of editing this substring into one of these three cases, we can simply count the number of values in this substring that differ from the intended value at that position. Then, the best answer for this substring is the minimum cost over our three cases, and the best answer overall is the minimum cost over all substrings.

For the larger version, we can apply a very similar technique, using a sliding window approach to optimize the solution. We can modify our cases as follows: essentially, we need our string to have a length-K substring equal to the corresponding substring at the same position in RGBRGBRGB..., GBRGBRGBR..., or BRGBRGBRG.... We consider each of these cases individually. Start by computing the number of "errors" (i.e. positions at which the original string differs from the goal string) in the first K elements. Then, to shift the window, subtract one from the count of errors if the first position in the window causes an error and add one to the count of errors if the position added at the end of the window contains an error. We then simply take the minimum possible number of errors.

Runtime: $$$O(N)$$$ per query (or $$$O(K(N-K))$$$ for the easy version). Click here for my submission.
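
Here's a C++ sketch of the sliding-window version (an illustrative rewrite, not my contest submission; input assumed to be the query count, then $$$n$$$, $$$k$$$, and the string for each query):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    const string rgb = "RGB";
    while (q--) {
        int n, k;
        string s;
        cin >> n >> k >> s;
        int best = k;                              // k errors is a trivial upper bound
        for (int c = 0; c < 3; c++) {              // target pattern starts with rgb[c]
            int errors = 0;
            for (int i = 0; i < k; i++)
                if (s[i] != rgb[(c + i) % 3]) errors++;
            best = min(best, errors);
            for (int i = k; i < n; i++) {          // slide the window one step right
                if (s[i - k] != rgb[(c + i - k) % 3]) errors--;
                if (s[i] != rgb[(c + i) % 3]) errors++;
                best = min(best, errors);
            }
        }
        cout << best << "\n";
    }
}
```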


E — Connected Component on a Chessboard

Without loss of generality, assume $$$B \geq W$$$. (The other case proceeds similarly--in fact, we can simply add one to $$$x$$$ at every point in a solution to this case to get a solution to the case in which $$$B$$$ and $$$W$$$ are swapped.)

The answer is Yes if and only if $$$B \leq 3W+1$$$. We prove this by induction on $$$W$$$. If $$$W = 1$$$, $$$B$$$ can be at most $$$4$$$ by inspection. Then, every new white cell we add can give us at most three new black cells, as the white cell must be adjacent to at least one preexisting black cell. This construction is possible if we simply make a large column, alternating black and white cells, and then add black cells above or below the end white cells and to the left and right of every white cell. To get fewer black cells, we can omit some of the black cells on the sides and/or above and below the ending white cells.

This proof is essentially the key to solving the problem. We can simply implement this procedure to construct a solution. Start by building a column of $$$W$$$ white cells and $$$W-1$$$ black cells. Then, we have space for $$$2W+2$$$ more black cells at the top and bottom and on the sides, so we should simply use as many of those cells as necessary to add the remaining black cells to our component.

Runtime: $$$O(B+W)$$$ per query. Click here for my submission.


F — K-th Path

I'll first present the solution I used in contest. Then, I'll discuss a simpler solution, courtesy of dorijanlendvaj.

Recall that Dijkstra's algorithm visits points in increasing order of their distance from the source node. Therefore, if we wanted to find the K'th shortest path from a single node, we could do so more quickly by running Dijkstra's and terminating as soon as we've visited K other nodes. We exploit a similar property to solve the problem while considering paths from all nodes.

First, we read in the data. For each vertex, we sort the edges coming from that vertex in increasing order of weight. (Ties can be broken arbitrarily.) Then, we define a "state" variable. A state consists of a shortest path from a starting vertex to some other vertex in which we log four variables: the starting node, the second-to-last node on the path, the index of the final edge in the second-to-last node's adjacency list, and the length of the path.

We maintain a priority queue of these states, pulling the least-weight ones first. Start by adding a state for the single-edge paths using each node's shortest edge. Then, to process a state, we first check whether the shortest path from its starting node to its ending node has already been found. If it hasn't, we add this length to our list and output it if it's the K'th one we've found. Then, we add a new state to the priority queue representing the path from our starting node to the ending node, plus the shortest edge of the ending node.

Then, we add a new state in which we replace the last edge of the current state with the next-shortest edge from our second-to-last node. In other words, we increase the index variable in our state by one.

The key reason this method works is that it guarantees that we process all possible states in increasing order of weight. Moreover, we start with $$$N$$$ states, and each state only creates a second extra state if it creates one of our $$$K$$$ paths. Therefore, we'll end up with $$$O(N+K)$$$ states in total.

Due to the priority queue operations and sorting, our runtime is $$$O((N+K) \log (N+K) + M \log M).$$$

Click here for my submission.

Here's a second solution. I haven't submitted this one myself, so please feel free to comment if you spot any errors. (Although I received this solution from Dorijan, I wrote it up myself, so all blame for any errors should go to me.)

Eliminate all edges except the K cheapest ones, with ties broken arbitrarily. Then, run N Dijkstra's iterations or an efficient all-pairs shortest paths algorithm to find all possible lengths of paths. From here, sort the paths and pick the K'th least expensive one. This is correct because none of the paths we're looking for will include an edge other than one of the K cheapest ones. Moreover, it's fairly easy to see that this will run in time: there will be, at most, $$$O(K^2)$$$ paths resulting from this process.


Feel free to post below if you have any questions!


By Geothermal, history, 5 years ago, In English

Since AtCoder often doesn't release English editorials to their beginner contests, I thought I'd write up my solutions. Feel free to add a comment if anything was unclear!

A — T or T

Our alternatives here are sending all $$$N$$$ people on the train, for a total of $$$N \cdot A$$$ yen, or using the taxi, for a total of $$$B$$$ yen. We should thus print the minimum of these two values.

Code: https://atcoder.jp/contests/abc133/submissions/6265526

B — Good Distance

We reword the problem slightly: for how many pairs of points $$$y$$$ and $$$z$$$ is $$$\sum_{i=1}^D (y_i-z_i)^2$$$ a perfect square?

Given the small input constraints, we know that this sum is going to be at most $$$10 \cdot (20-(-20))^2 = 16000$$$. Hence, we can simply compute a list of all perfect squares less than $$$16000$$$ and iterate over all $$$\dbinom{N}{2}$$$ pairs of points. For each pair of points, we compute the summation and check if it is equal to any of our squares. If so, we increment the answer. With about $$$125$$$ perfect squares less than $$$16000$$$, this takes roughly $$$125 \cdot \dbinom{10}{2}$$$ operations, which comes in well under the time limit.

Code: https://atcoder.jp/contests/abc133/submissions/6268768

C — Remainder Minimization 2019

We start with an observation: if there is a multiple of $$$2019$$$ in the range $$$[L, R]$$$, the answer is 0, as we can simply take that multiple as one of $$$i$$$ or $$$j$$$ and any other value in the range as our other variable.

We first check whether this is the case. Iterating over every value from $$$L$$$ to $$$R$$$ would be too slow, so instead, we simply check whether $$$\frac{L-1}{2019}$$$ differs from $$$\frac{R}{2019}$$$, using integer division. If the two values differ, there must be some multiple of $$$2019$$$ in our range; if they are the same, there is no multiple of $$$2019$$$ in the range.

In the latter case, we have a range of length at most $$$2018$$$, so an $$$O((R-L)^2)$$$ solution is good enough now. We can thus iterate over every pair of values from L to R and select whichever gives us the minimum value for the desired remainder.

Code: https://atcoder.jp/contests/abc133/submissions/6273254

D — Rain Flows into Dams

We are essentially solving a system of equations here. Suppose that mountain $$$i$$$ receives $$$2x_i$$$ liters of rainwater. Then, for each dam $$$i$$$, we know that $$$A_i = x_i + x_{i+1}$$$, where $$$x_{N+1} = x_1.$$$

We start by solving for $$$x_1$$$. Since $$$A_1 = x_1 + x_2$$$, we have $$$x_2 = A_1 - x_1$$$. Then, similarly, $$$x_3 = A_2 - A_1 + x_1$$$. We notice a pattern: $$$x_5 = A_4 - A_3 + A_2 - A_1 + x_1$$$, and in general, for any odd $$$i$$$, $$$x_i = x_1 + \sum_{k=1}^{i-1} A_k \cdot (-1)^k.$$$ More simply, to compute $$$x_N$$$, we simply add the even $$$A_i$$$ and subtract the odd $$$A_i$$$ to and from $$$x_1$$$.

We thus have from our equation for dam $$$N$$$ that $$$2x_1 + \sum_{k=1}^{N-1} A_k \cdot (-1)^k = A_N$$$. We can compute this sum in $$$O(N)$$$ and use it to solve for $$$x_1$$$. From there, we can substitute this into the equation for dam $$$1$$$ to get a value for $$$x_2$$$, and we can substitute that into equation two to solve for $$$x_3$$$, and so on. Then, we multiply these values by two and print them.

Code: https://atcoder.jp/contests/abc133/submissions/6272209
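
One way to implement the alternating-sum idea, as a C++ sketch (illustrative only; input assumed to be $$$N$$$, which is odd, followed by the $$$A_i$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    for (auto &x : a) cin >> x;
    // A_1 - A_2 + A_3 - ... + A_N = 2*x_1 when N is odd (the other x_i telescope)
    long long alt = 0;
    for (int i = 0; i < n; i++) alt += (i % 2 == 0 ? a[i] : -a[i]);
    vector<long long> x(n);
    x[0] = alt / 2;
    for (int i = 1; i < n; i++) x[i] = a[i - 1] - x[i - 1];  // from A_i = x_i + x_{i+1}
    for (int i = 0; i < n; i++) cout << 2 * x[i] << " \n"[i == n - 1];
}
```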

E — Virus Tree 2

We build the tree from the top down. Root the tree arbitrarily; we have $$$K$$$ color choices for the root. Then, we do DFS. In each step of our DFS, we count the number of ways to assign colors to the current node's children and then run our DFS process on each of those children.

To count the number of ways to assign colors to a node's children, note that each child must have a different color and those colors must be different from the current node's color and the parent node's color (since the current node's parent has distance two from the current node's children). Thus, we start with $$$K-1$$$ colors available for the children of the root or $$$K-2$$$ colors available for the children of any other node. Then, once we pick a color for a node, we have one fewer color option than we did before. If we have $$$x$$$ children of a node that isn't the root, we can thus express the number of ways to pick their colors as follows:

$$$\prod_{i=0}^{x-1} (K-2-i)$$$

The case for the root works similarly, except we use $$$K-1$$$ instead of $$$K-2$$$. The answer is then the product of these values over all of the parent nodes, since we need to assign colors to each of the children.

Code: https://atcoder.jp/contests/abc133/submissions/6275812
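
For concreteness, here's a C++ sketch of this counting DFS (an illustrative rewrite, not my linked submission; it assumes the input is $$$N$$$ and $$$K$$$ followed by the $$$N-1$$$ edges, and that answers are reported modulo $$$10^9+7$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

int main() {
    int n;
    long long k;
    cin >> n >> k;
    vector<vector<int>> adj(n);
    for (int i = 0; i < n - 1; i++) {
        int a, b;
        cin >> a >> b;
        a--; b--;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    long long ans = k % MOD;          // K color choices for the root
    vector<int> par(n, -1);
    stack<int> st;                    // iterative DFS to avoid deep recursion
    st.push(0);
    while (!st.empty()) {
        int v = st.top(); st.pop();
        // children of v may not reuse v's color or v's parent's color
        long long avail = (v == 0 ? k - 1 : k - 2);
        for (int w : adj[v]) {
            if (w == par[v]) continue;
            ans = ans * max(avail, 0LL) % MOD;  // 0 options means 0 colorings
            avail--;
            par[w] = v;
            st.push(w);
        }
    }
    cout << ans << endl;
}
```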

F — Colorful Tree

The core idea is LCA with square-root decomposition. First, we figure out how we'll eventually want to answer queries. Note that within a tree, the distance between A and B, if these two nodes have C as their LCA, is equal to $$$dist(A, root) + dist(B, root) - 2 \cdot dist(C, root).$$$ Therefore, we now need to efficiently compute distances between nodes and the root.

Now, suppose the distance from a node $$$A$$$ to the root is $$$D$$$ initially. However, suppose that we're changing the lengths of all edges with color $$$c$$$ to $$$d$$$, and there are currently $$$x$$$ edges of color $$$c$$$ on this path with total length $$$y$$$. Then, we can see that the distance after this change is $$$D - y + x \cdot d.$$$ We now need to efficiently compute $$$x$$$ and $$$y$$$.

To compute these values of $$$x$$$ and $$$y$$$, we call certain nodes "special" and record the values of $$$x$$$ and $$$y$$$ for all colors at those nodes. Then, to compute $$$x$$$ and $$$y$$$ for a certain node and color, we repeatedly move from that node to its parent, incrementing $$$x$$$ and $$$y$$$ if we reach an edge of our desired color, until we reach a special node, at which point we add that node's $$$x$$$ and $$$y$$$ to our sum and return it.

The key insight is that we can pick the special nodes so that we can both compute the $$$x$$$ and $$$y$$$-values for the special nodes quickly and reach a special node from any regular node by climbing up the tree in relatively few steps.

To do so, we select the special nodes as follows. Sort the nodes in decreasing order of depth. Then, for each node, we move up $$$\sqrt{N}$$$ levels in the tree. If we don't find a special node in this process, make the node $$$\sqrt{N}$$$ levels above this node special.

Obviously, this guarantees that we will only need to move up at most $$$\sqrt{N}$$$ steps to find a special node from any regular node. However, we also can prove that this gives us only $$$O(\sqrt{N})$$$ special nodes, allowing us to compute them and their $$$x$$$ and $$$y$$$-values efficiently. Any node that is at least $$$\sqrt{N}$$$ levels above another node must have at least $$$\sqrt{N}$$$ nodes in its subtree, and because of the order in which we process the nodes, each new special node we add gives at least $$$\sqrt{N}$$$ nodes access to a special node they did not have before. Therefore, we will have at most $$$\frac{N}{\sqrt{N}} = \sqrt{N}$$$ special nodes in total.

We can thus compute the special nodes and their $$$x/y$$$-values in $$$O(N\sqrt{N})$$$, and for each query, we take $$$O(\sqrt{N})$$$ steps to move upwards and find a special node. Therefore, our total complexity is $$$O((N+Q)\sqrt{N})$$$, which passes in time.

Code: https://atcoder.jp/contests/abc133/submissions/6290486


By Geothermal, history, 5 years ago, In English

Since it will be a little while longer before the editorials for the recent Division 3 Round come out, I thought I'd post some solutions I wrote up. Please feel free to post below if you have any questions or notice any errors. If any of my solutions end up getting hacked, I'll replace the submission and solution text as soon as possible. Links to my submissions are at the bottom of this post.


A: We can easily prove that the answer will not be much bigger than A, so simply iterating through all the possible values until we find one that works will do fine. (As a trivial proof, note that 1003 is interesting, so we will have to test at most 1003 numbers.) Then, the only challenging part from here is finding the sum of the digits of a number, which can be implemented easily with modular arithmetic.


B: Start by sorting the prices. It turns out that only the greatest and least prices matter. We have two key facts to prove:

  1. The answer is -1 if the difference between the greatest and least prices is greater than 2*K.
  2. Otherwise, the answer is equal to the least price plus K.

Part 1 has an easy proof: if the difference between these two prices is greater than 2*K, we obviously can never force the greatest and least prices to become equal. Otherwise, we can easily prove that the least price plus K will have absolute difference at most K to all of the other prices, so this is a valid solution. Since the answer must have absolute difference of at most K to the least price, the least price plus K is clearly optimal, so it's our answer. This can be implemented in O(N log N) by sorting or in O(N) by simply picking the least and greatest prices while reading in the data.


C: For convenience, we subtract one from K. Then, the answer is -1 if and only if K / B (using integer division) is less than N. Otherwise, subtract N * B from K and subtract B from A. Then, the answer is the minimum of N and K / A (again using integer division). Essentially, we're figuring out how many times we can add the difference between A and B (i.e. replacing B's with A's) until we have too little charge.

This is clearly an O(1) solution for each query, so our overall complexity is O(Q).


D/E: See G/H.


F: The key observation is that any number up to 200,000 will have a relatively small number of factors (in fact, it turns out there are at most 160 of them). Ignore duplicates within the data, since obviously they're never useful, and sort the data such that A[0] is the greatest value. Then, we can see based on this observation that if A[0] is in a set of two problems, the other problem can be at least roughly A[170] (adding a bit of wiggle room on top of the 160 factors to prevent edge cases from becoming an issue), so the higher number in a set of two problems should obviously never be less than A[170]. Likewise, if A[0] is in a set of three problems, the second problem can be at least A[170], and the third number can be at least A[340] (since there will be at most about 320 factors between A[0] and the second problem combined), so in this case, the first problem should be no less than A[340].

To actually implement the solution, we start by taking A[0] as the best set of one problem. Then, to deal with sets of two problems, we iterate through the largest 170 values in the array. For each value, we find the greatest smaller value in the array that does not divide our original value, and we check if this gives us prettier pair of problems. (Obviously, we should take the greatest possible second problem in this case.)

Finally, to deal with sets of three problems, we start by considering the first 340 problems as potential first problems. Then, we build an array of the 170 largest candidate values for our second and third problems. (Since we can take the largest candidate and guarantee that we'll find another viable candidate in this array, we know that we don't need any numbers outside of the first 170 candidates.)

Checking all 170 choose 2 pairs of potential second and third problems would be fine if we only had a single query with size up to 200,000, but with lots of queries (consider about 400 queries of size roughly 500), it may be possible to break this solution (because the optimization of checking only the first 350 elements in the array as first problems doesn't help much if there aren't many problems in the array in the first place). It may be possible to pass with a few constant-factor optimizations, since not many cases will be able to break this in the first place, but to be completely safe, we can implement a solution that will find the best pair of candidates in O(N) time in the number of candidates.

To do so, maintain a stack containing the candidate values we haven't paired up yet. Initialize this stack such that it contains only the first (largest) candidate. Then, iterate through the candidates in decreasing order of size. While it doesn't divide the number on top of the stack, pop this candidate from the stack and check if this pair is the best pair of candidates we've seen so far. Then, if the stack is empty, we've paired up our largest element, so we don't need to check any lower values, so we can break. If the stack is non-empty, add this candidate to the stack. The key observation (and the reason this works) is that any number in this stack divides all the numbers below it, so if a number divides the value on top of the stack, it also divides all the values below it. (So, if we can't pair a candidate with the value on top of the stack, we won't be able to pair it with any of the other unpaired values in the stack either.)

To put it all together, add the best pair of candidates to the first value we're considering and check whether the resulting sum beats the best problem set we've found so far. Once we've checked all possible first values, print the best answer we've found.

Here's an alternate approach to the three-problem case: observe that if A[0] is not included in the set, then the only possible set that can give a superior result to every alternative including A[0] is A[0] / 2, A[0] / 3, and A[0] / 5. So, we simply test this one extra case and then all cases that do contain A[0], which proceeds similarly to our two-problem case. Credit to nishant403 for presenting this solution!


G: We first decide which numbers of candies we should use; note that this alone is enough to answer the easy subtask. To do so, build an array containing the number of candies of each type and a set containing the numbers from 1 through N (i.e. the possible numbers of candies we might want to use). Then, for each type of candy, add the greatest value in the set that is at most the number of candies of this type to our answer, and remove that value from the set. Essentially, we're greedily taking the most candies we can from each type, because obviously we won't get a higher answer by taking fewer candies than possible from a type.
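
A sketch of this greedy with a std::set follows; cnt holds the number of candies of each type, the names are mine, and processing the types in decreasing order of size is my choice here (it makes the early exit valid, since once no usable count remains for a large type, none remains for any smaller type either):

```cpp
#include <bits/stdc++.h>
using namespace std;

// cnt[i]: number of candies of type i. Returns the maximum gift size and
// fills 'used' with the chosen (distinct) counts, in decreasing order.
long long chooseCounts(vector<long long> cnt, long long n,
                       vector<long long>& used) {
    set<long long> avail;                    // counts still available to take
    for (long long v = 1; v <= n; v++) avail.insert(v);
    sort(cnt.rbegin(), cnt.rend());          // largest types claim counts first
    long long total = 0;
    for (long long c : cnt) {
        auto it = avail.upper_bound(c);      // first available count exceeding c...
        if (it == avail.begin()) break;      // ...nothing usable remains at all
        --it;                                // ...so this is the greatest count <= c
        total += *it;
        used.push_back(*it);
        avail.erase(it);
    }
    return total;
}
```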

One key observation is that the multiset of numbers we've added to get our answer is unique--in other words, there is only one way to choose how many candies we'll take from the types we use. This allows us to solve the hard version of this problem. Maintain a priority queue containing the numbers of type-1 candies in the types large enough to supply the current count. Then, iterate through the counts we need to use in decreasing order (for example, in sample test three, these counts are 4, 3, and 2). Add to the priority queue the number of type-1 candies in every remaining type with at least as many candies as the current count. Then, pop the greatest element from the priority queue and add the minimum of that element and the current count to the answer. Essentially, our strategy here is to greedily spend the greatest possible number of type-1 candies at each step in this process, since we'll never want to save a greater number of type-1 candies for a smaller count.
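
Continuing the sketch above, the hard-version assignment could look like this; types holds a (total candies, type-1 candies) pair per type, used is the decreasing list of counts produced by chooseCounts, and the names are again mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// types: (total candies, type-1 candies) per candy type.
// used:  the chosen counts, in decreasing order.
// Returns the maximum number of type-1 candies we can include.
long long assignTypeOnes(vector<pair<long long, long long>> types,
                         const vector<long long>& used) {
    sort(types.rbegin(), types.rend());   // largest totals first, for the sweep
    priority_queue<long long> pq;         // type-1 counts of usable types
    long long answer = 0;
    size_t ptr = 0;
    for (long long need : used) {
        // Every type with at least `need` candies can supply this count.
        while (ptr < types.size() && types[ptr].first >= need)
            pq.push(types[ptr++].second);
        // pq is non-empty here: each chosen count was claimed by a distinct,
        // sufficiently large type. Spend the richest type on the largest count.
        answer += min(pq.top(), need);
        pq.pop();
    }
    return answer;
}
```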

The complexity for each case is O(N log N), where the limiting steps are sorting the data and our usage of sets and priority_queues.


H: Note that the solution presented here solves both E/H at once. In-contest, I didn't see what made E substantially easier than H, so I used the same solution for both of them.

We first consider a subproblem: how many distinct subsequences of length L exist in our string? To solve this subproblem, start by counting the number of distinct letters in each suffix of our string. (This gives us the answer for L=1.) Then, we update this array L-1 times to compute the number of distinct subsequences of each length up to and including L. To do so, at index i, the number of subsequences of length L in the suffix of S starting at position i is the number of such subsequences in the suffix starting at position i+1, plus the number of subsequences of length L-1 in the suffix starting at position i+1, minus the number of subsequences of length L-1 in the suffix starting just after the next occurrence of S[i] (we overcounted these because choosing the next occurrence of S[i] rather than S[i] itself gives an identical subsequence). Once we've computed these arrays, we can then find the number of subsequences of each length in the entire string S. (We'll sketch the full computation below.)

Once we've found these values, we can easily solve the problem by prioritizing the longest subsequences. Iterate through the possible subsequence lengths in decreasing order, take the appropriate number of subsequences of the current length (all of them, unless this would put us over K total), and break out of the loop as soon as we reach K total subsequences.
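
Here's the whole pipeline as a C++ sketch. Two choices in it are mine: every dp value is capped at 2*10^12 (any cap above the maximum K preserves the answer, since capped counts are only ever compared against the remaining K, and it sidesteps the overflow discussion below entirely), and, if I recall the hard version's statement correctly, the empty subsequence is also allowed and taking a subsequence of length L costs N - L:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    const ll CAP = (ll)2e12;  // any cap above the maximum K keeps the answer exact
    ll n, k;
    string s;
    cin >> n >> k >> s;
    // nxt[i]: position of the next occurrence of s[i], or -1 if none.
    vector<int> nxt(n, -1), last(26, -1);
    for (int i = n - 1; i >= 0; i--) {
        nxt[i] = last[s[i] - 'a'];
        last[s[i] - 'a'] = i;
    }
    // dp[i][L]: distinct length-L subsequences of the suffix starting at i.
    vector<vector<ll>> dp(n + 1, vector<ll>(n + 1, 0));
    for (int i = 0; i <= n; i++) dp[i][0] = 1;  // the empty subsequence
    for (int i = n - 1; i >= 0; i--)
        for (int L = 1; L <= n; L++) {
            ll v = dp[i + 1][L] + dp[i + 1][L - 1];        // skip or take s[i]
            if (nxt[i] != -1) v -= dp[nxt[i] + 1][L - 1];  // subtract overcount
            dp[i][L] = min(v, CAP);
        }
    // Greedily take the longest subsequences first.
    ll cost = 0, remaining = k;
    for (ll L = n; L >= 0 && remaining > 0; L--) {
        ll take = min(remaining, dp[0][L]);
        cost += take * (n - L);  // each length-L subsequence costs n - L
        remaining -= take;
    }
    cout << (remaining > 0 ? -1 : cost) << "\n";
    return 0;
}
```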

One note is that there may be a huge number of subsequences: a trivial (though by no means tight) upper bound is 2^100, which is far too many to fit into a long long. However, this is not an issue, because to receive an incorrect answer, we would need more than 2^63-1 subsequences of some length while there are fewer than K subsequences of all greater lengths combined. With K up to 10^12, this means there would need to be more than 1,000,000 times as many subsequences of some length as there are of all greater lengths, which is clearly not going to happen. Thus, while some values in the array may overflow, we never look at them when computing our answer, so we don't need to do any separate work to account for them.

The total runtime for the most efficient implementation of this idea is O(N^2). Less efficient implementations that run in O(N^3) will also pass.


My submissions are below:

Problem A

Problem B

Problem C

Problem D

Problem E (Note that this is the same code as my H.)

Problem F

Problem G

Problem H
