Hello, Codeforces!
Welcome to the think-cell Round 1 supported by think-cell, which will start on Feb/17/2024 17:35 (Moscow time). It will be a combined rated round for both divisions. All problems were authored and prepared by Elegia and satyam343.
We would like to thank:
- errorgorn for round coordination and help with preparation
- KAN for helping with the preparation of problems
- Alexdat2000 for Russian translation
- GoatTamer and Non-origination for discussing problems with us while our proposal was in review, and for testing the round
- Kaitokid, antekb, vgtcross, riano_, mtw, AboAbdoMC, Vladithur, BERNARD, milind0110, conqueror_of_tourist, Um_nik, gamegame, htetgm, brobat, LucaLucaM, the_hyp0cr1t3, tibinyte, 18o3, ExpensiveAC, MateiKing80, udhavvarma03, Non-origination, RDDCCD, Kuroni, Brovko, LipArcanjo, andrei_boaca, nok0, oursaco, ffao, prvocislo, SmolBrain, dario2994, brunovsky, tabr, PurpleCrayon, IceKnight1093, aryanc403 and Golovanov399 for testing the round
- MikeMirzayanov for great systems Codeforces and Polygon.
You will be given $$$9$$$ problems and $$$3$$$ hours to solve them. One of the problems will be divided into two subtasks. One of the problems will be interactive. Make sure to read this blog and familiarize yourself with these types of problems before the round!
We hope you'll like the problemset!
UPD 1 : The score distribution is $$$500 - 1000 - 1500 - (1250 + 1000) - 2500 - 2750 - 3500 - 3500 - 5000$$$.
Congratulations to the winners!
Congratulations to the first solves as well!
A : SSerxhs
B : SSerxhs
C : M_A
D1 : hitonanode
D2 : hitonanode
E : ksun48
F : MAOooOAM
G : gyh20
H : maroonrk
I : Kapt [upsolved]
UPD 2: Editorial is out.
And now a word from our round partner – think-cell:
think-cell is the leading data visualization software for business presentations. Our mission is to offer the most intuitive user interface for generating complex data-driven charts and slides, while at the same time ensuring seamless integration with Microsoft Office. We work on challenging visualization problems, reverse-engineer Microsoft’s code, and reinvent how slides are created. think-cell is the only German company funding the C++ ISO committee delegation, so there is a good chance that components we invent will find their way into the standard.
Right now, we are looking for smart, creative C++ engineers with a solid theoretical background to join our engineering team.
Highlights of the role:
- We use the latest C++ features as soon as the compilers support them.
- We’re not afraid of advanced template metaprogramming or macros when they avoid code duplication or lead to cleaner, more readable code.
- We prefer algorithms and ranges (esp. ours!) over raw loops.
- We take the time to deliver perfect code.
- We love refactoring and modernizing old code and introducing our own libraries.
- If we can do better than the standard library, we write our own!
- If we have done something cool, we talk about it at C++ conferences.
- If we are missing a C++ language feature, we write a proposal and present it to the C++ standard committee.
Here is what we offer:
- A competitive salary from the start and a raise to EUR 130,000 annually after only one year.
- Full support with relocation to Berlin or the option to work fully remotely.
- An apartment for the first month if relocating to Berlin.
- Lifestyle-friendly working hours, no deadlines, no overtime.
- No scheduled meetings.
- An international team of brilliant minds.
- A flat organisation with respectful colleagues and plenty of room for your ideas.
- A working environment that is driven by the pursuit of excellence.
Already convinced?
P.S. don't forget to check out our developer blog to learn more about our commitment to the tech world!
Join think-cell Round 1 that will start on Feb/17/2024 17:35 (Moscow time) for a chance to challenge your skills among other developers and win the following prizes.
- First place: Apple iPad Air (10.9-inch iPad Air Wi-Fi 256GB),
- Second and Third place: Apple Watch (Apple Watch Series 9 GPS + Cellular, 41mm Aluminum Case with Solo Loop),
- 4-50 places: think-cell branded socks
- 200 additional socks will be distributed randomly among the rest of the participants who solved at least one problem and for whom this is not the first rated round!
satyam343 failing to disappoint, as always
satyam343 good at his job as always.
It's hard for satyam343 to disappoint us!
I am looking for competitive programmers for a C++ developer position with a salary of 10 000 euros per month with relocation to Berlin or remotely. Communication with colleagues in English.
Would you be interested in this vacancy?
edit: this was sarcasm, don't contact me about this job offer.
interested mail:[email protected]
As a tester, there will be at most one problem about penguins.
yes, please
You mean Penwings ?
Tf do I need socks for? Oh wait...
As a tester, there will be at most one problem that stack is bad at.
As a tester, good luck earning the socks! :D
Damn such a long contest! Excited for this one!
As a tester, I enjoyed this contest a lot.
how do you become a tester
It may be good to have the following appended to the name of the contest:
(Div 1 + Div 2)
As a tester, it’s my first round that I tested. Good luck to everyone!
satyam343 orz
Elegia orz orz orz
Waiting for a good contest!
why this is downvoted?
nothing goes as planned in this accursed world
because your comment is the same as you wrote in the other blog(obviously you just wanted contribution)
Damn, I never expected this crossover of authors. Nevertheless, I love 3hr rounds.
so now they decided to find engineers through codeforces, good
Hope to get some green socks
delectable round
omg EI round
orz survivor_winnner
Why is this not on the homepage?
I was thinking the same!
I hope the problem statements will be clear :)
Yep, round 926 C was really confusing
Did anyone ask?
let's see
This comment has been deleted.
Thanks for this round!
scoring distribution?
Is this rated contest?
Will this round be ranked?
On Codeforces's homepage, there was no mention of this.
Oh my God, those socks are too good. I need them!!!
As a tester, the next contest will be good because it's Div4
yeah
so like 100 people will get 2 socks each, or 200 people will get 1 sock each?
there is some probability for both of the above situations, but there are other possible scenarios too.
bars and stars
OMG Elegia round
OMG Elegia round
"It will be a combined rated round for both divisions"
I did not understand; which divisions is it going to be for?
It will be rated for every participant, including you, assuming there are solvable problems for contestants from every division.
it's like div1+div2
Hoping for problem statements which are actually understandable; the writing in the last contest (926) was too poor.
OMG EI Round!
Socks as prizes. Novelty. :>
The authors and testers will take the contest from their alt accounts and win the prizes :D
Here it is: the birth of a new kind of round on Codeforces
Socks❌ Sucks✔️
agree
As a tester, this contest will be great!
Looking forward to "Good Luck and High Rating" :-)
Good Luck
and High Rating
How did you do A to C in eight minutes?
Hope to get some green socks
Is this a global round?
I think it's div1 + div2
Eh, most rounds start at 00:35 AM in my time zone... while at 07:00 AM I have to wake up and go on duty :P
Would love to take part, but physically not able to. Good luck to everyone!
Will it be harder for me to gain rating in this round rather than in div 2 rounds?
Please postpone this round by 1 day, I have a wedding ceremony to attend. Thanks in advance.
Of course I shall petition to make this round unrated.
Just don't take the contest lmao. If they had to postpone a contest every time someone couldn't take it, there would be a total of 0 contests so far.
Please postpone the wedding ceremony by 1 day if you want to attend the round. Thanks.
I've got no brain cell; can I still participate in think-cell?!
At what rank in this contest would one get the chance of being interviewed? Is this job available for freshers?
DARK ARIA
It's time for Gennady to come back!
Yeah! 3 hours!
Yes Dad, 3 days
Score distribution?
Is it rated ?
for both divisions means which divisions?
1 and 2
Can't wait to drop to specialist! Hope this will be a good round and I get some socks
Can I cancel my registration? And if I don't enter the contest, will my rating change?
Your rating won't be affected unless you make a submission during the contest
thanks for your response
as a tester
As a non-tester, I can confirm that you are a tester
as a tester friend don't give me contribution
Good Luck!
I was hoping I'd be able to get some positive rating delta in this contest, but that score distribution is kind of a hint that I won't.
Contest authors are working very hard!
(3-day continuous contests)
Just a quickie: how do you become world class at C++?
Hope that I can get those """programming""" socks
I am participating for the first time, so I have a good feeling. I believe that every participant will enjoy the round. But I have one problem, or rather a question: will every participant get a T-shirt, or only the prize winners?
Or are the socks only for the awarded participants?
Best of luck, all. Hopefully I solve 4 problems this contest ≽^•⩊•^≼
This means 100 pairs of socks right?
Or maybe 400 halves, it's random
Clashing with LC Biweekly, have to skip this one.
This socks for you.
Does it? LC contests are better and not a mathematical snoozefest
LC contests are cringefest. Too ez+boring problems.
speed matters mostly in those contests.
For people of which ratings is this round rated?
Everyone
Hope to solve A, B, C, D in the least time. All the best, guys
very hard to decide between Leetcode Biweekly and CF round xD:)...
Can I get a pair of socks, please? :)
Grinding for over 10 hours daily for 3 months, not only on Codeforces but also on many other sites, learning binary search, two-pointer techniques, bitmasks, number theory, backtracking, and various other topics. I have solved over 30 problems on each topic alone, numerous greedy problems, and the CSES searching problem set. However, with all these efforts I still find myself unable to solve problem B, which might be of 800 difficulty.
Me after solving A and B. I should now sleep :D
Don't look at difficulties of problems with difficulty $$$\le 1200$$$. Usually, they are not adequate.
I constantly press F5 to see who will get an ipad :) the race is so thrilling now
If i get 87 likes I'll reach 1700
When the contest finishes, can someone explain the solution to C, please?
I spent 2 hours solving C, and 3 minutes later I solved D1 ...
is D1 easier than C?
Yes
@#$% why didn't I even read this problem(
Is the intended solution for problem D2:
Mine used matrix instead of if-else.
What do you do with the matrix? I guess DP, but I didn't come up with a DP.
The actual solution is much simpler.
The $$$f$$$-value has to do with something like the length of the last 1-segment modulo 3. I used a matrix to maintain the remainder and the answer.
1) greedy algo for assigning 1s for a substring:
"if you are 1 -> greedily push 1 to the next position if there is no already 1 at your or prev pos"
111100100 -> 0|10000000 01|0000000 010|000000 0101|00000 01010|0000 010100|000 0101000|10 01010001|0 010100010|
2) dp with 3 values: the number of previous substrings with a 1 above us, before us, or neither of these
Still can't understand the dp solution.
In D1, I did the following to get the min number of 1s for each substring:
Extending this to dp gives the following: once we put a 1 at index i+1, we don't care about the string at indices i, i+1 and i+2, so we just jump to i+3.
Thanks, now I understand.
We can simplify: $$$max(0, n-i-3) + min(n-i, 3) = n - i$$$.
Can you please help me with understanding the problem statement and your approach?
A 1 in p means that in q we will be able to find a substring that includes that index and has no. of 1s >= no. of 0s. If p = 0011100, then q can be q = 0001000 (the patterns in q that cover a 1 are 1, 01 and 10). A 1 in q at position i can cover the requirement for 1s in p at positions i-1, i, i+1. And the task is to find the min number of 1s in q that cover all 1s in p.
dp[i] means the sum of the answers for all substrings starting at i. dp[i+3] is the sum of answers for substrings starting at i+3, and max(0, n-i-3) is the number of substrings starting at i+3, so dp[i+3] + max(0, n-i-3) means adding 1 to all answers for substrings starting from i+3. min(n-i, 3) is for the answers for substrings [i, i], [i, i+1] and [i, i+2], i.e. those that don't reach i+3. (And as Igor replied, the dp formula can be simplified.)
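For concreteness, here is a minimal sketch of this dp (my own illustration, not anyone's contest code): it assumes dp[i] = dp[i+1] when s[i] = '0', uses the simplified dp[i] = dp[i+3] + (n - i) when s[i] = '1', and sums dp[i] over all starting positions. Since it is linear in n, the same sketch also fits the D2 constraints.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n;
        string s;
        cin >> n >> s;
        // dp[i] = sum of f over all substrings starting at position i
        vector<long long> dp(n + 3, 0);
        long long answer = 0;
        for (int i = n - 1; i >= 0; i--) {
            if (s[i] == '1')
                dp[i] = dp[i + 3] + (n - i);  // one placed 1 covers p[i..i+2]; each of the
                                              // n - i substrings starting at i pays for it
            else
                dp[i] = dp[i + 1];            // a leading 0 can simply be dropped
            answer += dp[i];
        }
        cout << answer << '\n';
    }
}
```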
(assume mentioned greedy algo works)
go from left to right and count in parallel for all substrings
none_of_these - the number of previous substrings without a 1 in the last two positions
cnt1_prev - substrings .......1|0
cnt1_here - .......0|1
if the value here is 1: you need to create a new 1 (at the next position) exactly "none_of_these" + 1 (a new substring starts here) times. Each of those substrings can end at any of the (n - cur_pos) later positions, so
ans += (old_none + 1) * (n - cur_pos)
new_none = cnt1_prev
new_cnt1_prev = cnt1_here
cnt1_here = old_none + 1
if 0:
new_none = cnt1_prev + old_none + 1
new_cnt1_prev = cnt1_here
cnt1_here = 0
I hope it's clear enough
I can explain it better in Russian, but your first message is in English and I can't reply in Russian :(
My D1 DP solution is as follows. Create a DP table $$$D$$$ of size $$$(n + 1) \cdot (n + 1)$$$, where $$$D[i][j]$$$ ($$$i \le j$$$) represents the minimum number of 1s required for the substring $$$s_i s_{i + 1} \ldots s_{j - 1}$$$ (of length $$$j - i$$$).
Then, calculate the value of $$$D[i][j]$$$ as follows:
[deleted]
C was a pain.
How to solve D?
This code finds f(s)
Nice
hoping to become a specialist <<<<<<< hoping to get the socks
Is there any solution to C using segment tree?
The intended solution uses sorting.
I think not; that approach will probably fail. I tried it during the contest but it failed, then I came up with a greedy solution and it worked.
I used tree compression
I think we can maintain a set consisting of the non-existing elements. We can greedily take the elements from left to right:
if the element doesn't exist in the answer set, then we add it and keep decrementing it until we reach a value that still doesn't exist in the array, and then we can add it to the non-existing-numbers set;
if the element exists in the answer set, then we can find the biggest element in the non-existing-numbers set and add it to the answer, while we maintain the set.
My solution is basically the same idea you're talking about.
My bad, I couldn't help Stack construct the array b.
same
Is 100 lexicographically larger than 99..?
no, it isn't
Chill guys, I thought he was talking about strings
$$$[100]$$$ is lexicographically larger than $$$[99]$$$, because we're comparing the lexicographical order of arrays, and array values are treated as integers, not strings (similar to comparing instances of std::vector<int> in C++ with >).
After 2 hours spent in the contest I figured it out. Thanks vgtcross for clearing this up.
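A tiny self-check of that point (illustrative only):

```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<int> a{100}, b{99};
    std::string s = "100", t = "99";
    std::cout << std::boolalpha
              << (a > b) << '\n'    // true:  [100] > [99], elements compared as integers
              << (s > t) << '\n';   // false: "100" < "99", compared character by character
}
```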
Can someone explain problem F? If it's not difficult, the steps to solve it.
I kept reading the statement of problem D1, but couldn't understand the question even after reading it for like 1 hour
By when can we expect results and ratings?
Is this round rated? Any chance for a rank increase?
Yes, the round is rated
It's not showing in my rated contests, nor in my unrated contests. Any idea why?
Only when the ratings are released will it show in your contest history
Who should i contact if it does not show in history even then?
Okay, by when can the ratings be expected?
I guess it will update within 1 hour
Who should i contact if it does not update within 1 hr?
Bro chill you will get the rating.
Me
how to solve c anyone???
I think those who made more than one correct submission for a question received penalties for the extra submissions
Why 512MB memory in G? I suppose you didn't intend to kill memory-inefficient solutions, and you just didn't know such solutions. Please consider using 1024MB (or larger) as a default when you don't care about memory.
chill bro
I am sorry about it. Initially, the memory limit was 256 MB. During testing, the maximum usage was 300 MB, with an average usage of around 180 MB. So, I thought 512 MB should be fine.
I hope you liked the contest despite this issue.
I got it. I now have a way simpler (and most probably intended) solution for G and it looks nice. Thank you for preparing the contest!
How to solve C? what's the optimal strategy
Well, emmm, I cannot understand what problem D means. QAQ
Your description of the problem is as good as that of the person who wrote the last round. XD
998244353forces
Hate D. Wasted my contest.
Nice contest, I liked $$$E$$$, and it's the only problem I solved ($$$\geq$$$ C) lmao :)
What's the intended time complexity of F?
I have $$$O(n \log n + q)$$$ amortized over the queries.
Feedback:
A: Fine easy problem. If anything a bit easy for its position, but I don't mind the existence of submit bait.
B: Fine problem; this felt like the sort of task where if you try a couple of things one of them will quickly work, and that turned out to be the case.
C: Decent problem, though not especially interesting. I didn't read the detail about how a set can only contain unique elements and implemented the trivial multiset version immediately, but adjusting for the set version didn't feel particularly challenging/interesting. I'm extremely surprised this had so few solves relative to A/B/D.
D: Good problem; characterizing the best strategy and then figuring out how to compute the answer for all substrings in parallel were both pretty interesting (the former task took me longer than it maybe should have...). This felt much harder than C to me, but to be fair I was warned by the scoring distribution. (I also found it harder than E and F, maybe my skill issue.)
E: Very good problem; after a nice little observation it turns out not to be too difficult. I found this easier than D. My one criticism is that the observation is perhaps a bit easy to guess without proving, but it's not too difficult to at least have strong intuition for why the observation is true.
F: Good problem; after recharacterizing the condition as maximizing b_i & ~b_j over all i, j, it works out nicely with a BFS-like strategy.
G: Excellent problem, one of my favorite problems/possibly my favorite problem of the year so far. After figuring out how to arrange the subtrees so that we can do DP processing them in a specific order, the recursive strategy ends up working quite nicely. Very satisfying to think about, the idea is not hard to prove once you have it, and the implementation is short and easy. 10/10 problem, no notes.
H: Good problem. The statement took a few minutes to parse, but I think that's just because the idea involves a lot of notation. My initial reaction is that this seems like it shouldn't be possible, but it ends up working quite nicely. In some sense the general idea is very nicely motivated: if the solution isn't to output preorder and postorder traversals, what else are we going to do? Somehow I convinced myself those wouldn't work and tried other things for a while, but once I came back to that idea it turns out to work out quite nicely. The implementation took me a little longer than for most other problems in this round but still was not too bad. I think this is maybe conceptually easier than G, since G required more insight while for this problem the first thing you'd think to try works if you finish figuring out all the details. However, the implementation is harder, so giving them the same score/placing them in this order seemed reasonable.
I: Seems like a nice problem, but I had an hour to work on it and still had absolutely no idea what to do at the end. I quickly reduced to counting the number of ways to satisfy all the 1 constraints (and then separately counting the number of ways to satisfy all the 0 constraints), and when you formulate this in terms of PIE it seems like it should work out nicely. Unfortunately, this solution ended up relying on a sequence that, according to OEIS, seems not to have a nice closed form, and with that idea aside I didn't come up with anything that seemed productive. This definitely deserved the 5000-point score.
If there were any preparation errors, they didn't meaningfully affect my experience. The one negative about my experience in the contest is that I spent the last hour working on a problem I felt like I had no realistic chance of solving, but such is life and the three-hour duration was probably good for contestants who might be expected to solve one more task in a 9-problem round than they would if the round was shorter. I could perhaps imagine a 2.5-hour duration, but overall I can't fault the choice of duration.
Thanks to the authors!
Can you explain how you implemented F? What algorithms did you use?
After making the observation I discuss in my comment, the general strategy is to maintain two sets A and B (actually, I implement them as arrays, but let's think about them as sets for now). When b_i enters the array, we add all submasks of b_i to A and all submasks of ~b_i to B. As soon as some value enters both sets, mark it as a possible answer.
The surprising part is that the "add all submasks" step can actually be done efficiently. We just BFS down from b_i, iterating over all bits in b_i and adding b_i ^ (1<<j) to the set if it isn't there already. This way we only enter each bitmask in each set once, giving $$$O(n \log n)$$$ amortized complexity.
I got it. Could this be implemented with a segment tree?
Maybe? I suppose you could think of this as a variant of lazy propagation in some sense. But I think BFS is strictly easier to implement and will probably have a faster constant factor.
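For concreteness, here is a minimal sketch of that BFS idea (my own illustration, not exact contest code; it assumes all values fit in B = 22 bits, which should be adjusted to the actual constraint):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int B = 22;                      // assumed bit width; adjust to the problem's limit
const int FULL = (1 << B) - 1;

vector<char> inA(1 << B, 0), inB(1 << B, 0);
int best = 0;                          // current maximum of b_i & ~b_j over all pairs

// BFS down from v, inserting every submask of v into `in`; over the whole run each
// mask enters each array at most once, which is what bounds the total work.
void addSubmasks(int v, vector<char>& in, vector<char>& other) {
    if (in[v]) return;
    in[v] = 1;
    queue<int> q;
    q.push(v);
    while (!q.empty()) {
        int x = q.front(); q.pop();
        if (other[x]) best = max(best, x);              // x is now present in both sets
        for (int j = 0; j < B; j++) {
            int y = x ^ (1 << j);
            if ((x >> j & 1) && !in[y]) { in[y] = 1; q.push(y); }
        }
    }
}

void addValue(int b) {                 // call when b enters the array
    addSubmasks(b, inA, inB);          // submasks of b
    addSubmasks(FULL & ~b, inB, inA);  // submasks of ~b
}

int main() {
    for (int b : {5, 10, 3}) {         // toy usage: insert values one by one
        addValue(b);
        printf("%d\n", best);          // best answer after each insertion
    }
}
```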
Thanks for explaining
Hey bro, can you explain your logic behind E, please?
The key idea is that if we perform $$$i$$$ operations, any set of $$$n - 2ki$$$ elements can be the final set provided that at least one element remaining could have been the central element for the last operation (i.e., one element has $$$k$$$ deleted elements on either side). We can then iterate over $$$k$$$ and $$$i$$$ and do complementary counting (note that there are $$$O(n \log n)$$$ options for $$$k$$$ and $$$i$$$ due to the harmonic series). There are $$$\dbinom{n}{n-2ki}$$$ ways to choose any set of $$$n-2ki$$$ elements to be left over. If we want to ensure that no remaining elements could be the central element for an operation, then all elements remaining must come either before all but $$$k$$$ deleted elements or after all but $$$k$$$ deleted elements. This gives $$$2k$$$ positions where they can enter into the ordering, and we can count the number of ways using stars and bars.
Ahh, I get it now; couldn't get to it in the contest
appreciate the time and effort
oh it was subsequence not substring T-T
I didn't understand the end of your explanation: what does "then all elements remaining must come either before all but k deleted elements or after all but k deleted elements" mean?
I couldn't get the end of your explanation, namely how we can count the orderings in which no remaining element could be the central element for an operation. How did you reach the formula in your code? Can someone please explain it with some examples?
I think I can solve I if I can compute the first n terms of https://oeis.org/A167510. Is it the same sequence you ran into?
Yep, that's the one. I skimmed a couple of the papers linked from OEIS but none of them got me anywhere especially useful (though for my idea to work I would need to either do some convolution magic I don't see how to do/which might be impossible, or I would need a closed form that works out nicely for my computation).
I think convolution magic works. We need to compute stuff like dp[i]+=dp[j]*seq[i-j], and if we use divide and conquer, then the influence of the first half of the terms on the last half can be done using convolution?
Ahh, I see--thanks for the explanation! I think that trick is probably well-known by now (in the sense that I've seen a decent number of problems possibly made easier by something like that), but I haven't seen it before :)
https://codeforces.me/blog/entry/111399 has a few links about it in the "CDQ convolution" section.
If you check the GF terms in detail, or in some paper linked in the OEIS, you will find this form:(where $$$U=\frac{1-\sqrt{1-4t^2}}{2t}$$$)
Computing the first $$$n$$$ terms of the generating function in $$$U$$$ in $$$O(n\log n)$$$ time by brute force and plugging the expression of $$$t$$$ into it suffices.
When you say "plugging the expression", isn't it computing formal power series composition for which there is no fast algorithm according to https://codeforces.me/blog/entry/56422?#comment-401173 ?
Hm, I did some searching, and apparently this paper claims that, specifically for this polynomial for U, composition can be fast.
This can be done in $$$O(n \log^2 n)$$$ time using the fact that $$$xU^2-U+x=0$$$.
With that, $$$U^{n+2}=\frac 1x U^{n+1}-U^n$$$, so writing it in the matrix form gives a divide and conquer solution.
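Spelling that last step out (just the algebra, using the relation $$$xU^2 - U + x = 0$$$ from above): multiplying it by $$$U^n/x$$$ gives $$$U^{n+2} = \frac{1}{x}U^{n+1} - U^{n}$$$, i.e. $$$\begin{pmatrix} U^{n+2} \\ U^{n+1} \end{pmatrix} = \begin{pmatrix} \frac{1}{x} & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} U^{n+1} \\ U^{n} \end{pmatrix}$$$, and a divide and conquer over products of this matrix yields the required powers of $$$U$$$ (the $$$\frac{1}{x}$$$ is harmless here, since $$$U/x$$$ is itself a power series).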
Could you please explain C? I read others' explanations but still can't understand the solution.
If $$$S$$$ was a multiset, we can easily see that the optimal strategy would be to let $$$S$$$ contain $$$a_i + i$$$ for all $$$i$$$. The trick is to deal with the fact that $$$S$$$ is not, in fact, a multiset. The idea is that by waiting to take $$$a_i$$$ from our set until some element with a lower index has been removed, we can add $$$a_i + i - 1$$$ to $$$S$$$ instead. So while our multiset has multiple copies of some element $$$x$$$, replace all but one of them with $$$x-1$$$, and do this until all elements are unique.
There are some details left to prove, e.g. showing that we can't achieve a larger set $$$S$$$ in the end and showing that we can decrease the index of all elements of $$$a$$$ in the desired way without ever trying to reduce the index of $$$a_0$$$. However, these aren't especially hard to work out, and the general idea is as given above. (To implement, I built the original multiset directly as a map from values to the number of times that value appears. Then, I repeatedly took the largest element $$$x$$$, output its value, and if there were $$$y$$$ copies of that element, added $$$y-1$$$ copies of $$$x-1$$$.)
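For reference, a minimal sketch of that map-based greedy (my own transcription of the description above, not the original submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        long long n;
        cin >> n;
        // multiset of a_i + i, stored as value -> multiplicity, largest value first
        map<long long, long long, greater<>> cnt;
        for (long long i = 1; i <= n; i++) {
            long long a;
            cin >> a;
            cnt[a + i]++;
        }
        // repeatedly keep one copy of the current maximum and demote the rest to x - 1
        while (!cnt.empty()) {
            auto [x, c] = *cnt.begin();
            cnt.erase(cnt.begin());
            cout << x << ' ';
            if (c > 1) cnt[x - 1] += c - 1;
        }
        cout << '\n';
    }
}
```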
Can anyone please share the proof of why it is always possible to achieve the values $$$(\alpha-1, \alpha)$$$ whenever we have two colliding occurrences of $$$\alpha$$$?
It makes intuitive sense, but I'm interested to see the proof for learning how to prove such combinatorial stuff.
Solved it. Thanks.
Reading questions wastes loooooooooots of time, maybe I should improve my reading comprehension skills
5 problems in 1:15 let's gooooooooooooooooooooooooooooooooooooooooooo I'm getting back to blueeeeee
just wait for skipped
:| sure
C became a lot easier when I noticed the output doesn't require the number of elements in the array $$$b$$$ — because this means it's always optimal to greedily assign the largest value possible while avoiding all duplicates. If there were cases when duplicates could be optimal, then the checker would have to know the number of elements in $$$b$$$ in advance before reading the actual elements.
Or, you know, the checker could use readLine
Yes, but in most cases the checker ignores whitespace, so even when the statement says to print $$$x$$$ lines, people can just print them on one line. If the checker were to check each line exhaustively, then the statement should clarify it imo.
Or, because the answer is unique, the checker could just check if output files are identical (disregarding whitespaces)
That's also true. Though it would be clearer for the checker to identify which number corresponds to which test set. I think the current checker kind of assumes that the participant will always output exactly $$$n$$$ numbers for each case.
My solution for problem C.
Look at an element $$$a_i$$$. We can get from it any value on segment $$$[a_i + 1, a_i + i]$$$.
But for any chosen values on the segments for all elements, can we achieve those values? Yes. There are $$$n!$$$ ways to choose values on the segments for all elements. And there are $$$n!$$$ ways to choose a permutation of "take" operations. And there is a bijection between them: if we change the permutation, some values on the segments will also change.
Now we have $$$n$$$ segments. We want to choose one number from every segment and maximize the minimal not-appearing number. Can we sort segments by right borders in decreasing order and take them greedily? Well, no. It can happen that a segment with a big right border has a small left border, which we could use later.
We can achieve here $$$9, 8, 7, 6, 5, 4, 3, 2$$$
Instead, we can use this greedy. We will take the biggest left border of a segment, but we will choose only from those segments whose right border is not less than the number we are going to take now. I achieved it by having two sets. I "iterate" over the numbers in decreasing order. The first set contains segments whose right border is still too small, and sorts them by that right border. The second set contains segments whose right border is big enough, and sorts them by left border. I take elements from the second set, every time decreasing the number by $$$1$$$ and moving segments which became "available" from the first set to the second.
How to solve D2? I did DP for D1, but I have no idea how to solve it for such a large n.
https://codeforces.me/blog/entry/125840?#comment-1118365
Cool problemset! Really enjoyed the contest!
so fast SYSTEM TESTING :)
This is how I came up with the solution to Problem C:
At any moment, for index i in array A, we only need to consider how many elements before i were deleted, as the elements deleted after i do not update its index in A.
Now, consider the following cases for an index j < i in array A.
$$$ Case 1 : A[j] + j < A[i] + i $$$
We can delete element at ith index before deleting element at jth index.
$$$ Case 2 : A[j] + j > A[i] + i $$$
If we delete the j-th element before deleting the i-th element, then the value of the i-th element in array B will be less than A[i] + i, since the deletion of the j-th element decreases its updated index. I can always get an array B' containing two elements with values (A[j]+j, A[i]+i), which will be lexicographically larger than array B.
$$$ Case 3 : A[j] + j = A[i] + i $$$
It is better to delete the j-th element first, immediately followed by the i-th element, since in the other cases we may end up with one less element in our array B.
So, considering all these cases, for any index i we only need to check whether the last element in array B is equal to the value of element i, i.e., (A[i]+i).
Here is my accepted code.
G is such an amazing problem! The solution felt like magic, shame I didn't have time left for it during the contest
How are the rating changes decided for div1+2?
use carrot extension or cf predictor
guessforces
The description of D is terrible
That was the point. It would be B if it were described clearly.
Ok, I'll take the bait. What do you think is a better way to write the statement of D?
As a participant hope to finally reach pupil :3
How does this solution (quadratic time) pass for D2? Seems like weak tests.
Compiler optimizes the inner loop to take constant time instead of linear.
compiler is probably smart enough to optimize
with
other than that: this solution is not quadratic
Oh, I didn't know it was a thing! What if it was something like:
Would the compiler optimize it to dp[i] += 13 * i?
Honestly, I'm not really aware how exactly it works, I just know that sometimes the compiler can do magic optimizations :)
I guess you can experiment with it on your own
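For what it's worth, here is a toy snippet (my own example, not the submission discussed above) that you can use to see this effect: with -O2, mainstream compilers can typically collapse the inner constant-increment loop into a single multiplication, effectively dp[i] += 13 * i, which you can confirm by inspecting the generated assembly (e.g. g++ -O2 -S).

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 200000;
    std::vector<long long> dp(n, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i; j++)   // looks quadratic...
            dp[i] += 13;              // ...but the body is just a constant increment
    printf("%lld\n", dp[n - 1]);
}
```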
Nice Contest.
great round as expected. good job!
For problem F, a straightforward brute force on a trie is as follows.
Build a trie on all values, and greedily iterate on i from 22 down to 0 to see whether 2^i can be added to the answer. Maintain two candidate trie-node lists L and R, which mean that, at the current stage, for any leaf in the subtree of any node in R paired with any leaf in the subtree of any node in L, there exists a valid mask that achieves the current maximum possible answer. For each layer, if there exists a node in R that has a right child and a node in L that has a left child, then we can greedily add 2^i to the answer just by letting the i-th bit of the mask be 0, and we update L, R with all left children of nodes in L and all right children of nodes in R. Otherwise, 2^i can't be added to the answer, and all children of nodes in L and R remain candidates.
In this brute force, although the addition of value is O(logN), the candidate list can grow as large as the size of N, which certainly won't fit the time limit. However, if we modify the simple brute force a bit by limiting that we have at most 100 candidate nodes for each layer (and simply throwing away other possible candidates), the solution magically passes all the tests.
Link to my submission: https://codeforces.me/contest/1930/submission/246909255
It's the first solution I came up with at a first glance at the problem, but unfortunately I didn't have enough time to write it down during the contest, due to too much time wasted on debugging D2, for which I used a far more complicated method than intended. I think more data should be added to hack such a simple "brute force + greedy" solution.
I hacked your solution. Here is the code for the hack https://pastebin.com/Kq4juNnM
I do agree that stronger tests should be added for these kinds of solutions. But to come up with my hack test, I needed to analyze your pruning: which side you prioritize keeping. In your code, you keep all the right descendants of the left candidates, so my test won't work if you just slightly change this priority. Imo it is not straightforward to come up with general tests that rule out these kinds of solutions; it might need several tests to do so.
Thank you for coming up with the hack! I totally agree that it's not easy to hack such solutions, and it's actually a general challenge to set up tests for the category of "find the best" CP problems, where the intended observations can be substituted with simple randomness or clever heuristics that also work within the time limit.
It was a great experience thank you very much
Can somebody give a proof for problem C?
It comes down to 2 basic cases: we can pick the maximum of the current array of (a[i]+i) if it is unique, or we take the first occurrence of the maximum (a[i]+i). If we take any other occurrence, then the smallest i for which a[i]+i (say, k) was maximum remains unchanged; then the next time that value gets picked, we already have the same value in the set, so it won't count and the element will be wasted. If we choose the first occurrence, then for all other occurrences k becomes k-1, so in the next iteration the same process is repeated and the duplicate updated value will be used when it occurs as the first occurrence.
I got this, but I don't understand why putting
cnt[k-1] += cnt[k] - 1
doesn't mess it up. Shouldn't it also change for all other elements? Why does changing just the max elements work, while that max element can affect everyone?
Every copy of the max element then becomes a copy of max-1, as we always choose the first occurrence of max. Again, if there was already a max-1 in the array, then all the copies of max-1 become copies of max-2, and so on, until each copy becomes a unique max of the current array. Try dry-running it for 3 2 1 7 8 9 6 5 4.
got it
Yeah:
We rewrite the array as v[i]+i for all i.
It is easy to see that we can do no better than the array with the duplicates "subtracted by 1" until they're unique.
Now we need to prove that we can achieve that.
First of all, let's show that we can achieve an array with all N elements. To do so, we go to the max element, and select the first occurrence of it. Every element after is decremented, so all following occurrences become max-1. We repeat the process.
Now, why is this suboptimal for the problem? Because while the length is kept at N, sometimes we unnecessarily start removing too early in the array, harming the elements after. For example, an initial array of [5,1] (in the modified format) will become [5,0], when it should be [5,1] if we start taking from the 1.
Taking the 1 won't hurt the 5, and since they are not identical they are "independent", so surely it must be taken first. We formalize this notion in the following algorithm:
Assign priority 0 to all elements initially. Priority P means that exactly P elements from before it in the array need to be selected prior to it.
Find the first occurrence of the max element and keep it at "priority 0". (This is natural: if we select something before it prior to selecting it, we will lose the max, so it must be priority zero.)
Similarly, all the other occurrences of the max have their priority incremented to "1", meaning there must be 1 element selected before they can be selected. They are also decremented, as we know they will be decremented before being selected. This is unchanged from before.
Now we look at the earliest occurrence of max-1 (which can be either from the initial array or a decremented max). All other occurrences have their priority set to its priority (that of the first max-1) plus 1. That means that if it originated from a max, then these have P=2, because they must be selected after it.
Once we are done, all the numbers have priorities. FROM RIGHT TO LEFT, we select an element with priority 0, take it out, and decrement the priority of everything afterwards.
What this does, is when elements have the same "priority" we take the rightmost one in order to prevent the earlier ones from being decremented. This makes sure we don't needlessly decrement later elements by unnecessarily selecting earlier ones. Just satisfying the priorities means we will be taking all the elements as we specified earlier. (because it will appear in our set as v[i]-initial P)
Furthermore it is guaranteed that there will always be a priority 0 element (or array is empty and we are done), because a 0 must be followed by a 1 which will in turn become a 0 (and everything else decrements as well), so this algorithm completes.
int main() { int t; cin >> t;
}
How did the above solution pass? Like, when we are sorting and then doing prev-1 (the first if condition inside the loop), that handles the case when there are multiple duplicates, but in the else statement don't we need to know the number of integers that were deleted from the left side of the newly encountered arr[i]?
When will the prize winners be announced?
winners are informed by mail
Have the mails already been sent to the winners, or not yet?
Prolly takes some time
3 days have passed; please announce the winners in your blog or in the comment section.
Any proof of correctness for the greedy solution to D?
I didn't know about any greedy, what's that? ... It can be done using DP --> 246868553 in O(n) though
check for smaller cases like 10101 or 11100 or 11111
Choosing segments of length 2 is the optimal approach, as choosing len > 2 will require 2 or more 1s to make it the mode.
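For reference, here is a minimal sketch of the greedy f(s) implied by the comments above (my own illustration, matching the "one placed 1 covers three consecutive positions, so jump to i+3" idea from the dp discussion earlier in the thread); it can be run directly on the small cases suggested above:

```cpp
#include <cstdio>
#include <string>

// Greedy count of 1s needed for a single string p: when an uncovered 1 of p is
// found at position i, one 1 placed in q covers positions i, i+1 and i+2, so we
// jump ahead by 3.
int f(const std::string& p) {
    int n = (int)p.size(), ones = 0;
    for (int i = 0; i < n; ) {
        if (p[i] == '1') { ones++; i += 3; }
        else i++;
    }
    return ones;
}

int main() {
    for (const char* s : {"10101", "11100", "11111"})   // the small cases suggested above
        printf("f(%s) = %d\n", s, f(s));
}
```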
Tutorial plz.
Thank you, Elegia, so much for problem I! I've never felt as ecstatic about solving any problem in my life!
I got the answer for E but didn't implement it because I thought it would TLE; what a pity XD
Successfully became an expert!
Congratulations
Tutorial plz.
Can someone prove my solution for C? 246855499
I was messing around with ideas, and the idea that you can always choose in a pattern such that the count will be maximal seemed plausible, so I just sent it and it was AC
Can someone prove it? Thx
Tutorial plz.
Please update the first solve for problem I to Kapt
Good level questions...
Tutorial plz.
Is it easy to cheat on Codeforces contests these days? Because Sputnik123, from our country, solved 5 problems. I'm not saying the problems in this think-cell round were too hard, but in a real-life contest (our NOI semifinal, yesterday) he scored much lower than me. You can also check his submissions. I don't really know whether the solutions were leaked, but I am sure this guy is very suspicious and cheated.
Here is his first submission for problem C; this code gave a compilation error.
If you check this submission, you will realize that he forgot to change the map name. Lol, he can't even cheat. (Check the submission link if you want to see.)
And after a few more compilation errors he got accepted. The errors were caused by silly things. Also, in the accepted submission there are functions in the template that are never used, and they were not in the first submissions. This is suspicious too. Who can tell me which of these functions could be used in problem C?
And lastly, I hope MikeMirzayanov will start a big plagiarism check. Sorry for the English grammatical errors. Have a nice day!
In problem C, after reverse sorting, I used if(a[i]==a[i-1]) a[i]--; while the correct answer uses a[i]=min(a[i],a[i-1]-1); what is the difference?
If your goal is to make sure that a[i] is less than or equal to a[i-1]-1 in all cases, the second snippet is the correct one: it caps the value on every step, so a[i] can never end up greater than a[i-1]-1, whereas the equality check only fires when the two values are exactly equal.
How is the first one incorrect?
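For what it's worth, a tiny experiment (illustrative only) shows where the two updates diverge:

```cpp
#include <cstdio>
#include <algorithm>
#include <vector>
using namespace std;

int main() {
    vector<long long> a{5, 5, 5}, b{5, 5, 5};   // already sorted in decreasing order
    for (size_t i = 1; i < a.size(); i++) {
        if (a[i] == a[i - 1]) a[i]--;           // rule 1: fires only on exact equality
        b[i] = min(b[i], b[i - 1] - 1);         // rule 2: always caps at previous - 1
    }
    for (auto x : a) printf("%lld ", x);        // prints: 5 4 5  (the third 5 slips through)
    printf("\n");
    for (auto x : b) printf("%lld ", x);        // prints: 5 4 3
    printf("\n");
}
```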
All of this was in response to a comment I made in this post asking how 2ball did A to C in eight minutes. He reached out to me in the DMs, and here is what transpired. I am not accusing him of anything other than being vulgar and disrespectful; a 20-something-year-old should not retaliate by calling someone the N word. I am attaching the pictures of the chat. MikeMirzayanov, please take the necessary action. Edit: I don't think I am able to embed the image link properly: https://imgur.com/a/o00YXuU
Thank You So Very Much
What is the probability of me getting a pair of think-cell socks, modulo 998244353?
246822702, 246822598: they are indeed the same, but...
Why would you give a cheating verdict for problem A, lol? The problem is literally telling you to write this code; I can't write it any other way.
satyam343, MikeMirzayanov
For what it's worth, my code is also literally the same: 246820744
Exactly, I mentioned this previously here too. This problem A shouldn't be considered for plagiarism; it's too obvious, and there's a really high chance of false positives.
Sorry, that's a coincidence. This code was written independently by myself. Please return my rating.
246852013 and 246856579: it's normal to have the same approach.
Hey, I got a mail today for plagiarism in 1930B. It says that anish.naruto/246839764 and Rafantasies/246883667 have a match, but that's completely normal for this question. The logic was pretty straightforward, and I think a lot of people would have done it with the same logic. Also, the way of implementation is different in my code, using long long and other things. I don't even know the guy whose code mine is matching. Please remove the plagiarism verdict, as it was my own original code.
Sir, please take a look and support us with your opinion on this coincidence.
skull
When will the owners of the socks be announced?
Why was the contest skipped for me? I didn't cheat. Actually, I use a friend's template; most of the time I only need to change around 10-15 lines in this snippet. Is using a friend's template or any code snippet a rule violation?
Why is selecting 200 people randomly so time-consuming? It has been almost 2 weeks.
Please add a note to the blog if the winners have already been determined, and (maybe) whether a private message or email has been sent to them.
Kind regards.
Congratulations to winners of the prizes! In a few weeks, you will be contacted via private messages with instructions to receive your prize.
As usual, we used the following two scripts for generating the random winners; the seed is the score of the winner.
Top 3
Socks winners
Thank U <3
my first problem solving prize
Thank you, I am really happy to hear this, sir @KAN. When will I get it?
Thanks a lot. This is the first time I am getting a prize from Codeforces.
Thanks!
I am very, very happy, Codeforces --->> Thank you so much. Love you CF.
After a long struggle in CP finally got a prize... YAY !!! ;)
Waiting for socks :))
Congratulations! Please give us your LinkedIn ID.
Can't wait to set Green Socks as DP :-)
Thanks!
First ever cp prize! thanks
Yayaaayyy! First time getting excited for socks :) Thank you KAN
Best Socks i can ever get :)
Yayy!! Socks!!! :D
Just curious: did anyone get contacted?
Not Yet!!
Me neither.
Thanks a lot !!
Eagerly waiting for it
Alhamdulillah, my first prize from CP... thanks a lot........
Wow!
Has anyone received theirs yet??
Not yet, so far.
Are they seriously going to deliver? It's been 3 months!
I just noticed that I won these socks :)
Will it still be delivered? ;)
Did you receive the prizes? I didn't get mine yet
neither have I.
Disappointed by their commitment. Despite this being such a trusted platform and the sponsor being a respectable company, I didn't expect them to be so negligent. I think they don't bother about such consolation prizes and favor only the high performers in the contest. Una_Shem, can you please tell us whether there is an update or not?
Hey KAN, when will the socks be delivered? Seriously, it has been 4 months with no updates. If they can't be delivered, please at least let us know.
Got them today thank you so much :3
Me too! Got them today! Thanks so much think-cell!