Hello, %username%!
Topcoder SRM 649 will be held today at 19:00 MSK.
Let's discuss the problems after the contest!
Good luck && have fun!
Can't connect to the arena..
"A connection to the server could not be established"
Same for me.
Is the registration time over?
How to solve Div 1 Easy?
For a string to be uncertain, letters in different positions in the original string have to match in the final string. If the letters are more than K apart, you don't have enough deletions to make them match. So, based on this, if there are two equal letters at most K apart, such as
axxxxxa
You can delete all of the x's and one of the a's, and there is no way to tell whether the remaining a was the first one or the second one. If there are no two equal letters at most K apart, return "Certain".
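A minimal sketch of that check (the function and parameter names are mine, not the exact signature from the problem):

```cpp
#include <string>
using namespace std;

// Returns "Uncertain" exactly when two equal letters are at most K positions apart,
// so that one of them plus everything between them can be removed within K deletions;
// otherwise returns "Certain". This follows the rule described in the comment above.
string certainty(const string& s, int K) {
    int n = (int)s.size();
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n && j - i <= K; ++j)
            if (s[i] == s[j]) return "Uncertain";
    return "Certain";
}
```

On the example discussed further down in this thread, certainty("aa", 2) gives "Uncertain", since the two a's are only one position apart.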
850 seems so easy in hindsight u.u
How to solve it?
Outer binsearch, which searches for the minimal time, plus an inner binsearch, which searches for the minimal number of cuts needed in each cart to fit into that time. It's a pity I only needed to change one int variable to long long to make it AC :(
Actually, you can replace the inner binsearch with just a loop, since the structure of the cuts is such that the first layers of the "splitting" tree are fully cut, then one layer has some cuts, and after that there are no cuts at all. I saw some solutions like that, although I also used two binsearches.
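Roughly, that scheme could look like the sketch below. The modelling assumption is my own reading of these two comments: a piece produced after d splits along its history can absorb at most T - d single-cart removals within total time T, so for one line the optimal cut structure is a complete split tree plus one partially split level. All names (minCutsForOneLine, minimalTime, a, K) and the assumed value ranges are mine, not the exact SRM solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal number of cuts so that one line of n carts can be cleared within T moves
// (assumes n up to about 1e9 so the products below fit in 64 bits).
// Returns LLONG_MAX if no number of cuts suffices.
long long minCutsForOneLine(long long n, long long T) {
    if (n <= T) return 0;                        // just remove one cart per move
    long long best = LLONG_MAX;
    long long leaves = 1;                        // 2^D pieces if the first D levels are fully split
    for (long long D = 0; D + 1 <= T && leaves <= 4 * n; ++D, leaves *= 2) {
        long long capFull = leaves * (T - D);    // all pieces sit at depth D, each holds T - D carts
        if (capFull >= n) { best = min(best, leaves - 1); break; }
        long long gain = (T - D) - 2;            // splitting one depth-D piece: (T-D) -> 2*(T-D-1)
        if (gain > 0) {
            long long extra = (n - capFull + gain - 1) / gain;   // extra splits at depth D
            if (extra <= leaves) best = min(best, (leaves - 1) + extra);
        }
    }
    return best;
}

// Outer binary search on the time T: feasible iff the total number of cuts fits into the budget K.
long long minimalTime(const vector<long long>& a, long long K) {
    long long lo = 0, hi = *max_element(a.begin(), a.end());
    while (lo < hi) {
        long long T = lo + (hi - lo) / 2, need = 0;
        for (long long n : a) {
            long long c = minCutsForOneLine(n, T);
            if (c == LLONG_MAX) { need = LLONG_MAX; break; }
            need += c;
            if (need > K) break;
        }
        if (need <= K) hi = T; else lo = T + 1;
    }
    return lo;
}

int main() {
    cout << minimalTime({5}, 2)  << "\n";        // 4, matching the Div 2 500 sample discussed below
    cout << minimalTime({45}, 5) << "\n";        // 11, matching the (45, 5) question below
}
```

The two sample answers agree with the Div 2 discussion further down in this thread (5 carts with 2 splits take 4 moves, 45 carts with 5 splits take 11).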
How to solve Div 1 550?
You go from the most significant bit to the least significant bit and ask: if I flip this bit, will it be better or worse? If it's better to flip, you flip; if it's worse, you don't; and then you go to the next bit.
I did exactly that and it didn't even pass pretests.
We'll treat each bit separately.
First, note that at a bit k, if A[i] and A[j] differ in any bit above k, then their relative order is already determined by those higher bits. So we only care about pairs where A[i] and A[j] are the same above bit k. Thus, it's easy to count the number of good pairs whether we flip this bit or not (using a map or something) by doing a linear scan through A.
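A possible sketch of that linear scan for one bit (names are mine, and which of the two counts you keep per bit depends on the exact objective of the 550):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For bit k: among pairs i < j whose values agree on all bits above k and differ at bit k,
// count how many have the 0 first and how many have the 1 first.
// Flipping bit k of the XOR value simply swaps the two counts.
pair<long long, long long> countForBit(const vector<long long>& A, int k) {
    unordered_map<long long, pair<long long, long long>> seen;  // prefix above k -> (#zeros, #ones) seen so far
    long long zeroThenOne = 0, oneThenZero = 0;
    for (long long v : A) {
        auto& cnt = seen[v >> (k + 1)];
        if ((v >> k) & 1) { zeroThenOne += cnt.first;  cnt.second++; }
        else              { oneThenZero += cnt.second; cnt.first++;  }
    }
    return {zeroThenOne, oneThenZero};
}
```

The outer loop then goes over all bits independently and, for each bit, keeps whichever of the two counts the objective prefers, setting the corresponding bit of the answer accordingly; this is exactly the per-bit independence argued above.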
I guess the complexity is O(n * log2(n)), right?
My solution using map, O(n * log n * log n), got TLE. I'm not sure if my implementation was bad or if the complexity was too high. Changing map to unordered_map after the contest got Accepted, but with 1.96 s on the biggest test.
Later edit: @_ami: Yes, you are right.
My solution of complexity O(sz*log(sz)*log(N)) also got TLE on the last few cases, and I think Topcoder is not letting that pass this time. I always get TLE on one of tests 104, 105, 94, 99, or something like that.
I think it's rather O(log(N) * sz * log(sz)).
I did it in O(sz * log(N)) (assuming you can get a key in constant time with the hashmap). You can see my implementation here
Well, it can be solved without an additional map in O(sz * log(N)). For each bit we need only a linear scan. We split the initial table into groups with the same prefix in binary expansion.
When the k-th bit comes, we have to decide whether we flip it or not. For each group we count the number of inversions in both cases, and choose the (globally) better option. Then we split the groups into two or fewer parts each, and so on.
So the only problem is how to compute the number of inversions in a 0-1 table t of length n in O(n) instead of O(n * log(n)). But we can just go from left to right and remember the number of zeros seen so far, adding it to the answer each time we meet a one.
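Written out, that one-pass count for a single 0-1 group looks like this (a sketch, names mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count pairs (0 before 1) and (1 before 0) in a 0-1 array in one pass;
// choosing to flip the current bit swaps the two totals.
pair<long long, long long> pairCounts(const vector<int>& t) {
    long long zeros = 0, ones = 0, zeroThenOne = 0, oneThenZero = 0;
    for (int x : t) {
        if (x == 1) { zeroThenOne += zeros; ones++; }
        else        { oneThenZero += ones;  zeros++; }
    }
    return {zeroThenOne, oneThenZero};
}
```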
Or you can use a binary trie.
My code
It will make the complexity O(sz * log(N)), although it requires a lot of memory.
DAAAAMN!!! I would have won that TC if it weren't for that mistake
I got the 2nd fastest solve on the 550, and it would have been the fastest solve on the 850 (worth 587 pts) :( (I wouldn't have gone hacking if I had been first :P)
Though I finally found that mistake and ended up in 14th place, which is also very good, but being 1st in a TC round is a once-in-a-lifetime opportunity :<
The time limit of Div 1 Medium was a little tight. I lost a few minutes optimizing some constant factor on it :(
My Div1 Medium failed system tests, and after testing it in the room it has about a 60% probability of passing :(.
I guess that your code is O(sz * log^2)? Not sure if the 10^5 limit is intentional, to fail such solutions, but I find my O(sz * log) solution quite simple to come up with instead.
How come the answer for Div 2 500 for the case
N = 45, K = 5
is 11?
Note that each split of a single sequence decrements K by one. Meaning, if you split all of your sequences in one move, K will be decremented by the number of sequences that you have.
{45} -> {10, 35} -> {9, 9, 26} -> {8, 8, 8, 18} -> {7, 7, 7, 7, 11} -> {6, 6, 6, 6, 6, 5} -> ...
I am the writer of this contest. I started preparing the problems 34 hours before the contest began. Because of real-life time limits, I couldn't make my wolf character appear in the statement...
By the way, it was cool that there were no judging mistakes in spite of the preparation time. The tester and admins are great :)
Lol, 34 hours is very little time to prepare 6 problems :d. What exactly do you mean by "preparing"? Did you really write 6 statements, 6 model solutions, and 6 test sets in one day :o?
And aren't TC admins afraid that someone will someday fail to prepare everything in time? Do they have some spare problems?
Try to guess why this code gets AC.
On the example testcase s = "aa", K = 2 it returns "Certain".
Haha, exactly my thoughts. I had been staring at this code for 5-10 minutes, read it twenty times, and of course unsuccessfully tried to challenge it with the example test, and I still don't know why it gives "Certain" :d.
s.size() returns a size_t, which is unsigned, so I think it will be compared with the unsigned version of n-1, which is a very large number.
He is lucky
OK, Marcin_smu told me that this is because of unsigned types. I considered this during the challenge phase, but thought that unsigned + int is int :/. Is it a rule that uint + int = uint, or is it undefined behaviour or something like that?
http://stackoverflow.com/questions/5416414/signed-unsigned-comparisons
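A minimal illustration of the rule from that link: in a comparison between int and size_t, the usual arithmetic conversions turn the int into the unsigned type, so a negative value becomes huge.

```cpp
#include <iostream>
#include <string>

int main() {
    std::string s = "aa";
    int n = 1;
    std::cout << ((n - 3) < (int)s.size()) << "\n"; // 1: both operands signed, -2 < 2
    std::cout << ((n - 3) < s.size())      << "\n"; // 0: -2 is converted to a huge size_t
}
```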
Silently converting int to unsigned when comparing is just as extremely stupid as -1 % 2 == -1 >_>
Can anyone explain the example test case and some other test case for Div 2 500?
5 2
Returns: 4
One optimal solution: {5} -> {2,3} -> {1,2} -> {1,1} -> {}. We used two splits: once when splitting the sequence of 5 carts into 2+3, and a second time when splitting the sequence of 2 carts into 1+1.
I don't understand the transition from state {1,2} to {1,1}. It talks about a split, but shouldn't splitting the 2 result in {1,2} -> {1,1,1}?
Not sure what I am misunderstanding in this problem.
At each move, you can do different things to different sequences. At this step, the example splits the sequence of 2 and removes one cart from the sequence of 1.
OK, so the algorithm removes one cart from the first sequence, which is 1, and splits the second sequence, which is 2, into 1+1.
Thanks.
In the example 2:
15 4 should return 5, right?
{15} -> {7, 8} -> {3, 4, 4, 4} -> {1, 2, 2, 2, 2, 2} -> {1, 1, 1, 1, 1} -> {}
This requires 3 splits and subtracting 1 from the sequences 2 times, so the total is 5, right? How is it 6?
Hello, I have two general questions related to the Div 2 500-point problem (CartInSupermarketEasy).
Question 1: I looked through the solutions of all of the top finishers in Div2 and I noticed this: 90% of them solved this problem using top-down recursive memoization. The remaining 10% solved it using bottom-up iterative DP. Is there a reason for such an overwhelming preference for recursive memoization in this case? When I ran the timer (see below), the iterative solution performs a bit faster.
Question 2: Something seems to be totally off with my timings. When I ran my solution on my computer using max input (100,100), it took 13.4 seconds, which shouldn't pass by any means. (The recursive solution was even worse: 15.5 secs). However, when running this same solution in TopCoder (practice), it passes the (100,100) test in 47 ms. I have noticed similar behavior in the past. I am using Visual Studio (CL compiler).
Thanks!
Question 1: I think that memoization is easier to code and understand, and still fast enough.
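For reference, a minimal memoized sketch under my reading of the problem from this thread (in one move every nonempty sequence either loses one cart or is split into two, and all splits share the budget K; the function and array names are mine and the Topcoder class wrapper is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

long long memo[105][105];   // assumes N, K <= 100, as mentioned above

// Minimal number of moves to clear one sequence of n carts
// when at most k splits may be spent on it and its pieces.
long long solve(int n, int k) {
    if (n <= 1 || k == 0) return n;          // only decrements are possible/needed
    long long& res = memo[n][k];
    if (res != -1) return res;
    res = 1 + solve(n - 1, k);               // option 1: remove one cart this move
    for (int a = 1; a < n; ++a)              // option 2: split into a + (n - a) ...
        for (int i = 0; i < k; ++i)          // ... and divide the remaining k - 1 splits
            res = min(res, 1 + max(solve(a, i), solve(n - a, k - 1 - i)));
    return res;
}

int main() {
    memset(memo, -1, sizeof memo);
    cout << solve(5, 2)  << "\n";            // 4, the sample discussed above
    cout << solve(45, 5) << "\n";            // 11, the case asked about above
}
```

An iterative version fills the same table in increasing n with exactly the same transitions, which is probably why the two approaches time so similarly in practice.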
Q2: Actual clock running times depend on hardware, but I have an idea of what could be the cause. Are you using a testing plug-in that downloads the sample cases and tests your code against them, all in one run? If so, you may be recomputing dp(100,100) once for each test case.
I actually figured out what the problem was: I was running Visual Studio in the Debug configuration. Once I switched to the Release configuration, the problem was solved. It's still strange that this caused the program to run about 300x slower, as I usually don't see such a discrepancy, even when running in Debug. Another neat thing I noticed while analyzing my program is that replacing the std::min/std::max functions with min/max functions that I wrote sped up the program by a factor of 3.
UPD: And here is an article confirming my observations: std::min causing three times slowdown