Jury solution: 173498496
binary search: 173497901
classic: 173497940
Jury solution: 173498569
Jury solution: 173498668
Jury solution: 173498009
Jury solution: 173498327
What was the point of making B so hard?
Not able to see Jury solution.
You are not allowed to view the requested page
Fixed
For problem B, if we use binary search according to the editorial, then there can be multiple answers, since we are finding a segment, but the problem says the answer is unique. Could someone please explain this issue...
Theoretically there is a unique answer (can be proven to be either an integer or exactly halfway between two integers) but the problem statement (requires accuracy to 6 decimal places) and the limited precision of floating point data types means there will be a segment of meeting points that all give an answer that is considered correct.
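For reference, a minimal sketch of that binary search, using the segment form $$$[x_i-(T-t_i), x_i+(T-t_i)]$$$ mentioned further down in the comments (this is only an illustration, not the jury solution; the I/O handling and names are my assumptions):

#include <bits/stdc++.h>
using namespace std;

// Illustration only (not the jury solution): binary search on the total time T.
// T is feasible iff every person i with t_i <= T can reach a common point,
// i.e. the segments [x_i - (T - t_i), x_i + (T - t_i)] all intersect.
int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        int n;
        cin >> n;
        vector<double> x(n), t(n);
        for (auto &v : x) cin >> v;
        for (auto &v : t) cin >> v;
        auto feasible = [&](double T) {
            double lo = -2e18, hi = 2e18;
            for (int i = 0; i < n; i++) {
                if (t[i] > T) return false;        // person i hasn't finished dressing yet
                lo = max(lo, x[i] - (T - t[i]));
                hi = min(hi, x[i] + (T - t[i]));
            }
            return lo <= hi;                       // non-empty intersection of all segments
        };
        double L = 0, R = 2e8;                     // the answer is at most max(t) + max distance
        for (int it = 0; it < 100; it++) {         // fixed iteration count, no eps check
            double M = (L + R) / 2;
            if (feasible(M)) R = M; else L = M;
        }
        double lo = -2e18, hi = 2e18;              // recover a meeting point inside the
        for (int i = 0; i < n; i++) {              // (tiny) intersection for T = R
            lo = max(lo, x[i] - (R - t[i]));
            hi = min(hi, x[i] + (R - t[i]));
        }
        cout << fixed << setprecision(9) << (lo + hi) / 2 << "\n";
    }
}

Because the checker only asks for 6 decimal places, any point of the final tiny intersection segment is accepted, which is why the "non-unique" segment above is not a problem in practice.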
Thanks for the super fast editorial. The link for the solution is not working.
Can anyone explain the solution for D in a more clear and detailed way? Thanks!
Solution for D:
Note the pairs: $$$(a[0],b[n-1]),(a[1],b[n-2]),\dots,(a[n-1],b[0])$$$.
In the pair $$$(a[i],b[j])$$$, if the final position of $$$a[i]$$$ is determined, then $$$b[j]$$$'s position is also determined.
For example, if $$$a[i]$$$'s final position is $$$x$$$ in the $$$2^{nd}$$$ string, then $$$b[j]$$$'s position is $$$(n-x-1)$$$ in the $$$1^{st}$$$ string.
Now let's check the answer.
If $$$n$$$ is even,
check whether we can match the pairs two by two, $$$(a[i_1],b[j_1])$$$ with $$$(a[i_2],b[j_2])$$$, such that $$$a[i_1]==b[j_2],b[j_1]==a[i_2]$$$ or $$$a[i_1]==a[i_2],b[j_1]==b[j_2]$$$.
In other words, the number of occurrences of each pair $$$(\min(a[i],b[j]),\max(a[i],b[j]))$$$ must be even.
If $$$n$$$ is odd,
it is similar to the method above, but a bit different.
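A minimal sketch of this check (not the jury solution; the input format is an assumption, and the odd-$$$n$$$ rule used here is the one spelled out in a comment further down: the single odd-count pair must consist of two equal characters):

#include <bits/stdc++.h>
using namespace std;

// Sketch of the counting check described above (not the jury solution).
// Assumed input per test case: n, then strings a and b of length n.
int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        int n;
        string a, b;
        cin >> n >> a >> b;
        map<pair<char, char>, int> cnt;
        for (int i = 0; i < n; i++) {
            char c = a[i], d = b[n - 1 - i];
            cnt[{min(c, d), max(c, d)}]++;            // unordered pair (a[i], b[n-1-i])
        }
        int odd = 0;
        bool ok = true;
        for (auto &[p, c] : cnt)
            if (c % 2) {
                odd++;
                if (p.first != p.second) ok = false;  // an odd pair can only sit in the middle,
            }                                         // so its two characters must be equal
        ok = ok && odd == (n % 2);                    // even n: no odd pairs; odd n: exactly one
        cout << (ok ? "YES" : "NO") << "\n";
    }
}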
Thanks for the wonderful solution!
small boss,fish more
Basically, the character pairs (s1[0],s2[n-1]), (s1[1],s2[n-2]), ... (s1[i],s2[n-1-i]) are connected. If you try to fix one character of that pair at any position j of s1, the other character of the pair would have its position automatically fixed at n-1-j of s2.
e.g.
Say the strings are s1 = c1 c2 c3 c4 c5 and s2 = d1 d2 d3 d4 d5, and we do the operation with k = 2.
Observation: (c1,d5) and (c2,d4) are connected, like in a mirror, before and after the operation. They will always behave like a reflection, no matter how many times we perform the operation.
After making this observation, we now have to check if we can make 2 equal strings. If we have two of the same character pair, we can always align them so that two indices have the same character in the two strings.
Say (x1, y1) and (x2, y2) are two instances of the same character pair.
we can place them as such
x1 ... y2
x2 ... y1
We first place x1 (y1 is placed automatically). Then below x1 we place x2 (y2 is placed automatically above y1).
Now we check if all character pairs are even in count, because we will pick 2 of those pairs and resolve 2 indices at a time.
If we find a character pair occurring an odd number of times, it is supposed to be the middle character of an odd-length string. In that case, we have to check whether the pair consists of the same character.
This is my submission
Nice explanation brother :)
Thanks
Disappointed with B.
But their difficulty
Credits: k1ngash
So fast editorial :)
My binary search solution for problem B FST'd, giving WA on TC-14: 173495467. I used the same approach as the editorial, finding the intersection of segments. Any help?
The problem is how the answer was printed: you should have set the precision before printing the answer, not after.
Thank you!
When you try to cout a double variable:
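A minimal stand-in for the missing snippet (the value is taken from the checker message quoted further down in the comments):

#include <bits/stdc++.h>
using namespace std;

int main() {
    double ans = 40759558.0;                          // a large answer from problem B's tests
    cout << ans << "\n";                              // 4.07596e+07 : only 6 significant digits
    cout << fixed << setprecision(6) << ans << "\n";  // 40759558.000000 : what the checker wants
}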
What a sad sight...
B has multiple solutions, one being implementation heavy, and the others being also implementation heavy. Hands down, this is the worst round I have seen excluding the plagiarized ones.
upd: yep turns out my B solution was overcomplicated
B is definitely not implementation heavy if you have the right approach. It's incredible that you would criticize the round when your solution for D is plagiarized.
Yep the leaker was bad and my choices were wrong too
I deeply apologize, please skip my submissions.
upd: it's skipped
It's extremely short (comparatively); check my submission if you want to.
For B, just move everyone towards the person who takes the maximum time, and then print (min+max)/2.
173498541
Solution for A (https://www.youtube.com/watch?v=gHxV9jwwJyk). Solution for C (https://www.youtube.com/watch?v=VhZytayYfhk). Solution for D (https://www.youtube.com/watch?v=QWYxr-IEwE0).
These solutions were leaked on YouTube and the name of the user is clearly visible, so MikeMirzayanov, please look into this.
We can't change those kinds of people. They are a bane to CP. Let them be. And the people who take advantage of them are actually losing, although it may increase their rating.
Yeah, that's true, in the long run they are losing, but as someone who is trying to push my rating with a goal and a timeline to follow, these incidents can ruin all the fun, and they have been much more frequent in the last few months, so it's time that something is done about it.
@Brovko
I just saw his code for problem D, which by the way can be used to catch cheaters easily,
because he used some weird value '10' which equals 12592. Although I didn't try solving the problem and I don't know what he is doing with that value, it's just weird to use that kind of representation of the number. Does anyone know why he would do that, or if there are some use cases for such a representation?
B is a great problem, but I think the testers should have suggested swapping it with C. Very clever solutions.
I think that the problem writers should remind us or include a warning to use setprecision() in these types of problems, similar to how interactive problems tell you to flush.
They have mentioned it clearly at the end of the output section.
They only mentioned the error margin, not that you had to use setprecision.
Indirectly it means the same.
It does not mean the same. The error margin is about the accepted accuracy of the answer, whereas the issue is about the programming language not displaying the answer to a sufficient precision by default.
For example, one approach for solving B is to binary search for a point $$$y$$$ where the person who spends the most time to get to $$$y$$$ from a position $$$\leq y$$$ spends the exact same time as the person who spends the most time to get to this position from a position $$$\geq y$$$. However, checking for equality of floating-point numbers should be avoided (due to precision inaccuracies), so one would replace the equality check with a check for whether the difference is below some $$$\varepsilon$$$. This, however, means that the evaluated answer might differ from the correct answer by $$$\varepsilon$$$, so the specified error margin allows us to choose $$$\varepsilon$$$ such that this approach is guaranteed to be within the error margin. If the problem required a stricter error margin, then we would need a smaller $$$\varepsilon$$$ (and the maximum number of binary search iterations would increase).
However, even in this approach, when the resulting variable stores the answer within the accepted error margin, the issue is that printing this variable directly in C++ using cout would still result in "Wrong Answer in Pretest 3", because the printed answer is not as precise as the actual value of the variable.

It's especially troubling for those who realized that the answer can have at most one decimal place (in Problem B here), so they may never have considered that precision would be an issue. After all, printing a long long directly would print it to its complete 64-bit precision, so why shouldn't long double do the same? If one does not already know of this discrepancy between integer and floating-point printing, it is impossible to logically deduce this from the note in the problem statement about the error margin. It doesn't help that the given samples don't involve large answers, so they would match even without setting the precision.

This is a language-specific issue that is completely independent of the concept of error margins, where one who completely understands the latter and considers it correctly in their solution can still be ignorant of the former, with nothing in the problem alerting them of this issue. Since this issue is not related to logical problem-solving reasoning skills, it would be justified to highlight this in problems with floating-point answers, especially if it's in a Div2B problem, which is supposed to be relatively accessible (should generally be solvable with pure reasoning alone, without requiring prior knowledge of specific topics).
How is the value for problem B unique? For the given example, test case 3: 1 4 0 0,
the optimal value here is 2.5 as per the solution, but it can be 1, 2, 3, or 4 as well. Where am I wrong?
You have to minimize maximum time, not total
You have to minimize maximum time of one person, not the total time. For example, if $$$x_0 = 2$$$, for the 1st person time will be 1 and for the 2nd person time will be 2, so the result will be max(1, 2) = 2, but if you choose 2.5, then both times will be 1.5, so the result will be 1.5, that is lower than 2.
My intuition for Problem C:
I used a stack to keep track of the possible non-decreasing substring along with its indices. Then I added all stack items to another string 'ans', added the rest of the indices with their value increased by 1 (except for 9), and then sorted the 'ans' string to get the required string!
Solution : https://codeforces.me/contest/1730/submission/173460849
Thx for fast editorial)
thanks for fast editorial
they did add a new D but with the same score as E... and they didn't deal with B/C so uhh
Brovko shnirelman Lankin listen to this guy in the future...
While taking part in the contest, I also felt like the subtask with all a_i being distinct for E (D in your screenshot) would have made the contest much better and less unbalanced.
The idea about x + t and x - t wasn't at all hard to get though, and I got it and coded it in 4 minutes even being a pupil (I got WA because of a precision error; still rather new).
I think it's much simpler and faster than binary search.
It's rather easy to get if you think about how to convert this problem into a case you can solve easily, one where there is no readying time.
I am shocked by the massiveness of the d*** the author put into the editorial of D. Seems like he couldn't care less.
UPD: he added an example, but still no proofs.
I thought there was a real editorial for D a short while ago. Probably it got accidentally removed. Unless my memory is wrong
I'll fix it. Thanks for the honest feedback.
UPD: Done
I cannot make head or tail of the editorial for D.
You can check out my comment https://codeforces.me/blog/entry/107261#comment-956152 and the comment below that :)
https://codeforces.me/contest/1730/submission/173497901 --> I understood the logic for B from the editorial, but can anyone tell me why we have done 2*t[i] and 2*x[i] in problem B? Unable to figure it out :/
I used the fact from the second solution that the answer will be an integer or an integer plus 0.5, so to avoid working with doubles I multiplied all numbers by 2. You can do all the calculations in doubles without multiplying by 2, but don't forget about setprecision.
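A minimal sketch of that trick (my own code, not the linked submission): replace each person $$$(x_i, t_i)$$$ by the two points $$$x_i - t_i$$$ and $$$x_i + t_i$$$; the answer is (min point + max point) / 2, and since twice the answer is always an integer, no doubles are needed at all.

#include <bits/stdc++.h>
using namespace std;

// Sketch of the "multiply everything by 2" idea (not the linked submission).
int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        int n;
        cin >> n;
        vector<long long> x(n), t(n);
        for (auto &v : x) cin >> v;
        for (auto &v : t) cin >> v;
        long long lo = LLONG_MAX, hi = LLONG_MIN;
        for (int i = 0; i < n; i++) {
            lo = min(lo, x[i] - t[i]);      // leftmost of the replacement points
            hi = max(hi, x[i] + t[i]);      // rightmost of the replacement points
        }
        long long twice = lo + hi;          // 2 * answer, non-negative since positions are >= 0
        cout << twice / 2;
        if (twice % 2 != 0) cout << ".5";   // odd sum -> the answer ends in .5
        cout << "\n";
    }
}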
Can someone explain why, in the editorial of E, we consider only the positions $$$j_1$$$ and $$$j_2$$$ of the divisor $$$d$$$, and also how we avoid overcounting the segments?
There are two cases: when $$$d$$$ lies not to the right of $$$i$$$ (in $$$j_1$$$) and when $$$d$$$ lies to the right of $$$i$$$ (in $$$j_2$$$). If both positions lie in the segment, then we count it only in the first case (because of the extra condition in the second case: $$$l > j_1$$$); otherwise the segment contains either $$$j_1$$$ or $$$j_2$$$, and there won't be any other overcounting.
Also, if there is another position $$$j_3$$$ of $$$d$$$ lying, for example, to the left of $$$j_1$$$, then any segment where this element is the minimum will also contain the position $$$j_1$$$, so we will count it in the first case.
Thank you!
Does anyone have suggestions for resources to read about doubles and precision, like what came up in problem B?
Eventually, I got a working algorithm, but still failed some of the test cases because of precision. When I put the line
I successfully passed the test cases whereas just doing
failed. Here are the two submissions:
Failed 173508008
Accepted 173508382
When you use std::cout without using setprecision, the output will be in scientific notation, so the judge can't check the precision.

The default output format for a big floating point number is scientific notation, which was actually accepted by the judge (not sure if this is intended). The problem is that the default precision is 6 digits, so instead of 4.0759558e7, which is 7 digits, your code outputs 4.075960e7, so using either std::setprecision or std::fixed or both will fix the issue. BTW, your code std::setprecision(10); won't work because it's the return value that plays the role, not the function call itself.

For problem B I did ternary search over the time selected for everyone to meet and it worked, but I'm not sure why.
Code
How can you prove that the start time function is unimodal? Can anyone prove it? Or do we just generally apply binary search and ternary search, whichever works? Please let me know, I am curious about this. I went by trial and error during the contest and it passed for me. I could not prove it, however.
Yeah, I guessed that the start time function was unimodal and passed, but couldn't prove it.
The problem is actually a ternary search problem; I also used ternary search, in a different way, during the contest. https://codeforces.me/contest/1730/submission/173488400
I can prove it verbally: take any position to the left or right of the optimal position. Without loss of generality, let's say that at the current optimal position x0, a person who attains the maximal start-time value is to the right of x0. Then if you take any point to the left of x0, it makes that maximum value even larger. On the other hand, if you go to the right, it makes someone to the left of x0 attain an even larger value. So it is unimodal.
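To make this rigorous: each $$$t_i + |y - x_i|$$$ is a convex function of $$$y$$$, and a pointwise maximum of convex functions is convex, so $$$f(y) = \max_i \big(t_i + |y - x_i|\big)$$$ is convex and hence unimodal, which is exactly the property ternary search needs.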
Is there a solution you wanted to cut that took O(NDlogN) in E, or why are the constraints high?
Yes, there is a solution with D&C.
My D&C solution 173840282 passed comfortably. This problem is pretty similar to 1609F - Interesting Sections. I could simply copy my old code and modify it a bit.
Another solution to problem C uses merge sort.
The idea is to increment elements of the left array during the merge if an element of the right array was merged before the remaining left elements, but this increment can be performed only once across all merges. I use an array of pairs with a boolean flag to store whether the element was incremented or not.
solution: 173513587
I got WA on this submission during the contest (https://codeforces.me/contest/1730/submission/173466205), but after using "printf" instead of "cout", my submission was accepted. This is the link to the accepted version: (https://codeforces.me/contest/1730/submission/173516968). I am unable to understand the reason. Can anyone please help me out?
The reason for this is precision. I don't know what the precision of printf is, but it's enough to get accepted. The default precision of cout is 6, but you can change it with some functions. One way to do it is:
cout << setprecision(20) << your_number;
In solution 1 of Problem B, the segment should be $$$[x_i-(T-t_i), x_i+(T-t_i)]$$$.
B would have been good if the answer fitted in a double or a float. That division by 2 just ruins what would have been a very good B
edit: ok, read other comments. I need to put a setprecision in my template
For the proof of B:
if $$$x_i \le y$$$, $$$x_i + t_i$$$ is not guaranteed to be less than or equal to $$$y$$$, right? Why is the time needed $$$y - x_i - t_i$$$ then?
Can someone explain the logic? Why does the classic solution for Problem B work here?
We have 2 cases. Case 1: First you have to observe that if all t(i) are 0, then the answer is simply (min(x[1..n]) + max(x[1..n]))/2. Here you find the distance from x(i) as |x(i) - x(0)|.
Case 2: Now suppose the t(i) are non-zero. Then |x(i) - x(0)| + t(i) can be either (x(i) + t(i)) - x(0) or x(0) - (x(i) - t(i)). Thus this single coordinate can effectively be replaced by 2 coordinates, (x(i) + t(i)) and (x(i) - t(i)) [the distance from a coordinate y can be either x(0) - y or y - x(0)]. Thus this problem reduces to case 1 with the above observation.
Unfortunately, I couldn't think of the same during the contest :((
B's problem itself is good (the difficulty is higher than a normal B's, though), but what is "std::setprecision"? This is similar to mistakes like "printing Yes instead of the required YES", or forgetting to use "std::endl" to flush the output for interactive problems.
In the future, should we add hints in the problem statement about those issues (or at least add test cases revealing such issues in the problem statement)? They are not related to algorithms/problem-solving at all. We might still have to learn them, but I don't want to fail contests just for learning / remembering such boring things.
Furthermore (this might be aggressive), I also believe "int overflow" shouldn't be tested in contests (it is typically not related to algorithms/problem-solving either).
I completely agree that setprecision should've been specified in the problem description.

Regarding int overflow, it's generally not an issue of the problemsetter deliberately trying to test for int overflow, but more that large values are needed to ensure that inefficient solutions do not pass the time limit. For example, you would often see values having a maximum of $$$10^9$$$, and you may need to multiply such values, so this would trigger an int overflow, but if the problemsetter were to reduce the maximum to $$$10^4$$$ (no overflow on multiplication), this would allow the programmer to, say, build an array of size $$$10^4$$$ to indicate which values are present (which might simplify the problems significantly). So a large maximum is needed to ensure that the solution must be efficient, but this might result in int overflow as a consequence (thus requiring 64+ bits like long long). Therefore, this does become a matter of algorithm/problem-solving skill (that your solution can't exploit the size of the values).

I do agree that int overflow should not be an issue for Div2 A or arguably even for B, but beyond that, I think it's reasonable to require that the algorithm must be efficient enough that it should not depend significantly on the size of the values, so int overflow is something that one needs to be aware of.
D: "interesting"
E: "5s, what's the f**k"
In the end I got a solution of $$$O(\sum d(a_i)\log n)$$$ but I think it's too slow.
After the contest, my friend told me he passed E with $$$O(\sum d(a_i)\log n)$$$.
The story tells me, lower_bound is so fast.

Why does setprecision(6) fail on B? Took me 20 mins just to debug it lol

setprecision(6) means 'round off to 6 significant figures'. It includes the integer part of a real number.
Try to use it with fixed.
The checker told me:
wrong answer 36th numbers differ - expected: '40759558.0000000', found: '40759600.0000000', error = '0.0000010'
The error is exactly $$$10^{-6}$$$ xD. Changing to precision 9 got AC though.

This happened to me too; I'm pretty sure it's because the error is slightly larger than $$$10^{-6}$$$ (it's something like .00000103, I think).
Explanation of editorial B:
1. Why is the object (in other words, the pair) unordered? Because we can easily reverse a pair. For example, if we want to reverse the pair {c1,c2}, initially:
{a1,a2},{b1,b2},{c1,c2},{d1,d2}
reverse a prefix to move {c1,c2} to the top:
{c2,c1},{b2,b1},{a2,a1},{d1,d2}
reverse the first pair:
{c1,c2},{b2,b1},{a2,a1},{d1,d2}
reverse a prefix to move {c1,c2} back to its initial position:
{a1,a2},{b1,b2},{c2,c1},{d1,d2}
We can see that each pair remains in its position, and {c1,c2} has been reversed.
2. Why can we sort the objects into a palindrome if a palindrome exists? Because here is a method to sort the objects arbitrarily. Suppose you have a sequence like this:
{a,a},{b,a},{b,a},{c,d},{c,d}
Its palindrome may be this:
{b,a},{c,d},{a,a},{c,d},{b,a}
First we reverse a prefix to move {b,a} to the beginning:
{b,a},{a,a},{b,a},{c,d},{c,d}
Second we reverse the whole sequence to move {b,a} to the end:
{c,d},{c,d},{b,a},{a,a},{b,a}
Then we have made the last position equal; repeat the work on the next position until all positions are equal.
By any chance, did you mean problem D and not problem B?
Has anyone tried finding the position X0 in problem B using ternary search? I tried, but I am not able to find why I am getting a wrong answer on the third test :( Here's my submission 173457434. Update: I had not applied setprecision(10), because of which I was getting a marginal error :-| Idk, weird solution.
Yeah I solved it with ternary search 173480072
I think this was the most straightforward solution, as you just ternary search on X_0 and check the maximum time for each person.
Yes, I also felt this approach was correct, but I didn't know that the condition (r-l)>1e-8 wouldn't satisfy the precision constraints :(
You can use a for loop which does a fixed number of iterations.
But you got WA because of the precision of the output, not the ternary search.
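A sketch combining both points above (a fixed number of ternary-search iterations plus printing with enough precision); the names, bounds and I/O are my assumptions, not any of the linked submissions:

#include <bits/stdc++.h>
using namespace std;

// Sketch of the ternary-search approach discussed above (not a linked submission):
// f(y) = max_i (t_i + |y - x_i|) is convex, so ternary search on y works.
int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        int n;
        cin >> n;
        vector<double> x(n), t(n);
        for (auto &v : x) cin >> v;
        for (auto &v : t) cin >> v;
        auto f = [&](double y) {
            double worst = 0;
            for (int i = 0; i < n; i++)
                worst = max(worst, t[i] + fabs(y - x[i]));
            return worst;
        };
        double lo = 0, hi = 1e8;                  // coordinates assumed to be in [0, 1e8]
        for (int it = 0; it < 100; it++) {        // fixed iteration count instead of an eps check
            double m1 = lo + (hi - lo) / 3;
            double m2 = hi - (hi - lo) / 3;
            if (f(m1) < f(m2)) hi = m2; else lo = m1;
        }
        cout << fixed << setprecision(9) << (lo + hi) / 2 << "\n";   // don't forget the precision
    }
}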
Why is my B always wrong on test 3, in both of my two approaches? Maybe some detail is at fault, but I can't find it. Can anyone point it out?
Maybe you multiplied two numbers up to 1e8? I found my code overflowed in one spot and it was also wrong on test 3.
did you cout a double?
As a participant who solved B in less than 10 lines in 7 minutes, I suggest it would have been better if the positions were even numbers.
I have a greedy solution for B. For the persons whose times are below the maximum t, you can just let them gather towards the person whose time is the maximum. If one person meets the person with the maximum dressing time, he stops. It is obviously the optimal choice. Then sort the positions; the midpoint of the maximum and the minimum is the answer.
Another intuition for the binary search solution in problem B:
We want to minimize $$$max(t_{i}+|x_{0}-x_{i}|)$$$, so thinking about this graphically, the graphs of each of the terms $$$t_{i}+|x_{0}-x_{i}|$$$ will be a shifted 'V' graph in the X-Y plane,
Take the fifth example in the first test case; the graph will look like this (if the image isn't visible above, please visit this link: https://postimg.cc/7bQM0WJJ).
So we see that the curve for the maximum of this group of graphs is again a V-shaped (convex) graph, and it has a unique minimum value; further, we can deduce that this minimum value will occur either at an integer or at a value ending in .5.
Now the minimum can be obtained by binary search on the variable x.
I am so angry, I submitted this solution a thousand times but I did not set the precision.
https://codeforces.me/contest/1730/submission/173553138
I tried solving B using double ans=(Min+Max)/2; and got a wrong answer on test case 3.
Submission link : https://codeforces.me/contest/1730/submission/173555196
But I tried doing the same with int ans=(Min+Max)/2; cout<<ans/2; if(ans%2!=0){ cout<<".5"; } cout<<endl; and it got accepted that way.
Submission Link : https://codeforces.me/contest/1730/submission/173555548
Can anyone help me with this?
Try to read the checker comment in the submission details.
wrong answer 36th numbers differ - expected: '40759558.0000000', found: '40759600.0000000', error = '0.0000010'
When ans is too big, std::cout<<ans will print it like this: 4.07596e+007, which loses the precision. Try to use std::cout<<std::fixed<<ans<<std::endl; or just use printf.
Hi, Brovko! In solution (1) of problem B's editorial, the segment should be $$$[x_i-(T-t_i), x_i+(T-t_i)]$$$. You have written $$$[x_i-(T+t_i), x_i+(T-t_i)]$$$.
Thanks, fixed.
For problem C, why is this showing TLE? I have used only one loop with time complexity O(n).
#include<bits/stdc++.h>
#define ll long long
using namespace std;
int main(){
    int t;
    cin>>t;
    while(t--){
        char str[200002];
        cin>>str;
        ll arr[200002];
        for(ll i=0;i<strlen(str);i++){
            arr[i]=(str[i]-'0');
        }
        ll poke=arr[strlen(str)-1];
        for(ll i=strlen(str)-1;i>0;i--){
            if(poke<arr[i-1]){
                if(arr[i-1]+1>9){
                    arr[i-1]=9;
                }
                else{
                    arr[i-1]=arr[i-1]+1;
                }
            }
        }
        sort(arr,arr+strlen(str));
        for(ll i=0;i<strlen(str);i++){
            cout<<arr[i];
        }
        cout<<endl;
    }
}
for(ll i=0;i<strlen(str);i++)
You calculate strlen(str) many times and it's slow: strlen is O(length), so the loop becomes quadratic.
Want to express more powerfully? Use Markdown.
How do I use Markdown??
If you're Chinese, just read this. If not, you can use search engines.
Nah, I am not Chinese.
So after searching, do you know how to use md (Markdown) now?
Can someone explain what exactly setprecision does in B? As only the average of the extremes is to be taken, there would be at most a single decimal place. Hence, if the sum of the extremes is even I just do integer division, and if it is odd I do the same and add 0.5 in cout. But I got WA on test case 3, while adding setprecision passed all test cases. I then just printed .5 as a string in the odd case and I passed as well.
If you add $$$0.5$$$ to an int, you get a double. If you print a double without setprecision, you may get a big error. For example, 1234567.5 is printed as 1.23457e+006.
Oh, so the problem is related to printing and not the actual values. Thanks, that makes sense.
Which classical problem is being referred to with regards to B?
Could somebody provide a link?
It's pretty easy.
If you have $$$n$$$ fixed points $$$p_i$$$, and you want to choose some $$$x$$$ to minimize:
$$$max(|p_i - x|)$$$
It's pretty obvious that for a fixed $$$x$$$ the answer will be either $$$|x - p_1|$$$ or $$$|p_n - x|$$$ (assuming $$$p_1 \le \dots \le p_n$$$).
Then we just want to minimize $$$max(|x - p_1|, |p_n - x|)$$$.
I think you can already see that $$$x = (p_1 + p_n) / 2$$$ minimizes the answer.
Nice contest, thanks!
(To me, Problems E & F are far easier than D ...)
Hello Brovko, can you please tell me one thing about problem B: what made you not think in the direction of DP? How would you describe your intuition for this? Is B solvable by DP at all, if we leave its time constraints and input size out of consideration?
When I saw that you need to minimize the time, I immediately thought about binary search. I don't know how to solve it by DP. To calculate a DP, you should use previous values to calculate the next values. But how would you do that in this problem? In which order would you calculate the values?
Can anyone tell me why my answer is wrong for the 1st test case? https://codeforces.me/contest/1730/submission/173617360
For the test case of
Your output is 0.083248376846, when it should be simply 0. Maybe your approach messes up for $$$n = 1$$$?
s2 += (char)min(s[i] + 1 - '0', 9); s2 += (char)min(int(s[i]) + 1, int('9')); (here s2 and s are strings). Using the first one I am getting an empty string and using the second one I am getting the correct answer. WHY is this happening?
Has anyone solved D? I got an idea but don't know how to achieve and implement it. The idea is that we have to make string a a palindrome and string b a palindrome, then swap the prefix and suffix of length n/2. The complex part is how to make them palindromes using the given operations. I found D implementation heavy.
Your observation skills should match your implementation skills; then you can solve it. If you want an easier implementation, you need to find more observations.
Brovko, I think in the editorial of 1730B - Meeting on the Line, in the 2nd part (i.e. the classic part): "and the two we want to replace him with — y−xi+ti and y−xi−ti". In the 1st expression the sign of y should be positive, i.e. (y-xi+ti). Sorry if I am disturbing you by pinging again and again, but I think it can be very confusing for a beginner sometimes, although it is a very small mistake, I think. I could also be wrong.
It's a dash, not a minus
Brovko, just one final question on problem B after a long time (I asked you some questions earlier). Why would binary search take $$$n \log n$$$ in problem B? Shouldn't it be $$$n \log T$$$, where $$$T = 10^8$$$? And won't it give TLE, since $$$n \log n$$$ works for $$$n \le 10^5$$$? From this blog: Your text to link here.... Why is sir shnirelman's code not giving TLE? Sorry again, but this one is really confusing me.
$$$n \log T$$$ is not $$$n \log n$$$
In 1 second the system can run about $$$10^8$$$ operations
$$$n \log T$$$ with $$$n = 10^5$$$ and $$$T = 10^8$$$ is about $$$2.7 \cdot 10^6$$$, so it's good.
tezk, why is $$$n \log T$$$ not $$$n \log n$$$? Because in the end we are dealing with $$$\log T$$$, and we take $$$n \log T$$$ or $$$n \log n$$$ combined into consideration and then check whether it exceeds $$$10^8$$$? Like, sir, from this blog: Time complexity analysis. The limit for $$$n \log n$$$ is okay if $$$n \le 10^5$$$.
This one is really confusing. Can you explain it in some clear way, if you can? It would be really helpful.
Because for time complexity I always refer to this: Time complexity analysis. So how should I think while using it?
$$$n \log n$$$ has 1 variable, but $$$n \log T$$$ has 2; they are different.
The link you gave me only tackles 1-variable problems, and the point of that post is to give a rough estimate of what time complexity might pass. When the problem becomes more complicated (2, 3 main variables, ...), you need to calculate the time complexity yourself.
BTW, yes, the complexity of that solution is $$$n \log T$$$, not $$$n \log n$$$.
tezk, if I am correct and understood your point correctly, it is like this:
From my given post link above, if I take $$$N = 10^6$$$ in $$$N \log N$$$, putting in the value gives $$$2 \cdot 10^7$$$, and the Codeforces system processes $$$10^8$$$ operations per second, so $$$N = 10^6$$$ will work because it does not exceed $$$10^8$$$ operations. Right??
And similarly, the time complexity of problem B will be $$$N \log T$$$, which gives the value $$$2.7 \cdot 10^6$$$. So it will also work fine, as it is also less than $$$10^8$$$. And the time limit of the problem is 2 seconds.
Am I thinking in the right direction for understanding time complexity?? I will not bother you again and again. Sorry.
Yes you are right.
Yes, it's $$$n \log T$$$, but $$$\log T$$$ is very small. Even if $$$T$$$ were $$$10^{18}$$$, $$$\log T$$$ would be $$$60$$$.
Does C really have an annoyingly simple (interesting) solution? Brovko
Yes. The idea of this solution is similar to the editorial but it has simpler implementation.
In B, isn't taking average of all points more accurate?
No, you need to find the maximum time, so you don't need all the points.
Why is B much harder than C, and why does a solution with time complexity $$$O(n \log n)$$$ get TLE? It just seems like bullshit to me!!!
Let's say x[] = {1,2,3,10} and t = 0 for all. Shouldn't it be optimal for them to meet at x = 2.5 instead of x = 5.5? How are we taking the average of the extremes, when we should take the median of the whole array, imo?
You should minimize the maximum time. For $$$x = 2.5$$$ it's $$$7.5$$$, but for $$$x = 5.5$$$ it's $$$4.5$$$.
In B, what if the positions are X = [1, 2, 3, 10] and t = [0, 0, 0, 0]? According to the editorial, the optimal answer is 5.5. For 5.5, total distance = 4.5 + 3.5 + 2.5 + 4.5 = 15. For 3, total distance = 2 + 1 + 0 + 7 = 10. So shouldn't the answer be the median?
The suffix minimums $$$mn_i$$$ should be left, because no matter what digit we leave after $$$mn_i$$$, it will be no less than $$$mn_i$$$, and therefore will not improve the answer.
Is it mandatory to keep the $$$mn_i$$$ on the left? Can't we simply sort the updated string? If we can do this, how would it work?

The $$$mn_i$$$ do not have to be on the left; they should be left unchanged. Yes, the final string will be sorted.
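A minimal sketch of that idea (my own interpretation of the comments above, not the jury solution): keep the suffix minima unchanged, bump every other digit by one (capped at 9), and the answer is the sorted multiset of what remains.

#include <bits/stdc++.h>
using namespace std;

// Sketch for problem C based on the discussion above (not the jury solution).
// A digit with a strictly smaller digit somewhere to its right must be moved,
// so it becomes min(digit + 1, 9); suffix minima stay; finally sort everything.
int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        string s;
        cin >> s;
        char suf = '9' + 1;                                   // minimum over the (empty) suffix so far
        for (int i = (int)s.size() - 1; i >= 0; i--) {
            char cur = s[i];
            if (cur > suf) s[i] = min((char)(cur + 1), '9');  // must be moved -> incremented
            suf = min(suf, cur);                              // extend the suffix minimum
        }
        sort(s.begin(), s.end());
        cout << s << "\n";
    }
}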
Can someone explain how the answer in problem B is unique? In test case 3, answers also seem to exist for points 2 and 3?
Ternary Search Solution for B: 262674398
Maximum time, as a function of the target point, has a minimum.
In Problem B:
In the classic solution we get the answer by using the points $$$x_i - t_i$$$ and $$$x_i + t_i$$$ for each $$$(x_i, t_i)$$$. I understand the solution, but this approach is not trivial; how can I make sure I come up with this kind of idea on my own next time?
Please help, Brovko, shnirelman
I don't think there is an optimal way, but you'll be fine as long as you practice enough (judging by your account, you are doing well).
B had the same idea as this contest's B, which had appeared earlier too: https://codeforces.me/contest/782/problem/B. Basically, it is the idea of intersecting the L, R points to yield a small segment (or possibly a point).
Sorry if I'm asking a somewhat silly question.
In question B, why do we need to double up? And why does the method of checking intersecting segments work (why max, min)?
I really don't understand.
Any help is greatly appreciated.