123gjweq2's blog

By 123gjweq2, 5 weeks ago, In English

For an array $$$a$$$, $$$A = \max(a)$$$.

Could such a thing actually be made more efficient than normal sorting algorithms?

  • Vote: I like it
  • +12
  • Vote: I do not like it

»
5 weeks ago, # |
  Vote: I like it +45 Vote: I do not like it

I think you can even make it $$$O(n + \log A)$$$ if you set the timeouts as $$$\log(a_i)$$$ instead of $$$a_i$$$.
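
As a rough sketch (assuming JavaScript, millisecond timeouts, and all $$$a_i \ge 1$$$; the function names here are made up for illustration):

```javascript
// Log-timeout variant of sleep sort: sleep for log2(a_i) ms instead of a_i ms.
// Since log2 is monotonic, the wake-up order still matches sorted order in
// theory -- in practice the gaps shrink, so timer jitter breaks it sooner.
function logDelay(x) {
  return Math.log2(x); // assumes x >= 1, so the delay is non-negative
}

function sleepSortLog(a, done) {
  const result = [];
  for (const item of a) {
    setTimeout(() => {
      result.push(item);
      if (result.length === a.length) done(result);
    }, logDelay(item));
  }
}

sleepSortLog([8, 2, 64, 16], (sorted) => console.log(sorted));
```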

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +29 Vote: I do not like it

    why stop there: use inverse-Ackermann instead of log

»
5 weeks ago, # |
  Vote: I like it +17 Vote: I do not like it

Look up counting sort.
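
For reference, a minimal counting sort sketch for non-negative integers (function name is illustrative):

```javascript
// Counting sort: O(n + A) time and O(A) extra memory, where A = max(a).
// Same complexity the blog is after, no timers needed.
function countingSort(a) {
  const A = Math.max(...a);
  const count = new Array(A + 1).fill(0);
  for (const x of a) count[x]++;            // tally occurrences of each value
  const out = [];
  for (let v = 0; v <= A; v++) {            // emit each value count[v] times
    for (let c = 0; c < count[v]; c++) out.push(v);
  }
  return out;
}

console.log(countingSort([3, 1, 4, 1, 5])); // [1, 1, 3, 4, 5]
```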

»
5 weeks ago, # |
  Vote: I like it +14 Vote: I do not like it

That's counting sort

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    so if it is a known thing and not just from some guy on r/programmingHorror's web project, why isn't it used as $$$the$$$ sorting algorithm?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it -14 Vote: I do not like it

      It probably is used sometimes; the thing is that it's easier to type .sort() than to write all this out, and .sort() usually won't add much to the solution's complexity anyway.

    • »
      »
      »
      5 weeks ago, # ^ |
      Rev. 2   Vote: I like it +1 Vote: I do not like it

      Because it uses $$$O(A)$$$ additional memory, not necessarily something you would want. And when $$$A$$$ is large, this becomes inconvenient.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        We might be able to optimize it to $$$O(\log A)$$$ additional memory (see my above comment).

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Because if the upper bound of the array elements is $$$O(n^2)$$$, the complexity would be $$$O(n^2)$$$ regardless of $$$n$$$. It can still be used in some special scenarios.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        By "regardless" I just mean it's dominated by the other term. That was a silly thing to say, sorry :)

»
5 weeks ago, # |
  Vote: I like it +57 Vote: I do not like it

Look up "Sleep Sort"; that is precisely what this is.

»
5 weeks ago, # |
  Vote: I like it +49 Vote: I do not like it

Hacked.

// build an array of 1000 ones, then plant a 10 at the front and a 9.9 at the back
var l = 1000
const arr = new Array(l)
for (let i = 0; i < l; i++) {
  arr[i] = 1
}
arr[0] = 10
arr[l - 1] = 9.9

// the 0.1 ms gap between 9.9 and 10 is below setTimeout's resolution, so 10
// (scheduled first) tends to fire before 9.9 and the "sorted" output is wrong
for (let item of arr) {
    setTimeout(() => console.log(item), item)
}
  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +31 Vote: I do not like it

    I think a better implementation would go like this:

    1. Create $$$n$$$ threads, $$$1$$$ for each element in the array (don't start the threads yet).

    2. Go through each thread, very quickly starting them one after the other, so we don't run into the time error.

    To really make sure we don't run into the time error, we can first sort the threads by the value of the element they contain in ascending order so that when we start thread $$$b$$$ after thread $$$a$$$, the value of $$$b$$$ will surely go in after the value of $$$a$$$ because it is larger.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +77 Vote: I do not like it

      That is brilliant! If we sort the threads by the element they contain, we can further optimize it by replacing all setTimeout(_, x) calls with setTimeout(_, 0), since surely the earlier threads will still finish first. In fact, as an even further optimization, since the sleeping is redundant now, we can skip creating the threads and timeouts as well! Now, the only thing we have left to do is find an efficient sorting algorithm.

»
5 weeks ago, # |
  Vote: I like it +15 Vote: I do not like it

If you want $$$O(n + A)$$$ sorting, you can just use counting sort. Your time based sorting is almost certainly strictly worse, since for large $$$n$$$, the processor still has to use some algorithm to determine when to wake each thread, which is likely done using something equivalent to counting sort with worse precision or a priority queue.

»
5 weeks ago, # |
  Vote: I like it +5 Vote: I do not like it

high IQ

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it -20 Vote: I do not like it

    That's what I'm saying. Who even thinks of this? It's such out-of-the-box thinking imo. It reminds me of that one short film on YouTube where everyone's head explodes when they hear an original idea. This is kind of like that, because this is an original idea in my opinion.

    I just don't think cf problems test the same thing coming up with something like this tests. It might be like IQ vs some sort of creativity or something.

»
5 weeks ago, # |
  Vote: I like it +9 Vote: I do not like it

How about making the timeout $$$\frac{1}{item}$$$? Once sorted based on this timeout, you could just reverse the final array; the sorting time would then be minimal, even less than $$$\log$$$ and inverse-Ackermann, assuming all the numbers are greater than or equal to $$$1$$$. Also, like you said, once you include multithreading, there won't be any problem related to errors in the solution.

The only restriction would be that we cannot use numbers less than or equal to zero. One solution could be to sort the negative numbers separately by taking their absolute values, then merge the final arrays and place all the zeroes in between.

By the way, pretty interesting sorting algorithm XD. Let's see how much time it takes me to get an AC with this method XD.
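
A rough sketch of the $$$\frac{1}{item}$$$ idea in JavaScript (names are mine, assuming all $$$a_i \ge 1$$$):

```javascript
// Inverse-delay variant: sleep 1/a_i ms, so larger values wake first and the
// callbacks arrive in descending order; reversing at the end gives ascending
// order. In practice the sub-millisecond gaps are far below setTimeout's
// resolution, so this mostly mis-sorts -- it's a thought experiment.
function inverseDelay(x) {
  return 1 / x; // assumes x >= 1
}

function sleepSortInverse(a, done) {
  const descending = [];
  for (const item of a) {
    setTimeout(() => {
      descending.push(item);
      if (descending.length === a.length) done(descending.reverse());
    }, inverseDelay(item));
  }
}

sleepSortInverse([3, 1, 2], (sorted) => console.log(sorted));
```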

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    296356256

    It seems Python is too slow for this algorithm :/. In case anyone can optimize it, do let me know

    • »
      »
      »
      5 weeks ago, # ^ |
      Rev. 2   Vote: I like it +5 Vote: I do not like it

      lol what I figured out is that it is way too slow for that problem. Also, the fact that $$$k$$$ goes up to the huge number $$$10^7$$$ makes it so the algorithm will incorrectly sort a lot of the cases. I tried to solve it with the threading module, but I ended up basically copying your implementation of the algorithm when the threading failed.

      It turns out that python isn't even fast enough to process $$$1000$$$ elements per second. So I had to go back and find a $$$div. 2 A$$$ that had really tiny constraints. Luckily, I found this one with one case per test, $$$n \le 100$$$, and $$$a_i \le 1000$$$. Still, I had to binary search the proper amount of time to sleep for, since if I slept for too little time, I'd get WA due to wrong sorting, and if I slept for too long, I'd get idleness limit exceeded. Finally, I got an accepted submission.

      Also, I think the problem with sorting by $$$\frac{1}{a_i}$$$ is that there are too many sorting errors, especially as $$$a_i$$$ gets higher and higher. But actually that would, in theory, be an $$$O(n)$$$ sorting algorithm lol

      tl;dr: the time complexity of this sorting algo is a bit misleading

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        damn, saw your submission history, respect for the dedication 🫡

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there a way to save the sorted elements in JS with that method?

»
5 weeks ago, # |
  Vote: I like it +3 Vote: I do not like it

If the smallest precisely measurable time in a computer is < 1 ns and we have unlimited CPU cores, then we can sort 1e9 elements within the integer range in 1 second.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Took so much time

Screenshot-2024-12-14-101830

Using log(time) is faster, but it still takes more time than expected

Screenshot-2024-12-14-102451

»
5 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

It's all good until we have negative values

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: I like it +5 Vote: I do not like it

    We can use the absolute value of the minimum negative element as an offset: add it to every number to make them all positive, then recover the actual numbers by subtracting that offset afterwards.
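
A rough sketch of the offset idea in JavaScript (helper names are mine):

```javascript
// Offset trick for negative values: shift every element up by -min(a) so all
// timeout delays are non-negative. We keep the original value in the closure,
// so nothing has to be subtracted back at the end.
function shiftDelays(a) {
  const offset = -Math.min(...a, 0); // 0 if there are no negatives
  return a.map((x) => x + offset);
}

function sleepSortSigned(a, done) {
  const delays = shiftDelays(a);
  const result = [];
  a.forEach((item, i) => {
    setTimeout(() => {
      result.push(item); // item is the real (possibly negative) value
      if (result.length === a.length) done(result);
    }, delays[i]);
  });
}

sleepSortSigned([3, -2, 0, 5], (sorted) => console.log(sorted));
```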

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

The cost of a faster running time is compensated with more memory, I guess.

»
5 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

This sort is quite incorrect

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Radix sort or counting sort maybe?
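
For example, a minimal LSD radix sort sketch (base 10, non-negative integers; the function name is illustrative):

```javascript
// LSD radix sort: repeatedly does a stable bucket pass on each decimal digit,
// giving O(d * (n + 10)) time where d is the number of digits in max(a).
function radixSort(a) {
  let out = a.slice();
  const maxVal = Math.max(...out);
  for (let exp = 1; exp <= maxVal; exp *= 10) {
    const buckets = Array.from({ length: 10 }, () => []);
    for (const x of out) buckets[Math.floor(x / exp) % 10].push(x);
    out = buckets.flat(); // stable: ties keep their order from the last pass
  }
  return out;
}

console.log(radixSort([170, 45, 75, 90, 802, 24, 2, 66]));
// [2, 24, 45, 66, 75, 90, 170, 802]
```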

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

CLRS has a nice chapter on such algorithms. It's Chapter 8 (titled "Sorting in Linear Time"), if I'm not mistaken. Check it out if you're curious.