Errichto's blog

By Errichto, 8 years ago, in English

Hi. I recently overcomplicated an easy-ish geometry problem and ran into the following difficulty.

Let's say I have a set of linear functions f(x) = ax + b and two types of online queries:

  1. Given a and b, insert a new linear function.
  2. Given x0, print the maximum value f(x0) over all inserted functions.

I can handle both queries in logarithmic time by using a BST and maintaining the convex hull of the linear functions. Can I do the same (easily?) using std::set in C++? The problem is that the second query requires a lower_bound that can tell whether a given linear function lies to the left or to the right of the optimal one. I think that's impossible with a plain comparator, because the answer depends on the neighboring functions (after sorting the hull by slope).

During a contest, I implemented something in O(log² n) — a lower_bound whose comparator runs an internal lower_bound to find the next linear function in the set. Later I came up with the idea of extending the linear-function struct to also store a copy of the next function in the set. It requires some extra work, but it should work and should be O(log n), right? Do you see any easier way?
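For illustration, here is a rough sketch of the "augmented struct" idea: each line in the container additionally stores p, the last x-coordinate at which it is still the best line (i.e. the point where the next line, in slope order, takes over). Insertions compare lines by slope, queries compare by p, and since the lines of the hull are sorted by both at the same time, a single heterogeneous lower_bound on a std::multiset finds the optimal line. This is in the spirit of the well-known LineContainer from the KACTL library and is only a sketch, not necessarily the simplest answer; it maximizes k*x + m.

    #include <bits/stdc++.h>
    using namespace std;
    typedef long long ll;

    // A line y = k*x + m; p is the last x for which this line is the best one.
    // p is mutable so it can be updated while the line sits inside the multiset.
    struct Line {
        mutable ll k, m, p;
        bool operator<(const Line& o) const { return k < o.k; } // by slope, for insertions
        bool operator<(ll x) const { return p < x; }             // by takeover point, for queries
    };

    // Upper envelope of lines; add() and query() are O(log n) amortized.
    struct LineContainer : multiset<Line, less<>> {
        static constexpr ll inf = LLONG_MAX;
        static ll fdiv(ll a, ll b) {                   // floored division
            return a / b - ((a ^ b) < 0 && a % b);
        }
        // Recompute x->p against its successor y; return true if y became useless.
        bool isect(iterator x, iterator y) {
            if (y == end()) { x->p = inf; return false; }
            if (x->k == y->k) x->p = x->m > y->m ? inf : -inf;
            else x->p = fdiv(y->m - x->m, x->k - y->k);
            return x->p >= y->p;
        }
        void add(ll k, ll m) {
            auto z = insert({k, m, 0}), y = z++, x = y;
            while (isect(y, z)) z = erase(z);
            if (x != begin() && isect(--x, y)) isect(x, y = erase(y));
            while ((y = x) != begin() && (--x)->p >= y->p) isect(x, erase(y));
        }
        ll query(ll x) {                               // max of k*x + m over all lines
            assert(!empty());
            auto l = *lower_bound(x);                  // heterogeneous lookup thanks to less<>
            return l.k * x + l.m;
        }
    };

So the extra information kept per element is exactly the "neighbour" data the post talks about, just compressed into the single number p.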


»
8 years ago, # |

I think an ordered set should work.

»
8 years ago, # |

Here's my implementation. It was really tricky to write, so I hope it's readable for you.

  • »
    »
    8 years ago, # ^ |

    Do I understand correctly that you store an iterator to the previous object in the set? This is roughly what I mentioned at the end of the blog (I wanted to store a copy of the previous or next object, though). Thanks for the code; I'm glad that this approach indeed works. Still, I hoped that maybe set has some function/property I don't know about that would be helpful here, so that we wouldn't have to store info about the prev/next element.

    • »
      »
      »
      8 years ago, # ^ |

      I doubt that set has such a function/property, especially since before implementing my code I read some other implementations of the same problem, and all of them stored a pointer to the previous/next element.

      Anyway, it's not very useful to have simpler code, since you can put an implementation into your library and then copy-paste it whenever you want (in the case of ICPC you will need to type it into your machine manually from your library, but that's not a big deal either, because you don't need to think while typing).

  • »
    »
    7 years ago, # ^ |

    Thanks a lot, kingofnumbers. It was very clear and readable. :-)

»
8 years ago, # |

This problem can be solved with a segment tree in quite a simple way (much easier than a BST or the STL, I think).

It's called LiChao Segment Tree in China.

  • »
    »
    8 years ago, # ^ |

    Can you explain it? Remember that queries are online.

    • »
      »
      »
      8 years ago, # ^ |

      The main idea of this algorithm is to keep in each tree node a line g() that is dominant on (part of) that node's segment.

      Every time we add a new line f() to a node, we compare it with that node's g(): the one that is better at the midpoint of the segment stays in the node, and the other is pushed down into the left or the right child, depending on which side of the midpoint their crossing point lies.

      When we reach a leaf, we simply keep the better of the two lines.

      For each query x, the answer is the maximum of g(x) over all tree nodes whose segment contains x.

      And here's the code: https://paste.ubuntu.com/24270950/
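      For reference, here is a rough self-contained sketch of the idea described above (the names and the interface are mine, not taken from the linked code): a Li Chao tree for maximum queries, assuming the query points are integers in [0, X) — otherwise compress the coordinates first. Both add and query are O(log X), and queries are answered online.

          #include <bits/stdc++.h>
          using namespace std;
          typedef long long ll;

          // Li Chao tree over integer query points in [0, X), answering max over all inserted lines.
          struct LiChao {
              struct Line {
                  ll a, b;                                   // y = a*x + b
                  ll eval(ll x) const { return a * x + b; }
              };
              int X;
              vector<Line> node;                             // the line g() kept in each tree node
              vector<bool> has;                              // whether the node holds a line yet
              LiChao(int X_) : X(X_), node(4 * X_), has(4 * X_, false) {}

              void add(Line f, int id, int lo, int hi) {     // insert f into segment [lo, hi)
                  if (!has[id]) { node[id] = f; has[id] = true; return; }
                  int mid = (lo + hi) / 2;
                  bool leftBetter = f.eval(lo)  > node[id].eval(lo);
                  bool midBetter  = f.eval(mid) > node[id].eval(mid);
                  if (midBetter) swap(node[id], f);          // keep the line that wins at the midpoint
                  if (hi - lo == 1) return;                  // leaf: the losing line is discarded
                  if (leftBetter != midBetter) add(f, 2 * id, lo, mid);      // they cross in the left half
                  else                         add(f, 2 * id + 1, mid, hi);  // otherwise in the right half
              }
              void add(ll a, ll b) { add({a, b}, 1, 0, X); }

              ll query(int x, int id, int lo, int hi) const { // max of g(x) over nodes containing x
                  ll res = has[id] ? node[id].eval(x) : LLONG_MIN;
                  if (hi - lo == 1) return res;
                  int mid = (lo + hi) / 2;
                  if (x < mid) return max(res, query(x, 2 * id, lo, mid));
                  return max(res, query(x, 2 * id + 1, mid, hi));
              }
              ll query(int x) const { return query(x, 1, 0, X); }
          };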

  • »
    »
    8 years ago, # ^ |

    I think we got that too: treat the x-coordinate as a position in a segment tree, where a node stores a line that dominates part of its segment. However, for the queries to be answered online this way, the query points must be small integers. Hope that helps.

»
8 years ago, # |

The wcipeg article mentions a way to do it in O(log n) per update/query using two sets ... I tested my implementation of it (on a problem that can be solved easily using normal CHT) here: 13683410

  • »
    »
    8 years ago, # ^ |

    It's very easy to make a mistake in this approach, so I will use something else. Thanks anyway.

»
8 years ago, # | Rev. 2

I am not sure, as I do not know much of the theory behind linear programming problems, but isn't this equivalent to regular dynamic convex hull via dualization? I think the insertion is equivalent to some extent, and for the query you might be able to keep two synchronized sets of both the lines and their duals.

Think of it like this: you insert a point (the dual of a line). This might erase some of the existing points and some of the segments between adjacent points (line duals). But the lines supporting these segments are actually duals of points, so dualizing them gives the intersections of adjacent lines in the primal plane. Viewed this way, everything is clear: you keep one data structure (set) with the convex-hull points of the dual plane (the lines), and another data structure (set) with the convex-hull points of the primal plane (the intersections).

I might be willing to learn dualization for this sole purpose, as it seems very helpful if it works the way I expect. If someone can confirm or refute my proposal, feel free to.
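If it helps, here is the correspondence spelled out under the standard duality that maps the line y = a_i x + b_i to the point (a_i, b_i). For a query at x_0,

    $$\max_i \big(a_i x_0 + b_i\big) \;=\; \max_i \big\langle (a_i, b_i),\, (x_0, 1) \big\rangle,$$

so inserting a line adds the dual point (a_i, b_i), and a query at x_0 asks for the point of the dual set that is extreme in the direction (x_0, 1). Since that direction always has a positive second coordinate, the answer lies on the upper convex hull of the dual points — i.e. the problem is indeed a dynamic upper-hull, extreme-point-query problem, as the comment above suggests.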

»
8 years ago, # | Rev. 2

You can solve it with sqrt-decomposition.

Keep an array that has all lines sorted by slope. Naively you need O(n) per insertion and O(n) per query.

This is slow, so you also prepare a normal CHT structure. Every sqrt(n) queries you merge the array into that CHT with a merge sort, so each query takes O(sqrt(n) + lg(n)) time.

Due to the terrible constant factor of the O(n lg n) code, this works about 2x faster than the O(n lg n) code. Proof: the exact same problem on a Korean OJ — the two fastest solutions use sqrt decomposition.

Also, generalizing these "buckets" into log(n) parts gives O(n lg^2 n) time complexity. You can read about it here. My code implementing this takes almost the same time as the O(n lg n) one.
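For concreteness, here is a rough sketch of the sqrt-decomposition version described above, with two simplifications: the buffer of recently added lines is kept unsorted and scanned naively at query time, and the static hull is rebuilt with a full sort instead of a merge (which makes insertion amortized O(sqrt(n) log n) instead of O(sqrt(n)), but keeps the code short). The names (SqrtCHT and so on) are made up for this sketch, and it assumes a*x + b fits in a 64-bit integer.

    #include <bits/stdc++.h>
    using namespace std;
    typedef long long ll;

    struct Line { ll a, b; ll eval(ll x) const { return a * x + b; } };

    struct SqrtCHT {
        vector<Line> buffer;  // lines added since the last rebuild, scanned naively
        vector<Line> hull;    // static upper hull of the remaining lines, sorted by slope

        // For slopes p.a < q.a < r.a: is q everywhere below max(p, r)?
        static bool bad(const Line& p, const Line& q, const Line& r) {
            return ((__int128)r.b - p.b) * (q.a - p.a) >= ((__int128)q.b - p.b) * (r.a - p.a);
        }
        void rebuild() {                          // merge the buffer into a fresh static hull
            vector<Line> all = hull;
            all.insert(all.end(), buffer.begin(), buffer.end());
            buffer.clear();
            sort(all.begin(), all.end(), [](const Line& p, const Line& q) {
                return p.a != q.a ? p.a < q.a : p.b < q.b;
            });
            hull.clear();
            for (const Line& l : all) {
                if (!hull.empty() && hull.back().a == l.a) hull.pop_back();  // equal slopes: keep larger b
                while (hull.size() >= 2 && bad(hull[hull.size() - 2], hull.back(), l))
                    hull.pop_back();
                hull.push_back(l);
            }
        }
        void add(ll a, ll b) {
            buffer.push_back({a, b});
            if (buffer.size() * buffer.size() > hull.size() + buffer.size())
                rebuild();                        // roughly every sqrt(n) insertions
        }
        ll query(ll x) const {                    // O(sqrt(n) + log(n))
            ll best = LLONG_MIN;
            for (const Line& l : buffer) best = max(best, l.eval(x));
            if (!hull.empty()) {
                // On the hull, eval(x) is unimodal as a function of the index,
                // so a ternary search finds the maximum.
                int lo = 0, hi = (int)hull.size() - 1;
                while (hi - lo > 2) {
                    int m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
                    if (hull[m1].eval(x) < hull[m2].eval(x)) lo = m1 + 1; else hi = m2;
                }
                for (int i = lo; i <= hi; ++i) best = max(best, hull[i].eval(x));
            }
            return best;
        }
    };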