MODDI's blog

By MODDI, history, 11 months ago, In English

Let's say we have an array of size N (N <= 100000) and Q queries (Q <= 100000), where each query is either:

  • Add(l, r, x) — add x to every element between l and r
  • Query(l, r) — number of elements that are zero between l and r

Each add operation is performed modulo a certain fixed m (the addition is modular). Any ideas on how this can be done?


»
11 months ago, # |
Rev. 2

I think you need to learn about segment trees; this problem can be solved with a segment tree. Each node of the tree contains two values, call them {a, b}:

a -> the minimum element in the range of this node

b -> how many elements in the range of this node are equal to a

If the minimum element is not zero you can output -1 (or anything); otherwise output the number b.

The merge function in the segment tree takes two nodes (left child, right child) and can be done like this:

pair<int, int> merge(pair<int, int> lc, pair<int, int> rc) {
    // Different minima: keep the node with the smaller minimum
    // (min on pairs compares .first before .second).
    if (lc.first != rc.first) return min(lc, rc);
    // Equal minima: the counts add up.
    lc.second += rc.second;
    return lc;
}
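
For reference, a minimal sketch of how this merge plugs into a standard segment tree (the build/query names are mine; this is the classic count-of-minimums tree):

#include <bits/stdc++.h>
using namespace std;

const int MAXN = 100000;
int n, a[MAXN];
pair<int, int> t[4 * MAXN];   // {minimum on segment, count of that minimum}

pair<int, int> merge(pair<int, int> lc, pair<int, int> rc) {  // as above
    if (lc.first != rc.first) return min(lc, rc);
    lc.second += rc.second;
    return lc;
}

void build(int v, int tl, int tr) {
    if (tl == tr) { t[v] = {a[tl], 1}; return; }
    int tm = (tl + tr) / 2;
    build(2 * v, tl, tm);
    build(2 * v + 1, tm + 1, tr);
    t[v] = merge(t[2 * v], t[2 * v + 1]);
}

// {min, count of min} on [l, r]; the caller outputs the count if min == 0.
pair<int, int> query(int v, int tl, int tr, int l, int r) {
    if (l == tl && r == tr) return t[v];
    int tm = (tl + tr) / 2;
    if (r <= tm) return query(2 * v, tl, tm, l, r);
    if (l > tm) return query(2 * v + 1, tm + 1, tr, l, r);
    return merge(query(2 * v, tl, tm, l, tm),
                 query(2 * v + 1, tm + 1, tr, tm + 1, r));
}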

Sources to learn segment trees: https://codeforces.me/edu/course/2

This is one of the best sources I have ever seen.

  • »
    »
    11 months ago, # ^ |

    Yeah, this is fine, but my question is different: how would you handle the mod m operation after you add something?

  • »
    »
    11 months ago, # ^ |

    I think you missed that it is modular addition.

    • »
      »
      »
      11 months ago, # ^ |

      Precisely. I was wondering whether we can apply the mod every time we add to a node, or whether there is something smarter we can do?

      • »
        »
        »
        »
        11 months ago, # ^ |

        Do a lazy segment tree where each lazy node stores the sum of the values added to that range, modulo $$$M$$$. When you arrive at a range $$$[L, R]$$$ that lies entirely within your query range, you check the current value of the respective lazy node, let's say it's $$$V$$$. Now, you need to know how many elements in $$$[L, R]$$$ had an initial value of $$$K = (M - V) \bmod M$$$ before all queries. That's easy: you keep a map ($$$pos$$$) where the key is a value from the array and the value is an array of all the positions at which that value appears. Then you just do an upper bound on $$$pos[K]$$$ with $$$R$$$ and a lower bound on $$$pos[K]$$$ with $$$L$$$ and subtract the resulting indices.
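
        A minimal sketch of just that counting step (names are mine; it assumes the initial values are reduced mod $$$m$$$ when $$$pos$$$ is built):

        #include <bits/stdc++.h>
        using namespace std;

        map<int, vector<int>> pos;   // initial value -> all positions holding it

        // Built once from the initial array and never changed afterwards.
        // Positions are pushed in increasing order, so each vector stays sorted.
        void buildPos(const vector<int>& a, int m) {
            for (int i = 0; i < (int)a.size(); i++)
                pos[((a[i] % m) + m) % m].push_back(i);
        }

        // How many initial values equal K among positions [L, R].
        int countK(int K, int L, int R) {
            auto it = pos.find(K);
            if (it == pos.end()) return 0;
            const vector<int>& v = it->second;
            return upper_bound(v.begin(), v.end(), R) - lower_bound(v.begin(), v.end(), L);
        }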

        • »
          »
          »
          »
          »
          11 months ago, # ^ |
          Rev. 2

          Interesting solution, but what would the time complexity of each query be, something like $$$O(\log^2 N)$$$?

          But we would need to merge maps on each update, so the update function could be something like $$$O(\log^2 N \cdot num)$$$, where $$$num$$$ is the number of distinct elements in the map.

          • »
            »
            »
            »
            »
            »
            11 months ago, # ^ |

            Yes, it should be $$$O(\log^2 N)$$$. No, I don't think you need to merge maps — the $$$pos$$$ structure wouldn't be inside the segment tree; it would be something you build from the initial array and never change again.

            The update function should simply push lazy values down and update related nodes with the new value.

            • »
              »
              »
              »
              »
              »
              »
              11 months ago, # ^ |

              Yeah, but the add updates are permanent, so after each update the values in the array change.

              • »
                »
                »
                »
                »
                »
                »
                »
                11 months ago, # ^ |
                Rev. 3

                But this shouldn't matter, since the node already stores the sum of the values added to that range.

                Each node will store two things:

                • How much has been summed to that range until now (let's say it's $$$V$$$)

                • How much it needs to push down to below nodes (this would be the field storing the "lazy" aspect)

                Your update function simply updates the first value and pushes down the second one.

                Your query, on the other hand, will be somewhat dynamic. When you arrive at a range that's fully within the query range, you simply find how many elements in the range of this node are equal to $$$(M - V) \bmod M$$$.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  11 months ago, # ^ |

                  Oh right, we just need to know how much we have added to some range to find the number of elements that are equal to zero.

        • »
          »
          »
          »
          »
          11 months ago, # ^ |

          I think having a single value to represent the offset of a range doesn't end up working. Say I have nodes representing the following ranges: [1,2], [1,1], [2,2]. If I add one to the range [2,2], the values in [2,2] end up one ahead of those in [1,1].

          Later, if I query the full range [1,2], there is no single integer that [1,2] could store that would solve the problem. To actually answer using the current data, you would need to recurse further down the tree until you hit ranges that have a uniform added value. I'm pretty sure this breaks the complexity and makes you look at $$$O(n)$$$ nodes in the worst case.

          • »
            »
            »
            »
            »
            »
            11 months ago, # ^ |
            Rev. 2

            Oh, you're right. We can't do it with a single value representing the amount we added to a range.

»
11 months ago, # |

Use the sqrt decomposition technique.

  • »
    »
    11 months ago, # ^ |

    Yeah, I think this can easily be done in $$$O((N+Q)\sqrt{N})$$$.

    • »
      »
      »
      11 months ago, # ^ |

      However, the updates (the add queries) still look kind of impossible right now.

»
11 months ago, # |
Rev. 3

I couldn't come up with a solution using segment trees, but here is an $$$O(N+Q\sqrt{N})$$$ approach:

First, split the array into $$$\sqrt{N}$$$ blocks of roughly equal size. For each block, you'll keep the following data:

  • The numbers in the range

  • A number, s, representing an addition applied to the whole block

  • A map or multiset that counts the occurrences of each value mod m

For each add query, take a look at each block that is touched by the range:

  • If the range fully covers a block, increase that block's s value by x.

  • If the range partially covers a block, iterate over the positions covered by the range, add x directly to those array values, and update the information in the map/multiset accordingly.

For each zero query, take a look at each block touched by the range:

  • If the range fully covers a block, look at the occurrence map to get the number of occurrences of (m-s) mod m in the block and add that count to the result.

  • If the range partially covers a block, iterate over the positions covered by the range and increment the result when (array value + s) mod m == 0.

If you use a hash map it would be $$$O(N+Q\sqrt{N})$$$, assuming the hashing isn't exploited. If you use a tree map or a multiset it would be $$$O(N+Q\sqrt{N}\log(N))$$$.
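
A minimal sketch of this approach (code and names such as init/rangeAdd/countZeros are mine; it assumes 0-indexed positions and an unordered_map per block as the occurrence structure):

#include <bits/stdc++.h>
using namespace std;

int n, m, B;                          // array size, modulus, block size ~ sqrt(n)
vector<int> a;                        // values stored mod m, WITHOUT the block offset
vector<int> off;                      // off[b] = s, lazy addition for block b, mod m
vector<unordered_map<int, int>> cnt;  // cnt[b][v] = occurrences of v in block b

void init(vector<int> initial, int mod) {
    m = mod;
    n = initial.size();
    B = max(1, (int)sqrt((double)n));
    int blocks = (n + B - 1) / B;
    a = move(initial);
    off.assign(blocks, 0);
    cnt.assign(blocks, {});
    for (int i = 0; i < n; i++) {
        a[i] = ((a[i] % m) + m) % m;
        cnt[i / B][a[i]]++;
    }
}

void pointAdd(int i, int x) {         // partial-cover case: touch one element
    int b = i / B;
    if (--cnt[b][a[i]] == 0) cnt[b].erase(a[i]);
    a[i] = (a[i] + x) % m;
    cnt[b][a[i]]++;
}

void rangeAdd(int l, int r, int x) {  // Add(l, r, x)
    x = ((x % m) + m) % m;
    for (int b = l / B; b <= r / B; b++) {
        int lo = max(l, b * B), hi = min(r, min(n - 1, b * B + B - 1));
        if (lo == b * B && hi == b * B + B - 1)
            off[b] = (off[b] + x) % m;        // full block: O(1) lazy offset
        else
            for (int i = lo; i <= hi; i++) pointAdd(i, x);
    }
}

long long countZeros(int l, int r) {  // Query(l, r)
    long long res = 0;
    for (int b = l / B; b <= r / B; b++) {
        int lo = max(l, b * B), hi = min(r, min(n - 1, b * B + B - 1));
        int need = (m - off[b]) % m;          // the v with (v + s) mod m == 0
        if (lo == b * B && hi == b * B + B - 1) {
            auto it = cnt[b].find(need);
            if (it != cnt[b].end()) res += it->second;
        } else {
            for (int i = lo; i <= hi; i++)
                if (a[i] == need) res++;
        }
    }
    return res;
}

With $$$B \approx \sqrt{N}$$$, each query touches at most two partial blocks and $$$O(\sqrt{N})$$$ full ones, which gives the stated bound.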

  • »
    »
    11 months ago, # ^ |

    I swear all the ideas in this post came to my mind, even yours, but none of them were enough to solve the problem.

    Take a look at this case: we update a full block three times in a row, with values

    x, y, z

    For the first update, with x, we can use the values in the map, okay.

    But to update the block again with value y, we would need to recompute the correct mod values of the entries in the map before applying y, and the same again for z.

    That needs square-root time to update even a single block, which is worse than brute force.

    Maybe I am missing something; can you explain in more detail how you would update a block more than once in a row?

    • »
      »
      »
      11 months ago, # ^ |

      When a whole block is updated, it doesn't actually cause any changes to the map; it only updates the block's s value, which represents a lazy add. When later looking for zeroes, we count the values $$$v$$$ for which $$$(v + s) \bmod m$$$ is zero.

      It is true that if we don't update the values in the map, we only have the values of $$$v$$$, not $$$v + s$$$. What I do to resolve this is: instead of looking for the occurrences of zero in the map, I look for the occurrences of $$$(m - s) \bmod m$$$, since that is the value of $$$v$$$ that makes $$$(v + s) \bmod m$$$ equal to zero.
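
      For example (numbers mine): with $$$m = 5$$$ and $$$s = 3$$$, the value that becomes zero is $$$v = 2$$$, since $$$(2 + 3) \bmod 5 = 0$$$; and indeed $$$(m - s) \bmod m = (5 - 3) \bmod 5 = 2$$$, so we query the map for the count of 2.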

»
11 months ago, # |

I think you can do sqrt decomposition by breaking the array into $$$\sqrt{N}$$$-sized blocks.

Queries on full blocks are done in $$$O(1)$$$ by keeping a histogram on each block. Partial queries can be done naively in $$$O(\sqrt{N})$$$. If $$$m$$$ is small the histogram can be an array; otherwise, use hashing.

Performing an update that fully covers a block can be done in $$$O(1)$$$ by just keeping an offset on the block. You can do partial updates naively in $$$O(\sqrt{N})$$$.

Is it fast enough?