Introduction:
Hello Codeforces! In this blog, we will explore a technique for efficiently handling insertions and deletions in arrays using partitioning and deques. I cannot be certain whether this idea has been explored before, but I have done extensive research and consulted with friends to see if a similar concept exists. If you have come across this idea before, I kindly request that you share the relevant link and credit the original author. And if no similar idea exists, then it seems I've stumbled upon my very own data structure creation LOL. I have developed a method that leverages partitioning and deques to optimize array operations, and I invite you to read about this efficient data structure; your feedback and insights are welcome.
Understanding Partitioning and Deques:
Before delving into the technique, let's briefly understand the key concepts involved:
1.1 Partitioning
Partitioning involves dividing an array into equal-sized parts. In our approach, each partition contains sqrt(n) elements, where n represents the total number of elements in the array. The final partition may have fewer elements if the array size is not a perfect square. This partitioning ensures a balanced distribution of elements.
1.2 Deques (Double-Ended Queues):
Deques are data structures that allow insertion and deletion at both ends in constant time. They provide efficient access to elements at the front and back, making them ideal for our purpose.
Efficient Deletion: Let's explore how we can efficiently delete an element at a specific index, k, using the partitioning and deque technique:
2.1 Identifying the Corresponding Part:
We divide k by sqrt(n) to determine the index of the deque that contains the desired position.
2.2 Removing the Element:
Within the identified deque, we can delete the element at index k % sqrt(n) efficiently, taking O(sqrt(n)) time.
2.3 Adjusting the Deques:
After the deletion, we need to maintain the equal-sized partitions. Starting from the deque that follows the one we deleted from, we sequentially move the front element of each following deque to the back of the previous one. This shifting takes O(1) time per deque, for an overall O(sqrt(n)) across all O(sqrt(n)) deques.
Efficient Insertion: Now, let's explore how we can efficiently insert an element at a specific index, k, using the partitioning and deque technique:
3.1 Identifying the Corresponding Part:
Similar to deletion, we divide k by sqrt(n) to determine the index of the deque that contains the desired position.
3.2 Inserting the Element:
Within the identified deque, we can efficiently insert the element at index k % sqrt(n), taking O(sqrt(n)) time.
3.3 Handling Full Deques:
After the insertion, we need to maintain the equal-sized partitions. Starting from the deque we inserted into, we sequentially move the back element of each overflowing deque to the front of the following one. This shifting takes O(1) time per deque, for an overall O(sqrt(n)) across all O(sqrt(n)) deques.
Time Complexity Analysis:
The partitioning and deque technique offers significant improvements in time complexity compared to traditional methods:
Deletion: O(sqrt(n))
Insertion: O(sqrt(n))
Access: O(1)
Examples:
Let's consider an array of size 25. We divide it into equal-sized parts, each containing 5 elements. The partitioning would look as follows:
- Part 1: Elements at indices 0 to 4
- Part 2: Elements at indices 5 to 9
- Part 3: Elements at indices 10 to 14
- Part 4: Elements at indices 15 to 19
- Part 5: Elements at indices 20 to 24
Scenario 1: Deleting an Element at Index 7
Identify that index 7 belongs to Part 2.
Remove the element from Part 2, taking O(sqrt(n)) time.
Adjust the deques by shifting the front from Part 3 to the back of Part 2, Part 4 to Part 3, and Part 5 to Part 4. This shifting operation takes O(sqrt(n)).
Scenario 2: Inserting an Element at Index 13
Identify that index 13 belongs to Part 3.
Insert the element into Part 3, taking O(sqrt(n)) time.
If Part 3 is now overfull, shift the back of Part 3 to the front of Part 4, and the back of Part 4 to the front of Part 5. This shifting operation takes O(sqrt(n)).
Conclusion:
By leveraging partitioning and deques, we can achieve efficient insertion and deletion operations in arrays. This technique reduces the time complexity from O(n) to O(sqrt(n)), resulting in significant performance improvements.
I hope this blog has provided you with valuable insights into the efficient technique of using partitioning and deques for array operations. Feel free to reach out to me with any questions, suggestions, or ideas via direct message (DM) or the comments section.
you can find the implementation on GitHub
Note: you don't have to use all of this code. It may be a bit long because the goal was to build the full structure, so feel free to customize it.
thanks to AliRagab313, _Mahmoud_Ayman, _heisenberg, Queen.., Moataz_Muhammed for their valuable contributions through testing, suggestions, and consultation. Their efforts have been truly appreciated.
I'm facing a naming dilemma for this structure, and I would love to hear your suggestions in the comments section. Some ideas that have been floating around include "mesh map", which in Egyptian slang humorously means "not map" xD, or "SRT" (Sherif's Root Tree), a playful choice inspired by my name, although it may look narcissistic lol.
Thank you for reading the blog ^^.
nice! it can be used to solve this problem https://open.kattis.com/problems/starwarsmovies
Yeah that's right!
Yes, I was able to solve it. Thank you for sending the problem.
1437vszombies came up with a similar technique a long time ago https://www.luogu.com.cn/blog/384596/yi-zhong-ji-yu-fen-kuai-di-shuo-ju-jie-gou (Chinese version), it might actually be better since it does not involve deques but only a tag for shifting.
Although the exact details may be unclear to me due to the language barrier, the mechanism seems to involve shifting elements either right or left, depending on the operation, within a specific section of the array of size sqrt(n). So how can it keep the relative order of the elements after that specific part, and still have constant access time?
It’s the same procedure, only it uses circular buffers to simulate the deques. I expect it to be much faster, as it only involves doing swaps (no memory allocations) and is more cache friendly. Also, the fact that the array is pseudo-contiguous might help with things like cache friendliness.
Yes, this approach is much better than mine. Thanks for clarifying.
https://dmoj.ca/problem/ccc17s5 I believe this problem can be solved with this.
Yeah! With a small modification to store sums, this can be solved. Thanks for sharing ^^
The problem 455D - Serega and Fun can be solved with a similar idea (keeping sqrt deques and shifting elements).
Great work ^^, and I think SRT is a good name
Thx ^^
Great work from u bro keep going
Thx ^^
Heker
Check this out
I am a bit confused. Can this problem be solved by a treap or similar tree data structures? Isn't that faster, with O(log n) modify and query? I know your technique can do O(1) query, but I think most problems have roughly equal numbers of modifications and queries.
You're absolutely correct! There are multiple ways to solve these types of problems, such as Fenwick trees or segment trees. While this data structure may not offer the optimal time complexity, it is still effective for a wide range of cases. Its simplicity makes it easy to understand and use to solve various problems, even for beginners (like me xD).
Additionally, this concept can prove useful in real-life problem-solving scenarios. For example, when the data size exceeds available memory, dividing it into chunks can be a viable solution, and this technique can be particularly helpful in such cases. Furthermore, in situations with only a few modifications but frequent access, it can still be good enough for the job. Moreover, due to its array-based nature, it exhibits better cache locality, which can further enhance performance.
However, there are always trade-offs involved. While this technique may not be the best solution for these problems in terms of time complexity, it remains effective as a data structure for solving various problem types. I appreciate your participation and input on this matter. thx ^^
The rope data structure appears interesting to me upon initial exploration. However, after researching it further, I learned that the worst-case time complexity for insertions is O(N). I'm wondering if this can be handled?
The worst-case time complexity of the insert function for ropes should be O(sqrt(N)), as most blogs say
link
But it seems that some people also say that the worst-case time complexity of the rope's insert function is O(log N)
Really great idea, keep up the good work
thx^^
great work! this data structure works for a lot of problems :)
thx^^
Great idea my friends , keep up the good work❤️
thx ^^
I just tried to implement it on my own, and I am unable to achieve the O(sqrt(n)) complexity.
My adjust function is called too frequently, making its time complexity worse than O(sqrt(n)).
For example, if the initial array has 4 elements, I make 2 deques, but what happens after 1000 insertions? I definitely don't want 500 deques (that would make the insertion operation O(n)).
So I'm wondering how I can optimise it.
Yup, you can optimize your approach. Before inserting any new element into the array, check if the square root of the new size will surpass the initial array's square root. If it does, create a new Mesh Map with the square root increased, and copy the old one into it.
For example, if the initial array has 4 elements and, after some operations, it becomes 9 elements, simply create a new Mesh Map with a square root of 3 and copy the old one into it.
Additionally, to avoid frequent copying operations, another optimization is to double the maximum size every time you create a new Mesh Map. This ensures that the time complexity of inserting remains O(sqrt(N)).
My adjust function is actually doing that, but either way, creating a new mesh map or adjusting the current one takes O(n) time.
And given 10^5 insertion queries and an initial array of 10^5 elements, the adjust function will be called approximately 300 times.
So overall, answering all queries will take O(q*sqrt(n)) + O(n*sqrt(n)). Correct me if I am wrong.
You can simplify it further. The complexity of inserting q elements is O((n+q)*sqrt(n+q)); since q <= n, that is O(2n*sqrt(2n)), which as far as I know is O(n*sqrt(n)).
Have you heard of SortedList? It can do insertions and deletions in O(log n) time (it is implemented using Fenwick trees). Also, you are just doing sqrt decomposition, and bro thinks he discovered a new data structure. Not everything new you do in a problem means you discovered a new algorithm or data structure. Absolutely childish. I know I am rude :)
I already know that i'm not tourist. chill and have fun bro <3
I am rude,so chill and cry
I'm too lazy to even start reading this very long blog
ngl you're right xD
Hi, can you solve this problem using the described data structure? I am unable to implement it. https://atcoder.jp/contests/abc344/tasks/abc344_e
If yes, please send me the solution. Eagerly waiting for your reply, thank you.
I will try.
Some people seem to not understand that having O(log n) update and query is not always best; there are different tradeoffs for different situations...
It is probably not completely novel, and not too hard to figure out given the constraints, but it is a nice idea to think about and share.
Thanks for your opinion, appreciate it <3