This is a general question that came up while I was solving the Div2 1000-point problem from SRM 735 on Topcoder in practice. I couldn't come up with an efficient solution, so I decided to read the editorial. I understood the approach, but part of it was a series of operations implemented with a Fenwick tree.
There were two kinds of operations: answering how many elements are less than a given value at a given moment, and inserting a new element (essentially updating a count). I have read about the policy-based data structure (order statistic tree), and it seemed like a good fit for this series of operations, since it supports both insertion and finding the order of a key (the number of elements less than a given element) in O(lg n). Each Fenwick tree operation is also O(lg n), so both approaches give an overall complexity of O(m*n*lg n), where n <= 1e5 and m <= 50.
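For concreteness, here is a minimal sketch of the two equivalent interfaces I mean (the names `insert_value`, `count_less`, and the value bound `MAX_V` are my own, not from the editorial):

```cpp
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

// Order statistic tree: insert and order_of_key, both O(lg n).
// Keying on pair<value, unique id> lets it hold duplicate values.
typedef tree<pair<int,int>, null_type, less<pair<int,int>>,
             rb_tree_tag, tree_order_statistics_node_update> os_tree;

// Fenwick tree over values 1..MAX_V: point update and prefix count, both O(lg n).
const int MAX_V = 100000;   // hypothetical value bound for illustration
int bit[MAX_V + 1];

void insert_value(int v) {  // record one occurrence of value v
    for (; v <= MAX_V; v += v & -v) bit[v]++;
}
int count_less(int v) {     // how many inserted values are < v
    int s = 0;
    for (--v; v > 0; v -= v & -v) s += bit[v];
    return s;
}

int main() {
    os_tree t;
    int id = 0;
    for (int v : {5, 3, 8, 3}) {
        t.insert({v, id++});
        insert_value(v);
    }
    // Both report the number of elements strictly less than 5.
    cout << t.order_of_key({5, INT_MIN}) << ' ' << count_less(5) << '\n'; // 2 2
}
```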
But the maximum time for a test case with approach 1 (Fenwick tree) was 139 ms, whereas with approach 2 (order-statistic tree) it was 1.489 s. In fact, with approach 2 there were several more cases that took longer than 1 s.
Here is my code using the Fenwick tree, and here is the one with the order-statistic tree.
I haven't solved many problems with the OS tree. Indeed, I don't think it is that popular a data structure, and the problem setter generally has some other data structure or technique in mind when setting a problem. So my question is: given a problem that I know can be solved within the asymptotic bounds using an OS tree, should I go ahead and use it, or should I look for an alternative solution and abandon the idea of using it entirely?
To put it more clearly: can I trust the O(lg n) bound per operation that this data structure advertises?