Hello!
My solution to this problem E implements the same idea as the editorial. But at first I used an O(m*n*lg n) solution here (for m = 1000 and n = 5000, that's ~6*10^7 operations) which got TLE. I used a map in that one, which made it slower. Then I tried using the map only at the beginning instead of all the way through the code (as shown here). That is, I first mapped each string to the index of the element the map would represent, and used that index from then on. Accessing an element this way is O(1), so the solution became O(m*n) and got accepted.
But shouldn't the 3-second limit be enough for fewer than 10^8 operations in the first solution? I thought that 1 second meant 10^9 operations.
Sorry for the beginner's doubt. Possibly it's because I used many operations per iteration, but how should I know? What do you consider the boundary: 10^7 per second?
PS: actually, in the first solution the map is deleted and rebuilt in every iteration, which means its size is not always the maximum n. So it is actually faster than what I assumed in this post.
Maybe I am wrong.
But you should also count the maximum length of a word, because in that case accessing an element is O(lg n * maxLength): each of the ~lg n comparisons in the map is a string comparison of up to maxLength characters. Then we have 6*10^7 * 10 = 6*10^8, and that is quite big.
That doesn't fully explain it, because 10^9 should also work, but it makes sense. Thanks!
10^9 is way too much. 2*10^8 is comfortable and 4*10^8 is a stretch. 6*10^8 will be TLE.
10^9 does work in 1 second, but it really depends on what you do in those iterations. Some addition, subtraction, multiplication? Sure, let's be hopeful. Add some division/mod operations and it doesn't look so bright anymore. With C++ map and strings there is a big constant factor on top of that.
Along with the type of operations, consider issues like cache misses, branch prediction, denormalized numbers, etc. Suddenly there is a huge difference between theoretical and practical complexity.