What would be the shortest piece of code that you would write in a contest for normalizing an array of distinct numbers? What about the most efficient?
e.g. [102, 980, 2, 45] --> [2, 3, 0, 1]
This is how I'm currently doing it; I'm looking for "better" approaches:
vector<int> normalize(vector<int> A) {
    // Rank of A[i] = position of A[i] in the sorted copy S.
    vector<int> S(A), N(A.size());
    sort(S.begin(), S.end());
    for (int i = 0; i < (int)A.size(); ++i) {
        N[i] = lower_bound(S.begin(), S.end(), A[i]) - S.begin();
    }
    return N;
}
Please leave your solution in the comments section! Thank you!
I use the same code usually. Also, you might want to add S.erase(unique(S.begin(),S.end()),S.end()); after sort.
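Something like this, I suppose (just a sketch; the only change from the code above is the erase/unique line):

vector<int> normalize(vector<int> A) {
    vector<int> S(A), N(A.size());
    sort(S.begin(), S.end());
    S.erase(unique(S.begin(), S.end()), S.end());   // drop duplicates so equal values share one rank
    for (int i = 0; i < (int)A.size(); ++i)
        N[i] = lower_bound(S.begin(), S.end(), A[i]) - S.begin();
    return N;
}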
I'll keep that in mind for the times when the numbers are not distinct and I'd need such a normalisation.
I use something like this:
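(A minimal sketch of the map-based idea; the function name normalize_map is only for illustration.)

#include <bits/stdc++.h>
using namespace std;

vector<int> normalize_map(vector<int> A) {
    map<int, int> M;
    for (int x : A) M[x];                 // insert keys; std::map keeps them sorted
    int idx = 0;
    for (auto &p : M) p.second = idx++;   // assign ranks in increasing key order
    for (int &x : A) x = M[x];            // replace every value by its rank
    return A;
}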
It seems better to me because we can insert not only A[i] into the map, but also A[i] + 1 and A[i] - 1 if we need them (for example, when we must not lose the interior of a segment, or something similar).
You can add a[i] + 1 and a[i] - 1 to the array before sorting as well.
I mean using A[i], A[i] - 1 and A[i] + 1 at the same time. In my variant it is very short and simple: M[A[i]], M[A[i] + 1], M[A[i] - 1].
Example: after normalizing the three segments [3; 6], [6; 11], [11; 14] we need at least one point inside the second segment that lies outside the other segments. Plain normalization turns [6; 11] into something like [1; 2], but we want something like [1; 3], because we need point 2 as a point that belongs only to the second segment.
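A quick sketch of that on these three segments (just an illustration; it compresses every endpoint together with its -1/+1 neighbours):

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Segments [3;6], [6;11], [11;14] from the example above.
    vector<pair<int,int>> seg = {{3, 6}, {6, 11}, {11, 14}};

    map<int, int> M;
    for (auto &s : seg) {
        // Register every endpoint and its neighbours so interiors survive.
        for (int x : {s.first - 1, s.first, s.first + 1,
                      s.second - 1, s.second, s.second + 1})
            M[x];
    }
    int idx = 0;
    for (auto &p : M) p.second = idx++;   // ranks over the sorted coordinates

    for (auto &s : seg)
        cout << "[" << M[s.first] << "; " << M[s.second] << "]\n";
    // The second segment now spans more than two consecutive values,
    // so it keeps a point that lies in no other segment.
}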
I understand you.
I understand it's not as short, but you can use a function like m(a[i] + 1). It's faster, but a bit longer, I agree.
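(If I read that right, m just pushes the value into the array that will later be sorted; a sketch of what I assume is meant, with illustrative names:)

#include <bits/stdc++.h>
using namespace std;

vector<int> S;

// Hypothetical helper in the spirit of m(a[i] + 1):
// registers one coordinate for later compression.
void m(int x) { S.push_back(x); }

int main() {
    vector<int> a = {3, 6, 11, 14};
    for (int x : a) { m(x - 1); m(x); m(x + 1); }
    sort(S.begin(), S.end());
    S.erase(unique(S.begin(), S.end()), S.end());
    // Rank of any registered value v:
    //   lower_bound(S.begin(), S.end(), v) - S.begin()
    for (int x : a)
        cout << x << " -> " << (lower_bound(S.begin(), S.end(), x) - S.begin()) << "\n";
}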
Doing it with a map is much slower than using arrays/vectors. I once got TLE because I used this approach instead of the topic starter's one. That time I fixed it in a few minutes (we had full feedback on that contest), but someday it can be harmful, I think.
(That contest was at last summer's training camp, by the way. If I'm not mistaken, it was the problem about the surface area of parallelepipeds, which I solved in some very convoluted way.)
At least it is a different way, and also a very interesting one. If my info is correct, std::map uses a red-black tree as its underlying data structure. Maybe we could swap it for a hash_map/unordered_map and get better performance?
unordered_map is in the new C++ standard and/or in the TR1 namespace, as far as I recall. It's possible that the judges might allow its usage, but I don't know how things stand in terms of performance...
AFAIK, this problem cannot be solved with hash_map or anything similar. Why? Because we need to compare elements with a less-than comparator, not just with equals-to. I mean, [1, 2, 3] is not a correct normalization of [15, 43, 23], but [1, 3, 2] is; yet [1, 2, 3] and [1, 3, 2] are indistinguishable to a hash_map. The other reason is that if we have the normalized array, then we can sort the original array in linear time (just apply the inverse permutation, since there are no equal elements). So the normalized array cannot be built in linear time, because we cannot comparison-sort an array in o(n·log(n)).