The first thing to notice is that you will use at most N-1 edges; otherwise the path would contain a cycle, and that couldn't be a shortest path, because removing the cycle from the path would give a shorter one. So now we have at most 150 nodes and 150 edges per query.
Second, if we are at a node X that we reached by a path with Y edges, we can go to any node adjacent to X by a path with Y+1 edges.
How can we use this to our advantage?
Let's sort the edges by increasing weight and relax all of them in this order; each time we relax an edge (u,v), we get the shortest path to v along an increasing path, because all the edges relaxed so far have smaller weights.
But there is still the issue of the number of edges used, so we make each state a pair <node,num_edges>: relaxing an edge (u,v) lets us go from any <u,x>, 0 <= x < n-1, to <v,x+1>.
We can also notice that the whole computation depends only on the start node, so if one query starts at node A, any later query that also starts at A can reuse the saved result instead of recomputing it; we just cache the result for each start node.
When the relaxing is done, we look at all <B,x> with 0 <= x <= c and take the smallest of those distances.
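In case a sketch makes it clearer, here is roughly how I would code the whole thing in C++. The details below (the names, memoizing tables in a std::map, relaxing each edge in both directions as for an undirected graph, and treating c as the query's edge limit) are my own assumptions for illustration, not something fixed by the problem:

```cpp
#include <bits/stdc++.h>
using namespace std;

// best[v][k] = cheapest cost to reach v from the query's start node using
// exactly k edges, where edges are taken in the order they were relaxed,
// i.e. with non-decreasing weights along the path.
const long long INF = LLONG_MAX / 4;

struct Edge { int u, v; long long w; };

int n;                                        // number of nodes
vector<Edge> edges;                           // flat list, pre-sorted by weight
map<int, vector<vector<long long>>> memo;     // cached table per start node

vector<vector<long long>>& solveFrom(int a) {
    auto it = memo.find(a);
    if (it != memo.end()) return it->second;  // a query from a was seen before

    vector<vector<long long>> best(n, vector<long long>(n, INF));
    best[a][0] = 0;                           // zero edges used, zero cost

    // Relax the sorted edges once, in increasing order of weight.
    // Going from k = n-2 down to 0 keeps one edge from being reused
    // in the same pass when we relax it in both directions.
    for (const Edge& e : edges)
        for (int k = n - 2; k >= 0; --k) {
            if (best[e.u][k] < INF)
                best[e.v][k + 1] = min(best[e.v][k + 1], best[e.u][k] + e.w);
            if (best[e.v][k] < INF)           // drop this for a directed graph
                best[e.u][k + 1] = min(best[e.u][k + 1], best[e.v][k] + e.w);
        }

    return memo[a] = best;
}

// One query: cheapest increasing path from a to b using at most c edges.
long long query(int a, int b, int c) {
    auto& best = solveFrom(a);
    long long ans = INF;
    for (int k = 0; k <= min(c, n - 1); ++k)
        ans = min(ans, best[b][k]);
    return ans;                               // INF means no such path
}
```

Then each query (A, B, c) is just a call to query(A, B, c), and repeated queries from the same A reuse the cached table.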
I'm not sure if I've made myself clear, so feel free to ask anything.
Thank you for the explanation! If you don't mind, what do you mean by sorting all the edges?
I mean that instead of building a graph structure, you just store the edges in an array/vector and then sort them by increasing weight.
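Something like this is all that's meant (the input format here is just an assumption for illustration, matching the sketch above):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Edge { int u, v; long long w; };

int main() {
    int n, m;
    cin >> n >> m;                           // assumed input format
    vector<Edge> edges(m);                   // flat array of edges, no adjacency list
    for (auto& e : edges) {
        cin >> e.u >> e.v >> e.w;
        --e.u; --e.v;                        // assuming 1-based vertices in the input
    }
    // Sort once by increasing weight; the relaxation pass works on this order.
    sort(edges.begin(), edges.end(),
         [](const Edge& a, const Edge& b) { return a.w < b.w; });
}
```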
Thank you very much. I'll try to implement this idea :-)