pimenta's blog

By pimenta, history, 7 years ago, In English

Hi,

I've written a solution for problem 883K - Road Widening in C++11 and in Golang. The C++11 version runs in 156 ms, but the Golang version gets TLE (3000 ms).

C++11: 31844450

Golang: 31844038

Why? Did I do something wrong in the Golang implementation, increasing the time complexity?


By pimenta, history, 8 years ago, In English

Hi,

I've sent submission 25136808 using a global anonymous struct with some big arrays and methods inside. The submission got RTE on test case 7.

Next, I sent submission 25137642, only removing the struct from around the arrays and the methods, and I got AC.

Then I thought "maybe the global arrays are not being initialized to zero when they are inside the struct". So I called memset(&st,0,sizeof st); and got RTE on test case 7 again: 25137670.

What's the deal with structs on Codeforces? All 3 codes work fine on my notebook for test case 7.

UPD: I've just submitted the same 3 codes again, now with GNU C++14. The behavior is the same for all 3 codes.

UPD2: Now I've just submitted 25138080, only giving a name to the struct from the first submission, and it got AC. So the problem is with anonymous structs??

UPD3: After some Internet research, it seems anonymous structs are not supported in C++11...
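For reference, the pattern looks roughly like this (hypothetical names and sizes; the real arrays and methods are in the submissions linked above):

  #include <cstring>

  // pattern from the RTE submissions: a global object of an unnamed struct
  struct {
    int big[1000000];
    void init() { memset(big, 0, sizeof big); }
  } st;

  // pattern from the AC submission: the same struct, but with a name
  struct Named {
    int big[1000000];
    void init() { memset(big, 0, sizeof big); }
  } st2;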


By pimenta, history, 8 years ago, In English

Introduction

It's been around 18 months since the dynamic minimum spanning tree problem first crossed my path, and yesterday I finally understood the offline square root decomposition for it. I think this solution is somehow explained here, but I also think it can be explained in a better way. That is the goal of this post. I'll explain how to solve two versions of the problem.

Acknowledgments

I'd like to thank thiagocarvp for explaining the solution to the first version to me, and victoragnez for explaining the solution to the second version.

First version

Problem statement

First, you're given a connected graph with n vertices and m weighted edges. Then a sequence of q new edges is added to the graph. For each of these q new edges, output the weight of a minimum spanning tree considering only this edge and the previous ones. For example, take V = {1, 2}, E = {({1, 2}, 5)} and the sequence (({1, 2}, 7), ({1, 2}, 3)), i.e., n = 2, m = 1 and q = 2. The answers are 5 and 3, respectively.

Naive approach

Let's try to answer the queries online. First, build a MST for the initial graph. All we can do with new edges is try to improve the current MST, i.e., the MST can only become lighter, never heavier. It's not hard to see that the required procedure is the following greedy algorithm.

There are only two possibilities for a new edge ({u, v}, w):

  1. An edge between u and v is already present in the MST. In this case, just update its weight taking the minimum between the new and the old weight.
  2. There's no edge between u and v in the current MST. In this case, the new edge will create a cycle. Then just remove the heaviest edge from this cycle.

The first situation can be easily handled in O(log n) using maps. The second, however, takes more effort. A simple DFS could find and remove the heaviest edge of the cycle, but it would cost O(n) operations, resulting in a total running time of at least O(qn) operations in the worst case. Alternatively, it's possible to augment a link cut tree to do all this work in O(log n) per new edge, resulting in a much better running time.

So the naive approach is either too slow (DFS), or too much code (link cut tree).

Solution

The naive approach might be hard to apply, but it certainly helps us to make an important observation:

Two consecutive MSTs will differ in at most one edge.

In other words, the changes in the solution are very small from one query to the next. And we are going to take advantage of this property, like many popular offline algorithms do. In particular, we'll do something like the square root decomposition of Mo's algorithm. Usually, this property is achieved by sorting the queries in a special way, like Mo's algorithm itself requires. In our case, we have just noticed that this is not necessary. Hence, we'll process the queries in a very straightforward way (and I keep asking myself what took me so long to understand this beautiful algorithm!).

The observation is used as follows. We'll split the queries into consecutive blocks of O(√q) consecutive queries. If we compute the edges that simultaneously belong to all the MSTs of one block, we'll be able to reduce the size of the graph for which we should compute minimum spanning trees. In other words, we're going to run Kruskal's algorithm q times, once per new edge, but it will run on much smaller graphs. Let's see the details.

First, imagine the MST Ti computed right after adding the edge e of the i-th query. Now, if e belongs to Ti, consider Ti' = Ti \ {e}. What does it look like? Sure, Ti' is a forest with two trees (components). And if we condense these two components, we'll get a much smaller graph with precisely two vertices and no edges. Right now, this much smaller graph does not seem to be useful, but let's see what happens if we consider this situation for not only one, but for a whole block of new edges.

Now, imagine the MST Mi computed right after adding all the edges of the i-th block Bi. The graph Mi \ Bi is a minimum spanning forest with at most √q + 1 components, because the removal of an edge increases the number of components by exactly one and we are considering the removal of at most √q edges. Therefore, a condensation would produce a set Si of at most √q + 1 vertices. Let's write X to denote the total sum of the weights of the condensed edges (the internal edges of the components).

Consider an MST for the set Si that uses only the edges added before the i-th block. This MST will have at most √q edges. If we use the edges of this MST to initialize and maintain a multiset M of edges, we can insert a new edge into M and run Kruskal's algorithm √q times, once per query. Over all blocks, we'll run Kruskal's algorithm q times on graphs with at most √q + 1 vertices and O(√q) edges. For the j-th query, we should output X + Yj, where Yj is the total sum of the weights of the edges chosen by Kruskal's algorithm.

In a step-by-step description, the algorithm is as follows:

  1. Store the m initial edges in a multiset edges.
  2. Compute large, an array with the edges of a MST for the initial m edges (Kruskal's for m edges).
  3. For each block [l, r] (see the sketch after this list):
    1. Create an empty array initial and swap its contents with large.
    2. Insert the edges e[i], l ≤ i ≤ r, into the multiset edges.
    3. Recompute large for the new state of edges (Kruskal's for O(m + q) edges).
    4. Use large to find the forest (large minus the block's edges), condense its components (using a DSU, for example) and find the value of X.
    5. Create a multiset M of edges and use initial and the condensed components to fill it with at most O(√q) edges.
    6. For each edge e[i], l ≤ i ≤ r:
      1. Insert e[i] into M.
      2. Compute Kruskal's minimum weight Y for the multiset M and output X + Y (Kruskal's for O(√q) edges).
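For concreteness, here is a compressed sketch of step 3 in C++ (my own reconstruction, not the AC code linked below; Edge carries an id so the block's edges can be recognized inside large, and each small Kruskal naively rebuilds a DSU of size n, whereas a real implementation would remap the condensed components to O(√q) vertices):

  #include <bits/stdc++.h>
  using namespace std;

  struct Edge {
    int u, v, w, id;
    bool operator<(const Edge& o) const { return w < o.w; }
  };

  struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool join(int x, int y) {
      if ((x = find(x)) == (y = find(y))) return false;
      p[x] = y;
      return true;
    }
  };

  // Kruskal over an already sorted multiset: returns (weight, chosen edges)
  pair<long long, vector<Edge>> kruskal(int n, const multiset<Edge>& es) {
    DSU d(n);
    long long total = 0;
    vector<Edge> chosen;
    for (const Edge& e : es)
      if (d.join(e.u, e.v)) total += e.w, chosen.push_back(e);
    return {total, chosen};
  }

  // steps 3.1-3.6 for the block holding queries l..r
  void process_block(int n, multiset<Edge>& edges, vector<Edge>& large,
                     const vector<Edge>& query, int l, int r) {
    vector<Edge> initial;
    initial.swap(large);                                    // step 3.1
    unordered_set<int> blockIds;
    for (int i = l; i <= r; i++)                            // step 3.2
      edges.insert(query[i]), blockIds.insert(query[i].id);
    large = kruskal(n, edges).second;                       // step 3.3
    DSU cond(n);                                            // step 3.4
    long long X = 0;
    for (const Edge& e : large)
      if (!blockIds.count(e.id)) X += e.w, cond.join(e.u, e.v);
    multiset<Edge> M;                                       // step 3.5
    for (Edge e : initial) {
      e.u = cond.find(e.u), e.v = cond.find(e.v);
      if (e.u != e.v) M.insert(e);                          // drop self-loops
    }
    for (int i = l; i <= r; i++) {                          // step 3.6
      Edge e = query[i];
      e.u = cond.find(e.u), e.v = cond.find(e.v);
      if (e.u != e.v) M.insert(e);
      long long Y = kruskal(n, M).first;
      printf("%lld\n", X + Y);
    }
  }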

We run Kruskal's algorithm O(√q) times on a graph with O(m + q) edges and q times on a graph with O(√q) edges, so the total running time is around O((m + q)√q), if we have a fast DSU implementation.

Here is my AC implementation for this problem:

Code

Second version

Problem statement

Again, you're first given a connected graph with n vertices and m weighted edges. This time, the i-th query is a pair (ei, ci) where 1 ≤ ei ≤ m and you should output the weight of a minimum spanning tree after making w(ei) = ci permanently (until a new update for ei is required). For example, take V = {1, 2, 3}, E = (({1, 2}, 5), ({2, 3}, 6), ({3, 1}, 7)) and the updates (1, 8) and (2, 9). The answers are 13 and 15, respectively.

Solution

I'll assume that you have read the solution to the first version, which is simpler, but the core idea is the same: we'll somehow compute the edges that will remain unchanged through a block of consecutive updates and will belong to all the minimum spanning trees of this block. Again, the remaining graph will have O(√q) vertices and O(√q) edges, so we'll compute the answer for each query in the same time complexity as before.

First, we should reduce the m edges to around O(n) edges using Kruskal's algorithm. In this phase, we should consider only the edges that won't be updated in this block. Let's refer to these edges as non-updated. In the end, the graph will not necessarily be connected, but the purpose here is only to discard the edges that we know for sure won't be present in any of the MSTs of this block. Let's call the edges selected by this phase possibly useful edges.

Now, we should split the possibly useful edges into two disjoint sets: the certainly useful edges, which will be present for sure in all the MSTs of this block, and the at most O(√q) remaining possibly useful edges to be considered by the Kruskals of this block. For this, we'll just run Kruskal one more time, except that the DSU should be initialized with a special procedure. This procedure is simply to call the union operation for each updated edge (the edges that will be updated in this block). After this initialization, this DSU will clearly have at least n - √q components, which means that this second execution of Kruskal will have to connect at least n - √q components to make the graph fully connected (this time the graph will end up connected for sure!). The edges selected by Kruskal's algorithm are the certainly useful ones, while the discarded ones are the remaining possibly useful ones.

Along with the specially initialized DSU of the previous Kruskal, you can build a second one using only the certainly useful edges. This second DSU represents the condensation of the forest.

At last, build a set with the remaining possibly useful edges and the updated edges, and use and maintain this set to process the queries, as sketched below. For each update, remove the updated edge from this set, update its weight and insert it again, then run Kruskal's algorithm over this sorted set of edges. You can also maintain a larger set with all the m edges during the updates and across all blocks.
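A short sketch of the splitting phase, reusing the Edge and DSU helpers from the sketch in the first version (again my own reconstruction; possiblyUseful and updated are hypothetical names):

  // split the possibly useful edges of one block into "certainly useful"
  // and "remaining possibly useful"
  void split(int n, vector<Edge> possiblyUseful, const vector<Edge>& updated,
             vector<Edge>& certainly, vector<Edge>& remaining) {
    DSU special(n);
    for (const Edge& e : updated)  // the special initialization:
      special.join(e.u, e.v);      // pre-union every updated edge
    sort(possiblyUseful.begin(), possiblyUseful.end());
    for (const Edge& e : possiblyUseful) {  // the second Kruskal
      if (special.join(e.u, e.v)) certainly.push_back(e);
      else remaining.push_back(e);
    }
    // a second DSU built from "certainly" alone gives the condensation
  }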

Here is my AC implementation for this problem:

Code

Conclusion

I hope this post can be useful for others. Constructive criticism and related problems to solve are welcome in the comments!


By pimenta, 8 years ago, In English

As you can check here, it was very hard for many people (including me) to understand the editorial's solution for problem 758E - Broken Tree. The solution has two parts: one DFS to compute the current, minimum and maximum weights for each subtree and check whether we should print -1; and then a second DFS to reduce the values of the edges and fix the tree. The explanation for the first part is great; just check it and remember to ignore the "dp" prefix in the variable names (there are no overlapping subproblems, so that's not dynamic programming). But when it came to explaining the second part (which is the harder one), the author decided to spare words. So here goes the full explanation. I'll assume you've read the explanation for the first part, and I'll use the same (bad) variable names.

Before calling dfs2(1), we should create an array goal such that goal[u] will store how many units we should remove from the edges of the subtree rooted at u. Therefore we should start with goal[1] = dpw[1]-dpmax[1].

The second DFS is a "pre-order" traversal in the following sense. Suppose we're visiting u. First, we should break the value goal[u] and store the pieces in goal[v] for each edge (u, v). In other words, we should break the goal for vertex u into possibly smaller goals for the children of u. Then we should call dfs2(v) recursively only after no more transfers are possible from goal[u] to the branch that contains v. Let's see some code for this idea.

First, let's code a helper function to transfer w units from goal[u] to goal[v]:

transfer(u, v, w):
  goal[u] -= w
  goal[v] += w

The first reduction is to remove the obligatory units, i.e., dpw[v]-dpmax[v]:

dfs2(u):
  for each (u, v, w, p) in E:
    transfer(u, v, dpw[v]-dpmax[v])

After this reduction, it's possible that goal[u] is still greater than zero (but never less than zero, by the construction of the arrays dpw and dpmax). Then we should greedily remove some units from the subtrees rooted at the children of u, assuming their current weights as dpmax[v]:

  for each (u, v, w, p) in E:
    transfer(u, v, min(goal[u], dpmax[v]-dpmin[v]))

We do that because it's always better to remove units from deeper edges (this is not hard to see). It's important to note that these two loops cannot be merged. We should take some additional units from the subtrees rooted at the children of u only if taking the obligatory units was not enough.

Now, if goal[u] is still greater than zero, then all we can do is to take some units from the edges adjacent to u, assuming the current weights of the subtrees as dpmin[v]. This is the last transfer operation from u to each of its children, so we can finally do the recursion:

  for each (u, v, w, p) in E:
    tmp = min(goal[u], min(w-1, p-dpmin[v]))
    w -= tmp
    p -= tmp
    goal[u] -= tmp
    dfs2(v)
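Putting the three fragments together, dfs2 looks like this in C++ (my own assembly of the pseudocode above; the adjacency representation and names like kid, dest, w and p are hypothetical):

  #include <bits/stdc++.h>
  using namespace std;

  const int MAXN = 200005;
  vector<int> kid[MAXN];        // ids of the edges from u to its children
  int dest[MAXN];               // dest[id]: the child endpoint v of edge id
  long long w[MAXN], p[MAXN];   // the w and p values of edge id
  long long dpw[MAXN], dpmin[MAXN], dpmax[MAXN], goal[MAXN];

  void transfer(int u, int v, long long units) {
    goal[u] -= units;
    goal[v] += units;
  }

  void dfs2(int u) {
    for (int id : kid[u])       // 1) the obligatory units
      transfer(u, dest[id], dpw[dest[id]] - dpmax[dest[id]]);
    for (int id : kid[u])       // 2) greedily take extra units from the subtrees
      transfer(u, dest[id], min(goal[u], dpmax[dest[id]] - dpmin[dest[id]]));
    for (int id : kid[u]) {     // 3) last resort: the edge (u, v) itself
      int v = dest[id];
      long long tmp = min(goal[u], min(w[id] - 1, p[id] - dpmin[v]));
      w[id] -= tmp;
      p[id] -= tmp;
      goal[u] -= tmp;
      dfs2(v);
    }
  }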

I'd like to thank Jakube for his submission 24488076, which helped me understand the editorial's solution, and I hope this post can help others.

Here is my code: 24557663


By pimenta, history, 8 years ago, In English

Hi,

whenever I try to see the list of submissions of some user (link http://codeforces.me/submissions/user), the message "Ooops! Something has broken down in Codeforces. Do not panic, you can try to reload the page or return Home. Anyway we will carefully read megabytes of logs, analyze stacktraces and fix the problem." appears beside an image of a bug.

If I logout and try again, I can see the page.

Is this also happening to someone else?


By pimenta, 8 years ago, In English

Introduction

One of the tasks of last Sunday's round, problem 755F - PolandBall and Gifts, was to decide whether k is the sum of some submultiset of a multiset of positive integers. The most important observation to solve the problem is:

The sum of all elements in the multiset is n ≤ 10^6.

Wild solution appeared! But not for me :( In the whole puzzle, this was the only missing piece. All I could think of was an O(n²) DP. Although this is a well-known problem with well-known solutions, I couldn't find really good tutorials, so I had to struggle with poor explanations and the contest's source codes. The most helpful reference was this one, but it does not explain the trick in full detail and only shows half the solution for the problem we'll discuss here. So, in this post, I'll try to explain the solution from the beginning. Of course, any additional good references are welcome in the comments.

I chose a solution that was submitted by at least three different users, including tourist, ershov.stanislav and rajat1603. This is not the easiest solution to understand, but IMHO it is the easiest to code. Check my implementation near the end of this post.

Prerequisites

Dynamic programming and bitwise operations.

Problem statement

Warning: Here, n has a different meaning from problem 755F - PolandBall and Gifts's n. Symbol k will denote the same thing, though.

You're given a multiset of positive integers X = {(a1, m1), (a2, m2), ..., (an, mn)}, i.e., integer ai occurs mi times in the multiset. Decide if there's a submultiset with sum equal to k. For example, if X = {(1, 7), (3, 2), (19, 5)} and k = 10, then {(1, 4), (3, 2)} is a solution, because 4·1 + 2·3 = 10.

Let's start with the upper bounds. Without loss of generality, we may assume that:

  1. The integers ai are such that ai < k, because ai = k yields a trivial answer and ai > k is useless. Therefore we have at most k - 1 distinct integers: n ≤ k - 1 = O(k). And, in the worst case, ai = i.
  2. The multiplicities mi are such that ai·mi ≤ k, because if ai·mi > k, then we know that the integer ai occurs in the solution at most ci = floor(k/ai) < mi times.
  3. Remember the n-th harmonic number: Hn = 1 + 1/2 + 1/3 + ... + 1/n = O(log n). From that and from the previous bounds, we have m1 + m2 + ... + mn ≤ k/1 + k/2 + ... + k/n = k·Hn = O(k log k).
  4. Finally, log₂ m1 + log₂ m2 + ... + log₂ mn = O(k). Proof.

Solutions

Simpler version, naive DP

We could replicate the repeated integers, creating a sequence (b1, b2, ..., br) such that r = m1 + m2 + ... + mn, and apply the following dynamic programming.

Let f(i, j) be 1 only if it's possible to find a subsequence of (b1, b2, ..., bi) with sum equal to j. Then

f(i, j) = 1 if j = 0; f(i, j) = 0 if i = 0 and j > 0; f(i, j) = max(f(i - 1, j), f(i - 1, j - bi)) if j ≥ bi, and just f(i - 1, j) otherwise;

the answer is f(r, k) and the cost is O(rk) = O(k² log k), from Upper Bound 3. The only non-trivial transition is the last: if you can include bi in the solution, you can either take it (subproblem (i - 1, j - bi) remains) or leave it (subproblem (i - 1, j) remains). If either of these choices returns 1, then subproblem (i, j) has a positive answer.

Simpler version, tricky DP

The well-known std::bitset data structure performs bitwise operations over n bits in O(n) time with a 1/32 constant factor (the constant actually depends on the underlying integral type being used; std::bitset uses long, which has 32 bits in the Codeforces environment). We can twist the DP above to make it suitable for bitsets.

Let f(i) be a (k + 1)-length bitset such that the j-th bit (0-based) is 1 only if it's possible to find a subsequence of (b1, b2, ..., bi) with sum equal to j. Then

f(i) = f(i - 1) | (f(i - 1) << bi),

the answer is the k-th bit of f(r) and the cost is O(rk/32) = O(k² log k / 32). The non-trivial transition took me some minutes to understand, since I didn't know the precise meaning of this DP. But at this point, I think it's not hard to see that the left shift by bi positions combines all values allowed by subproblem (i - 1) with the current integer bi, while the or ensures that all values allowed by subproblem (i - 1) stay allowed. The latter is the same as leaving the integer bi. Pretty cool, right?!
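In code, the whole simpler version boils down to a couple of lines (a minimal sketch; MAXK, r, k and the array b are assumed to be declared elsewhere):

  bitset<MAXK> dp;       // dp = f(0): only the 0-th bit is set
  dp[0] = 1;
  for (int i = 1; i <= r; i++)
    dp |= (dp << b[i]);  // f(i) = f(i-1) | (f(i-1) << b[i])
  // the answer is dp[k]

But there's something even cooler!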

Full problem, naive DP

Let's go back to the multiplicities.

Let f(i, j) be 1 only if it's possible to find a submultiset of {(a1, m1), (a2, m2), ..., (ai, mi)} with sum equal to j. Then

f(i, j) = OR of f(i - 1, j - ai·t), over all t with 0 ≤ t ≤ mi and j ≥ ai·t;

the answer is f(n, k) and the cost is O(k·(m1 + m2 + ... + mn)) = O(k² log k), from Upper Bound 3. The only non-trivial transition is the last: all subproblems of the form (i - 1, j - ai·t) are or'd together, subject to 0 ≤ t ≤ mi and j ≥ ai·t. In other words, we consider all possible amounts t of taking ai into the solution. This is pretty much the same DP as the one for the simpler version of the problem. Notice that the upper bounds are the same!

Full problem, tricky DP

Let f(i) be a (k + 1)-length bitset such that the j-th bit (0-based) is 1 only if it's possible to find a submultiset of {(a1, m1), (a2, m2), ..., (ai, mi)} with sum equal to j. Then

f(i) = (f(i - 1) << ai·0) | (f(i - 1) << ai·1) | ... | (f(i - 1) << ai·mi),

the answer is the k-th bit of f(n) and the cost is O(k·(m1 + ... + mn)/32) = O(k² log k / 32). The non-trivial transition combines all possible amounts t of taking ai into the solution with all the values allowed by subproblem (i - 1). If it's hard to understand this DP, spend some more time on the simpler version. Try to get everything clear before going ahead.

Full problem, trickiest DP

Now, we're going to kill that log k factor. Consider the following C++ bottom-up implementation:

  bitset<MAXK> dp;  // dp[j] = 1 iff some submultiset sums to j
  dp[0] = 1;
  for (int i = 1; i <= n; i++) {
    // combine a[i] taken 1, 2, 4, ... times, then the remainder of m[i]
    for (int x = 0; (1<<x) <= m[i]; x++) {
      dp |= (dp << (a[i]*(1<<x)));
      m[i] -= (1<<x);
    }
    dp |= (dp << (a[i]*m[i]));
  }
  // now, dp[k] equals the k-th bit of f(n)
  // now, dp[k] equals the k-th bit of f(n)

This is the previous DP with a small but very important modification. Instead of blindly or'ing all the possible shifts, iterating t from 0 to mi, we do that in a faster and smarter way: we only do it for the first powers of 2 that we can permanently subtract from mi, and for the remainder. For example, if mi = 13, the inner loop shifts by ai·1, ai·2 and ai·4, leaving mi = 6, and the final line shifts by ai·6.

The following loop invariant holds at the beginning of each iteration: ai is combined with shifts t = 0, 1, 2, 3, ..., 2^x - 1. When x = 0, the invariant trivially holds. Now, suppose it holds for some x. Then the combination of shifts t = 0 and t = 2^x will produce shift t = 2^x. The combination of shifts t = 1 and t = 2^x will produce shift t = 2^x + 1. The combination of shifts t = 2 and t = 2^x will produce shift t = 2^x + 2. The combination of shifts t = 3 and t = 2^x will produce shift t = 2^x + 3... Until the combination of shifts t = 2^x - 1 and t = 2^x produces shift t = 2^x + 2^x - 1 = 2^(x+1) - 1, and the invariant maintenance is complete.

First of all, we just showed that 1 + 2 + 4 + ... + 2^(z-1) = 2^z - 1 for every non-negative integer z. Now, let y be the value of x after the last iteration. We know that we subtracted 2^y - 1 from mi. Then the last bitwise operation for a fixed i combines the shifts t = 0, 1, 2, 3, ..., 2^y - 1 with the final shift w = mi - (2^y - 1). In the previous paragraph we saw that these combinations yield shifts t = w, w + 1, w + 2, w + 3, ..., w + 2^y - 1. The last shift is actually t = mi - (2^y - 1) + 2^y - 1 = mi. To see that shifts t = 0, 1, 2, 3, ..., w - 1 are also combined, just notice that w - 1 must be less than or equal to 2^y - 1, since otherwise w would be larger than 2^y and the loop would still be running. So we conclude that all shifts from t = 0 to t = mi are now combined in the bitset f(i)! Now, that is cool, right?!?

The cost of this algorithm is clearly O(k²/32): by Upper Bound 4 (together with n = O(k)), the total number of shift and or operations is O(k), each costing O(k/32).

Application to problem 755F - PolandBall and Gifts

Warning: The n here refers to problem 755F - PolandBall and Gifts's n. It does not refer to the number of distinct integers in the multiset anymore. Symbol k still refers to the goal value, though.

In the more general version of the problem, discussed above, we used the fact that ai·mi ≤ k to state Upper Bound 4: the sum of all log₂ mi is O(k). In the round's problem, we have an even tighter bound. Let me remind you of the first observation of this post:

Not only is each product ai·mi bounded by some value, but the whole sum of these products over all i is bounded by that very same value. As proved here, this strong additional constraint gives us the upper bound O(√n) both on the number of distinct integers and on the sum of all log₂ mi. Hence, the overall complexity using the last algorithm discussed here is O(n√n/32), as k ≤ n.

Conclusion

Although these upper bounds and algorithms are well known to many people, they weren't to me until yesterday. I hope I've gathered all the necessary information about this subject in this post, and I hope it will help others.


By pimenta, history, 8 years ago, In English

Given a bipartite graph with nonnegative edge weights, find a matching with maximum total weight (maximum number of edges is not required).

Example:

V = {1, 2, 3, 4}, E = {{1, 2}, {3, 4}, {2, 3}}

w({1, 2}) = 1, w({3, 4}) = 1, w({2, 3}) = 10

Optimal solution: {{2, 3}}

A friend suggested the following solution:

  1. Ignore weights and find maximum cardinality bipartite matching MCBM.
  2. For each e ∈ E, define w'(e) = BIGVALUE - w(e).
  3. Find the mincost flow for f = 1, 2, 3, ..., MCBM with respect to the new weight function w'. Output the minimum.

Will that work? Is there any faster solution?


By pimenta, 9 years ago, In English

I was searching for this well-known technique, but I couldn't find any good tutorials about it, so I had to figure it out myself. And now I want to write my first contribution. The binary lifting method for answering LCA queries is explained in many places, like this, this and this (the method with ⟨O(n log n), O(log n)⟩ cost). I'll explain it one more time, emphasizing how it can be adapted to answer queries about paths on weighted trees (the static version, so topology/weight updates are not supported).

Suppose we have a weighted tree G = (V, E), where V = {1, 2, ..., n} and w : E → R is the weight function. First, we need to define some things, which can all be calculated with a simple DFS:

  1. p(u) is the parent of vertex u in the DFS spanning tree. p(r) = r, if r is the root.

  2. pw(u) is the weight of the edge from u to its parent, p(u). Therefore, pw(u) = w({u, p(u)}). If r is the root, pw(r) must be the identity element of your query, like +∞ for min, or -∞ for max.

  3. L(u) is the level of vertex u and is defined as L(u) = L(p(u)) + 1. L(r) = 0, if r is the root.

This method is interesting because it combines the dynamic programming and divide and conquer paradigms. Let f(u, i) be the 2^i-th ancestor of vertex u. So what is the recurrence relation? How can we break the path from u to its 2^i-th ancestor into pieces, so we can compute it more easily? Stop and think. If we're storing ancestors in powers of two, how about breaking the path in half? Which vertex is exactly in the middle between u and its 2^i-th ancestor? Yes, its 2^(i-1)-th ancestor. More precisely, f(u, i - 1). So the acyclic recurrence is

f(u, i) = f(f(u, i - 1), i - 1)

and the base case is

f(u, 0) = p(u)

The greatest i that makes sense for f(u, i) is such that 2^i = O(n). So i goes up to ⌊log₂ n⌋. Then we have O(n log n) DP states and computing one state costs O(1). Therefore, preprocessing will take O(n log n) operations (DFS + DP).
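A minimal sketch of the preprocessing, assuming min-queries (so the identity element is INF) and an adjacency list adj[u] of (child, weight) pairs; all names here are my own:

  #include <bits/stdc++.h>
  using namespace std;

  const int MAXN = 100005, LOG = 17;       // 2^17 > 10^5
  const long long INF = LLONG_MAX;
  vector<pair<int, long long>> adj[MAXN];
  int f[MAXN][LOG], L[MAXN];
  long long g[MAXN][LOG];

  void dfs(int u, int parent, long long weight) {
    f[u][0] = parent;                      // base cases: p(u) and pw(u)
    g[u][0] = weight;
    for (auto& e : adj[u])
      if (e.first != parent) {
        L[e.first] = L[u] + 1;
        dfs(e.first, u, e.second);
      }
  }

  void preprocess(int root, int n) {
    L[root] = 0;
    dfs(root, root, INF);                  // pw(root) = identity of min
    for (int i = 1; i < LOG; i++)
      for (int u = 1; u <= n; u++) {
        f[u][i] = f[f[u][i-1]][i-1];       // f(u, i) = f(f(u, i-1), i-1)
        g[u][i] = min(g[u][i-1], g[f[u][i-1]][i-1]);
      }
  }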

The intuitive algorithm to find LCA(u, v) using f(u, i) follows:

  1. Let r be the root. Choose u to be the farther of u and v from r. In other words, swap u and v if L(v) > L(u).

  2. Let k = ⌊log₂ n⌋. Set u to be its ancestor at the same level as v, climbing up the tree. This can be done by making u = f(u, i), for i = k, k - 1, k - 2, ..., 0, if, at each iteration, f(u, i) is still not above v, that is, if L(f(u, i)) ≥ L(v).

  3. At this point, if u = v then LCA(u, v) = v and we can stop the algorithm.

  4. Finally, climb up the tree with u and v. In other words, make u = f(u, i) and v = f(v, i), for i = k, k - 1, k - 2, ..., 0, if, at each iteration, f(u, i) and f(v, i) are still not above the LCA, that is, if f(u, i) ≠ f(v, i). At the end of this stage, p(u) = p(v) is the LCA we seek.

No doubt, this algorithm runs in O(log n) time. Now, let's observe the strategy it is based on. First, we climb the path from u to its ancestor at the same level as v, then we climb from v and this new u to their LCA. Note that all these jumps split the path between the original u and v into disjoint paths. Therefore, we're actually dividing the problem into smaller disjoint subproblems. This is the principle behind divide and conquer and all those algorithms and data structures for RMQ and LCA. So it's easy to adapt the algorithm described above to answer queries about paths in the underlying tree. All we need is to define a function g(u, i) giving the optimal (or total) result for the path from u to its 2^i-th ancestor, which can be formally defined as

g(u, i) = merge(g(u, i - 1), g(f(u, i - 1), i - 1)), where merge is the query operation (min, max, sum, ...),

and

g(u, 0) = pw(u)

The adaptation of the 4-stage algorithm described above is very simple. Start the answer with the identity element of your query (like +∞ for min, or -∞ for max). Whenever you make u = f(u, i) or v = f(v, i) (stages 2 and 4), improve the answer with the respective result: g(u, i) or g(v, i). If stage 4 is actually executed, then it is necessary to improve the answer with pw(u) and pw(v) at the end. That's all!
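In code, the adapted query looks like this (same arrays as the preprocessing sketch above; min is the query operation):

  long long query(int u, int v) {
    long long ans = INF;                     // identity element
    if (L[v] > L[u]) swap(u, v);             // stage 1
    for (int i = LOG - 1; i >= 0; i--)       // stage 2: lift u to v's level
      if (L[u] - (1 << i) >= L[v]) {
        ans = min(ans, g[u][i]);
        u = f[u][i];
      }
    if (u == v) return ans;                  // stage 3
    for (int i = LOG - 1; i >= 0; i--)       // stage 4: lift both below the LCA
      if (f[u][i] != f[v][i]) {
        ans = min(ans, min(g[u][i], g[v][i]));
        u = f[u][i];
        v = f[v][i];
      }
    return min(ans, min(g[u][0], g[v][0]));  // the edges to the LCA itself
  }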

Thanks to dalex for the insight (here).

For more dynamic versions of this problem, we have:

  1. Heavy-light decomposition (here and here), so you can update weights.

  2. Link cut trees (here), so you can update both weights and topology.

Both HLD and LC trees can answer queries in polylogarithmic time, but they're harder to code and understand.


By pimenta, 9 years ago, In English

Hi,

I'm trying to solve this problem, which asks for the number of labeled trees with N vertices and maximum degree K (modulo 10^9 + 7), where 1 ≤ N ≤ 100 and 1 ≤ K ≤ N.

What I have so far:

If f(n, k) is the answer, then f(n, k) = f(n, k - 1) + g(n, k), where g(n, k) is the number of labeled trees with n vertices, at least one vertex with degree k and no vertex with degree larger than k. But, is there a formula for g(n, k)?

f(n, 2) = n!/2, because this is the case of a "list" (path) tree. The answer is the number of permutations (n!), divided by 2 to remove duplicates such as 1<->2<->3 and 3<->2<->1 (for f(3, 2)).

And, of course, f(n, 1) = 0, f(1, k) = 1, f(2, k) = 1 and f(n, n) = f(n, n - 1).

There is also Cayley's formula: the number of labeled trees with n vertices is n^(n - 2). Then f(n, n - 1) = n^(n - 2).
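Not an answer, but a sanity-check idea: by the Prüfer correspondence (which is how Cayley's formula is usually proved), labeled trees on n vertices are in bijection with sequences of length n - 2 over {1, ..., n}, and each vertex's degree equals its number of occurrences plus one. So f(n, k) can be brute-forced for small n to test candidate formulas (my own sketch):

  #include <bits/stdc++.h>
  using namespace std;

  // count labeled trees with n vertices and maximum degree <= k by
  // enumerating all n^(n-2) Pruefer sequences (feasible only for small n)
  long long bruteforce(int n, int k) {
    if (n == 1) return 1;                // a single vertex, degree 0
    if (n == 2) return k >= 1 ? 1 : 0;   // a single edge
    int len = n - 2;
    vector<int> seq(len, 0);
    long long count = 0;
    while (true) {
      vector<int> deg(n, 1);             // degree = occurrences + 1
      for (int x : seq) deg[x]++;
      if (*max_element(deg.begin(), deg.end()) <= k) count++;
      int i = 0;                         // next sequence (base-n counter)
      while (i < len && ++seq[i] == n) seq[i++] = 0;
      if (i == len) break;
    }
    return count;
  }

  int main() {
    printf("%lld\n", bruteforce(5, 2));  // should print 5!/2 = 60
  }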

Thanks in advance for any help!


Автор pimenta, 9 лет назад, По-английски

The problem gives an MST T and a series of Q queries, each one with a new edge e = {u, v} such that no edge between u and v exists in T. For every query, we have to improve T with e and print the new weight of T.

The best I can do is run a DFS (O(|V|)) to find the current path P between u and v, and find the heaviest edge f in P, as sketched below. If w(e) < w(f), we improve T by removing f and inserting e. The overall running time for a test case is O(Q|V|).
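A sketch of that O(|V|) step, assuming nonnegative weights and an adjacency list (hypothetical names):

  #include <bits/stdc++.h>
  using namespace std;

  vector<pair<int, int>> adj[100005];  // adj[x]: (neighbor, weight)

  // returns the maximum edge weight on the tree path from x to v,
  // or -1 if v is not in the subtree below x
  int heaviest(int x, int parent, int v) {
    if (x == v) return 0;
    for (auto& e : adj[x])
      if (e.first != parent) {
        int sub = heaviest(e.first, x, v);
        if (sub >= 0) return max(sub, e.second);
      }
    return -1;
  }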

Does anybody know an asymptotically faster algorithm for this problem?

Thanks in advance!
