Dormi's blog

By Dormi, history, 3 years ago, In English

1534A - Colour the Flag

Author: crackersamdjam

Hint 1
Hint 2
Hint 3
Tutorial
Solution

1534B - Histogram Ugliness

Author: Dormi

Hint 1
Hint 2
Hint 3
Tutorial
Solution

1534C - Little Alawn's Puzzle

Author: Ninjaclasher

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution

1534D - Lost Tree

Author: Plasmatic

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution

1534E - Lost Array

Author: Plasmatic

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution

1534F1 - Falling Sand (Easy Version)

Author: Dormi

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution

1534F2 - Falling Sand (Hard Version)

Author: Dormi

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Tutorial
Solution

1534G - A New Beginning

Author: Aaeria

Tutorial
Solution

Test generator

1534H - Lost Nodes

Author: Ninjaclasher

Tutorial
Solution
  • Vote: +314

»
3 years ago, # |
  Vote: +4

Thanks for the quick editorial.

»
3 years ago, # |
Rev. 2   Vote: 0

This is hard :(

»
3 years ago, # |
  Vote: +9

B solution: should it be $$$a_i > a_{i-1}$$$?

»
3 years ago, # |
  Vote: 0

Why am I getting WA on test case 4 in problem C? 119395523. I used DFS and printed 2^(number of connected components).

»
3 years ago, # |
  Vote: 0

In the editorial for problem B it is mentioned that

"observe that decreasing a column will never affect whether it is optimal to decrease any other column, so we can treat the operations as independent."

Why are they independent? Why will decreasing one column never affect the other columns?

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    Initially, consider just column i: all the columns in the ranges [1, i-2] and [i+2, N] won't be affected, since anything you do to column i doesn't change the outline of those columns.

    Since the height of column i stays greater than that of column i-1 (after any update to the height of i under the algorithm), the right outline of column i-1 won't change (the left outline doesn't change either, by induction, i.e. using i-1 as the central column). You can argue similarly for column i+1.
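Given this independence, the whole of B reduces to a short greedy. Here is a minimal sketch (my own illustration, not the authors' code, assuming the editorial's rule of lowering each strict local maximum to its taller neighbour): the base ugliness is the outline length, and each unit lowered removes 2 from the outline at the cost of 1 operation, a net gain of 1.

```cpp
#include <vector>
#include <algorithm>

// Sketch: base ugliness = outline length with 0 padding on both sides;
// every strict local maximum is lowered to max(left, right) neighbour,
// gaining 1 per unit lowered (outline -2, operations +1).
long long minUgliness(const std::vector<long long>& a) {
    int n = a.size();
    auto at = [&](int i) -> long long { return (i < 0 || i >= n) ? 0 : a[i]; };
    long long ans = 0;
    for (int i = 0; i <= n; ++i) {
        long long d = at(i) - at(i - 1);
        ans += d < 0 ? -d : d;                 // vertical outline segments
    }
    for (int i = 0; i < n; ++i) {
        long long m = std::max(at(i - 1), at(i + 1));
        if (a[i] > m) ans -= a[i] - m;         // lower the local maximum
    }
    return ans;
}
```

On the n = 2 example [5, 1] discussed further down the thread this yields 6, matching the hand computation 4 + 1 + 1.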

»
3 years ago, # |
  Vote: +8

I tried a different approach for D which I am not able to prove, so it would be great if someone could take a look and either prove or disprove it.

My approach
  • »
    »
    3 years ago, # ^ |
      Vote: 0

    This approach is the same as the one in the tutorial. Since you start from level 2, at that level you get the parents of all nodes at level 3, so you never query nodes at level 3; you then go to level 4 and continue in the same fashion.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: 0

      No, I may go to level 3 as well. I query those vertices whose parents I don't currently know. If a node has a sibling, I compute the parent of the sibling as well. I then don't query that sibling node, since its parent is known, but it may have children at level 3. So when I reach level 3, I query its children.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: +9

    You are basically querying all the vertices on even levels, which form one side of a bipartition. But it might get hacked: say the total number of nodes on even levels is more than n/2. How are you handling this? I queried even or odd levels, whichever set is smaller.

  • »
    »
    3 years ago, # ^ |
      Vote: +9

    Yes, I also used the same approach. Its correctness can be proved by observing that for any edge in the tree this approach asks at most 1 query (and 0 for the edge connecting a node's parent and its sibling). Thus the maximum number of queries is definitely <= ceil(n/2), and the upper limit is reached when no node has a sibling, i.e. a path like 1->2->3->4->5.

  • »
    »
    3 years ago, # ^ |
      Vote: +6

    I did it with a similar approach too; I tried three or four examples and it works fine. It seems intuitive: for non-leaf nodes we query only once to find the parent, and that same query also gives us the information for at least one child, so each query covers at least two nodes (even more if the node has siblings). That's why the number of queries is always at most ceil(n/2). Also, if some depth has only a single node, we don't need any query to find the parents of the nodes at depth+1, since there is only one candidate parent at the previous depth.

    my submission - https://codeforces.me/contest/1534/submission/119432261

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    Hey, we got a similar approach. I tried to prove it here.

»
3 years ago, # |
Rev. 2   Vote: 0

Nice problemset. "Lost Problemset"

»
3 years ago, # |
  Vote: +33

Jai Hanuman, ocean of wisdom and virtue; jai Kapis, illuminator of the three worlds. (a verse from the Hanuman Chalisa)

Nice problemset.

»
3 years ago, # |
  Vote: +22

So interactive :D

»
3 years ago, # |
  Vote: 0

119391919 Can someone please tell me why I'm getting TLE in problem C?

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    You're creating an array of size 400005 for every test case. There can be a lot of test cases, so you're essentially creating 4*10^9 integers, which takes too long.

»
3 years ago, # |
  Vote: +1

liked the beauty of problem D, great job!

»
3 years ago, # |
Rev. 3   Vote: 0

For E, I think we can also do greedy:

First, if $$$n$$$ is odd and $$$k$$$ is even, we can never win.

Notice that two moves always result in an even number of bits being flipped. So if $$$n$$$ is odd and $$$k$$$ is odd, then we will have an odd number of moves, so start by turning on some $$$k$$$ bits.

Now we are left with turning on even number of $$$0$$$ bits.

Let's denote by $$$action1$$$ turning on some $$$k$$$ bits using one move, and by $$$action2$$$ using two moves to turn on $$$x$$$ bits, where $$$x$$$ is even and $$$2\le x\le \min(2k, 2(n-k))$$$.

If $$$action2$$$ is unavailable (there are no possible values for $$$x$$$) and our current number of zeros is not divisible by $$$k$$$, we lose.

Otherwise, if $$$k$$$ is odd we can just do $$$action2$$$ greedily, and if $$$k$$$ is even, before the greedy we might need to do a single $$$action1$$$ (this is easy to check).

Code: 119401441

»
3 years ago, # |
Rev. 2   Vote: +26

Kinda different solution to E. I'm sad, tired, and hopelessly hardstuck, so this is the short ver

The answer is NO iff $$$n$$$ is odd and $$$k$$$ is even. The if part is true because we can't distinguish between any two arrays $$$a$$$ and $$$b$$$ such that for all $$$i$$$, $$$a_i \oplus b_i = 111111...$$$.

Let's show that it's possible otherwise. The case where $$$k$$$ divides $$$n$$$ is trivial.

Consider all other cases except for ($$$k$$$ is odd, $$$n$$$ is even, $$$2k$$$ > $$$n$$$). We can greedily form segments of length $$$k$$$ until there is an even number (let this number be $$$r$$$) of elements remaining such that $$$r < 2k$$$. It takes $$$2$$$ more queries to solve the problem. We can show that this is optimal via a parity condition.

In the case ($$$k$$$ is odd, $$$n$$$ is even, $$$2k$$$ > $$$n$$$), we can set $$$k = n - k$$$ and solve the problem as in the former case, except that we query the complement of things. Pretty sure this is provable (the problem becomes strictly harder if you're given the complement xor, and you can solve it in the optimal number of moves in the equivalent easier version), but my head is foggy :/

I think solving it this way is very impractical tho, the bfs solution should be the "correct" one to think of during contest

»
3 years ago, # |
  Vote: 0

Can someone explain why my O(N^2) solution gives TLE in problem D? My Submission

My Approach

I am not sure whether my approach itself is correct, but still, an O(N^2) solution shouldn't be getting TLE, right?

Thanks.

»
3 years ago, # |
  Vote: +5

Please, someone, explain the 2nd problem. Thanks :|

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    Basically, for a given array the ugliness is the sum of abs(arr[i] - arr[i+1]) (with a 0 inserted at the front and the back of the array). We can perform an operation that decreases arr[i] to arr[i]-1, provided arr[i] > 0; the ugliness is increased by 1 for each operation performed.
    We need to minimize the ugliness of the array by performing the operation zero or more times on any indices 1 <= i <= n.

»
3 years ago, # |
  Vote: +38

Benq orz.

Spoiler
»
3 years ago, # |
  Vote: 0

I really enjoyed the round, I think it was well balanced.

»
3 years ago, # |
  Vote: 0

I wonder why so many contestants, including me, got WA on F1. My solution is similar to the tutorial's (using strongly connected components), which confuses me. (Ahh... I spent more than an hour during the contest debugging my F1 code!!!)

Here is my submission

Can somebody provide some details to be careful about in F1? Or some special test cases?

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    Hi Dormi, what I did was create a directed graph as described, then pick a vertex with in-degree 0, start a DFS, and mark all reachable vertices as done; then pick another vertex with in-degree zero, and so on. My answer is the number of times I start a DFS. Why would this solution give a wrong answer? Why do we need to find SCCs instead of doing DFS on the fly?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: 0

      I start with 0 in-degree vertices but try to visit all of them; here is my submission. Please help.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: 0

        Consider two rows at distance 2, each completely filled with sand.

        Both rows form a cycle of dependencies (an SCC), the upper row triggers the lower row, and there is no vertex with in-degree 0.

        To see that the whole construct is triggered by only one operation, we need to somehow see that the triggering must start in the top row; that is, an SCC with in-degree 0.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: 0

          Yes, but in this case my code reports correctly, because when there are no vertices with in_degree = 0 it tries to visit unvisited vertices starting from vertices at greater height.

          The case where my code will fail is when an SCC at a lower height triggers two or more SCCs at a height above it.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
            Rev. 2   Vote: +3

            Greater height is not a good criterion, as seen in example 2. We can construct grids where we must not start from the greatest height.

            Something like this:

            x....
            x..xx
            x....
            xxxx.
            
            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |
                Vote: 0

              Yes, for your example my code doesn't work. Thank you.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: 0

        You could use topological sort here instead of going with the indegree == 0 idea.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: 0

          Are you suggesting that instead of shrinking each SCC into one node and then counting vertices with zero in-degree, we can just do a topological sort and find the number of in-degree-0 vertices?

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: 0

            We don't exactly need to find nodes with indegree == 0 as starting nodes. Just topologically sort the graph and start from the back of the order. This ensures you use the least number of nodes as starting points.
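The counting step this thread converges on (number of SCCs with in-degree 0 in the condensation) can be sketched generically with Kosaraju's algorithm. This is my own illustration on a plain adjacency list, not the editorial's grid construction; `countSourceSCCs` is a hypothetical name.

```cpp
#include <vector>
#include <functional>
#include <utility>

// Count SCCs of a directed graph with in-degree 0 in the condensation.
// Each such "source" SCC needs its own trigger, which is the quantity
// the F1 editorial computes.
int countSourceSCCs(int n, const std::vector<std::pair<int,int>>& edges) {
    std::vector<std::vector<int>> g(n), gr(n);
    for (auto [u, v] : edges) { g[u].push_back(v); gr[v].push_back(u); }
    std::vector<int> order; order.reserve(n);
    std::vector<char> vis(n, 0);
    std::function<void(int)> dfs1 = [&](int u) {
        vis[u] = 1;
        for (int v : g[u]) if (!vis[v]) dfs1(v);
        order.push_back(u);                         // finish-time order
    };
    for (int u = 0; u < n; ++u) if (!vis[u]) dfs1(u);
    std::vector<int> comp(n, -1);
    int c = 0;
    std::function<void(int)> dfs2 = [&](int u) {
        comp[u] = c;
        for (int v : gr[u]) if (comp[v] == -1) dfs2(v);
    };
    for (int i = n - 1; i >= 0; --i)               // reverse finish order
        if (comp[order[i]] == -1) { dfs2(order[i]); ++c; }
    std::vector<char> hasIn(c, 0);
    for (auto [u, v] : edges)
        if (comp[u] != comp[v]) hasIn[comp[v]] = 1; // cross-SCC edge
    int sources = 0;
    for (int i = 0; i < c; ++i) if (!hasIn[i]) ++sources;
    return sources;
}
```

The two-rows example above maps onto this cleanly: both rows collapse to one SCC each, the top one is the only source, so the answer is 1.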

»
3 years ago, # |
Rev. 2   Vote: 0

Is it possible to get a different verdict on pretests and on the final tests?

»
3 years ago, # |
  Vote: 0

Can anyone please tell me why I am getting a wrong answer on pretest 2? 119361708

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    There can be no 'R' or 'W' at all. Your code fails on test cases where all characters are '.'.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: 0

      Now the same code got accepted. Does that mean a testcase with only '.' is not there? Here it is: 119409968

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 2   Vote: +3

        Emmm, last time I ran a 3*3 grid containing only '.' through your code 119361708, and it output a wrong answer. But now your code 119409968 outputs the right answer.

        So something must have been changed... (doge)

»
3 years ago, # |
  Vote: 0

Please, can anyone tell me what "set" means in "bipartite set" in problem D?

»
3 years ago, # |
  Vote: 0

What algorithms do I need to learn so that I can consistently solve A and B problems? PS: newbie :)

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: +7

    none. That's the good part.

  • »
    »
    3 years ago, # ^ |
      Vote: +3

    You don't need to learn many algorithms; A and B problems are mostly ad hoc. Just solve Div2 A, B, C problems from previous contests and you'll be good.

»
3 years ago, # |
  Vote: 0

In problem E, does d, the maximum number of queries, also vary with n and k? I thought we just had to stay within 500 queries irrespective of n or k. T_T

»
3 years ago, # |
  Vote: +15

One of the best editorials for problem E, Lost Array, that I could find on the net is this one by MagentaCobra: Solution for Lost Array (E).

»
3 years ago, # |
  Vote: -8

In problem D, sample test case 2: how can we tell the tree for sure after just one query? It's given that the tree is

? 5 gives 2 2 1 1 0. So can the tree not be something else?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: +13

    Yes, really confusing. It is just the authors' example; they know the tree, so they can answer without even attempting :))

  • »
    »
    3 years ago, # ^ |
      Vote: +8

    Testcases in interactive problems don't make much sense. They are there just to help you understand the interaction protocol, without giving a hint to the solution.

»
3 years ago, # |
Rev. 4   Vote: 0

Update: got the failing case.

What's wrong in my code 119358197? I got Wrong Answer on test case 4; my logic was exactly the same as the editorial's.

»
3 years ago, # |
  Vote: 0

Did anyone solve problem C using maths, like a single formula or something?

  • »
    »
    3 years ago, # ^ |
    Rev. 8   Vote: 0

    Sorry for that; I think a sieve and modular exponentiation were the only math in it, no single formula.

    • »
      »
      »
      3 года назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      He is using DSU. I am asking if it was possible to solve it using only some mathematical formula?
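For context, the DSU counting step mentioned above can be sketched like this (my own illustration, assuming the known reduction for this problem: union p1[i] with p2[i] over the values, and every resulting component contributes a factor of 2).

```cpp
#include <vector>
#include <numeric>

// Minimal DSU with path compression.
struct DSU {
    std::vector<int> p;
    explicit DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b; return true;
    }
};

// Answer = 2^(number of components of the value graph) mod 1e9+7.
long long countConfigs(const std::vector<int>& p1, const std::vector<int>& p2) {
    const long long MOD = 1'000'000'007;
    int n = p1.size();
    DSU d(n + 1);
    int comps = n;                               // values 1..n start separate
    for (int i = 0; i < n; ++i)
        if (d.unite(p1[i], p2[i])) --comps;      // merging drops one component
    long long ans = 1;
    for (int i = 0; i < comps; ++i) ans = ans * 2 % MOD;
    return ans;
}
```

So the closest thing to a "single formula" is 2^(components), but finding the component count still needs DSU (or DFS), not arithmetic alone.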

»
3 years ago, # |
Rev. 5   Vote: 0

Can anyone explain the intuition behind hint 4 for problem D?

The min(number of black, number of white) nodes is ≤ ⌈n/2⌉.

Why can't it be ≤ ⌊n/2⌋? Any counterexample? UPD: problem solved, the editorial is updated.

»
3 years ago, # |
  Vote: 0

In problem B, why is it not optimal to reduce the height of a[i] when a[i] > a[i+1] and i == 0?

Consider the case n = 2 with heights 5 1: initially ans = 5+4+1.

After reducing to 1 1, ans = 4+1+1.

Or am I missing something?

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    It is optimal. You just reduced the ugliness score from 10 to 6, right?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: 0

      Yeah, but why doesn't the editorial give the condition if (a[0] > a[1]) a[0] = a[1]?

      The condition for the optimal answer given in the editorial is only (a[i] > a[i-1] and a[i] > a[i+1]), but what about the endpoints?

»
3 years ago, # |
  Vote: 0
  • »
    »
    3 years ago, # ^ |
      Vote: 0

    I figured it out. It turns out it's because I looped to the bottom of the table for every sand block I have. I found an optimization: store, with DP, the first block of sand below each block.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: 0

      A possibly easier way than DP (I used this in White Pawn from a recent AtCoder contest) is to store the values in a set and use lower_bound / upper_bound depending on what you want. It's slower to run, being $$$O(\log n)$$$ rather than $$$O(1)$$$ after precalculation, but it's faster to implement, and that extra computation time is almost always fine.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: 0

        Can you please elaborate? Store which values?

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: 0

          In my solution, $$$top_j$$$ stored the values of $$$i$$$ for which there was a block of sand at $$$grid_{i,j}$$$. So I can easily query the next block under $$$(i, j)$$$ with top[j].upper_bound(i) (upper_bound, because lower_bound would give $$$(i, j)$$$ itself).

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: 0

            Oh, I got it. Thanks! Tbh, for me DP was easier to implement XD, but your solution is definitely useful in many other cases.
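The set-based lookup described in this thread can be sketched in a few lines (a minimal illustration; `firstSandBelow` is a hypothetical helper, not code from any submission): one `std::set` of row indices per column, queried with `upper_bound`.

```cpp
#include <set>
#include <optional>

// Per column: a set of row indices that contain sand.
// The first block strictly below row i is the smallest stored index > i.
std::optional<int> firstSandBelow(const std::set<int>& column, int i) {
    auto it = column.upper_bound(i);   // first row index > i (lower_bound would hit i itself)
    if (it == column.end()) return std::nullopt;
    return *it;
}
```

Each lookup is $$$O(\log n)$$$, which is the trade-off against the $$$O(1)$$$ DP table mentioned above.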

»
3 years ago, # |
  Vote: +8

The hints before the solutions are very helpful. I think every editorial should have hints.

»
3 years ago, # |
Rev. 3   Vote: 0

I think my solution for D is more creative/easier, because you don't need any experience with standard/similar problems, so I am putting it here.

We will root the tree at 1 and determine the parent of each node.

First we ask for the depths (i.e. distances from 1), and for all nodes with depth 1 we set 1 as their parent.

Now, let's iterate over all nodes in increasing order of depth.

Whenever we encounter a node (call it r) without an assigned parent, we make a query with r and set r as the parent of all nodes at distance 1 from it, except the one that already has a parent; call that one p. Now take r and the other nodes which are at distance 2 from r and at the same depth as r, and set p as their parent.

Proof:

Let's ignore the 1st query and node 1 for simplicity.

Note that for each node we made a query with, we skipped at least 1 node (namely its parent)
above it...

Also notice that each node is covered by at most 1 of its children (analyze the last operation).

So, chosen <= skipped   // ignoring the 1st node/query

In the worst case chosen = skipped, so
 total_nodes = chosen + skipped + 1, an odd number // added 1 for node 1
 chosen + 1 == ceil(total_nodes/2) // added 1 for the 1st query on the LHS

code

I only had a blurry picture of this during the contest :(

Please correct me if something is wrong, and also feel free to ask.

»
3 years ago, # |
  Vote: 0

I wasn't able to take this contest, but I just did a virtual, and I have to say: these problems were brilliant! They were unique, easy to understand, and not too complicated to implement (even F1; if you just look at the solution it seems complicated, but it's really not).

»
3 years ago, # |
Rev. 2   Vote: +39

My solution to E was much more direct than using BFS. The first observation is that each query just gives you a linear equation (mod 2) containing $$$k$$$ of the variables. Then, in order to find the XOR-sum of everything, it's necessary and sufficient to make some queries such that a linear combination of them is the desired XOR-sum. In particular, because we're working mod 2, the coefficient of each equation is either $$$0$$$ or $$$1$$$, and we can just not query the 0's, so we need exactly a set of queries such that the total XOR is the XOR-sum. In other words, we need to query each variable an odd number of times.

Now, the problem is much simpler: first, consider a number of queries $$$d$$$, so that we will make exactly $$$d$$$ queries each with exactly $$$k$$$ distinct variables. Then, if we fix the frequencies of variables, the greedy strategy "always query the $$$k$$$ variables with the greatest remaining frequency" is guaranteed to work unless the max frequency is $$$> d$$$, in which case there is no solution. Thus, we want to minimize the max frequency while keeping all frequencies odd and having sum of $$$dk$$$; this is achieved by making the frequencies as similar as possible, either $$$1+2\lfloor (dk-n)/2/n \rfloor$$$ or $$$1+2\lceil (dk-n)/2/n \rceil$$$. In particular, $$$d$$$ is valid iff $$$dk \ge n$$$, $$$dk \equiv n \pmod{2}$$$, and $$$1 + 2 \lceil (dk-n)/2/n \rceil \le d$$$, so we can find the minimum $$$d$$$ with a simple linear search.

  • »
    »
    3 years ago, # ^ |
      Vote: 0

    I was thinking of something similar and got all the inequalities for the necessary $$$d$$$. But then I got stuck at the step of showing that this $$$d$$$ is sufficient (the greedy strategy you mentioned). Can you give some more insight into why it works and how you thought of it? Thanks!

    • »
      »
      »
      3 years ago, # ^ |
        Vote: 0

      The intuition is just that having smaller groups is better than larger groups, because small groups can always be split into separate queries while large groups can get stuck, so you want to prioritize cleaning up the largest groups. The k=2 case is quite well known: you can form pairs of distinct elements as long as there is no majority element. You can prove this formally by inspecting the greedy algorithm, or with a max-flow-min-cut / Hall's marriage theorem argument.
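The feasibility condition derived earlier in this thread (d valid iff dk >= n, dk ≡ n mod 2, and the minimal achievable max frequency 1 + 2·ceil((dk−n)/2 / n) is at most d) can be turned into a small checker. `minQueries` is a hypothetical helper sketching just that arithmetic, not the interactive solution.

```cpp
// Smallest number of queries d of size k over n variables such that every
// variable can appear an odd number of times with max frequency <= d.
// Returns -1 if no d up to `limit` works (e.g. n odd, k even).
int minQueries(int n, int k, int limit = 1000) {
    for (int d = 1; d <= limit; ++d) {
        long long slots = 1LL * d * k;
        if (slots < n || (slots - n) % 2 != 0) continue; // parity must match
        long long pairs = (slots - n) / 2;               // extra appearances come in pairs
        long long maxFreq = 1 + 2 * ((pairs + n - 1) / n); // spread pairs as evenly as possible
        if (maxFreq <= d) return d;
    }
    return -1;
}
```

For n odd and k even the parity test never passes, matching the "answer is NO" case discussed above.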

»
3 years ago, # |
  Vote: 0
»
3 years ago, # |
Rev. 2   Vote: 0

In problem D I'm getting the error "wrong output format Unexpected end of file — int32 expected". I don't understand whether I used a wrong approach, because my code works fine on small test cases and counter test cases. If someone could help me resolve it, that would be a great help. Thank you. My code: 119541888

»
3 years ago, # |
  Vote: 0

Problem B would have been even more interesting and difficult if the height of the bars could be increased as well. Which, sadly, is how I misinterpreted it.

»
3 years ago, # |
  Vote: 0

In problem G, $$$dp[x][y] = \left(\min_{y-(x-a) \leq b \leq y+(x-a)} dp[a][b]\right) + \sum_{\text{potatoes } (x,z)} \frac{|y-z|}{2}$$$. Not all such $$$b$$$'s are reachable; won't that create a problem? Also, what should our initial function be? I think it should be $$$f(0) = 0$$$ and $$$\infty$$$ elsewhere. Somebody please help me out.

  • »
    »
    3 years ago, # ^ |
      Vote: +8

    Many times, you can analyze the version of the problem where positions are allowed to be reals rather than integers, and then prove they have the same result. This problem has that property, because all inflection points are integers, so you might as well only visit integer points.

    Your initial function is correct. It's actually enough to treat it as $$$f(x) = (n+1)|x|$$$.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: +13

      Thanks a lot!! I understand it now. What I understood is:

      • The answer always occurs at one of the given points in the final graph (we need not worry about whether our calculated minimum is achievable or not).

      • Whenever we expand our graph about the center, all the points in the domain get their new values from points that were in the domain previously (that is, they do not take junk values from previous states that were not achievable). This can be seen from the parity of the points and the offset we have; this was the main source of confusion for me.

      Basically, at every transition all the points in the domain are correct and we do not care about the others, and in the end we will surely have some point (one of those given initially) on the global minimum line.

      Please correct me if I am wrong somewhere.

»
3 years ago, # |
Rev. 2   Vote: 0

It's mysterious to me. Please point out what is wrong in this easy problem A solution.

  • »
    »
    3 года назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    This loop checks only positions with same parity, but should check all positions.

           for(int i=0 ; i<n ; i++){
                for(int j=0 ; j<m ; j++){
                    if((i+j)%2 == tmp && s[i][j] != '.' && s[i][j] != s[p][q])yes = false;
                }
            }
    
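For reference, the standard check for problem A can be sketched as follows (my own illustration; `canColour` is a hypothetical name): there are only two candidate colourings, fixing the colour of one parity class of (i + j), and we just test whether either is compatible with the given cells.

```cpp
#include <vector>
#include <string>

// Try both parity colourings; '.' cells are free, 'R'/'W' cells must match.
bool canColour(const std::vector<std::string>& g) {
    int n = g.size(), m = g[0].size();
    for (char even : {'R', 'W'}) {                   // colour of cells with (i+j) even
        bool ok = true;
        for (int i = 0; i < n && ok; ++i)
            for (int j = 0; j < m && ok; ++j) {
                char want = ((i + j) % 2 == 0) ? even : (even == 'R' ? 'W' : 'R');
                if (g[i][j] != '.' && g[i][j] != want) ok = false;
            }
        if (ok) return true;                         // this candidate fits
    }
    return false;
}
```

Checking all cells against a fixed candidate, rather than comparing cells pairwise, sidesteps the parity mistake discussed above.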
»
3 years ago, # |
Rev. 2   Vote: 0

In F2 we need to mark the minimum number of nodes in a DAG such that every node in a set of "special nodes" is reached by at least one marked node.

Because each special node can be reached from a contiguous range of other nodes, we can solve this problem efficiently in this case.

Is this problem NP-hard on a general graph?

  • »
    »
    13 months ago, # ^ |
      Vote: 0

    The set cover problem (SCP) can be reduced to the task you describe. Since SCP is NP-hard, the answer is yes: it is NP-hard on a general graph.

»
3 years ago, # |
  Vote: 0

For problem F2, to solve the last task (covering every special node via segments) we can use an easy DP that looks like O(size^2), but since the sum of all segment lengths is O(n) it runs fast. Here is my solution: https://codeforces.me/contest/1534/submission/120320191

»
3 years ago, # |
  Vote: 0

For problem G: to get $$$f_j$$$ from $$$f_i$$$ we offset all values to the left of the min point by $$$-(j-i)$$$ and all values to the right of the min point by $$$(j-i)$$$, so $$$f_i$$$ should be a convex function. But we need to add $$$g_i$$$ to $$$f_i$$$; how do we prove that $$$f_i + g_i$$$ is a convex function and can support the next operation?

  • »
    »
    3 года назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    That is to say how to prove that we can get a convex function when we add a convex function onto another convex function.
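For the question above, convexity is preserved under addition directly from the definition. For convex $$$f$$$ and $$$g$$$, any $$$x, y$$$ and any $$$\lambda \in [0,1]$$$:

```latex
\begin{aligned}
(f+g)(\lambda x + (1-\lambda)y)
  &= f(\lambda x + (1-\lambda)y) + g(\lambda x + (1-\lambda)y) \\
  &\le \lambda f(x) + (1-\lambda)f(y) + \lambda g(x) + (1-\lambda)g(y) \\
  &= \lambda (f+g)(x) + (1-\lambda)(f+g)(y).
\end{aligned}
```

Both inequalities applied in the middle step are just the convexity of $$$f$$$ and of $$$g$$$ separately, so $$$f_i + g_i$$$ is convex whenever both summands are, which is what the next offset operation needs.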