AlphaMale06's blog

By AlphaMale06, 14 months ago, In English

Thank you for participating in the first ever Serbian Div. 3 Codeforces round!

Sorry

Also a massive thanks to Vladosiya for giving us a chance to create a division 3 round!

1878A - How Much Does Daytona Cost?

Problem was authored and prepared by
Hints
Tutorial
Solution

1878B - Aleksa and Stack

Problem was authored and prepared by
Hints
Tutorial
Solution

1878C - Vasilije in Cacak

Problem was authored and prepared by
Hints
Tutorial
Solution

1878D - Reverse Madness

Problem was authored and prepared by
Hints
Tutorial
Solution

1878E - Iva & Pav

Problem was authored and prepared by
Hints
Tutorial
Solution

1878F - Vasilije Loves Number Theory

Problem was authored and prepared by
Hints
Tutorial
Solution

1878G - wxhtzdy ORO Tree

Problem was authored and prepared by
Hints
Tutorial
Solution
Editorial of Codeforces Round 900 (Div. 3)
  • Vote: +131

»
14 months ago, # |
  Vote: +10

Pretty balanced contest

»
14 months ago, # |
  Vote: +9

reverse madness was a cool one

»
14 months ago, # |
  Vote: 0

Thanks for the fast Editorial.

»
14 months ago, # |
  Vote: +6

Very interesting problem set, especially problem F

»
14 months ago, # |
Rev. 3   Vote: +3

One of the best Div. 3 D problems. At first I just printed the values of a and b (because nothing else came to mind :)), and then it was not difficult to see that for each possible a there is a unique b.

»
14 months ago, # |
  Vote: +21

The idea behind E is nice but you can just code a sparse table to handle the query part from the binary search.

»
14 months ago, # |
  Vote: -11

Can you please help me see why the code below times out? This code is for E. Thank you very much!

#include<bits/stdc++.h>
using namespace std;
int a[200010];
int sl,k;
int check(int x)
{
	int res = a[sl];
	for(int i=sl+1;i<=x;i++)
	{
		res &= a[i];
		if(res<k) return 0;
	}
	return 1;
 
}
int main()
{
	int t;
	scanf("%d",&t);
	while(t--)
	{
		int n;
		scanf("%d",&n);
		for(int i=1;i<=n;i++) scanf("%d",&a[i]);
		int q;
		scanf("%d",&q);
		while(q--)
		{
			scanf("%d%d",&sl,&k);
			if(a[sl]<k) printf("-1 ");
			else{
			int l=sl,r=n;
			while(l<r)
			{
				int mid=(l+r+1)/2;
				if(check(mid)) l=mid;
				else r=mid-1;
			}
			printf("%d ",l);
			}
		}
		printf("\n");
	}
}
»
14 months ago, # |
  Vote: +1

I have a slightly different implementation for Problem G . My approach is the same as the editorial (finding the selective nodes where the bitwise or changes).

However, to find these nodes, instead of using binary search we can maintain a map for every node. The map stores, for each distinct bitwise OR value possible, the closest ancestor of the current node that achieves it. To find such ancestors we can build each node's map from the parent's map and iterate over it.

Implementation

»
14 months ago, # |
  Vote: +1

I'd like to rephrase the proof of Problem C.

The key observation is that we can always change the sum of k numbers from s to s + 1, given that s is not maximal. The proof goes as follows. Assume that we have a set of k numbers which sum to s. Denote the maximum among the k numbers by m. If m < n, we can always achieve s + 1 by replacing m with m + 1, and the proof is done. If the maximum is n, we can prove it by contradiction as shown in the tutorial, i.e., n, n - 1, ..., n - k + 1 must all belong to the set. This set yields the maximal sum, contradicting the assumption that s is not maximal.

  • »
    »
    14 months ago, # ^ |
    Rev. 2   Vote: +10

    If m < n, we can always achieve s + 1 by replacing m with m + 1 and the proof is done.

    m < n, so the maximum is not n, as mentioned. Even if it is n, we replace the next largest element with its successor if that value is not already in the set, and so on...

»
14 months ago, # |
Rev. 2   Vote: 0

In problem G, I don't understand why for each bit we choose the vertex z such that it contains that bit and z is closest to x (or y). What happens if we choose a z that is not the closest?

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    Because it would produce the same result as the closest one. Therefore, we don't have to check that node.

    • »
      »
      »
      14 months ago, # ^ |
        Vote: 0

      I don't know if I misunderstand something. I think that if we choose another vertex that is not the closest one, the result for the current bit which we want will stay the same, but we don't know whether bits in other positions will increase the answer or not.

      • »
        »
        »
        »
        14 months ago, # ^ |
          Vote: +3

        That's why we need to consider both nodes

        For example assume the path from $$$u$$$ to $$$v$$$ looks like this :

        $$$000, 001, 100, 101, 000, 010$$$

        Here is how the path looks for each bit from $$$u$$$ to $$$v$$$:

        • bit $$$0$$$ : $$$0, 1, 1, 1, 1, 1$$$
        • bit $$$1$$$ : $$$0, 0, 0, 0, 0, 1$$$
        • bit $$$2$$$ : $$$0, 0, 1, 1, 1, 1$$$

        And for bits from $$$v$$$ to $$$u$$$ :

        • bit $$$0$$$ : $$$1, 1, 1, 1, 0, 0$$$
        • bit $$$1$$$ : $$$1, 1, 1, 1, 1, 1$$$
        • bit $$$2$$$ : $$$1, 1, 1, 1, 0, 0$$$

        For the sake of explanation, let's number the nodes in the path with $$$1, 2, 3, 4, 5, 6$$$ (where $$$u=1$$$ and $$$v=6$$$)

        Now the closest nodes to $$$u$$$ that have each bit on are: $$$2, 6, 3$$$

        And the closest nodes to $$$v$$$ are: $$$4, 6, 4$$$

        Notice that the only important nodes are $$$2, 3, 4, 6$$$

        Node $$$5$$$, for instance, is not important because, from the perspective of $$$u$$$, bits $$$0$$$ and $$$2$$$ are "turned on" at nodes $$$2$$$ and $$$3$$$ respectively, while bit $$$1$$$ is still "turned off" at node $$$5$$$.

        The key observation here is that once a bit has turned on within a path, it will never be turned off again, due to the nature of the $$$\text{OR}$$$ operation.

        • »
          »
          »
          »
          »
          14 months ago, # ^ |
            Vote: 0

          Okay, let me think about it for a while, since I don't strongly understand about the fact of picking the first occurrence will give us optimal answer. However, thanks for your help!

          • »
            »
            »
            »
            »
            »
            14 months ago, # ^ |
              Vote: 0

            Maybe the way you view it should be something like this :

            if the first-occurrence node produces the same outcome as the other nodes for the rest of the path, then it's sufficient for us to only check the first one, with no need to check the rest

            • »
              »
              »
              »
              »
              »
              »
              14 months ago, # ^ |
              Rev. 2   Vote: 0

              Could you provide an example where the second occurrence is useless? The node 5 you provided above is not of that type (it is all zeros).

              • »
                »
                »
                »
                »
                »
                »
                »
                14 months ago, # ^ |
                  Vote: 0

                Try this :

                $$$101, 100, 011, 110, 101, 110$$$

                The important nodes are respectively :

                • From $$$u$$$ : $$$1, 3$$$
                • From $$$v$$$ : $$$5, 6$$$

                We can see that node $$$4$$$ is useless because it produces the same results as node $$$3$$$ and $$$5$$$

                And node $$$2$$$ is useless because it produces the same as node $$$1$$$

»
14 months ago, # |
Rev. 3   Vote: 0

For problem E, we can precompute, for every position and every bit, the first 0 to the right.

Then for each query, iterate over the bits from low to high and check the first 0 to the right. If the current bit of k is 1, the right endpoint has to stay within the smallest such bound among all those bits. If the current bit of k is 0, since we iterate from low to high, the rightmost non-zero position can be greater than or equal to that of k.

I am not sure if my solution is correct, after all, it has not passed the system test. But I want to share my ideas with you, and the code is as follows:

https://codeforces.me/contest/1878/submission/225352692

»
14 months ago, # |
  Vote: +4

If someone prefers video explanations, or is interested in knowing how to think in a live contest and reach a particular solution:
Here is my live screencast of solving problems [A -> E] (with detailed commentary in Hindi).

PS: Don't judge me by my current rating :(

»
14 months ago, # |
  Vote: 0

Hello. I have a question about Problem G. Why is the complexity not q*logA*logn*logn? Finding the intersection points takes logA time, finding the LCA takes logn time, and finally calculating by bits also takes logn time. I'm a little confused here, and my code gets TLE.

»
14 months ago, # |
  Vote: -11

E had a very long implementation, a little too much for a Div. 3 question, imo.

»
14 months ago, # |
  Vote: +3

Don't say sorry, the contest was very nice. All the problems were very interesting.

»
14 months ago, # |
Rev. 3   Vote: -6

there's a very simple O(N+Q) solution for E 225432409

First, notice that the only positions R that change a[L]&...&a[R] are those where a bit that was set in a[L] gets reset. Calculate the next position where each bit gets reset and store them. Then for each query, iterate over those "important" R positions and update the current value of a[L]&...&a[R]. If at a certain R the value stops being at least K, the answer is R-1; otherwise the answer is N-1.

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    That is an O(NlogN + QlogN) solution

    • »
      »
      »
      14 months ago, # ^ |
      Rev. 2   Vote: 0

      It's not logN, it's log(max a[i]), but fair enough, I guess it shouldn't be considered a constant

      • »
        »
        »
        »
        14 months ago, # ^ |
          Vote: 0

        That's more than log N lol (n <= 2e5, max(a) is 1e9)

        • »
          »
          »
          »
          »
          14 months ago, # ^ |
            Vote: 0

          my point is it's faster than the editorial solution while not being any more complex

          • »
            »
            »
            »
            »
            »
            14 months ago, # ^ |
              Vote: +16

            In the editorial it's said it can be O(NlogN) with sparse tables (which is faster than yours) :)

          • »
            »
            »
            »
            »
            »
            14 months ago, # ^ |
              Vote: 0

            Yeah, we said there is a $$$O(N \cdot \log(N))$$$ solution with sparse tables, and it's very simple (if you know sparse table). We said we allowed slower solutions because it's div. 3.

»
14 months ago, # |
  Vote: 0

Can I solve the problem E using segment tree? For each query I can use binary search to find the value of "r" and find the value of f(l,r) using query method of segment tree. Time complexity O(q*logn*logn). But I am getting wrong answer after performing binary search.

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    Yes

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    You can even perform it in $$$O(q \log n)$$$: when you are at some node in the segment tree, you check whether the $$$and$$$ extended through the right child is still >= k; if so, you go there, else you go left.

    • »
      »
      »
      14 months ago, # ^ |
        Vote: 0

      Maybe there is a misunderstanding. How is the complexity O(qlogn)? Answering every query by binary search, with an AND query over (l, mid) inside every step of the binary search, takes O(q * (log(n))^2).

  • »
    »
    14 months ago, # ^ |
    Rev. 2   Vote: 0

    Yup. After the contest, when I was going through the questions and saw problem E, the first approach that came to my mind was a segment tree. I coded it and it got accepted. Its time complexity was O(nlogn + q*logn*logn). It crossed the 2s mark and I was a little astonished, but then I saw that the time limit was actually 5s. Here is my code 225420310

»
14 months ago, # |
  Vote: 0

I think the easiest way (I mean, no need to handle it bit by bit) to do 1878E - Iva & Pav is to use a seg tree. Here is my solution: 225454109

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    Yes, got it. And one thing: the time complexity is better if we solve it bit by bit rather than with a seg tree, I guess?

    • »
      »
      »
      14 months ago, # ^ |
        Vote: +1

      Using pref sums or sparse tables gives the lowest possible complexity in this problem, but a segtree also passes, with an added log N factor :)

      • »
        »
        »
        »
        14 months ago, # ^ |
          Vote: 0

        Can you please let me know if there is a way to calculate bitwise AND over a given range l to r using a segment tree?

        • »
          »
          »
          »
          »
          14 months ago, # ^ |
            Vote: 0

          Yeah, it's pretty easy. Just search the internet for "AND segment tree" (you can take, let's say, a sum segment tree and just write & instead of +). Note that in E from this round you don't need update (it's called modify in some sources), so just use the query and build functions.

»
14 months ago, # |
  Vote: 0

In D, I used binary search to get the indices, then calculated all the reversals I had to do, then just copy-pasted a solution of the CSES Substring Reversals task. Even though the CSES solution uses treaps, you can just copy a solution without thinking about the treap part.

»
14 months ago, # |
Rev. 2   Vote: +3

I've observed this thing regarding all ICPC-style contests on CF(i.e., div3/div4/edu rounds): fast solving doesn't matter as much as it matters in normal rounds.

For example, contestants who solved the same number of problems as me in this div3, but half an hour before me, are ranked just 400 places ahead of me.

»
14 months ago, # |
  Vote: 0

In problem C it should be the sum of 1 to n-k elements (4th line)

»
14 months ago, # |
Rev. 2   Vote: +6

Awesome round in general. The problems are balanced and cover a lot of topics. I really like problem F.

However, C is a little bit ad hoc, but I think it is fine.

And in my opinion, the gap between C and D is close enough.

»
14 months ago, # |
  Vote: 0
sparse tables are a little bit too advanced of a topic for div3 E

Meanwhile problem G: range minimum query, binary lifting, LCA.

»
14 months ago, # |
Rev. 4   Vote: 0

This is a snippet of neal's solution of D

snippet

Not getting why max(x, l[i] + r[i] - x) is not being considered while doing the swap or even during the preprocessing of queries.

Edit: Ok, got it. The answer lies in this statement: "It is easy to see that the modifications x and n - x + 1 are equivalent."

Can this CSES task be also solved by similar trick, or advanced DS like treap is mandatory?

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    The task cannot be solved the same way, because here the operations aren't symmetrical, and the order of operations matters.

»
14 months ago, # |
  Vote: 0

Can someone explain why my code gave TLE? 225358802

»
14 months ago, # |
  Vote: 0

F is a very nice problem. Thanks for this round. Hope we see you again.

»
14 months ago, # |
  Vote: +3

During testing, I found a nice offline solution for E.

The idea is to iterate from right to left and keep the value for each current prefix. When the values $$$f(i,i)$$$, $$$f(i,i+1)$$$, $$$f(i,i+2)$$$, ... are stored in some array, we can find the answer to some query by performing a binary search. In fact, we keep the right endpoints (denoted by $$$j$$$) in an array. We update the left endpoints ($$$i$$$) dynamically.

This is $$$O(n)$$$ memory. Updating the values when traversing from $$$i$$$ to $$$i-1$$$ can require up to $$$O(n)$$$ time. It seems like the total complexity is $$$O(n^2)$$$, which is untrue. In fact, it is amortized. Notice how we only need to update the values for which $$$f(i,j)$$$ changes. Modifying them decreases the total number of bits (among all values of $$$f$$$). The number of bits is $$$O(n \log A)$$$, making the complexity $$$O(q \log n + n \log A)$$$.

»
14 months ago, # |
  Vote: 0

Man, there's an even simpler solution for B:

for(int i = 0; i < n; i++)
{
    std::cout << i + 5 << ' ';
}
  • »
    »
    14 months ago, # ^ |
      Vote: 0

    I know, I said there are many constructions, but this one is the easiest to prove I think.

    • »
      »
      »
      14 months ago, # ^ |
        Vote: 0

      I guess you're right. This one, though, also fits a possible restriction on the output like a_i <= 3 * 10^5.

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    Ahhhhhh

    I tried to make the answer table, but the code is too long...

    And I used strange code: 225314696

»
14 months ago, # |
  Vote: 0

I solved problem E by using a segment tree to query each segment in logarithmic time but it TLE'd in python3 during the contest. I found out just now that using Pypy is much faster and the solution is easily accepted. Is pypy a reasonable choice for competitive programming or will I still commonly run into this runtime trouble in the future?

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    I don't know much about Python, but PyPy is much faster than the standard Python interpreter (found that out while working on the round xD).

  • »
    »
    5 months ago, # ^ |
      Vote: 0

    You may run into problems with any recursive algorithms in the future. Especially algorithms that go to O(N) depth (seg tree goes up to O(logN) depth). For O(N) depth, python will often run out of stack space. For O(logN) depth and a total O(NlogN) complexity, it may TLE sometimes (depending on the time limit of the problem and the type of ops done in the recursion).

»
14 months ago, # |
  Vote: 0

Hall of fame level task names and descriptions

»
14 months ago, # |
Rev. 2   Vote: 0

In problem D, instead of the editorial's approach, which uses the parity of the number of operations as the condition to swap, I'm maintaining a (1-based) array which stores the number of operations on x [x = min(x, ri + li - x)]. For every query, I perform prefix[x]++ and take the prefix sum of this array after all the queries. Now, if the parity of the number of operations (in the prefix sum array) before starting a new pair (l, r) differs from the number of operations at some index (in the prefix sum array) during the iteration over this pair, that suggests a swap is required at that index. Otherwise, continue iterating. But I'm getting wrong answer as the verdict. Please help me find where I'm going wrong.

Here is the submission link : https://codeforces.me/contest/1878/submission/225566736

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    You do realize that $$$a=min(x,ri+li−x)$$$ in the problem statement is a gimmick, because it can always be simplified to $$$a=x$$$?

    This means that in your query loop you can directly use $$$prefix[x]++$$$ without calculating $$$a$$$ and $$$b$$$ bounds at all.

»
14 months ago, # |
  Vote: +2

Tutorial of D is terribly explained. Worst.

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    Could you elaborate?

    • »
      »
      »
      7 months ago, # ^ |
      Rev. 3   Vote: 0

      I would not call it terrible, but I relate to the person and I can elaborate.
      I did not enjoy how the editorial derived / concluded / arrived at the fact that

      "From this symmetry, we can conclude that if a modification "touches" index $$$i$$$, then it also touches index $$$n−i+1$$$. And also because of the symmetry, $$$i$$$ will always be swapped with $$$n−i+1$$$, and no other index.

      I know you could have presented it in a more proof-ish way, because of course it's your problem and you know it better than anyone else. But it's just that this way of making the observation by looking at some cases feels a little "accidental" in nature. Low-rated people like me are extremely afraid of trusting our "observation" (in the "spot a pattern" sense of the word).

      I would have liked it better if the same fact was expressed as a series of lemmas.
      Note 1: Reversing a string is same as swapping symmetric positions from the front and back.
      Lemma 1: The sum of the positions of the two characters being swapped is always an invariant, equal to (length of the substring being reversed) + 1 when positions are counted within that substring.
      Lemma 2: (Observation motivated from lemma 1) If we denote the substring to be reversed for some $$$ith$$$ query as $$$[left,right]$$$, then $$$left+right = r_j + l_j$$$. (where $$$j$$$ is the corresponding position to $$$x$$$ in arrays $$$L$$$ or $$$R$$$).
      Lemma 3: Notice $$$left$$$ and $$$right$$$ are actually positions that need to be swapped. Thus the invariant in Lemma 1 is actually equal to $$$r_j + l_j$$$.

      Now we can conclude that every modification that "touches" index $$$i$$$ will also "touch" index $$$n+1-i$$$ (because that is the swapping-partner for the index $$$i$$$). Also, we have proved that $$$i$$$ will always be swapped with $$$n-i+1$$$ and no other index.

      Let me know if I have made a wrong claim somewhere.

»
14 months ago, # |
  Vote: 0

AlphaMale06 could you explain how a sparse table can help here? As in, what do we need to implement in the sparse table for this problem? As far as I know, sparse tables are used to find the minimum in a subarray; how do they work here?

»
14 months ago, # |
  Vote: 0

I personally feel D was easier than E. Maybe it was the long statements.

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    Yeah, it is, but many people learn topics that are really advanced for their rank (like segment trees; I'm not including pref sums here, because that idea is easy to learn even for a newbie), and that's why E has more solves than D.

»
14 months ago, # |
  Vote: 0

Can anyone explain any of the solutions below, if possible? They took much less time, or sometimes quite low memory. https://codeforces.me/contest/1878/submission/225305916 by satyam343 https://codeforces.me/contest/1878/submission/225321019 by vgtcross https://codeforces.me/contest/1878/submission/225297136 by jiangly

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    I'm pretty sure they use sparse tables (the thing mentioned in the end of editorial for E), that's how they get a solution in $$$O(n \log(n))$$$

    • »
      »
      »
      14 months ago, # ^ |
      Rev. 5   Vote: +3

      My solution (linked above) used a segment tree and it is $$$O(n \log^2 n)$$$ (that's why memory is so small). I don't know why it runs so fast.

      Looking at the codes, neither of the other two linked solutions seem to use sparse tables. To me, it looks like they're using the fact that for fixed $$$l$$$, there are only at most $$$1 + \log_2 a = 31$$$ different values of bitwise and of subarray $$$[l, r]$$$ over all $$$r$$$. The "changing points" of the bitwise and can be calculated for each $$$l$$$ and stored in $$$O(n \log a)$$$ time and memory, and each query can be solved in $$$O(\log a)$$$ leading to a $$$O((n + q)\log a)$$$ solution.

      • »
        »
        »
        »
        14 months ago, # ^ |
        Rev. 2   Vote: 0

        Thanks for the reply, it really helped me understand, vgtcross AlphaMale06. The '31' is because: suppose a[l] has all bits set; each AND can only turn a bit from 1 to 0, and each bit can change at most once, so at most 31 distinct values can occur. Is this interpretation right?

        • »
          »
          »
          »
          »
          14 months ago, # ^ |
            Vote: 0

          Yes. Each bit can change value at most once so the total value can change 30 times, meaning that there can be 31 different values.

»
14 months ago, # |
  Vote: 0

In problem E, how can we use a sparse table to get $$$f(l,r)$$$ in $$$O(1)$$$? I thought it still needs to reconstruct the segment in $$$O(\log_2(r-l))$$$ time? (I don't know much about sparse tables)

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    If a query result may contain overlapping segments (min, max, or, and), then the sparse table only needs to look at two values for the answer. You find the largest power of 2 not exceeding the range length and query two overlapping blocks of that length (e.g., for length 7 you do 2 queries of length 4, which overlap in the middle).

»
14 months ago, # |
  Vote: 0

I'm getting TLE on G. What I am doing is: while binary searching the answer, I am fetching the ancestor. So basically the binary search is over 1 to the number of nodes in the path, and accordingly I'm fetching the count of bits. The implementation isn't the best, but it works. This is heavy on time because during my binary search I'm constantly binary lifting to find the node, so the binary search costs O(log(n) * log(max(a))). Combine that with the query and it TLEs. I cannot clearly understand how the editorial does it in O(log(n) + log(max(a))). Any help will be appreciated, thanks in advance!

https://codeforces.me/contest/1878/submission/225749618

  • »
    »
    10 months ago, # ^ |
      Vote: 0

    I couldn't get how to do binary search in $$$O( \log{(n)} + \log{(max(a)} )$$$ too. Also there is no binary search in G's solution that is attached to the editorial, so I couldn't understand it from the code either. But I somehow managed to get an AC with a solution similar to yours. I'm doing binary search on the path from LCA to x to find highest and lowest occurrence of each bit (and from LCA to y). On each iteration of binary search I use binary lifting, so my binary search costs $$$O(\log{^2(n)})$$$ and I'm doing it for each bit, so I have $$$O(\log{(max(a))} \cdot \log{^2(n)})$$$ per query and $$$O(q \cdot \log{(max(a))} \cdot \log{^2(n)})$$$ is my overall asymptotic. With some non-asymptotic optimizations it gets AC:

    https://codeforces.me/contest/1878/submission/241103000

    Sorry for necroposting :D

»
14 months ago, # |
Rev. 2   Vote: 0

For E, the solution mentions that we can also use a sparse table. I am currently learning segment trees.

I think a segment tree can also be used here, am I correct?

»
14 months ago, # |
  Vote: 0

Good contest and wonderful tutorial! Thanks.

»
14 months ago, # |
Rev. 3   Vote: +2

Extended proof for F which may be useful in the future: If $$$x$$$ and $$$y$$$ are coprime, then $$$d(x\cdot y) = d(x)\cdot d(y)$$$

Proof
»
14 months ago, # |
  Vote: 0

Problem C: Note that the sum of n over all test cases may exceed 2⋅10^5.

TC 05: 10000 194598 194598 10085825445 196910 196910 19386872505 193137 193137 6236620375 194427 194427 8160790120 ...

Shouldn't O(n) also be accepted according to the constraints? Have I missed something?

»
14 months ago, # |
Rev. 2   Vote: 0

For problem F solution 2, I was actually originally going to do this solution for the problem, but I was deterred because to find prime divisors and their exponents for $$$d(N)$$$, I have to find prime factorization of $$$d(N)$$$. Thus, we need to keep track of the smallest prime factor of numbers up to 1e9 (because $$$d(N)$$$ is bounded by 1e9) if we want logarithmic complexity. This uses too much memory. Has anyone figured out an implementation to find this prime factorization without use of smallest prime factor?

  • »
    »
    14 months ago, # ^ |
      Vote: 0

    If you have all the primes up to sqrt(1e9) there are less than 3500 primes to check. You can loop through that list and if you don't find a factor then the value is a prime. It isn't logarithmic complexity, but fast enough as there are only 1000 queries.

    • »
      »
      »
      14 months ago, # ^ |
        Vote: 0

      However, in the editorial it is claimed that a logarithmic time complexity can be achieved for the prime factorization of d(N) (it shows this in the time complexity). How is that achieved?

      • »
        »
        »
        »
        14 months ago, # ^ |
          Vote: 0

        It's possible to store d(N) as the map of prime divisors adding/removing new divisors on each query and never needing to factor a large number. Each query can add at most 19 new divisors.

»
14 months ago, # |
  Vote: +1

Hey guys!

You can see the detailed solutions for the problems A,B and C with proofs from my youtube video below:

https://youtu.be/odTQC7AOL3s?feature=shared

»
13 months ago, # |
  Vote: 0

my segment tree solution for E 227542958

»
10 months ago, # |
  Vote: 0

Is problem G possible in O(n log n)? The part where you calculate the ORs for each node can be optimized by noticing that the total value of the sums of the g function will increase by 1 at the first occurrence of a bit and decrease by 1 at the last occurrence of that bit (going from x to y). With that in mind, we can just go from the important node closest to X to the last important node and keep track of the current sum.

However, the bottleneck is still there in finding the important nodes. Is there a way to optimize that to O(n log n)?

»
10 months ago, # |
  Vote: 0

For => 1878C — Vasilije in Cacak

n, k, and x are input integers read from the standard input. max_sum and min_sum are calculated as follows:

max_sum is the sum of the first k natural numbers, i.e., the smallest achievable sum, and min_sum is (2 * n - k + 1) * k / 2, the sum of the k largest numbers, i.e., the largest achievable sum (note the variable names are swapped relative to what they hold). The conditional check determines whether min_sum is less than x or max_sum is greater than x.

If either condition is true, the output is "NO", indicating that x is out of the achievable range. Otherwise, the output is "YES", signifying that the sum is achievable.

int t;
cin >> t;
while (t--) {
    long long n, k, x;
    cin >> n >> k >> x;
    long long max_sum = k * (k + 1) / 2;
    long long min_sum = (2 * n - k + 1) * k / 2;
    if (min_sum < x || max_sum > x) {
        cout << "NO" << endl;
    } else {
        cout << "YES" << endl;
    }
}
»
8 months ago, # |
  Vote: 0

Why does this code fail? 255033091 And when I change the line m=m1 to add_divs(n,m), it gets accepted. Why does it fail if I use m=m1?