Kognition's blog

By Kognition, history, 5 years ago, In English

I feel like a majority of the Div 1 problems on Codeforces have $$$n \leq 10^5$$$ or $$$n \leq 10^6$$$, which basically means that the solutions have to be between $$$O(n)$$$ and $$$O(n \log n)$$$ (maybe in rare cases $$$O(n \sqrt n)$$$ or $$$O(n \log^2 n)$$$). However, this doesn't seem to be the case in other high-level competitions, such as Topcoder Div 1 SRMs, Google Code Jam, and ICPC.
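(For a rough sense of scale, using the usual back-of-the-envelope budget of about $$$10^8$$$ simple operations per second that I'm assuming here: at $$$n = 10^6$$$, an $$$O(n \log n)$$$ solution does roughly $$$2 \times 10^7$$$ operations and fits comfortably in a 1-2 second limit, $$$O(n \sqrt n)$$$ is around $$$10^9$$$ and already borderline, and $$$O(n^2)$$$ is $$$10^{12}$$$ and hopeless.)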

With such a large $$$n$$$, the solution space is pretty constrained, and only certain algorithms are viable. This might explain why certain data structures like Segment Trees/Fenwick Trees are overrepresented on Codeforces compared to other platforms. Other interesting algorithms, like Max Flow, Knuth DP, the Blossom Algorithm, etc., pretty much never appear on Codeforces, and when a difficult problem (e.g., Div 1 C/D) has constraints other than $$$10^5$$$/$$$10^6$$$, you can pretty much expect that the solution will be one of these rarely seen algorithms, whereas on other platforms you have to stay on your toes for every problem.

I have never formally set problems before, so perhaps there is something I am missing (e.g., perhaps problems with $$$n \leq 10^5$$$ make it easier to build tests that block incorrect solutions), but is there any particular reason that so many problems have this constraint? I feel like contests would be filled with a richer variety of algorithms if this were not the case.

EDIT: I suppose I wrote this in a way that seems to discount the effort of many problemsetters. I recognize that problemsetting is hard work, and I am pretty grateful that CF problemsetters are active enough for contests to run multiple times a week; that's pretty much unheard of on any other site. I didn't intend to say that these types of problems are necessarily bad; I was just curious whether there was a reason I hadn't thought of for why the problems tend to be of this flavor.


By Kognition, history, 5 years ago, In English

I think everyone here knows the general formula for improvement: do problems that are not too easy and not too difficult for you. When putting this into practice, one runs into the difficult and often-asked question, "How long should I try before giving up and looking at part of the solution?"

I have seen various answers to this question, ranging from days to hours; once I even saw a red-rated user advocate 5-10 minutes. The median time I've seen advocated is somewhere around 30-45 minutes.

But the 5-10 minute comment got me wondering: has anyone ever tried using a time threshold of 0 minutes? That is, you read problems that are not too easy and not too difficult for you, immediately look at the solution, try to understand it, and implement it. Then you repeat this for, say, 20-50 problems, and recalibrate what counts as too easy/too difficult by competing in contests. Intuitively it feels like the progress would not be as fast as with a 30-45 minute threshold, but it would be interesting to hear whether anyone has done this, and what the results were.


By Kognition, history, 6 years ago, In English

If you had a smart friend who liked puzzles, and you could choose one problem that you thought was accessible and interesting enough to get them curious about competitive programming, what problem would you choose?

Edit: To be clear, I’m curious about specific problems, not just general topics.


By Kognition, history, 6 years ago, In English

I posted this as a comment on the last contest, but upon request, I turned it into a blog post so that it may get more attention.

Is there any way to hack consistently by inducing a segfault? A few times I've seen people access, say, s[n] when s only has n elements allocated, which is an out-of-bounds read that can cause a segfault, but because of how operating systems map memory, the segfault only happens sometimes. These people will probably fail during systests anyway, but I can't hack reliably: even if I submit a hack, the OS might not complain, and I'll get an unsuccessful hack. Is there any way around this, or should I just not try to induce a segfault in a hack?
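For concreteness, here's a minimal sketch of the kind of access I mean (my own toy example, not from any actual submission); whether it crashes depends entirely on where the allocation happens to sit in memory:

    #include <iostream>
    #include <vector>

    int main() {
        int n = 5;
        std::vector<int> v(n, 1);   // valid indices are 0 .. n-1

        // Out-of-bounds read: undefined behavior in C++. The OS only raises
        // SIGSEGV if the access touches an unmapped page; usually v[n] lands
        // in the same heap page as the rest of v, and the read quietly
        // returns garbage instead of crashing.
        std::cout << v[n] << '\n';

        // v.at(n) would be bounds-checked and throw std::out_of_range
        // deterministically, but a hacker can't force the contestant's
        // code to use it.
        return 0;
    }

This is exactly why such hacks are flaky: the bad index is only one element past the end, so it almost always falls inside an already-mapped page.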


By Kognition, history, 6 years ago, In English

A friend told me about a problem he had been trying to optimize for a personal graphics project. It involves an $$$h \times w$$$ grid, and he wants to do some pre-processing to answer circle-sum queries. A circle here is defined by a center $$$(y_c, x_c)$$$, where $$$1 \leq y_c \leq h$$$ and $$$1 \leq x_c \leq w$$$, and a radius $$$r$$$; it is the set of integer grid points $$$(y, x)$$$ such that $$$(x - x_c)^2 + (y - y_c)^2 \leq r^2$$$.

The progress he's made: with $$$O((hw)^2)$$$ pre-processing you can simply precompute the answers to all possible queries, so one possible set of complexities is $$$O((hw)^2)$$$ pre-processing and $$$O(1)$$$ per query. Another possibility is to precompute prefix sums in $$$O(hw)$$$ time and answer each query in $$$O(r)$$$ time, as in the sketch below.
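Here is a sketch of that second approach (my own code, assuming 0-indexed coordinates and a grid of integer values): build a prefix sum over each row, then answer a query by summing one contiguous row interval for every row the circle covers, which is $$$O(r)$$$ intervals:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Row-wise prefix sums: pre[y][x] = grid[y][0] + ... + grid[y][x-1].
    // Build: O(hw). Query: O(r), one row interval per row the circle touches.
    struct CircleSum {
        int h, w;
        std::vector<std::vector<long long>> pre;

        explicit CircleSum(const std::vector<std::vector<long long>>& grid)
            : h((int)grid.size()), w((int)grid[0].size()),
              pre(h, std::vector<long long>(w + 1, 0)) {
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    pre[y][x + 1] = pre[y][x] + grid[y][x];
        }

        // Sum of grid[y][x] over all cells with (x-xc)^2 + (y-yc)^2 <= r^2.
        long long query(int yc, int xc, int r) const {
            long long total = 0;
            for (int dy = -r; dy <= r; dy++) {
                int y = yc + dy;
                if (y < 0 || y >= h) continue;
                // Largest dx with dx^2 + dy^2 <= r^2; the two loops nudge
                // the value to dodge floating-point rounding errors.
                int dx = (int)std::sqrt((double)(r * r - dy * dy));
                while ((long long)(dx + 1) * (dx + 1) + (long long)dy * dy
                           <= (long long)r * r) dx++;
                while ((long long)dx * dx + (long long)dy * dy
                           > (long long)r * r) dx--;
                int lo = std::max(0, xc - dx);
                int hi = std::min(w - 1, xc + dx);
                if (lo <= hi) total += pre[y][hi + 1] - pre[y][lo];
            }
            return total;
        }
    };

Row-wise (rather than 2D) prefix sums suffice here because a circle intersects each row in a single contiguous interval.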

Is there any clever way to get a better set of pre-processing and query complexity than the two methods mentioned above?


By Kognition, history, 6 years ago, In English

Does anybody know where I can access old FHC problems? Every time I look this up, I just get broken links to Facebook.

EDIT: A proper link has been posted, but the links I was trying before, which were broken, no longer seem to be broken. Weird.

EDIT2: I figured out why the links were broken: FHC does not support mobile viewing, so don't open any of these links on your phone.
