Una_Shem's blog

By Una_Shem, history, 5 years ago, in English

Hi Codeforces!

Colleagues from Google asked to share the announcement. Join, it'll be fun!

Hash Code

Hello!

Google’s team programming competition Hash Code is back for another year of challenging developers around the world to solve a Google engineering problem. Think you could optimize the layout of a Google Data Center? Or how about perfecting video streaming on YouTube?

If you’re up for the challenge, sign up to compete by February 17 at g.co/hashcode.

Hash Code takes place over 2 rounds:

  • an Online Qualification Round: Thursday, February 20, 17:30–21:30 UTC: Compete in this virtual round from wherever you’d like, including from a Hash Code hub. Hubs allow teams from the same community (e.g. a university or coding club) to compete side-by-side in a fun and exciting environment.

  • a Final Round at Google Ireland: Saturday, April 25: Top scoring teams from the Online Qualification Round are invited to our Dublin office to vie for cash prizes and the title of Hash Code 2020 champion.

Whether you’ve just started coding or you’ve been participating in programming contests for years, Hash Code is a great chance to flex your coding muscles, get a glimpse into software engineering at Google, and have some fun. Take a look at previous Hash Code problem statements to see the engineering challenges participants have tackled in the past.

Register by Feb 17 at g.co/hashcode

  • Vote: +208

»
5 years ago, # |
  Vote: +171

I'd be more supportive of this if Google didn't try to pull political stunts in their programming contests, such as creating Google Code Jam for Women. Either this suggests that Google thinks women can't compete on a level playing field with men, or it is Google finding a legal way to artificially bias hiring practices towards women; I find both of those ideas extremely unappealing.

  • »
    »
    5 years ago, # ^ |
      Vote: -20

    What now, you don't want a contest only for some country because it's discriminatory? There aren't many women in STEM and events/contests like that are meant to help the situation. I understand complaining about unfair interviews and getting a job, but a contest with the goal of popularizing programming for some underrepresented group of people? Come on.

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +761

      Bruh, women are much smarter than men. That's why they don't waste their lives on competitive programming.

    • »
      »
      »
      5 years ago, # ^ |
        Vote: -15

      I'm not trying to bring discrimination into this or anything, but I would say, yes, it is pretty silly to artificially restrict participants of an online contest to a geographic location. Why take extra effort to exclude people who might want to participate?

      (Of course, this doesn't include ICPC regionals, for example, where your geographic region matters. In that case it determines the spot at WF that you are competing for)

      In the case of high school contests it makes sense to restrict participation, because you can't expect high school students to be at the same level as college/post-college IGMs who have been competing for years. But that difference in what a person has had time to learn isn't present across age/gender/country of residence.

      • »
        »
        »
        »
        5 years ago, # ^ |
        Rev. 2   Vote: +31

        You only addressed the first sentence of my comment. In particular, my second sentence is the answer to your question "Why take extra effort to exclude people who might want to participate?".

        Your logic says it's stupid to organize a competition only for some region of a country (let's say, the championship of New York) or for a single school. The goal of those is also to popularize something in that place.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
            Vote: -28

          The only reason you should exclude people from participating is if the trait that causes them to be excluded is relevant to the contest. The college I attend holds a programming contest in order to select ICPC teams. High school students and students from other universities can join, but they are ranked unofficially, since they aren't competing for a spot on an ICPC team. In this case, location is relevant to what you are competing for.

          It would be silly to exclude people from participating in an online contest on a global platform where geography isn't relevant, in the same way it would be silly to limit the contest to a race or gender when those aren't relevant either.

  • »
    »
    5 years ago, # ^ |
      Vote: +60

    I agree. I'm a female and I think it's embarrassing.

  • »
    »
    5 years ago, # ^ |
      Vote: +272

    There is no rule in competition "Google Code Jam for Women" that bans male participants.

  • »
    »
    5 years ago, # ^ |
    Rev. 3   Vote: +26

    There is a girls-only contest in the CCPC (China CPC), and I have seen no complaints about that. There is also the CGMO (China Girls Mathematical Olympiad); its problems are quite hard (above provincial-level contests), and there are no complaints there either.

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +55

      Nobody questions authority in China ¯\_(ツ)_/¯

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: +7

        That's not exactly the case: the computer federation that hosts the contests does get questions about other contests they hold, and they do change things accordingly.

        I think the bias did not come from the contest host, but from the stereotypes of other people, who think that computer geeks should be

        • bald
        • male
        • wearing glasses
        • probably with curly hair
        • hackers
        • focusing on their screens all day
        • not knowing how to communicate with others
        • etc

        So, the problem is not with Google (or the authority, or whoever holds the contests, etc.); it's with these stereotypes. Only if we change these views can we restore fairness in computing.

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: -86

    Women can not compete in chess and competitive programming as good as men indeed — leaderboard on this site and ELO ratings shows that.

    Code Jam for Women has nothing to do with politics either. That is just the way to popularize Google's brand, hire more women to have more labor resources. Google's capital allows them to do whatever they want, e.g. infringe the rights of workers.

    • »
      »
      »
      5 years ago, # ^ |
      Rev. 2   Vote: +123

      Women can not compete in chess and competitive programming as good as men indeed — leaderboard on this site and ELO ratings shows that.

      It shows nothing of the sort. If 1% of people who do competitive programming are women (seems accurate), then obviously there are not going to be many in the top 100. But that doesn't mean that it's harder for women to get into the top 100; it just means there are fewer women.

      I don't think there is a conclusive way to prove or disprove what you just said. Codeforces leaderboard doesn't show gender (and it shouldn't), and I doubt you looked at 40000 profiles manually and tried to figure out what their gender is. So my guess is that you are talking out of your ass.

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: -16

        It shows the current state — that is not "nothing of the sort".

        In the current state of the field, women do not perform as brightly as men. One of the reasons is that there are too few women, as you said yourself.

        I have no interest in discussing either cognitive or physical women's capabilities. But I'm open to discuss workers' rights.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
            Vote: +85

          You said that "women can not compete ... as good as men". I'm sorry, but it really is hard to interpret that as anything other than "the average woman is not as good as the average man".

          But I'm open to discuss workers' rights.

          Cool, but what the hell does any of this have to do with workers' rights???

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |
              Vote: -30

            Ok, I see. Probably I should have said women do not compete as well as men.

            I made a couple of points in my first comment, replying to SecondThread, but you decided to discuss women.

  • »
    »
    5 years ago, # ^ |
      Vote: +28

    We should stop discrimination, contests should be rated for everyone regardless of their sex

  • »
    »
    5 years ago, # ^ |
      Vote: +3

    Honestly, it's Google showing that they're woke — lip service, no real effect. They don't need to artificially bias hiring practices; West-style, discrimination is only forbidden against specific groups.

  • »
    »
    5 years ago, # ^ |
      Vote: +25

    I'd be more supportive of this if Google didn't try to pull political stunts in their programming contests

    Says the one writing political stuff in an unrelated post

    • »
      »
      »
      5 years ago, # ^ |
        Vote: -26

      You also reacted to political stuff.

      See, this is the shitty thing about political activism: it tends to make one want to respond, so it propagates into unrelated things like programming. Unfortunately, it always ends either with flaming or a circlejerk.

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: +14

        Then write a new thread, rather than spamming Hash Code announcement.

        You also reacted to political stuff.

        I didn't say anything bad about that ;)

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
            Vote: 0

          I didn't say anything bad about that ;)

          ??? You said "Says the one writing political stuff in an unrelated post", it's about reacting to (Google's) political stuff, it doesn't sound like you're praising it or being sarcastic.

          Tbh it doesn't belong on CF in any form but I can understand dude's desire to vent.

  • »
    »
    5 years ago, # ^ |
      Vote: +78

    Let me give you my two cents as a woman who participated in the latest "Code Jam for Women" competition. I've met loads of female competitive programmers here in Brazil and discussed why there aren't as many women in competitive programming. Most of them said it is a pretty hostile environment: guys choose guys over higher-rated girls, or don't put as much effort into teaching girls because they think they won't learn as fast as other dudes. Because of that, girls are much less likely to ask questions and learn from their peers.

    What I liked about the Code Jam for Women is that it gave some of the less experienced girls a place to ask questions about previous competitions, discuss with people similar to themselves, and be heard. I had never seen them as a group study as hard for anything else. I can normally get people to respect me enough to hear my ideas (in most cases), but the challenge is with girls at the beginning, who face this silly artificial barrier before becoming coders. Code Jam for Women helps with that.

»
5 years ago, # |
Rev. 2   Vote: +76

It won't be funny if you give us a problem about moving some units on a plane...

It'd be the final fall of HashCode.

»
5 years ago, # |
  Vote: -24

It seems there is political behavior from Google that is even more shameful and hostile. I cannot represent my country.

»
5 years ago, # |
Rev. 2   Vote: +139

I think that it's a good place to share my thoughts. I'm the greatest anti-fan of HashCode's formula. A few reasons:

-- One problem for four people for four hours? Definitely the wrong proportions. Do you remember the eliminations to Pizza/Deadline24/Marathon24? The last one had five problems for three people. This was something. You had to show your strategic skill: where to spend more time, where to just write something fast and let it run in the background, whether to take a problem yourself or give it to a teammate, etc... In the current formula, it always ends with very small differences. In Marathon, fighting for some epsilons didn't make sense, as you could switch problems and get much more.

-- You are not solving the problem, you are solving the testcases. Well, on Deadline/Marathon you were solving them too, but they had $$$10$$$ files with several testcases each. HashCode offers about two seriously hard tests. If the organizers still think that we are solving the problem — come on, you want to compare the solutions on two tests? For example, TopCoder (which is the most famous in the optimization world thanks to marathon matches) runs solutions on $$$2000$$$-$$$5000$$$ tests. OK, you won't force us to run on so many, but still, giving us $$$10$$$ files with several testcases would sometimes force the best teams to analyze some testcase deeply (as you like it so much) -- to find out how it's generated, and maybe what they should focus on. Currently, teams HAVE to focus on the properties of these two testcases, not on the whole problem. And again, it makes everybody fight for some small differences in the same places instead of trying to find some other place to get points.

-- The problems (at least in the eliminations) are just boring. There is no room for nice ideas; everybody ends up with almost the same greedy (and has to fight for small differences). A few times I won a problem in the eliminations to Pizza/Deadline, and every time I won thanks to some great idea which nobody else came up with.

deadline24 2018 Firstly, cut everything like that:

And then play with merging these beams.

pizza 2018 Make long (almost) diagonal pillars:

(ok, on the drawing they aren't so long, sorry for that) And then play with building one on the other.

I was very proud of myself for coming up with these (still rather simple) ideas. What can we come up with on HashCode? "Huh, maybe send some worker to the nearest workplace on the plane"? xd

I guess that even if the problems were more interesting, the aura around good ideas would still be killed by the fact that everybody in the team has to think about the same problem, so multiple teams would again come up with the same thing. Also, if you think that it's a bad idea to give such a non-standard and complicated problem when there's only one problem in a contest -- you know what to do: give us more problems. In particular, the number of problems $$$\geq$$$ the number of people in one team should hold; then you can feel calm that the best teams will advance to the finals. With more than one problem you can experiment and give us many different types of problems (and then one of them can be boring if you like such ones, it won't hurt so much).

-- The problems could be better prepared. Here again is the fact about solving the testcases instead of the problems, but there's more. A typical situation from a short contest: ok, I finished my greedy algorithms, maybe I found something in a smarter way, what now? Everybody should know the answer — metaheuristics like hill climbing, simulated annealing, or changing the greedy to a beam search (it depends on the problem, of course). And what turns out on HashCode? The limits are so high that you won't be able to use any metaheuristic in an efficient way (at least I've always had this feeling). Everybody, do you understand it? They organize a contest to see who's the best at optimization problems and force us to do only some simple greedy shit xd. Of course, giving us only small data would also be bad. Do you have any ideas about what would be good? Maybe $$$10$$$ files with several testcases of various sizes and parameters? We'll never know...
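For readers unfamiliar with the metaheuristics mentioned above, here is a generic simulated annealing loop, a minimal sketch of the technique rather than a solution to any particular HashCode problem; the function names and cooling schedule are illustrative choices, not anything from the contest:

```python
import math
import random

def simulated_annealing(initial, neighbor, score, iters=100_000,
                        t0=1.0, t1=0.01, seed=0):
    """Generic simulated-annealing skeleton (maximization).

    neighbor(state, rng) proposes a candidate; score(state) is the
    objective; temperature decays geometrically from t0 down to t1.
    """
    rng = random.Random(seed)
    state, cur = initial, score(initial)
    best_state, best = state, cur
    for step in range(iters):
        t = t0 * (t1 / t0) ** (step / iters)   # geometric cooling
        cand = neighbor(state, rng)
        c = score(cand)
        # always accept improvements; accept worsenings with
        # probability exp((c - cur) / t), which shrinks as t drops
        if c >= cur or rng.random() < math.exp((c - cur) / t):
            state, cur = cand, c
            if cur > best:
                best_state, best = state, cur
    return best_state, best
```

The point of the annealing schedule is that early worsening moves let the search escape the local optima a plain greedy or hill climber gets stuck in; with huge inputs and a slow `score` function, the loop simply cannot do enough iterations, which is the complaint above.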

To sum up -- maybe each of these defects alone wouldn't hurt so much (though it would still hurt); all of them together make the formula of the contest the worst possible ever. I always wonder how it is possible that Google, the company which organizes Code Jam, made such a bad contest as HashCode. Just look here, at the section "Scoring" -> "How is my score for a particular problem determined?". I guess that people were thinking about this section several times longer than about the whole of HashCode.

"Join, it'll be fun!" -- yea, but what about being professional and making a competition instead of a chaotic game for kids?

Dear HashCode team, not everything is lost. I still believe in you. You can still fix your contest and make it make sense.

Best regards,

Mateusz Radecki a.k.a. Radewoosh

  • »
    »
    5 years ago, # ^ |
      Vote: +9

    I completely disagree with your opinion about the HashCode elimination round problems. Probably a simple greedy allows you to qualify, but if you try something simple in upsolving you will see that you are far from the first-place result.

    I also don't like that there are only 4-5 tests. Maybe they do so because all tests are handmade or taken from some "real" data.

    Deadline/Marathon eliminations were very cool, but that doesn't mean that the one-problem format is worse. They are different: the one-problem format is more about team interaction, the several-problems format is much more individual.

    HC/SA worked on almost all hashcode problems, you are just wrong about it.

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +23

      I remember talking with Errichto once after the eliminations which they won (2018 if I'm not mistaken) to learn what they did. Guess what. Just greedy, with parameters and decisions chosen by analyzing the structure of the test (as there are different amounts of points for different tests, usually one is the most important).

      Maybe people had a working SA, but we won't know who was the best, as there are O(1) tests. I think that many tests on Deadline/Marathon were also handmade, so still, it's possible to do more.

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: +2

        Ok, maybe the 2018 elimination was bad. I don't remember well, but it seems there was an issue with the test structure.

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: +40

    In my opinion it makes perfect sense to have the quals and finals in the same format. And, I guess, Google would never make a 24-hour final. So what are the options in that setting:

    1. One task for ~5 hours,
    2. More than one task for ~5 hours,

    I'm not aware of competitions with finals in such formats, except for Hash Code. Are there any? (Or in fact, were there any? Because all the 24-hour competitions have been shut down.)


    -- The problems could be better prepared. Definitely they could be. Do you remember the finals of Pizza/Deadline24/Marathon24? Let's take the Deadline24 finals 2017 as an example. They changed the rules of the third task during the contest (and erased all points earned) because the rules were easily exploited. And the new version of the task was still exploited in a similar way.

    Yeah, it's quite easy to take the best from what you like and compare it to the worst of what you don't. Deadline was an awesome competition. I guess Pizza/Marathon were too (but I never participated in them).

    -- Sure, two hard tests are not enough. But what about five, as in the Hash Code finals 2018?

    -- Having testcases of different scales (while not making the smaller ones too cheap) sounds like a good idea to me.

    -- In some tasks all participants end up with the same greedy. Or have to solve exactly the same knapsack. That is really boring.

    Not all problems from past editions are equally good. But maybe it makes more sense to give feedback on particular problems than just claiming that the format is trash?

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +10

      "In my opinion it makes perfect sense to have the quals and finals in the same format." -- I totally agree, but I don't want to talk too much about the finals, as I've never been to a HashCode final. I've heard that the problems happen to be a bit better than in the eliminations. Comparing with the eliminations of Pizza/Deadline24/Marathon24, I think it'd be nice to see both the eliminations and the finals of HashCode with multiple tasks for ~5 hours.

      I'm not suggesting a 24-hour final, as the competition is definitely about optimization, not about writing bots and so on, but imo there is a better way to run the competition for teams.

  • »
    »
    5 years ago, # ^ |
      Vote: +28

    You are basically saying "I enjoy formats of different optimization contests, so this one is bad because it is not the same".

    If all you have to do is to write one greedy, why are you not able to advance?

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +12

      No, I really tried to show the defects which I see. I talked about the small differences. Are you sure that the teams are really sorted according to the strength of their programs? Look here. Yes, probably Past Glory had a better solution than the others, but look at the eliminations. $$$Second = First * 0.998$$$.

      To show my thoughts: how many of the CF rounds which you won would you have won if they consisted of only one problem, let's say C? You'd have to fight for milliseconds, while with a whole problemset you can show how good you are without worrying that small mistakes disqualify you.

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: +20

        Yeah, I understand you, but it is still part of the format.

        I don't think that it is legitimate to compare the relative difference. For example, in the finals there was a testcase where you could get either $$$2^{19}$$$ or $$$\varepsilon$$$. All the top teams got $$$2^{19}$$$, so the relative difference (on other testcases) became smaller. I feel like my comment is messy; what I want to say is that if you compare differences to some baseline instead of total scores, the relative difference will increase.

        Still, I agree that many changes from the 2nd-place to the 1st-place solution feel insignificant and kinda random, but maybe that's what makes this format unique. The 1st-place team somehow made these changes and won, right?

        Most of your comment is about you not liking some problems and not liking that there is one problem and not many. From my point of view it really is "I enjoy formats of different optimization contests, so this one is bad because it is not the same".

      • »
        »
        »
        »
        5 years ago, # ^ |
        Rev. 4   Vote: +33

        About this one. Yeah, Past Glory had a better solution. On top of the required knapsack and some greedy algos (the teams in 2nd-6th also had that part), they had hand-made patterns, which are kind of a constructive solution for the given tests.

        While I don't consider the task from the 2019 finals an example of the greatest Hash Code problems, I can't agree with your accusations.

        If teams easily get X (let's say X = 6M) and then fight for the last Y points, then even if Y << X, it doesn't tell us anything about the quality of the problem. It's just a non-linear scale of the score value (as a function of solution quality).

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: -10

    In Deadline, the first greedy that comes to mind gets at least a top-30 score (more likely top-10). That's how I qualified for 3/4 Deadlines (I failed the ACM-style task the year I didn't qualify). All tests in Deadline are more or less random and you don't have to write separate solutions for particular tests.

    Hashcode is harder because:

    • it is more known
    • there are tests where you must write separate solution
    • you can't always tell for each test whether your score is fine or you could get more points; you can only estimate it, not compare it to other teams' scores

    Even though I never qualified for the Hash Code finals, I should say it is more balanced (stronger people are more likely to get higher scores).

»
5 years ago, # |
  Vote: +10

I agree with some of those complaints about the format of Google Hash Code (having participated in the qualification 4 times already). I think there is one simple solution that would improve things considerably — have a few hard inputs that are only revealed during the last hour of the competition. This would favour more generic solutions over ones tailor-made for the specific inputs. Also, this would add way more action and tension at the end, compared to the current scoreboard freeze during the last hour.

»
5 years ago, # |
  Vote: +10
  • registration extended to Feb 19th, 11:00 UTC.
»
5 years ago, # |
  Vote: 0

I'm looking for a team -- Python preferred, C++ is ok. Anyone got a spot on their team or looking for a team?

»
5 years ago, # |
Rev. 2   Vote: +25

Teams in the top, can you share your scores on tests? Here are ours (team Errecto: Errichto, kostka, Chameleon2460, kabuszki).

A — 21
B — 5,822,900
C — 5,690,369
D — 5,034,770
E — 5,237,345
F — 5,348,248

total 27,133,653

We think that D can be improved quite a bit more.

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: +51
    • A — 21
    • B — 5,822,900
    • C — 5,689,622
    • D — 5,107,115
    • E — 5,216,590
    • F — 5,348,248

    total 27,184,696

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +30

      Lol, we got 98% of the max possible score on D at the beginning and decided to concentrate on other tests; now we have 70k less than you on this test.

    • »
      »
      »
      5 years ago, # ^ |
      Rev. 7   Vote: 0

      a

    • »
      »
      »
      5 years ago, # ^ |
        Vote: +39

      How did your team get such a high score on D?

      • »
        »
        »
        »
        5 years ago, # ^ |
          Vote: +41

        Perhaps intentionally, there are 15000 books at 2 libraries each, and 63600 books at 3 libraries each. If the former are variables and the latter are clauses, then we have a 3-SAT instance. Then we did local search on the number of satisfied clauses. Probably using some SAT-solver a perfect score can be achieved as well.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
            Vote: 0

          Our team also found the 3-SAT modeling, but doing such a 'good' local search was the difficult part... congratulations to your team!

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |
              Vote: +50

            Well, our local search is very simple: flip a random variable, keep that change if the number of satisfied clauses does not decrease, wait for as long as you wish :)
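The flip-and-keep local search described above can be sketched as follows. This is a minimal illustration of the idea, not the team's actual code, and the DIMACS-style clause encoding is an assumption:

```python
import random

def local_search_3sat(n_vars, clauses, iters=100_000, seed=0):
    """Randomized local search for MAX-3-SAT.

    clauses: iterable of tuples of nonzero ints (DIMACS-style):
    literal v refers to variable abs(v) - 1, negated if v < 0.
    """
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def num_satisfied():
        return sum(
            any(assign[abs(v) - 1] == (v > 0) for v in cl)
            for cl in clauses
        )

    best = num_satisfied()
    for _ in range(iters):
        i = rng.randrange(n_vars)
        assign[i] = not assign[i]      # flip a random variable
        cur = num_satisfied()
        if cur >= best:                # keep non-worsening changes
            best = cur
        else:
            assign[i] = not assign[i]  # revert worsening flips
    return assign, best
```

Recomputing the full clause count after every flip is O(m); a serious SAT local-search solver would update the count incrementally, but "wait for as long as you wish" with the simple version matches the comment above.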

  • »
    »
    5 years ago, # ^ |
    Rev. 4   Vote: 0

    We are not really a contender for the top, but we got:

    A-21
    B-5,822,900
    C-5,688,788
    D-5,039,450
    E-5,102,090
    F-5,345,656
    

    total 26,998,905.

    I guess E was our weak point, did you guys use some special algorithm for E?

  • »
    »
    5 years ago, # ^ |
      Vote: -33

    A — 21
    B — 5,822,900
    C — 1,421,219
    D — 4,815,200
    E — 3,748,621
    F — 1,253,555

    total 17,061,516

    We think that everything can be improved quite a lot!

  • »
    »
    5 years ago, # ^ |
    Rev. 4   Vote: +3

    Probably not even close to the top after unfreeze, but

    A — 21

    B — 5,822,900

    C — 5,689,820

    D — 5,028,530

    E — 5,121,579

    F — 5,348,248

    Total 27,011,098

    Team circus_wolves(nmakeenkov, linjek).

    In the last hour we did only slightly better (before that the total was 27,009,965), because we concentrated on task C.

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: +16

    A — 21

    B — 5,822,900

    C — 5,645,747

    D — 4,812,015

    E — 4,601,441

    F — 5,317,660

    Total : 26,199,784

    I guess we were derailed the moment we tried improving only C and F, since there was a huge margin. But D and E seem to have been better to focus on.

  • »
    »
    5 years ago, # ^ |
    Rev. 4   Vote: +14

    We had
    - A: 21
    - B: 5,822,900
    - C: 5,689,822
    - D: 5,028,790
    - E: 5,034,292
    - F: 5,317,660

    Total — 26,893,485

    With tusg25 and zeyunow

  • »
    »
    5 years ago, # ^ |
    Rev. 4   Vote: +9

    With bardek and DamianS we have:

    A -- 17

    B -- 5,822,900

    C -- 5,690,448

    D -- 5,028,530

    E -- 5,215,303

    F -- 5,340,296

    total -- 27,097,494

  • »
    »
    5 years ago, # ^ |
    Rev. 2   Vote: 0

    A — 21
    B — 5,822,900
    C — 5,686,950
    D — 5,028,530
    E — 5,131,048
    F — 5,345,656

    Total — 27,015,105

  • »
    »
    5 years ago, # ^ |
      Vote: +4
    • A — 21
    • B — 5,822,900
    • C — 5,666,123
    • D — 5,028,790
    • E — 5,222,266
    • F — 5,330,120

    total 27,070,220

    I hope we qualify (((((((

  • »
    »
    5 years ago, # ^ |
      Vote: +14

    TÜ Randomized algorithms (me and semjon.00)

    • A — 21
    • B — 5 822 900
    • C — 5 689 822
    • D — 5 028 790
    • E — 5 236 142
    • F — 5 348 248

    Total — 27 125 923

»
5 years ago, # |
  Vote: +19

What are your predictions for the cutoff?

»
5 years ago, # |
  Vote: +3

Team NZEC_ final Score:

  • A — 21

  • B — 5,822,900

  • C — 5,645,747

  • D — 4,843,540

  • E — 4,613,390

  • F — 5,240,161

Total Score — 26,165,759

AbhiY13 verma_ankit484 21171_somesh

»
5 лет назад, # |
  Проголосовать: нравится +15 Проголосовать: не нравится

The schedule shows that results of the qualification round will be published on Feb 25. I get that you need to check for cheating, but will we not even get some "preliminary" results before that? Or is the scoreboard "unfrozen" on Feb 25?

(We got a huge increase in the last hour and are now anxious :D)

»
5 лет назад, # |
  Проголосовать: нравится +29 Проголосовать: не нравится

Is the live streaming dead?

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Pumpkin Patch (WhaleVomit, PlasmaVortex):

A — 21

B — 5822900

C — 5467966

D — 4815395

E — 4854966

F — 5202696

Total — 26163944

»
5 лет назад, # |
  Проголосовать: нравится +1 Проголосовать: не нравится

What is the extended round?

»
5 лет назад, # |
  Проголосовать: нравится +10 Проголосовать: не нравится

Team tratata: A – 21; B – 5,822,900; C – 5,679,708; D – 5,067,140; E – 5,205,284; F – 5,348,248; Total score — 27,123,301.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +1 Проголосовать: не нравится

Several people seem to have achieved this same score on F. Is that optimal? What's the idea behind it? Our team only got 5,001,254 on that particular case.

»
5 лет назад, # |
  Проголосовать: нравится +13 Проголосовать: не нравится

Who are the members of the *code team that secured first place?

»
5 лет назад, # |
  Проголосовать: нравится +13 Проголосовать: не нравится
»
5 лет назад, # |
  Проголосовать: нравится +10 Проголосовать: не нравится

Metaheuristics in vacuum (MrDindows, Furko)

B — 5,822,900
C — 5,690,378
D — 5,038,215
E — 5,210,736
F — 5,345,656

Total score 27,107,906

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +10 Проголосовать: не нравится

A : 21
B : 5,822,900
C : 5,689,997
D : 5,033,665
E : 4,938,403
F : 5,345,656
Total Score 26,830,642

Team : Little_Piplup dlwocks31 diordhd gratus907 Coffeetea

Several tweaks / random parameters / UNINTENDED BUGS / weird piece of code :(

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +10 Проголосовать: не нравится

A — 21

B — 5,822,900

C — 5,689,502

D — 5,043,025

E — 5,095,422

F — 5,317,660

Total — 26,968,530 (#121 overall)

from #9 team of last final round.

Congratulations to the finalists!

»
5 лет назад, # |
  Проголосовать: нравится -19 Проголосовать: не нравится

A — 21
B — 5,822,900
C — 5,467,996
D — 4,839,510
E — 3,600,887
F — 5,042,544
Total: 24,773,828. Team: me and polarity, but it was basically me solving them all. I still don't understand why my method failed so badly on E...

»
5 лет назад, # |
  Проголосовать: нравится +31 Проголосовать: не нравится

We did pretty badly, but at least we got 5,690,828 on C (with a set-cover IP formulation), which seems to be the best result posted here so far. Actually, I think it is optimal.

»
5 лет назад, # |
Rev. 3   Проголосовать: нравится -49 Проголосовать: не нравится

Joined in the last 90 mins, wrote a stupid O(L^2) heuristic, and ended up with a score of 26,791,720. No idea what happened XD. Well, it's Hash Code: a stupid solution gets a decent score.

  • »
    »
    5 лет назад, # ^ |
    Rev. 2   Проголосовать: нравится +3 Проголосовать: не нравится

What is your team name?
As far as I can see, the highest score in the contest was 27,203,691 ( < 27,791,720).

Update: I just read that you solved it in 90s. I thought it was 90 mins at first. Seriously, you destroyed everyone in 90s?

    • »
      »
      »
      5 лет назад, # ^ |
      Rev. 2   Проголосовать: нравится -15 Проголосовать: не нравится

Sorry, it is 26,791,720 :). I was just trying to say the scoring algorithm is kind of strange. Friends of mine wrote some brilliant solutions but unfortunately received a lower score. I'm 100% sure I don't deserve the score.

      • »
        »
        »
        »
        5 лет назад, # ^ |
        Rev. 3   Проголосовать: нравится -36 Проголосовать: не нравится

~~Still, you wrote that solution in 90s?~~ The duration was also updated later.
What was your solution/heuristic?

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
          Rev. 3   Проголосовать: нравится -19 Проголосовать: не нравится

I just sorted the libraries with a weighting function involving the signup time, plus a greedy that scans the best books in the remaining time. (The idea is stupid; it was 4:00am here and my brain was not functioning properly, but that's the only thing you can code in 90 mins, I guess.)

It ran for ~5 mins on case D (the slowest) and got the score I stated above. I am now trying to optimize some kind of knapsack; hopefully it gets a better score.

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится 0 Проголосовать: не нравится

            what was the exact heuristic? I tried something similar but wasn't able to consistently beat my original greedy solution :(

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    what heuristic?

»
5 лет назад, # |
  Проголосовать: нравится +17 Проголосовать: не нравится
  • A — 21
  • B — 5,822,900
  • C — 5,689,822
  • D — 5,037,435
  • E — 4,985,816
  • F — 5,340,296

Total: 26,876,290

Does anyone have a good solution idea for E?

»
5 лет назад, # |
  Проголосовать: нравится +8 Проголосовать: не нравится

For any team who got more than 5M points on E: how did you do that? Our team's local search ended up getting a terrible score for E (less than 4M), and we only got 4.6M after doing some random greedy.

  • »
    »
    5 лет назад, # ^ |
    Rev. 3   Проголосовать: нравится +13 Проголосовать: не нравится

Greedily choosing the next library, then greedily taking its highest-scoring untaken books, with the library scoring function sum(book_scores) / library.signUpTime, gave us 5,034,357 points.

Seems to be much less than what lots of people got, though.
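That greedy can be sketched roughly like this (a minimal Python sketch, not the commenter's code; the layout of `libraries` and `book_score` and the total number of days `D` are assumptions made for illustration):

```python
import heapq

def greedy_order(D, libraries, book_score):
    """Greedy sketch: repeatedly sign up the library maximizing
    (score of its best still-untaken books) / signup_time.
    libraries: list of (signup_time, books_per_day, book_ids); signup_time > 0."""
    taken = set()   # books already scheduled for scanning
    day = 0         # day the next signup can start
    order = []
    remaining = set(range(len(libraries)))
    while remaining and day < D:
        best, best_val = None, -1.0
        for lib in remaining:
            signup, per_day, books = libraries[lib]
            days_left = D - day - signup
            if days_left <= 0:
                continue
            capacity = days_left * per_day
            top = heapq.nlargest(capacity,
                                 (book_score[b] for b in books if b not in taken))
            val = sum(top) / signup
            if val > best_val:
                best, best_val = lib, val
        if best is None:
            break  # no library can finish signing up in time
        order.append(best)
        remaining.discard(best)
        signup, per_day, books = libraries[best]
        day += signup
        capacity = (D - day) * per_day
        taken.update(heapq.nlargest(capacity,
                                    (b for b in books if b not in taken),
                                    key=lambda b: book_score[b]))
    return order, taken
```

The real datasets are large, so a full rescan of every remaining library per step may need pruning or caching to finish in time.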

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится +39 Проголосовать: не нравится

After you get the final order of libraries, you can improve the result using a maximum vertex-weighted matching in a bipartite graph. That gives about 5,210,736 points. With some heuristics for book score based on the number of libraries containing that book, I got 5,223,111.

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится +77 Проголосовать: не нравится

      We used the following way to decide the order of the libraries.

      Assume we have some order of the libraries; let's try to quickly (in $$$O(N)$$$) estimate the total weight of books we can take. For each library, we can calculate the number of books it can scan: (number of days after signup) * (number of books that can be scanned per day), and just assume it takes the best books possible. Using precalculated prefix sums, calculate the sum of scores each library takes. At this point, we just ignore that multiple libraries will count the same book and will simply count these books multiple times.

      Now we have some quick way to estimate how good a given order is. We can use some hill climbing or simulated annealing to find a good order. After we have decided the order, use some min-cost max-flow to assign books to libraries (as mentioned above).

We got 5 236 142 points for E this way. About 200k of it came from this approach to library ordering (before that, we had a simple greedy ordering + max flow).
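A hedged sketch of that scheme (not the team's actual code; `signup`, `per_day` and the per-library prefix sums of sorted book scores are assumed precomputed):

```python
import random

def estimate(order, D, signup, per_day, sorted_prefix):
    """O(N) optimistic score of a library order: each library takes its
    best books; books counted by several libraries are double-counted.
    sorted_prefix[l][k] = sum of the k highest book scores in library l."""
    day, total = 0, 0
    for lib in order:
        day += signup[lib]
        if day >= D:
            break  # signups are sequential, nothing later can finish
        cap = (D - day) * per_day[lib]
        k = min(cap, len(sorted_prefix[lib]) - 1)
        total += sorted_prefix[lib][k]
    return total

def hill_climb(order, D, signup, per_day, sorted_prefix, iters=10000):
    """Swap two random positions; keep the swap if the estimate doesn't drop."""
    best = estimate(order, D, signup, per_day, sorted_prefix)
    for _ in range(iters):
        i = random.randrange(len(order))
        j = random.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        cur = estimate(order, D, signup, per_day, sorted_prefix)
        if cur >= best:
            best = cur
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best
```

Simulated annealing would accept some worsening swaps as well; the skeleton is the same.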

      • »
        »
        »
        »
        5 лет назад, # ^ |
          Проголосовать: нравится 0 Проголосовать: не нравится

Did you run min-cost max-flow on the whole dataset or split it into parts? Running min-cost max-flow on the whole dataset wouldn't finish on anything after the A set for us, and all of our splits gave bad scores.

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
            Проголосовать: нравится 0 Проголосовать: не нравится

          The whole dataset. We have a fairly efficient implementation. It didn't finish for C, but finished for D, E and F (at least after compiling with -O2 :P).

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится 0 Проголосовать: не нравится

            Would it be possible to share that implementation with me?

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится +28 Проголосовать: не нравится

            And there's no assignment to do for C anyway since all books are scanned immediately.

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
            Проголосовать: нравится +8 Проголосовать: не нравится

There is a thing called "zkw min-cost flow" which works really well in practice. I don't know the details, but basically it prunes by considering all paths with the same distance and simultaneously pushing flows of the same value, as in Dinic's. This finishes in about 0.5-1 s per iteration.

Probably the number of distinct rewards is small, so one could also compress nodes with the same scores and reduce the graph. I didn't try it. Maybe I should have.

      • »
        »
        »
        »
        5 лет назад, # ^ |
          Проголосовать: нравится +2 Проголосовать: не нравится

How do you set up the graph for min-cost max-flow? I created a bipartite graph to maximize the number of books scanned, but I couldn't think of any way to take the value of each book into account.

The second part seems really essential; our implementation gave almost exactly the same scores as greedily assigning books, since we didn't account for costs.

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
          Rev. 3   Проголосовать: нравится +3 Проголосовать: не нравится

The value of each book is exactly what the "min-cost" part is needed for.

          The flow network consists of the following nodes:

          • a source;
          • a sink;
          • a node for each book;
          • a node for each library.

          And edges:

          • from the source to each book an edge with capacity 1 and cost equal to -1 times the book value.
• from each book to each library that contains it, an edge with capacity 1 and cost 0.
          • from each library to the sink an edge with capacity equal to the number of books it can scan ((days after signup) * (books per day)) and cost 0.

Actually, I guess we should have used just "min-cost flow" here; maximizing the flow isn't what we care about. But we were in a hurry and it worked pretty well ¯\_(ツ)_/¯.
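For illustration, a minimal min-cost max-flow over such a network (a slow successive-shortest-paths variant with Bellman-Ford, nothing like the optimized implementations discussed in this thread; the node numbering is up to the caller):

```python
def min_cost_max_flow(n, edges, s, t):
    """Successive shortest augmenting paths with Bellman-Ford (handles the
    negative source->book costs; fine for toy sizes only).
    edges: list of [u, v, capacity, cost]. Returns (max_flow, min_cost)."""
    INF = float("inf")
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total_cost = 0
    while True:
        dist = [INF] * n
        dist[s] = 0
        prev = [None] * n          # prev[v] = (node, edge index) on best path
        updated = True
        while updated:             # Bellman-Ford relaxation
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        updated = True
        if dist[t] == INF:
            return flow, total_cost
        push, v = INF, t           # bottleneck capacity along the path
        while v != s:
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:              # apply the augmentation
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
```

To get plain min-cost flow rather than min-cost max-flow, as the comment suggests, one could stop augmenting as soon as `dist[t] >= 0`, i.e. when pushing more flow no longer gains score.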

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

It turned out that there was a bug in our team's local search implementation. After fixing it, we got approximately 5.1M on E and minor improvements on C, D and F as well.

»
5 лет назад, # |
  Проголосовать: нравится +18 Проголосовать: не нравится

It was really fun to participate in hashcode for the first time. Learned a lot. We made a time-lapse video while participating.

Check it out — https://youtu.be/7mzgaE9xiFE

After watching the video, some of the mistakes were quite evident. If you could also point out a few mistakes that we made then that would be really helpful.

Thanks! Looking forward to enhancing my problem-solving skills.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Was anybody contacted regarding the finals?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Anyone got e-mail regarding selection for world finals?

»
5 лет назад, # |
  Проголосовать: нравится +7 Проголосовать: не нравится

My team recently participated in Google's Hash Code 2020. It was my first time, yet we ranked 115th in India and 1127th internationally. Here is my Hash Code 2020 experience: https://www.linkedin.com/pulse/hashcode-2020-experience-jai-kumar-dewani
My LinkedIn post: https://www.linkedin.com/posts/jai-dewani_hashcode-hashcode2020-activity-6636693461091352576-ntpE

»
5 лет назад, # |
  Проголосовать: нравится +118 Проголосовать: не нравится

gg

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +57 Проголосовать: не нравится

Why can't onsite finals just be postponed, not canceled?

»
5 лет назад, # |
  Проголосовать: нравится -50 Проголосовать: не нравится

The truth is that it has no connection to the coronavirus. The organizers just realized that this competition is stupid and makes no sense, and decided to cancel it to make it completely new with a better format in the future. Finally some good news about #code.

»
5 лет назад, # |
Rev. 3   Проголосовать: нравится +44 Проголосовать: не нравится

Results in the Extended Round:

  • A — 21
  • B — 5,822,900
  • C — 5,690,888
  • D — 5,109,000
  • E — 5,240,809
  • F — 5,348,248

Total 27,211,866

Team: CIMAT E3 (HeathcliffAC, gtvazquez, Mario Guzman, csegura)

Institution: Centre for Research in Mathematics (Mexico, Guanajuato)

Method: Parallel Memetic Algorithm. We will write a paper describing the algorithm for those interested.