Hello, Codeforces!
Registration for the ML competition from Raiffeisenbank that starts on May 17 is now open. The prize fund of the championship is 700,000 rubles. The competition is unrated.
All terms of participation were developed by the Raiffeisenbank eFX trading team in collaboration with Codeforces. Huge thanks to geranazavr555 for testing and great advice, and to MikeMirzayanov for the Codeforces systems.
Participants are invited to build a predictive model based on the provided historical data.
Cash prizes will be contested only among Russian residents (we expect winning participants to provide descriptions of their solutions in Russian).
Prizes:
- 1-4 places — 100K rubles each
- 5-10 places — 50K rubles each
- 1-60 places — merch packages
Registration is open until the end of the contest. Please fill in the contest registration form.
UPD1: The competition will now run for two weeks, until May 31, 19:00 MSK!
UPD2: Use our baseline solution for a quick start.
UPD3: Contest results are open. You can download the system testing data via the link http://assets.codeforces.com/rounds/1522/8f2fa64f1730ae12dc37504765d7e012e16613f0/tests2.csv
UPD4: Results will be announced within the coming week! Stay tuned!
UPD5: The results are now final and the winners are:
Congratulations! Everyone in the global top 60: don't forget to check your inbox, we'll message you about the merch packages.
Is this only for Russians?
No, for all
It's for tourist))
Hi, currently I am not looking for any job opportunities. Could you please add "None" under "Are you looking for internships or full-time positions?*"? I am just participating for fun. Also, what should I provide under "Telegram or another messenger"?
I'm assuming this is unrated, right? (since it seems to be an ML contest rather than an algorithmic contest). It isn't mentioned in the description.
Is this contest supposed to be a normal CF contest (with algorithmic problems) or is this something else?
It is mentioned above that participants are invited to build a predictive model based on the provided historical data.
So, it is not a normal CF contest.
is it rated ?
Nice profile, it shows your hard work.
yours too.
thanks
yours too
It should not be, since it runs for 6 hrs.
Look again, it's for 6 days. Getting an ML problem solved in 6 hrs would be very challenging, as training the model alone can take up to 3-4 hrs unless you have a supercomputer.
Any similar contest from the past, so that we can practice?
You can refer to the last two problems in the Quora Programming Contest this year.
https://codeforces.me/blog/entry/86539
The server is long offline, and the test cases are not published. I wrote some explanations for the ML problems in the Codeforces thread.
Where can we find the contest rules, please? x)
I haven't ever tried writing ML algorithms, but I do want to learn. Can someone please provide some problems which are supposed to be solved with ML algorithms?
You can explore Kaggle for ML-related stuff. It's like CF for ML. There are regular competitions on Kaggle, and it also has some good learning resources.
I hope to work remotely at Raiffeisen bank.
What is the round format?
1. Is it a dynamic evaluation or is it a submission-upload format?
2. If dynamic, please share resource and library availability (GPU/CPU specs, max memory, available libraries, whether custom libs can be uploaded, internet access).
anyone here?
No.
How do we determine the winner of a football match? Do we have to check other parameters if the final goal totals of both teams are the same, or can we say that it is a draw?
We can say that it is a draw.
The interactive part is not clear to me. Why are there 2 blank lines before each bet in the output? I need more clarification on this point: "After every response, your program should print a line feed and flush the output buffer." Does "every response" mean every bet? I do not want to spend 2-3 days on this problem only to find out my solution "hung" in system testing.
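For reference, a minimal sketch of one plausible shape of the interaction loop, assuming one line of pre-match data per match and one line of post-match statistics after each bet (the statement is authoritative on the exact format); the important part is the flush after every bet:

```python
import sys

def main():
    # Assumed protocol: read one line of pre-match data, print a bet,
    # flush, then read one line of post-match statistics. Check the
    # statement for the real number of lines per match.
    while True:
        pre_match = sys.stdin.readline()
        if not pre_match:              # EOF: the interactor has finished
            break
        bet = "SKIP"                   # placeholder: HOME / DRAW / AWAY / SKIP
        print(bet)
        sys.stdout.flush()             # flush after every response
        post_match = sys.stdin.readline()
        # ...update internal state with post_match here...

if __name__ == "__main__":
    main()
```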
Has anyone figured out what "Wrong answer on test 1" is supposed to mean here, rather than just a bad negative score?
Hey, I also got the "Wrong Answer on Test 1". What files did you put in the zip apart from main.py and train.csv?
Apparently, I was missing some header files. Make sure to keep main.py at the root level of your zip file. Other than main.py I have two other files: the model and a helper dictionary.
Don't you have train.csv in your zip file? Can you list the names of all the files (with their extensions) in your zip file?
I am mainly using my pretrained model for predictions, so I don't need train.csv in the zip file.
If I try to include numpy, pandas or scikit-learn, I get a runtime error. Is there any workaround for it?
Runtime error = a non-zero exit code, or an exception has been raised.
I found that each match has a fixed answer with the highest score, which is one of "HOME", "DRAW" and "AWAY", while the score for "SKIP" is 0. Then for each match, as long as you try 3 times, you can find out which output gives the highest score. While probing the i-th match, output "SKIP" for all other matches. In this way, the highest score of the i-th match can be determined, so we can reach the maximum score with n*3 submissions. Am I right?
I thought of that, but then you would have to make 25,500 submissions, which is a hilarious number XD
Also, these are only preliminary tests and the real tests will run after the contest; if you did that, it would pass the preliminary tests but fail afterwards.
I get it.
Has anyone tried submitting with Python3 Lib + Zip files? What files did your zip contain? 1) main.py 2) train.csv
.....??
The only requirement is to have a main.py file at the archive's root.
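For what it's worth, one way to produce such an archive with Python's own zipfile module (the extra file name here is just an example); a common pitfall is zipping a folder, which lands main.py in a subdirectory rather than at the root:

```python
# build_submission.py: pack files at the archive root, not inside a folder.
import zipfile

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("main.py", arcname="main.py")      # must end up at the root
    zf.write("model.bin", arcname="model.bin")  # optional extra file (example)
```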
@geranazavr555 Do we have to submit the training code also?
No, but after the contest the jury may ask you to provide your training code.
Why, after predicting the result of each match, do we get the post-match characteristics as input? I mean, it seems of no use.
It is of use, as you can use that data to improve future predictions.
Well, IRL the statistics are useful to understand whom to blame: judges, players, or coaches.
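As an illustration of reusing the post-match lines (the field names below are made up; the real statistics come from the interactor): keep per-team running aggregates and fold each finished match into them before the next prediction.

```python
from collections import defaultdict

# Per-team running aggregates; the field names are illustrative only.
team_stats = defaultdict(lambda: {"matches": 0, "goals_for": 0, "goals_against": 0})

def update_after_match(home, away, home_goals, away_goals):
    """Fold one finished match into the running aggregates."""
    for team, gf, ga in ((home, home_goals, away_goals),
                         (away, away_goals, home_goals)):
        s = team_stats[team]
        s["matches"] += 1
        s["goals_for"] += gf
        s["goals_against"] += ga

def prematch_features(home, away):
    """Simple pre-match features built only from already-played matches."""
    def avg_goals(s):
        return s["goals_for"] / s["matches"] if s["matches"] else 0.0
    h, a = team_stats[home], team_stats[away]
    return [avg_goals(h), avg_goals(a), avg_goals(h) - avg_goals(a)]
```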
It seems that the contest has been extended by one week.
How do I resolve the issue "Can not find 'main.py'"?
Put your main.py in the archive's root
How do I do that? Is there any resource you can refer me to?
https://codeforces.me/problemset/submission/1/116577356
Is there some problem with reading match stats after SKIP? I get more points when I don't read the match stats after a skip than when I do read them.
Are merch packages for anyone in the top places?
Is the training dataset also given in chronological order?
Yes
"Cash prizes will be contested only among Russian residents (we expect winning participants to provide descriptions of their solutions in Russian)."
Does this hold for merch packages as well?
It seems that the test data in the system has a completely different nature compared with the training data. One reason to think so is the following: if the training data is split somehow into training and validation parts, then the local score on that validation set does not correlate with the score in the test system (e.g. local score $$$\approx 100$$$, test score $$$-150$$$; local score $$$20$$$, test score $$$350$$$, wtf). And this happens regardless of the split method (I tried a lot of them, but, unfortunately, did not find a good one for comfortable local testing and solution estimation, which is really important, for instance, for hyperparameter search). Usually, in many famous ML contests (e.g. on Kaggle), this is solved in the following way: all contestants can choose 2 different solutions, and the final score is the maximum of the chosen solutions' scores. Why is that not possible in this contest?
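One partial remedy, given that the training file is chronological (confirmed above) and the test stream continues it in time, is to validate on the tail of the data instead of a random split. A rough sketch, with the caveat that the actual column handling depends on one's own feature pipeline:

```python
import pandas as pd

# train.csv is chronological, so hold out the most recent matches as a
# pseudo test set instead of shuffling.
df = pd.read_csv("train.csv")
split = int(len(df) * 0.8)
train_df = df.iloc[:split]
valid_df = df.iloc[split:]

# Fit on train_df only, then replay valid_df in order, updating any
# running team statistics exactly the way main.py would do online.
```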
A bad workman quarrels with his tools
GL with the randomly shuffled final leaderboard...
While I agree there might be a random shuffle, you are complaining about footballers playing inconsistently and blaming the dataset for not matching your model.
I am not. That is the usual situation with real data, and there are an enormous number of olympiads with the same situation. I said that, in order to prevent a large difference between the public and private leaderboards and to find the most effective solutions among participants, it might be useful to add the option of scoring the final solution as the maximum of the two best ones. And you are trying to say that this is not important, using inappropriate jokes. If the organizers' main purpose is to find a good solution, then it would be helpful.
Oh sorry, I thought you were complaining.
bruh
raiffeisen Please add a way to participate out of competition. I want to train my ML skills, but I study at school, not at university, so I can't fill in the registration form.
Can we put a .mat file in the zip file which we are submitting?
You can put any files in the zip.
Less than 19 hours are left until the end of the contest. After the contest ends, system testing will be launched using the new dataset. Make sure that your last submission is your final solution; it will be used in the final standings.
Now that the contest is over, I can announce that teams 280-281 are Man City and Man United (though my solution doesn't use this fact).
What happened to the standings? They changed completely. It's more like the admin reversed the leaderboard. I have never seen such a big difference between the public and private LB. Even Kaggle leaderboards don't change quite like this.
This leaderboard permutation is a common thing for such competitions; it always happens when the train and test data have different distributions, meaning the better your model fits the train data, the worse it performs on the test set. So the only way to score public test points is to overfit the leaderboard, which means you will have a huge gap on the private data.
I agree with what you said, but I have taken part in a lot of different ML contests and never seen such a huge difference. My public LB score was 280 and my private one was -30 something, and I don't think it was overfitting, keeping in mind that the top scorer got 800+.
It is possible to get a huge score with an algorithm like this: take some team and make 3 submissions predicting all wins for that team, all ties and all losses, keep the best predictions, do that for another team, and repeat until you get the score you want. But taking into consideration that the top-scoring (800) competitors from China made just 3 submissions, I thought they had some cool solution too.
I guess the train and test sets were completely different, but in that case there is no point in the training data, because the model can't correctly predict a set that is completely different from the training set, hence the large difference.
Yes, but it is still possible to train just on those n <= 8500 samples from the test inside main.py without using the training data (I was doing that), or to use the train data with a small weight and the data you get from the test with a bigger weight. But the main problem here is still that we can't say anything about how good our solution is, because all 3 sets (train, public test, private test) have different distributions; even selecting the solution that you will send is like a casino.
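A rough sketch of the weighting idea using scikit-learn's sample_weight (the estimator and the weight values are placeholders; whether this helps depends entirely on the features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def refit(X_train, y_train, X_seen, y_seen, w_train=0.2, w_seen=1.0):
    """Refit on the old training data (down-weighted) plus matches seen so far."""
    X = np.vstack([X_train, X_seen])
    y = np.concatenate([y_train, y_seen])
    w = np.concatenate([np.full(len(y_train), w_train),
                        np.full(len(y_seen), w_seen)])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=w)
    return model
```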
"Cash prizes will be contested only among Russian residents (we expect winning participants to provide descriptions of their solutions in Russian)."
Does this hold for merch packages as well?
I would like to thank RF for an interesting 2 weeks of competition! This was my first ML competition. During the competition, I had doubts about ML's ability to solve this problem, possibly related to my lack of knowledge.
It would be very useful if someone who is closely involved with ML shared their experience. Did you treat this task as a classification or regression problem, or something else? Which model worked best for you? What features were the most significant? What tricks did you use? What metrics did you get on the test data?
As for me, I tried to solve it as a classification problem and tested various "out of the box" classifiers: xgboost, catboost, a Keras fully connected neural network (very slow for single-sample prediction and would hardly fit into the TL), some stacking of them, etc. The features were series of different team indicators over the last n matches; the series were updated as the matches progressed. The most significant feature turned out to be the Elo rating.
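Since the Elo rating keeps coming up, here is a minimal sketch of how such a rating can be maintained from results alone (the constants and the home-advantage term are assumptions, not the author's exact setup):

```python
from collections import defaultdict

ratings = defaultdict(lambda: 1500.0)  # every team starts from the same rating
K = 20.0                               # update speed, a tuning knob

def expected_home(home, away, home_adv=60.0):
    """Expected score of the home team under the logistic Elo model."""
    diff = ratings[home] + home_adv - ratings[away]
    return 1.0 / (1.0 + 10 ** (-diff / 400.0))

def update_elo(home, away, home_goals, away_goals):
    """Shift both ratings after a finished match."""
    result = 1.0 if home_goals > away_goals else 0.0 if home_goals < away_goals else 0.5
    delta = K * (result - expected_home(home, away))
    ratings[home] += delta
    ratings[away] -= delta
```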
I am stuck on the following: with some train-test splits, for example in the proportion 32000:8000, the models with some tuning gained +100, +200 (and even +600) points. The accuracy on the validation data was 0.5-0.6, which is of course better than random guessing since there are 3 classes. But at the same time, with a different split (for example 28000:5000), the same models gained -100, -200 points with 0.5 accuracy. It is clear that such models are not suitable for successful predictions due to the large fluctuations. I tried to improve this by selecting various combinations of features, but it didn't work. I also tried to predict only home-team wins: validation accuracy increased to 0.7 and the possible income naturally decreased, but the worst thing is that the fluctuations remained at -50/+50.
The one way I found to increase accuracy is to skip low-probability model answers. Thus, out of 8000 matches, it is possible to answer the 300 most predictable ones with an accuracy of about 0.9.
The problem is that the likely outcomes of these matches have very low odds, so low that the losses on the 10% of failed predictions cancel out the income from the 90% that are guessed correctly.
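In code, that skipping rule is roughly the following (the threshold value and the predict_proba interface are assumptions about one's own classifier):

```python
def choose_bet(model, features, threshold=0.75):
    """Bet only when the classifier is confident enough, otherwise SKIP."""
    proba = model.predict_proba([features])[0]
    labels = list(model.classes_)   # e.g. ["AWAY", "DRAW", "HOME"]
    best = int(proba.argmax())
    return labels[best] if proba[best] >= threshold else "SKIP"
```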
I think the models learn features that are already "included" in the bookmaker's odds (maybe these features are so good for separation because they are "right" in some way?). The way forward would be to find features that the bookmakers have not accounted for (overestimated outcomes). I think it is very difficult (impossible) to find such features, or is it not?
Can the scores gained using such good features outperform the lucky fluctuations of a plain xgboost with 0.5 accuracy and -N/+N points?
In any case, this is all newbie reasoning, and it would be very interesting to see the experts' solutions.
geranazavr555
I was in the top 30 of this contest and I'm not Russian.
Can I receive the non-cash prizes? If so, do I have to explain my solution?
raiffeisen, I also have the same doubts, please clarify.