Evaluation

Metric

The competition is evaluated on the balanced accuracy score, which is defined as follows:

Balanced Accuracy = (1 / |C|) * Σ_{c ∈ C} TP_c / (TP_c + FN_c)

Where C is the set of classes, c is the class label, TP_c is the number of true positives for class c, and FN_c is the number of false negatives for class c. In other words, the score is the recall of each class, averaged over all classes.
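
As a quick sanity check, the score can be computed locally. The sketch below assumes scikit-learn is available; it uses balanced_accuracy_score and also reproduces the formula above by hand on a small toy example. The labels are purely illustrative and are not taken from the challenge data.

```python
# Minimal sketch: computing balanced accuracy locally before submitting.
# Assumes scikit-learn >= 0.20; the toy labels are illustrative only.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

y_true = np.array(["phone", "phone", "tv", "tv", "tv", "laptop"])
y_pred = np.array(["phone", "tv",    "tv", "tv", "laptop", "laptop"])

# Library implementation: macro-average of per-class recall.
score = balanced_accuracy_score(y_true, y_pred)

# Equivalent manual computation, following the formula above.
recalls = []
for c in np.unique(y_true):
    mask = y_true == c
    tp = np.sum(y_pred[mask] == c)   # true positives for class c
    fn = np.sum(y_pred[mask] != c)   # false negatives for class c
    recalls.append(tp / (tp + fn))
manual_score = np.mean(recalls)

print(f"balanced accuracy (sklearn): {score:.4f}")
print(f"balanced accuracy (manual):  {manual_score:.4f}")
```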

Leaderboard

While the competition is running, each time you submit a prediction it will be evaluated on a subset of 30% of the test dataset. That is the score shown on the public leaderboard.
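
Because the public score reflects only 30% of the test data, it can be noisy. The sketch below simulates a 30%/70% split on a hypothetical local hold-out set to get a feel for that variance; the labels, predictions, and split are illustrative assumptions, since the organizers' actual split is hidden.

```python
# Minimal sketch: simulating the 30% public / 70% private split locally.
# Everything here (labels, predictions, random split) is an assumption
# for illustration; it is not the organizers' evaluation code.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(42)

y_true = rng.integers(0, 5, size=1000)                 # hypothetical hold-out labels
y_pred = np.where(rng.random(1000) < 0.7,              # hypothetical predictions,
                  y_true, rng.integers(0, 5, 1000))    # roughly 70% correct

public_mask = rng.random(len(y_true)) < 0.30           # 30% "public" subset
public_score = balanced_accuracy_score(y_true[public_mask], y_pred[public_mask])
private_score = balanced_accuracy_score(y_true[~public_mask], y_pred[~public_mask])

print(f"public  (30%): {public_score:.4f}")
print(f"private (70%): {private_score:.4f}")
```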

Final evaluation

Once the competition ends, the final score of each participant will be computed from their last submission, scored against the remaining 70% of observations in the test dataset.

To ensure that the winners' solutions meet the challenge's rules, the highest-scoring participants will be contacted to provide the source code used to generate their predictions. The code must contain everything needed to preprocess the data, train the model, and generate the submission, and must be made available to the community under the MIT license.
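
A self-contained submission script of the kind described above might be structured as in the sketch below. The file names, column names, and model choice are assumptions for illustration only; adapt them to the actual challenge data format.

```python
# Minimal sketch of an end-to-end submission script: preprocess, train,
# predict, write the submission. File and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def main():
    train = pd.read_csv("train.csv")   # hypothetical file name
    test = pd.read_csv("test.csv")     # hypothetical file name

    # Preprocess + train: a simple TF-IDF + linear model pipeline.
    model = make_pipeline(TfidfVectorizer(min_df=2),
                          LogisticRegression(max_iter=1000))
    model.fit(train["title"], train["category"])   # hypothetical column names

    # Generate the submission file from the test set.
    submission = pd.DataFrame({"id": test["id"],
                               "category": model.predict(test["title"])})
    submission.to_csv("submission.csv", index=False)

if __name__ == "__main__":
    main()
```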

Rules

One account per participant.
Registering multiple accounts and submitting from different accounts is not allowed.

One account, one person.
Participation is individual; teams are not allowed.

Three daily submissions.
You may make at most three submissions per day.

No manual labeling.
Your submission must be generated automatically by a computer program. No manual labeling may be encoded in the software.

No external data.
You can only use the data we provide. Using MercadoLibre’s APIs or any other data sources to increase the feature set is not allowed for this competition.

Pre-trained models are allowed.
You can use any pre-trained models, as long as they are publicly available before the competition starts.

Countries eligible for prizes.
Only participants from Argentina, Brazil, Colombia, Chile, Mexico and Uruguay are eligible to win prizes.

No MercadoLibre employees.
MercadoLibre employees cannot participate in the challenge.