Your task is to predict how long it will take for the inventory of a certain item to be sold completely. In inventory management theory this concept is known as inventory days.
In the evaluation set you will be given each item's target stock, and you will have to predict the number of days it will take to run out. Possible values range from 1 to 30. Rather than giving a point estimate, you are expected to provide a score for each of the possible outcomes.
To put it simply, you need to answer the following question:
'What are the odds that the target stock will be sold out in one day?', 'What about in two days?' and so on until day 30.
Submissions will be scored using the Ranked Probability Score (RPS) metric.
Ranked Probability Score is a squared-error measure that compares the estimated cumulative distribution function of a probabilistic forecast with the actual cumulative distribution function of the corresponding observation. For a discrete set of possible outcomes, the RPS is computed as:

RPS = (1/N) * sum_{n=1..N} [ (1/(K-1)) * sum_{k=1..K} ( sum_{i=1..k} (Y_{n,i} - O_{n,i}) )^2 ]

where:
- N is the number of rows in the dataset,
- O represents the target (O_{n,i} = 1 if the outcome of row n is i, and 0 otherwise),
- K is the number of classes, and
- Y is the predicted probability for each class.
The RPS metric can be more easily understood graphically. The figure below illustrates how the value RPS takes comes from the squared differences between the predicted cumulative distribution function and the ground truth, i.e. the area where the two curves do not overlap.
The main advantage of RPS, in comparison with other common metrics for forecasting problems, is that it is sensitive to distance, which means that a forecast is increasingly penalized the further its predictions fall from the actual outcome. This property is especially attractive for the task at hand from a business perspective: in order to manage inventory efficiently at MercadoLibre’s fulfillment centers, we would like the forecast to be as close as possible to the actual value of inventory days, rather than merely assigning the highest point probability to the correct number of inventory days.
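The definitions above can be sketched in code. The following is a minimal illustration of how a scorer might compute RPS (it is not the official scorer), assuming the target is one-hot encoded and outcomes are numbered 1 through K:

```python
import numpy as np

def rps(y_pred, y_true):
    """Ranked Probability Score for discrete outcomes.

    y_pred: (N, K) array of predicted probabilities, one row per forecast.
    y_true: (N,) array of observed outcomes, 1-indexed (1..K).
    """
    n, k = y_pred.shape
    # One-hot encode the observed outcomes (O in the formula).
    onehot = np.zeros((n, k))
    onehot[np.arange(n), y_true - 1] = 1.0
    # Accumulate both distributions to obtain the CDFs.
    cdf_pred = np.cumsum(y_pred, axis=1)
    cdf_true = np.cumsum(onehot, axis=1)
    # Mean over rows of the sum of squared CDF differences, scaled by K-1.
    return np.mean(np.sum((cdf_pred - cdf_true) ** 2, axis=1) / (k - 1))
```

Because the CDFs are compared position by position, putting all the mass on day 5 when the true answer is day 1 scores worse than putting it on day 2, which is exactly the distance sensitivity described above. A perfect forecast scores 0.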
Disclaimer: you do NOT have to submit CUMULATIVE PROBABILITIES. You are expected to provide point-probability estimates and the scorer itself will compute the accumulation.
Results should be submitted as a gzip-compressed file. The compressed file should be a CSV with 30 values per row, separated by commas, with no header. Each value in a row should represent the estimated probability of the corresponding outcome from 1 to 30.
Each row in the submission file is expected to provide a forecast for the corresponding row number in the evaluation set. (Yes, we will assume that your predictions are sorted in the same way as the original test dataset).
Keep in mind that the way you sort the probabilities within each row matters: the first value is expected to be the probability of running out of inventory in one day, the second value is the probability of running out on the second day and so on.
All values MUST lie in the interval [0,1], as they represent probabilities. Any submission violating this criterion will be considered invalid.
Moreover, all the values in one row should add up to one, as the possible outcomes 1-30 are mutually exclusive and exhaustive events for the target variable. If you submit a file where some rows violate this constraint, the values within each row will be rescaled to enforce this property.
Last but not least, all numbers in the submission file must have a maximum precision of 4 decimal places.
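The formatting rules above can be applied in one short helper. This is a sketch, not an official tool: the function name and output path are our own, and note that rounding to 4 decimals can leave a row summing to slightly less or more than one, which the scorer's own rescaling (described above) then corrects:

```python
import gzip
import numpy as np

def write_submission(probs, path="submission.csv.gz"):
    """Normalize, round, and write predictions as a gzipped, headerless CSV.

    probs: (N, 30) array of non-negative scores, one row per test-set row,
           ordered exactly like the evaluation set.
    """
    probs = np.asarray(probs, dtype=float)
    # Scale each row so its 30 values sum to one.
    probs = probs / probs.sum(axis=1, keepdims=True)
    # At most 4 decimal places per value.
    probs = np.round(probs, 4)
    with gzip.open(path, "wt", newline="") as f:
        for row in probs:
            # First value = P(sold out in 1 day), ..., last = P(30 days).
            f.write(",".join(f"{p:.4f}" for p in row) + "\n")
```

Keeping the rows in the same order as the evaluation set is the submitter's responsibility; the file itself carries no row identifiers.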
Together with the Challenge Data, we are providing a sample submission file to illustrate the expected format and make sure that you don’t miss a detail!
Each time you submit a prediction, it will be evaluated on a 30% subset of the test dataset (the public test set). That is the score that will be shown on the public leaderboard.
Once the competition ends, we will compute a final score for each participant using their last submission on the remaining 70% of observations in the test dataset (the private test set).
No MercadoLibre employees.
MercadoLibre's employees cannot participate in the challenge.
One account per participant.
It is not allowed to register multiple accounts and submit from different accounts.
One account, one person.
The participation is individual. Teams are not allowed.
Three daily submissions.
You can make up to three submissions per day.
No manual labeling.
Your submission must be automatically generated by a computer program. Manually produced labels may not be hardcoded into your program.
No external data.
You can only use the data we provide. Using MercadoLibre’s APIs or any other data sources to increase the feature set is not allowed for this competition.
Pre-trained models are allowed.
You can use any pretrained models, as long as they are publicly available before the competition starts.
Countries eligible for prizes.
Only participants from Argentina, Brazil, Colombia, Chile, Mexico and Uruguay are eligible for winning prizes.