tl;dr: We are experimenting with new cloud-based checkers for the April Cook-Off to cope with the heavy traffic. There should not be much noticeable change in the execution of your submissions.
Over the years we have been steadily increasing the number of checkers (the machines that judge all submissions made on CodeChef) to keep pace with the growing number of users participating in our contests. But in the last few weeks, the number of participants has increased so much that it is no longer feasible for us to scale up the infrastructure we had been using until now. Hence, for this April Cook-Off, we are going to experiment with a new type of checker:
Until now, we had been using dedicated machines as checkers, but we are now planning to shift to cloud-based checkers. This allows us to scale up as and when needed, and also reduces our costs. But it does come with its own set of disadvantages: our control over the machines is restricted, and the machines change over time. We have been testing and configuring them to maintain as much stability as possible, and we are satisfied with the results we have obtained so far. We are still running more tests, and we will also be analyzing the Cook-Off submissions to see how well the checkers perform.
The execution times on the new cloud-based checkers differ from those on our old checkers by around 10%. The exact difference depends on the problem and the code being evaluated. This means that if a code ran on the old checkers in 1 second, on the new checkers we expect it to run in between 0.9 and 1.1 seconds. This should not be a major concern, because it affects only past contest problems, and the time limits usually have enough leeway to accommodate these changes.
The other factor is fluctuation in execution times. No two machines are exactly the same, and even on the same machine, the environment during two different runs is not exactly the same. We try our best to configure the machines to provide as uniform an environment as possible, but even so, multiple runs of the same code can take slightly different times. These differences are small, in the range of tenths of a second. This has not been an issue for us with the old checkers, except in some very rare cases. Our tests show that even on the new checkers the execution times are quite consistent, but the range could be slightly larger than with the old checkers. We expect execution times to vary by at most ±13%. That is, if a code runs in 1 second, then across many runs its execution time might sometimes go as high as 1.13 seconds, or correspondingly lower.
For you to get familiar with the new checkers, we have created a new practice contest, with a few problems. Submissions made on these problems will be evaluated by the same checkers which will be used during the Cook-Off.
Our usual Practice section problems will still run on the old checkers for now.
Note: Sometimes code which seems deterministic actually turns out to be non-deterministic due to various internal processes. So if you do see an instance where the execution times of the exact same code vary a lot across multiple runs (say, by more than 1 second), please test that code locally on your system using some random large data, and check whether its timing is consistent there. More often than not, large fluctuations in execution times are the result of some inherent non-determinism which we have no control over, and there is not much that can be done about them.
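If you want to run such a local consistency check, a minimal sketch in Python could look like the following. The `workload` function here is a hypothetical stand-in for your own solution (we use a sort of a large random list purely as an example), and the run count and input size are arbitrary choices:

```python
import random
import time

def workload(data):
    # Stand-in for your solution: sort a large random list.
    # Replace this with a call into your own code.
    return sorted(data)

def measure_runs(n_runs=5, size=200_000, seed=42):
    """Time the same workload on the same random input several times."""
    random.seed(seed)
    data = [random.randint(0, 10**9) for _ in range(size)]
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        workload(data)
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    times = measure_runs()
    lo, hi = min(times), max(times)
    # Spread between the fastest and slowest run, as a percentage.
    print(f"min {lo:.3f}s  max {hi:.3f}s  spread {(hi - lo) / lo * 100:.1f}%")
```

If the spread you observe locally is already large on a quiet machine, the fluctuation is likely inherent to the code rather than to our checkers.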