Important Rule Change for July Contest

Chocolate Shakes,
We’ve had some complaints in the last contest about participants reverse-engineering test data. While this isn’t necessarily “wrong,” in keeping with the spirit of finding the best solutions to the problems given, we’ve instituted a small rule change: this time around, the challenge problem will be rejudged at the end of the contest with entirely new test data.

We considered other approaches for all problems, including:
* Rejudging all solutions at the end of the contest with new test data (not just the challenge problem)
* Capping the number of submissions per user per day
* Allowing users to submit solutions only at certain time intervals (e.g. one solution per five minutes)

In the end, we have done our best to come up with better, bigger, and more random test cases, and are holding off on these other changes (for now). If you believe there are additional ways to improve the contest format, please let us know.

Cheers,
Amit (The Chef)

P.S. If you wanna see how pretty I am, check out our new Twitter background.

P.P.S. Don’t forget, our first DesignChef competition starts tomorrow. Help us out… spread the word.

Replies to “Important Rule Change for July Contest”

  1. Hi Chef,

    I know you really want to make it fair, but if you are rejudging solutions, what about those that fail the new test cases despite passing the old ones? Participants won’t get a chance to resubmit their code if it turns out to be wrong, whereas during the contest one can keep submitting as long as their solution is wrong.

    Why the daily cap? Most of the programmers here are college students and will probably not be coding every day of the week, so those who feel like coding on only one or two days are going to suffer badly… anyway, I have no objection to this.

    And the cap of one solution per five minutes… if you are already implementing a cap per day, why the need for another cap? A person will have to fruitlessly wait for the 300 seconds to pass before they can actually submit again. If there is a compilation error or some stupid mistake in the code and one wants to resubmit, I am sure he/she will be damn pissed…

    I honestly feel that, instead of days, the contest could be reduced to hours, just like TopCoder; that would be much better. Otherwise I have a feeling that some of the solutions are not entirely the coder’s own work… but I would rather call it collaboration… and it’s gotta be fair, right Chef?

    1. Hi Mohit,
      Please note that only the Challenge problem will be rejudged. The other suggestions will not be implemented. The goal is to stay within the spirit of the competition (finding the best solutions to the problems), not to reverse-engineer test data.
      Thanks,
      Chef

  2. In algorithmic contests, the time limits should be such that most programs with the right complexity pass. Solutions that use standard data structures like set and map from the C++ Standard Template Library should pass, because otherwise you are asking programmers to reinvent the wheel. Similarly, using cin/cout instead of scanf/printf should not lead to TLE. You can always include at least one huge test case, so the time limit can tolerate larger multiplicative and additive constants while still ruling out solutions with the wrong complexity. Otherwise the focus shifts away from algorithms and onto optimizations that discourage the use of the standard libraries. Time limits that tolerate standard library features would let many elegant solutions pass that currently get rejected.
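
    For example, a minimal fast-I/O setup along these lines (just a sketch, not tied to any particular problem) usually keeps cin/cout roughly on par with scanf/printf:

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            // Untie the C++ streams from C stdio and from each other; after this,
            // cin/cout are usually fast enough that I/O is not the bottleneck.
            ios::sync_with_stdio(false);
            cin.tie(nullptr);

            // e.g. read n numbers and print their sum
            long long n, x, sum = 0;
            cin >> n;
            while (n-- > 0 && (cin >> x)) sum += x;
            cout << sum << '\n';
            return 0;
        }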

  3. I second the point raised by Saurabh above. In real-world software development, performance is a very important driving factor, but not the only one. The design and elegance of a solution are equally important, and code reuse by means of standard or previously developed libraries is not only encouraged but essential to keep things sane. You don’t go and code up a new hash-table implementation unless it has been nailed down as ‘the’ cause of a performance bottleneck by extensive, targeted testing (which is rare). Most of the time, it is the design / architecture of the system (or algorithm) that is the culprit, not some corner-case idiosyncrasies.

  4. I believe the rejudge is a brilliant idea, and I think the other measures mentioned are not as good. The rejudge should only be applied to the tie-breaker problem, and the other limits don’t make much sense to me; the waiting time could make for an unsatisfactory experience on the website :).

    I haven’t checked out this month’s problems yet. I hope they have good test data and a time limit that stresses the algorithm rather than low-level optimizations, unlike problem D4 of last month.

  5. I think some kind of limit on the number of submissions (per day, per hour, or in total) would be preferable to rejudging on new test data.

    Some reasons:
    – the scoreboard would be “real”
    – accepted solutions may break during rejudging, especially if only the last one is taken into account; for example, one might use too much memory (the limits are not clear)

    The “input data mining” problem wouldn’t be so bad if the number of submissions were capped. In a problem like the one in the July ’09 contest, it’s hardly a problem at all, because the data is so large.

    It might actually be good to allow some research on input data statistics – otherwise the contest may hinge on *guessing* what data the judges are going to provide, in order to make a solution well adapted to that kind of data.

    Without rejudging, contestants will do some input stats, and that’s quite fair (e.g. testing the distribution of input sizes).
