Following are some Notes pertaining to the Contest. These Notes deal with issues that arose while the Contest was running.
22 November 2015
The generation of the 1000 series relies on the generation of random numbers. That presents a difficulty, because current computers do not generate truly random numbers. There is a widely-used method of addressing the difficulty: use a computer routine that generates numbers that seem to be random (i.e. fake it). Numbers generated by that method are called “pseudorandom”.
A computer routine that generates pseudorandom numbers is called a “pseudorandom number generator” (PRNG). PRNGs have been studied by computer scientists for decades. All PRNGs have weaknesses, but some have more serious weaknesses than others.
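To illustrate what a PRNG weakness can look like, the sketch below implements a linear congruential generator (LCG), a classic textbook PRNG. This is purely illustrative (it is not the generator used for the Contest, which is not specified here): given a single output of an LCG with known parameters, the entire remaining sequence can be predicted.

```python
# Illustrative sketch: a linear congruential generator (LCG), a classic
# PRNG with well-studied weaknesses. Parameters here are the commonly
# cited Numerical Recipes constants; this is NOT the Contest's generator.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield pseudorandom integers in [0, m) via x -> (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
sample = [next(gen) for _ in range(3)]

# The weakness: anyone who observes one output (and knows the parameters)
# can reproduce every subsequent output exactly.
x = sample[0]
predicted = []
for _ in range(2):
    x = (1664525 * x + 1013904223) % 2**32
    predicted.append(x)
assert predicted == sample[1:]
```

A cryptographically strong PRNG is designed precisely so that this kind of prediction from observed outputs is computationally infeasible.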
The Contest was announced on 18 November 2015. Shortly afterward, a few people pointed out to me that the PRNG I had used might not be good enough. In particular, it might be possible for researchers to win the Contest by exploiting weaknesses in the PRNG. I have been persuaded that the risk might be greater than I had previously realized.
The purpose of the Contest is to test researchers' claimed capability to statistically analyze climatic data. If someone were to win the Contest by exploiting a PRNG weakness, that would not conform with the purpose of the Contest. Ergo, I regenerated the 1000 series using a stronger PRNG, together with some related changes. Note that this implies that the files Answers1000.txt and Series1000.txt were both revised.
The 1000 regenerated series were posted online four days after the Contest was announced—on 22 November 2015. (Each person who submitted an entry before then was invited to submit a new entry, with no fee.) When the Contest closes, the computer program for the original 1000 series and the encryption key for the original Answers1000 file will be posted here—together with the program and encryption key for the regenerated series.
A paper by Shaun Lovejoy and co-authors criticizes the Contest. The paper is based on the assertion that the Contest “used a stochastic model with some realism”; the paper then argues that the Contest model has inadequate realism. The paper provides no evidence that I have claimed that the Contest model has adequate realism; indeed, I do not make such a claim. Moreover, my critique of the IPCC statistical analyses (discussed above) argues that no one can choose a model with adequate realism. Thus, the basis for the paper is invalid. I pointed that out to the lead author of the paper, Shaun Lovejoy, but Lovejoy published the paper anyway.
When doing statistical analysis, the first step is to choose a model of the process that generated the data. The IPCC did indeed choose a model. I have only claimed that the model used in the Contest is more realistic than the model chosen by the IPCC. Thus, if the Contest model is unrealistic (as it is), then the IPCC model is even more unrealistic. Hence, the IPCC model should not be used. Ergo, the statistical analyses in the IPCC Assessment Report are untenable, as the critique argues.
For an illustration, consider the following. Lovejoy et al. assert that the Contest model implies a typical temperature change of 4 °C every 6400 years—which is too large to be realistic. Yet the IPCC model implies a temperature change of about 41 °C every 6400 years. (To confirm this, see Section 8 of the critique and note that 0.85 × 6400/133 ≈ 41.) Thus, the IPCC model is far more unrealistic than the Contest model, according to the test advocated by Lovejoy et al. Hence, if the test advocated by Lovejoy et al. were adopted, then the IPCC statistical analyses would be untenable.
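The scaling in the parenthetical can be checked directly: the critique cites roughly 0.85 °C of warming over 133 years, and extrapolating that rate over 6400 years gives about 41 °C.

```python
# Check the arithmetic in the text: 0.85 °C of warming over 133 years,
# extrapolated linearly over 6400 years.
warming_celsius = 0.85   # °C over the observed period (per the critique)
period_years = 133
horizon_years = 6400

change = warming_celsius * horizon_years / period_years
print(round(change))  # 41
```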
I expect to have more to say about this in the future.
01 December 2016
Regarding the 1000 series that were generated with the weak PRNG (prior to 22 November 2015): the ANSWER, the PROGRAM (a Maple worksheet), and the function used to produce the file Answers1000.txt (with the random seed being the seventh perfect number minus one) are now available.
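For readers curious about the stated seed, the sketch below computes it. By the Euclid–Euler theorem, every even perfect number has the form 2^(p−1)·(2^p − 1) where 2^p − 1 is a Mersenne prime; the seventh such number arises at p = 19. (This only reproduces the seed value; it is not the Contest's generation code.)

```python
# Compute the seventh perfect number minus one, the seed stated in the text.
# Even perfect numbers have the form 2**(p-1) * (2**p - 1), where
# 2**p - 1 is a Mersenne prime (Euclid-Euler theorem).
def is_prime(n):
    """Trial-division primality test; adequate for the small values needed."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def nth_perfect_number(n):
    """Return the n-th even perfect number by scanning Mersenne primes."""
    count, p = 0, 1
    while True:
        p += 1
        if is_prime(p) and is_prime(2**p - 1):
            count += 1
            if count == n:
                return 2**(p - 1) * (2**p - 1)

seed = nth_perfect_number(7) - 1
print(seed)  # 137438691327
```

The seventh perfect number is 2^18 × (2^19 − 1) = 137438691328, so the seed is 137438691327.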