Answer by Stu Kennedy


In 1986, in his book The Blind Watchmaker, atheist Evolutionist and Oxford biologist Richard Dawkins referred to his WEASEL computer program, which was designed to demonstrate that the mechanism of Darwinian evolution is feasible. The WEASEL program he used has been widely copied in school biology programs as confirming the Theory of Evolution.

Dawkins was aiming to provide a response to creationists, who have asserted that a purely random approach to generating biological information is theoretically impossible, due to the excessively low probability of randomly generating even a short meaningful sequence. Dawkins contends that real evolutionary processes would not work in the way creationists claim, but would behave in an adaptive way, so each change or ‘evolution’ is able to improve on the previous generation, thus allowing the mechanism to produce (or converge on) a meaningful solution.

Several people have tried to debunk the Dawkins Weasel program, but all have incorrectly assumed he wrote something similar to a “one-armed bandit” program, where randomly generated new letters (mutations) get locked in (naturally selected) whenever a new sequence matches a meaningful result (equivalent to new DNA). What Dawkins has done is slightly more sneaky than that. He has smuggled a hidden locking mechanism into a probability device, which achieves roughly the same effect but makes it look as though he is allowing the proper randomness of evolution: life is first generated from non-life without any pre-existing intelligence (Creator) being involved in originating the complex information found in life molecules (DNA), and then, once DNA exists, all survivable options for change are possible and are generated by similarly random or naturalistic means. The result is that Dawkins’ intelligently designed and cleverly hidden mechanism invalidates any claim to duplicate naturalistic evolution; what he has really done is duplicate a low-IQ form of creation.

My method to demonstrate this:
1. Show you what Dawkins has done (A-C)
2. Analyse Dawkins’ method (D)
3. Expose what Dawkins has hidden (E)
4. Give you a helpful conclusion

A. Getting from non-meaning to meaningful code.

Dawkins set out to show that an essentially meaningless arrangement or random string of a certain number of uppercase letters and spaces can be transformed stepwise into a meaningful arrangement or string. This problem, and Dawkins’ solution, is a variation on the famous claim that a monkey typing randomly and unintelligently on a typewriter will, given enough time, recreate the complete works of Shakespeare.
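To put a number on why a purely random approach is so implausible, we can count the possible strings of 28 characters drawn from a 27-character alphabet (a quick sketch; the parameter names n and k match those defined later in this article):

```python
# Number of possible 28-character strings over a 27-character alphabet
# (26 uppercase letters plus a space): n ** k possibilities.
n, k = 27, 28
combinations = n ** k
print(f"{combinations:.2e}")  # about 1.20e+40 possible strings
```

Only one of those roughly 10^40 strings is the target phrase, which is why no one expects a single blind draw to hit it.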

B. What steps did Dawkins’ weasel program use?

You can probably remember learning the steps required to perform long division at school. Such steps, when applied to computing, are called an algorithm, which is defined as a set of logical rules for solving a problem in a finite number of steps. Computers are simply machines that run algorithms. Dawkins’ algorithm for transforming his random string proceeds as follows:

1. Choose any meaningless string consisting of a random selection of only uppercase letters and spaces, which has length k=28 (the total number of characters used to make the string). There are n=27 possible characters (uppercase letters or a space) that we can select from.

This will be our initial string e.g. “WDLTMNLT DTJBKWIRZREZLMQCO P”

2. Choose a target string which is a meaningful phrase from our set of (n=27) characters so that the end result will have a length of 28 (our chosen value of k=28). For this example Dawkins chose the famous Shakespeare quote “METHINKS IT IS LIKE A WEASEL”. (Did you count the letters and spaces to check?)

3. Generate (p) copies of the random string generated in step 1. (Let’s say p=100 copies)

4. Cycle through each of the (k=28) letters in each of the (p=100) strings and allow the probability of mutation or change for each letter to be 1 in 25 i.e. 4% (so m=0.04). This choice is arbitrary. Our value of m was chosen to show simply how the method works, and is much higher than any natural mutation rates. Each letter that is to be mutated is replaced with a random choice from the set of (n=27) available characters.

5. Score (s) the 100 newly mutated strings by counting the characters that match the target string chosen in step 2, e.g. “OERYUSKS ….” matches “METHINKS ….” in 3 places.

6. Select the mutated string which has the highest score, e.g. “MELDINLS IT ISWPRKE Z WECSEL” (you can see some of the words are starting to become like the target phrase).

7. Then go back to step 3, and repeat until the score for one of the mutated strings equals k (the length of the chosen string).
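The seven steps above can be sketched in Python as follows. This is not Dawkins’ original code (which this article does not reproduce); it is a minimal implementation of the steps as described here, using the article’s parameter values (k = 28, n = 27, p = 100, m = 0.04):

```python
import random

CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "    # n = 27 possible characters
TARGET = "METHINKS IT IS LIKE A WEASEL"  # k = 28 characters (step 2)
K = len(TARGET)
P = 100       # population of copies per generation (step 3)
M = 0.04      # per-letter mutation probability (step 4)

def score(candidate):
    """Step 5: count positions that match the target string."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    """Step 4: each letter mutates with probability M."""
    return "".join(random.choice(CHARS) if random.random() < M else c
                   for c in parent)

def weasel(seed=None):
    random.seed(seed)
    # Step 1: a random initial string of length K
    best = "".join(random.choice(CHARS) for _ in range(K))
    generation = 0
    while score(best) < K:                            # step 7: repeat until s = k
        generation += 1
        offspring = [mutate(best) for _ in range(P)]  # step 3: p copies
        best = max(offspring, key=score)              # step 6: keep the best
    return generation

print(weasel())  # typically converges in a few dozen generations
```

Note that step 6 keeps the best offspring even if it scores worse than its parent, so the best string can regress from one generation to the next, a point the analysis below returns to.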

C. What results from this?

If we start the program with a random string of letters, which we label Generation 01, and allow the program to run for 43 generations, the string moves closer to the target phrase at each generation until it matches exactly.

In fact we note that whenever we use n = 27, k = 28, p = 100 and m = 0.04, the meaningless random string tends to head toward or converge to a meaningful solution in roughly 50 steps or iterations of the algorithm.

D. Analysis

The algorithm’s success in changing the string from meaningless to meaningful comes from the fact that, for any reasonably sized population (p large, giving us many copies to mutate at each stage), combined with a small enough mutation rate (m), there is likely to be at least one improved mutation in the population, which will increase the score (s) over that of the previous generation. Setting these two variables (m and p) correctly ensures that we tend to get an improved mutation.

The limiting factors, from a mathematical perspective, are these: as the number of copies approaches one (p approaches 1), there can only ever be one available mutation preserved through to the next generation. No selection is then possible, and the process becomes totally random, i.e. it becomes the same as the original monkey-and-typewriter problem. Such a process is unlikely ever to converge; it will simply generate a meaningless string at every iteration and never become our chosen meaningful string.

Likewise, if we vary the mutation probability so that m approaches 1, all letters will be allowed to mutate at every stage, so even if several letters match in a mutated offspring, they will all be replaced at the next iteration. Again, this exhibits fully random behaviour and will never converge.

So, under what circumstances can the probability of a mutated offspring be improved, so that the score (s) increases?

The probability mathematician’s answer is that such an increase can be calculated by multiplying the probability of selecting a currently wrong letter from the string by the probability of correcting that letter.

P(i) = i(k − s)/(nk)
(where i ≤ k is the number of letters that mutate)

So in a population of p strings, the expected number of improved mutations is E(i) = p · i(k − s)/(nk).

If we take the simple situation where i = 1, the expected number of improved mutations becomes E = p(k − s)/(nk). For the aforementioned values of n, p and k at step 0, where we assume a score of 0, we would expect an average of 100 × 28/(27 × 28) ≈ 3.7 improved options in our population. To ensure that i = 1 on average, we must set m = 1/28 ≈ 0.0357 (since the expected value of i is km).

We see this average when running the program. Below is the number of improvements we get at each stage when running 15 iterations of the algorithm:

4 7 2 5 2 3 6 2 7 3 2 5 3 3 2

Taking the average, we get the mean value of improved options from this run of the program to be 3.73. This average decreases as we match more letters, until s = 27 and we are one step from complete. In that case we would expect to generate 0.132 better options per generation, i.e. we have a 13% chance of matching the whole phrase once we have matched all but one letter.
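The two figures quoted above (3.7 at the start, 0.132 with one letter left) follow directly from the expectation formula with i = 1, as this small check shows:

```python
# Expected number of improved offspring per generation, with one
# mutation per string (i = 1): E = p * (k - s) / (n * k)
n, k, p = 27, 28, 100

def expected_improvements(s):
    """Expected improved offspring when s letters already match."""
    return p * (k - s) / (n * k)

print(round(expected_improvements(0), 2))   # 3.7   (start: s = 0)
print(round(expected_improvements(27), 3))  # 0.132 (one letter left)
```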

E. Model Limitations
Despite the popularity of such computer modelling, this model bears no resemblance to the evolutionary processes which Dawkins claims produce increased complexity in DNA, and here is why.

E.1 The One Survivor Problem

In real life, entire populations don’t die out leaving the one best survivor as the starting point for improvements in the next generation. This is one of the largest flaws in Dawkins’ approach, as success depends on the new improvements being selected in the offspring of the fittest member of the last generation. Ignoring what mechanism could possibly identify and select for ‘fitness’, a real-world application would see the whole population creating an average of 2.5 offspring, and all of those creating offspring in turn. There is no observed or known basis for such selection to occur. Interestingly, this model doesn’t allow for the possibility of extinction.

E.2 Locking Mechanism?

Although there is no direct locking mechanism in Dawkins’ algorithm (as some creationists have accused him of), the scoring (s) mechanism he has built in, which rates a candidate string of letters on a piece-wise basis, acts as a ‘disguised or hidden’ locking mechanism whenever the population is of a reasonable size and the mutation rate is low. The reason is that it becomes unlikely for the score to improve if a previously matched letter is destroyed when only a few mutations are made. In other words, this program is intelligently designed to appear random when it isn’t. Note that Dawkins’ program doesn’t prevent the best string from regressing (being worse than the previous generation’s best), but it does mean that, on average, when running the program for a number of steps, you are guaranteed to get improvements and eventually the correct string. This is the same as throwing a pair of dice until you get two sixes: it may take a number of goes, but on average you should expect a pair of sixes within about 36 throws, and this is the key to seeing how Dawkins’ algorithm cheats.
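The dice analogy can be checked with a small simulation. The chance of a double six on any one throw is 1/36, so the expected number of throws until the first double six is its reciprocal, 36; a quick Monte Carlo sketch (the function name is mine, not from the original article) confirms this:

```python
import random

def throws_until_double_six(rng):
    """Throw a pair of dice until both show six; return the throw count."""
    throws = 0
    while True:
        throws += 1
        if rng.randint(1, 6) == 6 and rng.randint(1, 6) == 6:
            return throws

rng = random.Random(0)
trials = 100_000
mean = sum(throws_until_double_six(rng) for _ in range(trials)) / trials
print(mean)  # close to 36, the reciprocal of the 1/36 chance per throw
```

Any single run may take far more or far fewer than 36 throws, but the average over many runs settles near 36, just as the Weasel program’s per-generation improvement chance guarantees progress on average.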

E.3 Non-Functional Intermediate Stages

The assumption that each intermediate improved step can be held and used as the basis for the next is flawed. Even after life has somehow evolved and DNA exists, evolution is supposed to start with a functioning (non-random) sequence of DNA, and each persisted mutation is supposed to be not only functional but advantageous. The Weasel model provides neither, as we start with a meaningless string and only achieve a meaningful string at the last step. A more realistic model would start with a sentence, require that each generation be a meaningful alternative sentence, and converge on the target sentence. This is a common misunderstanding of genetics that Dawkins has adopted, where ‘advantage’ is an arbitrary concept.

E.4 Targeted Iterations

Although the process of variability here is random, the application of it is not. Knowledge of the terminal conditions (what you are aiming for) is not something that a genuinely randomly evolving genome would have. There would be no template to compare any results with, so no way of scoring the results would even exist. The argument could be made that nature itself does the selecting here, by eliminating the mutations that are less beneficial; but as we see here, all intermediary steps are equally benign, and there are no ‘fitter options’ when all intermediate steps are non-functional or meaningless.

E.5 Realistic Parameters

Current research suggests that around 50-100 de novo mutations occur from parent to child, in any generation, but the vast majority of those are understood to be in areas that will produce neutral variations or disease and deformity, none of which are improvements. There is no research that suggests any beneficial mutations occur in humans.

E.6 Improvements Are Arbitrary

The concept of natural selection, both in this ‘algorithm’ context and in the context of real DNA, turns out to be impossible to define. How can an intermediary state (even if it is functional) be measured in terms of fitness? In fact, evolutionists claim that natural selection is blind and not intelligent; therefore natural selection cannot calculate the fitness of an organism (e.g. by measuring values in the DNA). It is simply a rule that says that things that are not suitable don’t survive. Trying to justify how one human not being killed in a famine, a plague or an attack from a neighbouring tribe could possibly demonstrate that natural selection has preserved the fittest candidate is absurd. I think most would agree that the survivor would be considered lucky, and not special because of a genetic mutation that gave them extra powers of survival or reproduction.

Conclusion: The problem of meaning

In addition to exposing Dawkins’ use of the score factor to drive the results in a non-random way, it also needs to be pointed out that Dawkins’ use of alphabet CAPITALS is not the equivalent of a random string devoid of meaning. His analogy appeals to humans trained to use the Latin alphabet, who speak English and who can access an Oxford dictionary to decide that WEAKZL is approaching the pre-existing meaningful combination of letters WEASEL.

If he were to really simulate the random evolution of information-rich life molecules such as DNA from meaningless matter, he would have to start with shapeless random squiggles that had no meaning at all to anyone or anything, and then first ‘evolve’ the letters, such as the Ms and the Zs. But Dawkins and almost all evolutionists in this debate want to skip this step and hope you don’t notice that they have inserted a pre-existing intelligence (a god with a lower-case g) into the situation; even though they are trying to duplicate randomness, they have actually demonstrated a low-IQ form of creation.

Further Investigation
For those who want to investigate further, check out the program which Stu developed to implement the Weasel algorithm on your own computer:

For a live demonstration of the program click here.

