
Michael R. Egnor #fundie #dunning-kruger evolutionnews.org

It is amusing that, despite the pretensions of atheist “skeptics” such as Novella, atheists are much more likely to believe pseudoscientific claims about UFOs, Bigfoot, the Loch Ness Monster, psychics, Atlantis, and astrology than are traditional religious believers. Four times as likely, to be precise (31% vs. 8%). Yet this should come as no surprise. Nearly all atheists believe that the genetic code and the intricate nanotechnology in living cells arose entirely by random mutations and natural selection. Compared to the belief that life arose by chance and tautology, Bigfoot and astrology seem downright plausible.

Jonathan Witt (the article author) #fundie #dunning-kruger evolutionnews.org

Two Recent Papers Buttress Michael Behe’s Thesis in Darwin Devolves

On a new episode of ID the Future, Darwin Devolves author and biologist Michael Behe discusses two recent technical papers that the news media billed as dramatic evidence for evolution. As Behe explains in his conversation with host Eric Anderson, a careful look at the papers themselves shows that both cases involve devolution. That is, the biological forms in question did not evolve novel structures and information; instead they threw away things to achieve a niche advantage.

In the first study, in the journal Nature Microbiology, the researchers found that in Africa, where “most rapid diagnostic tests (RDTs) for falciparum malaria recognize histidine-rich protein 2 antigen,” the malaria parasite has repeatedly evolved a way to sometimes elude detection, giving it a selective advantage, since this sneakier form of the parasite is less likely to be treated with anti-malaria drugs and eliminated. But what gets lost in the media hype is that the trick is managed by deleting histidine-rich protein 2 (pfhrp2) and 3 (pfhrp3) genes — devolution.

Tossing Off Genetic Information
A similar story unfolds in a Current Biology article focused on the yeast S. cerevisiae. Behe says the thinking used to be that, since this yeast was an earlier and simpler evolutionary form, it was no wonder it had fewer introns than later, more sophisticated organisms higher up the evolutionary tree. But as Behe underscores and as this recent paper argues, it looks instead like the yeast devolved, tossing off genetic information to achieve a niche advantage while sacrificing functionality outside the niche.

But evolution’s grand tree-of-life story requires constructive evolution, not more and more cases of organisms tossing parts overboard. Instead, here we have two more examples strengthening Behe’s thesis that devolution dominates the biological scene, swamping by many orders of magnitude cases of genuine, complexity-building evolutionary mutations (if any such exist), rendering the prospect of substantive constructive evolution hopeless.

John West #fundie evolutionnews.org

Don't expect the "mainstream" media to notice the biting irony here: The people they like to portray as the champions of free inquiry and scientific literacy are the very ones trying to dumb down science curricula in order to suppress information they find uncomfortable. Fortunately, Americans still have the freedom to investigate the truth for themselves, which is why the Darwinists' current strategy will be such a loser over the long term. Trying to stamp out the discussion of ideas you don't like is a sign of insecurity, and thoughtful people will eventually see through such tactics.

Cornelius Hunter #fundie evolutionnews.org

Yale's Steven Novella Argues with Michael Behe -- Here's Why Novella Is Wrong

Steven Novella, a neurologist and noted "skeptic" at Yale University School of Medicine, has commented on a recent Harvard University experiment for visualizing bacterial adaptation to antibiotics. In doing so, he argues with Michael Behe, whose take on the subject was noted at Evolution News. Here is why Dr. Novella is wrong.

The Harvard researchers constructed a giant petri dish with spatially varying antibiotics to watch how bacteria adapt over time and space (the researchers came up with a great name for the experiment: The microbial evolution and growth arena [MEGA]-plate). And adapt they did. Those adaptations were instantly claimed as an example of evolution in action. The researchers wrote that the "MEGA-plate provides a versatile platform for studying microbial adaption and directly visualizing evolutionary dynamics" (emphasis added). And the press release informed the public that the experiment provided "A powerful, unvarnished visualization of bacterial movement, death, and survival; evolution at work, visible to the naked eye." Likewise, Novella called it "a nice demonstration of evolution at work in a limited context." There's only one problem: The experiment did not demonstrate evolution; it falsified evolution.

First off, Novella deserves some credit for acknowledging at least some limitations in the experiment's results:

Of course, this one piece of evidence does not "prove" something as complex and far ranging as the evolution of life on Earth.

Novella also deserves credit for acknowledging that evolutionary change that requires a few mutations, rather than merely one, is a big problem. Novella has solutions that he believes resolve this problem, but at least he acknowledges what too often is conveniently ignored.

What Novella does not acknowledge, however, is that bacterial adaptation research, over several decades now, has clearly shown non-evolutionary change. For instance, bacterial adaptation has often been found to be rapid, and sensitive to the environmental challenge. In other words, when we look at the details, we do not find the evolutionary model of random variation slowly bringing about change, but rather environmentally directed or influenced variation.

That is not evolution. And indeed, the Harvard experiment demonstrated, again, very rapid adaptation. In just ten days the bacteria adapted to high doses of lethal antibiotic. As one of the researchers commented, "This is a stunning demonstration of how quickly microbes evolve."

True, it is "stunning," but "evolve" is not the correct term. The microbes adapted.

The ability of organisms to adapt rapidly falls under the category of epigenetics, a term that encompasses a range of sophisticated mechanisms that promote adaptation sensitive to the environment. Given our knowledge of bacterial epigenetics, and how fast the bacteria responded in the Harvard experiment, it certainly is reasonable to think that epigenetics, of some sort, may have been at work.

Such epigenetic change is not a new facet of evolution; it contradicts evolution. Not only would such complex adaptation mechanisms be difficult to evolve via random mutations, but they wouldn't provide a fitness improvement, and so would not be selected for, even if they did somehow arise from mutations.

Epigenetic mechanisms respond to future, unforeseen conditions. Their very existence contradicts evolution. So the Harvard experiment, rather than demonstrating evolution in action, is probably yet another example of epigenetic-based adaptation. If so, it would contradict evolution.

Another problem, one that Michael Behe points out, is that it appears that most of the mutations that occurred in the experiment served to shut down genes. In other words, the mutations broke things; they did not build things. This is another way to see that this does not fit the evolutionary model. It's devolution, not evolution. Novella begs to differ, and says Behe has made a big mistake:

Behe is wrong because there is no such thing as "devolution." Evolution is simply heritable change, any change, and that change can create more complexity or more simplicity. Further, altering a protein does not "degrade" it -- that notion is based on the false premise that there is a "correct" sequence of amino acids in any particular protein. Evolution just makes proteins different. Proteins perform "better" or "worse" only in so much that they contribute to the survival and reproduction of the individual. If it is better for the survival of the organism for an enzyme to be slower, then the slower enzyme is better for that organism.

First, Novella ignores the fact that many of the mutations introduced stop codons, and so did not merely slow an enzyme but rather shut it down altogether.

Second, it is not Behe who is making the mistake here; it is Novella. He says "Evolution is simply heritable change..." But this is an equivocation.

On the one hand, evolutionists want to say that shutting down or slowing a gene is "evolution," but on the other hand, they say that a fish turning into a giraffe is "evolution."

Unfortunately, evolutionists routinely make this equivocation. This is because they don't think of it as an equivocation. In their adherence to and promotion of the theory, the distinction is lost on them. All change just smears together in one big long process called evolution. You can see other examples of this here and here.

So the comments, press releases, and articles send a misleading message. Readers are told that the researchers have seen "evolution in action." The message is clear: This is evolution, the evolution. But it isn't. There is nothing in these findings that show us how a fish turns into a giraffe.

Multiple Mutations

As I mentioned, Novella also believes that evolution coming up with designs requiring multiple mutations is not a problem. His reasoning is that while this would be a problem if most mutations were harmful, they aren't. Most mutations are neutral, so evolutionary drift can introduce the many needed mutations, and once the set of required mutations is in place, you have the new design.

This is a profound misunderstanding of the problem evolution faces. You can't evolve a protein, for example, with drift. That most mutations are neutral does not suddenly lift the curse of dimensionality or resolve this astronomical search problem. There just is no free lunch.

Similarly, Novella makes yet another profound mistake involving what he calls "the lottery fallacy."

The first is basically the lottery fallacy - considering the odds of John Smith winning the lottery by chance alone and concluding it could not have happened by chance. Rather, you should consider the odds that anyone would win the lottery. This is actually pretty good. Behe looks at life on Earth and asks -- what are the odds that this specific pathway or protein or whatever evolved by chance alone. He is failing to consider that there may have been billions of possible solutions or pathways down which that creature's ancestors could have evolved. Species that failed to adapt either migrated to an environment in which they could survive, or they went extinct. In other words, Behe should not be asking what the odds are that this bit of complexity evolved, but rather what are the odds that any complexity evolved. It is difficult to know the number of potential complexities that never evolved -- that number may dwarf the odds of any one bit evolving. Right there Behe's entire premise is demolished...

This is a terribly flawed argument for several reasons. First, life needs proteins. All life that we know of needs proteins. Thousands of proteins.

Yet proteins are far beyond evolution's reach. It is true, per Novella's point, that there are a whole lot of ways to make a given protein. There are many, many different amino acid sequences that give you a globin. But "many, many" is like a grain of sand compared to the astronomical amino acid sequence search space. Again, there is no free lunch.
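To put rough numbers on that comparison (the figures below are illustrative assumptions chosen for the sake of the arithmetic, not data from the experiment or from Novella's post): a protein of 150 amino acids drawn from the 20 standard residues has

\[
20^{150} \approx 1.4 \times 10^{195}
\]

possible sequences. Even if one generously supposed that as many as $10^{40}$ of those sequences folded into a working globin-like protein, the functional fraction would be only

\[
\frac{10^{40}}{10^{195}} = 10^{-155},
\]

which is the sense in which "many, many" functional sequences can still amount to a vanishingly small sliver of the total search space.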

But Novella goes further than this, which brings us to the second flaw. Novella is not merely arguing there are many different ways to construct life as we know it. He is pointing out that there are, or at least there could be, a whole bunch of different ways to make life in the first place.

If you take them all together, you could have a pretty big set of possibilities. Perhaps it is astronomical. So what we got in this world -- the life forms we observe -- are not point designs in an otherwise lifeless design space. Rather, the design space could be chock full of life forms. And hence, the evolution of life becomes likely, and "Right there Behe's entire premise is demolished."

What Novella is arguing for here is unobservable. He is going far beyond science, into an imaginary philosophical world of maybes.

Not only is Novella clearly appealing to the unobservable, but even that doesn't work, at least under any common-sense approach. There is no question that the design space is full of useless blobs of chemicals that do nothing. A speculative claim? No, that is what this thing called science has made abundantly clear to us. Even the simple case of a single protein reveals as much. It takes only a relatively few mutations to rob most proteins of their function. Protein function is known to drop dramatically as different amino acids are swapped in.

Of course this is all obvious to anyone who understands how things work. Sure, Novella may be right that there are other, unknown solutions to life. But that isn't suddenly going to resolve evolution's astronomical search problem. The problem was never contingent on the life we observe being the only life forms possible.

[Submitter's Note: Emphasis original]

Jonathan Wells #fundie evolutionnews.org

Furthermore, the similarity of HOX genes in so many animal phyla is actually a problem for neo-Darwinism: If evolutionary changes in body plans are due to changes in genes, and flies have HOX genes similar to those in a horse, why is a fly not a horse?

Bruce Chapman #fundie evolutionnews.org

The State of Scientific Research on Intelligent Design

I keep getting asked about the scientific research projects underway that relate to Darwinism and intelligent design. So why aren't we talking more about them publicly? For several good reasons:

The most important is that the Darwinist establishment would like nothing better than to "out" research programs before they are finished. The idea is to shut down damaging evidence as early as possible. Strangle the infant in the crib. Demand answers now to questions still being explored.

Paranoia? Hardly. There are too many examples of ID scientists and other scholars who have been hassled and harassed by the Darwinist Inquisition. [...]

As for foes and critics who pester us for information about research now underway and who insinuate that, unless we oblige them, we must accept their opinion that such research is not happening, we owe them nothing.

John West #fundie evolutionnews.org

"Ironically, the only reason Florida Darwinists would have to fear that this bill might protect intelligent design somewhere down the road is if they already have concluded they cannot win the debate over whether ID is science."

Michael Flannery #fundie evolutionnews.org

What the Piltdown Hoax Tells Us, 104 Years Later

A curious anniversary falls this weekend. On December 18, 1912, the infamous Piltdown hoax was unveiled to an astonished audience of the Geological Society of London by lawyer and amateur archeologist Charles Dawson (1864-1916) and Arthur Smith Woodward (1864-1944) of the British Museum. What they showed was nothing short of amazing: the apparent remains of a human-like skull attached to an ape-like jaw. Allegedly unearthed at the Piltdown gravel pit in East Sussex, England, it was hailed as the missing link -- a truly history-making discovery!

It would take nearly 41 years to expose the artifact as a fraud. On November 21, 1953, officials of the British Natural History Museum revealed the shocking truth that Piltdown man was a hoax, a combination of three species: a medieval human cranium, the jaw of a centuries-old young orangutan, and some fossilized chimpanzee teeth. Various culprits have been proposed, including famed Jesuit philosopher Teilhard de Chardin (1881-1955) and physician/novelist Sir Arthur Conan Doyle (1859-1930). But the most recent investigation suggests that the imposture was likely perpetrated by Dawson alone in an effort to gain recognition and election as a Fellow of the Royal Society (see "Piltdown hoax solved," Forbes, August 10, 2016).

Writing for Harper's on the second anniversary of the Piltdown exposure, paleontologist Loren Eiseley (1907-1977), not one to look at an event or a phenomenon superficially, asked, "Was Charles Darwin Wrong About the Human Brain?" Eiseley noted that Alfred Russel Wallace (1823-1913), co-discoverer of the theory of natural selection, was unimpressed with the Piltdown "find" from the beginning. Writing to a friend in August 1913 (just three months before his death), Wallace exclaimed, "The Piltdown skull does not prove much, if anything!" Why, asked Eiseley, had Wallace, almost alone among the scientific community, so summarily dismissed this apparently stunning missing link? The answer was simple: "he did not believe in a skull which had a modern brain box attached to an apparently primitive face and given, in the original estimates, an antiquity of something over a million years." The archeological "discovery" would have confirmed Darwin's Descent of Man in dramatic fashion. Indeed, from a Darwinian perspective, Piltdown man was just the sort of thing that would have been predicted.

But Wallace's "voice of lonely protest," observed Eiseley, underscored "the abyss which yawned between man and ape" that Darwinians at the time blissfully ignored. Having observed primitive cultures in South America and the Malay Archipelago for more than twelve years, Wallace concluded (quoting Eiseley) that humans' "mental powers were far in excess of what they really needed to carry on the simple food-gathering techniques by which they survived." Certainly no process of natural selection was adequate to produce such superior powers of art, reason, and morals. For Wallace, the human brain freed mankind from the tyranny of natural selection:

Here, then, we see the true grandeur and dignity of man. On this view of his special attributes, we may admit, that even those who claim for him a position as an order, or a sub-kingdom by himself, have some show of reason on their side. He is, indeed, a being apart, since he is not influenced by the great laws which irresistibly modify all other organic beings (Contributions to the Theory of Natural Selection, 1870).

How, then, do we account for this impressive array of human attributes? Wallace thought that mankind might well have emerged comparatively recently, and that the rapid evolution of the modern human brain would confirm that "distinct and higher agencies" have been responsible for these mental attributes and attainments.

Eiseley confessed, "Since the exposure of the Piltdown hoax all of the evidence at our command -- and it is considerable -- points to man, in his present form, as being one of the youngest and newest of all earth's swarming inhabitants. . . . Today, with the solution of the Piltdown enigma, we must settle the question of the time involved in favor of Wallace, not Darwin." Although Eiseley thought some other wholly naturalistic explanation might account for the late and virtually saltationist expansion of the human intellect, he confessed that "science . . . has yet to explain how we have come so far so fast, nor has it any completely satisfactory answer to the question asked by Wallace long ago."

Today we still wait for an explanation, and it must be admitted that various speculations along the lines of blind chance and necessity or natural selection remain as unsatisfactory as when Eiseley was writing more than sixty years ago. A century after Wallace's dismissal of Piltdown man, science still confirms Eiseley's assessment and Wallace's vindication. The chart below shows the timeline for ascending brain size/body weight estimates for Sahelanthropus, Australopithecus afarensis, early Homo, Homo habilis, Homo erectus, H. heidelbergensis, Neanderthals, and H. sapiens.

[chart omitted]

This chart shows relative brain size as cm3 per 50 kg of body weight. Adapted with modifications from Robert Jurmain, Lynn Kilgore, et al., Introduction to Physical Anthropology, 2013-2014 ed. (Wadsworth, Cengage Learning, 2014), p. 357, and "Homo habilis," Encyclopedia Britannica, updated August 15, 2015.

Clearly brain size and capacity have not only increased, but increased at a very late and remarkably accelerated pace. Of course brain size is not the only measure of intellectual capacity; other factors may be involved. Some, for example, emphasize that Neanderthals, the closest historically to humans, possessed brains that were larger in absolute size than ours. But as recent analysis has uncovered, the Neanderthal brain was quite different from its human counterpart. Because it was much more elongated than globular, the indications are that Neanderthals "reached large brain sizes along different evolutionary pathways." The researchers' speculation that unique patterns of brain development in H. sapiens would have become "a target for positive selection" merely begs Wallace's original question (see Gunz et al., "Brain development after birth differs between Neanderthals and modern humans," Current Biology, Nov. 2010).

So the question remains: How did humans acquire such vast intellectual capacities so comparatively recently and so rapidly? Wallace called upon an "Overruling Intelligence" to explain human intelligence and many other features of complexity in biology and the cosmos. While Darwinians continue to search for some naturalistic cause, others, like British physician James Le Fanu, point out that the disappointments of the high-tech approaches to the intellect and the human mind, so touted by the Human Genome Project and promised in the "Decade of the Brain" of the 1990s, should force a reassessment of our species as truly unique (Why Us?: How Science Rediscovered the Mystery of Ourselves, 2009).

Eiseley's long-forgotten but intriguing article is fortunately now available as "The Real Secret of Piltdown" in a new two-volume set of his collected essays. As we reflect on the 104th anniversary of arguably science's greatest fraud, Eiseley's conclusion is as pertinent today as when it was first written:

The true secret of Piltdown, though thought by the public to be merely the revelation of an unscrupulous forgery, lies in the fact that it has forced science to reexamine carefully the history of the most remarkable creation in the world -- the human brain.

If the Cambrian period of 530 million years ago, with its amazingly rapid proliferation of animals over a mere 5-to-6-million-year timespan, poses serious challenges to Darwin's insistence upon slow, incremental change (see Darwin's Doubt), then how much more should the transformational changes in the human brain over the past 100,000 to 200,000 years prompt a serious reevaluation of the nature of human beings and the means by which they came to be. If the Cambrian "explosion" is just too much change over too little time to be explained by Darwinian processes, the human brain is way too much change over way too little time. Perhaps Wallace's view of the Piltdown hoax still holds an important lesson for us today. Maybe the most dramatic "explosion" of all is the one that rests within our crania.

Michael Egnor #fundie evolutionnews.org

We are more different from apes than apes are from viruses. Our difference is a metaphysical chasm. It is obvious and manifest in our biological nature. We are rational animals, and our rationality is all the difference. Systems of taxonomy that emphasize physical and genetic similarities and ignore the fact that human beings are partly immaterial beings who are capable of abstract thought and contemplation of moral law and eternity are pitifully inadequate to describe man.

The assertion that man is an ape is self-refuting. We could not express such a concept, misguided as it is, if we were apes and not men.