The Biblical Textual Criticism of Bart Ehrman

Bart Ehrman has become quite (in)famous for his views on the textual accuracy of the Bible. Having read his books Misquoting Jesus and Lost Christianities and watched his debate with James White, I’ve noticed a common thread.

Bart holds that we cannot know the original text of the books in the Bible. His argument boils down to this: the oldest manuscripts we have are also the ones with the greatest number of textual variants. He reasons that if you go back farther, to the period of 50AD to 350AD when we lack manuscripts, you reach the period in which the least skilled individuals were making copies, thus producing the greatest number of errors. This did not sit right with me from a logical standpoint; it struck me as both an oversimplification and an overstatement of confidence.

An obvious problem is that if you keep going back in time, you do not get an ever-larger number of variants, because eventually you must get fewer and fewer copies until you arrive at the single original. When did the percentage of variants and errors reach its peak? Was it closer to 350AD or 100AD? 350AD is just after the time of Constantine. Before this period the church had been expanding in geographic scope consistently. There is no reason to believe that the rate at which copies (and thus cumulative errors) were being made had slowed in any way. On the contrary, we have plenty of reason to believe that the peak of textual variants came towards the end of this period, right around the time of Constantine, when the religion was “standardized” and scripture canonized.
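To make that concrete, here is a tiny back-of-the-envelope sketch. All of the numbers in it are my own illustrative assumptions, not historical estimates: the net copying rate, the expected variants introduced per copy, and the generation length are placeholders. The only point it demonstrates is that when every surviving copy is itself recopied, the copy population compounds, so most new variants arise late in the 50AD–350AD window rather than early.

```python
# Back-of-the-envelope sketch: cumulative variants under compounding copying.
# All numbers below are illustrative assumptions, not historical estimates.

NET_NEW_COPIES_PER_COPY = 0.5   # assumed: each existing copy yields 0.5 new copies per generation
NEW_VARIANTS_PER_COPY = 2       # assumed: expected new readings introduced per act of copying
GENERATIONS = 12                # ~300 years at roughly 25 years per copying generation

copies = 1.0                    # start from the single original
cumulative_variants = 0.0

for gen in range(1, GENERATIONS + 1):
    new_copies = copies * NET_NEW_COPIES_PER_COPY
    cumulative_variants += new_copies * NEW_VARIANTS_PER_COPY
    copies += new_copies
    print(f"generation {gen:2d}: ~{copies:6.0f} copies in circulation, "
          f"~{cumulative_variants:6.0f} cumulative variants introduced")
```

Because the number of copies in circulation grows each generation, the rate of newly introduced variants grows with it; on these assumed numbers, more than half of all variants appear in the last couple of generations before the end of the window, which is the point being made above.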

This is important to keep in mind for the rest of this post, but it was not the point Ehrman was making. He was saying that the chain of custody was far worse before 350AD than after, so we can have little confidence that what we have now is anything close to the original. So let’s look at this more closely.

His first assumption is that the chain of custody was much “dirtier.” His second assumption is that the common ancestor (a single document or family of documents) for the diverging manuscripts we have now is late. If they shared a common ancestor from around 120AD, then the chain of custody must have been very good for the later divergent manuscripts to be as close to each other in content as they are. But if they shared a common ancestor from the Council of Nicea in 325AD, then we’d essentially have only one document family stretching in a single line back almost 300 years. In this latter case we would have very little confidence at all that the text was close to the original.

Both probability and intuition would suggest that they must have shared an ancestor that was neither very early nor very late. Still, this is an unknown and being dogmatic on the point is unwise as the margin of error in any estimate will be quite high.

Chain of Custody

His first assumption is that the chain of custody was much “dirtier.”

Let’s look at what happened to the documents during this period. When the originals were written, they were delivered to a particular Christian community. From there, copies were made, and those were distributed. More and more copies were made at that location and the process continued. Obviously the original didn’t simply cease to exist. It could have stayed at the original location where more copies were made. Or the letter could have been passed on and the copies stayed behind. We don’t know.

Nevertheless, the most accurate early copies would have been the sources that spread quite rapidly throughout the geographical extent of Christianity. Many of these copies would have been made while some of the apostles were still alive and able to correct false doctrines that may have arisen from forgeries or errors.[1]

One possibility is that the spread of the early copies was extreme and that the early manuscripts we had were based on a broad geographical set of texts. This is important because the faster the spread, the less likely any single error could be reproduced in all the copies, meaning the original reading still survived in at least some of them. It also prevented corruption by any single authority trying to enforce a particular textual variant.[2]

The earlier the time, the less likely that any meaningful variants could ‘stick’. One factor is the apostles correcting errors directly. This would extend to the second generation as well: church elders who knew the apostles personally would have been able to correct certain errors.

200 years is a long time, but if we’re going to extrapolate from what we know about humanity in order to make conjectures about the number of variants, let’s use the KJV as an example. This beloved translation is just over 400 years old, and people are still using it. The original texts would have been cherished by the Christian communities. Sure, they would have been copied any number of times, but these communities would have formed their own rigid doctrines, much like the communities we are familiar with. The very rigidity that the religious are accused of supports the idea that they would have kept their copies close.

These were also not the same types of folk that we are. Many were illiterate, but they had a superior capacity for memorization compared to the average modern person. These people would have memorized vast portions of the texts that they heard read to them and would have had decades to detect any errors in copies made. Sure, errors would and did creep in, but large-scale doctrinal changes would have been much more difficult to insert and to have survive as the only or “most probable” version of what we have now.

Here is a summary of some of the issues at play:

Memorization: Undistributed copies within a community would be vetted by the community itself. Anyone with children has seen how they can spot extremely minor changes in a reading.[3] This effect is even stronger in the oral tradition of a largely illiterate population.

Authorization: The apostles and their authorized representatives would have had an early corrective influence. We can see some hints of this in some of the letters themselves and in the external writings of the time period.

Dogmatism: Religious communities tend to be very dogmatic about their texts and likely to hold on to them dearly, extending their lifespan. They might make dogmatic ‘adjustments’ to the text, but these are likely to follow a theme and be detectable (like the Gnostic gospels), and also likely to be limited in number. Once a doctrine is established, it is very hard to change it, especially undetected.[4][5]

It is important to remember that the mere existence of textual variants does not imply in any way that we do not have the original. We may have great difficulty deciding between the original and various forgeries, but that’s not the same as not having it. Interpretation of the text cannot be avoided[6], but we do have reasonable confidence that we can do it correctly. By all indications, the letters were distributed quickly and broadly.[7]

Common Ancestry

His second assumption is that the common ancestor (a single document or family of documents) for the diverging manuscripts we have now is late. If they shared a common ancestor from around 120AD, then the chain of custody must have been very good for the later divergent manuscripts to be as close to each other in content as they are.

If the transmission of documents between 50AD and 350AD was as bad as Ehrman suggests, then the common ancestor of the manuscripts we have must have been late in the period.[8] This is not only a logical deduction based on the initial assumption, but a requirement. If it could be shown that the common ancestor was not late, then his assumption falls to pieces and the chain of custody must have been better than he assumed.

Ehrman rightly points out that probability is of little use when determining which manuscripts are more reliable than others. But his statement about the probability of error at a given point in time can be evaluated mathematically. If his statements were true, we would expect a random distribution of document families among the manuscripts we do possess, so long as there was no external influence to selectively save some documents while destroying others. We would then expect to see extreme variation among the manuscripts we have, reflecting their divergent origins and the lateness of their production. But this is not what we see at all. There are very few highly contested portions in the manuscripts we possess. Yes, there are many variations, but most are minor, and scholars are not in disagreement over many large issues. This relative harmony is hard to square with widely divergent origins and a poor chain of custody.
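As a rough illustration of that reasoning, here is a small Monte Carlo sketch. It is my own illustration, with entirely assumed parameters: the text length, the number of copying generations, the number of survivors, and the per-word error rates are placeholders, not measurements of real manuscripts. It simulates several independent lines of transmission descending from a common ancestor and measures how much the surviving copies disagree with one another; the higher the per-copy error rate, the more the survivors should disagree, and the relatively small disagreement among real manuscripts is the observation the paragraph above appeals to.

```python
# Monte Carlo sketch: how much should independently transmitted copies disagree?
# All parameters are illustrative assumptions, not measurements of real manuscripts.
import random

TEXT_LEN = 10_000      # assumed "units" of text (think words)
GENERATIONS = 10       # assumed copying generations between the ancestor and the survivors
SURVIVORS = 5          # assumed number of surviving, independently descended manuscripts

def copy_once(text, error_rate):
    """Copy a text, corrupting each unit to a fresh random reading with probability error_rate."""
    return [u if random.random() > error_rate else random.randrange(1, 10**9) for u in text]

def descend(text, error_rate):
    """Run one independent line of transmission for GENERATIONS copyings."""
    for _ in range(GENERATIONS):
        text = copy_once(text, error_rate)
    return text

def mean_disagreement(copies):
    """Average fraction of units at which two surviving copies disagree."""
    pairs = [(a, b) for i, a in enumerate(copies) for b in copies[i + 1:]]
    return sum(sum(x != y for x, y in zip(a, b)) / TEXT_LEN for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    random.seed(0)
    original = [0] * TEXT_LEN
    for rate in (0.0005, 0.005, 0.05):   # assumed per-unit, per-copy error rates
        survivors = [descend(original, rate) for _ in range(SURVIVORS)]
        print(f"per-copy error rate {rate:>6}: mean pairwise disagreement {mean_disagreement(survivors):.2%}")
```

For small error rates the disagreement grows roughly in proportion to the error rate and the number of generations, so on this toy model a genuinely “dirty” chain of custody combined with divergent lines of descent would be hard to hide in the surviving copies.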

Where did the manuscripts from the assumed widely divergent document families go? The manuscripts we do have come from an assortment of geographic locations and sources. Either there was a widespread conspiracy to destroy the documents that opposed the ones that survived, or the number of errors was much lower than assumed. The latter clearly favors the authenticity of the texts. What about a conspiracy?

A conspiracy implies one of the following:

1) The documents were selectively and completely destroyed after they were transmitted. This is possible, and certainly some documents were destroyed, but where is the evidence that it happened successfully across the whole spread of Christianity? If it did, why would we still have so many other ancient documents of various competing sub-sects of Christianity, such as the Gnostics? Given how widely Christianity had spread geographically, this seems hard to accept.

2) There was editorial control over the chain of custody. This directly contradicts the original assumption that the chain of custody was significantly worse during this period. You can’t have it both ways.

Conclusion

The real problem with Ehrman’s belief is that the incidence of errors in the period 50AD to 350AD is not independent of the manuscripts we have after that period. He is extrapolating backwards based on a theory and the existing documents, but he fails to extrapolate forward from his theories. If transmission was so poor and inaccurate during the period for which we have no manuscripts, then we would see evidence of that in later manuscripts, because errors are cumulative.

The evidence that we have suggests not that the documents are unreliable, but that they have a relatively early common ancestor and/or that the chain of custody was more reliable than Ehrman assumes.

Now of course we could discover additional manuscript evidence that pushes our confidence in either direction, but we can only base our belief on what we do know, and what we know is pretty good stuff.

NOTE: While I believe that I have identified inconsistencies in the presentation of the argument, this is not a refutation of Bart Ehrman’s positions. It raises a lot of questions that might be easily addressed. If they are addressed, or if mistakes are pointed out, I will update this article accordingly.

For a refutation of the book Misquoting Jesus, see this paper by Professor Tom Howe.

[1] It is interesting, but certainly not conclusive, that we do not have much in the way of such records. The New Testament writings do contain references to other teachers and other teachings, but not explicit corrections of errors or forgeries to an author’s own previous work, even though we do see citations of those previous works. Seeing as how even personal letters (e.g. Philemon) were cherished by the church at large, how much more would the church have cherished letters that Paul himself wrote to correct doctrinal problems caused by critical manuscript errors.

[2] A linear chain of officially authorized messengers is one way to ensure a chain of custody, assuming you have some way to authenticate the authorized messages; otherwise the chain is not valuable. But this is not the only way. Rapid mass transmission allows a consensus to emerge that can find and correct errors among many copies. There is plenty of indication in the existing manuscripts that this is, in fact, what happened, and what we know of as the Bible owes a lot to these acts, whether they were accurate or in error. Such an organic process makes the text highly resistant to tampering: significant errors (if they happened) would almost certainly be detected, even if not resolved, allowing us to reason about whether significant errors did, in fact, occur.

[3] Have you ever had a child, who was a diligent reader, read Roald Dahl’s “Charlie and the Chocolate Factory” and then watch the movie? They can immediately and accurately reference the book to show how the content was altered.

[4] The Roman Catholic Church has a history of expounding on non-scriptural doctrines based primarily on later writings, such as the Trinity and the Incarnation. This is why John C. Wright rejects alternate sects: because they simplify doctrine, losing information (e.g. the Protestant Reformation): “But the truths of the Catholic faith would be compressed and corrupted and suffer simplification into a master idea. [...] Heresies are always simplifications of a complex and interdependent organism of ideas into a single master idea, which, upon consideration, has no warrant for supremacy.” It is very hard to change dogma once established. Even the Roman Catholic Church experienced its largest expansion of doctrine early in its existence, mostly during the 4th and 5th centuries.

[5] There are many documented attempts by church scribes to alter passages of scripture to deemphasize or condemn the role of women in the church. This type of major alteration to scripture and doctrine did not go undetected, for precisely the reasons stated above.

[6] As a matter of semantics, this includes the Holy Spirit revealing the correct interpretation of scripture.

[7] The creed that Paul cites in 1 Corinthians 15:3-5 is thought to originate within 3 to 5 years of the resurrection and to have spread rapidly.

[8] The more time that elapsed during transmission, the greater and more significant the variations must have become within that fixed period. Thus, if the original behind an extant manuscript were determined to originate in 50AD, we would expect its extant variations to be far, far worse than those of another composed in 180AD, assuming Ehrman’s initial assumption about the error rate in transmission during that period is correct. Moreover, the earlier the divergent origins of different manuscripts, the more you would expect them to disagree with each other, due to errors accrued separately.