Statistical Illiteracy in Textual Scholarship
The paper being discussed is from the journal:
Bibliotheca Sacra, BibSac 148 (1991): 150-169.
A journal put out by Dallas Theological Seminary.
Then in June 2004 the paper was posted on the internet.
======================================
Please note the math in this article, Daniel Wallace responding to Wilbur Pickering, an article that has been available for reading for over 25 years, since 1991.
This section was recently quoted on Facebook, by a respected creationary scholar, as part of an attack on the TR-AV and Majority positions, and it is frequently quoted online. The numbers given here, 98% and 99%, were a major part of the argument against the significance of the Received Text and Byzantine/Majority text positions.
======================================
The Majority Text and the Original Text: Are They Identical? (1991)
Daniel Wallace
https://bible.org/article/majority-text-and-original-text-are-they-identical
"There are approximately 300,000 textual variants among New Testament manuscripts. The Majority Text differs from the Textus Receptus in almost 2,000 places. So the agreement is better than 99 percent.
"How different is the Majority Text from the United Bible Societies’ Greek New Testament or the Nestle-Aland text? Do they agree only 30 percent of the time? Do they agree perhaps as much as 50 percent of the time? This can be measured, in a general sort of way. There are approximately 300,000 textual variants among New Testament manuscripts. The Majority Text differs from the Textus Receptus in almost 2,000 places. So the agreement is better than 99 percent. But the Majority Text differs from the modern critical text in only about 6,500 places. In other words the two texts agree almost 98 percent of the time." **
** "Actually this number is a bit high, because there can be several variants for one particular textual problem, but only one of these could show up in a rival printed text. Nevertheless the point is not disturbed. If the percentages for the critical text are lowered, those for the Textus Receptus must also be correspondingly lowered."
The problem we have here is that this is not fuzzy math, this is full-blown bogus math. The methodology used is totally false.
The number of textual variants counted globally (i.e., across all Greek mss), in this case 300,000, the divisor, is a number unrelated to the affinity between any two specific texts. Two texts do not get closer together if the total variant count is reckoned at 1,000,000 instead of 300,000. They do not get farther apart if the total variant count is reckoned at 50,000 or 20,000. And since the number is unrelated, it also has plenty of wiggle room: one could just as well specify only translatable, significant, or printed variants.
Thus, if you plugged in 20,000 as the divisor (a count, perhaps, of total printed or significant variants), your affinity number for the two texts, Byz/Maj vs. CT, would be close to 67% instead of 98%. Yet the two texts being compared have not changed in even one letter. And then this conclusion in the paper simply would not be possible:

"Not only that, but the vast majority of these differences are so minor that they neither show up in translation nor affect exegesis. Consequently the majority text and modern critical texts are very much alike, in both quality and quantity." - Daniel Wallace, ibid

This conclusion has other difficulties, because it is simply not true that the vast majority of the 6,500 Byz/Maj-CT differences are not translatable. So you have GIGO, with a false "very much alike" conclusion. This bogus conclusion was keyed off the statistically false 98% number, essentially a plug-in produced by choosing the unrelated 300,000 figure as the divisor.
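To see how arbitrary the divisor makes the result, here is a minimal arithmetic sketch in Python. Only the 6,500 differences figure comes from the article; the alternative divisors are simply the other global totals mentioned above, plugged in to show that the "agreement" percentage tracks the divisor, not the two texts being compared.

# Minimal sketch of the divisor problem. Only the 6,500 differences figure
# comes from the article; the alternative divisors are plugged in for
# illustration.
differences = 6500  # Byz/Maj vs. critical text differences, per Wallace

for total_variants in (1_000_000, 300_000, 50_000, 20_000):
    agreement = 1 - differences / total_variants
    print(f"divisor {total_variants:>9,} -> 'agreement' {agreement:.1%}")

# Roughly: 99.4%, 97.8%, 87.0%, 67.5%.
# The two texts never changed by a letter; only the unrelated divisor did.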
And note, this statistical problem in the paper should be easily recognized by the smell test. 6,500 variants in 8,000 verses can have various measurements of affinity (see the sidenote below), but coming up with 98% is extremely unlikely under any sensible measure. With about forty full verses omitted in the CT that are in the Byz (a few more in the TR), and thousands of significant variants, how could it be 98%?
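As a rough sanity check, using only the round figures already in play (6,500 differences and roughly 8,000 NT verses, both approximate), a quick sketch shows why the 98% figure fails the smell test:

# Rough smell test with the post's ballpark figures; not a census of variants.
differences = 6500
verses = 8000  # approximate New Testament verse count

# Even if the differences cluster several to a verse, far more than 2% of
# verses are still touched, so a verse-level "98% agreement" is implausible.
for per_verse in (1, 2, 3):
    affected_verses = differences / per_verse
    print(f"{per_verse} difference(s) per affected verse -> ~{affected_verses / verses:.0%} of verses differ")

# Roughly 81%, 41%, and 27% of verses -- nowhere near the 2% that a
# "98% agreement" figure would suggest.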
There is also nothing complicated in realizing that the math does not fit. Anybody who has read and understood the classic Darrell Huff book How to Lie With Statistics should be able to find the problem in a couple of minutes, and recognizing the problem here does not require any special skills or training.
Incidentally, we do not know if Daniel Wallace played with these numbers with the purpose of deceiving about the texts (hopefully not), or if he is simply statistically illiterate. Perhaps he did not think about it and put the statistics forth in the paper as a sort of hopeful monster attempt. His footnote indicates that he had some second thoughts, yet he never realized that his divisor is improper: an unrelated number that has nothing to do with the textual affinity he claimed to be measuring.
========================================
What can be done?
If a very simple statistical calculation is totally wrong in textual science, and is not noticed by the writer, his reviewers, peers and students, for years, for decades ... what about graphs and more sophisticated presentations?
An example: articles on the topic of manuscripts through the centuries by Daniel Wallace and James White have used graphs that rest on similarly false methodologies. There is one in the article above. The purpose: to present a "revisionist history" (Maurice Robinson's phrase), the impression that the Byzantine text and its variants only entered the Greek text late. It is a sort of back-door method of keeping the Syrian (or Lucian) recensions alive, giving an impression of the AD 400 to 900 period that is against all accepted textual history, and giving the impression that the early centuries were massively Alexandrian.

(E.g. 100+ localized papyri fragments from gnostic-influenced Egypt, technically each one a manuscript, yet totaling only a couple of NT books, are capable of skewing any statistical calculation that is based only on numbers of manuscripts. And this is one of many problems. Some graphs do not even have an X or Y axis description, one of the tricks pointed out by Huff. This was covered separately on a textual forum.)

We can see in textual science that the goal of agitprop against a text like the Reformation Bible (TR) can outweigh scholarly study. This started with Hort ("vile" and "villainous" describing the TR, even before he began) and the beat goes on. And the math, statistical and graphic presentations will be unsound and unreliable.
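A brief illustration of the manuscript-count skew, with deliberately invented numbers (none of the counts below are real data), shows how a tally of manuscript items and a tally of how much text those items actually contain can point in opposite directions:

# Illustration only: every number below is invented to show the methodological
# problem, not to describe any real century of transmission.
fragmentary_items = 100    # e.g. small localized papyri fragments
fragment_coverage = 40     # hypothetical average verses of text per fragment

fuller_items = 5           # a few continuous-text witnesses
fuller_coverage = 2000     # hypothetical verses of text per witness

print(f"by item count: {fragmentary_items} vs {fuller_items}")
print(f"by text bulk:  {fragmentary_items * fragment_coverage} verses vs {fuller_items * fuller_coverage} verses")

# Counting items, the fragments "dominate" 100 to 5; counting the text they
# actually contain, 4,000 verses vs 10,000 verses, the picture reverses.
# A graph built only on item counts bakes in the first choice silently.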
And statistics can be manipulated on all sides; however, papers that are published are supposed to clear a high bar of correctness and examination. If a Byz or TR-AV supporter, or an eclectic, makes a similar blunder, it should be quickly caught and corrected.
Maybe SBL and ETS should have seminars teaching about the basics of statistical manipulation. And should reviewers of papers be vetted for elementary statistical competence? What do we say about students educated today in such a statistically illiterate environment?
My concern here is not just Daniel Wallace; it is also what this says about a type of scholastic and statistical dullness in the textual studies realm as a whole. This should not have lasted in a paper one week without correction, much less 25 years and counting.
========================================
Similarly, the problem is not only statistics. One can look at the recent 2008 paper by Van Alan Herd, The Theology of Sir Isaac Newton, which was accepted as a PhD dissertation, and see elementary blunders that passed review at the University of Oklahoma. Here is one of many examples:
The Theology of Sir Isaac Newton (2008)
Van Alan Herd
https://books.google.com/books?id=nAYbLOKKq2EC&pg=PA97 (the paper can also be found at gradworks.umi.com, however the Google URL goes right to this quote)

The error here, according to Newton, is assuming the word "God" as the antecedent to the Greek pronoun, ὃς, ("who"), as the King James translators had assumed it and replaced the pronoun with the noun, "God" in the Authorized (KJV) version. Newton questioned this translation on the grounds that it is incorrect Greek syntax to pass over the proximate noun "mystery" which is the closest noun to the pronoun ὃς in the text.

Virtually everything here is factually wrong, which anyone who has read and understood Newton's Two Corruptions would easily see.
========================================
And here is a kicker:
If a textual writer flunks the elementary logic of statistical understanding, and publishes false information for argumentation against our historic English Holy Bible, are they likely to be strong in other areas of logical analysis? Are they a good choice for making up your variants, for choosing your version?
========================================
Sidenote: finding an agreed-upon method to measure the % of affinity between two texts, even two clearly defined printed texts, is a bit complex and dicey, since the measurements used are subjective and variable. (What is the unit of comparison? Verses? Words? How many variants is a 12-verse omission/inclusion? And are you weighing variants?) So there can be a variety of results. This complexity, a bit more sophisticated than choosing the wrong divisor, is rarely mentioned when affinity numbers are given in the textual literature, even when the numbers make some sense, unlike the Daniel Wallace numbers above. This is a more general critique of the use of numbers in textual science.
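A small sketch, with invented toy numbers rather than real collation data, shows how much the chosen unit of comparison moves the percentage:

# Toy numbers only: pretend two texts differ at three places within a
# 10-verse, 120-word sample -- a one-word substitution, a two-word change,
# and a whole 12-word verse omitted by one of the texts.
words_total = 120
verses_total = 10

words_diff = 1 + 2 + 12   # words involved in the three differences
verses_diff = 3           # verses touched by any difference
units_diff = 3            # variation units, counted equally regardless of size

print(f"word-level agreement:  {1 - words_diff / words_total:.1%}")    # 87.5%
print(f"verse-level agreement: {1 - verses_diff / verses_total:.1%}")  # 70.0%
print(f"variation units: {units_diff} (a 12-word omission counts the same as a spelling change)")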
By contrast, for a three-way comparison of the nature of:
"The Peshitta supports a Byz-TR text about 75%, the Alex text about 25%"
it is easier to establish a sensible methodology that can be used with some consistency and followed by the readers and statistic-geeks quite easily.
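For that kind of three-way comparison the tally is easy to make transparent. Here is a hedged sketch with dummy data (not real Peshitta readings): at each variation unit where the Byz-TR and Alexandrian texts split, record which side the version supports, then divide by the number of units actually examined.

# Dummy data: one entry per variation unit where Byz-TR and Alexandrian differ.
# "B" = version supports the Byz-TR reading, "A" = supports the Alexandrian,
# "-" = ambiguous or supports neither.
readings = ["B", "B", "A", "B", "-", "B", "A", "B"]

decided = [r for r in readings if r in ("A", "B")]
byz_pct = decided.count("B") / len(decided)
alex_pct = decided.count("A") / len(decided)
print(f"Byz-TR support: {byz_pct:.0%}, Alexandrian support: {alex_pct:.0%}")

# The divisor here is the number of decided variation units actually examined,
# not some unrelated global variant total.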
Although even there the caution lights should be on, especially about the weight of variants, for which I offer a maxim for consideration:
"Variants should be weighed and not counted"
========================================
Steven Avery