Since my picture is featured prominently on Mark Ward’s KJB Study Project, I figure he shouldn’t have any problem with me exposing some of the flaws inherent in his survey. I mentioned a few of these flaws to Ward as a passing comment in one of our discussions, and he dismissed my concerns by saying, “I did not perform this survey with academic standards in mind, because I did not and do not have academic resources.” That’s a poor excuse for the type of errors I found in this study. An honest evaluation of Ward’s infamous study reveals significant weaknesses in survey design, execution, data presentation, and interpretive framework that should lead any serious researcher to question the validity of the results.

Sample Selection and Representation
The survey claims to have polled "100 pastors who preach and teach exclusively from the King James Bible," but provides no information about how these pastors were identified, selected, or recruited. The only data given about their selection is “Volunteers built a database of KJV-Only pastors by checking countless church websites and reading their doctrinal statements.” The absence of any significant demographic breakdown beyond this basic statement raises immediate concerns. Were these pastors randomly selected from a comprehensive database of KJV-using churches? Were they volunteers who responded to a call for participants? Were they selected through convenience sampling? The complete lack of transparency about sampling methodology makes it impossible to assess whether these 100 individuals represent any definable population.

Furthermore, the survey provides no information about the actual views of the respondents regarding the KJV. Ward admits in his book that the term "KJV-Only" encompasses a wide spectrum of positions, from those who simply prefer the KJV for its literary value while acknowledging the legitimacy of other translations to those who hold to the Ruckmanite view that God re-inspired the KJV translators. The survey claims that Extremists/Ruckmanites were not targeted, but admits that some were included. Without disaggregating these distinct groups, the survey lumps together pastors with vastly different theological commitments and levels of linguistic sophistication.

Methodology and Verification Problems

This survey of 100 pastors was conducted entirely by phone between November 2022 and April 2023. That statement in itself is very concerning. It took Mark Ward and a team of an undisclosed number of volunteers five months to make 100 phone calls. Such an extreme length of time raises serious questions about how selective Ward was in determining which pastors to call.
If he just wanted a random sampling of KJV pastors, he could have personally completed the entire survey in less than a week. Having a team of volunteers should have reduced that time even further. The fact that the survey took five months to complete reveals that the goal was something other than a representative sampling of pastors who preach from the KJV.

A phone survey was a poor choice for this type of research. Phone surveys rely on the immediate recall and articulation of respondents without opportunity for reflection, reference materials, or careful consideration, and they are often conducted at inopportune times for the respondents. The survey acknowledges that pastors "were offered the opportunity to look at a Bible or to hear the verses read to them again," but this format gives the researcher no control over environmental factors that may have affected the respondents’ ability to think clearly. More importantly, there is no information about who conducted these phone interviews, whether the interviewers were trained in standardized survey methodology, how the responses were coded, or whether any quality control measures were implemented. The potential for interviewer bias, inconsistent question presentation, or subjective interpretation of verbal responses is extremely high in this type of survey.

The survey also provides no information about response rates. How many pastors were contacted? How many declined to participate? Did those who participated differ significantly from those who declined? The website states that “every single response was used,” but that doesn’t answer the question of how many declined to respond. These are standard methodological questions that any peer-reviewed study should address.

Question Design Issues

The survey's questions reveal significant problems in design and framing. For example, all of the questions were asked in isolation, divorced from any pastoral, theological, or linguistic context.
Asking a pastor on a phone call to immediately classify a pronoun tests something quite different from asking whether that pastor's actual preaching and teaching demonstrates an accurate understanding of the text. A pastor might answer a random grammar question incorrectly when put on the spot while nevertheless preaching passages with full and proper comprehension of their meaning when given time to think and prepare. Additionally, the lack of randomization of the questions in the pronoun section likely led many pastors to second-guess their initial thoughts and give wrong answers simply because they had already given the exact same answer to several questions in a row.

The false friends section also demonstrates serious methodological problems. Each question asks pastors to define a single word or phrase, but the survey's open-ended format likely invited pastors to give theological or pastoral interpretations of entire passages rather than grammatically precise definitions of particular words. The survey then treats any answer that strays from a precise definition as incorrect, even when those answers were viable interpretations of the passage.

Data Presentation and Interpretation

The survey does not provide a detailed breakdown of how individual responses were coded. The "Full Survey Responses" page does provide the raw data from each respondent’s answers, but there is no explanation of the methodology used to determine which answers were correct and which were incorrect. This flaw is even more obvious if the data is organized to collect all of the responses to one question into two columns labeled “correct” and “incorrect.” Comparing the two columns leaves one with the impression that Ward was intentionally baiting respondents into a trap. See the chart at the end of the article for an example. The graphs show percentages but provide no measures of statistical significance, confidence intervals, or any other standard statistical information.
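To make concrete what such standard statistical information would look like, here is a minimal sketch in Python. The numbers are hypothetical, not figures from the survey: it computes the 95% Wilson score confidence interval for a proportion observed in a sample of 100, showing how wide the uncertainty is at that sample size.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion.

    The Wilson interval is a standard way to report the uncertainty
    around an observed proportion from a sample of size n.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical example (NOT a number from Ward's survey): if 40 of 100
# respondents answered a question "correctly," the sample only supports
# a claim that the true rate lies somewhere in this range.
low, high = wilson_ci(40, 100)
print(f"40/100 correct -> 95% CI: {low:.1%} to {high:.1%}")
```

With n = 100, even a cleanly coded result carries a margin of roughly nine to ten percentage points in each direction, which is exactly the kind of caveat that bare percentage graphs leave out.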
More importantly, the survey makes no attempt to establish any baseline for comparison. What percentage of pastors using modern translations correctly understand the same passages in their Bibles? What percentage of users of other translations struggle with understanding ambiguous second-person pronouns? Without comparative data, the survey's results exist in a vacuum and have very little relevance to the real world.

The primary issue is that the survey appears designed to generate predetermined conclusions rather than to find answers to genuine questions. Every aspect of the design, execution, and presentation seems calculated to maximize the appearance that KJV pastors do not understand their Bibles. For these reasons, the results should be regarded with extreme skepticism and should not be accepted as reliable evidence about biblical literacy among KJV churches.

A Specific Example: “What is a help meet?”

The aforementioned flaws are readily apparent from the very first “false friend” question in the survey. The pastors who responded to the phone call were read the text of Genesis 2:18 and then asked “What is a help meet?” That question is most naturally interpreted as a request for information about the thing being referenced—its nature, function, or identity. Most people would not view this question as a request for grammatical parsing or etymological analysis. When someone asks "What is a fire truck?" the expected answer is "a vehicle firefighters use to fight fires," not "a compound noun where fire modifies truck." Similarly, when pastors heard "What is a help meet?" after listening to Genesis 2:18, they would have reasonably understood this as asking "What is this thing God created?" or "What role does a help meet fulfill?" Their answers of "a wife," "a partner," "a companion," "a helper," or "one who completes" are entirely appropriate responses to that question as naturally understood.
Ward apparently wanted a metalinguistic explanation that help meet consists of two words where meet is an adjective meaning "suitable." But he never actually asked that question. He didn't ask "What does the word meet mean in this verse?" or "How many words is help meet?" or "Define each component of this phrase." He asked an ambiguous question that naturally invited a theological or functional answer, then marked as incorrect those pastors who gave exactly that type of answer.

The flaws in this question become especially egregious when we note that Ward created an online version of the survey with multiple-choice options. The online survey presents four choices for the question “What is a help meet:” "I don't know," "A helper suitable," "A helpful companion or partner, especially one's husband or wife," or "A servant or keeper." The difference between the phone survey and the online survey reveals a calculated deception on multiple levels.

First, Ward marked pastors incorrect on the phone survey for giving answers like "a partner," "a companion," or "a wife," yet he includes these exact words—"a helpful companion or partner, especially one's husband or wife"—as one of the multiple-choice options in the online version, presumably as the wrong answer he expects people to see and avoid. This suggests Ward knowingly designed the phone survey to generate wrong answers that he understood to be natural, reasonable interpretations of his ambiguous question. He created a situation where pastors would predictably give answers he could then portray as incorrect, while simultaneously demonstrating through the online version that he knew these answers were the obvious responses to how the question was phrased.

Second, the disparity between open-ended phone questions and multiple-choice online options creates an unconscionable double standard.
Multiple-choice testing is dramatically easier than producing answers spontaneously because test-takers can recognize correct answers they couldn't generate unprompted, eliminate obviously wrong options, and take time to compare choices without pressure. Educational testing literature consistently demonstrates that recognition tasks (multiple choice) require far less knowledge than production tasks (open-ended responses). By using the harder format for pastors and the easier format for online users, Ward engineered a comparison designed to make the pastors appear incompetent. The online version serves as an invitation for readers to take the "same" test and feel intellectually superior when they pass using multiple-choice options that the pastors never received.

Additional Concerns

The FAQ page of the KJB Study Project website claims "grading was done according to the standard set by the Oxford English Dictionary," yet the OED says that meet is a current English word with modern usage examples through 1983 and beyond, defining it as "suitable, fit, proper for some purpose or occasion." If Ward had actually consulted the OED as his grading standard, he would have discovered that meet is not a “false friend,” which, of course, would have invalidated his entire hypothesis. Either he did not actually consult the OED despite claiming to use it as his standard, or he consulted it, saw that it contradicts his premise, and proceeded anyway. Either possibility indicates dishonesty rather than mere incompetence.

The FAQ section also reveals that "survey organizers" graded the false friends responses themselves. Having the same people who designed the study grade its responses means that the very people with the strongest investment in validating Ward's predictions about false friends controlled the scoring.
The website provides no information about inter-rater reliability, blind grading procedures, or any safeguards against scorer bias. When we combine this with ambiguous questions like "What is a help meet?" that naturally invite theological rather than grammatical answers, the potential for subjective scoring becomes a near certainty.

Most damning of all is the study's explicit statement of purpose: "It is a delicate and difficult matter to persuade people that they don't understand their favored traditional Bible translation as well as they think they do." The survey's goal was persuasion, not discovery. The website claims the survey was designed "to find out whether Mark Ward's predictions about false friends were accurate, AND to inform the Bible-reading public about the existence of such words” (emphasis mine). The fact that the study was designed to inform people about the existence of the very thing it was supposedly designed to test for proves that this is not neutral research seeking truth wherever it leads; it’s a brazen attempt to fabricate evidence for a predetermined conclusion.

This isn't research; it's propaganda. Ward deliberately engineered every element to produce a predetermined outcome. The survey's reliability as evidence of biblical literacy is less than zero—it stands instead as yet another piece of evidence of Ward's willingness to manipulate, misrepresent, and publicly ridicule his opponents. No honest conclusion about KJV readability or pastoral competence can be drawn from a survey so thoroughly corrupted by malicious intent.

Get more quality analysis of Mark Ward’s claims in my new book False Friends and True Enemies, available now on Amazon at: https://www.amazon.com/dp/B0G2BDX3NQ
Bill Fortenberry is a Christian philosopher and historian in Birmingham, AL. Bill's work has been cited in several legal journals, and he has appeared as a guest on shows including The Dr. Gina Show, The Michael Hart Show, and Real Science Radio.