Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 2 (Test Design)
Today’s blog post is the second in a series featuring a 1996 article titled “Investigation of authorship in facilitated communication.” In my previous blog post, I outlined the researchers’ intent to investigate FC authorship by using “blinded” facilitators who “supported” participants in spelling out “rudimentary information” (words) during three phases of the study: Baseline 1, Facilitated Condition, and Baseline 2. I put “blinded” in quotes because the authors gave facilitators open access to the test stimuli, which negates their claim that the facilitators did not have access to (or were blinded from) those stimuli.
Image by Agence Olloweb
To recap briefly, Baseline 1 and Baseline 2 were purported to be “unfacilitated” conditions in which participants were shown target word(s) on flashcards and later asked to spell out the words without the visual reference. Their facilitators were present during the spelling session but were not allowed to physically touch the participants.
Note: While physical touch is one way facilitators influence and control letter selection, cueing can and does occur without physical touch (see An FC Primer). The Cardinal et al. study did not directly address this issue of visual and auditory cueing.
The Facilitated Condition (conducted between Baseline 1 and Baseline 2) involved spelling out a target word presented to the participants out of auditory and visual range of the facilitators. The facilitators were then allowed to “support” the participants, using touch-based FC and plastic laminated letter boards, as the participants attempted to spell out the target word(s). A recorder was also present during these sessions. Recorders (the people who presented the words to the participants and wrote down the facilitated responses) had access to the word list used in the study, as did the facilitators, and could therefore be a source of inadvertent visual or auditory cueing.
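To make the structure of the three conditions concrete, here is a minimal sketch, in Python, of how word-level scoring in a protocol like this could be tallied. Everything in it (the Trial and score_phase names, the sample words and responses) is hypothetical and mine, not taken from Cardinal et al.; the point is simply that the only measured outcome in each phase is whether the spelled response matches the target word.

```python
# Hypothetical sketch of a three-phase word-spelling protocol.
# Names and data are illustrative only, not from Cardinal et al. (1996).

from dataclasses import dataclass

@dataclass
class Trial:
    phase: str          # "baseline_1", "facilitated", or "baseline_2"
    target_word: str    # word shown to the participant on a flashcard
    response: str       # letters spelled out on the letter board

def score_phase(trials: list[Trial], phase: str) -> float:
    """Proportion of trials in one phase where the response matches the target word."""
    phase_trials = [t for t in trials if t.phase == phase]
    if not phase_trials:
        return 0.0
    correct = sum(t.response.strip().lower() == t.target_word.lower()
                  for t in phase_trials)
    return correct / len(phase_trials)

# Example data: unfacilitated, facilitated, then unfacilitated again.
trials = [
    Trial("baseline_1", "shoe", "s"),
    Trial("facilitated", "star", "star"),
    Trial("baseline_2", "ball", "b"),
]
for phase in ("baseline_1", "facilitated", "baseline_2"):
    print(phase, score_phase(trials, phase))
```

Running this prints a per-phase accuracy, which is the study’s dependent measure at its most basic: did the spelled word match the target, and in which condition?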
Cardinal claimed, in a brief email exchange I had with him in 2025, that the study was not intended to prove authorship. If true, I wonder why Cardinal et al. bothered to blind the facilitators (or, rather, claim to blind the facilitators) from the test stimuli. From a scientific perspective, the whole point of “blinding” facilitators during testing is so their behaviors during letter selection can be separated from those of the participants. To prove communication independence in FC, researchers must design tests that control for facilitator behaviors and limit their ability to influence or control letter selection before or during the testing.
Since facilitators are often unaware of the extent to which their own behaviors influence and control letter selection, one of the most reliable ways to achieve this separation is to “blind” facilitators from the test stimuli (e.g., words, pictures). This gives participants the opportunity to respond to the stimuli independently (that is, without facilitator influence), which in this study meant spelling out words presented to them on flashcards. This type of test design works particularly well when facilitators worry that blindfolds, opaque inserts in their glasses, or partitions blocking their view of the letter board, pictures, or flashcards would cause undue stress for the participants. By preventing facilitators from having access to the test stimuli or questions, researchers can accommodate facilitators’ desire to “support” their clients as usual (e.g., in a “natural” setting) while still controlling facilitator knowledge of the test stimuli.
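As a thought experiment, the control described above boils down to a rule about who is allowed to see each target word. The sketch below is, again, a hypothetical illustration (the WORD_LIST and run_blinded_trial names are mine, and the words are stand-ins), not the authors’ procedure: a blinded facilitator’s view of a trial contains everything except the stimulus, so a correct response has to originate with the participant.

```python
# Hypothetical illustration of stimulus blinding; not Cardinal et al.'s procedure.

import random

WORD_LIST = ["shoe", "ball", "star", "book", "milk"]  # stand-in stimuli

def run_blinded_trial(word_list):
    """Draw one target word and show it only to the participant."""
    target = random.choice(word_list)
    participant_view = {"flashcard": target}   # participant sees the target word
    facilitator_view = {"flashcard": None}     # a blinded facilitator sees no stimulus
    # In the Cardinal et al. study, facilitators had open access to the word list,
    # which (as argued in this series) undermines this separation of knowledge.
    return target, participant_view, facilitator_view

target, p_view, f_view = run_blinded_trial(WORD_LIST)
assert f_view["flashcard"] is None  # the blind holds only while this stays true
```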
Note: Current-day facilitators aggressively resist this type of testing; some even say it is not fair to their clients if they (the facilitators) are not aware of the test stimuli. But I would argue: if the FC-generated messages were truly independent and the participants had the requisite language and literacy skills, why would it matter whether the facilitator had access to the answers?
According to Cardinal, the facilitated condition was supposed to be a practice session for the participants. Any progress the students could (potentially) make in their ability to spell during the 6-week interim between Baseline 1 and Baseline 2 would be documented at the end of the study (i.e., during the Baseline 2 testing).
As I mentioned in my previous blog post, I’m unclear about what, exactly, the participants were practicing in the “facilitated” condition, unless the object of the lessons was for the individuals to learn how to point to the letter board on cue.
If the participants were being taught (and practicing) specific words from the word list used in the study, then it makes no sense that the facilitators would be blinded from the target words. Supposedly, the facilitators were there to support their clients in learning how to spell. How beneficial was the 6-week practice session if, for example, the participant wanted to spell “shoe” and the facilitator was physically prompting them to spell “star”? The Baseline 1 condition of this study demonstrated that the participants had very low literacy skills (at best a 20% accuracy rate, if every participant spelled 1 word out of 5, but likely lower than that), so why, in the facilitated condition, were the researchers expecting success rates to increase when nothing had changed except the introduction of a facilitator holding their wrists, elbows, shoulders, or other body parts?
Wouldn’t it make more sense, in a practice session, for the facilitator to know what target word was being spelled? Then the two could practice spelling the word(s) before the facilitator stopped providing prompts and allowed the participant to spell out the word independently. In addition, if the participants had learned to spell a series of target words independently and without physical touch by the end of the six weeks, what purpose would the facilitators serve in Baseline 2? Why would they even need to be present?
Researchers at the O.D. Heck Center in Schenectady, New York, conduct a reliably controlled message-passing test. (Image from Prisoners of Silence, 1993)
At first glance, the Cardinal et al. study shares some commonalities with the reliably controlled studies we’ve listed on our website:
Individuals participating in the study had experience with FC and were paired with facilitators with whom they were familiar (e.g., school personnel, parents).
Informed consent was obtained from parents or guardians and school officials for participation in the study. I did not see any mention of getting approval from an Institutional Review Board (IRB) or similar organization, but there appeared to be some oversight by school officials and general agreement among the educators, parents, and guardians about the purpose of the testing and its procedures. Although Cardinal et al. obtained permission from the students using their “preferred method” (i.e., facilitated communication), it’s important to note that informed consent cannot be obtained using FC alone, since the veracity of FC authorship itself is in question.
Testing was conducted in a familiar setting (e.g., classroom) and attempts were made to make the activities as “naturalistic” as possible. Even though Cardinal et al. later complained that the participants were not able to perform at expected levels during the testing, the researchers also reported that the classroom setting(s) used for the testing “tended to be the environment in which the facilitated communication user had historically and most successfully learned.” In other words, concerns about test anxiety due to an unfamiliar environment can be ruled out.
Test stimuli included 100 common words from the participants’ everyday lives that were deemed by the researchers to be familiar to all the participants. The authors would later complain (after seeing the poor test results) that the vocabulary words were, perhaps, too simple and uninteresting for the participants to spell correctly, but the words were chosen by the researchers specifically because they were familiar to the students, age-appropriate, community-based, and part of the functional curriculum presented in an inclusive school program. The authors also chose the words because they wanted to “create a rudimentary protocol that was not contaminated by student characteristics such as age or grade level.” In other words, their expectation at the outset was that even the youngest participant in the study would be able to recognize and spell the target words, especially under the facilitated condition.
Spelling out single words (or rather, copying letters in sequence from memory after seeing the word printed on a flashcard) is, in my opinion, a rather low bar, considering that anecdotal and testimonial stories by proponents often include boasts of unexpected literacy skills and sophisticated “independent” typing. Cardinal et al. mention that the participants were selected for the study because they had (unfacilitated) Baseline 1 scores of no more than one correctly spelled word in five. However, they claim that, in less structured/controlled settings, the participants were able to answer open-ended questions, carry on conversations, and participate in grade-level classes using FC (when paired with a literate facilitator). And, if the FCed participants truly understood the purpose of the testing (i.e., to prove authorship in FC) and had agreed to the testing, then facilitators’ complaints that the words were “too easy” for the participants to spell make no sense whatsoever.
Cardinal et al.’s study did not address the language comprehension or literacy skills of the participants. Participants weren’t, for example, required to use the word correctly in a sentence or indicate in any other way that they understood the meaning of the letter sequence (i.e., word) they were asked to type out. Nor was there any indication that the authors were aware that some individuals with autism are particularly good at recognizing and copying shapes (e.g., letters) without necessarily understanding what those shapes mean.
Despite sharing commonalities with reliably controlled testing procedures, however, the Cardinal et al. study deviated from rigorous test protocols by allowing facilitators open access to the test stimuli (in this case, 100 words on a target list) instead of ensuring that the facilitators remained blinded to the test stimuli throughout the entire study.
In the next installment, I will discuss further the importance of blinding facilitators from test stimuli in authorship testing and how the researchers’ decision to share test stimuli with the facilitators undermined the validity of the testing procedures.
References and Recommended Reading:
Bligh, S., & Kupperman, P. (1993). Evaluation procedure for determining the source of communication in facilitated communication accepted in a court case. Journal of Autism and Developmental Disorders, 23, 553-557. DOI: 10.1007/BF01046056
Eberlin, M., McConnachie, G., Ibel, S., & Volpe, L. (1993). Facilitated communication: A failure to replicate the phenomenon. Journal of Autism and Developmental Disorders, 23(3), 507-530. DOI: 10.1007/BF01046053
Felce, D. (1994). Facilitated communication: Results from a number of recently published evaluations. British Journal of Learning Disabilities, 22.
Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience (Science Working Group on Facilitated Communication). American Psychologist, 50(9), 750-765.
Klewe, L. (1993). Brief report: An empirical evaluation of spelling boards as a means of communication for the multi-handicapped. Journal of Autism and Developmental Disorders, 23(3), 553-557. DOI: 10.1007/BF01046057
Moore, Donovan, & Hudson. (1993). Brief report: Facilitator-suggested conversational evaluation of facilitated communication. Journal of Autism and Developmental Disorders, 23(3), 541-552. DOI: 10.1007/BF01046055
Prior, M., & Cummins, R. (1992). Questions about facilitated communication and autism. Journal of Autism and Developmental Disorders, 22(2), 331-337.
Smith & Belcher. (1993). Brief report: Facilitated communication with adults with autism. Journal of Autism and Developmental Disorders, 23(1), 175-183. DOI: 10.1007/BF01066426
Szempruch, J., & Jacobson, J. W. (1993). Evaluating facilitated communications of people with developmental disabilities. Research in Developmental Disabilities, 14(4), 253-264. DOI: 10.1016/0891-4222(93)90020-k
Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. (1993). An experimental assessment of facilitated communication. Mental Retardation, 31(1), 49-60.
Blog posts in this series (Links will be added once the blog posts are published)
Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 1 (Rudimentary Information)
Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 3 (Competing for words)
Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 4 (Facilitator Behaviors)
Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 5

