Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 3 (Competing for words)

Today’s blog post is the third in a series featuring a 1996 article titled “Investigation of authorship in facilitated communication.” I will provide links to the previous blog posts below.

To recap, Cardinal et al.’s investigation into FC authorship shares some common elements with the reliably controlled tests we have listed on our website. Among these commonalities is an awareness that, in FC authorship testing, it is important that the behaviors of the facilitators be separated somehow from the behaviors of the participants (i.e., those being subjected to FC). In fact, Cardinal et al. stated that one of their goals for the study was to develop a protocol that “controlled for variables that could threaten the study’s validity.” They specifically mentioned the need to “blind” facilitators from test stimuli.

What separates the Cardinal et al. study from reliably controlled studies of FC authorship, however, is that rather than blinding facilitators from the test stimuli, the researchers gave their facilitators open access to the 100-word list used in the trials.

A facilitator selects the answer to a math problem while the student looks away from the keyboard. (from Prisoners of Silence, 1993)

Blinding of facilitators, also known as keeping the facilitators “naïve,” involves preventing facilitators from hearing, seeing, or interacting in any way with the test stimuli (e.g., words, pictures, books, videos, or other materials). The presumption is that, when facilitators are blinded from test stimuli, their physical and emotional support of the participant during the spelling activities will not interfere with the individual’s ability to communicate his or her independent thoughts.

As Cardinal et al. alluded to (but didn’t implement in their study), controls can be put into place during authorship testing that restrict facilitators’ access to test stimuli but allow participants to receive physical support during the testing in a “naturalistic” way. If facilitators are truly prevented access to test stimuli before and during the testing, then there is no need for partitions or blindfolds in order for the testing to be conducted.

In the past, researchers have employed a variety of methods to “blind” facilitators from test stimuli, including placing a barrier between the facilitator and the participant, having the facilitator wear sunglasses with cardboard inserts to block his or her view of the letter board, and having the facilitator wear headphones. Each of these options has limitations. Even though there is no documented evidence to support the claim, proponents complain that artificial barriers prevent participants from interacting with facilitators in their customary way. They worry that the change in the testing environment (e.g., the addition of a barrier in an otherwise familiar setting) would break the trust-bond between participant and facilitator and prevent the participant from performing at expected levels (e.g., at the conversational and academic levels reported anecdotally by facilitators, parents, teachers, and other caregivers in unstructured or uncontrolled settings).

Researchers at the O.D. Heck Center in Schenectady, New York used a barrier that allowed facilitators and participants to see each other, but blinded them from test stimuli (e.g., the facilitators and participants were unable to see pictures shown to the other person). Participants were given an opportunity to become familiar with the set-up in the days leading up to the testing. See my blog post here. (Image from Prisoners of Silence, 1993)

Eyeglasses and headphones, proponents claim, pose a similar problem by altering the appearance of the facilitator.

Critics of FC note that although the intent of wearing headphones during testing is to prevent facilitators from hearing conversations between the participants and the researchers, headphones do not always block the sound adequately or prevent facilitators from (unintentionally) gaining access to (hearing or seeing) test stimuli.

Cardinal et al. attribute the failure of past controlled studies to the artificial nature of these blinding methods and to the “over-controlling” of the test environment.

In response to these concerns by proponents, researchers have devised ways to control for facilitator behaviors (e.g., blind them from test stimuli) by positioning facilitators out of visual and auditory range while the test stimuli are being presented to the participants. Cardinal et al. employed this strategy in their “facilitation condition.” Each test session consisted of five trials, one word per trial.

As an aside, I wonder how long each trial took, since the facilitator presumably had to leave the test area with each new word. And how did the participants react to these breaks between trials? The researchers did not report any adverse reactions from participants to being in a “controlled” setting.

In pro-FC literature, skeptics of FC who conduct authorship tests are characterized as overbearing and confrontational toward the participants (although there is no evidence to support this). But when proponents, such as Cardinal et al., design tests containing similar activities (e.g., the message-passing test in the “facilitated condition” of the study I’m reviewing today), these concerns seem to evaporate. Perhaps participants (the nonspeaking individuals, not the facilitators) are more adaptable and compliant than proponents give them credit for.

Regardless, as I’ve pointed out in previous blog posts, this strategy (to control facilitator behavior) only works if facilitators remain blinded to the test stimuli throughout the testing. They can’t, for example, participate in the development of test protocols or, as happened in the Cardinal study, be given open access to the word list used in the testing.

Another effective strategy to control for facilitator behaviors during authorship testing is a “distractor condition.” This condition is used by researchers to determine what happens if the facilitator and participant are shown two separate pieces of information. For example, the participant (without the facilitator’s knowledge) is shown a picture of a shoe and the facilitator (without the participant’s knowledge) is shown a picture of a hat. If the communications were independent, the FC-generated response would be “shoe”—or what the participant saw. But, in all reliably controlled tests to date, when distractor conditions are implemented, the responses during this activity are based on information provided to the facilitator (in this example, “hat”) and not what the participant saw (in this example “shoe”).
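To make the logic of a distractor trial concrete, here is a minimal sketch in Python. The trial data and function names are hypothetical, invented purely for illustration (they are not from Cardinal et al. or any other study). It classifies each typed response by whose stimulus it matches:

```python
# Hypothetical sketch of scoring distractor-condition trials.
# Each trial records the stimulus shown to the participant, the
# stimulus shown to the facilitator, and the FC-typed response.

from collections import Counter

# Invented example data for illustration only.
trials = [
    {"participant_saw": "shoe", "facilitator_saw": "hat", "typed": "hat"},
    {"participant_saw": "dog",  "facilitator_saw": "cup", "typed": "cup"},
    {"participant_saw": "key",  "facilitator_saw": "arm", "typed": "key"},
]

def score_trial(trial):
    """Classify whose stimulus the typed response matches."""
    if trial["typed"] == trial["participant_saw"]:
        return "participant"   # consistent with independent authorship
    if trial["typed"] == trial["facilitator_saw"]:
        return "facilitator"   # consistent with facilitator control
    return "neither"

print(Counter(score_trial(t) for t in trials))
# A tally dominated by "facilitator" matches points to facilitator
# authorship, which is the pattern reliably controlled studies report.
```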

I find it interesting that Cardinal et al. chose not to include in their test design a “distractor condition” that prior researchers have found successful in detecting facilitator control over letter selection. Facilitators may not intend to influence letter selection while supporting their clients, but Wegner, Fuller, and Sparrow’s article “Clever Hands: Uncontrolled Intelligence in Facilitated Communication” alerts us to the fact that facilitators can’t help but guess at the correct answers (it’s human nature).

By giving facilitators access to the word list used in Cardinal et al.’s authorship testing, the researchers likely increased facilitators’ (human) impulse to guess at the target word(s) when “supporting” participants in spelling out words, whether the facilitators were aware of this impulse or not. This may not have been an intended consequence of showing the word list to the facilitators, but, even without the distractor condition, we can still infer facilitator guesses from the list of errors the researchers provided in the study. I’ll discuss what the facilitated errors show us about facilitator influence in an upcoming blog post.

Image by Maksim Larin

While explaining their reason for omitting a distractor condition in their study, Cardinal et al. (unconsciously?) seemed to admit an awareness that facilitators can and do control letter selection. They wrote:

We did not use a distractor condition (showing the facilitator a different word than the facilitated communication user), and thus, we were less concerned about facilitator influence because the facilitator was not competing with the facilitated communication user to produce the correct word. (p. 233, emphasis mine)

For me, this comment raises a couple of questions:

1) If proponents believe that, in some circumstances, facilitators compete with their clients to produce words, then where are the safeguards to prevent this from happening? Surely this “competition” exists outside the testing situation and in unstructured settings. And, surely proponents don’t want their clients’ words to be usurped (however inadvertently) by their facilitators. (Or do they?)

2) In any given FC session, how are we to know which facilitated words have been “overridden” by the thoughts of the facilitators (in a “competition” to produce correct words) and which ones represent the authentic thoughts of the individuals being facilitated? To me, this represents a major flaw in the technique that has yet to be addressed by proponents.

Despite claiming in their study that they “blinded” facilitators from the test stimuli, Cardinal et al. gave facilitators open access to the word list used in the testing. In doing so, the researchers essentially sabotaged their own stated goal of “controlling for variables” and keeping facilitators “blind.” As I’ll explain, it makes no difference that the researchers “randomized” the words used in each of the trials.


Word list used in the Cardinal et al. study


Cardinal et al. posited that facilitators had only a 1-in-100 chance of guessing words from the target list. For each session in each of the three conditions (Baseline 1, Facilitated Condition, and Baseline 2), participants were asked to spell five target words using the protocol outlined in previous blog posts (links below). Cardinal et al. argued that the word pool was too large for facilitators to guess the words. However, because facilitators had open access to and knowledge of the words on the list, these odds narrowed as soon as any letter was selected on the letter board, even if the facilitators were “blinded” from knowing which of the 100 words were targeted in any given trial. (See Mostert, 2001)

For example, if the first letter selected (either on purpose or accidentally) was an “i,” “q,” “u,” “v,” “x,” “y,” or “z,” then the facilitator (having seen the word list ahead of time) would likely know to pull the participant’s hand back or “reset” the letter board, since there are no words on the list beginning with those letters.

In addition, the word list included only two options beginning with the letter “a” (arm, apple) and two beginning with the letter “g” (girl, green). Selecting the letter “a” (even mistakenly) as the initial letter narrows a guess at the answer to a 50/50 chance. Four words on the list (jump, key, leg, orange) are the only entries beginning with “j,” “k,” “l,” and “o,” respectively, so selecting the letter “k” as an initial letter identifies the target word with certainty (provided the facilitator remembers the words on the list). Even with an initial letter of “b,” the chance of guessing the correct word narrows to 1 in 16, and a well-placed guess at the second letter (“a,” “e,” “i,” “l,” “r,” or “u”) narrows the options even further. Selecting the initial letter “r” narrows the options from the list to 1 in 3 (read, red, run), and so forth.
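To illustrate how quickly the candidate pool shrinks, here is a minimal sketch in Python. It uses only the handful of words named above as a stand-in for the actual 100-word list (an assumption for illustration; the real list would narrow more slowly at first but by the same mechanism), counting how many candidates remain once a prefix has been typed:

```python
# Hypothetical sketch: how knowing the word list narrows the odds
# as letters are selected. This short list is only the words named
# above, standing in for the study's actual 100-word list.

word_list = ["arm", "apple", "girl", "green", "jump",
             "key", "leg", "orange", "read", "red", "run"]

def candidates(prefix, words):
    """Return the words still consistent with the typed prefix."""
    return [w for w in words if w.startswith(prefix)]

for prefix in ["a", "k", "r", "re"]:
    remaining = candidates(prefix, word_list)
    print(f"after typing {prefix!r}: {len(remaining)} candidate(s): {remaining}")

# after typing 'a': 2 candidate(s): ['arm', 'apple']       -> a 50/50 guess
# after typing 'k': 1 candidate(s): ['key']                -> certainty
# after typing 'r': 3 candidate(s): ['read', 'red', 'run'] -> 1 in 3
# after typing 're': 2 candidate(s): ['read', 'red']       -> 50/50
```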

I’m sure the word list can be categorized in other ways, but my point is that, because the word list comprises terms familiar to the facilitators, even a cursory glance at it gives facilitators a (perhaps unconscious) advantage when it comes to guessing which words from the list might be targeted.

I’m not saying that facilitators purposely or even consciously sought to guess which words from the list were being targeted. I expect that the facilitators were sincere in their belief that they were assisting participants without intending to influence letter selection. Even if the facilitators were vigilant and actively seeking to “clear their minds” (which is difficult to do), they could not be fully aware of their own thoughts and actions a hundred percent of the time, which is what it would take for them to never influence or control letter selection. (And, even with hyper-awareness, facilitators may still be susceptible to cueing through the ideomotor effect, or small, non-conscious muscle movements).

The only reliable way to control for inadvertent facilitator influence over letter selection, then, is to take away the (very human) compulsion to “guess” at the correct words or “help” the clients to the correct answers by blinding facilitators from test stimuli before and during the testing. That’s why, despite Cardinal et al.’s stated intent to keep facilitators “blind,” their decision to give facilitators open access to test stimuli was a major error.

In my next blog post, we’ll continue to explore facilitator behaviors that can affect letter selection in FC-generated messaging.


References and Recommended Reading:

Kezuka, E. (1997). The role of touch in facilitated communication. Journal of Autism and Developmental Disorders, 27, 571-593. DOI: 10.1023/A:1025882127478

Mostert, M. (2001). Facilitated communication since 1995: A review of published studies. Journal of Autism and Developmental Disorders, 31(3), 287-313. DOI: 10.1023/A:1010795219886

Spitz, H. (1997). Nonconscious movements: From mystical messages to facilitated communication. Routledge. ISBN 978-0805825633

Wegner, D. M., Fuller, V. A., & Sparrow, B. (2003). Clever hands: Uncontrolled intelligence in facilitated communication. Journal of Personality and Social Psychology, 85(1), 5-19. DOI: 10.1037/0022-3514.85.1.5


Blog posts in this series (Links will be added once the blog posts are published)

Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 1 (Rudimentary Information)

Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 2 (Test Design)

Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 4 (Facilitator Behaviors)

Does Cardinal, Hanson, and Wakeham’s 1996 Study Prove Authorship in FC? Part 5
