
    Case series investigations in cognitive neuropsychology

    Myrna F. Schwartz1 and Gary S. Dell2

1Moss Rehabilitation Research Institute, Elkins Park, PA, USA
2Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA

Case series methodology involves the systematic assessment of a sample of related patients, with the goal of understanding how and why they differ from one another. This method has become increasingly important in cognitive neuropsychology, which has long been identified with single-subject research. We review case series studies dealing with impaired semantic memory, reading, and language production and draw attention to the affinity of this methodology for testing theories that are expressed as computational models and for addressing questions about neuroanatomy. It is concluded that case series methods usefully complement single-subject techniques.

    Keywords: Aphasia; Case series; Lexical access; Semantic memory; Computational models.

To many in the broader cognitive neuroscience community, cognitive neuropsychology (cn) is identified with a rigorous single-subject methodology. Leaders in the field have long promoted this view. For example, in their introduction to the 20th anniversary issue of Cognitive Neuropsychology, Alfonso Caramazza and Max Coltheart state, "It is deeply characteristic of cognitive neuropsychology that it studies symptoms rather than syndromes and carries out single case studies rather than group studies" (Caramazza & Coltheart, 2006, p. 5). This pronouncement is supported by the special issue articles, which summarize the impressive contributions of cn research to many different domains of cognition; in virtually every domain, single-subject research features prominently.

Yet, to many who practise cognitive neuropsychology, the field's identification with single-subject research is overstated, and the rhetoric on the topic is unnecessarily contentious. In journals devoted to cognitive psychology, neuropsychology, and cognitive neuroscience, one can easily find high-profile cn papers that report data from sizeable collections of patients. Just like single-subject studies, those testing several patients are carried out to draw inferences about the functional organization of cognition. The most successful of these, in our view, have used the case series approach. Our goal in this article is to show why, as an alternative and complement to the study of individual cases, case series methodology is important to cn and will probably contribute even more to the cn of the future.

Correspondence should be addressed to Myrna F. Schwartz, Moss Rehabilitation Research Institute, 50 Township Line Rd., Elkins Park, PA 19027, USA (Email: [email protected]).

We acknowledge with gratitude the support of NIH/NIDCD (National Institutes of Health/National Institute on Deafness and Other Communication Disorders) Grant RO1DC000191-28 to M.F.S. and helpful suggestions from Dan Mirman and Simon Fischer-Baum on an earlier version of the manuscript.

© 2011 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business
http://www.psypress.com/cogneuropsychology DOI: 10.1080/02643294.2011.574111

COGNITIVE NEUROPSYCHOLOGY, 2010, 27 (6), 477–494



INTRODUCTION TO THE CASE SERIES DESIGN

In clinical medicine and epidemiology, the case series design is a recognized alternative to cohort and case-control designs. Participants are followed for a period of time with the same, relevant data recorded from each. There is no control group, and data from the sample are not aggregated. Instead, the target event (e.g., disease onset) is modelled in relation to patient, time, and/or treatment variables using regression techniques (e.g., Farrington, Nash, & Miller, 1996).

The situation is similar in cn case series investigations. For the target cognitive ability, one studies a sample of brain-injured patients who are expected to vary on that ability, obtaining uniform performance measures on the target and potentially related clinical, anatomical, and performance measures and analysing how the measures covary. The goal of the analysis is to understand the cognitive mechanisms responsible for the covariance, and this often involves developing or testing a specific statistical or processing model. The patient sample may be defined broadly (e.g., chronic aphasia; Schwartz, Dell, Martin, Gahl, & Sobel, 2006) or narrowly (e.g., dysgraphic individuals with lexical and sublexical spelling deficits; Fischer-Baum & Rapp, 2008). As well, the assessment given to each patient may vary in its breadth and depth. Such methodological decisions are determined by the study question, the maturity of relevant theory, and the usual practical trade-offs.

Case series investigations in cn should be distinguished from the many neuropsychological studies in which patient group means are of primary interest. These kinds of studies treat within-group variability on some dependent variable as a source of noise, effectively removing that variability as an object of consideration. A case series, in contrast, preserves and uses the individual data by characterizing the distribution of scores and, most importantly, what factors covary with the scores. Of course, because there are several patients in a case series, one can group them in various ways for purposes of description and even for statistical inference. The grouping can be done on an a priori basis (Jefferies & Lambon Ralph, 2006). Alternatively, the groups could be assembled based on scores from the case series data itself, provided that inferential tests are pursued with great caution given the potential for contamination from peeking at the data. In either event, grouping can help the researcher see the patterns in a case series.

Case series methodology also should be distinguished from single- or multiple-case methods. Multiple-case studies are concatenations of two or more related case studies. Key differences can be illustrated by considering an early case series report authored by Patterson and Hodges (1992). The report featured six individuals with a history of semantic memory loss, as demonstrated on tests of naming and comprehension. Noted at the outset was the report's unusual format: "it is neither a group study, nor a typical single-case study, nor even a series of complete single-case studies. Rather it represents a focus on one specific aspect of six single-case studies" (p. 1026). That focus was on how the reading of regular and exception words was impacted by two independent variables: severity of the semantic deficit and word frequency. Their analysis showed that lexical frequency had a dramatic impact on the subjects' reading of exception words but not regular words, and that the exception-regular difference was greater for those with severe semantic loss than for the milder subjects.

This proto case series lacked some elements that are now considered important: a sample size suitable for identifying and parameterizing linear and more complex trends in the data (usually an n of 10 or more) and uniform data gathering on each patient. Nevertheless, it well exemplifies that the essence of a cn case series is the analysis of patient variation in relation to other, theoretically relevant variables. Consistent with this goal, and in contradistinction to the multiple-case study, the cognitive analysis tends to be circumscribed. Typically, tests are administered or reported only if they bear on the selection criteria or the study hypotheses; that is, there is not a systematic attempt to fully characterize the deficits of each individual. There is no principled reason why case series analyses cannot be combined with intensive cognitive characterization of each patient. In practice, though, such characterization tends to be reserved for patients who deviate from the group trends, in order to track down the reason for the deviation (e.g., Schwartz et al., 2006).

In a notable departure from cn orthodoxy, case series studies often define the sample clinically (e.g., poststroke aphasia; anomia; semantic dementia) and not purely along cognitive lines. The traditional objection to clinical classifications is, of course, that they invite unhelpful heterogeneity with respect to the cognitive ability in question. Because case series have as their goal to identify and explain variation, some degree of heterogeneity is deemed welcome, and even necessary. As a general rule, one can say that proponents of cn case series are willing to relax the restrictions on heterogeneity in the exploration of theoretically interesting patient variation. We have more to say about this in subsequent sections.

Prior discussions of the use of case series methods in cn have tended to draw the contrast with the single-subject approach differently from the way we do here. For example, Patterson and Plaut (2009) associate single-subject methods with the search for functional dissociations that support modular models of cognition. They associate case series methods with the search for functional associations that support parallel, distributed processing models. This division reflects the historical record but does not do justice to the flexibility of either approach. For example, as discussed below, Dell and colleagues used case series methods to uncover evidence of both associations and dissociations in aphasic naming performance, evidence that was used to support a processing model that combines elements of modularity and interactivity. Similarly with the single-subject approach, the evidence used to support processing accounts need not be limited to within-subject dissociations. For example, Warrington (1975) first demonstrated the now-familiar profile of semantic memory dissolution in three intensively studied patients with progressive cortical degeneration. Each patient was shown to exhibit hierarchically structured comprehension loss for both words and pictures (an association) in the context of relatively preserved episodic memory and nonsemantic language functions (a dissociation).

    Why case series?

Our central claim is that case series have an important function in cn: that there is a good reason to test a set of individuals on a common set of measures and analyse the data as a group. So, let us start with the standard objection to group studies. When comparing two or more patient groups, it is perilous to assume that members of a group have similar deficits, that is, that the group is homogeneous. The mean performance of the group may characterize few or none of its members (e.g., McCloskey & Caramazza, 1988). The traditional solution to this concern is to adopt the single-subject approach. Given this, McCloskey (1993, p. 725) then asks a cogent question:

One could conduct patient-group studies in which each patient was tested extensively on a substantial number of tasks and results were then analyzed at both individual and group levels. However, the following question would then arise: Given the acknowledged need to consider individual patients' performance patterns, what function would be served by aggregating the data over subjects . . . ?

As indicated above, for case series studies, the function of analysing the patients together is to identify theoretically important quantitative trends in the sample, particularly those in which some model makes precise predictions about the nature of those trends. Such trends simply cannot be determined unless patients are analysed together.

Although the kind of grouping that is carried out in case series studies can lead to valuable findings, we do not say that these studies are immune from the problems of group studies. On the contrary, any patient sample will lack homogeneity, and that will compromise interpretation of any cross-sample statistic, not just claims about sample means. For example, any group quantitative trend, such as a regression slope relating one variable to another, may vary among patients and, in so doing, may reflect multiple factors. This is especially likely when the case series sample is broadly defined, and the cognitive assessment is not extensive. We consider methods of dealing with this concern below, in "Heterogeneity in Case Series".

It is our firm belief that the case series approach is neither more nor less suited to the goals of cn than other methods. One should evaluate each case series for whether it succeeds in advancing cognitive theory based on the usual criteria: Are the measures appropriate to the question being asked and the patients being studied? Have they generated data that are valid and reliable? Have important confounds been addressed? Do the results advance knowledge about the nature of human cognition? In the following sections, we illustrate case series research in cn by describing particular studies for which we have a good understanding of their motivations and methods, either because we were involved in the study or because we are familiar with the research domain. These can serve as examples of the kinds of research questions that are particularly suited to a case series approach and highlight methodological issues that may arise when this approach is adopted.

IDENTIFYING TRENDS AND TESTING MODELS THROUGH CASE SERIES ANALYSIS

Severity-related interactions in lexical access deficits in aphasia

Speech-error evidence from normal and aphasic speakers demonstrates that some errors index difficulty in mapping semantic representations onto lexical representations, whereas others reveal problems in retrieving and sequencing phonological segments. One can take a semantic error as indicative of the former mechanism and a nonword error as indicative of the latter. The literature reports clear examples of patients in whom nearly every error was semantic and patients in whom most errors were nonwords, suggesting a qualitative distinction between the cognitive processes responsible for the two error types (see Ruml & Caramazza, 2000, for review). In a case series, though, Dell, Schwartz, Martin, Saffran, and Gagnon (1997) presented evidence that differences in semantic and nonword-error rates were, to a large extent, related to overall severity of the naming impairment. In fact, the associations between naming correctness and semantic-error proportion and between correctness and nonword-error proportion revealed that the mild patients' errors were generally semantic, while the severe patients produced many nonword responses. Figure 1 shows the severity functions for the two error types as second-degree polynomials based on the data from the 21 patients in this case series. Note that severity in the figure runs from high (few correct) to low (many correct).

Semantic errors occur in the relatively error-free normal pattern and increase with severity up to a point and then decrease. Nonword errors are rare in mild cases, but increase dramatically in severe cases, reaching their maximum in the most severe cases. This analysis of patient performance across the severity range suggests an alternative to the hypothesis that semantic and nonword errors arise from qualitatively distinct mechanisms. Dell et al. (1997) proposed that what underlies variation in both kinds of error is quantitative variation in global processing mechanisms that interact with the structure of the lexical access system to promote semantic errors when it is mildly damaged and nonword errors when damage is more severe. One hypothesized mechanism was the rate of activation decay; another was the rate at which activation spreads.

Subsequently, these researchers modified the idea of global processing deficits because of findings from a larger case series that had the power to reveal systematic variations within the severity-based patterns exemplified in Figure 1 (Schwartz et al., 2006). So, on top of the severity-based patterns discovered in the earlier case series study, the larger study found, for example, patients with poor naming, but many fewer nonword errors than would be expected just from the severity functions. Later, we expand on the specific contribution of the Schwartz et al. case series. For now, we simply draw attention to the goals and the methods of the case series in Dell et al. (1997): It aimed to understand lexical access by examining how the pattern of different naming errors (semantic and nonwords) varies among aphasic subjects, and particularly how the error pattern changes with severity. It did so by identifying quantitative trends in the data using regression, specifically a nonlinear regression model, as it became clear that the relations between some error types and severity were often complex, such as that shown with the semantic errors in Figure 1.
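To illustrate the kind of nonlinear regression involved, here is a minimal sketch, with invented data, of fitting a second-degree polynomial (the curve type used in Figure 1) to per-patient error proportions as a function of naming accuracy; it is not the authors' actual analysis.

```python
import numpy as np

# Hypothetical case series: proportion correct in naming (severity proxy)
# and proportion of semantic errors for each patient. Values are invented.
prop_correct = np.array([0.95, 0.88, 0.80, 0.71, 0.62, 0.50, 0.41, 0.30, 0.22, 0.10])
semantic_err = np.array([0.02, 0.04, 0.06, 0.08, 0.09, 0.10, 0.09, 0.07, 0.05, 0.03])

# Fit a second-degree polynomial, as in the severity curves of Figure 1.
coefs = np.polyfit(prop_correct, semantic_err, deg=2)

# Evaluate the fitted curve across the severity range.
grid = np.linspace(prop_correct.min(), prop_correct.max(), 50)
fitted = np.polyval(coefs, grid)

print("quadratic coefficients (a, b, c):", np.round(coefs, 3))
print("peak of fitted curve at proportion correct =", round(float(grid[np.argmax(fitted)]), 2))
```

A nonmonotonic (rise-then-fall) trend like this is exactly what a linear-only analysis would miss.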

Statistical versus processing models in case series

We have said that the goal of a case series is to explain the variation in the primary measures taken from a patient sample in order to draw inferences about cognitive functions. This explanation typically takes the form of a quantitative model. Often, as illustrated in the previous section, that model is a statistical model, such as regression. One or more dependent variables in the case series are predicted from other measures or patient characteristics by constructing prediction equations. The form and coefficients of these equations are then interpreted in the light of theory. For example, this has been the preferred strategy for determining how the naming performance (accuracy and/or error scores) of a neuropsychological population is influenced by target properties such as frequency, length, and age of acquisition (e.g., Kittredge, Dell, Verkuilen, & Schwartz, 2008; Lambon Ralph, Graham, Ellis, & Hodges, 1998; Nickels, 1995; Nickels & Howard, 1994, 1995, 2004; Woollams, Cooper-Pye, Hodges, & Patterson, 2008). Care should be taken, though, in using statistical techniques that only consider linear relations among the measures (e.g., linear regression, principal components analysis). The example in Figure 1 comparing semantic and nonword naming errors illustrates the need for a consideration of nonlinear and especially nonmonotonic relationships between measures. The ability to detect such relationships is a particular strength of large case series studies (e.g., Woollams et al., 2008). But curvilinear associations can easily be missed if researchers do not consider the possibility.1

Figure 1. Severity curves based on the 21 modelled patients in Dell, Schwartz, Martin, Saffran, and Gagnon (1997). Y-axes show error rates as proportions of all responses. Both curves are second-degree polynomials.

Ultimately, the best tool for understanding the complex relations among the measures taken in a case series may be an explicit processing model rather than a statistical model such as regression. This is especially important for case series in cn, which uses the analysis of deficits to draw inferences about unimpaired processing (see Dell & Caramazza, 2008; Goldrick, 2008, for discussion of models in cn).

A processing model can be computationally implemented to mimic the operations that underlie the tasks being performed. The best models will directly generate analogues to the real data. So, a model of deep dyslexia will sometimes make, say, a semantic error when attempting to read a concrete word (e.g., Plaut & Shallice, 1993). Moreover, different versions of the model can be set up to represent relevant between-patient differences, for example, in lesion severity, distribution, or premorbid cognitive capacity. Such a model is then poised to simulate the functional relations between the patient measures. We exemplify the coupling of case series methods and computational processing models with published work in the areas of semantic memory, reading, and lexical access.

    Semantic memory

The influential distributed-plus-hub theory of semantic memory (Lambon Ralph, Sage, Jones, & Mayberry, 2010; Patterson, Nestor, & Rogers, 2007; Rogers, Lambon Ralph, Garrard et al., 2004) derives support from three empirical pillars:

1. Case series data from patients diagnosed with semantic dementia (SD; e.g., Bozeat, Lambon Ralph, Patterson, Garrard, & Hodges, 2000) or with the fluent form of primary progressive aphasia (Adlam et al., 2006) reveal common patterns of covariation within and across verbal and nonverbal semantic tasks. The hub theory takes this commonality as evidence for amodal semantic representations and their vulnerability to disruption.

2. Studies that correlate the multimodal semantic symptoms of SD with in vivo measurement of brain atrophy and blood flow have highlighted the importance of the inferior and lateral aspects of the anterior temporal lobes (Mummery et al., 2000; Mummery, Patterson, Wise, Vandenbergh, Price, & Hodges, 1999; Rogers et al., 2006) and have led to the view that this is the hub that houses amodal semantic representations.

3. A parallel, distributed processing (PDP) implementation of the hub theory has been developed (Rogers, Lambon Ralph, Garrard et al., 2004), within which the hub functions to abstract the similarity structure across the multiple, modality-specific knowledge representations with which it interacts. We now describe the Rogers, Lambon Ralph, Garrard, et al. (2004) semantic memory model and show how its development and evaluation are inextricably linked with case series data.

The model's architecture features one pool of localist units representing verbal descriptors (names and perceptual, functional, and encyclopaedic propositions), another pool representing visual features, and recurrent semantic (hidden) units that interconnect with the verbal and visual units via bidirectional connections whose weights were set by learning. The recurrent semantic units constitute the amodal hub.

    1A related concern about nonlinearity arises because most neuropsychological tests yield a percentage valuethat is, a value on ascale from zero to a logical maximum such as 100%. Analyses that treat percentage score differences as equivalent across the scale maybe misleading. Logistic regression, in which percentages are transformed into the logit of the response typesthat is, ln[p(correct) /p(incorrect)]is considered an appropriate remedy for this problem in percentage/proportion data (Jaeger, 2008), and we rec-ommend that these methods be adopted where appropriate in neuropsychological case series analysis (Dilkina, McClelland, &Plaut, 2008). Moreover, with the advent of new software, logistic regression can now be done in a multilevelled manner, so thateach measurement trial can be associated with a specific participant and item, and both participants and items can be treated asrandom effects (e.g., Nozari, Kittredge, Dell, & Schwartz, 2010).
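As a concrete illustration of the logit-based approach recommended in this footnote, the sketch below fits an ordinary single-level logistic regression to invented trial-level naming data using statsmodels; a full multilevel analysis with participants and items as random effects would require a mixed-effects logistic model, which is not shown here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Invented trial-level data: each row is one naming attempt.
n = 400
log_freq = rng.normal(0.0, 1.0, n)                    # (log) lexical frequency of the target
p_correct = 1 / (1 + np.exp(-(0.5 + 0.8 * log_freq))) # true generating model, for illustration
df = pd.DataFrame({
    "correct": rng.binomial(1, p_correct),            # 1 = correct, 0 = error
    "log_freq": log_freq,
})

# Logistic regression models ln[p(correct)/p(incorrect)] as a linear function
# of frequency, avoiding the pitfalls of analysing raw percentage scores.
fit = smf.logit("correct ~ log_freq", data=df).fit(disp=False)
print(fit.params)
```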


A training environment was constructed to capture the similarity structure of items identified in published attribute-norming experiments (Garrard, Lambon Ralph, Hodges, & Patterson, 2001; McRae, de Sa, & Seidenberg, 1997). The model was trained on visual and verbal patterns until it learned the associations among the names, appearances, and descriptions of items in its virtual environment. Items were assigned unique names that were either general (e.g., "bird", "tool") or specific ("robin", "drill"); unique descriptors were also either general (e.g., "living", "nonliving") or more specific ("mammal", "tool", "fruit").

In the test phase, simulations were run on analogues of four semantic memory tests used with actual SD patients. Picture naming was tested in the model by inputting a prototypical visual pattern (representing a pictured target) and requiring as output the unique name (general or specific). Word and picture sorting were tested by inputting a name or a visual pattern and requiring as output the correct verbal descriptor (general or specific). For word-picture matching, the model was presented with a name followed by a series of visual inputs corresponding to the target and distractor pictures. The model's choice was defined as the visual input that generated within the semantic units an internal state most similar to that produced by the target name. Finally, to simulate drawing to command, the model was given a name as input, and its response was defined by the resulting pattern of activity across the visual units.
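The "most similar internal state" decision rule can be made concrete with a small sketch: given the semantic-layer states evoked by the name and by each candidate picture (all vectors invented here), the choice is the picture whose state is most similar to the name's state, with cosine similarity standing in for whatever similarity measure the published model actually used.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two activation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)

# Invented semantic-unit states (one vector per stimulus).
name_state = rng.normal(size=20)                          # state evoked by the target name
picture_states = {                                        # states evoked by candidate pictures
    "target": name_state + rng.normal(scale=0.3, size=20),
    "distractor_1": rng.normal(size=20),
    "distractor_2": rng.normal(size=20),
}

# The model's choice: the picture whose internal state best matches the name's state.
choice = max(picture_states, key=lambda k: cosine(name_state, picture_states[k]))
print("chosen picture:", choice)
```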

On each task, Rogers, Lambon Ralph, Garrard, et al. (2004) compared the model's performance under varying degrees of impairment severity (simulated by removing an increasing proportion of the semantic weights) with the data from actual patients, which were similarly described in relation to severity. In this way, the manipulation of severity provides the link between the modelling and the case series data. It is the hypothesized source of variance in the simulated and the actual behaviour. For example, as naming accuracy decreased (i.e., severity increased), the proportion of semantically related substitutions declined, while the proportion of superordinate responses and omissions increased. In addition, sorting performance declined with increasing degree of semantic impairment (as measured by word-picture matching), with specific-level sorts (e.g., animals and tools) suffering a steeper decline than more general-level sorts (living and nonliving). These severity-related interactions were shown to be comparable in the model and the patients. Model and patients also performed similarly in other theoretically relevant respects. In naming, for example, the likelihood of omissions was greater for artefacts than animals, whereas semantic and superordinate errors were more likely with animals. In drawing, model and patients were more likely to omit distinctive features of objects, whereas they were more likely to inappropriately add features that were widely shared.
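A minimal sketch of the severity manipulation as described, zeroing a randomly chosen proportion of a weight matrix; the matrix here is random and merely stands in for the trained semantic weights.

```python
import numpy as np

def lesion_weights(weights, proportion, rng):
    """Return a copy of `weights` with a random `proportion` of entries set to zero."""
    damaged = weights.copy()
    mask = rng.random(damaged.shape) < proportion
    damaged[mask] = 0.0
    return damaged

rng = np.random.default_rng(2)
semantic_weights = rng.normal(size=(50, 50))   # stand-in for trained semantic weights

for severity in (0.1, 0.3, 0.5, 0.7):
    damaged = lesion_weights(semantic_weights, severity, rng)
    removed = np.mean(damaged == 0.0)
    print(f"proportion of weights removed: {removed:.2f} (target {severity})")
```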

The model's simulations prompted Rogers, Lambon Ralph, Garrard, et al. (2004) to conclude that the functional explanation for these and other features of semantic dementia lies in the structure that the semantic system forms through learning, coupled with the dynamics of processing as knowledge degrades. Reflecting the structure in the environment, the model develops a dense semantic space (or neighbourhood) for animals, whereas the space for artefacts is sparse. A dense semantic neighbourhood affords more opportunities for the system to be captured by the wrong attractor, which explains why, in naming, semantic errors are more likely for animals, whereas omission errors are more likely for artefacts. Generally speaking, as the semantic system degrades, it has a harder time distinguishing among concepts. In a mildly damaged system, the network may settle into an inappropriate attractor, from which it might be unable to produce distinguishing information (that zebras have stripes) but might succeed in producing information of a more general nature (yielding errors such as "zebra" → "horse" or "zebra" → "animal"). With greater damage, the only information available might be information that is common to many items in the domain (superordinate response) or to no known entity (omission). Later empirical work has supported this functional account by showing that the typicality of an item within its semantic category strongly impacts SD patients' performance on a variety of tasks, not all of which have an obvious semantic component (Patterson et al., 2006; Rogers, Lambon Ralph, Hodges, & Patterson, 2004; Woollams et al., 2008).

    Surface dyslexia

The best known of the nonsemantic typicality effects in semantic dementia is surface dyslexia, which features the tendency to regularize exception words in reading aloud (e.g., "pint" pronounced as /pɪnt/). Earlier we mentioned Patterson and Hodges's (1992) proto case series, which first posited a causal relation between loss of meaning and impaired exception word reading. The argument has been bolstered with case series data from larger SD samples (Graham, Patterson, & Hodges, 1995; Patterson et al., 2006; Woollams, Lambon Ralph, Plaut, & Patterson, 2007). In the Woollams et al. (2007) study, data from 51 patients (100 data points, since some were tested at more than one point in time) confirmed earlier reports of a quantitative relation between the semantic and reading deficits, with the impressive finding that a composite semantic measure explained half of the variance in exception word reading. At the same time, the study confirmed prior evidence that there are occasional patients who violate this association (Blazely, Coltheart, & Casey, 2005; Cipolotti & Warrington, 1995; Schwartz, Saffran, & Marin, 1980), here exemplified by 3 patients who showed unexpectedly good reading, given their semantic scores. Pointing to the fact that all 3 did develop surface dyslexia as their SD progressed, the authors rejected the view that the occasional dissociation of reading performance and semantic status indicates that semantics and word pronunciation are functionally independent (Blazely et al., 2005; Coltheart, 2006; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). Instead, they argued that the overall association between semantics and reading and the occasional dissociations between them could be explained in a single functional account that incorporates semantic mediation of word pronunciation and individual differences in the degree of reliance on this mediating input.

The proposed functional account is based on the PDP triangle model (orthography, phonology, semantics) and its phonological-semantic division of labour. This refers to the fact that as the model is trained to pronounce words, the phonological pathway (orthography to phonology) becomes specialized for reading words with frequent and/or consistent mappings (i.e., regular words), whereas the semantic pathway (semantics to phonology) assumes importance for the more arbitrary mappings that characterize exception words. Simulating surface dyslexia by diminishing the strength of the semantic contribution negatively impacts the model's performance with exception words, particularly those of low frequency (Harm & Seidenberg, 2004; Plaut, McClelland, Seidenberg, & Patterson, 1996).

To explain the apparent independence of semantics and reading in some patients, Plaut (1997) introduced a variant of the triangle model that simulated premorbid individual differences in how much semantic activation is supplied during the training phase. Woollams et al. (2007) implemented a similar model to achieve multiple instantiations of an intact network with individual differences in semantic reliance. The multiple instantiations were then aggregated to investigate the impact of lesioning the semantic input to phonology, which they achieved by progressively decreasing activation and adding noise. The model's performance was compared to the empirical, case series data, which were analysed cross-sectionally and longitudinally. In both the models and patients, slopes of the function relating lesion severity/semantic loss to reading accuracy exceeded zero for all types of words, but not for nonwords. Consistent with surface dyslexia, slopes were steeper for low- than for high-frequency exception words and steeper for exception words than for regular words. Outliers, relative to best fit lines, were rare. Longitudinal analysis of the patient data revealed similar effects of worsening semantic status. Furthermore, as we noted earlier, the 3 patients who, on first testing, qualified as having unimpaired reading in the presence of semantic loss all developed surface dyslexia as their semantic dementia worsened. Overall, there was an impressive match between the performance of the patients and the individual differences triangle model.

In a subsequent development, Dilkina et al. (2008) merged features of the individual differences triangle model (Plaut, 1997; Woollams et al., 2007) with the Rogers, Lambon Ralph, Garrard, et al. (2004) semantic memory model described earlier, in order to simulate both the reading and picture naming performance of 5 patients (Figure 2). Individual differences were incorporated into the model, representing premorbid differences in the effectiveness of the direct (orthography to phonology) reading pathway and postmorbid differences in spatial distribution of the brain atrophy. The resulting model gave a good account of the positive, curvilinear relation between reading and picture naming that was present in the data. Critically, it also explained the behaviour of those patients whose naming and reading were dissociated, such that reading was unexpectedly spared in relation to naming (e.g., patient E.M. in Blazely et al., 2005). It did this by showing comparable performance in versions of the model that departed from the norm on one or more individual difference parameters.

Figure 2. Based on Dilkina, McClelland, and Plaut (2008). The model combines features of Rogers et al.'s (2004) semantic memory model with the triangle model of reading. The integrative semantic hub consists of hidden units that connect to one another and also bidirectionally with each of the four input/output levels (action, orthography, etc.). The direct orthographic to phonological mapping becomes specialized for the pronunciation of regular words, while the semantic hub plays a stronger role for exception words. Semantic damage leads to both semantic dementia and surface dyslexia.

While neither this nor the Woollams et al. (2007) study rules out the possibility that semantics and reading are functionally dissociable (note that neither study provides empirical evidence that the patients actually differed in reading experience, etc.), the authors of these studies give a good account of their case series data by putting individual differences into the semantic-mediation theory of reading. Spurred by these seminal studies, future case series investigations are likely to include much more in the way of individual difference data and to feature these in the processing accounts they offer.

The studies of semantic memory and surface dyslexia illustrate the value of linking up a case series with a processing model. The model can offer predictions about quantitative trends regarding how performance can vary (and even the shape of the function, e.g., Dilkina et al., 2008) and possible mechanisms for this variation (e.g., individual differences in reading experience). It is the focus of the case series on quantitative accounts of the variation that makes the data of interest and particularly suited to model development and testing.

    Lexical access in naming

Our final example of the valuable link between case series and processing models returns to the study by Dell et al. (1997) on aphasic picture naming. This study was carried out to test the two-step interactive model of production (Dell & O'Seaghdha, 1992). Like the two previous models presented here, it performs lexical tasks by spreading activation in a network and can simulate individual trials resulting in either correct responses or various kinds of errors (e.g., semantic or nonword errors). However, the interactive two-step model associates brain damage with changes in model processing parameters, rather than the removal of network units and connections. For example, damage in such models can be attributed to increases in activation noise (e.g., Rapp & Goldrick, 2000) or activation decay rate (Dell et al., 1997; Martin, Dell, Saffran, & Schwartz, 1994).

The most recent version of the model attributes aphasic variation to two parameters: s-weight, which represents the strength of the connections between semantic and lexical units; and p-weight, the strength between phonological and lexical units (Schwartz et al., 2006). Patients are assigned parameter values by a fitting process (Dell, Lawler, Harris, & Gordon, 2004). First, the patient's response profile in the naming task is determined, for example, .78 correct, .03 semantic errors, .03 phonologically related word (formal) errors, .02 mixed semantic-phonological errors, .01 unrelated word errors, and .13 nonword errors. Then the model attempts to generate this pattern by varying its s- and p-weights. In this case, the model would find an excellent fit with an s-weight of .027 and a p-weight of .019, indicating that this case is somewhat more impaired in lexical-phonological mappings than in semantic-lexical mappings. Notice that, in the context of a case series study, the model's parameters can themselves then act as dependent variables in the series, and their distribution and association with other measures can be assessed. For example, Dell, Martin, and Schwartz (2007) demonstrated that the p-weight determined from picture naming strongly predicts word repetition performance, thus suggesting that the lexical-phonological component of naming is shared with repetition (see also Nozari et al., 2010).
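The logic of such a fitting process can be sketched as a grid search that picks the (s, p) pair whose simulated response profile is closest to the observed one. The simulate_naming function below is a toy placeholder, not the interactive two-step model itself (which the authors fit with dedicated software); only the search-and-compare logic is meant to be illustrative.

```python
import numpy as np

ERROR_TYPES = ["correct", "semantic", "formal", "mixed", "unrelated", "nonword"]

def simulate_naming(s_weight, p_weight):
    """Toy stand-in for the interactive two-step model: returns a response-type
    distribution for given weights. A real implementation would run the model."""
    strength = s_weight + p_weight
    correct = min(0.97, 10.0 * strength)               # arbitrary toy mapping
    residual = 1.0 - correct
    semantic = residual * (s_weight / strength) * 0.4
    nonword = residual * (p_weight / strength) * 0.5
    other = residual - semantic - nonword
    return np.array([correct, semantic, other / 3, other / 3, other / 3, nonword])

def fit_patient(observed, grid=np.arange(0.001, 0.05, 0.001)):
    """Grid search over (s, p) minimizing root-mean-square deviation from the data."""
    best = None
    for s in grid:
        for p in grid:
            rmsd = np.sqrt(np.mean((simulate_naming(s, p) - observed) ** 2))
            if best is None or rmsd < best[0]:
                best = (rmsd, s, p)
    return best

# Observed profile from the text: .78 correct, .03 semantic, .03 formal,
# .02 mixed, .01 unrelated, .13 nonword.
observed = np.array([0.78, 0.03, 0.03, 0.02, 0.01, 0.13])
rmsd, s, p = fit_patient(observed)
print(f"best fit: s-weight={s:.3f}, p-weight={p:.3f}, rmsd={rmsd:.3f}")
```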

Variations in model parameters can explain how measures vary with severity in the case series. Recall that semantic and nonword naming errors behave quite differently across severity (see Figure 1). Varying model parameters generates a severity continuum, and thus it can be seen whether the model simulates this behaviour. Figure 3 shows an example in which weights vary from their normal to nonfunctional values. Just as in the data (see Figure 1), the model's semantic errors show a nonmonotonic relation with severity, while nonword errors are clearly associated with the more severe cases.

Figure 3. For the interactive two-step model, semantic and nonword error probability (relative to total responses) as a function of the proportion of correct responses (severity). Compare to Figure 1. The points of differing severity were created by reducing both s- and p-weights by the same proportions.

Why does the model work this way? It has to do with differences between the model's normal behaviour and its behaviour when it is completely broken down and thus generating only random responses. The model's normal error pattern was deliberately set up to match the error profile of normal controls. In this profile (called the "normal point"), errors are rare, but semantic errors are the dominant error. At the other end of the severity continuum, the model was set up to match the distribution of errors that would result from the production of random word-length strings that are nonetheless phonotactically legal. This is called the "random point". In English, most such strings would be nonwords, but some would be words (most of which would be unrelated to the target). Thus, the model defines a space of possible error patterns, a space that runs from the normal to the random point. The reason that semantic errors increase and then decrease as damage becomes more severe is largely a consequence of the rarity of semantic errors at the random point.

Nonword errors behave differently because they are nonexistent at the normal point, but overwhelmingly predominant at the random point. In essence, the model realizes the continuity thesis of aphasia (Dell et al., 1997; Freud, 1891/1953): that quantitative variation between normality and randomness defines the possible deficits. However, it is important to recognize that the model does not attribute everything to severity. According to the model, the aphasic space has two dimensions, one for s-weight and one for p-weight. Thus, there is a qualitative aspect (kind of damage) along with quantitative degree of damage that emerges from the application of the model to the case series data. The combination of model and case series was thus crucial for sorting out the quantitative and qualitative sources of variation in naming performance by aphasic speakers.

We noted that the model defines a space of possible error patterns. If the model is a good one, it should be able to fit the naming data from any patient in the target population, namely, poststroke aphasia. In Schwartz et al. (2006), naming data were obtained from 94 individuals with diverse presentations of chronic aphasia. In the large majority of cases, the model provided a good fit to the observed error pattern. Schwartz et al. then took a closer look at patients whose error pattern deviated from that of the model by a criterion amount, in effect treating these patients as single cases. Most of these patients (N = 9) exhibited what was called the "pure semantic" pattern, meaning that their errors were overwhelmingly semantic, despite a below-normal level of accuracy. This error pattern is unexpected by the model, because a breakdown in the s-weight that produces many semantic errors tends also to create formal and unrelated errors. Schwartz et al. were able to explain some, but not all, of these deviating patients. For example, further testing showed that some deviators had an additional central semantic deficit. Consequently, some of their semantic errors may have arisen from poor conceptualization. The model, which assumes correct semantic input as a simplifying assumption, thus does not take note of these errors. The deviations from other patients could be attributed to another simplification, the treatment of omission errors (e.g., Dell et al., 2004). Ultimately, though, there were 3 patients with the pure semantic pattern that could not be explained by these factors. These patients thus present a challenge to the model's underlying theory, possibly to the model's key assumption that activation flows from lower levels (e.g., phonemes) to higher levels (words and semantics) with as much strength as it does in the top-down direction (Goldrick & Rapp, 2002; Rapp & Goldrick, 2000; Ruml, Caramazza, Capasso, & Miceli, 2005).

We have said that the strength of a model/case-series combination is the potential to test precise predictions about patient variation. For example, in the interactive two-step model, formal errors in naming (e.g., "mat" or "sat", for "cat") have a double nature. Formal errors can be caused either by the selection of the wrong word during the step associated with lexical access, or by the incorrect phonological encoding of the correct word during a later step when that selected word's phonemes are chosen. If these errors occur during lexical access, they must, according to the model, match the grammatical class of the target (in object picture naming, they must be nouns, e.g., "mat" for "cat"), but if they are phonological, they could be non-nouns (e.g., "sat"). It turns out that, in the model, the extent to which such errors are lexical as opposed to phonological is directly proportional to the value (p − s), that is, the extent to which the model's assigned phonological parameter is stronger than the semantic parameter.2

2 This is because formal errors at the lexical level require a sufficiently low semantic parameter so that lexical selection of the target is impaired, but a sufficiently strong phonological parameter that phonological feedback activates formal competitors (such as "mat"), and that, if "mat" is selected, it is correctly pronounced.


Schwartz et al. (2006) tested this prediction with their case series by selecting all formal errors and determining the extent to which they were nouns, and then binning those errors by the (p − s) values of the patients that generated them. Thus, a greater (p − s) should be associated with a greater tendency for lexical-level formals and hence a greater tendency for these to be nouns. The predicted trend was confirmed (Figure 4), thus supporting the dual nature of formals and the model's claims about processing steps and interaction. Case-series methods are critical to this finding. Without the common set of measures on the sample, it is not possible to fit the model to the patients to determine their parameters. And without a sizeable patient sample with variation in those parameters and in formal-error production, it is not possible to test for the trend relating properties of those errors to the parameters. Note, also, that this is clearly a case in which heterogeneity in the sample, here with regard to potential mechanisms for formal errors, is useful.

Figure 4. Percentages of formal errors that are nouns as a function of (p − s) and chance expectations. Figure based on data from Schwartz, Dell, Martin, Gahl, and Sobel (2006).
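A sketch of this style of binning analysis with invented per-patient values: compute each patient's (p − s), bin the patients, and compute the percentage of their formal errors that are nouns within each bin. The column names and numbers are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Invented per-patient summaries: fitted weights plus counts of formal errors.
n_patients = 30
p_weight = rng.uniform(0.005, 0.04, n_patients)
s_weight = rng.uniform(0.005, 0.04, n_patients)
formal_total = rng.integers(5, 40, n_patients)
# Build in the predicted trend: noun proportion rises with (p - s).
noun_prob = np.clip(0.55 + 4.0 * (p_weight - s_weight), 0, 1)
formal_nouns = rng.binomial(formal_total, noun_prob)

df = pd.DataFrame({
    "p_minus_s": p_weight - s_weight,
    "formal_total": formal_total,
    "formal_nouns": formal_nouns,
})

# Bin patients by (p - s) and compute, within each bin, the percentage
# of formal errors that are nouns.
df["bin"] = pd.qcut(df["p_minus_s"], q=4)
by_bin = df.groupby("bin", observed=True)[["formal_nouns", "formal_total"]].sum()
by_bin["pct_nouns"] = 100 * by_bin["formal_nouns"] / by_bin["formal_total"]
print(by_bin["pct_nouns"])
```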

    ANATOMICAL CASE SERIES

Traditional cn is considered to be part of cognitive psychology and hence is concerned with cognitive function rather than neural implementation (Caramazza & Coltheart, 2006; Coltheart, 2006). Over the last 10 years, though, the sharp division between cognitive psychology and neuroscience has been breaking down. We see evidence of this in computational models of behavioural data that build in anatomical and/or neurophysiological assumptions (Gotts & Plaut, 2002; Lambon Ralph, McClelland, Patterson, Galton, & Hodges, 2001; Plaut, 2002) and in brain imaging and lesion localization studies that address issues of relevance to cognitive theory (Bedny, Caramazza, Grossman, Pascual-Leone, & Saxe, 2008; Damasio, Grabowski, Tranel, Hichwa, & Damasio, 1996). A recent study makes this general point through a new application of the cn case series method that we call the anatomical case series.

The study used voxel-based lesion symptom mapping (VLSM) to identify left hemisphere voxels that, when lesioned, are associated with semantic errors in naming (Schwartz et al., 2009). Behavioural data were obtained from 64 patients with chronic, poststroke aphasia, along with contemporaneous, high-quality computerized brain scans. For each patient, there were three behavioural scores: proportion of semantic errors in naming, verbal comprehension accuracy, and nonverbal comprehension accuracy. Additionally, two derived scores were computed, representing the semantic error score after controlling for verbal and nonverbal comprehension, respectively. The two derived scores index, in slightly different ways, semantic errors generated during the production stage of lexical access and not during the semantic stage (i.e., target conceptualization). After the lesions were registered to a common template, the association between patients' semantic error score and their lesion status (presence vs. absence of a lesion) was tested in each voxel using t tests. The results were clear: One area of the brain showed an association with semantic errors when comprehension was controlled (i.e., using the derived scores). That area was the left anterior temporal lobe (L ATL), especially mid to anterior middle temporal gyrus. A follow-up study showed that the association between semantic errors and L ATL lesions also survived correction for phonological errors and for total lesion size (Walker et al., 2010).
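The core VLSM computation can be sketched as follows, with invented data and no neuroimaging input/output: for each voxel, split the patients into lesioned versus spared and compare their residualized semantic-error scores with a t test. Real analyses add image registration, minimum lesion-count thresholds, and correction for multiple comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

n_patients, n_voxels = 64, 500                        # toy sizes; real maps have far more voxels
lesion = rng.random((n_patients, n_voxels)) < 0.25    # True = voxel lesioned in that patient
score = rng.normal(size=n_patients)                   # residualized semantic-error score per patient

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    les, spared = score[lesion[:, v]], score[~lesion[:, v]]
    if les.size >= 5 and spared.size >= 5:            # skip voxels lesioned in too few patients
        t_map[v], _ = stats.ttest_ind(les, spared, equal_var=False)

print("largest |t| across voxels:", np.nanmax(np.abs(t_map)).round(2))
```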

Finding an area of the brain that is specifically associated with the production of semantic errors, above and beyond its association with comprehension ability and phonological error production, constitutes the anatomical footprint of the postsemantic, prephonological generation of semantic errors (Caramazza & Hillis, 1990; Dell et al., 1997; Graham et al., 1995; Rapp & Goldrick, 2000). Of course, this mechanism for semantic errors has been hypothesized in cognitive theories of language production. For example, it is a central premise of the interactive, two-step model and is readily compatible with any model that postulates a lexical level in between semantics and phonology (Levelt, Roelofs, & Meyer, 1999). The fact that the area in question happens to be in the left ATL carries other implications, for example, for debates about the precise role of the ATL and its anatomical subdivisions in semantics and naming (for discussion, see Walker et al., 2010).

This example of an anatomical study with cognitive ramifications brings to mind these prescient words from Tim Shallice, written more than 20 years ago:

I have argued that the use of group studies is not likely to lead to rapid theoretical advance and that, in general, information about the localization of lesions is not vital for cognitive neuropsychology. However, this is not to argue that group studies and information on lesion localization should be excluded from cognitive neuropsychology. The rather negative assessment made of group studies and localisation information is not one of principle, but a pragmatic one specific to the methods available at present. With, say, advances in neurological measurement techniques, the situation might very well change. (Shallice, 1988, p. 214)

We believe that the anatomical case series is exactly the kind of group study with localization information that, because of advancements in neurological measurement, now warrants inclusion as a vital methodological tool in cognitive neuropsychology.

    HETEROGENEITY IN CASE SERIES

As we mentioned earlier, the standard cn concern with group studies is the likelihood of heterogeneity of the deficits in the group, making the group mean, at best, unrepresentative of all cases and, at worst, irrelevant. Although case series are less concerned with means than with more complex trends, the lack of homogeneity of the groups can also limit conclusions there, because the trends themselves can be heterogeneous. But in a case series, one has the potential to understand this heterogeneity to advance theory. We consider two particular approaches.

The first is what we call "track down the deviations". Because case series analysis creates some kind of model (even if it is only a simple linear regression), each patient's measurement can be characterized as consistent with the model or not, and the explanation for the deviating cases can be sought by using single-subject-style assessment. We already saw this approach taken in the Schwartz et al. (2006) study. Another example is Fischer-Baum and Rapp's (2008) case series study of perseveration in spelling. These authors examined perseverations of letters from previous trials in 12 dysgraphic patients. A failure-to-activate account of perseverations predicts that each patient's perseveration proportion should be predictable from their rate of other nonperseveratory errors. This relation was found. However, Fischer-Baum and Rapp then used a precise technique to identify outlier patients from the function predicting perseverations from other errors. There was one clear outlier who perseverated much more than would be expected. Thus, this patient cannot be explained by the failure-to-activate account. Fischer-Baum and Rapp then did additional testing of the outlier patient, finding that he perseverated in many tasks: spelling, copy transcoding, naming, immediate serial recall, and even in the retention of visual shapes. They hypothesized that, in this case, the patient suffered from a general deficit in inhibition. Thus, by tracking down the deviate in their case series, Fischer-Baum and Rapp used the heterogeneity in their sample to support the idea that some perseverations are simply a manifestation of weak retrieval (the failure-to-activate account), but others result from a failure of a specific inhibitory mechanism.
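One way to formalize "track down the deviations" is to regress the perseveration measure on the rate of other errors and flag patients with large standardized residuals. The data and the cut-off below are illustrative only, not Fischer-Baum and Rapp's actual procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Invented case series: rate of nonperseveratory errors and perseveration rate.
n = 12
other_errors = rng.uniform(0.05, 0.40, n)
persev = 0.5 * other_errors + rng.normal(scale=0.02, size=n)
persev[7] += 0.25                         # build in one clearly deviating patient
df = pd.DataFrame({"other_errors": other_errors, "persev": persev,
                   "patient": [f"P{i+1}" for i in range(n)]})

# Predict perseverations from other errors (the failure-to-activate account),
# then flag patients whose residual is unusually large.
fit = smf.ols("persev ~ other_errors", data=df).fit()
df["std_resid"] = fit.resid / np.std(fit.resid, ddof=2)
print(df.loc[df["std_resid"].abs() > 2.5, ["patient", "std_resid"]])
```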


A second way to confront sample heterogeneity is to attempt to remove it. The simplest method is to remove patients based on available measures. For example, Dell et al. (1997) eliminated patients who made many omission or articulatory errors from their sample, so as to focus on patients who make errors of commission that arise during lexical access. Another way to dispose of heterogeneity is statistical control, as, for example, when Schwartz et al. (2009) used regression to control for the effect of comprehension ability on the production of semantic errors. This leads to a residualized dependent variable, in this case, semantic errors made above and beyond what would be expected from any central semantic deficit that is present. Similarly, one can draw conceptual distinctions within one's dependent variable to create more uniform measures. So, instead of a raw measure of semantic errors in naming, Schwartz et al. (in press) separately analysed taxonomically related errors ("cat" for "dog") and thematically related errors ("bone" for "dog") and found that they had distinct neural correlates.
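The statistical-control step can be sketched as computing a residualized score: regress the semantic-error measure on comprehension and keep the residuals as the adjusted dependent variable. The data below are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Invented case series measures.
n = 64
comprehension = rng.uniform(0.4, 1.0, n)                               # comprehension accuracy
semantic_errors = 0.25 - 0.2 * comprehension + rng.normal(scale=0.03, size=n)

# Regress semantic errors on comprehension; the residuals are the
# "semantic errors above and beyond the central semantic deficit".
X = sm.add_constant(comprehension)
fit = sm.OLS(semantic_errors, X).fit()
residualized_semantic_errors = fit.resid

print("correlation of residuals with comprehension (should be ~0):",
      round(float(np.corrcoef(residualized_semantic_errors, comprehension)[0, 1]), 3))
```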

By using these methods, one can limit the heterogeneity of the patient sample and the data. Of course, as we have emphasized, one does not want to throw out the baby with the bath water. A case series depends on there being some differences in the measurements taken across patients. Fully homogenizing a sample just leaves the researcher with a collection of indistinguishable patients.

CASE SERIES AND SINGLE-SUBJECT METHODS

We maintain that single-subject and case series methods complement one another. Single-subject studies lead to the discovery of hypothesized cognitive mechanisms that are thought to interact in particular ways. Often these interactions can be expressed as predictions about quantitative trends regarding behavioural covariation, and questions can be posed regarding covariation between cognitive mechanisms and lesion locations. If so, a case series can provide a test. And, after that, patients that deviate from the principal trends can be examined using single-subject techniques to track down the source of the deviation.

Ultimately, both single-subject and case series methods are, as they should be, centred on explaining variability; it is just that the sources of variability differ. In a single-case study, the variability comes from the many tests that are administered to the patient. Some tests reveal deficits (to varying degrees), and others do not. This variation allows the researcher to draw conclusions. The same applies to a case series, except that there are more cases, but typically fewer tests. The variability in patient performance on the tests and, particularly, the covariation between tests, provide the basis for scientific inference.

Without doubt, case series and single-subject studies can be in tension, as when a theoretically meaningful association, established through large case series investigations, is violated by compelling evidence of functional dissociation in one or a few cases. Yet as long as one considers this the beginning of the story and not the end, considerable progress can be made in explaining why the association is partial rather than complete (e.g., Dilkina et al., 2008) or, from the alternative perspective, how a dual-mechanism, dissociable system manages to associate so often (Coltheart et al., 2001). One thing is clear: The growing sophistication of statistical techniques of lesion analysis that control for functional and anatomical confounds (Schwartz et al., 2009) provides a unique opportunity: The possibility that a strong partial association is merely an accident of anatomical overlap (a classic argument for why associations data are suspect in cn; e.g., Marin, Saffran, & Schwartz, 1976; Shallice, 1988) can now actually be tested with anatomical data.

    CONCLUDING THOUGHTS

    The preceding review and analysis of case seriesmethodology have described its many strengths.To summarize, case series are good for:


. Identifying quantitative trends.
. Revealing curvilinear, particularly nonmonotonic, relations.
. Jointly considering associations and dissociations, where a large enough N affords statistical power to detect complex associations and the potential to identify theoretically important dissociations.
. Providing data for modelling, particularly modelling of how patient scores vary with severity and other individual difference variables.
. Providing data for advanced lesion analyses and thereby interfacing with theories of both brain and cognition.

The review has also identified certain weaknesses or issues that require the researcher to take care:

. There is the temptation to stop with a statistically significant correlation and not track down deviating cases, theoretically important dissociations, or nonlinear relations.
. There is the danger of committing prematurely to the functional significance of a demonstrated association, without carefully considering alternative possibilities, particularly alternatives that cannot easily be assessed with the limited set of measures taken. For anatomical case series, uninteresting explanations for an association, such as lesion size or overlap, must be ruled out.
. There are no rules for how broadly or narrowly to set the inclusion criteria for the sample, and misjudgements in either direction can have significant consequences: for example, limiting the range with too homogeneous a group, or obscuring important relationships with one that is too heterogeneous.
. The resources required to conduct a well-designed cn-style case series, with ample sample size and adequate background testing, are formidable. Consequently, the approach may be possible only for a few institutions or groups.

While most of the identified weaknesses are avoidable and, with the usual back-and-forth between advocates and critics, self-correcting, the last one is more troubling. To address this problem, we believe that individuals and organizations should promote greater sharing of patients (within the bounds of geography and confidentiality), test materials, and data. As a step in this direction, a large, searchable, web-based database available to the research community was developed and is described in an accompanying paper (Mirman et al., 2011).

In closing, we return to something we asserted at the beginning: namely, that case series methodology should, and indeed will, continue to be an important part of cn. Future researchers who study cognitive impairments in patients will work collaboratively to amass data that are treated by both single-subject and multiple-subject methods. They will discover associations and dissociations, and these discoveries will have relevance not just for cognitive psychology, but for all scientific and clinical disciplines with which we share interests.

Manuscript received 23 August 2010
Revised manuscript received 6 March 2011
Revised manuscript accepted 7 March 2011

    First published online 5 July 2011

    REFERENCES

Adlam, A. L., Patterson, K., Rogers, T. T., Nestor, P. J., Salmond, C. H., Acosta-Cabronero, J., et al. (2006). Semantic dementia and fluent primary progressive aphasia: Two sides of the same coin? Brain, 129(11), 3066–3080.
Bedny, M., Caramazza, A., Grossman, E., Pascual-Leone, A., & Saxe, R. (2008). Concepts are more than percepts: The case of action verbs. Journal of Neuroscience, 28(44), 11347–11353.
Blazely, A., Coltheart, M., & Casey, B. J. (2005). Semantic impairment with and without surface dyslexia: Implications for models of reading. Cognitive Neuropsychology, 22, 695–717.
Bozeat, S., Lambon Ralph, M. A., Patterson, K., Garrard, P., & Hodges, J. R. (2000). Non-verbal semantic impairment in semantic dementia. Neuropsychologia, 38, 1207–1215.
Caramazza, A., & Coltheart, M. (2006). Cognitive Neuropsychology twenty years on. Cognitive Neuropsychology, 23(1), 3–12.


Caramazza, A., & Hillis, A. E. (1990). Where do semantic errors come from? Cortex, 26(1), 95–122.
Cipolotti, L., & Warrington, E. K. (1995). Semantic memory and reading abilities: A case report. Journal of the International Neuropsychological Society, 1, 104–110.
Coltheart, M. (2006). Acquired dyslexias and the computational modelling of reading. Cognitive Neuropsychology, 23(1), 96–109.
Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256.
Damasio, H., Grabowski, T. J., Tranel, D., Hichwa, R. D., & Damasio, A. (1996). A neural basis for lexical retrieval. Nature, 380(6574), 499–505.
Dell, G. S., & Caramazza, A. (2008). Introduction to special issue on computational modeling in cognitive neuropsychology. Cognitive Neuropsychology, 25, 131–135.
Dell, G. S., Lawler, E. N., Harris, H. D., & Gordon, J. K. (2004). Models of errors of omission in aphasic naming. Cognitive Neuropsychology, 21(2–4), 125–145.
Dell, G. S., Martin, N., & Schwartz, M. F. (2007). A case series test of the interactive two-step model of lexical access: Predicting word repetition from picture naming. Journal of Memory and Language, 56, 490–520.
Dell, G. S., & O'Seaghdha, P. G. (1992). Stages of lexical access in language production. Cognition, 42(1–3), 287–314.
Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. A. (1997). Lexical access in aphasic and nonaphasic speakers. Psychological Review, 104, 801–838.
Dilkina, K., McClelland, J. L., & Plaut, D. C. (2008). A single-system account of semantic and lexical deficits in five semantic dementia patients. Cognitive Neuropsychology, 25(2), 136–164.
Farrington, C. P., Nash, J., & Miller, E. (1996). Case series analysis of adverse reactions to vaccines: A comparative evaluation. American Journal of Epidemiology, 143(11), 1165–1173.
Fischer-Baum, S., & Rapp, B. (2008, October). Underlying causes of letter perseveration errors in dysgraphia. Talk given at the 45th Annual Meeting of the Academy of Aphasia, Turku, Finland.
Freud, S. (1953). On aphasia (E. Stengel, Trans.). New York, NY: International University Press. (Original work published 1891)
Garrard, P., Lambon Ralph, M. A., Hodges, J. R., & Patterson, K. (2001). Prototypicality, distinctiveness, and intercorrelation: Analyses of the semantic attributes of living and nonliving concepts. Cognitive Neuropsychology, 18(2), 125–174.
Goldrick, M. (2008). Does like attract like? Exploring the relationship between errors and representational structure in connectionist networks. Cognitive Neuropsychology, 25, 287–313.
Goldrick, M., & Rapp, B. (2002). A restricted interaction account (RIA) of spoken word production: The best of both worlds. Aphasiology, 16, 20–55.
Gotts, S. J., & Plaut, D. (2002). The impact of synaptic depression following brain damage: A connectionist account of access/refractory and degraded-store semantic impairments. Cognitive, Affective, and Behavioral Neuroscience, 2, 187–213.
Graham, K., Patterson, K., & Hodges, J. R. (1995). Progressive pure anomia: Insufficient activation of phonology by meaning. Neurocase, 1, 25–38.
Harm, M. W., & Seidenberg, M. S. (2004). Computing the meaning of words in reading: Cooperative division of labor between visual and phonological processes. Psychological Review, 111, 662–720.
Jaeger, T. F. (2008). Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59, 434–446.
Jefferies, E., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia versus semantic dementia: A case series comparison. Brain, 129, 2132–2147.
Kittredge, A. K., Dell, G. S., Verkuilen, J., & Schwartz, M. F. (2008). Where is the effect of frequency in word production? Insights from aphasic picture naming errors. Cognitive Neuropsychology, 25(4), 463–492.
Lambon Ralph, M. A., Graham, K. S., Ellis, A. W., & Hodges, J. R. (1998). Naming in semantic dementia: What matters? Neuropsychologia, 36, 775–784.
Lambon Ralph, M. A., McClelland, J., Patterson, K., Galton, C., & Hodges, J. R. (2001). No right to speak? The relationship between object naming and semantic impairment: Neuropsychological evidence and a computational model. Journal of Cognitive Neuroscience, 13, 341–356.
Lambon Ralph, M. A., Sage, K., Jones, R. W., & Mayberry, E. J. (2010). Coherent concepts are computed in the anterior temporal lobes. Proceedings of the National Academy of Sciences, 107(6), 2717–2722.


Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1–75.
Marin, O. S. M., Saffran, E. M., & Schwartz, M. F. (1976). Dissociation of language in aphasia: Implications for normal function. Annals of the New York Academy of Sciences, 280, 868–884.
Martin, N., Dell, G. S., Saffran, E. M., & Schwartz, M. F. (1994). Origins of paraphasias in deep dysphasia: Testing the consequences of a decay impairment to an interactive spreading activation model of lexical retrieval. Brain and Language, 47, 609–660.
McCloskey, M. (1993). Theory and evidence in cognitive neuropsychology: A radical response to Robertson, Knight, Rafal, and Shimamura (1993). Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 718–734.
McCloskey, M., & Caramazza, A. (1988). Theory and methodology in cognitive neuropsychology: A response to our critics. Cognitive Neuropsychology, 5, 583–623.
McRae, K., de Sa, V. R., & Seidenberg, M. S. (1997). On the nature and scope of featural representations for word meaning. Journal of Experimental Psychology: General, 126, 99–130.
Mirman, D., Strauss, T., Brecher, A., Walker, G., Sobel, P., Dell, G., et al. (2011). A large, searchable, web-based database of aphasic performance on picture naming and other tests of cognitive function. Cognitive Neuropsychology.
Mummery, C. J., Patterson, K., Price, C., Ashburner, J., Frackowiak, R. S., & Hodges, J. R. (2000). A voxel-based morphometry study of semantic dementia: Relationship between temporal lobe atrophy and semantic memory. Annals of Neurology, 47, 36–45.
Mummery, C. J., Patterson, K., Wise, R. J. S., Vandenbergh, R., Price, C. J., & Hodges, J. R. (1999). Disrupted temporal lobe connections in semantic dementia. Brain, 122, 61–73.
Nickels, L. (1995). Getting it right? Using aphasic naming errors to evaluate theoretical models of spoken word recognition. Language and Cognitive Processes, 10, 13–45.
Nickels, L., & Howard, D. (1994). A frequent occurrence? Factors affecting the production of semantic errors in aphasic naming. Cognitive Neuropsychology, 11, 289–320.
Nickels, L., & Howard, D. (1995). Phonological errors in aphasic naming: Comprehension, monitoring and lexicality. Cortex, 31, 209–237.
Nickels, L., & Howard, D. (2004). Dissociating effects of number of phonemes, number of syllables, and syllabic complexity on word production in aphasia: It's the number of phonemes that counts. Cognitive Neuropsychology, 21, 57–78.
Nozari, N., Kittredge, A. K., Dell, G. S., & Schwartz, M. F. (2010). Naming and repetition in aphasia: Steps, routes, and frequency effects. Journal of Memory and Language, 63, 541–559.
Patterson, K., & Hodges, J. R. (1992). Deterioration of word meaning: Implications for reading. Neuropsychologia, 30, 1025–1040.
Patterson, K., Lambon Ralph, M. A., Jefferies, E., Woollams, A., Jones, R., Hodges, J. R., et al. (2006). Pre-semantic cognition in semantic dementia: Six deficits in search of an explanation. Journal of Cognitive Neuroscience, 18(2), 169–183.
Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987.
Patterson, K., & Plaut, D. (2009). Shallow draughts intoxicate the brain: Lessons from cognitive science for cognitive neuropsychology. Topics in Cognitive Science, 1, 39–58.
Plaut, D. C. (1997). Structure and function in the lexical system: Insights from distributed models of word reading and lexical decision. Language and Cognitive Processes, 12, 767–808.
Plaut, D. C. (2002). Graded modality-specific specialisation in semantics: A computational account of optic aphasia. Cognitive Neuropsychology, 19(7), 603–639.
Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103(1), 56–115.
Plaut, D. C., & Shallice, T. (1993). Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology, 10, 377–500.
Rapp, B. C., & Goldrick, M. (2000). Discreteness and interactivity in spoken word production. Psychological Review, 107(3), 460–499.
Rogers, T. T., Hocking, J., Noppeney, U., Mechelli, A., Gorno-Tempini, M. L., Patterson, K., et al. (2006). Anterior temporal cortex and semantic memory: Reconciling findings from neuropsychology and functional imaging. Cognitive, Affective, and Behavioral Neuroscience, 6(3), 201–213.


Rogers, T. T., Lambon Ralph, M. A., Garrard, P., Bozeat, S., McClelland, J. L., Hodges, J. R., et al. (2004). Structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review, 111(1), 205–235.
Rogers, T. T., Lambon Ralph, M. A., Hodges, J. R., & Patterson, K. (2004). Natural selection: The impact of semantic impairment on lexical and object decision. Cognitive Neuropsychology, 21(2), 331–352.
Ruml, W., & Caramazza, A. (2000). An evaluation of a computational model of lexical access: Comment on Dell et al. (1997). Psychological Review, 107(3), 609–634.
Ruml, W., Caramazza, A., Capasso, R., & Miceli, G. (2005). Interactivity and continuity in normal and aphasic language production. Cognitive Neuropsychology, 22, 131–168.
Schwartz, M. F., Dell, G. S., Martin, N., Gahl, S., & Sobel, P. (2006). A case series test of the interactive two-step model of lexical access: Evidence from picture naming. Journal of Memory and Language, 54, 228–264.
Schwartz, M. F., Kimberg, D. Y., Walker, G. M., Brecher, A., Faseyitan, O., Dell, G. S., et al. (in press). A neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proceedings of the National Academy of Sciences.
Schwartz, M. F., Kimberg, D. Y., Walker, G. M., Faseyitan, O., Brecher, A., Dell, G. S., et al. (2009). Anterior temporal involvement in semantic word retrieval: VLSM evidence from aphasia. Brain, 132(12), 3411–3427.
Schwartz, M. F., Saffran, E. M., & Marin, O. S. M. (1980). Fractionating the reading process in dementia: Evidence for word-specific print-to-sound associations. In M. Coltheart, K. Patterson, & J. C. Marshall (Eds.), Deep dyslexia. London, UK: Routledge & Kegan Paul.
Shallice, T. (1988). From neuropsychology to mental structure. Cambridge, UK: Cambridge University Press.
Walker, G. M., Schwartz, M. F., Kimberg, D. Y., Faseyitan, O., Brecher, A., Dell, G. S., et al. (2010). Support for anterior temporal involvement in semantic error production in aphasia: New evidence from VLSM. Brain and Language. Advance online publication. doi:10.1016/j.bandl.2010.09.008
Warrington, E. K. (1975). The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology, 27, 635–657.
Woollams, A., Cooper-Pye, E., Hodges, J. R., & Patterson, K. (2008). Anomia: A doubly typical signature of semantic dementia. Neuropsychologia, 46, 2503–2514.
Woollams, A., Lambon Ralph, M. A., Plaut, D., & Patterson, K. (2007). SD-squared: On the association between semantic dementia and surface dyslexia. Psychological Review, 114(2), 316–339.
