

Sch Psychol Q. Author manuscript; available in PMC 2010 Feb 17.

PMCID: PMC2823081

NIHMSID: NIHMS169567

Abstract

Response to Intervention (RTI) models of diagnosis and intervention are being implemented rapidly throughout the schools. The purposes of invoking an RTI model for disabilities in the schools clearly are laudable, yet close examination reveals an unappreciated paucity of empirical support for RTI and an overly optimistic view of its practical, problematic issues. Models are being put into practice without adequate research and logistical support and neglect the potential negative long-term impact on students with disabilities. Many implementation problems exist: (a) the vagueness of critical details of the model in practice; (b) the lack of consideration of bright struggling readers; (c) the relative, contextual, situation-dependent nature of who is identified; (d) the worrisome shortcomings of the RTI process as a means of diagnosis or determination of a disability; and (e) the apparent lack of student-based data to guide effective choice of approaches and components of intervention. The authors agree with the concept of RTI when it is practiced as a model of prevention. As the authors witness its application to disability determination sans the benefit of a reliable and valid empirical basis, the potential benefits to some children with disabilities remain an unproven hypothesis while the potential detriment to some children with disabilities remains a very real possibility.

Keywords: learning disabilities, intelligence, prevention, intervention, diagnosis

RTI as a Prevention Strategy: What is the State of the Evidence?

Current Evidence

Fuchs, Mock, Morgan, and Young (2003), Gresham, Restori, and Cook (2008), Fletcher and Vaughn (2009), Reschly (2005), and many others have described the 2004 changes to the federal laws governing special education eligibility for specific learning disabilities (Individuals With Disabilities Education Improvement Act; IDEIA), focusing on what is commonly known as Response to Intervention (RTI). In doing so, they typically also review evidence in support of the shift to an RTI model of service delivery, most clearly explicated in the recent revisions to rules for implementation of the IDEIA (U.S. Department of Education, 2004). Reschly (2005), Shinn (2005), Fletcher and Vaughn (2009), and others also provide a description of what an RTI model might look like, describing what they consider to be standard protocol models of RTI implementation. Although the notion of RTI as a process of service delivery may have potential to be helpful to both regular education and special education, the extant evidence does not support its proponents' seemingly unbridled enthusiasm about its current state of readiness. These proponents appear to have been overly optimistic and often incomplete in their presentations of the RTI model with regard to its research support, ease of implementation, breadth of applications in the schools, clarity of what constitutes responsiveness, and ability to benefit children with learning disabilities (LD).

Moreover, the lack of a trustworthy evidence base strongly suggests that implementation, especially given the Federal government's emphasis on empirically supported treatments, is notably premature. In fact, Fuchs et al. (2003) concluded that “proponents of RTI as an alternative means of LD identification must still prove that their problem-solving approach is worthy of the descriptor ‘scientifically based’” (p. 167). The research base is not yet sufficient to provide adequate or reliable guidance to practitioners in how to implement RTI as an effective service delivery process; many details remain to be elaborated and specific aspects of RTI remain to be defined, some as central as just what constitutes a response (i.e., what is the R in RTI). The effect sizes reported for research studies of RTI are less consistent than many of its supporters profess, and the studies reporting strong results are highly likely to have levels of treatment fidelity that are atypical of day-to-day educational practice (i.e., of settings that are not closely supervised, small-scale research projects); many also do not use samples typical of students currently receiving special education services (e.g., see Graham & Perin, 2007). Of even more concern, RTI is already being used as the only means of diagnosis of LD (circumventing the requirement of a comprehensive evaluation for disability determination by defining RTI as a comprehensive evaluation) in many states in the United States, and others are proposing to follow suit.

Many of these problems directly reflect the lack of critical knowledge concerning just about every assumption influencing the RTI implementation process. Without such evidence, the Federal regulations can only be vague and nonspecific in their guidance for implementation and evaluation of treatment effects in RTI, and in fact, they are. As a result, implementation is left to the vagaries, inconsistencies, and nonevidence-based beliefs of individual school psychologists, teachers, principals, and school system administrators. Thus, the USDE Office of Special Education and Rehabilitative Services (OSERS) regulations for special education promote the use of RTI but, at the same time, are as yet unable to provide sufficient guidance to allow for any hope of reasonable consistency in its development, application, and outcomes. This lack of procedural guidance virtually guarantees that RTI will lack fidelity of implementation, suffer from inconsistent measurement models, and see enhanced levels of subjectivity in both diagnosis and treatment.

Given this state of affairs, wide variations in both the conceptualization of RTI and in the actual specifics of practice are inevitable in matters that are nontrivial and that will affect directly which students and how many receive special education services. Special education placement decisions are important to the individuals involved and can have life-changing effects, both positive and negative.

Furthermore, whereas writers such as Reschly (2005), Gresham et al. (2008), and Fletcher and Vaughn (2009) seem to see a consensus view of RTI as well founded, ready, and with clearly established practices, many in the field see it as more immature and in many ways controversial, even polemic in some circles (e.g., one might search the archives of the listserv of the National Association of School Psychologists for “RTI” to get a flavor of the intense, and at times ad hominem, debate generated by advocates and opponents of RTI). Other notables in the field (e.g., Kavale, Kauffman, Bachmeir, & LeFever, 2008) have argued that RTI is almost entirely a sociopolitical phenomenon.

Scaling issues in RTI implementation are substantial, as are such issues as treatment fidelity and treatment (i.e., intervention) monitoring on a large scale. Treatment fidelity (or treatment integrity) is simply the degree to which an intervention is implemented as planned. Unfortunately, treatment integrity has been largely ignored in the schools. It is sobering to consider that treatment integrity, although a necessary component of assessing the effectiveness of an intervention, may not be regularly measured by professionals in the field of school psychology or special education; when it is measured, it is typically not measured by disinterested external observers but instead relies most often on self-reports from teachers, the very people whose treatment methods are being evaluated. This is problematic even at the scale of a single school building, and the issues in monitoring treatment fidelity district-wide, state-wide, and nationwide are clearly substantial.

RTI proponents seem to present RTI as ready to go, with well-established and supported practices. However, the research that does support RTI is, for the most part, based on small-scale studies with intensive oversight of intervention or treatment fidelity through university research and training programs. Moreover, this research often has not been conducted on truly low-achieving students (see the later discussion of Graham & Perin, 2007). Pogrow (1996), writing about the great myths of educational reform, pointed out the disconnect between small-scale research and large-scale implementation as the major component of his seventh myth of educational reformers.

“Myth 7. You can understand large-scale change by understanding what happens on a very small scale. This is the biggest myth of all! … Researchers study small-scale phenomena for very short periods of time. Their knowledge comes from controlled laboratory research, pilot studies, case studies in a few schools, or a few examples of unusually effective schools. Newer research techniques, such as meta-analysis, have been developed that “pretend” that outcomes were generated on a large scale” (p. 659). Pogrow goes on to argue that although those who would advocate rapid reform convince themselves the proposed reforms are now ready for installation, such a belief does not coincide with the reality of schools. To Pogrow (1996), reality rears its ugly head very quickly; in his own words, in response to Myth 7 he posits, “Reality 7. It's the scale, stupid! Large-scale change reflects properties that are often diametrically opposed to those in effect in small-scale research. While small-scale success is inspirational, the methods are not necessarily workable on a large scale. The fact that something works in a few classrooms, in a few schools, with a few teachers, at a few grade levels, for a few weeks, and so on says nothing about whether or how it can be disseminated or will actually work on a large scale” (p. 659). With the publication of the Report of the National Reading Panel (2000), education took a giant step into the world of modern science's standard of relying not on opinion, belief systems, and mass frenzy, but rather on evidence in making educational decisions. Within such a context, the calls for rapid implementation of RTI, which currently lacks a sufficient evidence base, seem to reflect a giant step backward and reek more of educational faddism and political correctness than of science-based, effective educational practices (also see Kavale et al., 2008).

Questions About RTI and Its Effectiveness

Answers to the following critical questions, among others, are central to developing RTI as a successful, evidence-based service delivery model. How is effective instruction defined and taught to teachers, and how does this enhance instruction? Moreover, how does this differ from what is or should be happening now? What in this process is specific to RTI; that is, what differentiates teacher training in RTI from any other method of teaching teachers about effective teaching? Over what period of time is success evaluated? How much additional instruction is given in the regular education setting, and how does the additional instruction differ from what the teacher already has been doing in the classroom? What is adequate progress, or a response, to the so-called new teaching strategies implemented under RTI? For example, Speece and Walker (2007) question what is meant by the R in RTI, responsiveness: “At this point, the definitions of ‘response’ and ‘nonresponse’ in the literature are quite variable … If RTI is to be used as an ingredient in the definition of LD … much more work is required on the thorny issue of responsiveness” (p. 294), indicating that “ … the promise of RTI swamps the evidence” (p. 287). Resolving the vagaries and the highly varied state of defining the R in RTI is crucial if we are to see effective instructional guidance from the research and effective practice, for many reasons, perhaps the most vital being that significant LD definition-treatment interactions exist across the primary evidence-based studies most commonly cited as supporting RTI as a model (e.g., see Swanson & Hoskyn, 1999).
Contrary to many proponents' contention that a consensus exists regarding the superiority of RTI over past methods (e.g., Fletcher & Vaughn, 2009; Gresham et al., 2008; Tilly, 2006, as cited in Fuchs & Deshler, 2007), in addition to Speece and Walker, other leaders in the learning disability, special education, school psychology, and related fields have expressed reservations about RTI (e.g., see Fletcher-Janzen & Reynolds, 2008; Reynolds, 2005). For example, regarding the prematurity of RTI, following a review of the empirical literature on RTI with special attention to its effectiveness and feasibility, Fuchs, Mock, Morgan, and Young (2003) reached the following conclusion: “ … more needs to be understood before RTI may be viewed as a valid means of identifying students with LD” (p. 157).

Regarding the measurement problems surrounding the actual determination of whether an RTI has occurred, Fuchs, Fuchs, and Compton (2004) concluded that the varying methods currently used to determine response identify different children and result in varying prevalence rates of reading disability:

“As demonstrated in our analyses … different measurement systems using different criteria [all with reference to RTI] result in identification of different groups of students. The critical question is which combination of assessment components is most accurate for identifying children who will experience serious and chronic reading problems that prevent reading for meaning in the upper grades and impair their capacity to function successfully as adults. At this point, relatively little is known to answer this question when RTI is the assessment framework” (p. 226).

Little beyond argument has been published since that time to alter this comment on the state of RTI effectiveness research substantively. Because definition × treatment interactions are clearly established in the literature (Swanson & Hoskyn, 1999), the issue of measurement and determination of what constitutes a “response” to intervention must be resolved consistently for RTI to have a fair chance to succeed.
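The point that different response criteria select different children can be made concrete with a small sketch. All names, scores, and cutoffs below are invented for illustration and are not drawn from any study cited here; the two decision rules loosely parallel a final-benchmark criterion and a dual-discrepancy criterion as those ideas are commonly described in the RTI literature.

```python
# Hypothetical data: four students with pre- and post-intervention scores.
students = {
    # name: (baseline_score, final_score)
    "A": (10, 28),   # low start, strong growth
    "B": (22, 29),   # near the cutoff, little growth
    "C": (8, 18),    # low start, modest growth
    "D": (25, 34),   # adequate on both counts
}

BENCHMARK = 30   # invented end-of-intervention cutoff score
MIN_GROWTH = 8   # invented minimum gain (a crude proxy for "slope")

def nonresponders_benchmark(data):
    """Final-benchmark rule: nonresponder if the final score is below the cutoff."""
    return {name for name, (_, post) in data.items() if post < BENCHMARK}

def nonresponders_dual(data):
    """Dual-discrepancy rule: nonresponder only if BOTH final level and growth are low."""
    return {name for name, (pre, post) in data.items()
            if post < BENCHMARK and (post - pre) < MIN_GROWTH}

# The two criteria, applied to identical data, identify different children:
# the benchmark rule flags A, B, and C; the dual rule flags only B.
print(nonresponders_benchmark(students))  # {'A', 'B', 'C'}
print(nonresponders_dual(students))       # {'B'}
```

Which rule is "correct" is exactly the unanswered question Fuchs, Fuchs, and Compton (2004) raise: each rule yields a different identified group and hence a different prevalence rate.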

Thus, although seemingly intuitively appealing, RTI has, in fact, a very weak experimental base, particularly with respect to longitudinal studies of the effects of RTI on a student's progress toward high school graduation and beyond. Currently, no evidence base exists for RTI that adequately addresses the questions raised above, and it remains unclear whether RTI represents a permanent, effective service delivery model or just another passing educational fad. Fletcher and Vaughn (2009) review studies of the effect sizes attached to RTI models and laud their relative impact and success, citing in particular work by Swanson, Hoskyn, and Lee (1999) as the most comprehensive of the meta-analyses of this literature. However, more recently, Swanson (2008) himself has remarked on the tenuousness of RTI research as regards its instructional effectiveness and suggests less of a relationship between instructional methods and reading success. In reviewing what are considered the best studies of the effectiveness of RTI, Swanson concludes that:

“In summary, our results indicated that “best evidence” studies are influenced by a host of environmental and individual-differences variables that make a direct translation to assessing children at risk for LD based on a RTI only model difficult. In addition, although RTI relies on evidence based studies in the various tiers of instruction, especially in the area of reading, it is important to note that even under the most optimal instructional conditions (direct instruction) for teaching reading less than 15% of the variance in outcomes is related to instruction (see Table 5, Swanson, 1999).” (Swanson, 2008, p. 34)

Swanson (2008, p. 34) goes on to discuss the relevance of traditional forms of psychometric testing in evaluating these outcomes, testing no longer required under the RTI process and abhorred by such proponents as Gresham et al. (2008) in particular: “More importantly, studies that left out critical information commonly used in most neuropsychological test batteries (e.g., IQ and achievement scores) on individual differences data (or aggregated differences) greatly inflated treatment outcomes.”

Proponents of RTI also tend to cite the findings of the Carnegie Corporation-supported meta-analysis of writing skills (Graham & Perin, 2007) as supportive of the effectiveness of RTI and as providing guidance for evidence-based instruction, focusing on several reports of effect sizes that were certainly strong but seeming to ignore others reported in the same document that are more tenuous (i.e., below .25). However, the types of studies and populations on which this work has been done are rather limited. “So, even though there is an impressive amount of research testing different approaches to writing instruction, the lack of information on effective writing instruction for low-income, urban, low-achieving adolescent writers remains a serious gap in the literature” (Graham & Perin, 2007, p. 25). These are in fact the populations most likely to come before referral sources and to be subjected to RTI methods, the very populations for which we have the least amount of evidence upon which to base our decisions.

Before educators are encouraged to incorporate such a radical change into school practices, affecting tens of thousands of children, these issues, critical to the actual implementation and practice of an RTI-based approach, must be addressed. OSERS did not address them adequately, most likely because they could not, that is, the data do not yet exist or are less consistent than is politically popular in the current zeitgeist.

Fact or Fad

The frenzy surrounding RTI is reminiscent of that characterizing the whole language approach a decade or so ago. Questioning RTI is seen by some as nearly heretical; these advocates take the position that those who oppose immediate, in-depth application of the RTI model now are simply uninformed. Like much in special education, RTI is characterized by moral imperative and political activism rather than science (also see Kavale et al., 2008). Fuchs and Deshler (2007) have commented frankly on this issue as well:

“In May, 2006, David Tilly and seven of his colleagues sent a letter to the Office of Special Education and Rehabilitation Services (OSERS), which was subsequently given to us by one of the authors. The letter writers chided OSERS on several counts, including that, in their view, the agency wrongly promotes a notion that researchers and practitioners need to understand more about responsiveness to intervention (RTI) to ensure its successful implementation. Tilly and associates expressed an opposite belief, claiming that practitioners have all necessary and sufficient information to conduct RTI competently. They wrote, “University- and field-based research strongly supports the use of Problem-Solving/RTI as the service delivery model that results in the most equitable outcomes for the diverse learners in the United States today …. RTI implementations have improved outcomes in all students and [have] shown reductions in referrals for special education …. The [only remaining] problem is one of scaling [a view echoed by Fletcher and Vaughn, 2009], which is a different research question than …. whether practices like RTI are effective or implementable” [sic]. This “we-know-all-we-need-to-know” message about RTI implementation has also been delivered at conferences and in-service programs across the nation. The obvious intent of the message is to instill confidence among teachers and administrators about RTI implementation and to encourage them to get on with the difficult work of reforming service delivery. A less obvious intent seems to be to characterize those raising questions about RTI as uninformed or (worse) temporizing or (much worse) attempting to obstruct wider use of RTI through passive-aggressive intellectualizing.

The “we-know-it-all” message seems both flawed and counterproductive. It is flawed because the message is factually incorrect. We are not anywhere near having all necessary basic supporting information to ensure that RTI, to quote Tilly and colleagues, improves outcomes “in all students”—a fact of which at least some teachers, administrators, and researchers are well aware. The message is counterproductive because practitioners must understand what we collectively do not know so they can avoid costly mistakes. Moreover, by recognizing what we do not know, practitioners and researchers can work diligently to find solutions, thereby strengthening our shared capacity to implement RTI successfully on a wide scale.” (p. 129)

Like Tilly, Gresham et al. (2008), Gresham (2004), Fletcher and Vaughn (2009), Reschly (2005), and Shinn (2005), among others, conclude in their writings that RTI is ready for implementation and that scaling of RTI is the last remaining major issue. Clearly it is a major issue, but not the only issue.

RTI in Diagnosis

RTI: Another Discrepancy-Based Model

A key component of RTI, one lauded by Gresham et al. (2008), Gresham et al. (2004), Fletcher and Vaughn (2009), Reschly (2005), and Shinn (2005), and promoted by Siegel (1989), among others, is the removal of IQ and the so-called “severe discrepancy” component of LD diagnosis from consideration, especially as it pertains to diagnosis of a learning disability. OSERS never defined what was meant by a “severe discrepancy,” although a Federal task force did examine the issue and recommended a model of best practice (see Reynolds, 1984, for the official report). Nevertheless, each SEA and in most instances each LEA was free to devise and use its own method. They did so with great variability in how severe discrepancies were determined and with inconsistencies at both the SEA and LEA levels. In fact, the use of the severe discrepancy criterion for learning disability diagnosis was never really tested adequately because of its many incarnations. There is now a similar environment, given the regulations' lack of guidance in assessing whether a response to intervention has occurred. RTI inevitably will suffer from the inconsistencies in measurement models that also plagued severe discrepancy analyses (e.g., see Reynolds, 1984, for a discussion of these issues). RTI, in fact, is another form of discrepancy analysis, here between the response of an individual student and that of his or her class or some other designated comparison group (which will also vary across jurisdictions). The issues in determining gain scores under RTI models are many and potentially even more complex than the issues surrounding IQ-achievement discrepancy models, and many variations of how to approach such comparisons will be proffered with varying levels of mathematical sophistication. One can be quite certain, however, that numerous applications will produce different results and identify different children under the different nonconsensual models that will be in use.
Such criticism is unresolved at best and potentially worsened by RTI models. Many proponents of RTI seem not to be aware of the measurement issues involved in RTI models and their similarity to so many of the issues that plagued aptitude-achievement discrepancy models. Gresham et al. (2008), for example, state that “RTI offers an improved approach to assessment that allows educators to help children they know are struggling without many of the problems associated with the IQ-achievement approach” (p. 7, italics added). Fletcher (2008) argues that assessment of a response to intervention is “easily accomplished” (p. 13). This clearly is not the case: determining a response to intervention in single cases is mathematically complex, potentially even more so than in past discrepancy models, and is plagued at this stage with even more vagaries than the IQ-achievement model, which at least is well understood mathematically (e.g., see Reynolds, 1984, 2008).

Others have noted dire concerns as well (e.g., see the Fuchs et al., 2003, 2004; Reynolds, 2005, 2008; and Speece & Walker, 2007, as well as the references and discussion above). Simply put, RTI lacks a consistent means of determining responsiveness and the application of different methods identifies different children, that is, the method is unreliable and inconsistently applied.

Take as just one example, the language of many recently issued state regulations (e.g., see North Carolina and Indiana in particular) as well as the OSERS discussions in the Federal Register of the regulations related to diagnosis of a learning disability under IDEIA 2004. Many of these speak to achievement levels persistently being below grade level standards and the finding of a lack of progress (change) relative to peers when empirically validated instruction is provided under RTI. However, few of these rules, including the OSERS discussion (see the Federal Register, vol. 71, no. 156, August 14, 2006; the regulations exceed 300 Federal Register pages) even define peer or give any indication as to how grade level standards are to be defined or determined.

  • With regard to differences in achievement from peers, should the comparison be to age peers or grade peers? Do gender and other nominal variables matter in defining the peer group, especially given the overrepresentation of boys in special education (and given that girls, on average, perform at a higher academic level)? Should boys be compared only to other boys?

  • More important, however, is peer group achievement defined as the average level of progress of others in the same classroom, the same school building, the same school district, the same state, or the nation? And are age norms or grade norms more appropriate?

  • What metric is best for determining a response to an intervention, and how should it be chosen? Are raw scores the answer, or raw scores converted to an equal-interval scale (often referred to as growth scores, W-growth, or other IRT-derived scores), or is an age- or grade-corrected deviation standard score more appropriate (we would argue for one of the latter standard scores)? Each of these score types addresses a very different question with regard to changes in performance; which type of score is used will directly affect who is determined to have evidenced a response to intervention and will also dictate the conceptual basis for identification of students with a disability (e.g., see Reynolds, 2009). Many of the extant studies of RTI use different metrics, often chosen arbitrarily or for convenience (which also makes the outcomes of such studies noncomparable and not subject to accurate grouping for meta-analyses). The use of arbitrary metrics in research on response to any intervention in any setting often leads to inappropriate conclusions of progress (e.g., see Blanton & Jaccard, 2006, and especially Kazdin, 2006, for a detailed review and analysis).

  • These are nontrivial concerns; the lack of consensual, scientific resolution will inevitably cause clinicians in different locales to identify very different groups of kids as in need of or eligible for special education and will also fail to identify different groups of students who are struggling readers. Moreover, who is identified matters for many reasons, including, among others, instructional effectiveness, availability of related or allied services, various accommodations in school, and disability status in a host of Federal and in some cases state programs.

  • Determining a response to intervention is a very complex form of discrepancy analysis—one is subtracting scores on some performance measure from some predetermined standard or a score on another measure. It can be viewed as a discrepancy analysis that assumes either that all children have equal ability levels when it comes to academic performance (an IQ = 100 for everyone) or that intelligence generally and patterns of abilities are irrelevant to response to intervention as long as empirically validated teaching methods are used (Reynolds, 2008). Once again, we note that RTI inevitably will suffer at a minimum from the inconsistencies in measurement models that also concerned severe discrepancy analyses. Intelligence also is related to RTI and should continue to have a role in thinking about learning disabilities, beyond just ruling out mental retardation (see below).
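The discrepancy arithmetic described above can be sketched minimally. All scores here are invented for illustration; the point is only that "response" computed as a difference score flips sign depending on which comparison standard is chosen, so the same child can appear responsive against classroom peers yet nonresponsive against a broader norm.

```python
def discrepancy(child_score, standard):
    """Response to intervention framed as a simple difference score:
    positive means the child is at or above the chosen standard."""
    return child_score - standard

# Invented post-intervention scores for one bright struggling reader
# placed in a classroom whose peers score below the national norm.
child_post = 51
class_mean_post = 50       # hypothetical mean of classroom peers
national_norm_post = 62    # hypothetical same-age national norm

vs_class = discrepancy(child_post, class_mean_post)        # +1: "responsive"
vs_national = discrepancy(child_post, national_norm_post)  # -11: "nonresponsive"

print(vs_class, vs_national)  # 1 -11
```

The choice of standard (classroom, building, district, state, or national norm) is precisely the unspecified parameter noted in the bulleted questions above, and it alone determines which children are identified.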

Is Intelligence Irrelevant to the Diagnosis, Intervention, and Outcomes of LD?

The relationship of IQ to outcomes in RTI is complex and not easily understood. Swanson (2008) has undertaken an analysis of this issue in particular, and there is recent specific work aimed directly at testing such a hypothesis (Fuchs & Fuchs, 2006; Fuchs & Young, 2006). Contrary to the views of Gresham et al. (2008), Gresham et al. (2004), Fletcher and Vaughn (2009), Reschly (2005), Shinn (2005), and Siegel (1989), among others, these researchers conclude that IQ continues to be relevant and does predict responsiveness. In discussing the complexity of the issue and analyzing the results of published research, Swanson (2008) concluded:

“… although the degree of discrepancy between IQ and reading was irrelevant in predicting effect sizes, the magnitude of differences in performance (effect sizes) between the two groups were related to verbal IQ. They found that when the effect size differences between discrepancy (reading disabled group) and nondiscrepancy groups (low achievers in this case) on verbal IQ measures were greater than 1.00 (the mean verbal IQ of the reading disabled (RD) group was approximately 100 and the verbal IQ mean of the low achieving (LA) group was approximately 85), the approximate mean effect size on various cognitive measures was 0.29. In contrast, when the effect size for verbal IQ was less than 1.00 (the mean verbal IQ for the RD group was approximately 95 and the verbal IQ mean for the LA group was approximately 90), estimates of effect size on various cognitive measures were close to 0 (M = −0.06). Thus, the further the RD group moved from IQs in the 80 range (the cut-off score used to select RD samples), the greater the chances their overall performance on cognitive measures would differ from the low achievers.” (p. 35) “My point in reviewing these major syntheses of the literature is to suggest that removing IQ as an aptitude measure in classifying children as LD, especially verbal IQ, from assessment procedures is not uniformly supported by the literature.” (p. 36)

Even critics of cognitive testing have produced research demonstrating that intelligence mediates responsiveness to intervention in reading instruction (e.g., see Table 10 of Vellutino, Scanlon, Small, & Fanuele, 2003), although such effects are seldom discussed adequately.

Noting the incomplete nature of existing literature on the issue, Swanson continues by telling us “The obvious implication is that IQ has relevance to any policy definitions of LD. Groups of students with LD who have aptitude profiles similar to generally poor achievers or slow learners (low IQ and low reading), produced higher effect sizes in treatment outcomes than those samples with a discrepancy between IQ and reading. Given there has been very little research on why these discrepancies occur, it is important to recognize that some parts of the equation, such as IQ may still have a role.” (p. 37)

For highly practical reasons, consideration of IQ is relevant, both in consideration of the RTI process and in the diagnosis of LD. In the RTI process itself, as it proceeds from one tier to another, consider, what is the impact of so-called peer comparison to classmates if a specific child is highly intelligent or even gifted? And particularly, consider the impact if such a child is in a class of “peers” who are functioning at lower cognitive levels. Such a bright student might be functioning below his or her capability but at an absolute level comparable to the class average of his or her less able peers. That struggling reader, of whom we and others have seen very many, would be entirely invisible and overlooked in such an RTI process. In fact, often, the only way such struggling readers are identified is through a complete, comprehensive assessment in which cognitive abilities and psychological processes are evaluated. Within the RTI process, such students now would never be detected, much less referred for a full evaluation of their cognitive and psychological processing abilities. And most critically, such struggling readers would not receive helpful interventions or accommodations “despite the fact that their relative deficit in a particular domain could cause severe psychological distress as well as unexpected underachievement” (Boada, Riddle, & Pennington, 2008, p.185) and could be ameliorated by such interventions and accommodations.

The data clearly show that these bright students, although reading at higher levels, still are performing below ability and share many qualities (e.g., phonological deficits) with lower functioning struggling readers (Hoskyn & Swanson, 2000; Stuebing et al., 2002). It would be no fairer to overlook these bright struggling readers than it would be to overlook their lower functioning classmates.

Doing Away With Evaluation of Cognitive and Psychological Processes in the Identification of LD: Helpful or Harmful?

Many proponents of RTI (e.g., Reschly, 2005; Shinn, 2005) argue that children who do not respond to instructional methods validated as effective with the majority of children should then be considered children with a learning disability and moved on to a special education placement. Those in this camp do not see any further diagnostic evaluation or assessment as useful either for designing instruction or for determining the presence or absence of a disability. Gresham et al. (2008) argue that no standardized testing is necessary. Even proponents of RTI who agree that comprehensive evaluations are necessary are often quite restrictive in what they consider a comprehensive assessment. For example, Fletcher and Vaughn (2009) note that children who fail in RTI are then referred for a "comprehensive" evaluation. They are vague about the details of what constitutes such an evaluation, but they do note that they consider RTI, together with assessment of academic achievement and consideration of exclusionary criteria, to be the elements of a "comprehensive evaluation." Here, and in other writings and statements, they appear to indicate that evaluation of cognitive ability (unless disabilities other than LD are suspected; see Fletcher, 2008) and psychological processes should not be included. Fletcher (2008) does argue that screening for other affective and behavioral variables (e.g., the presence of ADHD) that could be associated with learning problems should be accomplished on a prereferral basis, and there are accurate yet brief means for conducting such Tier 1 screenings (e.g., Kamphaus & Reynolds, 2007). It is important to note, however, that there is substantial comorbidity between ADHD and LD (Shaywitz, Fletcher, & Shaywitz, 1994), so the occurrence of ADHD in no way rules out the presence of an LD causing academic difficulties, an LD that must be sought and identified.

The vagaries of the OSERS rules for implementation of IDEIA 2004 leave open the determination of just what constitutes a comprehensive assessment. Many states are now issuing regulations that define the RTI process itself as a comprehensive evaluation, relieving the state from providing any additional diagnostic work before declaring a child a student with a disability, often on the argument that assessment of cognitive skills has nothing to lend to designing instruction (e.g., Gresham et al., 2008). Under an RTI model, the determination of a learning disability would then be made on the basis of a student's failure to progress at the same rate as other children in the same classroom, once it had been ascertained that appropriate instructional methods had been applied (how this would be determined is unspecified). Intellectual level would be ignored so long as it was above the levels generally regarded as reflecting the presence of mental retardation, and no disturbance or dysfunction of any form of psychological processing would need to be considered or demonstrated. This approach represents a fundamental alteration that cuts at the very roots of the concept of an LD as an unexpected difficulty in learning intrinsic to the child. It also runs counter to what is seen as the intent of the Federal regulations; indeed, Posny (2007), while Director of OSERS, clearly stated that a comprehensive evaluation was required before a student can be declared to have a disability (see also Naglieri, 2007).

The approach proffered by Gresham et al. (2008), Reschly (2005), Shinn (2005), and Fletcher and Vaughn (2009) fosters the dangerous concept of a disability that is relative to the context of the individual classroom, as opposed to residing within the individual, as when aptitude is compared to achievement. For example, Reschly (2005) argued that it was not only reasonable but a desirable and expected outcome of RTI that a child would be considered learning disabled in one teacher's classroom but not in a different classroom where the general achievement level and progress rate of other students were different. This fundamentally alters the concept of disability at its very roots. A disability is recognized as a psychopathological condition primarily associated with the individual. The RTI model instead focuses on the failure of a child-school interaction that is complex and modified by the overall achievement level of an individual classroom. While focusing on potential failures of the child-school interaction and seeking remedies other than special education is entirely appropriate, such a failure is not a disability as traditionally understood; it more accurately reflects a failure of general education to accommodate normal variations in learning, and while we strongly support correcting such failures, we disagree that they represent a disability. We share the concern expressed by others (e.g., Boada, Riddle, & Pennington, 2008, pp. 184–185): "the methods of identification would seem to result in the term SLD specifying simply a group of low-achieving children, who were not responding well to good instruction." Such an approach appears to be a regressive step back to the era of "slow learners" and educational tracking. Furthermore, these investigators go on to state, "children with 'grade level' performance on some discrete measures may in fact be very discrepant on other more integrative tasks relative to their peers and to skills in other domains. These children would also benefit from intervention" (p. 190). Moreover, one can easily manipulate the number and type of students identified as having a disability by manipulating classroom assignments in systematic ways that group low-achieving students together such that no students are discrepant from their classroom peers. Cynics of RTI legislation might even see this as a purpose of such legislation, since it would reduce the costs of special education tremendously by significantly reducing the number of students eligible for services.

Standardized assessments of cognitive skills and processing do have research support as contributors to instructional planning. As we have noted above, IQ does mediate response to intervention, and we consider this information important. Gresham et al. (2008) attack the position that assessment of cognitive processes is useful by pointing out the failure of subtest-level profile analysis of intelligence test results to withstand careful psychometric scrutiny. The latter is a point with which we have been in agreement for some years (e.g., see Reynolds & Kamphaus, 2003). However, Gresham et al. appear to generalize from the failure of subtest-level profile analysis on a narrow range of tests (IQ tests) to all measures of cognitive processing, and subtest-level profile analysis is not the only means of evaluating cognitive functioning in relation to academic achievement. Neuropsychological models have a significant history of contributing to the determination of more effective instructional approaches (e.g., see Hartlage, 1975a, 1975b; Hartlage & Lucas, 1973; Hartlage & Reynolds, 1981; Naglieri & Kaufman, 2008; Reynolds, 1988). More recently, individual research reports support the use of models of assessment steeped in knowledge of brain function and brain-behavior relationships as capable of providing quite useful guidance for instructional approaches (e.g., Haddad, Garcia, Naglieri, Grimditch, McAndrews, & Eubanks, 2003; Naglieri & Gottling, 1995, 1997; Naglieri & Johnson, 2000). Reviews of the literature have also demonstrated the applicability and efficacy of such models (e.g., Berninger & Richards, 2002; Naglieri, 2008; Naglieri & Goldstein, 2006; Naglieri & Kaufman, 2008; Swanson, 2008). The individual studies as well as the reviews, especially as compiled by Berninger and Richards, demonstrate the efficacy of pedagogical principles based on the outcomes of careful neuropsychological assessments (see also Section V of D'Amato, Fletcher-Janzen, & Reynolds, 2005). Berninger has provided additional descriptions of complete programs for the diagnosis and treatment of written-language as well as arithmetic difficulties, validated in the neuroscience and instructional literatures (e.g., see Berninger & Holdnack, 2008, especially pp. 74–75, for discussion and additional references).

There are other examples of the role of standardized assessment in determining what to teach students who are experiencing learning problems. As one clear example, research over the past 50 or more years in educational, school, and related areas of psychology has demonstrated repeatedly that students who engage in strategic learning and test taking perform at higher levels academically than those who do not (e.g., see Stroud & Reynolds, 2009, for a review). Academic achievement can often be improved significantly by improving the study skills, learning, reading comprehension, test-taking, and related strategies of learners at all ages, and such instruction is effective with both regular and special education students. However, standardized testing of students' skills in the relevant domains is needed to determine which students require such instruction (not all do), what strategies need to be taught to them, and how they are best approached (e.g., see Stroud & Reynolds, 2006, 2009). Certainly, this information can be gleaned via other methods, but standardized assessment of students' skills in these domains is faster, more accurate, and more cost-effective than alternative approaches. Finally, the relatively recent and growing awareness of the profound role of lack of fluency in reading brings to light a group overlooked by RTI: bright students who are often particularly and adversely affected by their inability to read quickly. Such bright students, who may read accurately in the so-called average range, are often overlooked unless more comprehensive histories and evaluations are performed. Fluency impairments are persistent, and there are no known cures. Bright dyslexic students can improve their word identification but will continue to read slowly, not automatically, and with great effort; with support and accommodations, especially extended time, they can comprehend at high levels. However, such struggling students must first be identified, which they are not in current RTI approaches.

The Dangerous Slippery Slope of an RTI Definition of LD

Given the historical intent and current state of scientific knowledge, just how should a learning disability be determined? From its inception, the term learning disability has referred to, and is currently conceptualized as, an unexpected difficulty in learning in one of seven or so areas of achievement, most commonly in the domain of reading. The approach and definition embedded in RTI, followed to their ultimate conclusion, have the strong probability of eliminating the basic concept of learning disability as it was intended and as it is currently understood. This would be extraordinarily unfortunate, particularly since so much progress has been made in neuroscience in understanding and validating, for example, dyslexia, the most common SLD. With the advent of functional brain imaging, it became possible literally to peer into the brain and observe different neural systems at work in, for example, typical readers and readers with reading disability. As a result, an invisible disability has become visible and accessible to scientific study at a neurobiological level.

Shaywitz (2005), among others, has noted the extraordinary complexity of the brain and its relationship to learning. The brain has long been known to be a dynamic organ of information processing and learning, one that constantly changes in response to its environment and, at the same time, modifies the environment in which it resides. Neuropsychology demonstrated many years ago that children with different neuropsychological profiles, representing or modeling brain-behavior relationships for the individual child, showed differential responses to varied pedagogical approaches (e.g., Reynolds, 1988). In the same vein, the work to which Shaywitz refers has already made visible, and provided evidence for the validity of, a formerly hidden disability, dyslexia, and has supplied the neurobiological basis underlying the lack of fluency as well as the neurobiological evidence for the need for the accommodation of extended time on examinations for those who are dyslexic (e.g., see also Shaywitz et al., 2002). Such studies hold strong promise of providing hard neurobiological evidence for why different methods are effective with different brains. As these methods develop and are further refined, more fine-grained and deeper levels of understanding become possible, leading to more clearly targeted and refined intervention methods that are truly individualized and, consequently, more effective.

RTI as a diagnostic model lacks not only diagnostic coverage and validity; it also provides few clues about what to do instructionally after a child fails to respond. More of the same ineffective instruction does not seem likely to work. One of the major purposes of a comprehensive assessment is to derive hypotheses about a student's cognitive profile that allow the derivation of different and more effective instruction. The evidence is clear that remedial efforts focused on nonacademic process variables are not effective. Still, teaching methods for academic deficiencies that have been tried with a student and proven ineffective need modification as well and should not be continued. Eliminating the evaluation of cognitive abilities and psychological processes reverts to a one-size-fits-all mentality in which it is naively assumed that all children fail for the same reason. In the area of reading, the model suggesting that remediation of phonological awareness deficits will remedy reading problems in virtually all children has, alas, proven not to be correct. Today, we are witnessing many children whose phonological skills have been remediated, and remediated well, who continue to struggle to read fluently and to comprehend what they have read. At the current stage of scientific knowledge, it is only through a comprehensive evaluation of the full extent of a student's cognitive and psychological abilities and processes that insight into the underlying, proximal, and varied root causes of reading difficulties can be gained and specific interventions then provided, targeted to each student's individual needs.

We (e.g., Reynolds, 1988; Shaywitz, 2005) recognize the need for differentiated instruction driven by student characteristics and do not subscribe to the notion that one instructional model, even if evidence based, fits all. Shaywitz (2005) has provided a narrative summary of the essential contributions the behavioral sciences and neurosciences make not only to the assessment and identification of learning disabilities but also to the development of interventions driven by student characteristics:

"Modern brain imaging technology now allows scientists to noninvasively peer into, and literally watch, the brain at work, in children and in adults. Using this new technology, we and other laboratories around the world have now identified the specific neural systems used in reading, demonstrated how these systems differ in good and struggling readers, pinpointed the systems used in compensation, and identified the systems used in skilled or fluent reading. In addition, we have identified different types of reading disabilities and also demonstrated the malleability or plasticity of the neural circuitry for reading in response to an evidence-based reading intervention. These and other studies have provided a new and unprecedented level of insight and understanding about common neuropsychological disorders affecting children and adults, their mechanisms, their identification, and their effective treatment." (p. vii)

Clearly, there are many compelling reasons to conduct truly comprehensive assessments of students who fail RTI and not to declare RTI itself a comprehensive assessment. For RTI to be effective, the interventions need to be tailored to the needs of the individual child. Even staunch proponents of RTI recognize such a rubric in their own research. For example, Vellutino et al. (2003), in discussing the results of an effective intervention for early reading problems, note that their results " … add to the growing body of evidence demonstrating that reading difficulties in most beginning readers can be corrected if such children are provided with early and comprehensive remedial intervention tailored to their individual needs" (p. 31). Knowing the individual needs and how to remediate them comes from a comprehensive assessment. It cannot be emphasized enough that the most optimal, and hence ultimately effective, match of individual needs to specific intervention components for each child rests on knowledge of that child's history and individual profile of cognitive and psychological strengths and weaknesses; this is precisely what seems to be overlooked in the current frenzy to claim that RTI, perhaps with a narrow assessment of academic skill, is sufficient for the identification and intervention of learning difficulties. We should also note that it is not only the specification of which components require intervention, but also critical elements of the process of effective implementation, for example, intensity (group size) and duration (minutes per day and length of intervention over time), that are currently guesswork and not evidence based for RTI procedures.

In summary, there are many reasons that RTI fails as a reliable approach to accurate diagnosis of and effective intervention for students with an SLD: (a) RTI fails to identify bright, albeit struggling, readers who require and would benefit from intervention and accommodation; (b) RTI delineates neither which specific components of, for example, reading (phonological awareness, fluency, vocabulary, orthographic processing, attention, or others) require intervention, nor which specific strengths can assist in bootstrapping weaker areas; (c) how RTI is best implemented, including the intensity and duration of intervention, is currently unknown; and (d) what constitutes the R in RTI remains undefined.

As an approach to diagnosis, RTI does not have proven value as either a rule-out or a rule-in process for a disability. Simply stated, a student who completes RTI successfully, especially if he or she is above classmates in ability or achievement, may nonetheless have a disability. Conversely, one who fails in RTI may or may not have a disability, and the nature of the disability, when one is present, remains unknown after a failed RTI. A failed RTI is neither a necessary nor a sufficient condition for determining the presence of a learning disability. Although it is intuitively appealing to argue that RTI is a strong process for ruling out a disability, RTI, in its relativistic form of comparison, cannot be applied accurately even in this manner. More unfortunately, it is becoming the sole or major criterion for the identification of an SLD.

Some Nagging, Assorted Issues

Although space limitations prevent a full critique of RTI in its current state, the goal of this review is to raise issues that the field must address both conceptually and empirically. Many of the underlying assumptions and practices in RTI seem problematic as well. The following concerns also need to be addressed as the field struggles with the many aspects of RTI implementation.

Children falling behind in academic learning in the public schools have been attending school, and individuals certified to teach in their state of residence have been providing their instruction. Tier 1 of RTI is the provision of empirically validated instruction to referred children to see whether they respond to instructional techniques known to be effective for most children. This raises a host of issues, the first being the presumption that a referred child has not been receiving academic instruction empirically validated as effective (a presumption that ignores the exceptional accountability required under NCLB and Reading First, which includes the consequence of a school being taken over or placed on monitoring by Federal authorities). If that presumption is true, neither have the student's classmates, which means the teachers in the regular classroom are not using validated instructional methods. Why not?

Should not accredited colleges and universities that graduate teachers be required to train these teachers to use validated instructional methods?

Should not schools accredited by their state and/or regional certification body be required to engage in validated pedagogical techniques and to use curriculum programs and materials that have empirical support of their effectiveness?

Principals are looked to as the instructional leaders in schools. Should they not be versed in the research on effective instructional techniques and monitor teaching effectiveness in their respective schools in some way?

Are not all children entitled to receive instruction using empirically validated teaching methods and materials, and not just those students who have failed to learn in the face of teaching methods not known to be effective?

Given the extraordinary uncertainty and the lack of trustworthy empirical data about the role of RTI in the identification and remediation of LD, the field is left with the question: "is RTI the answer to the search for the most effective strategy for the early identification and accurate diagnosis of a reading disability and for providing effective reading instruction and timely intervention services? Or is RTI more of a Trojan horse, outwardly appealing but filled with risky, unproven, and in the end, potentially harmful practices, or is it somewhere in-between?" (Shaywitz, 2008, p. xiii). Given the current rush to RTI in the absence of an evidence base, it may be that a "wait to fail" model (the catchy characterization of the severe discrepancy model) is simply being replaced by a "watch them fail" model known as RTI. In conclusion, our review of the evidence surrounding RTI indicates that the best interests of potential SLD students will be served when identification and intervention are guided by evidence rather than anecdote or politics, and when all children, including bright students, receive equal opportunity for identification, remediation, and accommodation.

Based on the currently available evidence related to the use of RTI for SLD diagnosis, we offer the following practice guidelines.

  1. Before placing a student in special education, conduct a comprehensive assessment to identify the student's areas of academic weakness and strength and to determine why the classroom-based intervention provided under the RTI model was ineffective. Determine or rule out the presence of a disability in all categories specified in the federal law; do not presume the presence of an SLD. Rule in or rule out comorbid disorders, and assess the need for interventions in multiple domains, including the emotional and behavioral domains. Gather information to guide the choice of interventions. This is especially necessary in the emotional and behavioral domain, where the evidence-based intervention literature requires that interventions be matched carefully to the underlying behavioral or emotional domain in which problems are extant (e.g., see Vannest, Reynolds, & Kamphaus, 2009). Interventions that are effective for treating and managing aggression, for example, are unlikely to be effective in treating and managing depression, posttraumatic stress disorder, or anxiety disorders, even though these disorders in children may have aggressive behavior as one component of their manifestation.

  2. RTI should not be used as a model of diagnosis or disability determination. Where rules are written suggesting that RTI constitutes a comprehensive assessment, RTI still should not be used as the sole or even primary determinant of the presence of an SLD (or any other diagnosis) without further empirical evidence of the accuracy and equity of diagnosis, studies of gender and ethnic bias resulting from the use of RTI as a diagnostic method, and data on rates of misclassifying students as SLD when other disabilities are present, including failures to diagnose comorbid disorders. RTI is best practiced as a model of prevention, providing struggling students with early interventions that avert the development of a disability.

  3. In the case of dyslexia, RTI is particularly problematic as a model of disability determination. By its nature, RTI, based on comparison of the individual to the group, is inappropriate and fails to identify bright children whose reading skills are at the level of the average student in the class (or school) but far below their own ability. Dyslexia, the most common of the SLDs, by definition represents an "unexpected" difficulty in reading relative to the person's ability. RTI fails to identify such children, particularly if they are very bright. For these children, a full evaluation including consideration of their history, oral language acquisition, literacy skills (including fluency), and cognitive ability is necessary. Anything short of this depth and breadth of assessment will fail to identify bright struggling readers.

  4. Given that research has found that early reading intervention should be individualized by selecting from and/or combining validated interventions, more comprehensive forms of assessment are necessary at the outset of intervention, including comprehensive documentation of areas of academic weakness and the subskills comprising them (e.g., via a diagnostic reading battery). Reading requires the acquisition of a host of subskills (e.g., fluency) that are important but may be insufficient for becoming an adept reader, that is, for achieving high rates of comprehension. It is necessary not only to assess these subskills and provide direct instruction in deficient areas, but also to assess and teach comprehension and more complex reading comprehension strategies. Indeed, teaching comprehension strategies in conjunction with basic skills produces larger effect sizes in RTI models for reading problems than providing direct instruction in the basic subskills alone (e.g., Scammacca et al., 2007; Stroud & Reynolds, 2009).

  5. Emphasize early identification through objective screening of all students in both the academic and the emotional and behavioral domains, thereby adopting RTI as a model of primary prevention. Waiting for students to be referred or designated by a teacher for intervention in RTI defeats much of the real rationale for intervening early. By the time students come to the attention of teachers and others, they are substantially behind academically or have developed obvious emotional and behavioral disorders that could have been prevented if students at high risk for these disorders had been identified through objective screening and evidence-based interventions had been applied early.

  6. Systematically monitor and assess treatment/intervention fidelity via independent professionals who are not involved in direct service delivery to students. Avoid monitoring treatment fidelity through the use of self-report by those providing the intervention.

  7. When evaluating a student's response to intervention (the R in RTI), know which measurement model you are using, specifically which questions the model answers and why it is the best model to use, and understand that other models exist and that each model not only answers different questions about a student's response (or change, or discrepancy from a prior state) but also will lead to the identification of different children as disabled (e.g., see Reynolds, 2009).

  8. Require the instructional leaders in each school to document that teachers are in fact using evidence-based teaching strategies in the classroom with all students, and evaluate teachers on their application of these methods in everyday classroom activities.

Contributor Information

Cecil R. Reynolds, retired Texas A&M University.

Sally E. Shaywitz, The Audrey G. Ratner Professor in Learning Development, Yale University School of Medicine, Co-Director, Yale Center for Dyslexia and Creativity.

References

  • Berninger V, Holdmack J. Nature-nurture perspectives in diagnosing and treating learning disabilities: Responses to questions begging answers that see the forest and the trees. In: Fletcher-Janzen E, Reynolds CR, editors. Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. New York: Wiley; 2008. [Google Scholar]
  • Berninger V, Richards TL. Brain literacy for educators and psychologists. San Diego: Academic Press; 2002. [Google Scholar]
  • Blanton H, Jaccard J. Arbitrary metrics in psychology. American Psychologist. 2006;61:27–41. [PubMed] [Google Scholar]
  • Boada R, Riddle M, Pennington B. Integrating science and practice in education. In: Fletcher-Janzen E, Reynolds CR, editors. Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. New York: Wiley; 2008. [Google Scholar]
  • D'Amato R, Fletcher-Janzen E, Reynolds CR. Handbook of school neuropsychology. New York: Wiley and Sons; 2005. [Google Scholar]
  • Fletcher J. Identifying learning disabilities in the context of Response to Intervention: A hybrid model. 2008. An online paper for the RTI network sponsored by the NCLD: Http://www.rtinetwork.org/Learn/LD/ar/HybridModel.
  • Fletcher J, Vaughn S. Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives. 2009;3:30–37. [PMC free article] [PubMed] [Google Scholar]
  • Fletcher-Janzen E, Reynolds CR, editors. Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. New York: Wiley; 2008. [Google Scholar]
  • Fuchs D, Deschler D. What we need to know about responsiveness to intervention (and shouldn't be afraid to ask) Learning Disabilities Research & Practice. 2007;22:129–136. [Google Scholar]
  • Fuchs D, Fuchs L, Compton D. Identifying reading disabilities by responsiveness-to-instruction: Specifying measures and criteria. Learning Disability Quarterly. 2004;27:216–227. [Google Scholar]
  • Fuchs D, Fuchs LS. What the inclusion movement and responsiveness-to-intervention say about high-incidence disabilities. Keynote for the Inaugural International Conference of the University of Hong Kong's Center for Advancement in Special Education; Hong Kong. 2006. [Google Scholar]
  • Fuchs D, Mock D, Morgan P, Young C. Responsiveness to intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research and Practice. 2003;18:157–171. [Google Scholar]
  • Fuchs D, Young C. On the irrelevance of intelligence in predicting responsiveness to reading instruction. Exceptional Children. 2006;73:8–30. [Google Scholar]
  • Graham S, Perin D. Writing next: Effective strategies to improve writing of adolescents in middle and high schools: A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education; 2007. [Google Scholar]
  • Gresham F, Reschly D, Tilly D, Fletcher J, Burns M, Christ T, et al. Viewpoint: Response to AASP. Comprehensive evaluation of learning disabilities: A response to intervention perspective. Communique. 2004;33:34–35. [Google Scholar]
  • Gresham F, Restori A, Cook C. To test or not to test: Issues pertaining to response to intervention and cognitive testing. Communique. 2008;37:5–7. [Google Scholar]
  • Haddad F, Garcia Y, Naglieri J, Grimditch M, McAndrews A, Eubanks J. Planning facilitation and reading comprehension: Instructional relevance of the PASS theory. Journal of Psychoeducational Assessment. 2003;21:282–289. [Google Scholar]
  • Hartlage LC. Neuropsychological approaches to predicting outcome of remedial education strategies for learning disabled children. Pediatric Psychology. 1975a;3:23. [Google Scholar]
  • Hartlage LC. Preventing initial reading failure by prescreening for learning style. Paper presented to the annual meeting of the Association for Children with Learning Disabilities; New York. 1975b. [Google Scholar]
  • Hartlage LC, Lucas DG. Group screening for reading disability in first grade children. Journal of Learning Disabilities. 1973;6:48–52. [Google Scholar]
  • Hartlage LC, Reynolds CR. Neuropsychological assessment and the individualization of instruction. In: Hynd G, Obrzut J, editors. Neuropsychological assessment and the school-age child: Issues and procedures. New York: Grune & Stratton; 1981. [Google Scholar]
  • Hoskyn M, Swanson HL. Cognitive processing of low achievers and children with learning disabilities: A selective meta-analytic review of the published literature. School Psychology Review. 2000;29:102–119. [Google Scholar]
  • Individuals With Disabilities Education Improvement Act (IDEIA), Pub. L. No. 108–446 (2004). [Google Scholar]
  • Kamphaus RW, Reynolds CR. Behavioral and emotional screening system. Minneapolis: NCS Pearson; 2007. [Google Scholar]
  • Kavale K, Kauffman J, Bachmeier R, LeFever G. Response-to-intervention: Separating the rhetoric of self-congratulation from the reality of specific learning disability identification. Learning Disability Quarterly. 2008;31:135–150. [Google Scholar]
  • Kazdin AE. Arbitrary metrics: Implications for identifying evidence-based treatments. American Psychologist. 2006;61:42–49. [PubMed] [Google Scholar]
  • Naglieri JA. RTI alone is not sufficient for SLD identification: Convention presentation by OSEP Director Alexa Posny. Communiqué 2007;35:52–53. [Google Scholar]
  • Naglieri JA. Best practices in linking cognitive assessment of students with learning disabilities to interventions. In: Thomas A, Grimes J, editors. Best practices in school psychology. 5th. Bethesda: National Association of School Psychologists; 2008. pp. 679–696. [Google Scholar]
  • Naglieri JA, Goldstein S. The role of intellectual processes in the DSM-V diagnosis of ADHD. Journal of Attention Disorders. 2006;10:1–6. [PubMed] [Google Scholar]
  • Naglieri JA, Gottling SH. A cognitive education approach to math instruction for the learning disabled: An individual study. Psychological Reports. 1995;76:1343–1354. [PubMed] [Google Scholar]
  • Naglieri JA, Gottling SH. Mathematics instruction and PASS cognitive processes: An intervention study. Journal of Learning Disabilities. 1997;30:513–520. [PubMed] [Google Scholar]
  • Naglieri JA, Johnson D. Effectiveness of a cognitive strategy intervention in improving arithmetic computation based on the PASS theory. Journal of Learning Disabilities. 2000;33:591–597. [PubMed] [Google Scholar]
  • Naglieri JA, Kaufman AS. IDEIA 2004 and specific learning disabilities: What role does intelligence play? In: Grigorenko E, editor. Educating individuals with disabilities: IDEIA 2004 and beyond. New York: Springer; 2008. pp. 165–195. [Google Scholar]
  • Pogrow S. Reforming the wannabe reformers: Why education reforms almost always end up making things worse. Phi Delta Kappan. 1996;77:656–663. [Google Scholar]
  • Posny A. IDEA 2004—Top ten key issues that affect school psychologists. Invited address to the annual convention of the National Association of School Psychologists; New Orleans. Mar, 2007. [Google Scholar]
  • Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. U.S. Department of Health and Human Services, Public Health Service, National Institutes of Health, National Institute of Child Health and Human Development; 2000. [Google Scholar]
  • Reschly D. RTI paradigm shift and the future of SLD diagnosis and treatment. Paper presented to the Annual Institute for Psychology in the Schools of the American Psychological Association; Washington, DC. Aug, 2005. [Google Scholar]
  • Reynolds CR. Critical measurement issues in assessment of learning disabilities. Journal of Special Education. 1984;18:451–476. [Google Scholar]
  • Reynolds CR. Putting the individual into the aptitude-treatment interaction. Exceptional Children. 1988;54:324–331. [PubMed] [Google Scholar]
  • Reynolds CR. RTI, neuroscience, and sense: Chaos in the diagnosis and treatment of learning disabilities. In: Fletcher-Janzen E, Reynolds CR, editors. Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. New York: Wiley; 2008. pp. 14–27. [Google Scholar]
  • Reynolds CR. Determining the R in RTI: Which score is the best score?. Miniskills workshop presented at the annual meeting of the National Association of School Psychologists; Boston. Feb, 2009. [Google Scholar]
  • Reynolds CR. Considerations in RTI as a method of diagnosis of learning disabilities. Paper presented to the Annual Institute for Psychology in the Schools of the American Psychological Association; Washington, DC. Aug, 2005. [Google Scholar]
  • Reynolds CR, Kamphaus RW. Reynolds Intellectual Assessment Scales and Reynolds Intellectual Screening Test: Professional Manual. Lutz, FL: Psychological Assessment Resources; 2003. [Google Scholar]
  • Reynolds CR, Shaywitz SE. Response to Intervention: Prevention and Remediation, Perhaps. Diagnosis, No. Child Development Perspectives. 2009;3:44–47. [PMC free article] [PubMed] [Google Scholar]
  • Scammacca N, Roberts G, Vaughn S, Edmonds M, Wexler J, Reutebuch CK, et al. Reading interventions for adolescent struggling readers: A meta-analysis with implications for practice. Portsmouth, NH: RMC Research Corporation, Center on Instruction; 2007. [Google Scholar]
  • Shaywitz BA, Shaywitz SE, Pugh KR, Mencl WE, Fulbright RK, Skudlarski P, et al. Disruption of posterior brain systems for reading in children with developmental dyslexia. Biological Psychiatry. 2002;52:101–110. [PubMed] [Google Scholar]
  • Shaywitz S, Fletcher J, Shaywitz B. Issues in the definition and classification of attention deficit disorder. Topics in Language Disorders. 1994;14:1–25. [Google Scholar]
  • Shaywitz SE. Foreword. In: D'Amato R, Fletcher-Janzen E, Reynolds CR, editors. Handbook of school neuropsychology. New York: Wiley; 2005. pp. vii–viii. [Google Scholar]
  • Shaywitz SE. Foreword. In: Fletcher-Janzen E, Reynolds CR, editors. Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. New York: Wiley; 2008. [Google Scholar]
  • Shinn M. Who is LD? Theory, research, and practice; Paper presented to the Annual Institute for Psychology in the Schools of the American Psychological Association; Washington, DC. Aug, 2005. [Google Scholar]
  • Siegel LS. IQ is irrelevant to the definition of learning disabilities. Journal of Learning Disabilities. 1989;22:469–478. 486. [PubMed] [Google Scholar]
  • Speece DL, Walker CY. What are the issues in response to intervention research? In: Haager D, Klingner J, Vaughan S, editors. Evidence-based reading practices for response to intervention. Baltimore: Paul H. Brookes; 2007. pp. 287–301. [Google Scholar]
  • Stuebing K, Fletcher J, LeDoux J, Lyon R, Shaywitz S, Shaywitz B. Validity of IQ-discrepancy classifications of reading disabilities: A meta-analysis. American Educational Research Journal. 2002;39:469–518. [Google Scholar]
  • Stroud KC, Reynolds CR. School motivation and learning strategies inventory. Los Angeles: Western Psychological Services; 2006. [Google Scholar]
  • Stroud KC, Reynolds CR. Assessment of learning strategies and related constructs in children and adolescents. In: Gutkin T, Reynolds CR, editors. The handbook of school psychology. 4th. New York: Wiley; 2009. in press. [Google Scholar]
  • Swanson HL. Neuroscience and response to instruction (RTI): A complementary role. In: Fletcher-Janzen E, Reynolds CR, editors. Neuropsychological perspectives on learning disabilities in the era of RTI: Recommendations for diagnosis and intervention. New York: Wiley; 2008. pp. 28–53. [Google Scholar]
  • Swanson HL, Hoskyn M. Definition × treatment interactions for students with learning disabilities. School Psychology Review. 1999;28:644–658. [Google Scholar]
  • Swanson HL, Hoskyn M, Lee C. Interventions for students with learning disabilities: A meta-analysis of treatment outcome. New York: Guilford Press; 1999. [Google Scholar]
  • U.S. Department of Education. Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108–446. Federal Register. 2004;70(No. 118):35802–35803. [Google Scholar]
  • Vannest K, Reynolds CR, Kamphaus RW. BASC-2 intervention guide for emotional and behavioral problems. Bloomington, MN: Pearson Assessments; 2009. [Google Scholar]
  • Vellutino FR, Scanlon DM, Small S, Fanuele D. Response to Intervention as a vehicle for distinguishing between reading disabled and non-reading disabled children: Evidence for the role of kindergarten and first grade intervention. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium; Kansas City, MO. Dec, 2003. [PubMed] [Google Scholar]

Which process is essential to effectively implement RTI?

Data-based decision making is the essence of good RTI practice; it underpins the other three components: screening, progress monitoring, and multi-level instruction. All components must be implemented using culturally responsive, evidence-based practices.

What is an essential element of RTI implementation?

The National Center on RTI identifies four essential components of a research-based RTI framework: universal screening, continuous progress monitoring, a multi-level prevention system, and data-based decision making.

What is the process of response to intervention?

Response to Intervention, or RTI, is an educational strategy used in schools to (a) provide effective, high-quality instruction; (b) monitor all students' progress to make sure they are progressing as expected; and (c) provide additional support (intervention) to students who are struggling.
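The logic of universal screening feeding a tiered (multi-level) prevention system can be sketched in a few lines. This is a hypothetical illustration only: the function name, the use of percentile ranks, and the 25th/10th-percentile cut scores are invented for the sketch; real criteria vary by district, grade, and measure.

```python
# Hypothetical sketch: mapping universal-screening results to tiers of support.
# Cut scores here (25th and 10th percentile ranks) are illustrative, not
# prescribed by any RTI framework.

def assign_tier(percentile_rank):
    """Map a screening percentile rank to an instructional tier."""
    if percentile_rank < 10:
        return 3   # intensive, individualized intervention
    if percentile_rank < 25:
        return 2   # targeted small-group intervention
    return 1       # core classroom instruction provided to all students

screening = {"Ana": 62, "Ben": 18, "Cam": 7}
tiers = {name: assign_tier(pr) for name, pr in screening.items()}
print(tiers)  # {'Ana': 1, 'Ben': 2, 'Cam': 3}
```

Students placed in Tiers 2 and 3 would then be progress-monitored to judge whether the added intervention is working.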

How is the effectiveness of RTI measured?

A big part of RTI is measuring students' skills using a scientifically based assessment, meaning researchers have studied the instrument and found it to be reliable and valid. A common form of progress monitoring is curriculum-based measurement (CBM).