COMPREHENSION STRATEGY INSTRUCTION IN CORE READING PROGRAMS

by:

Peter Dewitz

Accomack County Public Schools, Virginia, USA

Jennifer Jones

Radford University, Virginia, USA

Susan Leahy

University of Richmond, Virginia, USA

 

ABSTRACT

Core reading programs provide the curriculum and guide the instruction for many classroom teachers. The purpose of this study was to conduct a curriculum analysis of comprehension instruction in the five most widely used core reading programs. The recommended comprehension instruction in grades 3, 4, and 5 was examined to answer four questions: (1) What skills and strategies are recommended to be taught? (2) How are these skills and strategies recommended to be taught? (3) What instructional designs do the programs employ? and (4) How do the spacing and timing of comprehension skill and strategy instruction in core programs compare with how these skills were taught in the original research studies? The results of the authors’ analysis revealed that core reading programs recommend teaching many more skills and strategies than researchers recommend and may dilute the emphasis on critical skills and strategies. In addition, comprehension strategy instruction does not meet the guidelines for explicit instruction recommended in a number of research studies. Rarely do the five core programs follow the gradual release-of-responsibility model, nor do they provide the amount of practice for skills and strategies that was employed in the original research studies.

A core reading program is the primary instructional tool that teachers use to teach children to read and to ensure they reach reading levels that meet or exceed grade-level standards. A core program should address the instructional needs of the majority of students in a school or district.

Historically, core reading programs have been referred to as basal reading programs in that they serve as the “base” for reading instruction. Adoption of a core does not imply that other materials and strategies are not used to provide a rich, comprehensive program of instruction. The core program, however, should serve as the primary reading program for the school, and the expectation is that all teachers within and across the elementary grades will use the core program as the base of reading instruction. Such programs may or may not be commercial textbook series.

Introduction

Basal reading programs have always served a prominent role in directing and guiding reading instruction in the United States. Since the development of the graded reader by McGuffey in the 1830s (Smith, 1965/1986), basal reading programs have provided both the content and the methods of American reading instruction.

According to Education Market Research (2007), 73.2% of the schools surveyed stated that they either closely follow a basal program or use it selectively. Only 25.1% reported not using a basal program.

The purpose of the present study is to examine comprehension curriculum and suggested instruction in the five major core reading programs, focusing on grades 3, 4, and 5. We limited our study to the instruction of reading-comprehension skills and strategies, assuming that comprehension also requires word decoding, vocabulary knowledge, prior knowledge, and motivation (Alexander, 2003; Pressley, 2000). Our content analysis explored four questions: (1) What skills and strategies comprise the curriculum of core reading programs, and how do these curricula align with recommendations of research panels and research syntheses? (2) How do core reading programs direct teachers to teach these skills and strategies? What are the most frequent instructional methods or teacher moves for teaching these skills and strategies? (3) Does the instructional design in core reading programs follow the release-of-responsibility model so that students learn to apply the skills and strategies? (4) Finally, do the core programs provide as much massed and distributed practice as did the research studies that originally validated these skills and strategies? Because the core programs use the terms skills and strategies somewhat interchangeably, we do not distinguish between the terms at this point.

 

The Comprehension Curricula of Core Reading Programs

Comprehension instruction in core reading programs has a relatively short history compared with the overall development of these programs (Venezky, 1987). Authors of 19th-century programs recommended that pupils give the substance of a passage after reading it or respond to comprehension questions. By the 1930s and 1940s, comprehension skills had been introduced in core programs after classroom teachers were surveyed about what students needed to comprehend content material (Gray, 1925, as cited in Smith, 1986).

In the late 1980s and early 1990s, comprehension strategies also entered basal reading programs, taking their place alongside comprehension skills. Whereas comprehension strategies are thought to require controlled and intentional effort, skills reflect well-learned, automatic, and fluid mental acts (Afflerbach, Pearson, & Paris, 2008). Research in reading comprehension and comprehension instruction in the 1980s and 1990s validated the importance of a number of strategies, and these strategies were later endorsed with varying degrees of acceptance in expert literature reviews (Duke & Pearson, 2002; Pearson, Roehler, Dole, & Duffy, 1992), panel reports (National Institute of Child Health and Human Development [NICHD], 2000; RAND Reading Study Group, 2002), and multiple-strategy instructional routines, such as reciprocal teaching (Palinscar & Brown, 1984) and transactional-strategies instruction (Brown, Pressley, Van Meter, & Schuder, 1996). Core reading programs now include such comprehension strategies as predicting, self-questioning, comprehension monitoring, summarizing, evaluating, and narrative structure.

Durkin (1981) conducted one of the first and most influential studies of core reading programs after her classroom observational research (Durkin, 1978/1979). Ultimately, Durkin (1981) stated, “one of the many questions for which authors of basal readers provide no answer has to do with the way they decide what will be covered in manuals” (p. 541). When Durkin (1990) looked at these programs 10 years later, she found that all programs covered many more topics than did their predecessors, with instruction offered quickly and often superficially.

Since Durkin’s (1981) curriculum analysis, no other complete comprehension curriculum study of core programs has been conducted. Other studies of core reading programs examined how comprehension is taught rather than auditing what is taught (Afflerbach & Walker, 1992; Franks, Mulhern, & Schillinger, 1997; Jitendra, Chard, Hoppes, Renouf, & Gardill, 2001; Miller & Blumenfeld, 1993; Schmitt & Hopkins, 1993). The composition of reading skills and strategies in core reading programs appears to be a product of research, tradition, and the demands of state and district assessments (Chambliss & Calfee, 1998).

 

Instruction of Comprehension Skills and Strategies

Three lines of research provide support for instruction in comprehension skills and strategies. Protocol analysis (Pressley & Afflerbach, 1995) synthesized evidence that skilled readers do engage in strategic processing. Several integrative literature reviews and meta-analyses, summarizing instructional research, supported the conclusion that instruction in reading strategies contributed to improved reading comprehension (Block & Duffy, 2008; Duke & Pearson, 2002; Gajria, Jitendra, Sood, & Sacks, 2007; Graesser, 2007; NICHD, 2000). Finally, correlational studies provided support that the ability to engage in strategic processing contributes to overall reading comprehension (Cain, Oakhill, & Bryant, 2004). Within this body of research are instructional principles that can be divided into two broad questions: How should these strategies and skills first be introduced and taught? How do teachers ensure that students will internalize and employ these strategies? Central to the first question is the need to provide clear, direct explanations about how these strategies work and how they facilitate the process of understanding text. Teaching students to engage in this thinking process requires teacher scaffolding, in which the locus of control for executing comprehension strategies is gradually released from teacher to students, with students assuming more and more responsibility for the use of comprehension strategies (Pearson & Gallagher, 1983).

Effective comprehension instruction begins with direct explanation of strategies, including when, why, and how they should be used (Duffy, 2003; Duffy et al., 1986; Duke & Pearson, 2002). Duffy has argued that the teacher has to make explicit statements about the strategy (declarative knowledge), what critical attributes of the strategy must be employed and the text cues that can guide the reader in using the strategy (procedural knowledge), and why and when during reading the strategy should be used (conditional knowledge). Duffy has cautioned that teachers can be explicit in their teaching, developing knowledge, and setting goals, without providing a full explanation of strategy use.

After direct explanation, the teacher must model the strategy (Collins, Brown, & Holum, 1991; Duke & Pearson, 2002; Pearson & Gallagher, 1983). Because comprehension is a cognitive process, teachers must think aloud to verbalize the thought processes taking place for each step of strategy application. The comprehension process is hidden from the students, and through modeling the teacher makes the covert overt. Thinking aloud is central to the Informed Strategies for Learning program (Paris, Cross, & Lipson, 1984), reciprocal teaching (Palinscar & Brown, 1984), transactional strategy instruction (Brown, Pressley, Van Meter, & Schuder, 1996), and studies that developed students’ abilities with individual strategies (Bereiter & Bird, 1985; Kucan & Beck, 1997; Silven & Vauras, 1992).

When Durkin (1981) examined comprehension instruction in core reading programs, she found a dearth of direct explanation. Overall, Durkin found that only 5.3% of the teacher’s edition directions were focused on instruction. Instead, practice (32.6%), preparation for reading (18.4%, including development of vocabulary and prior knowledge), assessment (17.4%), and application (15.6%) predominated in the manuals. Durkin reported that the programs had “to teach by implication rather than with direct, explicit instruction” (p. 524). Durkin was further concerned that when teachers engaged in direct instruction, they did so with short pieces of text, one or two sentences, that made it difficult for children to generalize the instruction to complete reading selections. It is important to note that the development of vocabulary and prior knowledge was not considered part of the instruction that focused on the how of comprehension. Durkin (1990) herself took another partial look at core programs 10 years later. This time she examined only main-idea and story-structure instruction and concluded that the new programs were not much different from their predecessors.

In subsequent evaluations of comprehension instruction in core reading programs, researchers consistently found a lack of direct explanation. Miller and Blumenfeld (1993) looked at main-idea and cause-and-effect instruction in two core reading programs of the mid 1980s and found that the instruction did not explain the process underlying the skills. When Afflerbach and Walker (1992) examined main-idea instruction, they found that programs failed to focus on the strategic nature of the skill, with the programs teaching main idea implicitly rather than explicitly. Schmitt and Hopkins (1993) also examined comprehension skill and strategy instruction in basal reading programs with early 1990 copyright dates. They found that these programs taught important skills and strategies but failed to do so with the explicitness suggested in the research. Franks, Mulhern, and Schillinger (1997) studied core reading programs to determine how well they helped children learn to make inferences, concluding that although the programs provided the context for students to make inferences, the instruction lacked the necessary explicitness suggested in the research. Comprehension skill and strategy instruction has not been studied in the currently available core reading programs.

 

 

Guided Practice and Release of Responsibility

How well core programs help teachers guide students to apply skills and strategies is the next concern. “Guided practice is the primary means by which the teacher ensures that the students can apply the concepts [or strategies] that are taught” (Carnine, Jitendra, & Silbert, 1997, p. 69). Guided practice is collaborative; the teachers and students share the responsibility of employing the strategy (Duke & Pearson, 2002). It is during guided practice that students increasingly model strategies (Meichenbaum & Asarnow, 1979), with metacognition as the trigger for their use. According to Duffy et al. (1986), it is the perception of text difficulty that stimulates the need to reread, question, infer, or summarize. Prior to handing total responsibility for strategy use to the students, the teacher gives prompts, offers constructive comments and/or critiques, and provides hints for students as they construct their understanding and use strategies. Carnine et al. (1997) noted that systematically controlled prompts and questions during guided practice lead to efficient learning and also minimize erroneous thought and application among students.

Questioning during the guided-practice phase is intended both to scaffold learning for students and to check students’ understanding. Such questioning is intended to instruct, guiding students along the pathway to independence. Beck, McKeown, McCaslin, and Burkes (1979), after examining teacher manuals for questions, found that questions in basal readers did not cohere with one another or function as guided practice leading students to understanding; rather, according to Durkin (1981), questions assessed comprehension. Initially, Rosenshine and Meister (1994) hypothesized that the amount of direct instruction employed in the highly effective technique of reciprocal teaching (Palinscar & Brown, 1984) explained its significant treatment results; however, they concluded that the guided-practice portion of reciprocal teaching’s instructional design is what really made the difference.

No other studies have examined the instructional models in core reading programs or specifically looked to see whether they followed a release-of-responsibility model. Durkin’s review in 1981 found a predominance of practice but did not characterize whether that practice was guided by the teacher or done independently by the students. Chambliss and Calfee (1998) characterized core reading programs as having a routine structure that moves from prereading activities to text reading to question answering, with little time devoted to guided practice. In a recent study, McGill-Franzen, Zmach, Solic, and Zeig (2006) examined two core programs, attempting to relate the instructional characteristics of the programs to children’s success on the Florida Comprehensive Assessment Test. They found that the program that stressed more questioning to develop interpretations yielded greater achievement for low-functioning students but no differences for average and above-average students.

In our study of the instructional design of core programs, and of the spacing and timing of instruction discussed later, we narrowed our focus to just three strategies: following narrative structure, summarizing, and inference generation. The National Reading Panel (NICHD, 2000) and the RAND Reading Study Group (2002) endorsed the teaching of narrative structure and summarizing. Teaching students to make inferences was recommended by the RAND Reading Study Group and at least one major literature review (Graesser, 2007), and it is at the core of the construction-integration model of comprehension (Kintsch, 1998). Cain et al. (2004), in their regression analysis, found that narrative structure, inference generation, and comprehension monitoring contributed a significant proportion of the variance in general comprehension after accounting for decoding, general verbal ability, and working memory. We examined summarizing instead of comprehension monitoring because two of the core programs we studied did not provide explicit instruction in comprehension monitoring.

 

Spacing and Timing of Instruction

When educational researchers design comprehension-instructional studies, they must determine the duration of the study, the frequency of direct instruction and guided practice, and the number of passages that students will read. These considerations, often called instructional spacing and timing, are critical to students’ learning and the success of the study. When core reading programs elect to teach a well-researched comprehension skill or strategy, they too must determine how often teachers should teach that skill, how much practice to provide, and for how many weeks or months the students will be guided to use that skill or strategy.

Teaching summarization has been studied in a wide range of age groups, and students required multiple hours of instruction to achieve some competence with the strategy. The duration and intensity of instruction in these studies range from one hour a week for seven weeks (Taylor & Beach, 1984) to 45 minutes per day for 11 consecutive days (Armbruster, Anderson, & Ostertag, 1987). Studies on the teaching of story structure, all with significant results, have demonstrated a wide range of instructional and practice sessions. In Buss, Ratliff, and Irion’s (1985) investigation, third graders received one hour of instruction daily for two weeks. With regard to the timing and spacing of inference instruction, although Carnine, Kame’enui, and Woolfson (1982) reported that students in fifth grade can learn to generate inferences in just three days with systematic instruction, other studies have reported interventions requiring considerably longer durations. Hansen and Pearson (1983) conducted inference research with 40 fourth-grade students who were involved in project-related activities for two days out of every week for 10 consecutive weeks, for a total of 20 lessons. Dewitz, Carr, and Patberg (1987) investigated the effects of strategy instruction to support inferential thinking with 101 fifth graders. Students received instruction for eight weeks during 40-minute social studies classes. Students worked on strategies for a total of 24 days, with the treatment group achieving superior gains in comprehension.

The research studies reviewed generally support the spacing-effect hypothesis, which affirms, over a wide range of studies and subjects, that distributed practice produces greater verbal learning than massed practice does (Dempster, 1987). The spacing effect demonstrates that any verbal-learning task that is spaced out over several days or longer with regular repetitions results in greater retention of the knowledge or skill being taught. The research review further suggests that when a skill or strategy is first introduced, it should receive frequent guided practice, but the frequency of these practice activities should diminish as the skill becomes more established (Glenberg, 1976; Smith & Rothkopf, 1984).

Earlier research similarly indicated that when new skills were introduced, students did not receive enough teacher-guided or independent practice opportunities to ensure maintenance or transfer of the skills (Gropper, 1983; Smith & Rothkopf, 1984). Jitendra et al. (2001) examined main-idea instruction in grades 2, 4, and 6 in four core reading programs from the mid-1990s. They found that most programs provided insufficient amounts of practice to learn the strategy. It remains to be seen whether contemporary core reading programs provide sufficient practice for students to learn comprehension skills and strategies and use them independently.

 

Method

The researchers began the study of comprehension instruction in reading programs by first defining instruction as the pedagogy offered in the teacher’s manual. Durkin (1981) defined comprehension instruction in manuals as follows: “A manual suggests that a teacher do or say something that ought to help children acquire the ability to understand, or work out, the meaning of connected text” (p. 518). The researchers concurred and adopted Durkin’s definition but also limited the parameters of their inquiry. The researchers recognized that developing strong comprehension also requires work on word recognition, fluency, knowledge development, and vocabulary. For the purposes of this investigation, the researchers decided to limit the scope of the study to those program elements that specifically focus on comprehension skill and strategy instruction and their application to the texts.

 

The Programs

The researchers chose to review the top five best-selling basal reading programs in the country, as identified by the Educational Market Research Group (www.edmarketresearch.com). The researchers studied and evaluated McGraw-Hill Reading, SRA Open Court, Harcourt Trophies, Houghton Mifflin Reading, and Scott Foresman Reading, referred to here as Programs A–E. The researchers examined the 2005 editions of all the programs except Program D, for which they used the 2003 edition. Subsequent analysis suggested that the 2005 copyright of Program D was almost identical to the 2003 copyright. Specific programs are not matched to these letters because the intent is not to highlight specific programs’ strengths and weaknesses but rather to describe the way in which the top-five selling programs in the country address comprehension instruction. The recent tradition in evaluating core reading programs that began with Durkin (1981) has avoided reporting results by specific programs (Foorman, Francis, Davidson, Harm, & Griffin, 2004; Hoffman et al., 1994; McGill-Franzen et al., 2006; Miller & Blumenfeld, 1993).

For their analyses, the researchers focused on grades 3, 4, and 5. They chose these grade levels because, although comprehension instruction should certainly be fostered and facilitated in the primary grades, it is in these upper elementary grades that the greatest emphasis on direct comprehension instruction would be expected.

 

Data Analysis

For each of the five basal reading programs, the researchers read every lesson in every program as it was presented in the teacher manuals. Each program contained six units or themes per grade level, with approximately three to five lessons per unit/theme, amounting to approximately 20–30 lessons per grade level of instruction. In sum, approximately 90 lessons per program were read and rated. Program D had noticeably less content, covering only 23 lessons per grade.

The lesson plans of the five core programs are quite similar, and each follows a five-day plan. Each starts with a short oral read-aloud selection that provides directions for teachers to model or explain some comprehension strategies. Next comes introductory instruction in comprehension skills and strategies, followed by directions to build background and vocabulary knowledge. Next, students read the main anthology selection, and teachers are directed to ask questions, reteach strategies, and model and discuss the selection. At the end of the main text selection, there are questions for review and discussion, reteaching of skills and strategies, and in some programs, another short reading selection with its accompanying teacher directions. The researchers read and rated all of these components except for those that addressed vocabulary, word-level tasks (e.g., decoding and structural analysis), and fluency. The researchers did not examine any of the lessons that focused on spelling, grammar, writing, listening, or speaking. Supplemental materials like workbooks were not read and rated except for the facsimile pages that were reproduced in the teacher’s edition.

 

Analyzing the Comprehension Curricula

To identify what skills and strategies are taught in the core programs, the researchers began with each program’s published scope and sequence. The scope and sequence contain extensive lists of comprehension skills and strategies. Each listed skill and strategy was then located in the index of the program to find the instruction for the skill. Page numbers in the index were then verified to determine whether the program actually provided teacher directions for teaching the listed skill. Later, when the researchers conducted their analysis of instruction, they double-checked that the program provided instruction for the skills and strategies listed in the scope and sequence.

 

Analyzing Comprehension Instruction

As the researchers read the teacher manuals, the first task was to determine the unit of analysis. Through recursive discussion, the researchers settled on the instructional move as that unit. Each instructional move in the teacher’s manual was coded three ways. First, the researchers coded what skill or strategy was being taught, coding 51 different skills and strategies in all. The researchers developed the list of comprehension skills and strategies by recording each skill and strategy under the label the program gave it, so that finding the main idea was one code, as was finding the author’s purpose. Some obviously similar strategies, like comprehension monitoring and clarifying, were coded under the same category (comprehension monitoring), but for the most part, the researchers adhered closely to the labels used by the programs. The researchers did not distinguish between skills and strategies because the programs did not offer clear distinctions between the two and, in some cases, used the labels skill and strategy interchangeably and simultaneously. The researchers added a category called content of the passage when a program did not specifically focus on a given skill or strategy but directed the teacher to ask a question on the content of a selection. In four of the five programs, during- and after-reading questions were labeled by skill, so a question would be labeled drawing conclusions or cause and effect. In Program D, questions were not labeled by skill.

Second, each instructional move was categorized by what the teacher was directed to do. To code the instructional moves, the researchers constructed a priori categories based on what research says about effective comprehension instruction, using the six categories in Durkin’s (1981) rating system as a starting point, among them preparation, instruction, practice, application, and assessment.

However, the researchers judged that these six categories lacked the sensitivity to capture the nuances of instruction in the core programs. The researchers expected to see acts of direct explanation, modeling, guided practice, and opportunities for independent application, but as they read lessons together, they found instructional directions that did not fit the simple six-category breakdown. The researchers ultimately settled on 10 categories, expanding on those developed by Durkin. Table 1 includes the codes used to categorize instructional moves and a description of each.

The codes for instructional moves reflect the most frequently encountered patterns within a core program. Teachers consistently mention skills and strategies, ask questions, and provide information about the content of a selection. Other codes, like questioning + modeling or skill + explanation, are also frequently recurring instructional patterns. Rather than coding these as two moves (e.g., questioning and modeling), the researchers coded them as one because across all five programs they occurred frequently and together. The same can be said for the teacher move of skill + explanation, for which the teacher is directed to identify a skill and provide some information about it, but no direct explanation of the skill or strategy is prompted by the teacher manual.

Lessons that were coded direct explanation received further analysis in order to assess the characteristics of this direct instruction. Direct-explanation lessons were typically longer when a skill or strategy was first introduced or was a point of major review. To code these lessons, the researchers employed the criteria developed by Duffy et al. (1986) for evaluating the explicitness of classroom comprehension instruction. In this analysis, the direct-explanation lessons were coded for procedural knowledge, how well the manual described the mental process, the use of text features to guide the mental process, the sequence that should be used to execute the strategy, and the quality of the teacher modeling. Additionally, the researchers coded two aspects of conditional knowledge: statements about when a skill or strategy should be used and statements about its value or utility. Each of these seven areas was given a code of 2, 1, or 0 following Duffy’s guidelines.

Third and finally, instructional moves and strategies were coded for their place within the lesson, either inside or outside of the text. Inside the text refers to anything a teacher might say or do while the students are reading the story. Instructional moves were coded as inside the text after all preparation for the lesson was completed and before any postreading review began. By coding instructional moves as inside the text, the researchers sought to capture how the teacher supported comprehension and guided its development. Outside-the-text moves consisted of any instructional preparation, plus the questioning, discussions, responses, and strategy instruction that occurred before or after the text was read.

In summation, each move was coded for the skill or strategy being addressed, the type of instructional move, and whether the instruction occurred inside or outside the text. For example, the following paragraph would receive multiple codes:

Tell students that in the next selection, they will read about a group of immigrants who are about to become United States citizens. Discuss with students what they like about living in the United States. Then ask students why they think people would want to come to the United States and become citizens.

For skill or strategy, the paragraph would be coded as content of the passage, as no specific skill is mentioned. For teacher moves, students are receiving information (4), engaging in a discussion (10), and being asked questions (5); web or graphic organizer was also mentioned (1) with no elaboration. All this took place before the students read the selection (i.e., outside the text).

After the researchers established the coding system, they independently coded two complete lessons from each of the five programs to establish interrater reliability. Kappa coefficients demonstrated respectable levels of agreement among the three coders. Overall agreement was 81% between Coders 1 and 2, 84% between Coders 2 and 3, and 83% between Coders 1 and 3. When discrepancies were encountered, the researchers resolved them through discussion and agreement and then recoded the questionable lessons. This check on the coding process was repeated three more times during the reading and coding of the teachers' manuals, and reliability was checked on 10% of the lessons. Each time, the agreement between pairs of coders remained above 80% with significant kappa coefficients. The total coding process constituted five months of work.

Each coded lesson was then entered into an SPSS database, where the researchers noted the program, the theme, the lesson, and the rater. Because the sequence of instruction had been coded, the researchers were able to track any individual skill across each of the six-week themes and the lessons within the themes. This allowed the researchers to look at both the frequency of instructional moves and the flow of instruction by comprehension skill. During the data analysis, the researchers also kept anecdotal notes on the structure of each program, unique features of the program, and the language suggested to teachers for explaining or modeling a skill.

Analyzing Guided Practice—Release of Responsibility

In each program, a comprehension skill or strategy receives extensive emphasis in one or more units of instruction. The researchers tracked three skills/strategies—summarization, narrative structure, and making inferences—through one unit of the fourth-grade curriculum for each core program to determine whether a release-of-responsibility model was being followed.

To examine instructional design, the researchers first located the theme or unit where each of these skills was first introduced and where it received a primary emphasis. The number of instructional moves for each mode of instruction (direct explanation, modeling, guided practice, questioning, discussion, and independent practice) was plotted for each lesson in the unit. A lesson typically lasts five days, and each unit has three to five lessons, so the researchers examined how many lessons (or weeks) a skill was taught and the nature of that instruction. If a release-of-responsibility model was being followed, the researchers expected to find an initial emphasis on direct explanation with teacher modeling. Then students would engage in guided practice with further modeling, followed by questioning and independent practice.

Examining Spacing and Timing of Instruction

One of the goals of this research project was to examine the pacing and timing of comprehension strategy and skills instruction. The researchers wondered whether the comprehension skills taught in these five core programs were addressed with the same thoroughness as found in a sampling of original research cited by the National Reading Panel (NICHD, 2000) and the RAND Reading Study Group (2002).

The goal was to compare the amount and duration of instruction in the original comprehension research studies that validated a skill or strategy with the amount and duration of instruction provided for the same skills in the core programs. For this part of the study, the researchers examined the same three comprehension strategies: making inferences, narrative structure, and summarizing. In this analysis, the researchers considered the fourth-grade level of each program because students are generally past the need for decoding instruction by this grade, and the scope and sequence of comprehension skills and strategies is very similar in grades 3, 4, and 5. First, the researchers identified the theme or unit where the skills or strategies achieved a primary or initial focus. Within each unit, the researchers counted the type and number of instructional moves for each strategy. The totals for direct explanation, modeling, guided practice, and independent practice were then compared with the amount of instruction reported in research studies that sought to validate these skills and strategies. Ultimately, the researchers compared total instructional sessions in core programs to total instructional sessions in the research articles.

Results

The results of the curriculum analysis of the five core reading programs are presented in four sections. In the first section, the researchers define what skills and strategies are taught and then explore the depth and breadth of these curricula. In the second section, the researchers look at how these curricula are taught, exploring how much direct explanation, modeling, guided practice, and questioning the students receive in core reading programs. A subset of this analysis looks specifically at lessons labeled direct explanation and evaluates the explicitness of these lessons. In the third section, the researchers examine the instructional-design models employed by the core programs and compare them to the release-of-responsibility model (Pearson & Gallagher, 1983). In the fourth section, the researchers examine how thoroughly three comprehension skills and strategies are taught, comparing their treatment in core programs to research studies that validated methods for teaching these skills and strategies.

Comprehension Curricula

The comprehension curricula in the five core programs consist of a mix of comprehension skills, comprehension strategies, and genre or text-structure elements. Each program categorizes the comprehension curriculum into a different number of skills, strategies, and genre elements. In each program, some of the comprehension skills and strategies are regularly assessed at the end of an instructional cycle, a lesson, or a theme, whereas other skills and strategies are not assessed. The analysis began with an inspection of the published scope and sequence and a verification of that scope and sequence by reading each lesson. The distinction between skills and strategies varies across the five programs.

Program A makes no clear distinction between skills and strategies. In the published scope and sequence, all items are listed under the common heading Strategies and Skills. Within the lesson plans, some items are labeled as strategies, others as skills. Strategies include such processes as previewing, predicting, summarizing, self-questioning, fix-up, self-monitoring, reflect, response, and text features.

Program B provides guidance for both comprehension skills and comprehension strategies, and all are listed in the scope and sequence under the common heading Comprehension and Analysis of Text. Each lesson focuses on both skills and strategies. The comprehension strategies include predicting, mental images, self-questioning, summarizing, read ahead, reread to clarify, use context, use text structure, adjust reading rate, and decoding.

Program C includes lesson plans for both comprehension skills and comprehension strategies and maintains a clear distinction between the two. Seven comprehension strategies—asking and answering questions, making connections, monitoring and clarifying, adjusting reading rate, predicting, summarizing, and visualizing—recur throughout all lessons. Eleven comprehension skills (e.g., drawing conclusions, fact and opinion, author's purpose) are taught within each lesson, typically on the second reading of the selection.

Program D's lesson plans include 17 comprehension skills and 6 strategies with an almost complete distinction between the two. The comprehension strategies are predicting, summarizing, phonics/decoding, evaluating, questioning, and monitoring. Only predicting repeats as both a strategy and a skill. Comprehension skills include main idea, making inferences, drawing conclusions, and others.

Finally, Program E has lesson plans for 15 skills and strategies with everything labeled as both a skill and a strategy, plus two text-structure concepts—narrative structure and author's point of view.

The curricula of all five core reading programs include comprehension skills and strategies that do not appear in the National Reading Panel's (NICHD, 2000) recommendations, nor are they found in the RAND Reading Study Group (2002) report or other literature reviews. Four patterns describe the greater complexity of the curricula in the published programs. First, commercial programs divide some skills or strategies into components and have the teacher teach each component separately. This increases the number of skills and strategies taught in one year of instruction. In Program D, noting details is taught as a separate skill and then taught again in conjunction with finding the main idea. In Program B, main idea is taught as either stated or unstated and is then taught again as main idea and details.

All programs cover narrative structure with different numbers of comprehension skills. In Program A, character, setting, plot, and theme are taught as separate skills. Program B lists narrative structure as a skill but also has separate lessons on sequence of events, characters, and setting. Program C covers narrative structure, characterization, setting, and other literary concepts. Program D includes story structure and sequence of events within a story as separate skills. Finally, Program E focuses on just two tested skills, character and setting, but not the global concept of narrative structure.

The second pattern that explains the large number of comprehension skills and strategies is the tendency to include a skill under more than one label. The way in which the five programs handle the problem of teaching students to make inferences illustrates this point. Each of the programs presents lessons for making inferences but also provides lessons on drawing conclusions and making generalizations. A close read of these lessons indicates that the same mental process is presented under two or three different labels. Program D presents drawing conclusions and making inferences as very similar skills. To make inferences, the program asks the teachers to:

Remind students that authors do not always put every bit of information about characters and events on the page. By leaving some information out, they let readers apply what they know from personal experience to the story they are reading. By looking at characters' actions, for example, readers often make inferences about those characters' feelings and personalities.

To draw conclusions, the program instructs teachers to say:

Tell students that authors often use clues instead of explaining everything that happens in a story. A reader must use those clues plus what he or she knows about stories and real life to draw conclusions about events and characters.

Program B asks teachers to explain drawing conclusions by saying,

A conclusion is a judgment that you infer or deduce from a story's facts and details. To draw conclusions you combine facts and details in the text with personal knowledge and experience… When you draw conclusions you also use them to make generalizations that go beyond the story. A generalization always extends beyond the information provided by the story. (Program B, Theme 2, p. 1381)

Later in the same unit, the program asks the teachers to explain making inferences as follows: An inference is a connection that a reader makes between information that is given and what he or she already knows from experience.

The similarity in these explanations extends to the teacher modeling and to the explanation of mental processes underlying the skill when provided by the teacher's edition.

The third pattern that accounts for the large number of comprehension skills in core reading programs is their tendency to label elements of genre and text structure as comprehension skills. Four of five programs label author's purpose, fact and opinion, reality and fantasy, and sequence of events or cause and effect as comprehension skills. Elements of text structure, or features of a text, such as its graphic aids or text-structure patterns (compare and contrast or cause and effect), are also considered comprehension skills in core programs. The researchers concede that readers make use of these elements to comprehend (Williams, 2006), but they can be thought of as elements or characteristics of text and not mental processes. However, these text features can be used in mental processes such as summarizing, determining importance, or following an author's argument.

Fourth and finally, all programs have another set of instructional activities that are labeled comprehension skills and strategies, but these are more properly thought of as modes of response to a text. When students are asked to make judgments, compare and contrast (not in the sense of looking for a text-structure pattern), and categorize and classify, they are not using skills and strategies to construct meaning but rather are responding, reflecting, and organizing the meaning they have developed. All of the five programs studied include at least two of these three response modes as comprehension skills and strategies, and three of the programs include all three.

Comprehension Skills and Strategies Instruction

Frequency of Instructional Moves in Core Reading Programs

These results include all instructional moves that occurred before reading the selection, while reading the selection, and after reading the selection. In the next section, the researchers focus on just those instructional moves that occurred during the reading of the selections.

Overall, few of the instructional moves were coded as mentioning, a code derived from Durkin (1981). The core programs rarely just mentioned a skill and assumed students would perform it. Mentioning was most frequent in Program E but infrequent in the other programs. Rather, when skills were mentioned, the manual tended to give some explanation of a skill's value or its procedure, but it stopped short of direct explanation. There was a noticeable difference among the programs in the percentage of instructional moves coded as skill + explanation. Program C provided far more explanations (18.5%) of a skill/strategy than did the other four programs, with Program D (1.6%) providing the fewest moves where a skill was mentioned and explained.

The five core programs differed in the amount of modeling provided for the students. Programs B (7.9%) and C (6.2%) provided more modeling than did the other three programs. In Programs A (2.5%), D (3.7%), and E (1.7%), the directives to model occurred when the strategy or skill was first introduced or reviewed, typically before or after the main reading selections. Program C (8.3%) also provided more information about the passages than did the other four programs: Program A (0.2%), Program B (2.4%), Program D (2.3%), and Program E (2.0%).

The asking of questions was the predominant activity in four of the five core programs, accounting for a large proportion of instructional moves in Program A (49.7%), Program B (55.7%), Program D (45.5%), and Program E (55.5%). Questioning in Program C (11.8%) consumed a much smaller proportion of the instructional moves compared with the other four programs. Adding in the times when a core program directed a teacher to both ask a question and then model the response, questioning could account for up to 60% of the teacher moves in a core program. Related to the issue of questioning is discussion. The programs differed in the use of discussion: Programs C and D included the highest proportions, 16.0% and 19.2%, respectively, with Programs A (9.2%), B (2.1%), and E (3.3%) devoting the smallest proportions of instructional moves to discussion.

Guided practice occurs when a program directs a teacher to give students hints, prompts, and suggestions on how to understand a passage or use a skill. Programs C and D included the highest proportions of guided practice, 18.4% and 17.9%, respectively. The other three programs included significantly smaller proportions of guided practice—7.9% for Program A, 3.1% for Program B, and 6.7% for Program E.

Overall, few instructional moves were coded independent practice because the researchers confined the study to material presented in the teachers' editions. There was a difference in the proportion of independent practice among the five programs. Programs A and B included the highest proportions, 9.7% and 13.7%, respectively, with Program C including 5.4%, Program D 5.7%, and Program E 6.5%.

Direct explanation occurs in lessons where a skill or strategy is introduced or reviewed, and the program seeks to provide declarative, procedural, and conditional information. Programs A (14.9%) and E (7.2%) included the highest proportions of direct explanation. The other three programs included fewer incidents of direct explanation: Program B included 4.3%, Program C included 5.3%, and Program D included 2.6%.

The researchers took a closer look at the lessons categorized as direct explanation to examine their characteristics using the guidelines developed by Duffy et al. (1986). The results of this analysis are presented in Table 3. All programs, except Program E, provided teachers with an explicit description of the mental process, but they varied in how well they did so. Program A provided a clear focus on the text features the reader should use to employ the strategy; none of the others did so with any degree of explicitness. None of the programs provided information on the sequence of the mental process. With the exception of Program B, all of the programs neglected conditional knowledge, failing to inform students about the usefulness of the skill or strategy and when it should be applied. All of the programs engaged the teachers in modeling of the strategies, but only two of the programs, A and E, directed the teachers to give students explicit feedback on their use of the skill or strategy. Although all the programs provided direct explanations, in many cases the instructions fell short of the explicitness that Duffy et al. sought for the teachers.

Frequency of Instructional Moves While Reading

The researchers coded instructional moves while reading separately because they wanted to understand how teachers might coach and scaffold comprehension instruction during the meaning-construction phase of reading (Beck, McKeown, Sandora, & Kucan, 1996). Researchers have argued that during reading, coaching and scaffolding promote the use and internalization of comprehension strategies (Meichenbaum & Asarnow, 1979; Rosenshine & Meister, 1994). As Table 4 shows, in four of the five core programs, A, B, D, and E, questioning was the predominant instructional move. Summing across the codes of questioning and questions + modeling demonstrates even more clearly the overwhelming number of questions that students might be asked while reading. In Program A, 62.5% of the instructional moves were questions. In Program B, 71.5% were questions; in Program D, 45.5% were questions; and in Program E, 78.3% were questions. Only Program C departed from questioning students while reading. In its place, the teacher explained skills and strategies, modeled them, and provided guided practice.

Guided Practice and Release of Responsibility

The researchers sought to describe the instructional model that underlies skill and strategy instruction in each program. They did so by following three skills/strategies—using narrative structure, making inferences, and summarizing—across one instructional unit, studying the unit in which each skill/strategy received the most extensive treatment. The researchers tracked the number and type of teacher moves for each skill/strategy across each weekly lesson. The results are presented in Tables 5, 6, and 7. Each column represents one week of instruction. It is important to note that Program D has three- and four-week units, whereas Programs A, B, C, and E have five- or six-week units. The numbers indicate how frequently an instructional move was employed in a given lesson. In this analysis, the researchers ignored whether the instruction took place before, during, or after the reading of the selection.

Narrative Structure

In Program A, narrative structure receives its most extensive treatment in Unit 4 and again in Unit 5. In earlier units of instruction, other aspects of narrative structure, such as character development and setting, have been addressed, but the overall concept has not. Within Unit 4, students receive direct explanation in the second and fifth weeks of the unit, very minimal guided practice, and extensive questioning during all weeks except for the first week of the unit. In Program B, narrative structure is taught in Unit 2, and students are questioned on their knowledge of narrative structure during subsequent units of instruction. During the second week of Unit 2, the manual provides one direct-explanation lesson, several opportunities for the teacher to model the use of narrative structure, and questioning during that and the following week. Program C provides equal focus on narrative structure in Units 1, 3, and 4. In Unit 1, the program provides one instance each of direct explanation, guided practice, and questioning. In subsequent weeks, the program covers other narrative topics, such as character development, structure of historical fiction, dialogue, and features of biography, but does not return to narrative structure. Program D covers narrative structure most extensively in Unit 1 and again in Unit 4. In the first week of Unit 1, the program provides one direct-explanation lesson, several instances of explanation and guided practice, and extensive questioning. Later in the week, in Lesson 3, students receive more guided practice and questions. Program E provides the most extensive instruction in narrative structure in Unit 1 but continues to provide questions on it in subsequent units. In Unit 1, the program provides four direct-explanation lessons and some additional explanation of the skill, with discussion and questioning. Noticeably absent from almost all lessons is teacher-guided practice.

Making Inferences

How core programs guide inference generation is presented in Table 6. The teaching of inference generation (actually drawing conclusions) in Program A receives its most extensive treatment in Unit 3, with one additional direct-explanation lesson in Unit 6. In that unit, direct explanation is provided in Weeks 2, 4, and 5, and extensive questioning is provided throughout the unit. No modeling while students read, guided practice, or additional explanations are noted. In Program B, Unit 2, the teacher's manual provides modeling in Lesson 2, direct instruction on making inferences in Lesson 3, and extensive inferential questions throughout the unit, but no guided practice. Some follow-up instruction for making inferences is provided in Unit 4. In all other units of Program B, making inferences is addressed by asking inferential questions. In Program C, the strongest focus on making inferences is found in Unit 3, with additional lessons in Units 2 and 6. The teacher's edition provides direct explanation for making inferences in Lesson 2 with considerable guided practice and questioning. During the balance of that unit, students are questioned on the skill/strategy. In Program D, making inferences is taught in Units 2, 4, and 6 with equal emphasis. In the second week of Unit 2, the teacher's manual provides one direct-explanation lesson for making inferences. This is followed by more explanation, guided practice, and questioning. After that, students receive no guided practice in the skill. In Program E, making inferences is taught in Units 1 and 5, with slightly greater emphasis in Unit 5. In the intervening units, the teacher asks inferential questions. The teacher's manual provides direct-explanation lessons for making inferences in Lessons 1, 2, and 3. Each is followed by modeling, teacher questioning, and independent practice but no guided practice.

Summarizing

Summarizing is not taught in Program A until Unit 4, halfway through the school year, and receives its strongest emphasis in Unit 5. Here, the manual provides four direct-explanation lessons spread over four weeks followed by teacher questioning and some independent practice but no guided practice. In Program B, the primary focus for summarizing is found in Unit 2, but it is reviewed again in Unit 3 and Unit 6. The manual provides direct-explanation lessons in Weeks 4 and 5, negligible guided practice in Week 5, and questioning throughout the unit. In Program C, summarizing receives almost equal treatment in all units. There is no strong initial instruction of direct explanation, but the strategy is modeled regularly by the teacher while reading the selections and is incorporated into discussions. Program D provides some exposure to summarizing at the beginning of the school year and some direct explanation in Unit 2. The most extensive instruction is provided in Unit 6. Here, the manual provides for direct explanation in Weeks 1 and 2, modeling in Weeks 1 and 3, and discussion that stresses summarizing throughout the first two weeks of the unit. Although students are asked to summarize in all six units of Program E, direct explanation and modeling are provided only in Unit 3. Direct explanation is provided in Weeks 1 and 2, with the rest of the teacher moves consisting of discussion and questioning.

Looking across these five programs, the researchers noted that direct explanation, discussion, and questioning are more common than modeling, skill explanations, or guided practice. The manuals provide little support for learning these skills and strategies while students are reading. Instructional design is not consistent from one unit to another within the same program. Program E provides much more support for teaching narrative structure than it does for teaching inference generation and summarizing. Similarly, Program C provides more focus and support for developing students’ summarizing ability than it does for teaching inference generation and narrative structure.

Spacing and Timing of Instruction

The researchers sought to compare the amount of instruction and practice provided in the five reading programs with that provided in selected research studies that validated the teaching of making inferences, narrative structure, and summarizing. These comparisons are presented in Tables 8, 9, and 10. On the left-hand sides of the tables are data from the core reading programs—one fourth-grade unit or theme where the strategy or skill received the primary emphasis. Each time a particular strategy was addressed via direct explanation, guided practice, independent practice, or questioning, it was noted. The numbers in the tables represent instances of each type of instruction. On the right-hand sides of the tables are data from a sample of the original research studies. The Methods sections in most instructional studies do not provide enough detail to determine whether the study authors engaged in direct explanation, guided practice, modeling, or other instructional moves, so the researchers decided to record the number of instructional sessions for each study as a whole.

In the original research, making inferences was taught anywhere from 3 times in three days to 24 times in eight weeks. Although the range of the research is somewhat broad in terms of time, each study made a clear effort to teach the strategy with focus. In direct contrast, however, are the five core reading programs. Program A, for example, has the highest number of instances of making-inference instruction over the course of one unit: 11 instances of direct instruction, with the rest being independent practice and teacher questioning. However, in the Dewitz et al. (1987) study, making-inference skills received direct instruction or guided practice 24 times in eight weeks. In the Hansen and Pearson (1983) study, making-inference skills received direct instruction or guided practice 20 times in 10 weeks. Meanwhile, the other four programs do not provide instruction in making inferences more than eight times over the course of an instructional unit. Program B provides two instances of direct instruction, Program C one, Program D two, and Program E four. It is important to note that all the core programs cover making inferences under two or more different labels.

The frequency and density of instruction for identifying narrative structure is presented in Table 9. A sampling of the original research ranges from 10 lessons in concentrated periods of time (Baumann & Bergeron, 1993; Buss, Ratliff, & Irion, 1985) to 16 lessons over seven weeks (Fitzgerald & Spiegel, 1983). Programs A, B, and C provide only one direct-explanation lesson for teaching narrative structure within a theme, and it is important to note that the researchers selected for study those units where narrative structure was first introduced for the grade. Programs D and E provide more direct explanation and guided practice, but none achieves the amount of instruction found in the three original research studies that validated the skill.

A sample of the original research studies indicates that researchers needed anywhere from 5 to 11 lessons in summarizing, within a range of three to seven weeks, to achieve significant change in students' ability to summarize. Taylor and Beach (1984) provided one hour of instruction per week over seven weeks, Morrow (1984) provided 8 instructional sessions over four weeks, and Armbruster et al. (1987) provided 11 instructional lessons over four weeks. The studies, as was the case for the other strategies investigated, all provided concentrated instruction in the strategy. The five core reading programs, however, provide fewer instructional sessions on summarizing over the course of one five- to six-week unit than did the researchers. Program C provides six direct-explanation lessons and achieves the intensity of the research studies. Programs A, B, D, and E provide, at most, two or three direct-explanation lessons and, except for Program B, even fewer guided-practice sessions.

Although a perfect comparison between instructional methods in the research studies and in the core reading programs is not possible, the data suggest that within a common timeframe, three to six weeks of instruction, the researchers almost always provided more direct-explanation and guided-practice lessons than the core reading programs do. In a few research studies, the number of instructional sessions was small, and these studies lend some support to the instructional plans in core programs. Yet most research studies provided far more instruction and practice than do the core programs. As the researchers' other findings suggest, the attempt to cover many skills and strategies within a year of instruction makes it difficult to devote sustained attention to any one skill over time.

Discussion

The researchers' goal was to study comprehension skills and strategy instruction in the five best-selling core reading programs. They sought to provide a rich description of the programs' teacher's manuals, which serve to define the curriculum and guide the instruction for many classroom teachers. In doing so, the researchers sought to update the findings of Durkin (1981), who examined the five best-selling programs of her era, and of other researchers who have studied components of comprehension instruction in core reading programs (Afflerbach & Walker, 1992; Franks et al., 1997; Jitendra et al., 2001; Miller & Blumenfeld, 1993).

The researchers chose to engage in a close curriculum study and to explore four facets of instruction. The first concern in their analysis was the content of the comprehension instruction. What skills and strategies are being taught, and are these the same skills and strategies recommended in reviews of the research? Second, they examined the instruction in these core programs. How much direct instruction, guided practice, and independent practice exist within each program? Twenty-five years ago, Durkin (1981) developed evidence that these programs failed to give teachers the tools required to help students understand how to comprehend. Third, the researchers examined the instructional design of the programs, trying to determine how closely these programs conform to the gradual release-of-responsibility model (Pearson & Gallagher, 1983). Finally, the researchers sought to determine whether these programs taught skills and strategies with the thoroughness found in the original research studies that validated them.

Comprehension Curricula

The curricula in core reading programs cover more skills and strategies than are recommended in the research literature, with the number of skills and strategies varying from 18 to 29 per program per year. The National Reading Panel has recommended seven strategies plus multiple-strategy instruction. Duke and Pearson (2002) have endorsed prediction, thinking aloud, story structure, informational text structure, visual representations of text, summarizing, and questions/questioning, plus the self-regulated strategies in reciprocal teaching (Palincsar & Brown, 1984) and transactional strategy instruction (Brown et al., 1996). Pressley's (2000) review of comprehension instruction largely parallels the recommendations of Duke and Pearson.

The lack of parsimony in the comprehension curricula has several causes. First, skills or strategies that might be taught as a unitary concept are often dissected into components. It makes little logical sense to separate the teaching of main ideas from the details that support them, as is done in three of the core programs (Afflerbach & Walker, 1992). Nor does it make sense to introduce character and setting before, and separate from, the larger issue of narrative structure (Fitzgerald & Spiegel, 1983). Deconstructing comprehension into many skills leaves the reassembling of those skills into some coherent whole to the teacher and the reader, and the core programs rarely reference an old skill when introducing a new one.

Second, core programs often employ the same cognitive process under two or three different labels. In almost all of the programs, students are taught to make inferences, draw conclusions, and make generalizations, yet each demands very similar mental processes. In Program E, students are taught two skills for determining importance: main idea and details, and later important and unimportant ideas. Giving different names to the same cognitive process may lead students to believe that they need to learn a new skill when in fact they may have already learned and practiced the mental process. Using multiple labels for the same skill might even confuse teachers, especially those who are directed to follow the program with fidelity. One goal in comprehension instruction is to make the underlying thinking of good readers public and obvious. Using multiple terms for the same mental process hinders these efforts.

Third, the comprehension curricula in the core programs classify elements of text structure and genre knowledge as skills. Research supports the teaching of text structure and genre (Duke & Pearson, 2002; NICHD, 2000; Williams, 2006), but not in the fractionated way common in core reading programs. Reality and fantasy is taught separately from narrative structure. In two programs, character development and setting are introduced before general narrative structure. Finally, all programs offer instruction in what the researchers labeled response to text, such as comparing and contrasting or making judgments about what was read. The large number of skills and strategies taught in the core programs means all get superficial treatment, often at the pace of one skill a week. Programs A and D introduce one skill a week and review one or more skills each week. Program B offers a more concentrated focus on just two or three skills/strategies over the course of one six-week unit plus review of previously taught skills. Program C is the only program that regularly has teachers and students use the same set of seven comprehension strategies, approaching a multiple-strategies routine. In several programs, important skills/strategies, like using narrative structure, get the same attention as do steps in a process or fact and nonfact [sic]. Students' repertoire of strategies does not gradually expand, and teachers are not encouraged to engage their students in the use of multiple strategies. This practice is not consistent with the growing research base on multiple-strategy instruction (Brown, 2008; Palincsar & Brown, 1984; Pressley et al., 1992).

The core programs have not clarified the distinction between what is a comprehension skill and what is a strategy. In some programs, every mental act is labeled both a skill and a strategy, whereas in other programs some mental acts are labeled skills, others strategies. In two programs, a skill/strategy can be first taught as a skill and then as a strategy; only Program C draws a clear distinction between comprehension skills and strategies. Failing to make a distinction, the programs do not acknowledge when students are acquiring a new strategy that demands deliberate and thoughtful use and when a strategy has become a reasonably automatic skill (Afflerbach, Pearson, & Paris, 2008). It is questionable whether teachers and their students fully understand the curriculum they are, respectively, teaching and learning.

Comprehension Instruction

Durkin (1981) criticized core programs for engaging in too much assessment and practice at the expense of instruction. The current core programs continue some of those problems, but with noticeable improvements. The most frequently missing elements of direct explanation are a focus on the thinking process that underlies a strategy and conditional knowledge, stressing when and why a skill or strategy is important (Duffy et al., 1986). All of the programs include modeling of skills and strategies by teachers, but very seldom are students asked to model the skills and strategies themselves. In some programs, over 70% of the instructional moves during the reading of the text are questions, with little modeling and guided practice. The core programs still need to find ways to help students interact with texts in ways that go beyond questioning. None of the programs suggest the reciprocal dialogues promoted by Palincsar and Brown (1984) or Brown et al. (1996). Although not part of this analysis, it is important to note that the 2008 California edition of StoryTown employs the queries of Beck and her colleagues that help students relate one text segment to another or text-based ideas to readers' prior knowledge (Beck, McKeown, Sandora, Kucan, & Worthy, 1996).

Although Durkin (1981) found that 18% of instruction involved practice, the researchers found that independent practice was limited to less than 10% of the instructional moves. The relatively modest amount of independent practice in the contemporary programs may be a function of the program segments the researchers studied. They limited their analysis to the teacher editions, which included facsimile pages of the student's edition and the primary workbook, but did not study the additional workbooks that accompany the programs.

Release of Responsibility

If the release-of-responsibility model (Pearson & Gallagher, 1983) is the preferred manner of assisting students to acquire and internalize strategies, then none of the programs have employed this model with any consistency, and some not at all. The missing link in most programs is the lack of guided practice and of opportunities for students to model the strategies. In Programs A, B, D, and E, the instructional design moves from direct explanation to questioning with very limited guided practice. Students are not guided to acquire and try out the strategies. Program C is the exception; it employs explanations, modeling, and guided practice while students read in some, but not all, instructional units.

Spacing and Timing of Instruction

Finally, the researchers found that none of the programs cover comprehension skills and strategies with the intensity employed by the original researchers. In most cases, students receive far fewer instructional lessons than researchers used when originally validating strategy instruction for narrative structure, making inferences, or summarizing. In some programs, a skill might be taught for just 1 week and not reemerge until 8 or 10 weeks have passed.

The programs lack massed practice when skills and strategies are first introduced and lack distributed practice throughout the instructional units (Dempster, 1987; Glenberg, 1976; Smith & Rothkopf, 1984). In some programs, critical strategies, like making inferences or summarizing, are not introduced until halfway through the year.

Limitations

The researchers did not examine how core programs develop students' knowledge and whether the knowledge that is developed is sufficient to enable comprehension. Walsh (2003) has argued that knowledge is poorly developed in core programs, and reading selections are not ordered to develop students' knowledge over a unit of study. Guthrie et al. (1998) have demonstrated that strategy instruction is more effective when embedded in a rich, meaningful context. None of the analyses examined the content of the units in core programs or the relationship between the texts students read and the comprehension skill and strategy instruction. Knowledge, strategies, and motivation (Alexander, 2003) are all essential for effective comprehension, and further curriculum studies should examine how knowledge is developed in core programs.

The researchers did not examine vocabulary instruction or vocabulary-learning strategies, critical components of text comprehension. It would be important to study how many words are taught, which words are taught, how they are taught, and how much review is provided on a weekly and monthly basis. Vocabulary and prior knowledge comprise two of the most critical components of comprehension, perhaps more important than skill and strategy instruction (Pressley, 2000).

Conclusions

Core programs provide the methods and the content of reading instruction for large numbers of classrooms in the United States. As such, they may be the most influential textbook series in the country (Chambliss & Calfee, 1998). An extensive review of effective schools in California found that although the use of core reading programs may have had a significant impact on student achievement, the programs' influence is tempered by leadership, achievement expectations, a regular assessment system, and staff development (EdSource, 2006). McGill-Franzen et al. (2006) found little evidence that the use of a core program had any impact on improving the reading achievement of at-risk students. The structure of core reading programs and the methods of instruction may contribute to their negligible impact on at-risk students.

The researchers' analysis of comprehension instruction in core reading programs demonstrates several shortcomings that may undermine the programs' efficacy. First, the comprehension skills and strategies curricula are wide but not terribly deep. Core programs should be educative for teachers and students (McGill-Franzen et al., 2006), helping both understand how readers develop. Core programs do not provide enough practice to ensure that any given skill will be learned, and this probably jeopardizes the weakest readers in the room. Finally, the core programs do not provide sufficient support or scaffolding so that students can learn to use these skills on their own. Too often the instructional lessons move from teaching to questioning or assessment, without guided practice. Although validated comprehension strategies are taught, thus partially justifying the label "scientific-based reading research," they are not taught with the rigor, persistence, or design principles needed to ensure students' acquisition of these strategies.

The problems that continue to plague core reading programs stem from the process used to develop them. Core reading programs are the products of three competing interests (Wile, 1994): those of the author team, the publishers and editors, and the marketing and sales people. The author team, typically composed of reading educators and researchers, brings to the program-development process both a broad background in reading research and each author's specific area of expertise. Wile (1994) has claimed that the author team has the least influence on the construction of the program.

This research has important implications for public schools and publishing companies. Much of what exists in core programs is useful, but schools and their teachers need to know that all core programs have flaws. Many of the instructional guidelines that are inherent parts of reciprocal teaching (Palincsar & Brown, 1984) or transactional strategy instruction (Brown, 2008) can be incorporated into the comprehension instruction in core programs. Teachers can do more than ask their students comprehension questions. The asking of an inference or main-idea question by the teacher may undermine the very metacognitive process that students need to acquire: the decision to invoke a strategy when the reader needs it. Ultimately, teachers and schools need to see core programs as a structure, one that provides text and a general curriculum but also allows for elaboration.

Core programs need to provide a clearer rationale for what is taught, when it is taught, how it will be taught, and how often it will be reviewed. If comprehension instruction makes sense to teachers, then it is likely that comprehension instruction will make more sense to students. This may require editors to write more and teachers to read more. Core reading programs can more closely reflect the research base on comprehension instruction, but schools must allow for teacher judgment and innovation in comprehension instruction, and publishers must attempt to adhere more closely to what the research says about the content and methods of reading instruction.

The selection and adoption of an effective, research-based core reading program in the primary grades is a critical step in the development of an effective schoolwide reading initiative. The investment in identifying a core program that aligns with research and fits the needs of learners in your school will reap long-term benefits for children’s reading acquisition and development.

A critical review of reading programs requires objective and in-depth analysis. For this reason, we offer the following recommendations and procedures for analyzing critical elements of programs, using the Consumer's Guide to Evaluating a Core Reading Program Grades K–3: A Critical Elements Analysis. First, we address questions regarding the importance and process of adopting a core program. Next, we specify the criteria for program evaluation, organized by grade level and reading dimensions. Finally, we offer guidelines regarding instructional time, differentiated instruction, and assessment. We trust you will find these guidelines useful and usable in this significant professional process.

Ideally, every teacher involved in reading instruction would be involved in the review and selection of the core reading program. Realistically, a grade-level representative may be responsible for the initial review and reduce the “possible” options to a reasonable number. At minimum, we recommend that grade-level representatives use the criteria that follow and then share those findings with grade-level teams. 

Schools often ask whether the adoption should be K-6 or whether a K-3/4-6 split adoption is advisable. Ideally, there would be consensus across grades K-6; however, it is imperative to give priority to how children are taught to read. Therefore, kindergarten and first grade are critical and should be weighted heavily in adoption decisions. This may entail a different adoption for grades 4-6.
