Monday, March 5, 2012

Using higher order questioning to accelerate students’ growth in reading

Peterson, D. S., & Taylor, B. M. (2012). Using higher order questioning to accelerate students’ growth in reading. The Reading Teacher, 65(5), 295–304.

This article describes how elementary teachers in diverse urban schools worked to transform the talk they engaged in with children about texts, with the goal of raising students’ thinking to higher levels. The emphasis is on what the authors call “higher-order questions”; they define higher-order questioning as questioning that “requires students to think at a deeper level and to elaborate on their oral and written responses to literature” (p. 297). First, students were taught to respond to higher-order questions both orally and in writing. The ultimate goal, though, was to teach students to generate their own higher-order questions about texts, and then to use their new skills in generating and responding to higher-order questions to engage in student-led, student-centered small-group discussions about texts. The article includes several vignettes that capture the kinds of small-group discussions the researchers observed.

The authors generated three categories to classify the types of higher-order questions they observed: 1) Theme, 2) Character Interpretation, and 3) Making connections to students’ lives. The texts used with the children were primarily fiction, or at least narrative, texts. I caught myself speculating on what sorts of categories might have been generated for higher-order questions about nonfiction/expository texts. Given the current push to increase the emphasis on nonfiction texts, even in the earliest grades, I wondered why the authors chose to focus on narratives. Narrative texts are often thought to be easier to comprehend than expository texts (though I’m not sure I completely believe that is true), and because this new emphasis on higher-order questions probably represented a big change in how reading comprehension was perceived in these elementary schools, working with fiction may have been seen as the first step. We don’t really get the full rationale for that in this article, though reference is made to an online version of this research report that may contain that information; I plan to check that out.

Another question the article raised for me involves my desire for more information on assessments. The students here are described as “making accelerated growth in their reading achievement” (p. 299). On what basis was that assessment made? How exactly was the success of this push for higher-order questioning documented? I realize that this information may well be in the longer online article, but even brief references to the assessments used would have strengthened this one without taking up much space or going into excruciating detail.

Overall, though, I found the article helpful and hopeful. It provides concrete, authentic examples of what higher-order thinking and talk might sound like in an elementary classroom. Children are portrayed having meaningful and engaging conversations about literature, and that is a breath of fresh air. The recently adopted Common Core Standards may be an impetus toward raising the bar on student thinking, and those standards are referenced briefly in the article. My biggest worry about the push for “higher-order” thinking is that, as with any reform linked to high-stakes testing, allocation of scarce resources, and political agendas, there will be an inevitable push toward all things that can be quantified, packaged, and sold. I hope the changes that led to the kinds of student talk we see in this article won’t ultimately be reduced to formulaic models, scripts, and programs. I worry, but I’m still hopeful that the kind of collaborative work teachers did here to change the way they and their students thought and talked about texts will be the trend that spreads in this country. The key here was the development of human resources and learning, not the development of materials and models. As long as we as educators keep asking some higher-order questions of our own, we will be on the right track.

Twenty Discussion Prompts:

1. How would you define a “higher-order question”? What makes a question a “higher-order” one vs. a “lower-order” one? Give your own examples of higher-order questions. Then give examples of what you consider lower-order questions. What are the differences in the kinds of wording you used to frame the two kinds of questions?

2. Theoretical classifications are sometimes used to label questions as relatively high-level or low-level. Two classification systems that might be used to classify questions are Bloom’s Taxonomy with its six categories and Norman Webb’s four Depth of Knowledge categories. Which of the categories in Bloom’s and Webb’s models could be used to describe high-level questions?

Information on Bloom’s Taxonomy:

http://www.casdk12.net/ghs04/SRB/5-Curriculum/Blooms%20Taxonomy%20chart.pdf


Information on Webb’s Depth of Knowledge:

http://dese.mo.gov/divimprove/sia/msip/DOK_Chart.pdf


3. Look at the two classroom vignettes on page 296 of the article. What seem to be the goals of the first vignette? What seem to be the goals of the second vignette? How else do the two vignettes differ?

4. How has reading comprehension traditionally been assessed? How might higher-order comprehension be assessed? How would/should a focus on higher level thinking change the way reading comprehension is assessed?

5. Look at your state’s standards for reading. Which outcomes would you consider “higher-order” outcomes?

For an example of one state’s document, see the Missouri Grade Level Expectations for Communication Arts, which may be accessed at:

http://dese.mo.gov/divimprove/curriculum/GLE/


6. Look at the Common Core Standards that have recently been adopted by many states. These standards have been described as “rigorous” and involving a high level of thinking that students will need for later success in college and careers. Do you think the Common Core Standards represent high-level thinking? Why or why not?

For information on the Common Core Standards, go to:

http://www.corestandards.org/the-standards


7. Look at the table on page 297 of the article, which lists three types of higher-order questions (Theme, Character Interpretation, Making connections to students’ lives) and gives examples of each type of question. Do you agree that the example questions are high-level questions? Why or why not?

8. Discuss how the following kinds of teacher talk suggested by the article’s authors could help scaffold higher-order thinking:

• “If someone were to ask me that question, I might answer it this way . . .”

• “Please tell me more about that.”

• Complete the following: “I followed my dream when . . .”

• “Each of you will have a special role in your discussion group.”

• “If you agree with something one of your group members says, say, ‘I agree with that because . . .’”

• “Remember to follow the four discussion guidelines we have posted here on the wall.”

9. Look at the exchange between Mr. Flemings and Jorge on page 298. Do you think the “coaching” Mr. Flemings provided has actually helped Jorge respond at a higher level?

10. Look at the four “discussion guidelines” from Ms. Mallory’s third grade classroom (page 298). Are these guidelines sufficient? Are they developmentally appropriate? What cultural norms do they reflect? Can you think of examples where these guidelines might conflict with students’ cultural norms?

Kathryn Au’s classic research with Hawaiian students revealed that children from some cultural backgrounds may bring response patterns that differ from the orderly turn-taking traditionally honored in most schools. The Hawaiian children in Au’s study were used to response patterns in which several group participants might chime in together, and they got in trouble with their teachers for not taking turns. Look at this research at the following link and discuss its significance in light of this article and today’s culturally diverse classrooms.

For an article by Kathryn Au on culturally responsive teaching, see:

http://www.reading.org/General/Publications/ReadingToday/RTY-0912_culturally_responsive.aspx


11. Scan through the discussion among the children named Long, Molly, Khalid, Jack, and Samantha on pp. 298-299. Identify places in the discussion where you think you can spot “higher-order thinking” and explain why those comments caught your eye.

12. Why does a discussion of theme in a text often lead to higher-order thinking? Do you think the children whose discussions are presented here really understood the notion of theme, at least at an appropriate level for their age?

13. Some people believe teachers should not ask young children, or children who are still struggling to acquire basic literacy skills, to engage in the kinds of higher-order thinking about texts that the authors of the article recommend. What might be the rationale behind such a belief? What might be the rationale for building higher-order thinking even if basic reading and writing skills have not yet been mastered? What do you think?

14. The vignettes in the article seem to be mostly about fiction texts; one biographical text is described, but it still seems to be a narrative. The three categories the authors used to classify questions (Theme, Character Interpretation, Making connections to students’ lives) seem better suited to narrative texts than to the expository texts found in various content areas. What would higher-order discussion of nonfiction/expository texts look like? What categories might be proposed to classify higher-order thinking about nonfiction texts?

15. Read through the section under the heading “Classroom Examples of High-Level Questioning” (pp. 299-301). The authors show examples of questions in their three classification categories (Theme, Character Interpretation, Making connections to students’ lives). Using a text you use (or might use) with your own present or future students, attempt to generate a few higher-order questions that could be classified under each category. If possible, share your questions with other educators. Critique each other’s questions, and discuss any challenges or difficulties that arose as you attempted to generate your questions.

16. Is it always necessary for a student to identify with a character or to connect a text to prior experience if that student is to fully comprehend and appreciate that text? For what kinds of texts would making such connections be relatively easy? For what kinds of texts would it be a challenge?

17. Teachers in the study reported here made instructional changes with the help of their colleagues. Why was such support and scaffolding so critical to the change process?

18. What kinds of administrative support were needed to make sure the required level of collaboration could occur? What kinds of resources needed to be present, and allocated, in order to make instructional change possible?

19. The project is presented in the article in a way that makes it seem as if all of the teachers involved were fully on board with the change, but realistically, in most change processes some resistance can be expected. What kinds of resistance might emerge when a school is working to move students toward higher-order thinking? How might those supporting the change respond to that resistance?

20. Describe the role of the literacy coach in the change process. What were some specific literacy coach behaviors that facilitated the processes of professional learning and instructional change?

Monday, February 6, 2012

Putting fluency on a fitness plan: Building fluency’s meaning-making muscle

Marcell, B. (2011). Putting fluency on a fitness plan: Building fluency’s meaning-making muscle. The Reading Teacher, 65(4), 242–249.

This balanced, sensible look at fluency, that controversial “pillar” of reading, comes from an elementary school literacy teacher. It’s a concise, useful piece that covers the history of the emphasis on fluency, including the development of the research base on repeated readings; the problems with current forms of fluency assessment; and some possible ways of getting beyond seeing fluency as only its most quantifiable aspects (speed and accuracy) so that the aspects of fluency related to meaning-making (expression and especially comprehension) receive their proper emphasis. Marcell does not recommend throwing out the current literacy assessments that stress only speed and accuracy, as some opponents of fluency assessment propose; such assessments are presented here as useful screening devices when they are augmented by additional assessments of expression and comprehension.

Marcell proposes an acronym, REAL (Rate, Expression, Accuracy, Learning), and presents two specific assessment tools based upon it. The first tool is a “student-friendly” rubric designed to help students self-assess their fluency on all four aspects (I’m thinking this would be a good tool for teachers and students to use collaboratively). The second tool, called Repeated Readings Revisited, is designed to take repeated readings a few steps further than is often the case, that is, to give them more “meaning-making muscle” than just having students read through a passage orally and counting correct words per minute. The tool takes readers to higher levels of comprehension on each successive reading: the first reading is for main ideas and details, the second for understanding the author’s purpose and paraphrasing main ideas, and the third and final reading for telling what the reader found most interesting and why, evaluating the title, and identifying the author’s intentions. The Repeated Readings Revisited tool does have places to record correct words per minute, but it puts those aspects of fluency in their proper place. Comprehension is the bottom line of reading, and that is clearly illustrated here.

This article sounds like the “voice of reason” on fluency to me, and I hope many classroom teachers and literacy specialists will read it and try what Marcell suggests. The article, short as it is, covers a lot of important ground and is written in an engaging and accessible style. Because Marcell weaves in classroom vignettes that will resonate for many teachers, the article has authenticity and credibility. Yet even though this is an article aimed at practitioners, Marcell’s well-grounded, clearly narrated chronicle of the research on fluency and repeated readings makes it credible for researchers and teacher educators as well.

A few concerns arose for me as I read. First, I wondered a little about the “student-friendly” rubric, especially the descriptions for students who are not meeting expectations. Although I believe we must level with students about whether or not they are meeting learning goals (they know anyway), I think some of the wording here might be a bit discouraging for some of the most challenged readers. I could be hypersensitive about this, and many learners might be fine with this wording, but I know children who might have hurt feelings if words like “weird” or “flat” were used to describe their reading. I’d probably make a few tweaks on the wording before using this tool with children.

My other concern is that one of the reasons the typical fluency assessments that count correct words per minute are so popular is that they take only a minute to administer. I worry that teachers and administrators will not want to trade the “quick and dirty” but easily quantifiable assessments for ones that may take longer and won’t provide numbers and so-called “objectivity.” Let’s face it: if you also assess expression and comprehension as Marcell suggests, that is going to take more time and be a bit less quantitative than only counting correct words per minute. Don’t get me wrong: I think it SHOULD take more than a minute, and with the high stakes placed on fluency assessments these days, assessments really need to look at fluency in its entirety rather than just at things that can be quickly and easily counted. Even more importantly, we need to stop teaching children that reading fast and pronouncing words are all there is to reading. As Marcell so convincingly points out, we need to stop sending mixed messages to children about reading. Children quickly pick up that what is assessed is what is valued in school; assessing only rate and accuracy while also teaching that reading strategies and meaning-making are important may be even worse than sending mixed messages. It may send a very clear and definite message about what is REALLY valued, while at the same time teaching that what adults SAY is not important if something different is what actually counts. In sum, I am all for making the changes Marcell suggests, but I worry that quick, quantitative assessments are so seductive in today’s accountability-charged schools that it may be difficult for some educators to let go of them.
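
For readers unfamiliar with how these one-minute measures are scored, part of their appeal is that the arithmetic is trivial. Here is a minimal sketch of the standard words-correct-per-minute calculation; the function name and the sample numbers below are mine, invented purely for illustration (they are not from Marcell’s article):

```python
# Words correct per minute (WCPM), the score behind typical one-minute
# fluency screenings: words attempted minus errors, scaled to one minute.
def wcpm(words_read: int, errors: int, seconds: float) -> float:
    return (words_read - errors) * 60.0 / seconds

# A hypothetical one-minute reading: 112 words attempted, 4 errors.
print(wcpm(112, 4, 60))  # 108.0 correct words per minute
```

Note how little the number tells us by itself: nothing in that calculation captures expression or comprehension, which is exactly the gap Marcell’s REAL tools are meant to fill.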

The above concerns, however, do not dim my appreciation of this article. I definitely plan to share it with the future teachers in my own preservice literacy education courses, and maybe with my literacy study group. Articles in recent issues of this journal have begun including some nice extras that are helpful for those of us involved in teacher education and professional development, namely the sidebars: “Pause and Ponder,” which provides some pithy discussion/reflection prompts; “Take Action!” which suggests ways to link theory/research with practice; and “More to Explore,” which provides resources for those who want to learn more. These sidebars are particularly apt for this article and, combined with Marcell’s assessment tools, make this a real “keeper” of an article with a lot of meat in a few pages.

Twenty Discussion Prompts:

1. What makes a good reader? What do good readers do?

2. What is “fluency”? How does fluent reading sound? What do fluent readers do?

3. On page 246 of the article, Marcell outlines his acronym for the four elements of fluency: R = Rate, E = Expression, A = Accuracy, and L = Learning. Obviously these four elements were named and ordered to spell out the acronym. What one-word synonyms might be used for each of the elements in the REAL model?

4. If you were forced to rank the elements in Marcell’s REAL model in order of importance (without worrying about disrupting the REAL acronym!), how would you rank them? How do you think Marcell would rank them?

5. If you have used current literacy assessments with students (or maybe AS a student who was being assessed!), share your experiences. Have you had an experience like the one the author had with “Amelia”?

6. How is literacy typically assessed today? What aspects of fluency are most emphasized with these assessments?

7. What are problems that can result from stressing only reading rate and reading accuracy? What are the advantages to sticking to those two fluency elements?

8. Students who struggle with reading are the students most likely to experience fluency assessments in school. Why?

9. Assessment can be a powerful vehicle for teaching students what kinds of literacy learning and literacy behavior we think are most important. How can that power be used beneficially? How can it be used detrimentally?

10. Look at some of the wording in Figure 1, the REAL Student-Friendly Rubric (p. 246). Some educators might say in a few spots the wording is unnecessarily negative, and that especially students who struggle with reading might be discouraged or even upset by some of that wording. What do you think? How would you feel if your teacher said your reading was “flat” or “like a robot”?

11. How would you feel if you were told you were reading “below the target rate”? How might knowing where they stand in comparison with others potentially help a struggling reader? How might that knowledge be harmful? What is the best balance here?

12. What are some of the “mixed messages” we send to students about reading? What causes us to send mixed messages? What are the effects of mixed messages on students and on teachers? How can we avoid sending mixed messages?

13. Look at some of the fluency benchmarks proposed by literacy experts for students at various grade/age levels. How useful are such benchmarks for teachers? Some researchers claim that students’ scores on measures of “correct words read per minute” DO predict their scores on reading comprehension assessments. Logically, why might fluency predict comprehension?

For some good information, go to www.readingrockets.org and enter the word “fluency” into the search engine. To look at norms for fluency, try this article:

http://www.readingrockets.org/article/31295/

(NOTE: It is important to remember that when we talk about scores on one measure predicting scores on another, we are talking about determining correlations. A correlation coefficient does NOT signify that one factor CAUSES another; it only means that movement on one factor is related to movement on another factor—either in the same direction or in opposite directions. That is, two factors may rise together, fall together, or one factor may rise when the other factor falls.)
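
To make the statistical idea concrete, here is a minimal sketch of how a correlation coefficient (Pearson’s r) is computed. The code and the student scores in it are hypothetical, invented for this post rather than taken from any real assessment:

```python
# Minimal illustration of Pearson's correlation coefficient (r).
# All numbers below are hypothetical, invented only to show the arithmetic.
from math import sqrt

# Hypothetical scores for six students: words correct per minute (WCPM)
# and a comprehension score on a 0-100 scale.
wcpm_scores = [62, 75, 88, 95, 110, 124]
comprehension = [55, 60, 72, 70, 85, 90]

def pearson_r(x, y):
    """Covariance of x and y divided by the product of their spreads."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    spread_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (spread_x * spread_y)

print(f"r = {pearson_r(wcpm_scores, comprehension):.2f}")  # close to +1
# An r near +1 means the two measures rise together; an r near -1 means one
# rises as the other falls. Neither says anything about WHY they move
# together, which is the correlation-vs.-causation point above.
```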

14. What is meant by “barking at print” (p. 243)? What beliefs underlie the use of this metaphor to describe what some students do when they read? What beliefs underlie the use of the term “word callers” to describe some students’ reading behavior? How do such metaphors originate?

15. What barriers might discourage or prevent teachers from adding the improvements to fluency assessment that Marcell proposes (i.e., extending the reading, adding comprehension probes, adding self-assessments, error analysis, corrective feedback, and strategy instruction)?

16. How do “repeated readings” work? Why does research show they improve fluency? What is the logic behind those research results? What aspects of fluency from the REAL model would you expect repeated readings to develop the most? The least?

17. Look closely at Figure 3, Example of an All-Encompassing Graph (p. 247). Look at each element of fluency that the graph assesses, and at how each is captured and documented. How valid is this assessment tool (do you think it measures fluency)? How reliable would it be in use (would two or more reviewers score the same reader similarly)? How practical is it for the classroom, and how likely is it to be used consistently? Would you use it with your students?

18. What is the main difference between the way Rate and Accuracy are measured in Figure 3 and the way Comprehension (which Marcell calls “Learning”) and Expression are measured? What does that tell us about those two pairs of fluency elements?

19. Marcell notes that good results have been reported for using poetry and Readers Theatre to build fluency. What is Readers Theatre? How might it be beneficial for students needing to build fluency? The www.readingrockets.org website has much good information; simply enter “Readers Theatre” into the search engine. One article to start with is:

http://www.readingrockets.org/article/39/

20. What is prosody? How is it related to comprehension for those who listen to a reader? How is it related to comprehension for the one who is reading?