Monday, February 6, 2012

Putting fluency on a fitness plan: Building fluency’s meaning-making muscle



Marcell, Barclay (2011). Putting fluency on a fitness plan: Building fluency’s meaning-making muscle. The Reading Teacher, 65(4), 242-249.



This balanced, sensible look at fluency, that controversial “pillar” of reading, comes from an elementary school literacy teacher. It’s a concise, useful piece that covers the history of the emphasis on fluency, including the development of the research base on repeated readings, the problems with current forms of fluency assessment, and some possible ways of getting beyond seeing fluency as only its most quantifiable aspects (speed and accuracy) so that the aspects of fluency related to meaning-making (expression and especially comprehension) receive their proper emphasis. Marcell does not recommend throwing out the current literacy assessments that stress only speed and accuracy, as some who oppose fluency assessments propose; he presents such assessments as useful screening devices when they are augmented by additional assessments of expression and comprehension.

Marcell proposes an acronym, REAL (Rate, Expression, Accuracy, Learning), and presents two specific assessment tools based upon that acronym. The first tool is a “student-friendly” rubric designed to help students self-assess their fluency on all four fluency aspects (I’m thinking this would be a good tool for teachers and students to use collaboratively). The second tool, called Repeated Readings Revisited, is designed to take repeated readings a few steps further than is often the case, that is, to give them more “meaning-making muscle” than just having students read through a passage orally and looking at correct words per minute. The tool takes readers to higher levels of comprehension on each successive reading. The first reading is for main ideas and details, the second reading is for understanding the author’s purpose and paraphrasing main ideas, and the third and final reading is for telling what the reader found most interesting and why, evaluating the title, and identifying the author’s intentions. The Repeated Readings Revisited tool does have places to record correct words per minute, but it puts those aspects of fluency in their proper place. Comprehension is the bottom line of reading, and that is clearly illustrated here.



This article sounds like the “voice of reason” on fluency to me, and I hope many classroom teachers and literacy specialists will read it and try what Marcell suggests. The article, short as it is, covers a lot of important ground and is written in an engaging and accessible style. Because Marcell weaves in classroom vignettes that will resonate for many teachers, the article has authenticity and credibility. Yet even though this is an article aimed at practitioners, Marcell’s well-grounded, clearly narrated chronicle of the timeline of research on fluency and repeated readings makes this article credible for researchers and teacher educators as well.


A few concerns arose for me as I read. First, I wondered a little about the “student-friendly” rubric, especially the descriptions for students who are not meeting expectations. Although I believe we must level with students about whether or not they are meeting learning goals (they know anyway), I think some of the wording here might be a bit discouraging for some of the most challenged readers. I could be hypersensitive about this, and many learners might be fine with this wording, but I know children who might have hurt feelings if words like “weird” or “flat” were used to describe their reading. I’d probably make a few tweaks on the wording before using this tool with children.



My other concern is that one of the reasons the typical fluency assessments that count correct words per minute are so popular is that they only take a minute to administer. I worry that teachers and administrators will not want to change the “quick and dirty” but easily quantifiable assessments for those that may take longer and won’t provide numbers and so-called “objectivity.” Let’s face it: If you also assess expression and comprehension as Marcell suggests, that is going to take more time and be a bit less quantitative than only counting correct words per minute. Don’t get me wrong—I think it SHOULD take more than a minute, and with the high stakes placed on fluency assessments these days, assessments really need to look at fluency in its entirety rather than just looking at things that can be quickly and easily counted. Even more importantly, we need to stop teaching children that reading fast and pronouncing words are all there is to reading. As Marcell so convincingly points out, we need to stop sending mixed messages to children about reading. Children will quickly pick up that what is assessed is what is valued in school; assessing only rate and accuracy while also teaching that reading strategies and meaning-making are important may be even worse than sending mixed messages. It may be sending a very clear and definite message about what is REALLY valued, while at the same time teaching that what adults SAY is not important if something different is what actually counts. In sum, I am all for making the changes Marcell suggests, but I worry that quick and quantitative assessments are so seductive in today’s accountability-charged schools that it may be difficult for some educators to let go of them.



The above concerns, however, do not dim my appreciation of this article. I definitely plan to share it with the future teachers in my own preservice literacy education courses, and maybe with my literacy study group. Articles in recent issues of this journal have begun including some nice extras that are helpful for those of us involved in teacher education and professional development, namely, the sidebars “Pause and Ponder,” which provides some pithy discussion/reflection prompts, “Take Action!,” which suggests ways to link theory/research with practice, and “More to Explore,” which provides some resources for those who want to learn more. These sidebars are particularly apt for this article, and combined with Marcell’s assessment tools, they form a real “keeper” of an article with a lot of meat in a few pages.



Twenty Discussion Prompts:


1. What makes a good reader? What do good readers do?



2. What is “fluency”? How does fluent reading sound? What do fluent readers do?


3. On page 246 of the article, Marcell outlines his acronym for the four elements of fluency: R = Rate, E = Expression, A = Accuracy, and L = Learning. Obviously these four elements were named and ordered to spell out the acronym. What are one-word synonyms that might be used for each of the elements in the REAL model?


4. If you were forced to rank the elements in Marcell’s REAL model in order of importance (without worrying about disrupting the REAL acronym!), how would you rank them? How do you think Marcell would rank them?



5. If you have used current literacy assessments with students (or maybe AS a student who was being assessed!), share your experiences. Have you had an experience like the one the author had with “Amelia”?



6. How is literacy typically assessed today? What aspects of fluency are most emphasized with these assessments?



7. What are problems that can result from stressing only reading rate and reading accuracy? What are the advantages to sticking to those two fluency elements?


8. Students who struggle with reading are the students most likely to experience fluency assessments in school. Why?



9. Assessment can be a powerful vehicle for teaching students what kinds of literacy learning and literacy behavior we think are most important. How can that power be used beneficially? How can it be used detrimentally?



10. Look at some of the wording in Figure 1, the REAL Student-Friendly Rubric (p. 246). Some educators might say in a few spots the wording is unnecessarily negative, and that especially students who struggle with reading might be discouraged or even upset by some of that wording. What do you think? How would you feel if your teacher said your reading was “flat” or “like a robot”?



11. How would you feel if you were told you were reading “below the target rate”? How might knowing where they stand in comparison with others potentially help a struggling reader? How might that knowledge be harmful? What is the best balance here?


12. What are some of the “mixed messages” we send to students about reading? What causes us to send mixed messages? What are the effects of mixed messages on students and on teachers? How can we avoid sending mixed messages?


13. Look at some of the fluency benchmarks proposed by literacy experts for students at various grade/age levels. How useful are such benchmarks for teachers? For some good information, go to www.readingrockets.org and enter the word “fluency” into the search engine. To look at norms for fluency, try this article: http://www.readingrockets.org/article/31295/ Some researchers claim that students’ scores on measures of “correct words read per minute” DO predict their scores on reading comprehension assessments. Logically, why might fluency predict comprehension?

(NOTE: It is important to remember that when we talk about scores on one measure predicting scores on another, we are talking about determining correlations. A correlation coefficient does NOT signify that one factor CAUSES another; it only means that movement on one factor is related to movement on another factor—either in the same direction or in opposite directions. That is, two factors may rise together, fall together, or one factor may rise when the other factor falls.)

14. What is meant by “barking at print” (p. 243)? What beliefs underlie the use of this metaphor to describe what some students do when they read? What beliefs underlie the use of the term “word callers” to describe some students’ reading behavior? How do such metaphors originate?



15. What barriers might there be to discourage or prevent adding the improvements to fluency assessments that Marcell proposes (i.e., extending the reading, adding comprehension probes, adding self-assessments, error analysis, corrective feedback, strategy instruction)?



16. How do “repeated readings” work? Why does research show they improve fluency? What is the logic behind those research results? What aspects of fluency from the REAL model would you expect repeated readings to develop the most? The least?



17. Look closely at Figure 3, Example of an All-Encompassing Graph (p. 247). Look at each element of fluency that the graph assesses, and how it is captured and documented. How valid is this assessment tool (do you think it measures fluency)? How reliable would it be in use (would two or more reviewers score the same reader similarly)? How practical is it for the classroom, and how likely is it to be used consistently? Would you use it with your students?


18. What is the main difference between the way Rate and Accuracy are measured in Figure 3 and the way Comprehension (which Marcell calls “Learning”) and Expression are measured? What does that tell us about those two pairs of fluency elements?



19. Marcell reports that some good results have been reported for using poetry and Readers Theatre to build fluency. What is Readers Theatre? How might it be beneficial for students needing to build fluency? The www.readingrockets.org web site has much good information; simply enter “Readers Theatre” into the search engine. One article to start with is: http://www.readingrockets.org/article/39/


20. What is prosody? How is it related to comprehension for those who listen to a reader? How is it related to comprehension for the one who is reading?
