Literacy Research in South Auckland: a Critique


Education Policy Group

College of Education

Massey University

In recent months, the Ministry of Education, the Minister of Education and the media have given favourable attention to literacy research and professional development carried out in Mangere and Otara. This attention has been largely uncritical, and parents, teachers and members of the public might be excused for thinking that this research has been found to be valid and significant by the research community in New Zealand.

The original research (Picking up the Pace) was an educational research project, supported by a Ministry of Education contract and carried out in 12 low-decile schools by academics at Auckland University (Phillips, McNaughton and MacDonald, 2002). The study attempted to raise the level of reading attainment of new entrants by means of a type of professional development for the teachers. It claimed to have shown that children in these low-decile schools came to achieve at the level of the average child, and hence that the work of teachers is more significant than the home background of the children.

The initial study was followed by further research (in 7 of the original 12 schools) into the sustainability of the model of professional development used in PACE (Timperley, Phillips and Wiseman, 2003). This phase was entitled Shifting the Focus. The two reports have been widely publicised as ‘proving’ that teacher expectations are the key to student achievement, and as showing that teachers typically have low expectations of children in low-decile schools: changing these expectations improves the achievement of children in these schools, and the improvement is sustained. We argue that these claims are false: the original study was seriously flawed, and the conclusions drawn from it are seriously misleading.


Both these studies are part of a much larger project, Strengthening Education in Mangere and Otara (SEMO), which aimed to raise achievement significantly for students in these communities. Although the two major studies are closely connected, we propose to deal with them separately.

The PACE research.

Among the serious deficiencies are the following:

Control groups are inadequate for experimental research on which significant policy decisions are to be based. In this context it is interesting to note that on January 8th, 2002, the President of the United States signed into law the No Child Left Behind Act, which reauthorised the Elementary and Secondary Education Act (ESEA). Under this Act, teacher training institutions are bound to ensure that what teachers learn is ‘based on scientifically based research’, which involves ‘rigorous, systematic, and objective procedures to obtain reliable and valid knowledge by employing systematic methods of observation or experimentation’ (Cochran-Smith, 2002, p. 188). In terms of the Act, the only research which is regarded as reliable and useful for teachers is that based on randomised experiments and related designs, including careful use of control groups. While we would not want to advocate such a narrow view of research, the Ministry of Education might take notice that the ‘best evidence’ which its recent publications stress requires a robust notion of what is and what is not ‘evidence’. (It is NOT simply a summary of all studies which seem in some way relevant.) It has to be acknowledged that the researchers describe their study as ‘quasi-experimental’ and that they compare it with both a ‘baseline group’ and a ‘non-intervention group’. However, we know nothing about the characteristics of the former other than their ages and the fact that they are in decile one schools. The latter are ‘groups of children attending the same school and in most cases being taught in the same classrooms’ (p. 28). In neither case is there a genuine control group, carefully matched with the experimental group and unaffected by the procedures.

Despite claims that children in the intervention group score at or close to the ‘average’ level, all the children remained well below the national average on all seven scales used to assess their reading. On four of the scales the group comes out in the lowest quartile, referred to by the OECD report as ‘the tail’ which has caused such concern about New Zealand’s reading standards. For the key measure of ‘reading text level’, the researchers, following Wylie and Thompson (1998), take 12 to be the approximate national mean for children aged 6.0. However, the mean score on this measure for children in the intervention group is 9.3. That is to say, the children scored on average about three months below age-appropriate levels. The data certainly do not support the claim made that “[the achievement of children in low decile schools] can be like any other child in New Zealand at 6.0 years” (p. 9, emphasis ours).
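(A note on the arithmetic behind the ‘three months’ figure: New Zealand children typically start school at age 5.0, so a national mean of about 12 text levels at age 6.0 implies progress of roughly one level per month over the first year of school. On that admittedly rough assumption, the shortfall is 12 − 9.3 = 2.7 levels, or approximately three months of reading progress.)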

Nearly one third of the group being studied is lost in the process. While 108 new entrants are tested at age 5.0, only 77 remain to be tested at age 6.0. That is to say, nearly one third of the ‘sample’ about which important claims are being made is simply not tested to see whether they had improved or not. This is a large number. It is interesting to note that a Child Poverty Action Group report found that nearly one third of children in decile one schools were likely to change schools in any one year (CPAG, 2003, p. 35). The report acknowledges the loss of 29% of the sample but argues that the mean initial scores of the children who are ‘retained’ in the study are not statistically significantly different from those ‘lost’. Even if we ignore the small numbers on which this claim is based, it does not affect our criticism. Attrition on this scale cannot be assumed to have no biasing effect. If the attrition rate is mainly the result of mobility, and if mobility affects school progress, this alone could explain any improvement in the scores of those remaining. The effect of mobility is noted in one of the Ministry’s “Best Evidence” documents: “Children who have frequent changes of school (for a variety of reasons) tend, on average, to have lower levels of achievement than their less mobile peers” (Biddulph, Biddulph and Biddulph, 2003, p. 98). Thus, we are unconvinced by data showing that the ‘retained’ group and the ‘lost’ group were similar at the outset.
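(The ‘nearly one third’ and ‘29%’ figures are the same number rounded differently: (108 − 77)/108 = 31/108, which is approximately 29%, or just short of one third.)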

In a recent call for professional development proposals on literacy, the Ministry cites this research as part of the justification for the expenditure of some ten million dollars. If the research is as flawed as we are suggesting, it might prove very expensive to the public purse. The ‘public’ may be interested to know that so much money is to be spent on proposals based on research which is highly debatable.

The Focus research.

This research is subtitled “A Summary of the Sustainability of Professional Development in Literacy.” Among the problems are:

It is clear from the data that the literacy gains are not sustained at the predicted level. Even if we concede that some gains are made and sustained at some level over 18 months, it is clear that (a) the children at final testing were nowhere near the average for their age on the Burt or text tests of reading, and (b) the belief of the authors that the programme had raised the mean and set the children on a rising path is not demonstrated. On the contrary, the authors interpret their own data (Figure 3.2, p. 52) as showing that “the effect sizes overall were relatively small” (p. 52), and having analysed progress at each of the seven schools (Figure 3.4, p. 55), the authors state: “Only school B showed an effect size greater than 0.4 per year which Hattie (1999) argues is the standard which should be used when judging the significance of educational innovation” (p. 56). (See NOTE 3 for a more technical elaboration of our claims.)
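(For readers unfamiliar with the statistic: an effect size here is a standardised mean gain, conventionally computed along the lines of Cohen’s d, that is, d = (mean score after − mean score before) / standard deviation of scores, so that progress is expressed in standard-deviation units rather than raw test points. Hattie’s 0.4-per-year benchmark marks the point at which, on his argument, an innovation is doing better than ordinary schooling produces anyway. The formula given is the generic one; the report’s exact computation may differ.)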

In contrast to what is claimed in the media, teacher expectations are not shown to be significant. Indeed, the researchers have no measure of teacher expectations as used in the accepted literature. Instead, they rely on interpretations of interviews and use, as a proxy for a measure of expectations, a measure of ‘teacher efficacy’. This is justified by the implausible assumption that any increase in perceived ‘teacher efficacy’ would arise from changing beliefs about the learners rather than from the new techniques learned in the teacher development programme. In fact, however, the results reported do not support the claim that perceived teacher efficacy improves over the course of the professional development. Although teachers came to accept the view of the researchers that factors outside the school are less important than factors within the school, their estimation of their own ‘efficacy’ did not change in any significant way. As far as the interviews go, it is important to note that the teachers being interviewed are not always the same teachers as originally interviewed: “In only three schools…were the original staff still teaching Year Zero/One children in both phases” (p. 34). It is remarkable that claims can be made about changes in teacher perceptions when in fact different teachers are being interviewed. Thus there is no evidence for, and some modest evidence against, the view that, as the authors put it, ‘one of the impediments to raising the achievement levels of students has been the low expectations teachers and school leaders have for their students’. Indeed, the researchers themselves effectively deny their own claim: ‘While it may be important to have positive attitudes, such attitudes on their own are not enough to make the difference to student learning.’ Nor is it enough for a teacher to be ‘highly motivated’. Rather, there is need for ‘intensive and ongoing’ course work, regular meetings with ‘a sense of urgency’, support from senior staff, methodological flexibility and a good deal of information on the achievement of children. There is much here about teachers doing new things and learning new strategies; there is nothing about expectations.

The research does NOT show that the entry characteristics of the children are insignificant in contributing to the differences between students and between schools. The researchers claim (and the politicians and media people have emphasised) that ‘the research found that contextual factors (such as students’ skills on starting school) were not significant in identifying the high achieving schools.’ If this were true it would go against several decades of research on schools both overseas (especially in Britain and the United States) and in New Zealand (e.g. Harker and Nash, 1996). It would also contradict one of the Ministry’s own ‘Best Evidence’ publications: “There is overwhelming evidence that literacy resources in the home, both materials and experiences, are crucial for children’s literacy development and achievement” (Biddulph, Biddulph and Biddulph, 2003, p. 93). The apparent discrepancy between this small South Auckland study and the ‘overwhelming evidence’ from research over several decades and across several continents is easily explained. (1) The Focus researchers fail to acknowledge that they examined only decile one schools, and hence it is to be expected that background effects, while not uniform, would be less obvious than if they were comparing decile one schools with decile ten schools (or even with decile four schools). It is a commonplace of comparative research that the closer two groups are to each other on some variable, the less that variable will distinguish between them. (2) In a somewhat contradictory move, the researchers state that individual school attainments can be compared because all the children are attending low-decile schools and hence it can be assumed that scores at entry are likely to be similar. This is just not acceptable. The Ministry’s TFEA decile system was never meant to be a proxy for the socio-economic status of individual students, and any use of it for this purpose is a clear misuse. Doug Willms (1992, p. 49), a leading authority in this area, states that if unacceptable levels of bias are to be avoided, “data on pupils’ background characteristics must be collected at the individual level and include measures of prior academic achievement or cognitive ability.” One cannot make claims about the significance of entry characteristics without some sound measure of those characteristics, and it is likely that there are important differences in the early childhood experiences of children from different cultural groups. The research does not show that home background is insignificant; it wrongly assumes it.

In concluding this discussion we want to make clear that we are not in any way opposed to the major aim of this research: to find ways of improving the performance of children, especially those who are at present not performing well. We in no way support the view that social class and educational background predetermine any child to underachievement. We believe that teachers have to be sensitive to the background of their students but must do all that they can to bring all children to acceptable levels: socio-economic data must never be used as an excuse for professional inaction. Similarly, we are not opposed to the general strategy of targeting professional development to teachers in their own schools, based on the achievement data of their own students. Nor are we unsympathetic to the real needs of Maori and Pacific Island children in schools in South Auckland, and we hope that culturally sensitive programmes will continue to be put in place for these schools. In particular we welcome two facets of this research:

It is very pleasing to note that the Ministry has rejected the dogma of the early 1990s: that the way to improve schools and teachers is to force them to compete with each other. The focus in these reports on improving schools by improving teachers is to be welcomed.

It is also gratifying to note that the Focus report argues the need to ‘deprivatise’ education and to ensure that teachers share their knowledge about teaching. The contrary trend arose from the market ideology of the 1990s; before then, teachers did co-operate and share knowledge. In the artificially competitive market foreshadowed by Tomorrow’s Schools and fostered by subsequent government policy, teachers were encouraged to keep their professional ‘secrets’ to themselves. It is good to see these reports advocating a return to the professional co-operation of the earlier period.

Summary of our criticisms:

We conclude our critique with the following summary. Despite all the uncritical praise given to these studies:

the numbers in the study (108 decreasing to 77) are too small to justify claims about teaching or to support major policy changes in the education system;

the type of ‘control’ used is inadequate; there may have been no untypical achievement gains by the group studied; the ‘loss’ of nearly one third of the group prevents any firm conclusions one way or another;

if there were any gains, they were not sufficient to bring the children up to the average level; at the end of the intervention the mean achievement of children in the seven follow-up schools remains in the lowest quartile of achievement (‘the tail’);

progress in learning was not sustained at the predicted level in the seven schools studied for this purpose;

there are no data on teacher expectations and the proxy tests (‘teacher efficacy’) fail to support claims about changing teacher expectations;

there is no evidence to support the claim that entry characteristics are insignificant in evaluating differences between schools.

It is indeed a scandal that research which rightly focuses on the education of teachers, and calls for them to be supported with sustained professional development (involving more resources), should be politicised by being falsely represented as relying on teacher ‘expectations’ and by implying (gratuitously) that teachers currently do not have high expectations for children in low-decile schools. This leaves possibly fruitful research at the mercy of ideological interpretation by politicians and newspaper editors.

NOTES

1. For fuller development of most of the arguments given above, see Roy Nash, “One PACE forward, two steps backwards: social backgrounds, school attainment and educational research”, available from the author at Massey University or R.Nash@xtra.co.nz

2. For discussion of some of the limitations of the view taken of literacy and of the tests used in the research reported here, see William E. Tunmer, Jane E. Prochnow and James W. Chapman, “Meeting of Minds or Feeding of Minds? A Commentary Review of S. McNaughton, Meeting of Minds (Wellington: Learning Media, 2002)”, New Zealand Journal of Educational Studies (in press).

3. The overall gains achieved by the children in the participating schools as a result of the professional development programme were, at best, very marginal, with an average increase on the standardised Burt Word Reading Test of less than two words, and an average increase in text level of less than two levels. These differences did, in fact, reach statistical significance (p < .01) but are hardly educationally significant. The mean scores of 12.61 on the Burt and 7.95 on text level are nowhere near average levels for children at this age and, taken together, clearly indicate that the children were performing, on average, three to six months below age-appropriate levels after only one year in school (see Table 3.2, p. 52). The researchers compare the two schools which score the highest following the intervention (the “Group Three” schools) with the rest of the schools to find out what is different about them. They note that the literacy leaders in these two schools seemed to spend more time discussing achievement data compared with the other schools (p. 11). However, the detailed secondary analyses presented by the researchers are largely meaningless, as one of the two Group Three schools showed no significant gains in reading over the course of the study and the gains of the other Group Three school were not robust. The baseline scores of the children in both Group Three schools were simply higher than those at most other schools. If the scores of the two Group Three schools were combined, it is highly unlikely that statistical significance would have been reached, in which case no claims could be made about the success of the professional development programme, since the scores for these two schools were untypically high before the intervention took place.
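(To make the gap between statistical and educational significance concrete: significance tests reward sample size, while effect sizes do not. As a stylised illustration only, assuming a simple paired t-test on gain scores with a purely illustrative sample of n = 100, which may bear no relation to the analysis actually used in the report: the test statistic is approximately t = d × √n, so the smallest standardised gain clearing the two-tailed .01 threshold is about d = 2.63/√100 ≈ 0.26. Reaching p < .01 therefore does not imply an effect size anywhere near Hattie’s 0.4 benchmark.)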

REFERENCES:

Biddulph, Fred, Jeanne Biddulph and Chris Biddulph (2003), The Complexity of Community and Family Influences on Children’s Achievement in New Zealand: Best Evidence Synthesis. Wellington: Ministry of Education.

Child Poverty Action Group (2003), Our Children: The Priority for Policy (Second Edition). Auckland: Child Poverty Action Group.

Cochran-Smith, Marilyn (2002), “What a Difference a Definition Makes: Highly Qualified Teachers, Scientific Research and Teacher Education.” Journal of Teacher Education, 53 (2), May/June.

Harker, Richard K. and Roy Nash (1996), “Academic Outcomes and School Effectiveness: Type ‘A’ and Type ‘B’ Effects.” New Zealand Journal of Educational Studies, 31 (2), 143-170.

McNaughton, Stuart, Gwenneth Phillips and Shelley MacDonald (2000), “Curriculum Channels and Literacy Development over the First Year of Instruction.” New Zealand Journal of Educational Studies, 35 (1), 49-59.

Phillips, Gwenneth, Stuart McNaughton and Shelley MacDonald (2002), Picking up the Pace: Effective Literacy Interventions for Accelerated Progress over the Transition into Decile 1 Schools. Auckland: The Child Literacy Foundation and Woolf Fisher Research Centre.

Timperley, Helen, Gwenneth Phillips and Joy Wiseman (2003), The Sustainability of Professional Development in Literacy, Parts One and Two. Report to the Ministry of Education.

Willms, J.D. (1992), Monitoring School Performance: A Guide for Educators. London: Falmer Press.

Wylie, C. and J. Thompson (1998), Competent Children at Six: Families, Early Education and Schools. Wellington: New Zealand Council for Educational Research.

---------------------------

Members of the Education Policy Group, College of Education, Massey University, responsible for this critique:

James Chapman, Pro Vice-Chancellor, College of Education.

John Clark, Senior Lecturer in Philosophy of Education.

Richard Harker, Director, Institute of Professional Development and Educational Research.

Roy Nash, Associate Professor of Sociology of Education.

Anne-Marie O’Neill, Senior Lecturer, Department of Social and Policy Studies in Education.

Jane Prochnow, Senior Lecturer, Department of Learning and Teaching.

Ivan Snook, Emeritus Professor of Education.

William Tunmer, Professor, Department of Learning and Teaching.


© Scoop Media
