Seeing Complexity in Public Education

Donald Peurach

Print publication date: 2011

Print ISBN-13: 9780199736539

Published to Oxford Scholarship Online: September 2011

DOI: 10.1093/acprof:oso/9780199736539.001.0001

Continuously Improving

Chapter: (p.138) 4 Continuously Improving
Source: Seeing Complexity in Public Education
Author(s): Donald J. Peurach
Publisher: Oxford University Press
DOI: 10.1093/acprof:oso/9780199736539.003.0021

Abstract and Keywords

This chapter examines SFAF’s work improving its programs and its organization, all while supporting a state-sized system of schools operating in turbulent environments. Rather than a conventional research-and-development process, the chapter details how SFAF’s development agenda, organization, and process interacted to produce new knowledge of effective implementation through collaborative organizational learning among SFAF and its schools. Yet, again, the chapter details Success for All’s paradox. On the one hand, this collaborative learning drove a rapid, four-year evolution in Success for All in support of expert implementation in schools. On the other hand, complex interdependencies among schools, the program, SFAF as an organization, and turbulent environments interacted not only to create new challenges within the Success for All network but also to threaten the existence of the network.

Keywords:   continuous improvement, program development, knowledge production, research and development, development process, development organization, organizational learning, program evolution, environmental turbulence

Programs need to constantly be learning from schools themselves and from research, and then incorporating new ideas into new materials. This enables innovative educators to feel as though they are constantly growing and contributing to an endless development process. Continuing development is also necessary to respond to new standards, new assessments, and new objectives adopted by states and districts. In addition, continuing evaluation of program outcomes, if positive, contributes to a sense that a program is progressing and is justifying the efforts necessary to implement it.

Robert Slavin and Nancy Madden, 1996a1

For the newly founded SFAF, success brought problems. Beginning in 1999, concurrent with a rapid doubling of its installed base of schools, SFAF began recognizing widespread patterns of rote, mechanistic implementation and of lower-than-desired student achievement. With that, SFAF found itself staring down a steep and novel challenge: formalizing support for expert use of the program and, then, rapidly rippling program improvements through a 250-member training organization and through more than 1,500 schools.

SFAF needed to do so quickly. Its schools were working within a three-year implementation window, and their success (and Success for All’s reputation for effectiveness) hung in the balance. The urgency was heightened by what SFAF executives forecasted to be another wave of federal policy support for continued scale-up: the pending reauthorization of the federal Elementary and Secondary Education Act as the No Child Left Behind Act. Urgency was matched with uncertainty. There were no “how to” manuals laying out knowledge and methods to guide such work. Indeed, the effort depended on SFAF’s capabilities to rapidly produce and use the knowledge needed to support and sustain the enterprise.

From SY 1999/2000 to SY 2002/2003, SFAF set about the work of continuously improving the effectiveness of the program while maintaining (and seeking to expand) its installed base of schools. The effort was marked by SFAF’s characteristic interdependence: the development agenda, organization, and process evolving and emerging in interaction with SFAF’s ambitions and history, its programs, its schools, and their environments. The effort was marked by SFAF’s characteristic combination of the rational and the organic: developers (like trainers) working within both formal structures and social relationships to leverage research and experience, all towards the goals of improving effectiveness and increasing scale. And the effort was marked by the characteristic emergence of new problems, both within the Success for All enterprise and in its interactions with its environments.

The results were remarkable. By the close of its 1999–2003 development cycle, SFAF’s interdependent development agenda, organization, and process had combined to drive a revolution in the program that appeared both to increase potential for expert use and to position the organization for a new round of explosive growth. Yet at the same time, a new set of interdependent problems combined to drive a most unexpected reversal of fortune. SFAF found itself struggling not only with new challenges in its program, its schools, and its own organization. It also found itself struggling to draw essential resources from its environments, including new schools, consequent revenue, and external funding.

By the close of its 1999–2003 development cycle, rather than having reached a new zenith of effectiveness and scale, SFAF found itself living in a very strange place and on a very fine edge, with the continued viability of the Success for All enterprise hanging in the balance.

The Development Agenda

Central to SFAF’s approach to the work of continuous improvement was managing a dynamic development agenda: an informal “to do” list describing on-going and emerging efforts to improve the program. Beginning in 1999/2000 (immediately upon recognition of problems of process and outcomes) and continuing through 2002/2003 (the first year that NCLB was making its way through the system), SFAF’s explosive growth in schools and in its training organization was matched with an exploding agenda for further developing and improving the program.

SFAF was a rational actor in what appeared to be rationalizing environments. Even so, SFAF’s development agenda did not take shape in a classic, rational way: that is, through a comprehensive analysis of all available information, an exhaustive review of all possible improvement initiatives, and the selection of some optimal set of improvement initiatives that would maximize both effectiveness and opportunity for growth. Rather, SFAF’s development agenda emerged in ways that mirrored the very policymaking processes on which it depended: as a sort of “muddling through” more incremental and evolutionary than classically rational, as SFAF adapted its ideas and plans for improvement in response to its ambitions, its history, and, especially, its environments.2

SFAF’s 1999–2003 development agenda emerged at the intersection of two streams of influences. The first stream ran through the Success for All enterprise, itself: the organization, its program, and its network of schools. Consistent with its long-held commitment to continuous improvement, SFAF began its 1999–2003 development cycle already engaged in planned and previously funded improvement activity spanning virtually every component of the program: the curriculum components (Early Learning, Reading Roots, and Reading Wings); tutoring; family support; and leadership. In response to increasing recognition and understanding of problems of process and outcomes in Success for All schools, this on-going development activity was both redirected and expanded to address those problems. The focus was squarely on improving resources and learning opportunities to support teachers, leaders, and trainers in the expert use of the program (both novice users needing to master expert use and expert users wanting more).

The second stream of influences ran from beyond the Success for All enterprise: from U.S. educational environments, and from still-broader political, economic, and social environments. Three such influences bore heavily on SFAF’s development agenda. The first was plentiful funding to support development activity. In contrast to its struggles securing funding for its training organization, SFAF had a long history of securing funding to support development activity, as funders saw program development (rather than organizational development) as a legitimate use of their resources. As it began its 1999–2003 development cycle, there was ample such funding to be had, with the broader economy booming on the strength of activity in the information technology sector (and, with it, the grants economy that fueled research and development in education). The second was emerging knowledge, technologies, and other resources with potential to support expert use of the program: for example, rapidly evolving research on early reading, new research on instructional leadership and on “data use,” commercial assessments, and new information technologies. The third was continued educational reform activity in state and federal policy environments, including state-level efforts to devise the primary instruments of standards-based reform: curriculum standards, performance standards, and accountability assessments.

These streams of influences were constantly flowing, and they were constantly reshaping themselves—not randomly, but in cycles: for example, the annual school year as a cycle, public and private funding cycles, electoral cycles, and policy cycles. Of particular interest to SFAF was the periodic reauthorization of the federal Elementary and Secondary Education Act (ESEA). As SFAF embarked on its 1999–2003 development cycle, policy environments were buzzing with activity surrounding the reauthorization of ESEA. And with long-term viability as important as near-term effectiveness, SFAF was looking out over the horizon, with a keen eye on what had long been its North Star.

As with earlier federal policy initiatives, activity surrounding the reauthorization of ESEA provided hope for renewed support for comprehensive school reform (in general) and for Success for All (specifically). Regarding general support, the New American Schools (NAS) initiative continued to garner legitimacy and influence, with its proponents advocating for formally incorporating the Obey-Porter CSRD into ESEA. In 2001, Chester E. Finn, president of the Thomas B. Fordham Institute, described NAS as having transitioned from “revolutionary outsider to beltway insider,” and as instrumental in advancing the cause of comprehensive school reform.3 As Finn observed, “With billions of federal dollars subsidizing its advance, the strategy known as whole-school reform is a fixture of the U.S. education landscape. NAS did a great deal to bring that situation about.”4 Regarding specific support, in congressional testimony on the reauthorization of ESEA in 1999, U.S. Secretary of Education Richard Riley cited Success for All as a “proven reform model that struggling schools can adopt—often with the help of federal funds—‘right out of the box.’”5 Indeed, over the period preceding the reauthorization, SFAF was actively seeking to influence federal policy activity, with cofounder and chairman Robert Slavin advocating widely for the use of Title I of ESEA as “an engine of reform” to support comprehensive, research-based, and research-validated programs.6

With SFAF executives, developers, and trainers all sitting at the intersection of these two streams, the resulting development agenda did not emerge as a detailed set of top-down marching orders: a coherent plan for a next, improved, integrated version of the program (as in the style of a software upgrade). Rather, it emerged as a set of shared understandings of SFAF’s development priorities.

With an eye on improving expert use, improving student outcomes, and increasing the scale of operations, these priorities did not focus on improving the fundamental design for school restructuring. Little in SFAF’s research or experience led its members to question the organizational blueprint for comprehensive school reform. Nor did the agenda focus primarily on addressing deep problems in the SFAF training organization: for example, improving strategies for recruiting new trainers, or strengthening the practice-based learning of trainers. At the beginning of the 1999–2003 development cycle, the depth of such problems had yet to be fully recognized and understood within SFAF.

Instead, the development priorities focused primarily on improving two things. The first was Success for All’s supports for implementation: the coordinated materials, tools, and learning opportunities provided to schools. The second was the formal supports for the work and professional development of trainers (rather than the social supports provided through communities of training practice).

These shared priorities directed and coordinated improvement activity across the full complement of program components: instructional organization; curriculum and instruction; supplemental services (i.e., tutoring and family support); and school leadership. Guided by shared priorities, actionable sub-agendas were established at the level of the individual improvement initiative: for example, incorporating commercial assessments to support quarterly analysis and regrouping; completely rewriting all primary curriculum components; developing new supports for school leadership; supporting schools in aligning the program with state standards; incorporating information technologies; and improving the supplemental services components (along with their coordination with both instruction and leadership).

These sub-agendas were managed and coordinated informally and jointly by teams of developers and SFAF executives, their specifics evolving and adapting with streams of influences from within and beyond the SFAF enterprise. In fact, a joke in SFAF was that these sub-agendas could evolve quickly on any given day based on who rode the elevator together that morning and how many floors they rode it. This sort of expansive, dynamic agenda management was enabled by ample funding for development that supported program-spanning development initiatives, that created slack for experimentation and adaptation, and that reduced efficiency as a primary managerial concern.

Thus, with the way that SFAF approached the work of continuous improvement, the development agenda was many things at once. It was both reactive and proactive, addressing past problems and future opportunities. It was both rational and organic, with key goals for development managed socially. It was diffuse, owing to the many problems to be solved in schools, the comprehensiveness of the design and its supports, and ample funding for improvement. It was dynamic, as the agenda ebbed and flowed with changing understandings, ambitions, and influences.

The development agenda was also decidedly evolutionary. The agenda for improving Success for All formed largely within (and at the margins of) established, fundamental parameters of the program: for example, the existing program components; extensive elaboration as a way of recreating usable knowledge in schools at scale; extensive scaffolding via direct instruction, practice-based learning, and in-practice support; and supporting trainers using this same combination of elaboration and scaffolding.7 To be sure, the agenda included important efforts to extend the boundaries of the program: for example, efforts to continue horizontal expansion into other K–6 content areas, and efforts to initiate vertical expansion into pre-K and middle school reading. Even so, the agenda for improving Success for All did much more to chart evolutionary improvement within (and at the margins of) existing parameters than it did to reconstruct those parameters, themselves.

The Development Organization

Within SFAF’s approach to the work of continuous improvement, a dynamic development agenda was matched by a dynamic development organization that shouldered the day-to-day work of improving the program. Just as scaling up its base of schools drove growth in the training organization, scaling up the development agenda drove growth in the development organization. While knowledge and technologies were emerging to support the development agenda, SFAF’s environments were hardly a parts bin from which developers could quickly cobble together program improvements. Rather, the work of continuous, program-wide improvement required that SFAF expand its internal capabilities to create, revise, and adapt its many materials, tools, and scaffolding opportunities.

The expansion of the development organization mirrored the expansion of the development agenda; it also mirrored the expansion of the training organization. The expansion was largely evolutionary in nature, with SFAF building on a well-developed foundation (rather than starting from scratch). Further, the expansion blended the rational and the organic. While the development organization evolved to feature more complex formal structures, its core capabilities continued to rest in informal communities of practice.

Again, the original Success for All project team began as a university-based group with extensive knowledge and experience in program development, training, and research, uniquely positioned at an intersection between academic, educational, and policy networks. Thus positioned, the original project team worked as an informal community of practice, able to quickly identify, incorporate, process, produce, and use knowledge and information bearing on their work.

Over the 1990s, with the success of its comprehensive school reform program and with funding from NAS to expand into additional content areas, the development agenda continued to expand and, with it, the development team. And, as it did, the project team succeeded in identifying and recruiting knowledgeable and experienced developers, as such work was ubiquitous in the reform-rich environments of U.S. public education. While this growth was matched with some formalization of roles and responsibilities, developers of that period reported that they continued to work in informal communities of practice: social networks within which they collaborated to share knowledge and information, to coordinate their activities, and to perform their work.

Between SY 1999/2000 and 2002/2003, as the development agenda and organization continued to grow, those earlier trajectories held. SFAF succeeded in drawing from its environments and its own network to recruit knowledgeable and experienced developers, thus incorporating their expertise into the development organization and (by virtue of their professional associations and memberships) expanding the development network. SFAF also recruited seasoned teachers and leaders from Success for All schools and districts both as full-time members and as adjuncts, bringing with them knowledge and experience in the expert use of the program. And SFAF developed key partnerships with external developers, including commercial assessment providers, external media production firms, and university-based development and research organizations.8

As with the training organization, increasing size and complexity brought with it increased formalization. For example, developers were organized into component-specific and function-specific teams, some pre-existing and others newly constituted: Early Learning, Reading Roots, Reading Wings, tutoring, family support, leadership, Training Institute, alignment, media development, and software development. Roles were differentiated within teams: senior staff charged with overseeing and coordinating projects; mid-level staff charged with a combination of managerial, design, and production responsibilities; base-level staff charged with responsibility for producing program and training materials; and boundary-spanning roles that had some developers continuing to hold formal responsibilities in the training organization. And formal systems were established for generating, communicating, and retaining information useful to developers: for example, reports from schools (e.g., Implementation Visit Reports, quarterly assessment results, and curriculum-based assessment results); state assessment results; and research reports produced both by SFAF’s internal research department and by external researchers.

Despite such formalization, those involved reported that the development organization functioned exactly as it always had: as a collection of communities of practice. All involved reported extensive, informal participation, interaction, and collaboration spanning teams and spanning levels within teams. Freed from rigid structures, developers were able to devise, coordinate, and respond to rapid shifts in the development agenda, constrained loosely by such broad goals as supporting expert use of the program, improving student outcomes, and adapting the program to changing environments. The combination energized developers and fueled their collaboration. In a 2003 interview, one seasoned developer explained that SFAF’s communities of development practice actually mitigated problems that she had experienced in an earlier position at another organization in which development activity was organized more formally:

That’s one of the things that I marvel at. Where I worked before, collaboration was not as easy. When I came in, I was always on the defensive, expecting people to be engaged in turf battles and wanting to stake out ownership of ideas and people. I was kind of stunned at the way people work with each other here. It’s much more collaborative. People are much more inclined to get to agreement and consensus and to work towards that and to have that as a goal, rather than to stake out turf and to be more competitive. Everybody here is on the same team. And another place where I worked, it was like a curriculum team, then a school team, and then an administrative team, and all those teams were sometimes more in competition than alignment.

These informal relationships stretched far beyond the development organization and into the training organization, the Success for All network of schools, and broader environments. As had always been the case in Success for All, some developers continued to train, and some trainers continued to develop (including area and regional managers within the training organization). Further, developers were in constant contact with trainers and with schools, both formally (e.g., via program pilots in schools, and as “downstream consumers” of Implementation Visit Reports) and informally (e.g., via interactions at conferences, via phone, and via email). Finally, many were active participants in (and collaborators with) professional organizations and associations in their individual areas of expertise.

As a consequence, SFAF developers lived under a waterfall of constantly flowing knowledge and information about existing problems, about potential solutions, about emerging research and technologies, and about environmental contexts. Some of this knowledge and information was formalized into white papers and research proposals and circulated among developers. However, the vast majority of knowledge and information was informal, anecdotal, and unrecorded. It was retained and indexed in the minds of individuals and in their collaborative ways of working. It was shared and transferred via personal interactions, thus highly dependent on a collective sense of “who knows what,” and on relationships between those having and those needing knowledge and information. And its meaning was constantly negotiated and renegotiated by all involved.

All of these informal interdependencies made for formidable coordination challenges. Yet within the development organization, work was not coordinated and managed via formal methods of project management, nor even by tight, formal control of the agenda. Rather, true to its historical roots, and consistent with its means of managing its dynamic development agenda, coordination among developers was largely informal. One informal, coordinative mechanism was as described immediately above: dense, informal interactions among developers, complemented by norms of agreement and consensus. A second was the active participation of SFAF executives in the day-to-day work of development and training, which allowed them to establish and coordinate agendas, to allocate resources to development initiatives, and to facilitate coordination by brokering relationships. A third was the Success for All program itself. The established program functioned as a template around which developers described themselves as “tinkering” and “fiddling.”

Much as with schools, the preceding placed a premium on newly recruited developers buying into the program and understanding that they were to work within its constraints. In a 2003 interview, one developer explained that SFAF cofounder and President Nancy Madden took care to ensure that newly recruited developers shared that understanding:

Nancy [Madden] has always been careful that whoever comes in, particularly in reading, that they recognize the Success for All stance, and that they recognize that Success for All way of doing things. In general, in a working sense, it’s to let people know not to expect to come in and creatively go off on your own, in your own direction, and have complete leeway to do that. And also to recognize that there is significant detail. There is a process already established, and there will always be a significant level of detail. There will always be certain elements that are represented in this program, such as direct instruction, such as phonics, such as directing teachers what to do. There are certain elements of instruction. There’s a process of instruction. There is an approach to instruction. All of those things, a template for that, literally and figuratively, has been established. And so it’s to let people know that you can’t come in and just ignore that template and go off and develop what you want to develop, if it differs significantly from that template.

As these communities of development practice emerged and evolved, their work continued to be motivated and supported by key, enabling conditions: for example, physical proximity (most developers worked at SFAF’s Towson headquarters, and those who didn’t work at the headquarters traveled there frequently); slack time and resources (owing to ample funding for program development); the shared language of the Success for All program; long-established (and deeply-valued) personal relationships; functional interdependence among the program components on which all were working; cross-team responsibilities within the development organization (both formal and informal); and members who continued to have both development and training responsibilities. Most important were norms, values, and practices linked by developers both to SFAF’s founders and to its roots as an organization: deep (almost religious) commitment to the mission of improving the academic achievement of historically disadvantaged children; to-the-bone pragmatism; vigilant attention to the research literature; constant experimentation; and constant evaluation of effectiveness. Chief among these was great tolerance for informality over formality. As explained by Barbara Haxby, SFAF’s director of implementation and a member of the original project team:

It has to do with kind of the history of the organization. It’s always been such an academic group that sort of morphed into an organization and, as such, I think real, definite hierarchical labeling of people has never been what we’ve been very comfortable with. So consequently, we all have very strange titles that reflect only part of what we think the person does.

Thus, even with SFAF and its broader environments advocating for rationality in educational reform, the SFAF development organization (like the development agenda) continued to evolve organically. In some ways, it evolved to capitalize on key resources readily available in U.S. educational environments: specifically, a large pool of experienced and capable developers, and ample funding for program development. In other ways, it evolved to compensate for weaknesses of U.S. educational environments: specifically, a lack of readily available and readily usable knowledge and technologies with which to quickly improve the program. Indeed, as much as a knowledge-consuming organization, the development organization evolved as a knowledge-producing organization, staffed and informally structured to quickly fashion a waterfall of constantly flowing knowledge, experience, and information into new understandings, tools, and methods for improving implementation, outcomes, and scale.

The Development Process

The third, interdependent dimension of SFAF’s approach to continuous improvement was the development process, itself: the set of activities by which members of the development organization moved ideas from the development agenda (SFAF’s figurative “to do” list) and into its designs, its supports for schools, and its supports for trainers. The development process included a critical point of coordination: a point at which the development organization released program improvements to the training organization, which was then responsible for supporting their effective use in the installed base of schools. School-level use, in turn, fed back into the development agenda, organization, and process. With that, SFAF’s development process drew the work of designing, supporting, and scaling up into a process of on-going reflection, reconsideration, and adaptation of the program, its ends, and its means. In doing so, the development process leveraged the full range of resources spanning SFAF’s networks, its programs, its environments, and its own organization.

As enacted during its 1999–2003 development cycle, SFAF’s development process was much less a classic “research-and-development” process and much more a pragmatic, collaborative method for producing and codifying practical knowledge in usable, replicable form. As developers worked to improve Success for All, they did not organize and coordinate their work as a progression through an established sequence of steps: problems/needs definition; basic and applied research; development; commercialization; diffusion; and, ultimately, adoption and use.9 The work was not compartmentalized and arranged nearly so neatly.

Rather, developers engaged in a collection of simultaneous, interdependent, and loosely coordinated activities distributed across SFAF’s many development teams: using the program in schools; acquiring, pooling, and interpreting knowledge and information; formalizing and piloting promising improvements; disseminating improvements through the installed base of schools; and using research to evaluate overall program effectiveness. All of these tasks were performed continuously, in interaction, over the entire 1999–2003 development cycle.

Working in this complex, interdependent way reversed the usual order of things. This wasn’t so much “research-and-development” as it was “development-and-research,” as new, practical knowledge emerged and evolved from the continuous, collaborative improvement efforts of SFAF developers, trainers, leaders, and teachers.10 Understandings of problems, needs, and opportunities emerged from the work of large-scale use of the program (not in advance of it), and functioned to inform SFAF’s dynamic agenda-management process. The majority of new knowledge about how to improve the program emerged through the work of formalizing, piloting, and disseminating program improvements, largely via processes of collaborative, experiential learning (and not in advance of such tasks, via basic and applied research). Trainers, school leaders, and teachers were active participants in the development process, and not the downstream beneficiaries of it.

By way of organizational epistemology, the development organization was not dogmatic in its approach to the work of development but, instead, pragmatic. While developers were vigilant in leveraging research when available, they were equally vigilant in leveraging widely distributed, experiential learning. In a 2003 interview, in a moment of candor that belied SFAF’s public face of rationality, cofounder and President Robert Slavin acknowledged just that:

We don’t know what we’re doing. We’re making this up. We don’t have a final answer. We have to be learning from the experience. You’d be crazy to be working in all these schools all over the place and not learning from them to gradually move toward something better.

With that, SFAF’s development process mirrored key characteristics of its development agenda and organization. The development process was adapted to SFAF’s environments. In environments pressing for effectiveness but thin on knowledge, SFAF used the development process to produce its own. The development process was more organic and social than it was classically rational, (p.148) as developers collaborated to explore new paths, reconciled their experiences, and agreed to exploit those that seemed most promising. And the development process was decidedly evolutionary, with developers, trainers, leaders, and teachers leveraging their experiences to make incremental improvements within (and at) the established parameters of the program.11 Active use in schools functioned as a source of variation in understandings and practices. Acquiring, pooling, and analyzing functioned as mechanisms for selecting favorable understandings and practices (and for culling unfavorable ones). Formalizing and piloting functioned as mechanisms for codifying and retaining favorable adaptations. And disseminating functioned as a mechanism for replicating favorable adaptations through the installed base of schools.

Active Use

Integral to SFAF’s development process was on-going use of the Success for All program by students, teachers, school leaders, and trainers: an ever-churning source of knowledge and information from 1,500+ schools spanning 48 states. Large-scale use functioned as a source of knowledge and information about problems: about the specific difficulties experienced by students, teachers, leaders, and trainers as they sought to enact the program as intended, at both the novice and expert levels. Large-scale use functioned as a source of potential solutions. Indeed, as designed, expert use of Success for All was, itself, a process of systematic, distributed experiential learning involving analysis, experimentation, and reflection. And large-scale use functioned as a source of information about dynamic, evolving environments: alone (for example, specific district and state environments); as they interacted with each other (for example, as districts interpreted and acted upon state and federal policies); and as they bore on the design and use of Success for All in schools.

Acquiring, Pooling, and Analyzing

Concurrent with large-scale use, SFAF developers constantly acquired, pooled, and analyzed new information and knowledge: as generated and reported formally within SFAF; as generated via participation in personal and professional networks; as generated via interaction with Success for All teachers and leaders; and, especially, as generated through interaction with SFAF trainers. It was through these processes of acquiring, pooling, and analyzing that tacit understandings were made explicit and explicit understandings were shared.

Especially important was the constant flow of knowledge and information from SFAF trainers. The core work of trainers included collaborating with teachers, leaders, and other trainers to analyze information about implementation and outcomes on a classroom-by-classroom, school-by-school, region-by-region, (p.149) and area-by-area basis. Some of the resulting knowledge and information was recorded and reported formally: for example, in Implementation Visit Reports. Much was retained and transferred informally, through personal interactions. Indeed, trainers functioned as a sort of social conduit of knowledge and information in the Success for All network, connecting schools to each other, to the development organization, and to SFAF executives. In a 2003 interview, Robert Slavin described the contributions of trainers as a sort of informal, widely distributed qualitative research that complemented the formal, disciplined research of the SFAF research department:

Since we’re not doing a lot of our own qualitative research, we get our wisdom of practice from the trainers, themselves, who are doing their own little qualitative research, in a sense. They’re not qualitative researchers. But, in a way, they’re doing a form of qualitative research all the time to characterize what’s happening. And we pay close attention to our trainers, particularly with the new programs, where we’re thinking about new initiatives, to understand from them what they’re learning out there. We very much feel the terror of going around changing stuff all over America. We can’t see it. We don’t know what’s happening, ourselves, on a day-to-day basis in all these many, many schools and all these different places. But, on the other hand, we have eyes and ears in these places who are very capable and intelligent people. And so rather than imposing an idea that there are researchers, and then there are trainers, and they’re not the same people, we take our own trainers very seriously and want to hear their experience and be able to incorporate their experience into some sort of a progressive change process. And, to me, it’s hard to imagine that you’d do things any other way.

Formalizing and Piloting

New knowledge and information fed what most closely approached a moment of invention within the development process: that point at which developers devised and tested new print resources, classroom materials, videos, software, training materials, and other tools. SFAF developers described this as a process of “formalizing” best practice, “writing” materials, and “embedding” new knowledge and ideas in tools and materials. In some cases, this had SFAF developers readily incorporating new knowledge and technologies into the program: for example, the findings and language of the National Reading Panel; commercial assessments and reporting systems; and increasingly affordable, stable, and ubiquitous information technologies.12 In most cases, this was the point at which informal knowledge and information were formally codified and retained in usable form in Success for All’s standard array of supports: in manuals, materials, and other tools for use in schools, and in scripts, booklets, and other materials used by trainers in formal scaffolding opportunities.

(p.150) Concurrent with formalizing program improvements, developers also piloted them in schools. In some cases (e.g., when required as a condition for external funding), piloting included formal, summative evaluation of the effectiveness of a given program improvement. However, in the main, piloting was largely an informal, formative process by which lead developers and trainers tested individual program improvements for “proof-of-concept” in actual schools prior to large-scale replication (and, if necessary, made any required revisions). In a 2003 interview, one experienced SFAF developer with responsibility for preparing supports for trainers summarized the process:

What happens is, we get comments and feedback from both schools and trainers, or more research shows that another approach is more effective. It builds up until, one day we determine that it’s time to rewrite a piece. A development team is identified. Who takes part depends on the component, but Nancy (Madden) oversees all of the teams. She meets with the writing team to discuss what needs to be written. The writers begin to write and then they meet with her again and review what’s been written. Often they’ll ask other people to review the materials and provide comments, ask questions and the like. When our department gets involved, we share our opinions as well. That’s particularly helpful because we’ve all worked in Success for All schools in the field and know what is likely to work, and what’s not. And it just sort of morphs into this on-going process. Each item is revised and revised and revised based on internal feedback and feedback from talented trainers and even feedback from teachers.

Disseminating and Using

Every year, the combination of publication deadlines, SFAF’s annual scaffolding cycle, and the new school year drove divergent, exploratory development activity to a point of convergence. At this time, program improvement efforts were drawn to a temporary close and disseminated through the training organization and the installed base of schools.13 Dissemination was not structured so that program improvements were queued up, coordinated, integrated, and released as new, “shrink-wrapped” versions of Success for All, in the style of software releases: for example, “Success for All v. 3.0.” Rather, program improvements were released more in the style of software bug fixes: immediately and independently, on a fix-by-fix basis. This improvement-by-improvement dissemination strategy was responsive to multiple concerns: the impossibility of expecting all schools to re-purchase a completely revised program every year; pressure shared by SFAF and its schools to quickly improve implementation and student achievement; and SFAF’s commitment to helping as many students as possible, as quickly as possible. These concerns combined to create great urgency in SFAF’s dissemination process. Explained one SFAF developer: “There’s always sort of an eagerness here, (p.151) that if we think it’s going to work, get it out there and let them start using it. There’s always an eagerness. If we think it’s going to work, get it out there!”

SFAF had two primary means of disseminating program improvements. One was through the sale and distribution of new and revised materials: a sort of immediate, large-scale broadcasting of program improvements. Another was through Success for All’s conventional scaffolding opportunities, especially annual Experienced Sites Conferences (for schools) and annual Experienced Trainers Institutes (for trainers). These scaffolding opportunities functioned as venues for sharing additional information and knowledge about program improvements beyond that formally codified in program resources, both informally (e.g., verbally, through discussion) and formally (e.g., via booklets and copies of overheads used in scaffolding sessions). They also created opportunities for trainers, leaders, and (sometimes) teachers to practice using program improvements, though within the constraints of 1.5-hour to 3-hour off-site training sessions, thus usually with the goal of exposure to (rather than mastery of) those improvements.

Dissemination, in turn, triggered active use in schools—first at the novice/consistent level, and then at the expert/adaptive level. This was active use at scale: not via collaboration among lead developers, lead trainers, and able schools in the controlled context of a pilot; but via collaboration among modal trainers, leaders, and teachers in uncontrolled contexts. Indeed, it was at this point that environments were reintroduced into the development process. Variability in specific schools and their local environments, in turn, triggered another layer of widely distributed, experiential learning, as trainers, leaders, and teachers began to experience (and learn to manage) the interaction between program improvements, school contexts, and environmental challenges. In a 2004 interview, Barbara Haxby, SFAF’s director of implementation, explained:

So you get this very clean result in pilot. And somehow you assume that, if that’s effective, you can put it at scale. But, in the dynamics of going to scale, there are a whole lot of other factors that all of a sudden have equal weight compared to the effectiveness of curriculum. Superintendent turnover, district support, power of principals in school leadership, teacher turnover, basic school climate issues, environmental instabilities, longitudinal questions of what happens over time. You’ve got teachers turning over every single year. You’ve got superintendents coming in brand new and wanting to bring their own stuff with them. You’ve got the personal dynamics that happen in some schools, people who don’t like each other. You get into all that messy stuff. And when you’re at scale, you have to take care of those kinds of things, too, if you’re going to create change. A wonderful curriculum, in and of itself, or any other structure, whether it is effective or not often depends on all these other things. And I think one of the lessons of learning at scale is to say, “We’ve got to learn some things about how to create conditions where even something that’s got a great research base can be nurtured and grown.” These are things you can learn about. I’m not sure that’s just (p.152) chaos and unknowable. You can learn about them. Because when you’re at scale, that’s the kind of stuff that you worry about each and every day, much more so than the mechanics of the program.

The Role of Research

Research played several important roles in the SFAF development process. In some cases, external research functioned as an input to the work of developers, as they recognized existing or new findings and incorporated them into their improvement efforts. In other cases, developers did a sort of “bench check” in which they squared their ideas for program improvements with the research literature, both to get a sense of the likely effectiveness of the program improvements and to legitimate the program improvements as having a basis in research.

The most systematic role of research was in the work of SFAF’s internal research department. The work of the research department did not include fine-grained, qualitative analyses of program implementation. Again, informal, qualitative research was the province of trainers. Rather, the research took four primary forms: funded studies of program effectiveness for the program as a whole (most often using a matched comparison design); formal evaluation of component-specific program improvements as required by funders; periodic meta-analyses and syntheses of both internal and external research on Success for All; and annual production of state-by-state “Reports of Program Effectiveness” (ROPEs) using student outcomes on state accountability assessments. Indeed, if the mantra of SFAF’s development organization was “Get it out there!” then the mantra of SFAF’s research department was “Does it work?”

Again, the work by the research department did not directly inform the work of the development organization. Developers did much more to leverage their own experiential learning than they did findings from internal research efforts. Rather, findings from internal research efforts were used most prominently by SFAF executives to monitor and manage development and training operations, to rally continued commitment and enthusiasm among Success for All schools, and to demonstrate publicly the effectiveness of the program.

Vitality and Complexity

Thus, rather than a tidy sequence of compartmentalized tasks, the SFAF development process consisted of a set of continuous, interdependent learning activities. While the process drew on research when possible, it drew most heavily on the experience of using the Success for All program in schools. These activities were supported by unusual capabilities for learning in the SFAF development organization: seasoned developers organized into communities of practice; freed to experiment by slack time and resources; motivated by a shared mission; informally managed and coordinated; and driven to consensus. They were supported by (p.153) unusual capabilities in the SFAF training organization to provide a steady flow of knowledge and information about interactions between the program, schools, and their environments. And they were supported by an unusual role for chronically underperforming schools: not as the downstream recipients of packaged solutions but, instead, as active participants in a novel process for producing, validating, and formalizing practical knowledge in usable, replicable form.

Those involved described the SFAF development process as making for a dynamic and invigorating (if sometimes chaotic) work environment. The vitality derived, in part, from the sheer ambition of the 1999–2003 development cycle. While SFAF had long engaged in the work of continuous improvement, never had development efforts been so expansive, nor had the training organization and the installed base of schools with which developers were to coordinate been so large. The scope and scale were new for nearly all involved. The vitality derived, in part, from the close collaboration among developers, the sense of urgency and mission behind the development efforts, and the constant learning of all involved.

And the vitality derived, in part, from close interaction with SFAF cofounders and chief executives Nancy Madden and Robert Slavin. Though by all accounts a large organization, SFAF was still described by its members as a Mom and Pop shop. All involved described drawing enthusiasm and energy from their steadfast commitment to the mission of the organization. Explained one SFAF developer, in a 2003 interview:

It is really energizing to work here. It is for me, doing what I do. It’s energizing, because there is such a clear vision. There’s such a clear goal. And I can tolerate the messiness of the organization sometimes, because the goal is there. But I think Nancy (Madden)’s vision and the style of managing makes everything you do seem really important. Everything seems really important, because she takes it that way. I practically never found a level of detail that Nancy, if I took it to her and wanted her feedback, would say, “Oh, just figure it out.” Even though I could, she’s keenly interested in thinking it over. And she often takes things to a level of detail that I never would. Some of that is good. Some of it, I don’t agree with. But I do think that, by her style of management, she does energize. She and Bob (Slavin) both probably energize people and galvanize them around this mission. I feel that what I do is used better, more effectively, more directly, and so I can see the consequences of my work in positive ways, which is very energizing for me.

If the development environment was energizing for developers, it was also tough on the SFAF executives responsible for managing the enterprise. Much as research-proven effectiveness was a hallmark of Success for All, so, too, was the identity of Success for All as an integrated suite of program components and, thus, an alternative to conventional, targeted approaches to school reform. Yet norms of continuous, distributed experimentation and improvement made for steep challenges of coordination and integration. The imperative for SFAF executives, thus, was to ensure that the many products of the development process were (p.154) stitched together into a single, coherent, comprehensive program. In a 2003 interview, SFAF President Nancy Madden explained that coordination and integration were among the steepest challenges that came with SFAF’s approach to the work of continuous improvement:

We’re not standing still. We want to keep getting better. You have to keep moving people along. As the thinking develops, and as the problem-solving strategies get implemented, you have to implement them with a lot of people. All those people have to be on the same page, and they are all contributing to the page at the same time. And possibly the most unmanageable part of all of this is that we encourage that. We want people to be taking what they see, finding the nuggets—again, not just changing to do something different, but finding the pieces that will really make a real improvement. And when we see those things, then we say, “Well, you know what, we can do that one, and it’s going to have an impact.” And so, then, we’ve got to make sure that that then gets played out in all of the different ways it’s got to be played out to be an integrated part of the whole.

Revolution via Evolution

As enacted during its 1999–2003 development cycle, the work of continuous improvement in SFAF rested on interdependent, simultaneous, distributed, and largely evolutionary activity: managing the development agenda; growing, structuring, and managing the development organization; and enacting the development process. Over this four-year period, this combination of evolutionary activity drove a revolution in the Success for All program. The blueprint for school restructuring remained largely intact, its core components and the relations among them readily identifiable. Yet working to the rhythm of the annual school year, a rapid succession of incremental improvements transformed a program highly adapted to support large-scale, novice use into one packed with new potential to support large-scale, expert use. At the same time, these incremental improvements were designed to position the program to exploit what SFAF executives anticipated to be opportunities for continued scale up under the No Child Left Behind Act of 2001.

The 1999–2003 development cycle began almost jubilantly, and it proceeded with even higher expectations. Already supporting an installed base of 1,500+ schools, and with additional funding from the Obey-Porter Comprehensive School Reform Demonstration Act and the Reading Excellence Act slated to enter the system through 2002, members of SFAF saw growth to over 2,000 schools as just over the horizon, and practically inevitable. With new support from No Child Left Behind of 2001, SFAF executives forecasted the possibility of growing to over 3,000 schools. In that SFAF’s financial strategy depended heavily on revenue from new business, new potential for large-scale growth brought with it new potential for the long-term viability of the Success for All enterprise.

(p.155) 1999/2000: Fundamental Framing

SFAF’s 1999–2003 development cycle began with activity at the extremes of the program: leadership, historically the component that provided the weakest support for expert use; and tutoring, historically the component that provided the most support for expert use. These improvements marked a sea change in the Success for All enterprise, in that they gave common language to ambitions for expert use that had long been on the minds of key members of SFAF but that all had struggled to communicate.

In 1999/2000, SFAF constituted a new leadership development team charged with improving the linchpin (but historically weak) leadership component. Balancing program goals for grade-level reading performance with state accountability requirements, the effort focused on formalizing a process for working backwards from state assessment results, quarterly assessment results, and other information resources to develop grade-by-grade, classroom-by-classroom, student-by-student goals and plans for improvement. This effort yielded a set of key program improvements: for example, data analysis routines; the incorporation of commercial formative assessments for quarterly regrouping and analysis; and a new leadership manual formalizing key practices of expert Success for All leaders.14 All were drawn together into a new scaffolding initiative called “Leadership Academy,” which was developed and piloted in New York City’s Chancellor’s District. Working on a monthly basis with leadership teams from geographically concentrated schools, Leadership Academy was designed as an opportunity for developers to support the use of new resources, to facilitate the construction of local support networks, and to collaborate with principals and facilitators on school-specific analysis and problem solving.

Arguably the most important contributions of the leadership development team were three frameworks that, for the first time, formally codified the developmental sequence from novice to expert use, thus giving language to Success for All’s aspirations for professional practice and learning.15 “The Change Process” described a general sequence of steps through which all schools could expect to progress in transitioning from novice to expert use. Drawn from the Concerns-Based Adoption Model, “Stages of Concern” provided language that framed the individual-level experience of progressing from novice to expert. Also drawn from the Concerns-Based Adoption Model, “Levels of Use” detailed an eight-stage progression from novice, individual use to expert, collective use of Success for All, with the leap from “routine” to “refined” use marking the transition from novice to expert (see Table 4.1, below). Indeed, the very notion of “Levels of Use” effected a subtle—but critical—shift in agency. As framed formally for schools, Success for All went from being a program that teachers and leaders were expected to implement to being a tool for them to use in solving problems of student achievement.

Concurrently, the curriculum development team revised the tutoring component to improve support for expert use, with the primary goal of strengthening the relationship between assessment, goal setting, and intervention. As with the leadership component, a central contribution was new language framing expert (p.156)

Table 4.1 Success for All: Levels of Use Framework*

Levels of Use (LoU) of the Success for All Program

The Levels of Use component of the Concerns-Based Adoption Model (CBAM) identifies eight distinct levels of the change process. School leaders can use these levels to determine the extent to which teachers and schools are implementing the SFA program. Educators who can accurately assess where individuals or schools are in relation to these levels can provide the support necessary to encourage schools to progress to the next level. The chart below outlines how the Levels of Use might be described in a Success for All school.

Level

Description

Examples

Level 0: Non-use

Schools have little or no knowledge of SFA, no involvement with it, and are doing nothing toward becoming involved.

  • Schools and teachers who have not been exposed to SFA.

Level I: Orientation

Individuals or schools have acquired or are acquiring information about SFA and/or have explored its value and requirements.

  • Awareness sessions have been conducted with the school staff.

  • Visitations to SFA schools have occurred.

  • Individuals and schools have examined the research related to SFA.

Level II: Preparation

Schools are preparing for their first use of SFA. All requirements for implementation have been met, and a specific date to begin has been determined.

  • An 80% vote has been secured.

  • Principal and facilitator have attended the New Leaders Conference.

  • Teachers at the school have been trained.

  • Materials have been organized and classrooms prepared.

Level III: Mechanical Use

Teachers are implementing SFA for the first time. Focus is on mastery of the instructional tasks of the program. These attempts often result in disjointed, awkward, and superficial instruction. This level coincides with the storming stage of the Tuckman change model. Teachers and schools often experience discomfort during this stage due to the stress of trying to master new materials. A high level of support for teachers is vital at this stage.

  • Teachers experience difficulty with teaching all components within the 90-minute reading period (pacing).

  • Teachers often refer to the teaching manuals during lessons.

  • Transitions between activities are slow.

Level IVa: Routine

Teachers’ capacity to teach SFA has stabilized. Focus remains on the teaching process rather than the consequences of the program on student achievement. Teachers and schools often feel a certain amount of relief at this level; the discomfort of the mechanical level of implementation has passed. School leaders need to make sure that a school does not stabilize at the routine level. Routine levels of instruction may feel more comfortable but do not guarantee student achievement. It is not until teachers begin to “own,” use, and adapt the instructional process to thoughtfully advance student achievement that real, substantive, and long-lasting academic gains are realized. Schools can get “stuck” at this phase and fail to reach the higher levels of use that are synonymous with high achievement and success for all.

  • Teachers can complete all lesson components within the allotted time.

  • Routines have been established that reduce the amount of time teachers spend on lesson preparation.

Level IVb: Refined

Teachers focus on the connection between instruction (process) and student achievement (results). Teachers are able to adjust instruction to meet the needs of individual students. This level of use is necessary to attain powerful gains for students. In schools with high teacher turnover, all teachers may not reach refinement at the same time. It is the responsibility of school leaders to assess each teacher’s progress toward this goal and to provide the supports needed for each teacher to attain refinement.

  • Teachers make professional decisions within the SFA framework and research base.

  • Teachers use student achievement data to determine effectiveness of instruction.

  • Teachers understand the rationale behind various program components and are able to emphasize different instructional strategies based on individual student needs.

  • Teachers accelerate instruction when appropriate.

Level V: Integration

The level at which teachers skilled in teaching SFA are combining their own efforts with the efforts of other skilled teachers to achieve a collective impact on student achievement. This is also the stage at which a whole-school reform effort finally connects all the elements so that a school can attain the full synergy possible in comprehensive reform. Now, not only is every component at a level of refinement, but all the components function seamlessly together to promote grade-level performance for every student.

  • Teachers skilled in the use of SFA consult with one another to share effective instructional strategies.

  • Schools encourage collaboration among skilled SFA teachers by creating structure to promote team learning.

  • Tutors communicate regularly with teachers to develop seamless connections between tutoring and classroom instruction.

  • Family Support personnel collaborate with teachers to develop both preventive and early intervention plans that are targeted to student achievement.

  • School and community resources are fully aligned with the school’s SFA goals.

Level VI: Renewal

The level at which schools seek major ways to improve the implementation of SFA across all parts of the school community, with an emphasis on increasing the reading achievement of all students. This is the stage where change becomes self-sustaining. Structures have been put into place so that the “program” is now how the school does business, and the business is to promote high growth for students through the thoughtful engagement of all school personnel.

  • Staff and community examine student achievement data on a continuous basis and engage in problem solving and decision-making processes aimed at improving implementation.

  • A culture of mutual accountability exists among school staff and community members.

*From Success for All Leadership Guide, (pp. 1.6–1.8), by the Success for All Foundation (2002a). Reprinted with permission.

(p.157) (p.158) (p.159) tutoring practices. Specifically, the language focused on describing options available to teachers for in-the-moment diagnosis-and-response. “Teaching and modeling” described supplemental direct instruction in a particular skill or strategy in response to evidence of non-understanding and misuse by students. “Prompting and reinforcing” described strategic questioning, prompting, and encouragement to probe and to improve students’ use of skills and strategies. Much as “routine use” and “refined use” soon became conventional language characterizing the leap from novice to expert use, “teaching and modeling” and “prompting and reinforcing” soon became conventional language characterizing the leap from simply implementing the Success for All cycle of diagnostic instruction to actually using the cycle of diagnostic instruction adaptively to identify and to address students’ immediate needs.

There was nothing particularly profound about this new language and these new frameworks. The framework and language incorporated into the leadership component had been in circulation for years. Even so, with the Success for All enterprise super-saturated in adverse interpretations and misuse, this new language and these frameworks acted as seeds that both crystallized understandings of those problems and provided a means of clearly communicating long-held intentions for expert use.

2000/2001: Rapid Diffusion

Momentum gathered over SY 2000/2001. In March of 2001, Slavin and Madden published a second edition of their earlier trade book on Success for All, Every Child, Every School, though with its title updated to reflect the scale of the program: One Million Children: Success for All.16 The book featured an updated review of research on Success for All that echoed earlier reviews, with a reported average effect size of 0.50 in grade levels 1–5, and with reported effect sizes for the lowest 25% of students ranging from 1.03 (first grade) to 1.68 (fourth grade).17

Throughout the year, earlier development initiatives also continued. Leadership Academy was expanded to six sites, and the revised tutoring component was released to all schools. Concurrently, key ideas and language from these initiatives began to diffuse throughout the Success for All enterprise, as developers and trainers began working both to manage interpretation of the program as bureaucratic and/or technocratic and to improve support for expert use across the conventional array of scaffolding opportunities.

To improve direct instruction for newly-recruited schools, initial training for teachers and leaders was adapted to incorporate new frameworks and language developed within the leadership initiative. The intent was to immediately dispel the notion of Success for All as either “bureaucratic” or “technocratic” and to locate power and responsibility squarely with leaders and teachers. With the “Levels of Use” framework, teachers and leaders were immediately presented with clear descriptions of the ultimate goals for individual and collective practice: refinement, integration, and renewal. They were also immediately presented with (p.160) the central challenge of Success for All: first mastering mechanical and routine use and, then, bridging the gap from “routine” to “refined” use by shifting their focus from their own performance to students’ performance. With complementary discussion of “The Change Process” and “Stages of Concern,” issues of stress, affect, motivation, and commitment were presented immediately, framed consistently, and given language to be shared by teachers, leaders, and trainers.

To improve direct instruction for experienced schools, developers created 19 new sessions for SFAF’s annual Experienced Sites Conferences focused on both novice and expert use of all program components. These included new sessions in leadership and tutoring that drew from on-going improvement efforts. Importantly, these also included the introduction of two sessions focused on laying a foundation for expert use in regular classroom instruction: “Observing, Informing, and Performing in Reading Roots” and “Observing, Informing, and Performing in Reading Wings.” Both sessions drew from the improvements in the leadership component to frame expectations for teachers’ performance as a progression from “routine” to “refined” use of the program. Both sessions provided routines for monitoring students’ use of reading skills and strategies, framed using Yetta Goodman’s notions of “kidwatching.”18 And both sessions drew from the tutoring component to frame subsequent intervention in terms of “teaching and modeling” and “prompting and reinforcing” the use of skills and strategies.

Improved support via direct instruction was matched with improved support for practice-based learning and in-practice follow-up. For example, a team of SFAF developers produced state-by-state resources documenting the alignment between the Success for All curriculum and individual state standards, along with routines, forms, and guidance for mapping backwards from state assessment results to plans for classroom intervention and professional development. These resources were intended to be used collaboratively by teachers and leaders in component team meetings, as well as by leaders and trainers in implementation visits. At the same time, another team of developers revised the standard Implementation Visit Report. Developers eliminated the requirement that trainers conduct a comprehensive, “check-check-check” survey of implementation, and they increased guidance for observing the substance of instructional interactions among students and teachers (again, with an eye on the progression from “routine” to “refined” use of the program).

2001/2002: Tightening Linkages and Anticipating Environments

By 2001/2002, what had started two years earlier as a trickle of improvements in two program components became a torrent spanning the entire program. Early efforts continued: the release of Leadership Academy to all schools, supported by its own cohort of specially-selected trainers and by a new, web-based information system; the doubling of new and revised Experienced Sites Conference sessions (from 19 to 38), the majority of which addressed issues of expert use; (p.161) and continued revisions to the implementation visit process to focus on the progression from “routine” to “refined” use. Complementing the preceding were three key initiatives focused keenly on both technical effectiveness and increasing the scale of operations: an initiative to more tightly link trainers and leaders; an initiative to more tightly link leaders and teachers; and an initiative to more tightly link the entire enterprise with anticipated changes in policy environments.

To more tightly link the work of SFAF trainers and school leaders, developers devised the Goal Focused Improvement Process. Initially piloted in 2001/2002 in one of SFAF’s four geographic training areas, the Goal Focused Improvement Process sought to integrate SFAF’s rapidly-expanding routines, information resources, and guidance into a conventional, continuous improvement process for leaders and trainers. The process included: identifying school-level goals; analyzing “internal” and “external” data; prioritizing areas of concern and identifying the root causes of problems; and devising and monitoring interventions. Embedded within the Goal Focused Improvement Process were additional routines and guidance for mapping backwards from state assessment results both to classroom-specific interventions and to priorities for teachers’ professional development, thus providing the additional advantage of linking the work of trainers and leaders more tightly with environmental expectations for performance.

To more tightly link the work of school leaders and teachers, developers devised the “Building Better Readers” series: a school-based curriculum for teachers’ professional development designed to support both existing curriculum components and curriculum revisions anticipated for SY 2002/2003. Building Better Readers focused on developing foundational knowledge of four core reading comprehension strategies: clarification, summarization, questioning, and prediction. It also included tools and methods for assessing students’ use of these strategies, for collaboratively analyzing formal and informal assessments, and for intervening in response to misuse or weak use. Building Better Readers included primary source materials, materials for direct instruction, self-study guides, guidance for structuring collaborative component team meetings, DVD and other media resources, and assessments.

To more tightly link the entire enterprise to its environments, SFAF began adapting its development agenda and process to respond to what many at the time were calling the most aggressive federal educational reform policy to date: the reauthorization of the Elementary and Secondary Education Act as the No Child Left Behind Act of 2001 (NCLB).

Since the founding of Success for All, SFAF executives had been seeking to influence policy, research, and other environments to secure support, with a particular focus on advancing the twin causes of research and rationality in educational reform.19 Things appeared to break SFAF’s way, with NCLB both reinforcing the agenda for rational, systemic reform and changing the rules. Rather than a radical, monolithic policy, NCLB was its own bundle of evolutionary, incremental policy and program changes that promised to interact and invigorate reform activity throughout the system—not just in chronically underperforming schools, but in all schools, districts, and states. These changes ran the gamut: for example, (p.162) federal goals for 100% of students performing on standard by SY 2013/2014; revised provisions for holding schools, districts, and states accountable for adequate yearly progress towards those goals; new provisions for disaggregating student performance by historically underperforming subgroups; new resources to enlist districts and states in providing technical assistance to struggling schools; a reduction in the poverty threshold allowing schools to pursue schoolwide improvement using Title I funds (from 50% to 40%); new provisions for teacher licensure; and more.

The biggest payoff for SFAF appeared to be in support for research-based and research-validated comprehensive school reform. NCLB formally incorporated and adapted the two federal policies that had driven SFAF’s explosive growth: the Obey-Porter Comprehensive School Reform Demonstration Act (which evolved into Comprehensive School Reform, or CSR); and the Reading Excellence Act (which evolved into Reading First). At the same time, NCLB further formalized guidance for qualifying external programs, with increased attention to programs both based on and validated by “scientifically-based research”—a phrase used famously 111 times in the legislation. For example, Comprehensive School Reform expanded Obey-Porter’s nine criteria for qualifying programs to eleven, including new criteria directly addressing the research basis of programs (see Table 4.2, below).

Research was especially central to the rhetoric of Reading First, which drew directly from the findings of the National Reading Panel to specify five core dimensions central to success in primary reading: phonemic awareness, phonics, vocabulary development, reading fluency (including oral reading skills), and reading comprehension strategies. Per the federal guidance for Reading First:

Quite simply, Reading First focuses on what works, and will support proven methods of early reading instruction in classrooms. The program provides assistance to States and districts in selecting or developing effective instructional materials, programs, learning systems and strategies to implement methods that have been proven to teach reading. Reading First also provides assistance for the selection and administration of screening, diagnostic and classroom-based instructional reading assessments with proven validity and reliability, in order to measure where students are and monitor their progress. Taken together, the complementary research-based programs, practices and tools required by Reading First will give teachers across the nation the skills and support they need to teach all children to read fluently by the end of third grade.20

The rhetoric of rationality was complemented by the parallel emergence of an administrative infrastructure that appeared to support research-based, research-validated comprehensive school reform. For example, with the Education Sciences Reform Act of 2002, the Department of Education’s Office of Educational Research and Improvement was reorganized into two new organizations: the Institute of Education Sciences and the Office of Innovation and Improvement. (p.163)

Table 4.2 No Child Left Behind Act: Criteria for Qualifying Schoolwide Programs*

Key components of comprehensive school reform programs under NCLB included:

  1. Employs proven strategies and proven methods for student learning, teaching, and school management that are based on scientifically based research and effective practices and have been replicated successfully in schools;

  2. Integrates a comprehensive design for effective school functioning, including instruction, assessment, classroom management, professional development, parental involvement, and school management, that aligns the school’s curriculum, technology, and professional development into a comprehensive school reform plan for schoolwide change designed to enable all students to meet challenging State content and student academic achievement standards and addresses needs identified through a school needs assessment;

  3. Provides high quality and continuous teacher and staff professional development;

  4. Includes measurable goals for student academic achievement and benchmarks for meeting such goals;

  5. Is supported by teachers, principals, administrators, school personnel staff, and other professional staff;

  6. Provides support for teachers, principals, administrators, and other school staff;

  7. Provides for the meaningful involvement of parents and the local community in planning, implementing, and evaluating school improvement activities consistent with section 1118;

  8. Uses high quality external technical support and assistance from an entity that has experience and expertise in schoolwide reform and improvement, which may include an institution of higher education;

  9. Includes a plan for the annual evaluation of the implementation of school reforms and the student results achieved;

  10. Identifies other resources, including Federal, State, local, and private resources, that shall be used to coordinate services that will support and sustain the comprehensive school reform effort; and

  11. (A) Has been found, through scientifically based research to significantly improve the academic achievement of students participating in such program as compared to students in schools who have not participated in such program; or (B) has been found to have strong evidence that such program will significantly improve the academic achievement of participating children.

*U.S. Department of Education, 2005, Sec. 1602.

(p.164) By all appearances, the aim was to create federal infrastructure to support a system-wide emphasis on the use of science in educational reform. Concurrently, federally funded, quasi-governmental organizations were developing capabilities to evaluate comprehensive school reform programs and other programs for rigorous evidence of effectiveness: for example, the University of Oregon’s Reading First Center, the National Clearinghouse on Comprehensive School Reform, and the What Works Clearinghouse.

SFAF’s dynamic development agenda and process, its fluid development organization, and its long-established capabilities for (and commitment to) research supported a quick response. Responding to NCLB’s emphasis on scientifically based research, SFAF sought to set the standard by designing a randomized field trial of Success for All, the closest that SFAF or any other provider had ever come to conducting a controlled experiment evaluating the effectiveness of a comprehensive school reform program. At the same time, SFAF drew quickly from existing and newly developing resources to package a version of Success for All targeted directly at districts and schools receiving federal funding under Reading First.21 Promotional literature described the new program, called “Success for All–Reading First,” as “precisely aligned with the requirements of this new legislation.”22 A promotional booklet describing the program to prospective schools explained further:

To align Success for All with the requirements of Reading First, we have created a new program, Success for All–Reading First, which is designed to meet these requirements. It builds on the programs, practices, and training capabilities established over the years by the nonprofit Success for All Foundation.

Success for All–Reading First is not merely an element of Success for All. Every aspect of the classroom program for grades K–3 has been examined in light of the Reading First guidelines and in light of changing state and national standards for practice and content. Many new elements are being introduced in Success for All–Reading First to strengthen the alignment with Reading First guidelines, and to improve professional development, assessment, curriculum, and other elements.23

2002/2003: The Core Curriculum Components

For 2002/2003, earlier development initiatives continued. These efforts spanned all components and operations: for example, a complete redesign of initial training for school leaders; a complete revision to the primary leadership manual to align it with on-going program improvements; a new release of Success for All’s web-based information system; the continued piloting and release of strategy-specific components in the Building Better Readers series; initiation of a second revision to tutoring in which all tools and resources would be incorporated into a multimedia software package; reconceptualization of the family support team as (p.165) the “solutions team,” including efforts to more tightly coordinate academic and nonacademic services for students; an extended pilot of the Goal Focused Improvement Process throughout the training organization; the release of 27 new or revised Experienced Sites Conference sessions (along with associated participant training books); and the initial fielding of SFAF’s randomized study of program effectiveness.

At the same time, SFAF put the figurative capstone on its 1999–2003 development cycle by completing revisions to the core Success for All curriculum components. Early Learning was redesigned and re-released as KinderCorner. Reading Roots was released as “Reading Roots Third Edition,” which included a new phonics sub-component called “FastTrack Phonics.” And Reading Wings was updated to include new “Targeted Treasure Hunts” focused on the same, four core comprehension strategies as the Building Better Readers series: clarification, summarization, questioning, and prediction. All had been under revision during the entire development cycle, though they were under a much longer development timeline due to the time and cost of updating such extensively elaborated materials and due to their sensitivity as the coordinative center of the entire program.

Across the components, the overall design for curriculum and instruction remained intact: continued focus on cognitive skills and strategies; continued use of cooperative learning and the cycle of instruction; continued use of highly elaborated units and lessons; and increased attention to embedding supplemental guidance in the core curriculum materials. Within this framework, developers sought to accomplish multiple goals at the same time. One goal was to increase support for expert use in regular classroom instruction: for example, by strengthening informal and formal assessments and their use for adapting instruction; by adopting conventional language of expert use across curriculum components (e.g., “diagnostic instruction,” “teaching and modeling,” and “prompting and reinforcing”); and by embedding video and DVD technology into classroom instruction to model expert use by teachers and students. A second goal was to further adapt the program to environments: for example, by aligning intended outcomes with standards and assessment of key states, and by incorporating the increasingly ubiquitous language from the findings of the National Reading Panel. A third goal was to address sundry needs that had been recognized over time: for example, improving aesthetics and “ease of use”; improving coordination and transitions between the different levels of the reading curriculum; and incorporating additional writing instruction into the reading curriculum.

Reprise: Recapturing the Program

If the goal of SFAF’s 1999–2003 development cycle was to recapture the program, there was much to suggest that SFAF was well on its way. In four quick years, while continuing to support its installed base of schools, the SFAF development organization used incremental, component-by-component improvements to completely (p.166) re-engineer Success for All. Collectively, these program improvements appeared to position Success for All strongly in response to past problems of “process” and “outcomes” by vastly increasing support for more expert use of the program. While revisions to the core curriculum components functioned as a figurative capstone on the 1999/2000–2002/2003 development cycle, development efforts would actually continue, with (among other things) a particular focus on improving guidance for school-specific interventions. Even so, by the end of 2002/2003, all of the above-described program improvements were available in some form to the full network of Success for All schools.

The 1999–2003 development cycle also appeared to position the Success for All enterprise very strongly in response to NCLB, two of its flagship programs (Comprehensive School Reform and Reading First), and its increased attention to scientifically based research in educational reform. Indeed, in the view of SFAF’s executives, the still-growing research base on Success for All was a key, potential source of competitive advantage in NCLB-effected environments.

To help secure that advantage, SFAF began posting research on Success for All on its web site. Two studies were featured particularly prominently. Both were led by University of Wisconsin researcher Geoffrey Borman.24 Both were published in leading, peer-reviewed journals concurrent with the close of the 1999–2003 development cycle. And both bore good news, for comprehensive school reform (in general) and for Success for All (in particular).

In 2002, Borman and Gina Hughes, a Johns Hopkins University researcher, published a study of the long-term achievement effects and the cost effectiveness of Success for All in Educational Evaluation and Policy Analysis, a journal of the American Educational Research Association. In the study, Borman and Hughes used data provided by the Baltimore City Public School System for the years 1986/1987 to 1998/1999 to compare students who attended five of the original Success for All schools in Baltimore to students who attended matched comparison schools as they progressed to middle school and high school. Borman and Hughes reported that their results indicated that “Success for All students complete eighth grade at a younger age, with better achievement outcomes, fewer special education placements, and less frequent retentions in grade at a cost that is essentially the same as that allocated to educating their control-group counterparts.”25 Further, they reported that “the replicable educational practices of prevention and early intervention, as modeled by Success for All, are more educationally effective, and equally expensive, relative to the traditional remedial educational practices of retention and special education.”26

In 2003, Borman and colleagues published in the Review of Educational Research (another journal of the American Educational Research Association) what, to that point, was the most extensive meta-analysis of the effects of comprehensive school reform programs on student achievement: a 100+ page review of the results of 232 studies of 29 comprehensive school reform programs. While they acknowledged weaknesses in the amount and quality of available research, and while they reported that effect sizes varied tremendously, Borman and colleagues argued that “the overall effects of CSR are statistically significant, meaningful, and appear to (p.167) be greater than the effects of other interventions that have been designed to serve similar purposes and student and school populations.”27

Further, Borman and colleagues observed that, even though conventional funding covered a three-year implementation window, achievement gains improved dramatically as schools exceeded that window. Specifically, schools that had implemented comprehensive school reform programs for five years or more showed achievement effects two times the overall average effect size of 0.15. Schools that had implemented programs for seven or more years showed gains 2.5 times greater than the overall average effect size. And schools that had implemented programs for eight to fourteen years showed gains 3.3 times greater than the overall average effect size.28

More importantly for SFAF, Borman and colleagues identified Success for All and two other programs (Direct Instruction and the School Development Program) as having the strongest evidence of effectiveness, with Success for All reported as having an overall effect size of 0.18 and an effect size of 0.08 in third-party research.29 To be sure, the difference between the overall effect size (which included SFAF’s own research) and the effect size for third-party research was eye-catching. So, too, was the difference between the effect sizes reported by Borman and colleagues and the +0.50 effect sizes reported by SFAF.30 While recognizing the potential for bias, Borman and colleagues argued that the explanation more likely rested in developers either (a) deciding not to publish weak findings and/or (b) studying high-quality implementation in order to best learn how their programs work in practice.31 To guide interpretation, Borman and colleagues drew on conventional sources to suggest that an effect size of 0.20 be interpreted as generally small, though representative of positive effect sizes in education-related fields. They also suggested that, in education and related fields, modest effect sizes (i.e., between 0.10 and 0.20) should not be interpreted as trivial.32

From the perspective of SFAF, both studies clearly bore positive findings. With an extensively revised program and new evidence of effectiveness, the stage appeared to be set for another rapid increase in scale. That was a view held by SFAF’s executives, and by many others as well. For example, in a 2003 book on leadership in comprehensive school reform programs, researchers Joseph Murphy (of Vanderbilt University) and Amanda Datnow (then of the Ontario Institute for Studies in Education at the University of Toronto) made exactly that point:

The market for CSR models is quite an active one; it has been estimated that more than $1 billion was spent by local school improvement services by the year 2001 (New American Schools, n.d.)

Overall, we now find that the scaling up of CSR models is occurring at an unprecedented rate, affecting thousands of schools in the United States and elsewhere…. Considering that the number of schools implementing CSR models is estimated at 6,000 (Educational Quality Institute, n.d.), these schools comprise 6.5% of the approximately 92,000 public schools in the United States….

(p.168)

There is considerable evidence to suggest that the CSR movement will continue to grow and thrive in the next few years. Congress has continued to support CSRD, an increasing amount of research is being conducted on the implementation and effects of CSR (and hence our knowledge base has increased), and the reform models are maturing and design teams are becoming more adept at working with schools. There is still much to be learned from the CSR movement and its implications for school improvement and for leadership.33

New Problems Within, New Problems Beyond

SFAF’s revolution was not the same type of abrupt, seismic event as federal education policy: a bundle of evolutionary, interdependent policy and program improvements enacted on a given day, with the stroke of a pen. Rather, it was a series of waves that swelled over the course of the 1999–2003 development cycle and that lifted the entire enterprise. As those waves swelled, the Success for All enterprise began to look less and less like the fragmented, technically-weak system of U.S. public education and more and more like the coherent professional network long sought by SFAF founders.

Yet as waves of success swelled, a new set of interdependent and deeply perplexing problems began swirling beneath them. And, oddly, at the same time it was evolving in ways that differentiated itself from U.S. educational environments, the Success for All enterprise was also evolving in ways that resembled U.S. educational environments. Much as with the explosive growth of its training organization, SFAF’s development agenda, organization, and process interacted over the course of the 1999–2003 development cycle to recreate enduring problems of U.S. educational reform within the Success for All enterprise, including some of the very problems that SFAF had initially set out to solve. By no means had the Success for All enterprise suddenly devolved into a dysfunctional urban school district. Even so, for those who have studied such districts (or, more problematic for SFAF, for teachers and leaders actually working in them), the problems would have been familiar: a steady supply of rapidly churning and weakly integrated improvement initiatives; incoherence, faddism, and distrust in schools; and a potential-rich program that exceeded capabilities for effective use in schools.

At the same time that new problems began emerging within the Success for All enterprise, new problems began emerging beyond. Broader stability gave way to profound turbulence, and environments that had long been supportive of SFAF turned unexpectedly hostile. As its 1999–2003 development cycle progressed, despite projections for continued growth, SFAF found itself struggling to secure both the contributions needed to support its continuing development efforts and the revenues needed to sustain the enterprise.

Just as with the emergence of new potential within the Success for All enterprise, these new problems were not effected in one, turbulent moment. They, too, emerged, interacted, and compounded over the 1999–2003 development cycle. (p.169) As members of SFAF began recognizing them, early enthusiasm evolved to grave concern. In some ways, SFAF found itself sailing out of one perfect storm and straight into another, this one more furious than the one before it. In other ways, SFAF found itself someplace altogether different: living on some complex and ever-turbulent edge, staring down a combination of interdependent problems and solutions within and beyond the Success for All enterprise, its continued viability hanging in the balance.

Churning and Weakly-Integrated Improvement Initiatives

Within the SFAF enterprise, the problems began with the development agenda, organization, and process interacting to recreate a version of the uncoordinated reform blizzard constantly howling in the broader environments of U.S. public education. Spanning the 1999–2003 development cycle, developers worked urgently to improve both the elaboration and scaffolding of every program component, with improvements readied for release on at least an annual basis (and sometimes more frequently than that). While the intent was to rapidly address pressing problems and to seize new opportunities, an unintended consequence was that SFAF’s prolific development organization produced a torrent of constantly churning program improvements. The number of new and revised sessions at annual Experienced Sites Conferences was dizzying in and of itself: 19 new or revised sessions in 2001; 38 new or revised sessions in 2002; and 27 new or revised sessions in 2003. Over this same period, the Success for All Materials Catalog (the standard document detailing products and services available to schools) evolved from a thin newsletter to a 50+ page booklet.

Much as the rapid scale-up of the installed base of schools overwhelmed the historic informality of SFAF’s training organization, so, too, did the rapid scale-up of development activity overwhelm the historic informality of SFAF’s development organization. Developers struggled to coordinate these many, complex, and fast-paced improvement initiatives informally among themselves. Further, the number, complexity, and pace made it increasingly difficult for SFAF executives to be as intimately involved in all aspects of the day-to-day work and, thus, more difficult for them to informally coordinate it. Consequently, from 1999–2003, what from a bird’s eye view appeared to be an orderly progression of program improvements appeared much less orderly for those on the ground. In a 2003 interview, one SFAF developer expressed the frustration of many:

Somebody gets an idea, we’re rolling it out the door the next day, without any real thought as to how it’s going to affect this, this, or this. All this stuff comes rolling out…. It’s just like, every day, I walk in, and I think, “What’s new this week?” I love our organization, but whoever is there last at any given meeting is the one who seems to make the decision. “Okay, this is how it’s going to happen.” But you can walk out in the hallway and talk to another person, and somebody else can change that decision.

(p.170) Problems of coordination in the development organization led to what members of SFAF described as problems of integration: that is, problems linking separately improved components into a comprehensive, coherent, improved program. Program improvements simply were not fitting together as tightly as developers intended and as schools needed. Integration was particularly problematic within the leadership component. SFAF had a dedicated leadership team charged with improving support for school leaders. At the same time, virtually every other program improvement from every other development team had implications for school leaders, from aligning the program with environments to scaffolding teachers in the use of new curriculum improvements. While these many program improvements had potential to interact to support a more expert version of leadership practice, the lack of coordination among developers resulted in a lack of explicit linkages between program improvements, as well as a lack of shared understandings about how the entire lot could work in interaction.

Problems of constantly churning and weakly integrated program improvements were not recognized immediately. Instead, members of SFAF began to recognize them midway through the 1999–2003 development cycle, as development initiatives expanded, efforts to field them grew, and experiences and frustrations accumulated. Yet stopping these problems at their source by slowing the number and pace of improvement initiatives did not present itself as an option to SFAF. Continuous, program-wide improvement was among the core commitments and capabilities of SFAF, work historically supported by a steady stream of public and private funding. Thus, it continued.

Incoherence, Faddism, and Distrust in Schools

Constantly churning and weakly integrated program improvements began to interact with complex, school-level dynamics to recreate within the Success for All enterprise versions of the incoherence, faddism, and distrust so characteristic of chronically underperforming schools. The problems began with SFAF’s urgent dissemination process, which effected a level of incoherence in schools even beyond that effected by problems of design-level integration. Again, a rallying cry among SFAF executives and developers was, “Get it out there!” And, again, rather than issuing complete, revised versions of Success for All in the style of software releases (e.g., “Success for All v. 3.0”), program improvements were urgently “gotten out there” much more in the style of software “bug fixes” and “patches,” on an improvement-by-improvement basis (sometimes annually, sometimes more frequently).34 Thus, while newly enlisting schools received a complete, up-to-date version of Success for All, all other Success for All schools were always working with some combination of existing and improved program components.

Incoherence was exacerbated by a sort of faddism. Working as an external provider on a voluntary basis, SFAF had no formal authority over which schools elected to adopt which new-and-improved program components. Though developers and trainers provided guidance, their voluntary relationship with schools (p.171) gave them no formal authority over such decisions. Rather, such decisions were intentionally delegated to schools, themselves: in deference to their status as the paying customers; in an effort not to effect interpretations of SFAF as a bureaucratic, controlling outsider; and to cultivate schools’ ownership and motivation. The result, thus, was individual schools working within existing understandings, preferences, and budget constraints to sample from SFAF’s ever-expanding catalog of materials and scaffolding opportunities: a veritable shopping mall of hot, new school improvement resources and services.35 Some schools aggressively pursued the latest and greatest, no matter their needs. Others rode out this year’s model in anticipation of next year’s model. Still others did a little of both.

Problems of incoherence and faddism were matched with problems maintaining trustful relationships with schools. Working as an external provider, SFAF always had to maintain a soft touch, actively working to support implementation without breaching the trust or autonomy of schools. The work of continuous improvement complicated that challenge, with SFAF caught between a rock and a hard place. On the one hand, members of SFAF reported that some schools pushed for continuous improvement, both as a sign of SFAF’s commitment to them and because they drew motivation from participating in a dynamic, forward-moving enterprise. On the other hand, members of SFAF reported that other schools interpreted continuous improvement as a sign that SFAF was floundering. Exacerbating that perception was the fact that, with problems both in coordinating and disseminating program improvements, SFAF sometimes did flounder. In a 2003 interview, one member of SFAF with joint responsibility for development and training explained the consequences:

When you’re out in the field, there’s nothing worse than principals mad at you because they ordered materials and they didn’t get them. Or the revisions are not quite done, and they’re trying to order materials now, because they have their funding, but they don’t know if Reading Roots Third Edition will be ready in time. There’s nothing worse than that. There’s nothing more destructive in terms of the way that we maintain ourselves. Word of mouth between schools is, “Yeah, they’re a good program, but they don’t have their act together.” And people in the field deal with that, and Customer Relations deals with that. Developers don’t always deal with that.

The Paradox of Potential and Practice

Among the most surprising of the problems that SFAF recreated within its own enterprise was the paradox of potential and practice: The more potential that the development organization packed into the program, the more difficult it was to put that potential into practice in weak schools.

With the program improvements released during its 1999–2003 development cycle, SFAF was pushing its schools to make a fundamental transition. Teachers and principals locked into mutually reinforcing interpretations of Success for (p.172) All as bureaucratic and/or technocratic (and, consequently, into mutually reinforcing patterns of mechanistic use) were being challenged to understand and to use the program fundamentally differently: that is, as a resource for professional practice and learning. However, the more support for expert use that developers layered into the program, the harder and farther they pushed teachers and leaders beyond existing understandings and ways of working.

While SFAF’s new materials, tools, and resources were developed with careful attention to ease of use, and while SFAF’s new scaffolding opportunities were developed with the intent of supporting quick use in practice, none of these supports was designed to be effective without the extensive involvement of SFAF trainers. Indeed, SFAF’s 1999–2003 development process pushed SFAF trainers to think and to work just as differently as it did teachers and leaders. Further, absent solutions to problems of integration, coordination, and dissemination within the development organization, those became problems for trainers to solve on the ground, in their work with schools. Simply understanding how to leverage program improvements at a rudimentary level grew increasingly problematic as the number and pace of improvement initiatives expanded, never mind integrating the different program improvements to support expert use in schools.

And that was the rub. As the development organization worked urgently to pack new potential into the program, it pushed hardest on what, going into the 1999–2003 development cycle, had been SFAF’s Achilles’ heel: its training organization, many members of which were, themselves, locked into adverse interpretations and misuse of the program. Yet awareness of the breadth and depth of problems in the training organization was not shared equally among members of SFAF in advance of the 1999–2003 development cycle. Rather, shared awareness emerged and sharpened over the course of the 1999–2003 development cycle. This awareness emerged as trainers began attempting to use new resources and new scaffolding opportunities to support expert use in schools, and as SFAF executives, managers, and developers began recognizing their struggles doing so.

During its 1999–2003 development cycle, absent deep understanding of the extent of problems in the training organization, SFAF approached improving the professional development of trainers exactly as it approached improving the program as a whole: that is, via incremental, evolutionary improvement. SFAF continued to work within existing constraints: for example, a lack of external funding to support the development of the training organization; the inability to stop development activity and re-allocate earmarked funding to the training organization; and the inability to take trainers “off line” and out of the field to engage in extended professional learning opportunities. As such, SFAF continued to support trainers’ professional development with the same, formal structures: via direct instruction in the context of annual conferences and periodic meetings, with a primary focus on exposure to SFAF’s many, new program improvements (rather than mastery of them). And SFAF continued to rely on informal communities of training practice to develop capabilities for expert training practice.

Familiar results followed. More expert trainers participating in stronger communities of practice were more able to capitalize on program improvements. (p.173) More novice trainers participating in weaker communities of practice struggled, often incorporating new language and ideas (especially the distinction between “routine” and “refined” use of Success for All) without changing their day-to-day work. Uneven progress developing trainers’ capabilities to exploit new potential in the program, in turn, became a limiting condition on developing schools’ capabilities to exploit that new potential. Some in SFAF drew the line right down the middle, with two of four geographic training areas progressing to more expert use and with two continuing to struggle.

As members of SFAF began recognizing the challenges both of expert training and of developing expert trainers, SFAF initiated immediate efforts to respond. For example, in SY 2001/2002, SFAF revised first-year professional development for new trainers to include the analysis of data from actual Success for All schools, with the goal of immediately beginning to develop their capabilities to identify problems of implementation and effectiveness—in September, before they ever conducted their first Implementation Visit. To support collaboration and integration among trainers, SFAF incorporated its previously independent family support training team into the full training organization, and it began aggressive efforts to “cross-train” trainers to ensure that all were equally prepared to support all program components. And one of SFAF’s geographic training areas used early pilot work on the Goal Focused Improvement Process to learn more about communities of practice as a context for developing capabilities for expert training.

Lessons learned from these early efforts suggested steep challenges, most notably in the pilot work on the Goal Focused Improvement Process. The pilot, itself, was something of a best case scenario: an expert regional manager (a National Board Certified teacher with a master’s degree in organizational development) in SFAF’s most progressive, geographic training area, working with approximately 10 trainers to support 100 schools, under the guidance of an area manager who also served as one of the lead developers. Recalling her experiences, the regional manager reported that it required three years of intensive collaboration among her team to advance to expert use of the Goal Focused Improvement Process. En route, she experienced 50% attrition among her team, in part because of trainers self-selecting out of the organization due to the challenges in the work. With this best case scenario as a lower bound, developing expert trainers under the guidance of the modal regional manager in the modal geographic training area was shaping up to be a very long-term proposition.

Thus, SFAF found itself facing the same paradox of potential and practice that had long frustrated other ambitious, external educational reformers. It was a matter of organizational physics: of potential and kinetic energy. SFAF developers were winding and winding and winding a spring that packed potential into the program. However, the spring’s release mechanism—its training organization—was stressed entering the 1999–2003 development cycle, and it began to seize up as the cycle progressed. This release mechanism was not a simple trip switch. Rather, the release mechanism was a complex, geographically distributed, self-funding, hard-working group of very committed people without the plan, the opportunity, or the slack to support collegial, practice-based learning. With that, (p.174) new potential remained tightly coiled in the program, and not in motion and in practice in schools.

Of all that members of SFAF reported learning through the 1999–2003 development cycle, problems and complexities in the professional development of trainers appeared to be most surprising, both for experienced members of SFAF and for developers newly recruited to the effort. Whether a veteran or a rookie, none had ever managed so extensive a program improvement effort in interaction with such a large training organization and such a large installed base of schools. Explained one newly recruited developer, in a 2003 interview:

This work environment is a new experience for me, working with hundreds of trainers in the field and disseminating information to thousands of schools. That’s the type of infrastructure for this size of an organization, and it is new for me. I’m the type of person who’s an action person. I usually do something, I get it done, I do the next step, and it all happens very smoothly. I’ve just learned, oh, but we’re dealing with hundreds of people now. Hundreds of people who learn very different ways, have very different styles of learning, have different needs, have different backgrounds, and they’re just not all going to get this at the same time and in the same way. That’s been a learning experience for me, and I think a valuable one, because that’s just the real world.

Hostile Environments

Enduring problems of reform within the Success for All enterprise began interacting with an equally enduring problem beyond: Broader environments quickly turned hostile.36 SFAF’s 1999–2003 development cycle began concurrent with a series of abrupt, seismic events in broader economic, political, and social environments. Economic growth, political stability, and domestic security that SFAF had enjoyed over its history quickly gave way to profound turbulence. Early in 2000, the very thing that had driven the economic prosperity of the late 1990s—the “dot com” bubble—burst. Following quickly was the 2000 presidential election, among the most contentious and controversial in the nation’s history. For an enterprise that had gained its full momentum largely within the administration of a single Democratic president, the 2000 election marked an abrupt transition. Square in the middle of SFAF’s 1999/2000–2002/2003 development cycle came the terrorist attacks of 9/11/2001. Resulting turbulence stretched clear through the 1999–2003 development cycle to the invasion of Iraq in March of 2003.

Shocks and aftershocks ramified through U.S. educational environments. Money began drying up everywhere. Philanthropic contributions began to decline with the rapid fall in over-valued investment portfolios. States, districts, and schools that had reaped the benefits of the roaring ‘90s suddenly found themselves squeezed. With new pressure from NCLB to increase achievement, the challenge to states, districts, and schools was to do much more with much less. (p.175) Indeed, many critics decried NCLB as an unfunded mandate, arguing that the work required of states, districts, and schools far exceeded federally provided resources.

Shocks and aftershocks ramified through the Success for All enterprise. The 9/11 terrorist attacks dealt yet another blow to SFAF’s ever-stressed training organization. Several years earlier, one of SFAF’s most experienced teams of trainers had been relocated to Manhattan to serve New York City’s Chancellor’s District. Its members were distributed throughout the city and in schools at the time of the attacks. At the very same time, all of SFAF’s newly recruited trainers for that year (many ex-teachers fresh out of their own classrooms) were gathered for initial professional development in Tysons Corner, Virginia, within a few short miles of the Pentagon at the time of the attack. Other trainers were distributed throughout the country, conducting their first implementation visits of the new school year. All struggled to travel home to their families—and, for the years following, to travel at all.

Adding to the new strain on the training organization, declining philanthropic contributions began to erode the funding available to support SFAF’s expansive development agenda. Uncertainty in state and local funding meant that schools had fewer resources with which to purchase SFAF’s newly revised materials and training services, thus stemming a key source of revenue. And uncertainty in state and local funding interacted with SFAF’s loosely coordinated dissemination process to create a drain on SFAF’s hard-won operating capital. Long printing timelines, uncertain demand for new materials and training sessions, and ever-churning development had SFAF incurring large costs for obsolete program materials on an annual basis (costs estimated by one SFAF executive to be in the millions of dollars).

Still worse was a rapid decline in SFAF’s primary source of operating capital: revenues from new business. Rather than beginning the climb to 3,000+ schools over its 1999–2003 development cycle, SFAF executives began recognizing an unfamiliar and unanticipated phenomenon: a rapid decrease in the rate of growth in 2000/2001 (from 42% to 3%) and, then, the actual loss of schools for 2001/2002 and 2002/2003 (at a rate of roughly 4% per year).37 SFAF lost experienced schools: in some cases, school-by-school; in other cases, en masse, as districts that had coordinated the adoption of Success for All and other comprehensive school reform programs dropped them (including New York City’s Chancellor’s District, in the winter of 2003).38 Worse yet, SFAF struggled to recruit new schools. By 2002/2003, the end of its development cycle, the installed base had declined to 1,480 schools—still more schools than were supported by all but nine state education agencies, and more than SFAF served at the time of the organization’s founding in SY 1998/1999.39 Even so, for an organization heavily dependent on generating operating capital through new business, this was very bad news.

Reductions in funding, revenue, and the installed base of schools led to an unanticipated contraction in SFAF as an organization. Rather than growing to provide more support to more schools, SFAF found itself needing to lay off developers, trainers, and administrative staff, the first reductions in force in the history (p.176) of Success for All. Exhaustion and uncertainty had others leaving voluntarily. SFAF’s historically informal (and slack-rich) development organization began to face new incentives to manage the development process more efficiently, against scarce resources—at the same time that it was discovering the need to urgently address problems of integration and to urgently support the practice-based learning of trainers. Amidst new policy expectations for adequate yearly progress and the strain of life on the road in post-9/11 environments, SFAF’s already stressed trainers faced new pressure to improve achievement in order to retain existing schools, and they faced new incentives to recruit new schools in order to save their own jobs.

For a young organization still viewed by its members as a Mom and Pop shop, the impact of unexpectedly hostile environments struck a deep blow to the combination of confidence, camaraderie, and missionary zeal that had long supported the work of the organization. Particularly frustrating to all involved was that SFAF was experiencing these new problems with its environments despite the success of its 1999–2003 development cycle, both in packing new potential into the program and in strategically aligning it with NCLB’s Reading First program. Just as frustrating was that SFAF was experiencing these problems despite what appeared to be a strengthening in policy (and other) support for the still-nascent, niche market for research-based, research-validated comprehensive school reform programs.

However, whether the comprehensive school reform market was strengthening or weakening was actually emerging as a very open question. Concurrent with the close of the 1999–2003 development cycle, a new line of criticism was beginning to develop in SFAF’s environments: criticism not of Success for All, in particular, but of comprehensive school reform, in general. Where SFAF cited the review of research by Borman and colleagues as continued evidence of possibility, others cited the overall effect size of 0.15 as evidence that comprehensive school reform was yielding weak returns on very formidable investments, and weaker yet when considering that the overall effect size for third party research was only 0.09.40 By this logic, Success for All appeared to be the happy exception, not the disappointing rule.

One such critic was University of Michigan historian Jeffrey Mirel. In 2001 and 2002, Mirel published two reports on the history of the New American Schools.41 Arguing that the history of NAS was one of “unrequited promise,” Mirel was critical of the participating designs as more evolutionary than “break the mold,” of achievement effects as weak, and of NAS garnering formidable political influence amidst questionable evidence of effectiveness (thereby echoing the earlier criticisms of Stanley Pogrow).

Just as problematic was the ten-year report on the New American Schools initiative published in 2002 by RAND researchers Mark Berends, Susan Bodilly, and Sheila Nataraj Kirby.42 Their report framed whole school reform as challenging-yet-promising. Over their period of study, the challenges were formidable: for example, the participating designs were under constant development; achieving uniformly high levels of implementation in schools proved difficult; environments complicated implementation; and design team capacity in the sponsoring (p.177) organizations proved to be variable.43 Those challenges notwithstanding, they also reported that approximately 50% of schools implementing NAS designs demonstrated achievement gains relative to comparison groups.44

Consistent with the findings of Borman and colleagues, Berends and colleagues qualified their report by observing that implementation appeared to be improving over time, and that it may have been premature to expect robust achievement results across all schools in the six year window structured by NAS. However, recognizing little tolerance for complex, long-term reform, they concluded their report with a cautionary note regarding expectations for externally designed whole school reform:

Currently, many schools throughout the country are attempting whole-school reform requiring significant changes in teacher and administrator behaviors using the federal funding provided by such programs as Title I and the CSRD program. RAND’s program of studies of NAS has identified the conditions needed to make these efforts successful, including: teacher support and sense of teacher efficacy; strong and specific principal leadership abilities; clear communication and ongoing assistance on the part of design developers; and stable leadership, resources, and support from districts.

The RAND analyses indicate these conditions are not common in the districts and schools undertaking CSRD….

We anticipate continuing conflicts between whole-school design or model adoption and district and schools contexts as well as political pressures rushing schools and external assistance providers into partnerships that are not well thought through. If districts continue in this manner, the outcome will be neither short-term gains nor long-term success. Expectations regarding the ability of schools to make meaningful changes with the assistance of externally developed designs in this fragmented and unsupportive environment are not likely to be met. This may well lead policymakers to abandon what could be a promising vehicle for whole-school reform without having given it a chance.45

Losing the Organization

With SFAF, so much depended on one’s perspective. From one vantage point, its 1999–2003 development cycle appeared to have SFAF recapturing the program. From another, its 1999–2003 development cycle appeared to have SFAF losing the organization: the organization of its development efforts, resulting in ever-churning and weakly integrated program improvements; the organization of its cornerstone program, with problems of incoherence and faddism in schools; the organization of its training operations, with new potential in the program exceeding trainers’ capabilities to exploit that potential; and the SFAF organization, itself, as environments turned unexpectedly hostile.

(p.178) That all of this was happening was becoming increasingly clear. Why it was happening was not. Apprehending and understanding the array of interdependent problems both within and beyond the Success for All enterprise would have challenged executives and executive staff in even the most professionally managed organizations. Yet SFAF continued to be managed by educational researchers and developers, themselves supported by thin executive-level staff. Indeed, SFAF did not have a formal operations department to tease out the complex problems within the Success for All enterprise, nor did it have a formal “marketing department” or “policy department” that monitored and interpreted environments. Again, much of this work was conducted informally among executives, developers, trainers, school leaders, and teachers. However, echoing problems in development and training, profound turbulence over the course of the 1999–2003 development cycle swamped SFAF’s informal means of monitoring and interpreting interdependent activity within and beyond the enterprise—and, with that, its ability to apprehend and understand interdependent problems bearing on it.

Just as with SFAF’s program improvements, these new problems had been emerging, evolving, interacting, and compounding over the course of the 1999–2003 development cycle. As the cycle drew to a close, these new problems reached a fever pitch. SFAF was under great stress. Cracks began to show in its members’ resolve. Trainers were overwhelmed. Developers were exasperated. Executives were bewildered. One SFAF staff member spoke for all: “It seems like we’re always raging against the machine.”

Living on the Edge

Over the 1990s and into the new millennium, as SFAF worked to design, support, scale up, and improve its comprehensive school reform program, students of complex systems began wrestling with a notion that they called “the edge of chaos.”46 This notion was used to describe complex systems that had evolved to teeter on some fine point between predictability and randomness. At issue was what happens to such systems over time, as small changes in key variables drive abrupt transitions from one phase in the system’s existence to another. One wrinkle in scholars’ debates focused on the capabilities of entities living on this edge. Some argued that the edge of chaos was a place of maximum evolution. They argued that, through natural selection processes, entities that survived there “evolved to evolve” by developing unusual capabilities to process large volumes of information and to adapt quickly and strategically. Some questioned the argument of the edge of chaos as a place of maximum evolution. Others questioned whether some place like “the edge of chaos” really existed at all.

While scholars debated, the theoretical notion of “the edge of chaos” evolved into a general metaphor describing the challenge of existing in some space between predictability and randomness. To the chagrin of some, the metaphor grew legs, and it began to travel. The metaphor made its way into research and practical writing on business strategy. Scholars and analysts appropriated the notion to (p.179) describe “creative disorganization” as a legitimate and adaptive (if inefficient) managerial strategy under particular conditions (and a stark contrast to conventional methods of rational management).47 A variant of the metaphor made its way into research on innovation. Scholars leveraged the notion to frame the innovation process as a journey down a river that oscillated between divergent, random, chaotic flows and convergent, orderly, periodic flows. For managers, they described the most acute problems as sitting at the point of transition between convergent and divergent flows: the fulcrum between predictability and randomness, and a point of choice between rational and “pluralistic” management strategies.48 The metaphor even made its way into popular literature, with a novel titled The Edge of Chaos using the notion to frame a fundamental human condition in the new millennium: a need for constant learning and change in order to stem a life of hopeless predictability, though at risk of effecting a life of disorder and instability.49

Whether real or imaginary, fact or fiction, the notion of “the edge of chaos” appeared to describe perfectly the place where SFAF and its enterprise lived, and what it took to survive there. SFAF and its installed base of schools lived at the interstices of three complex, interdependent systems: SFAF’s self-constructed network of schools; the broader system of U.S. public education; and the still-broader economic, political, and social system. In some ways, in the early 2000s, these systems were mired in order and predictability, each reinforcing the other: Success for All schools, mired in rote, mechanistic use of the program; the deeply institutionalized U.S. educational system, mired in sub-standard performance and non-improvement; and the broader system, mired in an industrial economy and political partisanship. In other ways, these systems were in the throes of great change, each reinforcing the other in a push for some new, rational order, and at a rapid rate when cast against history: the Success for All enterprise, driving systematic experimentation and evidence-based decision making into classrooms, schools, external programs, and policy; the U.S. educational system, driving systemic reform through the use of research and accountability; and still-broader environments, in which the Information Revolution fueled a global economy and a populist enlightenment.

These were dynamic systems. They were prone to abrupt, seismic changes at their most fundamental levels, followed by shocks and aftershocks within and between: for example, in Success for All, with the introduction of new, simple language describing expert use that captured fundamental ambitions and fueled rapid development; in the U.S. educational system, with a bundle of evolutionary, interdependent changes in policies and programs enacted at a moment with the stroke of a pen; and in the broader economic, political, and social systems, as bubbles burst, towers crumbled, and all watched in shock and awe. Yet these changes were not entirely random. Key elements of these systems moved in predictable (if uncoordinated) cycles that functioned to synchronize activity within and between actors: the school year; policy cycles; funding cycles; economic cycles; and electoral cycles.

Working amidst abrupt shocks and within predictable cycles, these systems and the entities in them responded. They did not do so using a shared set of rules, (p.180) such that each was responding in known and predictable ways to each other. Indeed, though rhetoric, intentions, and possibilities for rationality ran through all three systems, movement over time within and between systems was far more evolutionary than it was classically rational, as all simultaneously processed circumstances, made meaning, and charted their way forward. Actor by actor, the great bulk of activity appeared to be more of the “successive limited comparisons” variety than the “step back, consider, evaluate, and choose the optimal alternative” variety.

This was SFAF’s world: the interstices of three complex, interdependent, adapting systems. This world was certainly not what some described as “empty”: a world of predictable parts neatly arranged, such that one could easily improve the world by isolating a given part, plucking it out, studying it, improving it, and re-inserting it. Nor was this world an apocalyptic anarchy, with all its parts strewn here and there, each acting in its own self-interest, absent any recognition of, deference to, or responsibility for the collective. Rather, this was a sort of “full world” in which parts, problems, solutions, and challenges were moving and interacting with each other, within and between, in regular cycles, punctuated by occasional, abrupt changes in the fundamentals. Indeed, at its very core, the vision of Success for All was predicated on this very fullness: the deep faith that, somehow, the gentlest of movements—little children in classrooms learning together in new ways—could stir up a storm of equality, prosperity, and hope.

Living at the nexus of these three systems, SFAF had evolved to evolve, constantly adding new organizational capabilities both in response to (and to compensate for) its environments. These capabilities combined to support an approach to the work of continuous improvement that blended capabilities for exploiting what research had to offer with an unusual, pragmatic epistemology for producing new knowledge from experience. SFAF had evolved to include mechanisms that kept it flooded with information and knowledge from within its own system, from the broader system of U.S. public education, and from the still-broader economic, political, and social system. It had evolved to include mechanisms for rapidly processing that information and knowledge, making shared meaning of it, and codifying it. It had evolved to include mechanisms for rapidly adapting its program and its organization. It had evolved to include mechanisms for replicating favorable adaptations across its installed base of schools. And it had evolved to include mechanisms with which to influence, manage, and buffer environmental disturbances, all in an effort to secure conditions favoring its effectiveness and survival.

Over its history, SFAF had succeeded in leveraging these capabilities to understand and to manage key interdependencies within its own enterprise: among its schools, program, and organization. SFAF had also succeeded in leveraging these capabilities to manage key interdependencies with its environments: among organizations and interests both in the broader system of U.S. public education and in the still-broader economic, political, and social system. Yet, at the close of its 1999–2003 development cycle, SFAF was staring down two steep challenges. On the one hand, SFAF was evolving so fast, along so many dimensions all at once, (p.181) that it began to recreate within its own enterprise many of the very problems that it had originally set out to solve. On the other hand, SFAF was not evolving fast enough.

Well into its second decade of operations, and even with all of its unusual capabilities for learning, SFAF was still developing the knowledge, programs, and capabilities needed to effectively support large-scale, comprehensive school reform. At the same time, experiences surrounding the passage, early implementation, and administration of NCLB had SFAF deeply concerned. Billions of dollars in federal funding were entering the system in apparent support for systemic, comprehensive, schoolwide reform, with billions more slated for each of the next five years. Yet how, exactly, states, districts, and schools were using this funding simply was not clear, and problems retaining and recruiting schools had SFAF worried about a weakening in the comprehensive school reform market. In 2003, one member of SFAF with joint responsibilities for development and training expressed the worries of many:

I think our organization is just coming to understand whole school reform through this development process, as well as data-driven instruction and implementation. I think we’re still trying to understand whole school reform. And it might be gone by the time we understand it.

Sustaining the enterprise by maintaining its strategic, precarious position at the nexus of these three complex, interdependent systems would surely require continued, rapid evolution: perhaps within the niche market for comprehensive school reform; perhaps to transform the enterprise and to position it for entry into some new market niche. And with rapid evolution shaping up as a near-certainty, so, too, were new challenges. On the one hand, rapid evolution risked continuing to effect and to exacerbate the classic problems of U.S. educational reform that were emerging within the SFAF enterprise. On the other hand, SFAF’s capabilities for rapid evolution were eroding. With declines in contributions and revenues, slack was quickly evaporating, and SFAF’s long-held tolerance for “creative disorganization” was being matched with an urgent need for rational management. The size and complexity of the operation—the installed base of schools, the program, the development and training organizations, and their environments—were all pushing in this same direction. Camaraderie was being tested, and confidence was waning.

If responsibility for packing potential into the program rested with the development organization, and if responsibility for implementation rested with the training organization, then responsibility for evolving the strategic vision and charting a course that would sustain the enterprise rested with SFAF executives. Such work had been on-going over the history of Success for All, and highly confounded with the work of continuous improvement, though always under the tight control of SFAF executives. Problems that emerged over SFAF’s 1999–2003 development cycle took work that had long been taken for granted in SFAF and yanked it to the fore.

(p.182) The work of sustaining the enterprise was shaping up to be as challenging as any that SFAF had yet faced. The work would require building new executive capabilities commensurate with the complexity of activity within and beyond the enterprise. It would require developing the capability to discriminate between “creative disorganization” and simple disorganization. It would require striking a balance between budding ideas like “pluralistic leadership” and plain, old-fashioned rational management. Above all else, it would require incorporating and producing the executive-level knowledge needed to pull it all off. This last challenge was shaping up to be the most difficult of the bunch. All of this, at a moment when leading scholars were still struggling simply to conceptualize the fundamental condition that appeared to be at hand, and while other leading scholars and analysts were just beginning to produce the practical knowledge needed to manage it.

Much like the schools it sought to serve, SFAF had arrived at its own moment of reckoning. For SFAF the missionary, the moment had come to save thyself. Either that, or call off the revival, roll up the tent, and move on. For SFAF the player, the moment had come to pull an ace out of its sleeve. Either that, or fold ’em, walk away from the table, and hit the buffet. For SFAF the organization, the moment had come to see clearly the complex, full world in which it lived, and to evolve rapidly in response. Either that, or succumb.

Such was SFAF’s surreal life at the interstices of these three complex, interdependent, adaptive systems: a place aptly described as the edge of chaos.

Notes

Notes:

(1.) Slavin and Madden (1996a: 7).

(2.) Lindblom (1959).

(3.) Finn’s comments about the transition from revolutionary outsider to beltway insider appear in his foreword to a report by University of Michigan historian Jeffrey Mirel titled, The Evolution of New American Schools: From Revolution to Mainstream. See Mirel (2001:iii). Mirel’s report is ultimately critical of NAS, and raises questions as to its implementation, effectiveness, and influence. Taking up Mirel’s critique, Finn’s foreword raises such questions, also. The Mirel/Finn critique is taken up at a later point in this chapter, in discussion of criticism of the New American Schools and comprehensive school reform that arose late in SFAF’s 1999–2003 development cycle.

(4.) Again, Finn’s comments appear in his foreword to Mirel (2001:iv). Per the preceding note, Finn’s comments couched a critique of NAS, given its influence on U.S. education reform. That critique is taken up later in this chapter.

(5.) U.S. Department of Education (1999b).

(6.) See Slavin (2001) on Title I as the “engine of reform” in U.S. schools.

(7.) For single and double loop learning, see Argyris and Schön (1974; 1978).

(8.) For example: Scholastic, from which SFAF incorporated the Scholastic Reading Inventory for use in quarterly assessment; Riverside Publishing, from which SFAF incorporated the Gates-MacGinitie Reading Assessments for this same purpose; the Center for the Study of Learning and Performance at Concordia University in (p.183) Quebec, which functioned as a key collaborator in improving the tutoring components; and the National Opinion Research Center at the University of Chicago, which functioned as a collaborator in conducting SFAF’s randomized field trial.

(9.) See Rogers (1995:131–160) for a widely-cited rendering of the conventional research-development-diffusion model of innovation development.

(10.) For a corroborating account of the SFAF development process, see Park and Datnow (2008).

(11.) For analyses of innovation development consistent with the approach used within SFAF, see: Zollo and Winter (2002); Winter and Szulanski (2001); Van de Ven, Polley, Garud, and Venkataraman (1999); Van de Ven, Angle, and Poole (2000); March (1996); and Gibbons, Limoges, Nowotny, Schwartzman, Scott, and Trow (1994).

(12.) The National Reading Panel was convened in 1997 at the request of the U.S. Congress by the National Institute of Child Health and Human Development. The charge to the Panel was to evaluate available research in order to assess the effectiveness of different approaches to teaching children to read. The Panel published its findings in April of 2000. See National Reading Panel (2000).

(13.) This characterization of innovation involving “divergent” and “convergent” learning derives from Van de Ven, Polley, Garud, and Venkataraman (1999). See, especially, pp. 181–214. See, also, March (1996).

(14.) Allen, Halbreich, Kozolvsky, Morgan, Payton, and Rolewski (1999).

(15.) See Tuckman (1965) and Hord, Rutherford, Huling-Austin, and Hall (1987).

(16.) Slavin and Madden (2001b).

(17.) Slavin and Madden (2001b:278).

(18.) These Experienced Sites Conference sessions drew from Wilde (1996).

(19.) See Slavin (2001) and Slavin (2002a).

(20.) U.S. Department of Education (2002:7).

(21.) At the same time, SFAF packaged a second offering, “SFA—Early Reading First,” addressing the pre-K “Early Reading First” component of NCLB.

(22.) Success for All Foundation (2002b).

(23.) Success for All Foundation (2002c:1).

(24.) In that critics had long taken exception with research conducted by associates of Success for All, it is important both to acknowledge Borman’s relationship with SFAF and to acknowledge the outlets for the research by Borman cited here (and elsewhere in this manuscript). Like earlier researchers at the University of Memphis, Borman was both an external researcher and an associate of Robert Slavin. In 2002 and 2003, Borman was an assistant professor at the University of Wisconsin and senior researcher for the Consortium for Policy Research in Education, a group of leading research universities whose members collaborated in studying education reform, policy, and finance as they bear on student learning. In 2001, Borman also collaborated with Robert Slavin and Samuel Stringfield (a researcher and associate of Slavin at Johns Hopkins University) in editing a book on the history and politics of Title I (Borman, Stringfield, and Slavin, 2001). For further reference, see, also, Borman’s work with Jerome D’Agostino on a meta-analysis of federal evaluation results on the effects of Title I on student achievement (Borman and D’Agostino, 1996). Borman would go on to direct Success for All’s national randomized field trial. To stem critics’ skepticism of research by associates of SFAF, note that all of the research on Success for All by Borman that is cited here (and elsewhere in this manuscript) was published in peer reviewed journals of the American Educational Research Association.

(25.) Borman and Hughes (2002:256).

(26.) Borman and Hughes (2002:256).

(27.) Borman, Hewes, Overman, and Brown (2003:164).

(28.) Borman, Hewes, Overman, and Brown (2003:153).

(29.) Borman, Hewes, Overman, and Brown (2003:161).

(30.) As a general matter, Borman, Hewes, Overman, and Brown (2003) found that developers’ own research was more favorable to their programs than was third party research. Borman et al. (p. 164) report a high-end estimated effect size of 0.15 when considering all studies and a low-end estimated effect size of 0.09 when considering only third-party studies.

(31.) See Borman, Hewes, Overman, and Brown (2003:167).

(32.) See Borman, Hewes, Overman, and Brown (2003:164).

(33.) Murphy and Datnow (2003:15).

(34.) Some such bundling was done at the component and sub-component level: for example, Reading Roots Third Edition. However, such bundling was not done for the program as a whole. Moreover, when such bundling was done at the component level, it was done amidst continued bug-fixing for those schools unable to purchase the revised, bundled component.

(35.) The “shopping mall” metaphor derives from Powell, Farrar, and Cohen (1985).

(36.) See Rowan (2002), on the population ecology of the school improvement industry, and on educational environments as hostile to innovators in marginal market niches.

(37.) Per personal email from Robert Slavin on 06/14/2005, school totals and year-over-year percentage changes for 2000–2005 are as follows: 1600 (+3%) for 2000/2001; 1540 (-3.75%) for 2001/2002; 1480 (-3.9%) for 2002/2003; 1400 (-5.4%) for 2003/2004; and 1300 (-5.4%) for 2004/2005.

(38.) The loss of New York City’s Chancellor’s District was particularly stinging for SFAF. One point of concern was that SFAF had devoted considerable development and training resources to the Chancellor’s District under very trying circumstances. A second point of concern was with the programs that replaced Success for All: Month by Month Phonics (from Carson-Dellosa Publishing Company in Greensboro, N.C.) and the New York City Passport program (newly developed specifically for New York City by Voyager Expanded Learning of Dallas, TX). SFAF executives saw these programs as maintaining a primary focus on low-level phonics instruction (the primary focus of Success for All’s first grade component, Reading Roots) and as paying comparatively scant attention to reading comprehension (the primary focus of Success for All’s grade 2–6 component, Reading Wings). See Traub (2003). A third point of concern was that, in the analysis of SFAF executives, the new approach lacked anything like Success for All’s base of research providing evidence of effectiveness, either in general or in New York City. SFAF researchers had produced two reports showing positive effects of Success for All, both in the Chancellor’s District and in the broader New York City public schools (Success for All Foundation, 2008c; 2008d). An evaluation published in 2004 by a team of researchers from New York University corroborated SFAF’s findings of gains in elementary reading in the Chancellor’s District (Phenix, Siegel, Zaltsman, and Fruchter, 2004). Acknowledging that their study design did not enable them to explain the specific causes of those gains, they concluded: “It is important to reiterate that the Chancellor’s District took over some of the city’s least well-resourced (p.185) schools serving the city’s poorest and academically lowest performing students. By developing, mandating and implementing a comprehensive set of organizational, curricular, instructional and personnel changes, the Chancellor’s District significantly improved the reading outcomes of the students in those schools, in three years of focused effort. This is not a small accomplishment” (p. 27).

(39.) In a report prepared for the U.S. Department of Education by the Council of Chief State School Officers, Williams, Blank, Toye, and Petermann (2007) reported the following states as serving more than SFAF’s 1480 elementary schools: California (5550); Texas (3934); Illinois (2619); New York (2521); Ohio (2208); Michigan (2139); Pennsylvania (1920); Florida (1826); and New Jersey (1520).

(40.) Borman, Hewes, Overman, and Brown (2003:164) report a high-end estimated effect size of 0.15 when considering all studies and a low-end estimated effect size of 0.09 when considering only third-party studies.

(41.) See Mirel (2001; 2002).

(42.) Berends, Bodilly, and Kirby (2002).

(43.) Berends, Bodilly, and Kirby (2002:xxx–xxxiv).

(44.) Berends, Bodilly, and Kirby (2002:xxxiv–xxxv).

(45.) Berends, Bodilly, and Kirby (2002:151–153).

(46.) See, for example: Lewin (1999); Waldrop (1992).

(47.) For example, see Brown and Eisenhardt (1998).

(48.) Van de Ven, Polley, Garud, and Venkataraman (1999). Van de Ven et al. reference chaos theory explicitly as their orienting framework.

(49.) McCorduck (2007).