Framing My Program Inquiry 

Chen (2005) comments that “no single evaluation strategy, approach, or method can succeed with every possible evaluation need or situation.” This implies that more than one approach is often required, so it is important to begin by comparing different approaches to determine which best fits your particular requirement: the requirement that pertains to the program you wish to dissect. So far I have examined approaches that look at process, at short- and long-term goals, and at programs through lenses ranging from the narrowly scientific to the broadly holistic. In any case, I wish to keep all of these in mind as I examine my program of choice.

Works Cited
Chen, H.-T. 2005. Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage Publications.

Program Choice

It was difficult choosing a program to evaluate. Many of the things I wished to explore (deep learning, mindfulness, and collaborative problem solving, for example) do not necessarily have prescribed programs or statistical information to analyze. After some digging into reading intervention programs, I settled on a program used by our school board, Empower™.

Empower™ was developed by Dr. Maureen W. Lovett and a team of educators and psychology professionals in the Learning Disabilities Research Program at The Hospital for Sick Children (SickKids). The program was developed for teaching children, adolescents, and adults with various levels of reading disability. It consists of different modules and focuses on skills in spelling, analyzing text, decoding words, comprehension, and vocabulary. Specifically, I wish to focus on my school board’s use of the program with intermediate and high school students. The intermediate program (Grades 6-8) was introduced in one school in the 2013/14 school year and in four more schools during the spring of 2014, expanding slowly as more teachers were trained. High School Empower™ was introduced in eight secondary schools and one alternative education location during the 2014/15 school year, and five additional teachers were trained during 2015/16.

I hope to focus specifically on my school board’s success with the program if I’m able to obtain enough quantitative and qualitative data; at the very least, I know I can access a wealth of information available online, given the program’s extensive use in Canada and the U.S.A. My motivation for selecting this program stems from my desire to become more knowledgeable about it and from my likely ability to access locally sourced information, since it is used in my board.

Evaluation questions to help guide my evaluation design include but are not limited to the following:

  • Why was the program selected for our board?
  • What does the program cost to license, and what are the personnel costs of running it in schools?
  • At what rate are teachers being trained in the program?
  • How many schools have the program available as of now?
  • How does the school measure the impact or success of the program?
  • Who can I speak with about our board’s use of the program?
  • Where can I get the quantitative information? From E-Best?
  • Will I be able to obtain qualitative information from teachers trained in using the program?
  • Is the program used with students of different demographics?
  • Is there a systematic approach to its implementation?
  • What are the drawbacks of the program?
  • What roadblocks might I encounter?
  • Is the available data biased?

Now it’s time to begin collecting more information about inputs, short-term goals, and long-term outcomes, and to develop a model!

Works Cited
http://www.hwdsb.on.ca/wp-content/uploads/2015/07/Empower-Implementation-Report.pdf
http://www.sickkids.ca/LDRP/Empower-Reading/

A More Refined Approach

As I began to research at greater length and frame my program inquiry, I realized my initial questions above weren’t specifically focused; they were a little all over the place. Such is the nature of inquiry! Start big, refine, ask more detailed questions, refine, reflect, and so on. I realized that deciding on a more specific focus and approach was the best place to start.

Lovett et al. (2008) comment, “[t]here is very little evidence, however, regarding intervention for the hundreds of thousands of students still struggling with reading when they reach high school. Is reading remediation effective into high school?” This prompted me to take a closer look at the high school reading intervention program used by my own board, Empower™ Reading: High School Program, formerly known as PHAST PACES. The next step was to settle on the most appropriate evaluation approach to use.

A process evaluation looks at how the program is attended and how it is being implemented, and asks whether it is being done well. This proved too difficult, as there were roadblocks (time, scope, the collective agreement) to obtaining the relevant information and seeing the program in action for myself. The more I researched, the more I realized that my board hasn’t been using the program very long and that implementation from its start date was gradual; therefore an outcome evaluation, which looks at long-term goals, would not be possible. An impact evaluation made the most sense: looking at the short-term goals and objectives of the program, examining the success of the strategies put into place through qualitative and quantitative data, and analyzing assessments completed before and after the program to determine its success. Possibly also a close look at changes in the attitudes and behavior of stakeholders.

Locating definitive results of overall student success attributable to the program was difficult. It was also hard to determine how results were collected and how much of a student’s improvement came as a direct result of the program. Most of the potentially useful information I located was accumulated by the creators of the program in controlled research environments. I was feeling a little directionless…

An article I read on AEA365, Anne Vo on Decision Research in Education, took a look at evaluation use in the education sector and, more specifically, reading. I felt it might help me better frame my program inquiry by identifying possible stakeholder information or conditions to consider while looking at our board’s use of Empower™. The article summarizes a study done by Cynthia Coburn and colleagues on decision-making in elementary schools and urban school districts in the State of California as they implemented new reading instruction policies.

They found that teachers “relied on their professional experiences and mental models to make choices about classroom practice” (Robinson 2015), especially if the school culture wasn’t collaborative in nature, there was little space for differences of opinion, or there was no obvious connection between policy and practice. I see this in my own experience: people often continue to do as they have always done because it is safer, unless the benefit is obvious and proven to their satisfaction. This has large ramifications for any reading intervention program, as the way a teacher implements it can have an effect on its success. That makes it challenging to evaluate for impact unless you look carefully at the inputs.

The study also determined that school and district administrators’ views, developed through experience and previously held beliefs, “had greater influence on their decision-making than actual data” and that decision-making was “contingent upon what’s organizationally and politically feasible at the time the decision needed to be made” (Robinson 2015). The board ultimately determines where to invest its money based on administrators’ decisions, and selecting reading intervention programs without data could be detrimental. It also undermines the perceived validity of the chosen program if teachers, who are on the front lines, can’t see valid proof behind the decision to use it.

Moving forward, after reading this article and with the data available to me, I’ve decided to focus specifically on the program’s merits as it has been researched and tested for high school students. In looking at the Empower™/PHAST PACES program, I am examining the implementing organization (qualifications), delivery protocols, assessment of students’ levels across different reading skills, teacher training as it affects program delivery, and overall success. Essentially: was the decision to utilize this program a valid one?

Program Theory

Sidani and Sechrest (1999) indicate that a program theory explains the conditions of the program, hypothesizes the outcomes, and specifies the necessary requirements through a series of statements. The Empower™ Reading: High School Program is an intensive reading intervention program, not intended to replace a school language program but to act as a supplement to it. It targets specific core reading skills for students with reading and language disabilities. Much research on the program has been done and continues to be done: “[E]fficacy studies with the PHAST PACES [Empower™] Program indicates the importance of teaching word identification strategies to struggling readers of any age” (Lovett et al. 2007).

The conditions necessary to carry out the program are a highly trained educator delivering targeted reading instruction in decoding and comprehension to a small group of high school students with learning disabilities, over 60 to 70 hours.

The program “addresses multiple sources of dysfluent reading and impaired reading comprehension, and focuses on the decoding, reading rate, and comprehension problems of adolescent students with RD” (Lovett et al. 2008), and the projected outcomes are improvements in students’ abilities in these areas. The ultimate goal is “to allow every student with reading problems an opportunity to achieve an excellent standard of reading skill and access to a fully literate future” (Lovett et al. 2008).

Requirements for the program’s success revolve around two key issues. First, the capability of the teacher delivering instruction: being “fluent in their use and orchestration of a repertoire of effective and adaptive instructional strategies” (Lovett et al. 2008). Second, that the student attempts and engages in the program as specified, completing the necessary time in it.

To further help outline, clarify, and specify the program, I’ve created an Action & Change Model as a graphic, with accompanying descriptions for each section.

Evaluation Approach

My evaluation approach is summative: it looks at the effectiveness of the program and is intended to inform whether it should continue as the board’s purchased reading intervention program for high school students. This type of evaluation is designed for those who make decisions about the program’s future. As I mentioned previously, it makes the most sense to conduct an impact evaluation that looks at the short-term success of the program. It is possible to examine reading skill acquisition through pre- and post-assessments, but it would be difficult to ascertain whether the program led to literate adulthood, increased graduation rates, or greater employment or economic standing. I will look at the objectives of the program as they relate to improving student reading and then look for discrepancies between those objectives and actual performance, that is, the acquisition of those skills (a simple sketch of this pre/post comparison follows below).

I intend to use a variety of means to assess the implementation of the program: published materials that contain surveys, ratings, observations, and interviews (Funnell, 2000; Lipsey, 2000) concerning the use of PHAST PACES/High School Empower™ in Canada and the U.S.A., conducted by independent organizations and by the Learning Disabilities Research Program at The Hospital for Sick Children. After examining the evidence, it will be important to develop conclusions about the validity of the program and to justify them. Finally, it will be important to reconnect with stakeholders to determine how my evaluation will be used, whether that be to inform best practice in selecting reading intervention programs, to continue the board’s use of the program, or to look at teacher professional development to support it.
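To make the objectives-versus-performance comparison concrete, here is a minimal Python sketch of a pre/post discrepancy check. The score lists and the 10-point gain target are hypothetical placeholders of my own, not actual Empower™ data or objectives.

```python
# Minimal sketch of a pre/post discrepancy check for an impact evaluation.
# All scores and the gain target below are hypothetical, for illustration only.
from statistics import mean, stdev

pre_scores = [62, 55, 70, 58, 64, 49, 61, 57]    # hypothetical pre-assessment scores
post_scores = [71, 63, 74, 66, 70, 60, 69, 62]   # hypothetical post-assessment scores

# Per-student gains, paired by position.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_gain = mean(gains)

# Compare the observed mean gain against a stated program objective
# (the 10-point target is an assumed threshold, not the program's own).
TARGET_GAIN = 10
discrepancy = mean_gain - TARGET_GAIN

print(f"Mean gain: {mean_gain:.1f} points (sd = {stdev(gains):.1f})")
print(f"Discrepancy from objective: {discrepancy:+.1f} points")
```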

Works Cited
Best Evidence Encyclopedia. PHAST Reading. http://www.bestevidence.org/overviews/P/PHAST-Reading.htm
Bloom, D. E., & Canning, D. 2000. The health and wealth of nations. Science, 287, 1207-1208.
Chen, H.-T. 2005. Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage Publications.
Funnell, S. C. 2000. Developing and using a program theory matrix for program evaluation and performance monitoring. New Directions for Evaluation, 87, 91-101.
Lipsey, M. W. 2000. Evaluation methods for social intervention. Annual Review of Psychology, 51, 345-375.
Lovett, M. W., Lacerenza, L., Kunka, M., & De Palma, M. 2007. PHAST PACES: Remediation for struggling readers in high school and beyond. Manuscript in preparation.
Lovett, M. W., De Palma, M., Frijters, J. C., Steinbach, K. A., Temple, M., Benson, N. J., & Lacerenza, L. 2008. Interventions for reading difficulties: A comparison of response to intervention by ELL and EFL struggling readers. Journal of Learning Disabilities, 41(4), 333-352.
Lovett, M. W., Lacerenza, L., De Palma, M., Steinbach, K. A., & Frijters, J. C. 2008. Preparing teachers to remediate reading disabilities in high school: What is needed for effective professional development? Teaching and Teacher Education, 24(4), 1083-1097.
Perras, C. 2016. Empower™ Reading: Taking a scientific approach to reading. https://www.ldatschool.ca/literacy/empower-reading-taking-a-scientific-approch-to-reading/
Robinson, S. 2015. Anne Vo on decision research in education. AEA365. http://aea365.org/blog/roe-tig-week-anne-vo-on-decision-research-in-education/
Sidani, S., & Sechrest, L. 1999. Putting program theory into operation. American Journal of Evaluation, 20(2), 227-238.
The Hospital for Sick Children. Empower™ Reading: Intervention studies. http://www.sickkids.ca/empower/Program-effectiveness/intervention-studies/index.html

Data Collection & Analysis

Qualitative and quantitative data will be collected from different phases of the program, located in a variety of documents ranging from scholarly articles to board-published materials.

Qualitative information will be drawn from written feedback forms completed during the initial implementation, which include teacher impressions and thoughts about successes, barriers, and needed supports for the program. Feedback forms would again be distributed to teachers at the end of the first year of the program with similar questions, but would also include completion information. Students, too, would complete a feedback form answering questions like: Did they find it helpful? Would they take it again? What did an average day or lesson look like? How could the program be improved? When examining the responses, I would attempt to ascertain whether any patterns were noticeable and what they might mean for the program overall. Were there any noticeable deviations from a pattern? Is this a reflection of attitudes? If a large section of information were absent, would I then have to refine my questions? In analyzing the data, I think it would be important to compare and contrast the two groups, students and teachers, and look at which notions are shared by the larger group (a simple tallying sketch follows below). It would also be interesting to study how participants’ perceptions of how well the program works line up with the data collected from the battery of skills tests.
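As a rough illustration of that pattern-finding step, here is a minimal Python sketch that tallies coded feedback themes and compares the two groups. The theme labels and coded responses are invented for illustration; they are not drawn from any actual feedback forms.

```python
# Minimal sketch of tallying coded feedback themes to surface shared patterns.
# The theme labels and coded responses below are invented, for illustration only.
from collections import Counter

teacher_themes = ["training", "pacing", "scheduling", "training", "materials"]
student_themes = ["pacing", "confidence", "pacing", "materials", "confidence"]

teacher_counts = Counter(teacher_themes)
student_counts = Counter(student_themes)

# Themes raised by both groups are candidates for program-wide patterns;
# themes unique to one group may signal diverging perceptions or attitudes.
shared = set(teacher_counts) & set(student_counts)
print("Shared themes:", sorted(shared))
print("Teacher-only themes:", sorted(set(teacher_counts) - shared))
print("Student-only themes:", sorted(set(student_counts) - shared))
print("Most common overall:", (teacher_counts + student_counts).most_common(3))
```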

Quantitative information would be generated by a series of the program’s skills assessments, administered at the beginning of the semester or testing period and then again at two more intervals. An example of the board’s evaluation plan is located in the document Piloting the Empower™ Reading Program in HWDSB’s Secondary Schools, 2014-15 School Year, completed by the board’s research department with Geeta Malhotra, Principal of Student Success, and Trisha Woehrle, Teacher Research Consultant.

[Image: excerpt from the board’s evaluation plan, Piloting the Empower™ Reading Program in HWDSB’s Secondary Schools, 2014-15 School Year]

In determining the effectiveness of the program at a local level, it would make sense to evaluate at regular intervals, collecting data as the board has. In this particular example, I was unable to see the results, as they cannot be viewed until the superintendents and trustees have allowed them to be made public. It is important to note in this case that “[s]tudent achievement data was submitted from six participating schools on a total of 38 students (three schools did not submit their student data). Not all students had data collected and recorded for the three time points.”
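Analyzing gains across the three time points would first require restricting the records to students with complete data, given the gaps the board notes. Here is a minimal sketch, assuming a hypothetical long-format table; the column names and values are my own, not the board’s actual schema.

```python
# Minimal sketch of restricting analysis to students with data at all three
# time points. The table below is hypothetical, for illustration only.
import pandas as pd

records = pd.DataFrame({
    "student": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "time_point": [1, 2, 3, 1, 2, 1, 2, 3],
    "score": [48, 55, 60, 50, 58, 45, 52, 59],
})

# Count how many distinct time points each student has, then keep only
# students observed at all three (student "B" is dropped here).
counts = records.groupby("student")["time_point"].nunique()
complete_ids = counts[counts == 3].index
complete_cases = records[records["student"].isin(complete_ids)]

print(complete_cases)
```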

The following table presents a quantitative analysis of the gains made by students using the reading intervention program. More specifically, it compares teachers who have just begun using the program with those who are more experienced, to see whether teacher experience also affects student outcomes. This was, and is, a significant consideration for me as I pursue stakeholder participation as well.

[Table image: student gains compared across teachers new to the program and more experienced teachers, from Lovett et al. 2008]

Lovett, M.W., Lacerenza, L., De Palma, M., Steinbach, K.A., & Frijters, J.C. 2008. Preparing teachers to remediate reading disabilities in high school: What is needed for effective professional development? Teaching and Teacher Education, 24(4), 1083-1097.

The data indicate that greater gains were in fact made by students whose teachers were more experienced with the program, though the report notes that in some areas the difference is not significant. I have collected several studies from scholarly articles that present summaries in a similar format so that I can analyze the data. In another article, Evaluating the Efficacy of Remediation for Struggling Readers in High School, Lovett et al. indicate that after “60 to 70 hours of PHAST PACES instruction, struggling readers demonstrated significant gains on standardized tests of word attack, word reading, and passage comprehension and on experimental measures of letter–sound knowledge and multisyllabic word identification relative to control students. An average effect size of .68 was revealed across these outcome measures.” A year later, however, growth had decelerated after the intervention, except in passage comprehension, which continued to grow. It is important to note that many of the articles on this research are published by the developers of the program.

Lovett, M. W., Lacerenza, L., De Palma, M., & Frijters, J. C. 2011. Evaluating the efficacy of remediation for struggling readers in high school. Journal of Learning Disabilities, 45(2), 151-169.
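To ground what a figure like the reported .68 means, here is a minimal Python sketch of a standard Cohen’s d calculation comparing intervention gains with control gains. The score lists are hypothetical, and the study’s exact effect-size formula may differ.

```python
# Minimal sketch of a pooled-SD effect size (Cohen's d) between two groups.
# The gain scores below are hypothetical, for illustration only.
from statistics import mean, stdev

intervention_gains = [12, 9, 15, 11, 8, 14, 10, 13]  # hypothetical gains
control_gains = [4, 6, 3, 7, 5, 2, 6, 4]             # hypothetical gains

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# By convention, d around 0.2 is small, 0.5 medium, and 0.8 large, which
# puts the study's reported average of .68 in the medium-to-large range.
print(f"Effect size (Cohen's d): {cohens_d(intervention_gains, control_gains):.2f}")
```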

Enhancing Evaluation Use

“Evaluation use (or evaluation utilization) refers to the way in which an evaluation and information from the evaluation impacts the program that is being evaluated” (Alkin & Taut 2003). The reason for doing an evaluation is so that it can be used in some way: to justify, change, or enlighten.

“[P]articipation in evaluation gives stakeholders confidence in their ability to use research procedures, confidence in the quality of the information that is generated by these procedures, and a sense of ownership in the evaluation results and their application” (Shulha & Cousins 1997). Participatory approaches to evaluation “no doubt increase the likelihood that findings will be used” (Weiss 1998). It makes sense that involving stakeholders increases the likelihood that findings will be used: they feel a part of the evaluation rather than removed from it, they have a voice, and they develop a sense of ownership. The information will be familiar and in context, and some of it will be assimilated in smaller chunks, allowing for better processing and understanding by stakeholders.

This is why I believe it is important to follow the school board’s approach of including qualitative feedback from both teachers and students. Buy-in from those two groups is critical to the program’s success. They are also best able to provide information on the changes needed for the program to succeed in a practical environment, outside a research setting that may have reduced student group sizes and controls that minimize disruption.

The use of the evaluation, to determine the effectiveness of the program and whether it should be expanded to more schools or dropped in favour of a different reading strategy, lies in the hands of teachers. They are the ones who deliver the program and are responsible for the students who engage in it. Their participation is essential.

My ability to utilize my evaluation for change requires that I am able to engage effectively in discourse with those who have decision-making power. I must continue to build their confidence in my abilities and insights. Over the last few years, I have volunteered for many initiatives and committees. The board already has an evaluation and research body but often seeks out the opinions of its teaching staff (as noted above). I am actively pursuing a student success or instructional coaching position, which would place me where I could influence fellow teachers in advocating for the program or suggesting alternatives. The dissemination of my findings will begin with this blog, followed by a presentation that can be shared first with the research department, who have been supportive in my venture to gain their thoughts and insights, and second with my superintendent, upon a request for an interview. After that, it would be his decision what is done.

The school board’s responsibility to its students and its ongoing transparency to the public mean that the results of its internal review will become publicly available on its site once all executive members have had the chance to review them thoroughly. That information (and perhaps access to my own independent evaluation) may elicit a public response one way or the other.

Works Cited
Alkin, M. C., & Taut, S. 2003. Unbundling evaluation use. Studies in Educational Evaluation, 29, 1-12.
Shulha, L., & Cousins, B. 1997. Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.
Weiss, C. H. 1998. Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21-33.

Standards of Practice

It is important when conducting a program evaluation to commit to specific standards of practice. Drawing on the Joint Committee on Standards for Educational Evaluation guidelines, with additional considerations specific to my selected program, I have specified the following obligations:

Utility 

Utility is ensured through the use of stakeholder feedback, including stakeholders’ voices in the process of evaluating the merits of the program. Teachers complete an implementation check-in and an additional feedback survey, and students are interviewed and asked questions about the program. Perspectives, procedures, and rationale are clearly stated, and any persons involved in or affected by the evaluation are identified.

Feasibility 

Evaluation practices draw on pre-existing student assessments (the Grade 10 OSSLT) and on the Empower™ assessments students already receive through the program, which helps to minimize disruption; they are conducted by teachers and students without additional cost.

Propriety 

All evaluations are conducted under the watchful eye of the school board and of teachers who are held to standards set out by the Ontario College of Teachers. E-Best, the school board’s research department, ensures proper recording of the program’s strengths and weaknesses and that findings are disclosed and made accessible after superintendents and trustees have reviewed the results. Scholarly article results undergo scrutiny by those in the field of study.

Accuracy 

The Empower™/PHAST PACES program has been described and documented clearly using a variety of sources, both internal and external to the program. The context within the school board has been described, as it represents a pilot program within eight high schools. Information-gathering methods are described, and procedures guard against personal bias on the part of the evaluator.

The Joint Committee on Standards for Educational Evaluation. The Program Evaluation Standards, 2nd edition. James R. Sanders, Chair (Ed.). Thousand Oaks, CA: Sage Publications, pp. 23-24, 63, 81-82, 125-126. (See http://www.wmich.edu/evalctr/jc/.)
