Thursday, April 23, 2015

Measuring Success in 1:1 Programs

Posted by Jericho (CC License: Attribution 3.0 Unported)
Do the Results Warrant the Investment?

@joe_edtech

So, it's a blog. A little bit of self-indulgence and personal ambition is to be expected, right? In a completely selfish attempt to better understand the research I want to do in my doctoral program, the posts over the next few Thursdays will focus on getting to meaningful measurement of 1:1 programs. I will provide in-text citations for a number of pieces; if you would like a full reference list or links to any of the individual documents, please don't hesitate to contact me.

Part I: The Problem

In 1997, the United States President’s Council of Advisors on Science and Technology called on school districts throughout the country to spend the money necessary to equip American schools and classrooms with modern educational technology (Richtel, 2011). Since then, literally billions of taxpayer dollars have been invested in improving technological infrastructure, expanding Wi-Fi access, upgrading technological teaching tools, decreasing the student-to-computer ratio, and, in the latest big push in education, supporting 1:1 computing in schools by providing every student with full-time access to a mobile computing device (Bebell, Clarkson, & Burraston, 2014; Bebell & Pedulla, 2014; Stager, 1998). Without a doubt, the call to increase technology has been met. The student-to-computer ratio has fallen from a high of 125 students per shared school computer in the 1980s to today’s national ratio of three students per school-owned device, and nearly all classrooms in the US have some kind of access to the Internet (Russell, Bebell, & Higgins, 2004; Snyder & Dillow, 2012). With the introduction of powerful yet affordable mobile technologies, like Apple’s iPad and the Google-supported Chromebook, the number of 1:1 programs has accelerated (Bebell & Pedulla, 2014).

However, the problem with the goal established by the Council of Advisors on Science and Technology is that while school districts continue to pour scarce resources into their technology investments at a time when most public schools in America are facing financial crisis, there is very little evidence that the investment is paying off in terms of increased student achievement (Richtel, 2011). In 2011, The New York Times published an exposé on the Kyrene School District in Chandler, AZ, a district that invested millions in a 1:1 computing program and was highlighted as a model program by the National School Boards Association (NSBA) in 2008 (Richtel, 2011). Despite the many accolades from groups like the NSBA, since the investment in new technology, scores for Kyrene students in both English/Language Arts (ELA) and Math have not increased, and in some cases have even decreased, while scores across the state have steadily risen over the same period (Richtel, 2011). Without proof of increased student achievement, many critics question the wisdom of spending billions of additional taxpayer dollars on 1:1 computing at a time when school districts are laying off teachers and cutting salaries (Bebell, Clarkson, & Burraston, 2014; Richtel, 2011).

My Opinion: How do we resolve the problem?

A few weeks ago, I had the opportunity to attend the 2015 Consortium for School Networking (CoSN) Conference in Atlanta. While it was clear that everyone in the conference hall shared the belief that 1:1 technology in the hands of students has the power to transform education and improve student learning, it was equally clear that a great many of the school leaders present were struggling to measure student success. In one session on responding to critics, school leaders floated ideas such as delaying measurement until they had achieved full integration, measuring data points that have little to do with student achievement (attendance or referral rates), or not measuring success based on integration at all, instead holding focus groups with stakeholders to explain the need for technology-based education. This mirrors suggestions made by Bebell and Burraston (2014), who called the relationship between technology integration and student achievement a complex issue (of course it is) and encouraged schools to measure using a variety of datasets, with student achievement comprising just one small component of the measurement.

There is great merit to what Bebell and Burraston (2014) wrote, and some merit to what the attendees of the CoSN conference said, but it all misses the point. As the Director of Instructional Technology, I hear from parents and stakeholders on a daily basis, and from teachers even more frequently. Whether taxpayer or reluctant teacher, the question is always the same: how does the use of technology improve learning for our students? Bebell and Burraston (2014) are right; it is a complex issue. However, existing and comprehensive studies have helped guide our answers (Gulek & Demirtas, 2006). When we provide our teachers and students with the proper resources and support, we can facilitate learning by creating classrooms and other learning opportunities that capitalize on Constructivist learning theories and improve student achievement, even when that achievement is measured through traditional standardized tests. In order to continue to provide opportunities and access for our students, it is incumbent upon us as Instructional Technologists not to shy away from state-imposed testing measures, but to use them to prove our point.
-------------------------------------------------------------------------------------------
How have you begun measuring your technology integration programs?
