

Saturday, May 21, 2005

Educational Technology Assessment - Part II

My response to the second prompt of the week in "Management of Technology for Education"...

Congratulations! You have been instrumental in implementing technology in your school. Thanks to your vision, technical expertise, successfully funded proposal, rapport with senior administrators, technology training and staff development efforts, your school has been using technology in almost all subject areas for the past year. It is now time to evaluate your efforts.

a) Describe the methods and procedures you will undertake to evaluate assessment of technology in teaching. How will you go about collecting data... Questionnaires, personal interviews, observations, small group meetings?? How will you determine if technology has made a difference?

b) Provide detailed information on your program assessment plan. Include sample questions you will incorporate in collecting data from participants in your study.

As always, support your comments with research, and also comment on other students' posts.


*******

For the purposes of answering this prompt, I could respond as if I were once again running the educational technology program at a school site. However, I think this will be an even more meaningful exercise for me if I treat it as an opportunity to develop the advice I would give a site technology coordinator who asked how to evaluate his or her program.

This topic overlaps quite a bit with the previous one, so I will sometimes refer back to my previous post.

In fact, to begin with, as I stated earlier this week, the evaluation of a program depends heavily on the initial needs assessment for the program. Why was the program implemented? Did it do what it was meant to do? Did it meet the identified need(s)?

An effective evaluation also depends on good evaluation design. Oliver (2000) suggests six formal steps to evaluation design:

"1. Identification of stakeholders
2. Selection and refinement of evaluation question(s), based on the stakeholder analysis
3. Selection of an evaluation methodology
4. Selection of data capture techniques
5. Selection of data analysis techniques
6. Choice of presentation format" (Oliver, 2000)


I also found his articulation of the three elements involved in evaluating an educational technology helpful as an initial focus.

"• A technology
• An activity for which it is used
• The educational outcome of the activity" (Oliver, 2000)

This focus and the suggested steps above are good broad starting points and a formal framework, but there are many "challenges presented by the evaluative contexts... [and] a large number of possible contextual variables, operating independently and interactively at several levels" (Anderson et al., 2000) within a school's educational technology program. These challenges are made all the more complicated by the various implementations that will occur depending on the subject, department, or faculty member (Anderson et al., 2000).

Two methods for addressing these challenges are to form a multidisciplinary research team and to implement a multi-method research design. Anderson et al. (2000) describe the benefits of these evaluation methods:

"There were two fundamental (and familiar) aspects of our approach to evaluation which we felt – both in prospect and in retrospect – put us in a good position to tackle the general challenges outlined above. The first was to have a multi-disciplinary research team, whose members would bring to the investigation not only knowledge about educational technology, evaluation, and learning and teaching in higher education, but also sets of research skills and approaches that were distinctive as well as complementary. The second broad strategy was to have a multi-method research design, which involved capitalising on documentary, statistical and bibliographic materials already in the public domain, reviewing records held and reports produced by the projects themselves, as well as devising our own survey questionnaires and interview protocols in order to elicit new information." (Anderson et al, 2000)


The authors also suggest a variety of slightly more specific evaluation strategies:

"
  • tapping into a range of sources of information
  • gaining different perspectives on innovation
  • tailoring enquiry to match vantage points
  • securing representative ranges of opinion
  • coping with changes over time
  • setting developments in context
  • dealing with audience requirements" (Anderson et al, 2000)


I would specifically recommend some techniques that have worked for me:

    1. Observations - Walk around the campus. How are computers being used in the classroom? How are they being used in the library or computer labs? What assignments are students completing with their computers, and what products are they creating? How are these things different from the way students were learning and creating in the past? Much can be learned (and many a dose of reality swallowed) when an educational technology coordinator gets into the field on a random day to see where the rubber meets the road.

    2. Focus groups and/or informal interviews - A survey or other more formal evaluation instrument can be severely biased if the evaluator simply asks about the things he or she assumes need to be asked. Conducting observations can go a long way toward helping an evaluator understand what needs to be formally evaluated, but the evaluator can only observe a limited subset of the program's entire implementation, and the mere presence of the observer can alter what is observed. By including others (ideally a multi-disciplinary group, see above), the evaluator can gain valuable insight and, in turn, create a more effective survey. The following are examples of questions from a focus group discussion I led at a district technology leaders meeting at which representatives from each district were present.

    In what ways do districts currently use OCDE Educational Technology training facilities and trainers?

    In what ways could our training facilities and trainers be used to better meet district technology staff development needs?

    Specifically, what classes would you as district technology leaders like to see offered in the county training labs?

    Specifically, what classes would you as district technology leaders like to see offered through our "custom training" program?

    In what ways can our training programs best effect positive changes in student learning?


    3. Conduct surveys - Online surveys in particular are now easy to administer (particularly when learners are sitting in front of computers anyway) and to analyze (most services will do simple analysis for you)... though their design should not be approached any less carefully. I use Survey Monkey as an evaluation following each professional development event that I manage. There is no reason a site tech coordinator or classroom teacher couldn't do the same thing. Samples of the questions we use at the OCDE can be found at this sample survey.
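    Incidentally, the simple analysis most survey services provide can also be reproduced offline by anyone comfortable with a little scripting. The snippet below is only a minimal sketch: it assumes a hypothetical CSV export named workshop_feedback.csv in which each row is one respondent and each column is a Likert-scale item scored 1-5, and it is not tied to Survey Monkey's actual export format.

    import csv
    from statistics import mean

    # Hypothetical export: one row per respondent, one column per Likert item (1-5).
    with open("workshop_feedback.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Average each item's responses, skipping any blanks respondents left.
    for item in (rows[0].keys() if rows else []):
        scores = [int(r[item]) for r in rows if r[item].strip()]
        if scores:
            print(f"{item}: mean {mean(scores):.2f} (n={len(scores)})")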

    4. Focus groups again - The evaluator's own analysis of the data can be single-minded and easily biased; it is not nearly as valuable as organizing, or re-convening, a multi-disciplinary team to review the results.

    In California, any technology coordinator can also use the CTAP2 iAssessment survey for teachers (and for students) to evaluate technology use on their campus.

    Of course, when it comes to the effectiveness of any instructional program, one can look to state test scores as well, though the evaluator may be more concerned with learning outcomes other than those a standardized state test measures. In this respect, it might be best to develop an evaluation strategy for tracking authentic outcomes. In many ways, for this to be effective, it might require continued communication with students for years after they leave a school.

    Finally, Shaw and Corazzi (2000) share a set of nine "typical purposes of evaluation" that might be kept in mind when designing an evaluation process specific to a given school site.
    "1. measurement of achievement of objectives of a programme as a whole
    2. judging the effectiveness of course or materials
    3. finding out what the inputs into a programme were – number of staff, number and content of contact hours, time spent by the learner, and so on
    4. ‘mapping’ the perceptions of different participants – learners, tutors, trainers, managers, etc
    5. exploring the comparative effectiveness of different ways of providing the same service
    6. finding out any unintended effects of a programme, whether on learner, clients or open learning staff
    7. regular feedback on progress towards meeting programme goals
    8. finding out the kinds of help learners need at different stages
    9. exploring the factors which appear to affect the outcomes of a programme or service" (Thorpe, 1993, p. 7, as cited in Shaw and Corazzi, 2000)


    Again, I suspect this may be too lengthy to be terribly valuable to others in the class, but it has been a valuable exercise for me.

    -Mark


    References

    Anderson, C., Day, K., Haywood, J., Land, R., and Macleod, H. (2000). Mapping the territory: issues in evaluating large-scale learning technology initiatives. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/anderson.html

    Oliver, M. (2000). An introduction to the evaluation of technology. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/intro.html

    Shaw, M. and Corazzi, S. (2000). Avoiding holes in holistic evaluation. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/shaw.html
