

Saturday, May 21, 2005

Educational Technology Assessment - Part I

This post is a bit too "research-based" for my tastes, but as I summarized it for my classmates, it was a valuable exercise...

Teaching Assessment: Describe briefly how you assess your teaching performance in the classroom (or any instruction you give as part of your job). Are you satisfied with this method? What are some of the advantages/disadvantages of the method(s) you currently use?

I am currently teaching professional development courses in educational technology at the Orange County Department of Education, and have just completed the process of assessing and planning the summer schedule, so I will consider this process from start to finish in answering this prompt.

In my formal and informal assessments, I make an effort to use both qualitative and quantitative methods. Oliver (2000) supports this philosophy.

"On the one hand, quantitative methods claim to be objective and to support generalisable conclusions. On the other, qualitative methods lay claim to flexibility, sensitivity and meaningful conclusions about specific problems. Quantitative evaluators challenged their colleagues on the ground of reliability, sample validity and subjectivity, whilst qualitative practitioners responded in kind with challenges concerning relevance, reductionism and the neglect of alternative world views." (Oliver, 2000)


In reading his paper, I also discovered that a "new philosophy has emerged" that seems to mirror my own fierce focus on pragmatism.

"A new philosophy has emerged that eschews firm commitments to any one paradigm in favour of a focus on pragmatism. Rather than having a theoretical underpinning of its own, it involves a more post-modern view that acknowledges that different underpinnings exist, and adopts each when required by the context and audience." (Oliver, 2000)


Though it may not be appropriate for the academic work we will do for Walden, the philosophy Oliver elaborates validates many of my decision-making priorities as a practitioner.

"Central to this view is the idea of evaluation as a means to an end, rather than an end in itself. Methodological concerns about validity, reliability and so on are considered secondary to whether or not the process helps people to do things. Patton provides various examples of real evaluations that have been perfectly executed, are well documented, but have sat unread on shelves once completed. In contrast, he also illustrates how “quick and dirty” informal methods have provided people with the information they need to take crucial decisions that affect the future of major social programmes." (Oliver 2000)


Most importantly, he describes such a pragmatic practice as requiring "the creation of a culture of reflective practice similar to that implied by action research, and has led to research into strategies for efficiently communicating and building knowledge" (Torres, Preskill, & Piontek, 1996, as cited in Oliver, 2000).

In implementing this philosophy, I begin with what Scanlon et al. (2000) would consider the "context" of the evaluation. In order to evaluate the use of educational technology, "we need to know about its aims and the context of its use" (Scanlon et al., 2000). Ash (2000) also suggests that "evaluation must be situation and context aware."

In order to understand the context of my evaluations, I first performed a broad needs assessment via focus groups (such as the quarterly district technology leaders meeting) and a survey (using surveymonkey.com and a listserv) to set the goals for the professional development schedule. I use course descriptions developed in partnership with the instructors and others in my department to further determine the goals of individual courses. Finally, on the day of a course (and sometimes before the first day, via email) I always ask the participants to introduce themselves, explain where they work, and say what they hope to get out of the class. This helps me to tailor that specific session to the individuals in the room. (I also ask all of the other instructors to do the same.)

During a class I monitor what Scanlon et al. (2000) might call "interactions," because "observing students and obtaining process data helps us to understand why and how some element works in addition to whether it works or not" (Scanlon et al., 2000). I often check for understanding, and I always include "interactive modes of instruction" (NSBA, n.d.).

Due to my initial and ongoing assessments, following a course I am able to focus on what Scanlon et al. (2000) might call the "Outcomes" of a course. "Being able to attribute learning outcomes" to my course can be "very difficult... [so] it is important to try to assess both cognitive and affective learning outcomes e.g. changes in perceptions and attitudes" (Scanlon et al., 2000). I use formal evaluations, which include both Likert-scale questions and open-ended questions. For some special events, such as the Assistive Technology Institute - which we put on for the first time this spring - I follow up the initial evaluation of the session with an additional online survey a week later.

The real test of my success, though, is an authentic one... it is whether or not the teachers and administrators return to their sites and apply what they have learned. A dramatic example of this sort of authentic evaluation came following the blogging for teachers classes I ran over the past two months. After the first few weeks, it was clear that teachers were not using their blogs (I had subscribed to all of them using my feed reader). Bringing this up in subsequent training sessions led to productive discussions of the barriers, and eventually (primarily, I believe, simply because we followed up) they began using them. I am now often greeted with new posts when I return to my reader.
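
For what it's worth, that feed-reader check is easy to automate. Here is a minimal sketch in Python of the kind of check I mean, assuming the feedparser library and some hypothetical blog addresses standing in for the real class roster:

    import time
    import feedparser

    # Hypothetical stand-ins for the participants' blog feeds.
    TEACHER_BLOGS = [
        "http://example-teacher-one.blogspot.com/atom.xml",
        "http://example-teacher-two.blogspot.com/atom.xml",
    ]

    ONE_WEEK = 7 * 24 * 60 * 60  # seconds

    def has_recent_post(feed_url, window=ONE_WEEK):
        """Return True if the feed contains an entry newer than `window` seconds."""
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            stamp = entry.get("published_parsed") or entry.get("updated_parsed")
            if stamp and (time.time() - time.mktime(stamp)) < window:
                return True
        return False

    for url in TEACHER_BLOGS:
        status = "active" if has_recent_post(url) else "quiet"
        print(status, "-", url)

A quiet feed is a cue to raise the topic at the next session, not a verdict on the teacher.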

Ultimately, "good assessment enhances instruction" (McMillan, 2000), and I believe that such authentic assessments are the only way for me to know the true impact of my programs. I hope to be able to include more such authentic follow-up assessments in the coming months.

Because the county programs operate largely on a cost-recovery model, by which districts pay for services rendered, cost is also a factor in my assessment of the professional development programs I manage. “An organisation is cost-effective if its outputs are relevant to the needs and demands of the clients and cost less than the outputs of other institutions that meet these criteria” (Rumble, 1997, as cited in Ash, 2000). To determine the cost effectiveness of a program...

"evaluators need to:
  • listen and be aware of these aspects and others;
  • focus the evaluation towards the needs of the stakeholders involved; and
  • continue this process of communication and discussion, possibly refocusing and adapting to change, throughout the study (what Patton refers to as “active-reactive-adaptive” evaluators)." (Ash, 2000)


Unfortunately, "the area is made complex by a number of issues that remain open for debate" (Oliver, 2000).

"These include:
  • The meaning of efficiency. (Should it be measured in terms of educational improvement, or cost per day per participant, for example?)
  • The identification of hidden costs. (Insurance, travel costs, etc.)
  • The relationship between costs and budgets. (Are costs lowered, or simply met by a different group of stakeholders, such as the students?)
  • Intangible costs and benefits. (Including issues of quality, innovation and expertise gained.)
  • Opportunity costs. (What alternatives could have been implemented? Moreover, if it is problematic costing real scenarios, can costs be identified for hypothetical scenarios and be used meaningfully as the basis for comparison?)
  • The use of ‘hours’ as currency. (Are hours all worth the same amount? If salary is used to cost time, how much is student time worth?)
  • Whether something as complex as educational innovation can be meaningfully compared on the basis of single figures at the bottom of a balance sheet." (Oliver & Conole, 1998b, as cited in Oliver, 2000)
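
To make the "cost per day per participant" measure above concrete, here is a toy calculation in the same spirit (all figures are hypothetical, and the inclusion of hidden costs such as facilities and travel is my own assumption about what to count):

    # Hypothetical two-day workshop (illustrative numbers only).
    direct_costs = 1200.00   # instructor fee and materials
    hidden_costs = 300.00    # facilities, insurance, staff travel
    participants = 25
    days = 2

    cost = (direct_costs + hidden_costs) / (participants * days)
    print("Cost per participant per day: $%.2f" % cost)  # $30.00

Of course, as the list above notes, a single figure like this hides as much as it reveals.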


I work in a strange hybrid of a business and a public institution, which further complicates this issue: sometimes cost effectiveness matters less than whether a service is valuable or, for political reasons, is perceived as valuable.

This has been a valuable reflection for me. I hope the large blockquotes did not make it too difficult to read, and I look forward to any of your comments.

-Mark


References

Ash, C. (2000). Towards a new cost-aware evaluation framework. Educational Technology & Society, 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/ash.html

McMillan, J. H. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8). Available http://PAREonline.net/getvn.asp?v=7&n=8

NSBA. (n.d.). Authentic learning. Education Leadership Toolkit: Change and Technology in America's Schools. Retrieved May 20, 2005, from http://www.nsba.org/sbot/toolkit/

Oliver, M. (2000). An introduction to the evaluation of learning technology. Educational Technology & Society, 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/intro.html

Scanlon, A. J., Barnard, J., Thompson, J., & Calder, J. (2000). Evaluating information and communication technologies for learning. Educational Technology & Society, 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/scanlon.html
