How do we, as educators, upskill in a way that empowers us to figure out how things might work best for our learners, when we have no experience of learning in that way ourselves? Stephen Bright explores this question in E-teachers collaborating: Process-based professional development for e-teaching.
Stephen notes that "Lecturers (e-teachers) who get involved with e-learning face a number of challenges. Often they are grappling with a way of teaching in which they have no experience as learners, and while feedback processes may be available for monitoring and analysing the face-to-face lecturing environment, few systems are in place in most institutions to give supportive feedback to staff about their teaching effectiveness in the online environment" (Bright, 2008, p. 75).
In a small-scale case study, Stephen worked with six teaching staff whose eLearning experience ranged from beginner to advanced. The purpose was to develop a framework and process for collegial review of teacher presence in online courses, framed in terms of professional development (PD) rather than quality assurance (QA). The study aimed to increase the quality and quantity of feedback that teachers get about their courses. Most QA processes tend to be based on a checklist, so re-framing the review as PD made it less imposing, and this was reinforced by the fact that the participants created their own checklist.
In the paper, Stephen recommends the Garrison, Anderson and Archer (2000) work as a primer and model for eLearning. He also draws on Chickering and Gamson's (1987) Seven Principles for Good Practice in Undergraduate Education, with their emphasis on engagement and active learning. Stephen Marshall's eLearning Maturity Model is also suggested as a benchmark.
Each of the seven people involved was given one principle. They then met to brainstorm and collated their ideas in a wiki. The final step was a rating process (which items are must-haves, and which are nice-to-haves?), resulting in 30 primary indicators (the must-haves) and 60 secondary indicators (the nice-to-haves). The participants discussed how eTeachers could set high expectations: through feedback, timeliness, exemplars and models, and generic feedback comments in neutral spaces, for example.
The collegial appraisal process was based around a range of roles and took about 8.5 hours of face-to-face time plus 3.5 hours contributing to the wiki. Participants spent an average of 2 hours each on self-appraisal and 4.5 hours across three review meetings.
The findings indicated that the staff who participated felt empowered rather than evaluated, and the resulting framework was made available for institutional use. The study also illustrated that you don't have to start from existing best-practice frameworks: eTeachers gain more ownership when they develop the framework themselves, and the result often ends up a good match with other benchmark models anyway.
How do you evaluate blended and online courses at your institution? Is this an approach you might like to try, or have already tried? Please leave comments below.
Reference: Bright, S. (2008). E-teachers collaborating: Process based professional development for e-teaching. In Hello! Where are you in the landscape of educational technology? Proceedings ascilite Melbourne 2008. http://www.ascilite.org.au/conferences/melbourne08/procs/bright.pdf
Image: 'Planning Your Online Coursev2' http://www.flickr.com/photos/59217476@N00/8186356402. Found on flickrcc.net.