Precis: Online Case-based Discussions (Ertmer & Koehler, 2014)


This is an installment in a series of summaries of journal articles that I have been reading.

Ertmer, P. A., & Koehler, A. A. (2014). Online case-based discussions: Examining coverage of the afforded problem space. Educational Technology Research and Development, 1–20.


CC Image courtesy of Markus Spiske / http://www.temporausch.com

Case-based and problem-based learning "has demonstrated multiple advantages…over traditional method of instruction" (Ertmer & Koehler, 2014), including increased student motivation, deeper learning of content, and application of skills. While the affordances of case-based learning have been well researched, little has been explored about what happens in the learning process during a case-based discussion. The purpose of this qualitative study is to explore how students address the concepts in a case study and how the instructor's facilitation affects the discussion.

The sample consisted of 16 graduate students and 2 instructors. The students participated in 3 instructor-led case discussions and 3 student-led case discussions. The authors focused on the posts from the third case discussion, analyzing 167 student posts and 30 instructor posts. They coded the posts using pre-identified categories and sub-categories that represented the problem space of the individual case. The problem space consisted of the key aspects of, and appropriate solutions for, the case.

In their analysis of the coding, the authors found that 86% of the pre-identified problem space was covered during the discussion. In looking at the instructor posts, they found that most were crafted to support students in interpreting the case and developing appropriate solutions. The authors concluded that instructor prompts were important for starting the discussion in the right direction and for deepening it as it unfolded. They offer two recommendations: craft strong starter prompts, and map out the problem space of the case to help guide the instructor's facilitation.

As the authors note, the study involves a relatively small data set, which is a limitation. Another limitation is the authors' reliance on coding frequency to evaluate coverage of the problem space, which presumes that frequency of occurrence can stand in for the quality or thoroughness of topic coverage.

While this study identified the value of mapping out the problem space and its categories as a tool to facilitate discussion, the data could have been analyzed in other ways to shed more light on case-based discussions. For example, social network analysis (SNA) could have been conducted to identify patterns within the interaction; Xie, Yu, and Bradshaw (2014) used this approach to identify patterns of moderation in asynchronous online discussions. The posts could also have been coded to identify how peers were facilitating the discussion. For example, when studying student interaction in online instruction, Xie, Ke, and Sharma (2010) coded for information sharing, egocentric elaboration, and allocentric elaboration, among other categories.
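As a rough illustration of what an SNA approach could look like (my own sketch, not part of either study), reply relationships between discussion participants could be loaded into a graph library such as NetworkX to see who anchors the interaction:

```python
# Hypothetical sketch: the reply data below is invented for illustration.
# A real analysis would parse who-replied-to-whom from the discussion forum.
import networkx as nx

# Each tuple is (author_of_post, author_being_replied_to)
replies = [
    ("student_a", "instructor"),
    ("student_b", "student_a"),
    ("student_c", "student_a"),
    ("instructor", "student_b"),
    ("student_a", "student_c"),
]

graph = nx.DiGraph()
graph.add_edges_from(replies)

# Degree centrality gives a simple view of who is central to the discussion.
centrality = nx.degree_centrality(graph)
for participant, score in sorted(centrality.items(), key=lambda item: -item[1]):
    print(f"{participant}: {score:.2f}")
```

Even a simple centrality measure like this could show whether a discussion revolves around the instructor or around a few peer facilitators.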

Future research should look at case-based discussions in other disciplines (e.g., business, health) in which they are frequently used. While this study identified the importance of instructor facilitation, future studies should explore the role of peer facilitation and interaction. A final suggestion for future research is to examine the quality of case-based discussions.

References

Ertmer, P. A., & Koehler, A. A. (2014). Online case-based discussions: Examining coverage of the afforded problem space. Educational Technology Research and Development, 1–20.

Xie, K., Yu, C., & Bradshaw, A. C. (2014). Impacts of role assignment and participation in asynchronous discussions in college-level online classes. The Internet and Higher Education, 20, 10–19. doi:10.1016/j.iheduc.2013.09.003

Xie, Y., Ke, F., & Sharma, P. (2010). The effects of peer-interaction styles in team blogs on students’ cognitive thinking and blog participation. Journal of Educational Computing Research, 42(4), 459–479. doi:10.2190/EC.42.4.f

Precis: Learning Analytics in CSCL (Van Leeuwen, Janssen, Erkens, & Brekelmans, 2014)


This is an installment in a series of summaries of journal articles that I have been reading.

Van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2014). Supporting teachers in guiding collaborating students: Effects of learning analytics in CSCL. Computers & Education, 79, 28–39.

Computer-supported collaborative learning (CSCL) generates a great deal of data through student interaction. Instructors must sift through and make sense of these data before they can give appropriate guidance. Previous research has looked at types of student interaction and instructor guidance, but there is a gap in the literature concerning how instructor guidance may be influenced by different presentations of collaboration data. The purpose of this study is to examine the effect of participation analysis tools on instructor diagnosis and guidance.

The sample consisted of 28 high school teachers and student teachers. Fourteen teachers were randomly assigned to the control group (no supporting tools) and 14 to the experimental group (supporting tools). Each teacher was given four vignettes consisting of authentic collaborative situations. Each vignette contained several collaborative groups, and each group varied in collaborative and cognitive aspects. Three types of data were collected: instructor actions during the vignettes, instructor interventions (messages sent to students), and an overall diagnosis of participation and discussion for each group.

The results indicated that the frequency, focus, and receivers of the interventions differed between the control and experimental groups. The authors concluded that the statistical supporting tool helped instructors identify problems with participation. Additionally, they found that instructors using the support tools judged non-problematic groups less harshly. One result surprised the authors: the lack of difference between the experimental and control groups in their focus on collaborative aspects.
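To make concrete the kind of statistic a supporting tool like this might surface, here is a minimal sketch (my own illustration, not the tool used in the study) that flags a group when one student dominates the message count:

```python
# Hypothetical sketch: the message log and the 60% threshold are invented for
# illustration and are not taken from the study or its supporting tool.
from collections import Counter

def participation_shares(messages):
    """Return each author's share of the group's messages."""
    counts = Counter(author for author, _ in messages)
    total = sum(counts.values())
    return {author: count / total for author, count in counts.items()}

# Invented chat log for one collaborative group: (author, message text)
group_log = [
    ("amy", "I think the answer is B."),
    ("amy", "The second clue rules out A."),
    ("amy", "Shall I write it down?"),
    ("amy", "Moving on to question 2."),
    ("ben", "ok"),
    ("cara", "sure"),
]

shares = participation_shares(group_log)
# Flag the group for the instructor if one student dominates the conversation.
if max(shares.values()) > 0.6:
    print("Possible participation problem:", shares)
```

A simple summary like this can surface unequal participation at a glance, which is exactly the kind of diagnosis the experimental group's instructors made more readily.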

The study had two key limitations: a lack of data for one of the supporting tools, and variables that could have affected the instructors' focus on collaborative versus cognitive aspects of the student interaction. The general implication of this study is that supporting tools can affect teachers' guidance of collaborating groups.

This study was an interesting experiment in CSCL because it looked at an intervention to encourage positive interaction among students, but it focused more on the tool than on the ways an instructor should use it. The authors hypothesized that the support tools would help instructors better diagnose problems with participation and discussion, but the results did not confirm this hypothesis. This may have been influenced in part by the lack of distinction between cognitive and collaborative aspects within the design of the study. The authors do not describe how the tools support these two different aspects of participation; they mention only that the tools support participation in general. Nor do they clearly explain whether they are trying to foster cognitive or collaborative participation.

An unaddressed aspect of this study was the temporality of the interaction. Each vignette played out over 8 minutes, and the instructor sent messages during that time. This suggests that the results of this study are limited to single-session synchronous collaborative learning. Future research should examine these learning tools with asynchronous group work, where the interaction takes place over days or weeks. The actions and behavior in synchronous and asynchronous CSCL may need to be interpreted differently by instructors, because delays in responses and the frequency of messages may have different causes in the two types of situations.

Many of the instructors in this study were student teachers. It is reasonable to think that a veteran teacher may be more adept at identifying interaction patterns and problem groups without the support tools. The authors should analyze their data from this perspective to see whether experience affects the level of problem diagnosis.


This Week’s Productivity Experiment: No Meeting Wednesday Morning


I recently read Mark Arnoldy’s “This Is How I Work” on Lifehacker.  He had a lot of great tips, but the one that really inspired me was how he and his team have “No Meeting Wednesdays”.  They have no internal or external meetings on Wednesday so they can have dedicated time to work on complex tasks that require focused thinking.  I love this idea, because I feel like I really miss out on quality “work product” time during the work day.  I often find it waiting for me at the end of the day when my mind isn’t fresh.  While I don’t think it’s realistic for me to dedicate a whole day to “no meetings”, I’m going to experiment with blocking off my Wednesday mornings.

UPDATE: So I’ve done two “Work-product Wednesdays”. The first one did not go particularly well, because I did not plan out my time. Without clear priorities, my blocked off time quickly turned into some phone calls and I checked off very few items from my to-do list. The second attempt was much more in line with my intent. At the end of the day Tuesday, I planned out my Work-product Wednesday time and scheduled out my to-dos. I got much more accomplished. I like the timing of Wednesday morning, because I haven’t let the week get away from me and mornings are my best focus and energy times. I’ve continued to keep this block and plan to complete my bigger projects at this time.

VERDICT: Effective technique