International Computing Education Research 2017 Conference: A PLTF Trip Report

At the start of the semester, I attended the International Computing Education Research (ICER) conference with some support from the Provost Teaching and Learning Fellows program. Computer science (CS) is one of the youngest STEM disciplines, and the study of how to teach CS is younger still: this was only the 12th ICER conference.


ICER is a small conference, with only 150 attendees, and a unique format. It’s a single-track conference (schedule available here) where everyone sits around round tables, rather than in lecture or stadium seating. After each presentation, there is a five-minute discussion period during which each table talks about the presentation and decides on the best questions to ask. As a result, the traditional question-and-answer session is high-quality and represents input from all of the attendees.

Three papers with Georgia Tech connections were presented. One was by Lauren Margulieux (who earned her PhD at Georgia Tech last year and is now an assistant professor at Georgia State University) and Richard Catrambone (her PhD advisor here in Psychology): Using Learners’ Self-Explanations of Subgoals to Guide Initial Problem Solving in App Inventor (see the paper in the ACM Digital Library here, which can be accessed for free on campus). Lauren developed materials to teach programming using an instructional-design approach that Richard invented called subgoal labeling.

In subgoal labeling, students learn a process (like a program, but it could also be a multi-step experiment or calculation) with portions of that process segmented and labeled with the goal of each portion (the subgoal). Lauren’s earlier research found that students who see examples presented with labeled subgoals remember the subgoals and can use them to help write and understand other programs. In the study described in this paper, Lauren asked students to generate the subgoal labels themselves and then use their own labels to explain other programs, which she found further improved student performance over being given subgoal labels alone.
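To give a sense of what a subgoal-labeled example looks like, here is a minimal sketch. Lauren’s study used App Inventor (a blocks-based language), so this Python rendering and these particular labels are my illustration, not her actual materials.

```python
# A hypothetical worked example with subgoal labels, in the spirit of
# subgoal-labeled instruction. (The study used App Inventor blocks;
# Python is used here only for illustration.)

def average_rainfall(readings):
    # Subgoal: initialize the accumulator variables
    total = 0
    count = 0

    # Subgoal: process each sensor reading
    for value in readings:
        # Subgoal: filter out invalid data
        if value >= 0:
            total += value
            count += 1

    # Subgoal: compute and return the result
    return total / count if count > 0 else 0
```

The labels segment the program into named steps, so a learner can recognize the same steps (initialize, process, filter, compute) when they appear in a new problem.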

My students, Miranda Parker and Kantwon Rogers, presented a paper about the use of Web-based electronic books (ebooks) that we’ve been creating for high school CS teachers and students (see links here). Their paper, Students and Teachers Use An Online AP CS Principles EBook Differently: Teacher Behavior Consistent with Expert Learners (see paper here), asked a big question: “Can we develop one set of materials for both high school teachers and students, or do different kinds of learners need different kinds of materials?” First, they showed that there were statistically significant differences in behavior between teachers and students (e.g., different numbers of interactions with different types of activities). Then, they tried to explain why those differences exist.

They presented a model of teachers as expert learners (e.g., they have more prior knowledge so they can create more linkages, they know how to learn, and they are better at monitoring their own learning) and high school students as more novice learners. They analyzed recordings of ebook use (log file data) to find evidence consistent with that explanation. For example, students repeatedly try to solve Parsons problems long after they are likely to solve them or learn from them, while teachers move along when they get stuck. Students are also more likely than teachers to run code and then immediately run it again (with no edits in between). At the end of the paper, they offer design suggestions, based on this model, for how we might develop learning materials designed explicitly for teachers vs. students.
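To make the log analysis concrete, here is a minimal sketch of how one might detect that “run it again with no edits” pattern in interaction logs. The event names and record format here are assumptions for illustration, not the actual schema of the ebook logs.

```python
# A minimal sketch of detecting "run, then run again with no edit in
# between" in an interaction log. The event names and record format
# are hypothetical, not the actual ebook log schema.

def count_repeated_runs(events):
    """Count 'run' events that immediately follow another 'run' by the
    same user on the same activity, with no 'edit' in between."""
    repeats = 0
    last_action = {}  # (user, activity) -> last event type seen
    for user, activity, action in events:
        key = (user, activity)
        if action == "run" and last_action.get(key) == "run":
            repeats += 1
        last_action[key] = action
    return repeats

log = [
    ("student1", "turtle-ex1", "run"),
    ("student1", "turtle-ex1", "run"),   # re-run with no edit: counted
    ("teacher1", "turtle-ex1", "run"),
    ("teacher1", "turtle-ex1", "edit"),
    ("teacher1", "turtle-ex1", "run"),   # edited first, so not counted
]
print(count_repeated_runs(log))  # -> 1
```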

Another of my students, Katie Cunningham, presented a paper, Using Tracing and Sketching to Solve Programming Problems: Replicating and Extending an Analysis of What Students Draw (see paper here). In computer science, we often teach students to use pen-and-paper notations to analyze their programs, much as physics or engineering teachers teach students to sketch free-body diagrams. Katie asked the question: “Of what use is pen-and-paper sketching/tracing for CS students?” Does it really help? Several years ago, a big multi-national study examined how students solved complicated problems involving iteration, and the researchers collected the students’ scrap paper. (You can find a copy of that paper here.) They found (not surprisingly) that students who traced code on paper were far more likely to get the problems right than those who tried to understand the program just in their heads. Katie led a study in which she collected the scratch paper of over 100 students answering questions about computer programs (e.g., what will this program print?). She could then compare what they sketched (if anything) to how well they did on the problems.
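To give a flavor of that kind of question, here is an invented example (not one of the actual study problems); answering it reliably usually means tracing the variables through each loop iteration, on paper or in your head.

```python
# A made-up example of a "what will this program print?" question.
# Tracing it on paper means tracking how x changes on each iteration.
x = 0
for i in range(1, 5):
    if i % 2 == 0:
        x = x + i   # even i: add it
    else:
        x = x - 1   # odd i: subtract one
print(x)  # prints 4: x goes 0 -> -1 -> 1 -> 0 -> 4
```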

First, this study replicated the earlier one: those who traced the programs on paper did better on problems where they had to predict the behavior of the code. But the team also found that tracing code on paper does not always help. If a problem is fairly easy, those who trace on paper are actually more likely to get it wrong. Most surprisingly, those who start to trace on paper but then stop (because they think they know the answer, or because they give up; we don’t know which) are even more likely to get it wrong than those who never traced at all.

They also began to ask a question that’s relevant to all classes that teach pen-and-paper diagramming: why do students use the particular diagramming or tracing methods that they do? A method is only useful if it gets used, so what leads to use? Katie interviewed the two teachers of the class (each taught about half of the 100+ students in the study) and some of the teaching assistants. Both teachers demonstrated tracing in class. Teacher A’s method gets used by some students. Teacher B’s method gets used by no students! Instead, some students use the method taught by the head teaching assistant. Why do some students pick up a tracing method, and why do they adopt the one that they do? Maybe because it’s easier to remember, or because they believe it’s more likely to lead to a right answer, or because they trust the person who taught it. Katie and her team don’t know the answer yet.

The best paper award (and one of my favorite papers at the conference) went to a paper from the University of Chicago, K-8 Learning Trajectories Derived from Research Literature: Sequence, Repetition, Conditionals, by Katie M. Rich and her colleagues (see paper here). Many US states and many countries are in the process of defining what elementary school students should learn about computer science. These processes usually start by talking to experts about what is important for students to learn. But those experts can’t know what students at different ages can actually do. Experts also can’t know which concepts students should be able to learn first, and which should come later.

Rich’s team started from a corpus of 160 papers and used those papers to identify what students could actually do: what evidence we have that elementary school students can learn those computing concepts. They then constructed sequences of those concepts, drawing on either the mathematics education literature (which has studied the trajectories of what elementary school students learn for many years) or their own work with elementary school students. The result is a set of trajectories of concepts for basic parts of computer programming: sequence, repetition (iteration), and conditionals. This paper will be a significant resource for efforts to define curriculum standards around the world.
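For readers outside CS, here is a tiny invented program showing the three constructs those trajectories cover; the example is mine, not from the paper.

```python
# Sequence, repetition, and conditionals in one small (invented) program.
temperatures = [68, 74, 71, 80]   # sequence: statements run in order
warm_days = 0
for t in temperatures:            # repetition: the loop runs once per reading
    if t >= 72:                   # conditional: choose a branch each time
        warm_days += 1
print(warm_days)  # prints 2 (only 74 and 80 are at least 72)
```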

The Rich et al. trajectories paper is important for computing education research because it presents a set of testable hypotheses. Most efforts to define standards for students produce vague statements, like “Students should know that programming is a creative process.” Rich’s trajectories are testable: they tell us what concrete abilities students should be able to demonstrate. Even if these trajectories turn out to be wrong, they give us a starting place for doing experiments and creating better ones.

ICER continues to be a top conference for exploring how students come to understand computing and how to improve the teaching of computing. These papers give a sense of the breadth of research at ICER, from elementary school students (K-8) through undergraduates studying algorithms. Other research presented at ICER studied bootcamps, hackathons, and how teachers learn computing. These studies help us to understand and improve how students learn critical 21st-century skills.
