Quantifying Feedback: Insights Into Peer Assessment Data

Publication: Conference contribution › Paper › Research › peer-reviewed

Standard

Quantifying Feedback: Insights Into Peer Assessment Data. / Aslak, Ulf; Wind, David Kofoed.

2017.


Harvard

Aslak, U & Wind, DK 2017, 'Quantifying Feedback: Insights Into Peer Assessment Data'.

APA

Aslak, U., & Wind, D. K. (2017). Quantifying Feedback: Insights Into Peer Assessment Data.

Vancouver

Aslak U, Wind DK. Quantifying Feedback: Insights Into Peer Assessment Data. 2017.

Author

Aslak, Ulf; Wind, David Kofoed. / Quantifying Feedback: Insights Into Peer Assessment Data. 10 p.

Bibtex

@conference{7aa5343f32904580a362c1bacc418ce1,
title = "Quantifying Feedback: Insights Into Peer Assessment Data",
abstract = "The act of producing content - for example, in the form of written reports - is one of the most widely used methods for teaching and learning, from primary school to university. It is a learning tool that helps students relate theory to practice. Receiving relevant and helpful feedback on this work is important for a good learning experience, but providing that feedback is often a time-consuming job for the teacher. An effective way to learn is to teach others, and likewise to give feedback on work done by others. One way to address both challenges at once is peer assessment, which has become an increasingly popular learning method in the classroom. In this paper we look at data collected using the web-based peer assessment system Peergrade. The dataset consists of over 350 courses at more than 20 educational institutions, with a total of more than 10,000 students. Together, the students have made more than 100,000 peer evaluations of work by other students, and these evaluations contain more than 10,000,000 words of text feedback. A key problem when using peer assessment is ensuring high-quality feedback between peers. Feedback here can be a combination of quantitative / summative feedback (numerical) and qualitative / formative feedback (text). Much work has been done on validating and ensuring the quality of quantitative feedback. We propose a way to let students evaluate the quality of the feedback they receive, yielding a quality measure for the feedback. We investigate this measure of feedback quality, which biases are present, and what trends can be observed across the dataset. Using our measure of feedback quality, we investigate how it relates to factors such as the length of the feedback text, the number of spelling mistakes, how positive it is, and measures of the student{\textquoteright}s report-writing skills.",
author = "Ulf Aslak and Wind, {David Kofoed}",
year = "2017",
month = jun,
day = "1",
language = "English",
}

RIS

TY - CONF

T1 - Quantifying Feedback

T2 - Insights Into Peer Assessment Data

AU - Aslak, Ulf

AU - Wind, David Kofoed

PY - 2017/6/1

Y1 - 2017/6/1

N2 - The act of producing content - for example, in the form of written reports - is one of the most widely used methods for teaching and learning, from primary school to university. It is a learning tool that helps students relate theory to practice. Receiving relevant and helpful feedback on this work is important for a good learning experience, but providing that feedback is often a time-consuming job for the teacher. An effective way to learn is to teach others, and likewise to give feedback on work done by others. One way to address both challenges at once is peer assessment, which has become an increasingly popular learning method in the classroom. In this paper we look at data collected using the web-based peer assessment system Peergrade. The dataset consists of over 350 courses at more than 20 educational institutions, with a total of more than 10,000 students. Together, the students have made more than 100,000 peer evaluations of work by other students, and these evaluations contain more than 10,000,000 words of text feedback. A key problem when using peer assessment is ensuring high-quality feedback between peers. Feedback here can be a combination of quantitative / summative feedback (numerical) and qualitative / formative feedback (text). Much work has been done on validating and ensuring the quality of quantitative feedback. We propose a way to let students evaluate the quality of the feedback they receive, yielding a quality measure for the feedback. We investigate this measure of feedback quality, which biases are present, and what trends can be observed across the dataset. Using our measure of feedback quality, we investigate how it relates to factors such as the length of the feedback text, the number of spelling mistakes, how positive it is, and measures of the student’s report-writing skills.

AB - The act of producing content - for example, in the form of written reports - is one of the most widely used methods for teaching and learning, from primary school to university. It is a learning tool that helps students relate theory to practice. Receiving relevant and helpful feedback on this work is important for a good learning experience, but providing that feedback is often a time-consuming job for the teacher. An effective way to learn is to teach others, and likewise to give feedback on work done by others. One way to address both challenges at once is peer assessment, which has become an increasingly popular learning method in the classroom. In this paper we look at data collected using the web-based peer assessment system Peergrade. The dataset consists of over 350 courses at more than 20 educational institutions, with a total of more than 10,000 students. Together, the students have made more than 100,000 peer evaluations of work by other students, and these evaluations contain more than 10,000,000 words of text feedback. A key problem when using peer assessment is ensuring high-quality feedback between peers. Feedback here can be a combination of quantitative / summative feedback (numerical) and qualitative / formative feedback (text). Much work has been done on validating and ensuring the quality of quantitative feedback. We propose a way to let students evaluate the quality of the feedback they receive, yielding a quality measure for the feedback. We investigate this measure of feedback quality, which biases are present, and what trends can be observed across the dataset. Using our measure of feedback quality, we investigate how it relates to factors such as the length of the feedback text, the number of spelling mistakes, how positive it is, and measures of the student’s report-writing skills.

M3 - Paper

ER -

ID: 203008581