MOOC - Assessment

1. Introduction
2. Evaluation tools
  • 2.1 Multiple Choice Test
  • 2.2 P2P
3. What to evaluate?
4. When to evaluate?
5. Automated correction
6. Really massive, global courses vs. more local, less massive courses


1. Introduction

The assessment of the knowledge and skills acquired throughout a MOOC is one of the most debated topics, and the debate is far from settled. In fact, many of the criteria used to evaluate students in an e-learning context may also be applicable to MOOCs.

Several factors make a MOOC different, especially in the field of evaluation. The number of students targeted makes it impossible for the teaching team to interact more or less directly with each of them, so part of the usual evaluation is lost. We are all used to a classroom or virtual learning environment (e.g. via Moodle) where we almost "know" every student. You should also take into account the students' motivation to engage with the content, participate in forums, etc. These two facts need to be kept in mind when designing assessment activities within a MOOC.

Nowadays the debate is wide open. Because a MOOC is a course (the C says so), we assume that the learning achieved by its participants should be evaluated, and perhaps even graded. In the case of a MOOC, assessment also lets students know where they stand as they progress through the different modules. This evaluation is important to motivate students to continue, but it should always help them, not push them to give the course up. Another question is whether students will be able to put the knowledge acquired during the course into practice. Therefore the accreditation of a MOOC (which will be discussed later) depends quite a lot on this evaluation.


2. Evaluation tools

Now we are ready to design the evaluation of our MOOC: which tools can we use? As explained, this part is what differentiates a MOOC from an open course based, for example, on Moodle. Not only is the assessment different, but above all the role of the teacher.

In a MOOC, the professor prepares all the tests but then does not take part in the correction. We must remember that a course may have hundreds of thousands of students, so we need a sustainable way to carry the assessment out.

Most platforms offer two types of evaluation:
- Multiple choice tests - immediate correction
- Peer-to-peer (P2P), correction between peers: the correction comes from the students' own classmates.

2.1 Multiple choice tests

These tests, which can be designed in many different ways, are frequently used in virtual assessments because they can easily be adapted to a large number of participants. Basically, the only limitation is the capacity of the platform, which may handle more or fewer answers.

Tests should be different depending on the course module, its contents, etc., so it is difficult to give general guidelines. Bear in mind that we want the student to acquire the knowledge we are offering, but also to keep advancing through the different modules and complete the whole course. From our experience we can list a few tips for designing a test, although they are nothing more than advice.
  • - Keep the test short, with very concise and direct questions related to what students have been reading / watching. It should not be too demanding, so that students have the feeling of progressing satisfactorily.
  • - The number of times a test can be answered can be defined in the platform. It is not necessary for students to get a good mark at the first attempt; the point is that they learn from answering. So the best thing is not to limit the number of attempts, giving them a large number of opportunities.
  • - The number of correct answers needed to pass the test is also a configurable parameter. Because students can answer several times, it is important to require a fairly high mark. That way we ensure that they have not responded randomly and have a more or less correct view of the subject as a whole.
  • - Multiple choice questions usually allow feedback to be added for the students. For many questions this feedback is very convenient: it can help students respond correctly and not feel alone throughout the course evaluation.
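The parameters discussed above (unlimited attempts, a high pass threshold, feedback on wrong answers) can be sketched as follows. This is a minimal, hypothetical illustration, not the configuration schema of MiriadaX, Coursera, or any other real platform; all names and questions are invented.

```python
# Hypothetical sketch of the quiz parameters discussed above:
# unlimited attempts, a fairly high pass threshold, and per-question
# feedback shown when the answer is wrong.

PASS_THRESHOLD = 0.8   # ask for a high mark, since attempts are unlimited
MAX_ATTEMPTS = None    # None = do not limit the number of attempts

QUIZ = [
    # (question, index of the correct option, feedback shown on a wrong answer)
    ("2 + 2 = ?", 1, "Review the arithmetic video in this module."),
    ("Capital of France?", 0, "See the reading for section 2."),
]

def grade_attempt(answers):
    """Return (score, passed, feedback list) for one attempt."""
    feedback = []
    correct = 0
    for (question, right, hint), given in zip(QUIZ, answers):
        if given == right:
            correct += 1
        else:
            # feedback accompanies every wrong answer, so the student
            # does not feel alone during the evaluation
            feedback.append((question, hint))
    score = correct / len(QUIZ)
    return score, score >= PASS_THRESHOLD, feedback
```

Because attempts are not limited, the high threshold is what guarantees the final mark was not obtained by random clicking.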

2.2 Peer-to-peer (P2P) tests

The peer-to-peer assessment is what distinguishes a MOOC from other online courses. This kind of evaluation is based on some piece of work produced by a student and evaluated by other students within the same course. The work can vary a lot depending on the type of course, ranging from a small written essay in PDF format to a video uploaded to YouTube. The MOOC participants themselves are the ones who evaluate their classmates.

As with the previous tool, we list below a few tips that can be taken into account when designing a P2P assignment. The design of the activity will be linked to the content of the course module as well as to the other characteristics of each course.

  • - State clearly what you want in the assignment. Remember that we are always working with a very, very large number of students, and an ambiguous brief can generate a very large number of questions in the forum, which will become unsustainable. So we need to limit the length, provide a content template, ask clearly, etc.
  • - Provide a rubric for evaluating the work. As explained, the other students are the evaluators, so they need to know clearly what and how they should evaluate their classmates' work. It is better to have a few simple rubrics that allow them to carry the evaluation out easily.
  • - The work can be graded as pass / fail, but in some cases it can even be evaluated with marks. The rubrics will help us define this qualification.
  • - Experience tells us that these assignments tend to slow down progress through the course. So their scheduling within the module needs careful reflection, as does making them more or less attractive.
  • - The number of students who evaluate each piece of work is a parameter that can be controlled in the platform. For the assessment to be practical, each work should be evaluated by no fewer than 2 and at most 3 students. Moreover, we should bear in mind that this evaluation is usually slow and is not the best requirement for passing a module (that is, advancing to the next module should not require having been evaluated first by other students). Keep in mind that, at least on the MiriadaX platform, the selection of reviewers is automatic and random, although the system seems to require that both the evaluated student and the evaluator are actively following the course.
  • - Allow feedback in the assessment. This also allows us to create a network between students, putting them more in touch.
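The random assignment of reviewers described in the tips above can be sketched roughly as follows. This is a hypothetical illustration of the idea, not MiriadaX's actual algorithm: each submission gets a small fixed number of reviewers, chosen only among students still actively following the course, and never the author themselves.

```python
import random

# Hypothetical sketch of random peer-review assignment:
# 2-3 reviewers per work, drawn only from active students,
# with self-review excluded.

REVIEWERS_PER_WORK = 3  # between 2 and 3, as suggested above

def assign_reviewers(submissions, active_students, seed=None):
    """Map each submitting student to a list of randomly chosen reviewers."""
    rng = random.Random(seed)
    assignment = {}
    for author in submissions:
        # a student never reviews their own work
        candidates = [s for s in active_students if s != author]
        k = min(REVIEWERS_PER_WORK, len(candidates))
        assignment[author] = rng.sample(candidates, k)
    return assignment
```

Restricting the candidate pool to active students matters in a MOOC: assigning a review to someone who has already dropped out would leave the submission unevaluated.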

3. What to evaluate?

This question has no single answer, because it depends on the type of course, and within each course on the particular module under consideration. We may be interested in evaluating concepts, or in a more practical piece of work, or even a combination of both. In our MOOCs, questions and assignments were closely related to what was explained in the course.

- Direct questions about details of the videos. These help ensure that students have actually gone through the videos (in the case of MiriadaX, where the videos are hosted on YouTube, this was perhaps the only way to ensure that students watched the whole video). Coursera allows questions to be asked in the middle of a video; this is a developing area, discussed for example at the Málaga Congress in March.

- Questions related to more subjective contents. In this case the answer need not necessarily be unique; this can be controlled in the platform by defining one or more correct answers for each question.
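A question with more than one accepted answer, as described for the more subjective contents, could be represented along these lines. This is a hypothetical sketch, not any platform's real question schema; the question identifiers and options are invented.

```python
# Hypothetical sketch of questions that accept one or more correct answers.

QUESTIONS = {
    "q1": {"A"},        # an objective question: a single correct option
    "q2": {"B", "C"},   # a subjective question: several options accepted
}

def is_correct(question_id, given):
    """A response counts as correct if it is among the accepted options."""
    return given in QUESTIONS[question_id]
```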


4. When to evaluate?

In designing the evaluation, one must take into account where the various tests are placed. As explained, this is an important issue for the success of a course: a "hard" evaluation in a module makes the drop-out rate increase. We experienced this fact in the two courses we offered last year. In one of them we proposed a small assignment in Module 2, and that is exactly where we identified the largest number of dropouts. In the second course, the assignment was proposed at the end of the modules, and the withdrawal rate was considerably lower. Towards the end of a course, students are more inclined to hand in assignments.

Our advice, always based on our experience, is that the evaluation should gradually increase in complexity module after module. Students should feel accompanied from the beginning and know that they are on the right track to the end of the course. So in the first modules we usually propose multiple choice tests, where the answers should be very clear and easy to identify in the contents; the feedback for the questions is also very important. Once students are more involved in the course, they are asked to perform some P2P work. But there is no unique guideline: you cannot wait until the penultimate module, since you need to create a network with the students earlier. The key, therefore, is to find a balance between the different objectives, which depends a lot on the type of course.


5. Automated correction

There have been some attempts to automate the correction process, and it is currently an important area of research. For example, take a look at the page "MOOCs: from automatic correction to curriculum customization" (note that it comes from a "MOOC Lab"). There is a lot of material there, especially related to the fact that evaluators must learn how to evaluate (they need to be trained to do so). We should also keep in mind that it is a technological group.

Regarding the automated correction of free-text answers, there is abundant literature. One example is "Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review".

In our opinion, automated correction has the advantage of allowing closer monitoring of the student, but it has the disadvantage that the process of evaluating (and being trained to evaluate) is itself a form of learning. As always, a mixed model is probably the optimal one.

6. Really massive, global courses vs. more local, less massive courses
As has been shown, some MOOCs are true MOOCs (really massive), while other courses use the MOOC format but have a more limited number of participants. An example of the latter could be this very course, which could be offered in MOOC format to a specific community. Another example of a limited MOOC (SMOOC, small MOOC) could be an introductory course at university, like the "chemistry zero" course we are planning for students who did not take chemistry in high school but encounter it in their first year (engineering, biology, etc.). It could also be an introductory course to a master's programme, used as a marketing tool aimed at graduate students.

Thus, in these cases with a low number of students enrolled, and perhaps with a higher expected degree of engagement, one can think of building loyalty among students through more teacher-student interaction. A course like "chemistry zero" in Catalan would probably cover a smaller geographical territory (a true MOOC covers everyone). So different assessment tools can be added: face-to-face meetings or hangouts, elements based on games / contests that allow teachers to gauge the progress of each participant personally, etc.

For these small MOOCs, evaluation techniques like those based on Moodle can be added along the different modules. Indeed, this wiki has pointed out the differences between a regular online course and a MOOC, but of course there are still many similarities; for a small MOOC, you can take everything that works for a regular online course. The boundary is more diffuse in this case. In addition, there is the issue of language. A MOOC should be offered in a language of global use, while a course in MOOC format aimed at a given territory may be in a less global language. One example is the ongoing XarxaMOOC, to teach specialized Catalan.