Educational programmes across higher education incorporate knowledge from multiple disciplines. This can be multidisciplinary, interdisciplinary, or transdisciplinary education. In multidisciplinary education the disciplines are juxtaposed, whereas in interdisciplinary education knowledge from the different disciplines is integrated to create new solutions to problems that could not be solved by separate disciplines. Finally, in transdisciplinary education, the boundaries between disciplines are transcended, and often stakeholders from outside academia are involved as well (Ashby & Exter, 2019; Klein, 2017).
According to van Goch (2023) interdisciplinary education should meet three conditions: there’s a complex problem, multiple disciplines provide insights into this problem, and the different disciplinary insights are integrated. In educational settings, students usually integrate these insights themselves, but teachers can also provide knowledge integration. Knowledge integration strategies should be an explicit part of instruction during interdisciplinary education, since students will not acquire integrative skills merely through learning by doing (van Goch, 2023; van Lambalgen & van der Tuin, 2024).
Interdisciplinary education can help students develop skills needed for their future careers (van den Beemt et al., 2020). However, knowledge about the design and outcomes of interdisciplinary education is often only shared locally (Lindvig & Ulriksen, 2019). This post presents a short overview of current knowledge about the design, learning outcomes, and assessment of interdisciplinary education. The full literature overview it is based on (in Dutch) is available upon request.
Traditionally, the SET is one of the most widely employed instruments in higher education for evaluating the quality of teaching (e.g., Hendry & Dean, 2002; Hounsell, 2003). For a typical SET, after taking a course, students are asked to rate various aspects of the course (e.g., the clarity of the objectives, the usefulness of the materials, the methods of assessment) on a Likert scale. SET data is often the first and foremost source of information that individual teachers can use to evaluate both existing and new innovative teaching practices. SETs are often integrated in higher education professional development activities. For instance, at some faculties at Leiden University, SET results and their interpretation are an integral part of the University Teacher Qualification (a.k.a. BKO). Beginning teachers are expected to critically reflect in teaching portfolios on the results of SETs for the classes they have taught. Furthermore, the results of SETs can function as a source of information for teachers’ supervisors to guide discussions in yearly Performance and Development Interviews, sometimes leading to recommended or enforced future professional development activities for teachers.
However, for some time now, the SET has been subject to scrutiny for a variety of reasons. First, based on an up-to-date meta-analysis, the validity of SETs seems questionable. That is, there appears to be no apparent correlation between SET scores and student learning performance at the end of a course (Uttl et al., 2017). In fact, when learning performance is operationalized as the added value of a teacher to the later performance of students during subsequent courses (Kornell & Hausman, 2016), the relationship can even be reversed (i.e., teachers with lower SET scores appear to be of more added value). One explanation for this finding is that making a course more difficult and challenging can result in lower SET scores, presenting teachers with a perverse incentive to lower the bar for their students in order to obtain higher scores on a SET (Stroebe, 2020).
Second, the intensive and frequent use of SETs can lead to a form of “evaluation fatigue” among students (Hounsell, 2003), sometimes resulting in mindless and unreliable evaluations of teaching (e.g., Reynolds, 1977; Uijtdehaage & O’Neal, 2015). As a case in point, a classic article by Reynolds (1977) reported how a vast majority of students in a medical course chose to evaluate a lecture that had been cancelled, as well as a video that was no longer part of the course. In a rather ironic reflection on these results, Reynolds concluded that:
“As students become sufficiently skilled in evaluating films and lectures without being there … then there would be no need to wait until the end of the semester to fill out evaluations. They could be completed during the first week of class while the students are still fresh and alert.”
Third, the results of student evaluations of teaching can be severely biased (e.g., Neath, 1996; Heffernan, 2022). For instance, in a somewhat tongue-in-cheek review of the literature, Neath (1996) listed 20 tips for teachers to improve their evaluations without having to improve their actual teaching. The first tip on the list: Be male. Apparently, research suggests that, in general, male teachers receive higher ratings on SETs than female teachers. In a more recent review of the literature, Heffernan (2022) goes on to argue that SETs can be subject to racist, sexist, and homophobic prejudices, and can be biased by discipline and subject area. Also, SETs that allow for a qualitative response can sometimes elicit abusive comments, most often directed at women and teachers from marginalized groups. As such, SETs can be a cause of stress and anxiety for teachers rather than an actual aid to their development.
Fourth, although studies often emphasize the importance of SETs for supporting and improving the quality of education, the underlying mechanism remains elusive (Harrison et al., 2022). It is unclear how SETs contribute to improving the quality of teaching. On the contrary, teachers can often find it challenging to decide what actual changes to make based on aggregated SET data that is largely quantitative in nature (Hendry & Dean, 2002).
In short, the continued use of SETs for evaluating the quality of teaching in higher education is difficult to justify. The findings reported in the literature indicate that the validity and reliability of the SET are questionable, and the value for educational practice appears to be limited. One could argue that sticking with the SET is more a tradition than it is evidence-informed practice. Perhaps universities mostly persist in the routine for lack of an equally (cost-)efficient and scalable alternative. In this blog, we delineate the development and pilot of one possible alternative.
The FET. In late 2023, an Innovation Fund proposal was awarded a small grant to develop an alternative approach to the evaluation of teaching. At the start of 2024, Mario de Jonge (researcher at ICLON), Boje Moers (project manager at LLInC), Anthea Aerts (educational consultant at LLInC), Erwin Veenstra, and Arian Kiandoost (developers/data analysts, LLInC) collaborated on the development and subsequent small-scale pilot of the FET (Formative Evaluation of Teaching).
The FET is designed to be more conducive to the improvement of teaching practices (formative, qualitative) and less focused on the mere assessment of teaching (summative, quantitative). Like the traditional SET, the FET is fast, efficient, and relatively inexpensive. However, the FET aims to give teachers clearer directions and qualitative input on how to improve their teaching.
In the first step of the FET survey (Figure 1), students are presented with a list of course aspects on which they can give feedback. Some of the aspects on the list are general (e.g., the methods of assessment), while some can be course-specific (e.g., learning objectives). Note that the course aspect pertaining to the teacher specifically asks students to direct their feedback at the teacher’s didactic approach. As noted, students’ evaluations of teaching can sometimes be prone to unconstructive, abusive comments. By explicitly asking students to focus on the didactic approach, we hope to discourage such undesirable and unconstructive comments.
From the list of aspects, students are asked to select just one or two key aspects which they appreciated (i.e., tops), and one or two key aspects which they think could be improved upon (i.e., tips). With this design feature, we hope to counter the threat of evaluation fatigue, which is more likely to occur with more comprehensive surveys like the traditional SET that require students to evaluate each and every aspect of a course.
In the second step (Figure 2), after selecting one or two aspects as tips and tops, students are asked to write a short motivation for their respective selections. This set-up allows students to share their insights in a fast, efficient, and meaningful way.
After a given course has been evaluated, the FET output provides teachers with a ranking of aspects that were selected most frequently. Because selected aspects have also been enriched with qualitative textual input from students, teachers can engage in a focused review of those student contributions that are most relevant for improving their course (i.e., comments on aspects that were selected most frequently).
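To make the aggregation step concrete, here is a minimal sketch of how such a frequency ranking could be computed from raw FET responses. The field names and data shapes are illustrative assumptions, not the actual FET implementation:

```python
from collections import Counter

def rank_aspects(responses):
    """Rank course aspects by how often students selected them,
    counting both tips (to improve) and tops (appreciated)."""
    counts = Counter()
    for response in responses:
        for aspect in response["tips"] + response["tops"]:
            counts[aspect] += 1
    # Most frequently selected aspects first
    return counts.most_common()

# Example: three hypothetical student responses
responses = [
    {"tips": ["assessment"], "tops": ["working groups"]},
    {"tips": ["assessment", "materials"], "tops": ["working groups"]},
    {"tips": ["materials"], "tops": ["assessment"]},
]
print(rank_aspects(responses))
# → [('assessment', 3), ('working groups', 2), ('materials', 2)]
```

The ranking then tells the teacher which student comments to read first: those attached to the most frequently selected aspects.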
Going over the FET evaluation results should be a relatively straightforward task for those who teach small classes. However, for teachers with larger classes we anticipated that this could be a considerable burden. This is where AI comes into play. LLInC developer Erwin Veenstra and data analyst Arian Kiandoost worked on a way of complementing the raw data with an AI-generated summary of the main results. Specifically, we wanted to select a Large Language Model (LLM) that was capable of performing the task of summarizing the data in such a way that it is easy to process and interpret. We expected that, with the current level of sophistication of available LLMs, it should be possible to generate a high-quality descriptive summary of the qualitative data. It took a fair amount of experimentation, iteration, and team discussion about different possible LLMs, output formats, and the “right” prompt before we arrived at a model and approach capable of performing the task.
The LLM we ended up using was OpenAI’s GPT-4 API (OpenAI Platform, n.d.). Note that, in contrast to the non-API consumer service ChatGPT, the OpenAI API does not have the same privacy and security issues. That is, data sent to the OpenAI API is not used to train or improve the model. Still, because we ended up using a cloud-based LLM, the data were first anonymized before being fed to the LLM. Also, we rearranged the survey data into a JavaScript Object Notation (JSON) format (JSON, n.d.) to make it easier for the LLM to group information per course aspect. The LLM was prompted in such a way that it recognized that comments were grouped per course aspect, and that differences in magnitude should also be expressed in the summary (i.e., one Tip versus 10 Tops should not carry the same weight). Furthermore, we prompted the LLM to generate one integrated summary of the tips and tops per course aspect. We found that this way of reporting helped to make apparent contradictions in the data explicit and nuanced (e.g., half of the students stating one thing, the other half stating the opposite). After the summary was generated, any values that had been anonymized were transformed back into their original form in the final report.
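As a rough illustration of the restructuring step, grouping anonymized survey rows into aspect-keyed JSON might look something like the following sketch (the field names are assumed for illustration; the actual FET pipeline, anonymization, and prompt are more elaborate):

```python
import json
from collections import defaultdict

def group_by_aspect(rows):
    """Group anonymized tip/top comments per course aspect, so an LLM
    can summarize each aspect and weigh tips against tops."""
    grouped = defaultdict(lambda: {"tips": [], "tops": []})
    for row in rows:
        # row["kind"] is "tip" or "top"; append to the matching list
        grouped[row["aspect"]][row["kind"] + "s"].append(row["comment"])
    return grouped

# Hypothetical anonymized survey rows
rows = [
    {"aspect": "working groups", "kind": "top", "comment": "Lively discussions"},
    {"aspect": "working groups", "kind": "tip", "comment": "Groups were too large"},
    {"aspect": "assessment", "kind": "tip", "comment": "Unclear grading criteria"},
]

# The JSON payload the LLM would receive, comments grouped per aspect
payload = json.dumps(group_by_aspect(rows), indent=2)
print(payload)
```

Because all comments for one aspect sit together in the payload, the prompt can simply instruct the model to summarize per key and to reflect the relative numbers of tips and tops.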
In the AI-generated summary, course aspects are presented in descending order, starting with the one that was selected most frequently. For each aspect, a short summary is generated to capture the overall gist of the student comments. Figure 3 shows a screenshot of an AI-generated summary for one aspect, the working groups, of a course. Note that the summary only gives a descriptive synthesis of the students’ positive and negative comments; the actual interpretation is left to the teacher. As is common knowledge, LLMs can sometimes be prone to “hallucinations”. We noticed that prompting the model to also provide an interpretation of the data, beyond what was in the text, increased the occurrence of hallucinations and decreased the reproducibility of the results. However, a simpler, bare-bones LLM-generated descriptive summary provided what we felt was an accurate and reproducible representation of the data. To be sure, we prompted the LLM to supplement each summary with up to six “representative examples” (i.e., actual instances of student feedback) of tips and tops as a reference to the actual data. Furthermore, in the introduction text of the AI-generated report, we encouraged teachers to cross-check with the raw data that was provided along with the summary, should doubts arise about its reliability.
In the past couple of months, the FET has been piloted in different contexts at Leiden University, ranging from small-group settings such as an elective master class course (±20 students) to a large-group setting such as a BA course (200+ students). The feedback from the participating teachers has been overwhelmingly positive. All teachers indicated wanting to use the FET again in the future, and in their interactions with us they were able to give multiple concrete examples of changes they intended to make in future iterations of their course. In the large BA course, the median time it took students to fill out the survey was around 2 minutes and 40 seconds, a duration we consider not to be too much of a burden for the students. Compared to the regular SET survey from a previous cohort, the FET survey produced much more qualitative student feedback in terms of the total number of student comments. Furthermore, although the average word count per comment did not differ much between the SET and the FET, students filling out the FET clearly put more effort into comments specifically directed at improving the course (i.e., Tips). Most importantly, after receiving and discussing the report, the participating teacher indicated having a high degree of confidence in the reliability of the AI-generated summary, based on cross-checking with the raw data. In short, the preliminary results of our small-scale pilot suggest that the FET can be a valuable tool for the efficient collection of high-quality student feedback that is formative and more conducive to the improvement of teaching practices.
Outreach activities (workshops and presentations about the FET project) have now sparked interest in the FET project within the university. In the next phase, we hope to get further support and funding to scale up the project and see if we can replicate our findings in a broader range of contexts and faculties. Also, as a future direction, we aim to use an LLM that can be run on a local server (e.g., Mistral AI, n.d.; Meta-Llama, n.d.). To run the larger versions of these kinds of models, we need a more powerful computer than the one we had access to during the current project. However, such a machine has recently become available at LLInC.
As the project enters the next phase, we aim to investigate how the FET survey can be successfully implemented to improve educational design and how it can support teachers’ professional development activities. Furthermore, in our future endeavors we plan to also take the student perspective into account. This was outside the scope of the current project, but it is vital to consider the student perspective if the project is going to move forward and scale up.
Lastly, in the FET we purposefully chose to collect only qualitative data. As already noted, abusive comments can sometimes enter into qualitative evaluation data, and this can cause stress and anxiety among teachers. However, the qualitative evaluation data from our small-scale pilot did not seem to contain any student comments that could be considered abusive. Perhaps this was due to the design of the FET and the phrasing of the aspects in the list from which students could choose. Or perhaps it was simply due to the fact that students were aware that they were participating in a pilot project. However, even if abusive comments were to enter into the FET, we expect that the LLM should be capable of filtering out such unconstructive comments. This is one thing that we would also want to test in the future (e.g., by contaminating evaluation data with a preconstructed set of abusive comments, and training the model to filter the data).
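A minimal sketch of the kind of contamination test described above, with the abuse classifier left abstract (in practice it could be an LLM moderation call; everything here, including the function names, is a hypothetical illustration rather than part of the current FET):

```python
def filter_comments(comments, is_abusive):
    """Split comments into kept and flagged, given an abuse classifier.

    `is_abusive` is any callable returning True/False, so the filter can
    be tested by seeding clean evaluation data with known abusive
    comments and checking that exactly those are flagged.
    """
    kept = [c for c in comments if not is_abusive(c)]
    flagged = [c for c in comments if is_abusive(c)]
    return kept, flagged

# Test idea from the text: contaminate clean data with a preconstructed
# set of abusive comments (placeholders here) and verify the filter.
clean = ["The working groups were helpful", "More practice exams please"]
seeded_abuse = ["<abusive comment 1>", "<abusive comment 2>"]
kept, flagged = filter_comments(clean + seeded_abuse, lambda c: c in seeded_abuse)
assert kept == clean and flagged == seeded_abuse
```

With an LLM-backed classifier in place of the membership check, the same harness measures how reliably abusive comments are kept out of teacher reports.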
In conclusion, we believe the FET allows teachers to collect valuable feedback on the efficacy of their teaching in a fast, efficient, and meaningful way. Furthermore, the FET holds the potential for enhancing and enriching existing teacher professionalization activities as it can facilitate critical reflection on one’s own teaching practice.
References
Harrison, R., Meyer, L., Rawstorne, P., Razee, H., Chitkara, U., Mears, S., & Balasooriya, C. (2022). Evaluating and enhancing quality in higher education teaching practice: A meta-review. Studies in Higher Education, 47, 80-96.
Heffernan, T. (2022). Sexism, racism, prejudice, and bias: A literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment & Evaluation in Higher Education, 47, 144-154.
Hendry, G. D., & Dean, S. J. (2002). Accountability, evaluation of teaching and expertise in higher education. International Journal for Academic Development, 7, 75-82.
Hounsell, D. (2003). The evaluation of teaching. In A handbook for teaching and learning in higher education (pp. 188-199). Routledge.
Reynolds, D. V. (1977). Students who haven’t seen a film on sexuality and communication prefer it to a lecture on the history of psychology they haven’t heard: Some implications for the university. Teaching of Psychology, 4, 82–83.
Stroebe, W. (2020). Student evaluations of teaching encourages poor teaching and contributes to grade inflation: A theoretical and empirical analysis. Basic and Applied Social Psychology, 42, 276-294.
Uijtdehaage, S., & O’Neal, C. (2015). A curious case of the phantom professor: Mindless teaching evaluations by medical students. Medical Education, 49, 928-932.
Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.
Many first-year higher education students experience the transition from secondary to higher education as challenging. To facilitate this transition, universities offer mentoring programs. How can such a mentoring program be designed in an effective way? This literature overview outlines effective ingredients of mentoring programs.
How does mentoring help to foster student success?
Mentoring can be defined as “a formalized process based on a developmental relationship between two persons in which one person is more experienced (mentor) than the other (mentee).” (Nuis et al., 2023, p. 7). Based on a synthesis of the literature, I have developed the following conceptual model that relates mentoring to student success.
The relationship between mentoring and academic success can be explained through three mediating factors: academic integration, social integration, and psychosocial well-being (Lane, 2020). Academic integration involves academic knowledge and skills (Crisp & Cruz, 2009), career path development (Crisp & Cruz, 2009), and student identification with the norms of the university and their field of study (Tinto, 1975). Academic integration ensures that the student is committed to the goal of successfully completing their studies, thereby lowering attrition (Tinto, 1975). Social integration involves sense of belonging, with peers and within the wider university community, and the ability to find one’s way within the university (Lunsford et al., 2017; Tinto, 1975). This type of integration is also hypothesized to reduce attrition (Tinto, 1975). Psychosocial well-being involves issues such as stress, resilience, self-efficacy, and motivation (Law et al., 2020).
Does mentoring work?
Many studies show that mentoring is effective in increasing student success (Andrews & Clark, 2011; Campbell & Campbell, 1997; Crisp & Cruz, 2009; Eby et al., 2008; Gershenfeld, 2014; Jacobi, 1991; Lane, 2020), and studies confirm that the mechanism through which mentoring is effective operates through the mediating factors (Lane, 2020; Lunsford et al., 2017; Webb et al., 2016). However, effects seem to be generally small (average effect size .08; Eby et al., 2008).
What are characteristics of effective mentoring?
Below, I discuss characteristics of effective mentoring programs for which evidence is available. It is important to keep in mind that it is the combination of academic, social, and psychosocial support that makes a mentoring program effective (Lane, 2020). Among the characteristics of effective mentoring, the role and person of the mentor stands out: this appears to be the most important ingredient of any mentoring program.
Type of mentor: peer or teacher
The two main types of mentors are senior peers and faculty members. Peer mentors may be more suitable for providing social and psychosocial support (Leidenfrost et al., 2017), may be more available and approachable, and therefore easier to confide in (Lunsford et al., 2017). However, for academic integration and academic success peer and teacher mentoring appear equally effective (Lunsford et al., 2017).
Mentor characteristics
What are attributes of effective mentors? I discuss a number of them:
Helpfulness: The mentor’s helpful and empowering attitude makes a significant contribution to the psychosocial well-being of a mentee (Lane, 2020). Mentees even prefer a mentor who considers their needs and helps them make choices over an empathetic mentor (Terrion & Leonard, 2007).
Role model and openness: Mentors must be able to act as role models and reflect on their own experiences and challenges (Holt & Fifer, 2018). Furthermore, if a mentor can open up to the mentee in a healthy way, and sees the relationship as a joint learning process, this can lead to a relationship in which there is room for growth (Terrion & Leonard, 2007).
Self-efficacy: Mentor’s self-efficacy seems to be an important predictor of perceived support. Careful selection, training and guidance of mentors helps to ensure appropriate self-efficacy of mentors (Holt & Fifer, 2018).
Availability and approachability: Sufficient availability and good approachability of mentors leads to higher satisfaction, both for mentors and mentees (Ehrich et al., 2004; Terrion & Leonard, 2007).
Experience with mentoring: It seems that mentors do not need to have previous experience as mentors (Terrion & Leonard, 2007).
Type of activities
There is some evidence about which activities have proven effective. Social integration is promoted by facilitating contact between fellow students and with the mentor, by encouraging conversation and discussion, by exchanging ideas and experiences, and by supporting mentees and fellow students in problem-solving (Ehrich et al., 2004). Providing constructive feedback, and avoiding judgmental feedback, fosters academic integration and psychosocial well-being (Ehrich et al., 2004; Leidenfrost et al., 2011). Academic integration can also be promoted by helping students to interpret and respond to feedback (Law et al., 2020), and by supporting them in self-regulated learning in general, in writing, and in exam preparation (Andrews & Clark, 2011; Holt & Fifer, 2018). This latter type of support requires tacit knowledge, which peer mentors can share first-hand.
Duration & frequency
There is no consistent evidence about the duration and frequency of meetings (Lane, 2020). On the one hand, more contact between mentor and mentee leads to more perceived support (Holt & Fifer, 2018; Andrews & Clark, 2011) and higher student success (Campbell & Campbell, 1997). On the other hand, if the mentee is satisfied with the mentor’s support, the time the mentor spends with the mentee does not lead to more mentee satisfaction (Terrion & Leonard, 2007). So while the quantity of contact is important, the quality of contact appears equally significant.
Conclusion
Based on the literature, the conclusion seems justified that the person of the mentor, and how the mentor provides support, is the most important ingredient of any mentoring program: A helpful and open mentor who is approachable and able to empower the mentee can be a powerful source of effective mentoring.
References
Andrews, J., & Clark, R. (2011). Peer mentoring works! Aston University.
Bandura, A. (1997). Self-efficacy: The exercise of control. Freeman.
Campbell, T. A., & Campbell, D. E. (1997). Faculty/student mentor program: Effects on academic performance and retention. Research in Higher Education, 38(6), 727-742. https://doi.org/10.1023/A:1024911904627
Crisp, G., & Cruz, I. (2009). Mentoring college students: A critical review of the literature between 1990 and 2007. Research in Higher Education, 50(6), 525-545. https://doi.org/10.1007/s11162-009-9130-2
Eby, L. T., Allen, T. D., Evans, S. C., Ng, T., & DuBois, D. L. (2008). Does mentoring matter? A multidisciplinary meta-analysis comparing mentored and non-mentored individuals. Journal of Vocational Behavior, 72(2), 254-267. https://doi.org/10.1016/j.jvb.2007.04.005
Ehrich, L. C., Hansford, B., & Tennent, L. (2004). Formal mentoring programs in education and other professions: A review of the literature. Educational Administration Quarterly, 40(4), 518-540. https://doi.org/10.1177/0013161X04267118
Gershenfeld, S. (2014). A review of undergraduate mentoring programs. Review of Educational Research, 84(3), 365-391. https://doi.org/10.3102/0034654313520512
Holt, L. J., & Fifer, J. E. (2018). Peer mentor characteristics that predict supportive relationships with first-year students: Implications for peer mentor programming and first-year student retention. Journal of College Student Retention: Research, Theory & Practice, 20(1), 67-91. https://doi.org/10.1177/1521025116650685
Jacobi, M. (1991). Mentoring and undergraduate academic success: A literature review. Review of Educational Research, 61(4), 505-532. https://doi.org/10.3102/00346543061004505
Lane, S. R. (2020). Addressing the stressful first year in college: Could peer mentoring be a critical strategy? Journal of College Student Retention: Research, Theory & Practice, 22(3), 481-496. https://doi.org/10.1177/1521025118773319
Law, D. D., Hales, K., & Busenbark, D. (2020). Student success: A literature review of faculty to student mentoring. Journal on Empowering Teaching Excellence, 4(1), 22-39.
Leidenfrost, B., Strassnig, B., Schabmann, A., Spiel, C., & Carbon, C.-C. (2011). Peer mentoring styles and their contribution to academic success among mentees: A person-oriented study in higher education. Mentoring & Tutoring: Partnership in Learning, 19(3), 347-364. https://doi.org/10.1080/13611267.2011.597122
Lunsford, L. G., Crisp, G., Dolan, E. L., & Wuetherick, B. (2017). Mentoring in higher education. The SAGE handbook of mentoring, 20, 316-334.
Nuis, W., Segers, M., & Beausaert, S. (2023). Conceptualizing mentoring in higher education: A systematic literature review. Educational Research Review, 41, 100565. https://doi.org/10.1016/j.edurev.2023.100565
Terrion, J. L., & Leonard, D. (2007). A taxonomy of the characteristics of student peer mentors in higher education: findings from a literature review. Mentoring & Tutoring: Partnership in Learning, 15(2), 149-164. https://doi.org/10.1080/13611260601086311
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89-125. https://doi.org/10.3102/00346543045001089
Webb, N., Cox, D., & Carthy, A. (2016). You’ve got a friend in me: The effects of peer mentoring on the first year experience for undergraduate students. Paper presented at the Higher Education in Transformation Symposium, Oshawa, Ontario, Canada.
Recently, NWO released a new funding call for educational innovation projects, labelled “Scholarship of Teaching and Learning”. This is an interesting funding opportunity for academics who would like to strengthen their teaching. Academic teachers can apply for funds to put their innovative teaching ideas into practice. Indeed, this is a good opportunity to get funding for those teaching ideas you have been waiting to implement. It is also the time to re-think your teaching and teaching ideas and put them to the test.
Last week I visited Germany for my study on the climate crisis and the issue of hope. One German student asked: Can’t we as teachers just tell students to become vegetarians to save the planet?
What do you think: would it be a solution, or wise, to tell students what to eat, drink, vote, do or think in order to bring about change? I mean, shouldn’t we do something, given that studies about the effects of climate change on young people reveal that pessimism, guilt, hopelessness and fear are common in the new generation?
Bringing about change in times of the many present-day crises, with all their doom stories and anxiety, is an interesting yet challenging research topic. Interestingly, precisely in the midst of complex crises, those who provide education have a crucial role: to make the new generation appear to the world as powerful and innovative (Arendt, 1958). Not to reinforce fear or impose what to do or think, but to have the new generation discover from hope that a different future is possible, and that even a crisis includes profound problems, though complex and intractable, for which solutions can be found. A focus on hope is key!
Hope as a construct has received attention from many different angles, such as psychology, theology, and philosophy, and recently even from famous primatologist and anthropologist Jane Goodall (2021). Yet, although many authors endorse the need for and importance of hope, to date there has been little innovation in the ways in which hope can have a practical impact and lead to change, let alone in education. In my research project, hope has been incorporated into a pedagogy of hope. This pedagogy holds several powerful design principles stemming from pilots in teacher education institutes in both the Netherlands and Germany, and is now being tested in the context of the climate crisis. Around this climate crisis, pre-service teachers are known to feel very committed to teaching the topic, but at the same time concerned and anxious about the climate themselves, and unsure how to provide hopeful and effective teaching about the climate crisis in their secondary school internship classes (Bean, 2016).
The pedagogy of hope was implemented in a Dutch and a German teacher education institute. The preliminary outcomes show that participants were able to formulate specific intentions that are both directed toward hope for the climate and easy to implement in their actual teaching in secondary education. Also, many intentions proved to be action-oriented, and participants often used their creativity to find non-traditional ways of conveying climate hope. We also found hindrances to teaching hopefully, such as a lack of time, curriculum coverage, and a lack of attention to climate change and climate hope in textbooks. Also, because others may hold different opinions, it can be a controversial issue to teach in school.
On to the next steps!
Michiel Dam, researcher at ICLON, LTA teaching fellow
Multiple choice (MC) testing is generally viewed as an efficient way of assessing student knowledge. Up until today it has been a very popular assessment format in higher education. Especially in courses where a large number of students are enrolled, the MC format is often used as it allows for fast and reliable grading. Not surprisingly, as an educational consultant and teacher trainer much of my work has revolved around supporting teachers in creating and/or improving their MC assessments. Throughout the years, I have taught numerous training sessions on Testing and Assessment for the University Teacher Qualification at Leiden University. On the one hand these training sessions are designed to teach best practices, but at the same time the sessions are also designed to cater to teacher needs. As such, a large part of the sessions is focused on giving teachers instructions and tips on how to create good MC questions. To be sure, I have always managed to squeeze in some discussion on the downsides and many limitations of MC testing as well. But still… It always kept me feeling a bit uneasy. In giving the instructions that the program compelled me to, I might have inadvertently been endorsing this practice more than I would have wanted. Thus, this blogpost will be as much repentance as it is a cautionary exposition about some of the negative consequences that MC testing can have on student learning.
There are multiple reasons why MC exams could be considered detrimental to student learning. For instance, one often-heard criticism is that the recognition-based MC exam format will often result in students preparing for exams in a superficial way. Furthermore, one could argue that the ecological validity of MC exams is low and not representative of real-world situations. Also, the MC test format is by design not suitable for assessing higher levels of learning. These kinds of objections are well known, and they have also received considerable attention in the University Teacher Qualification courses on testing and assessment taught at Leiden University. I am not going to reiterate them extensively in this blogpost. Instead, I will discuss one particularly negative consequence of MC testing that I think is often neglected: the misinformation effect.
The misinformation effect
Before we consider the misinformation effect in the context of MC testing, we will first take a step back and consider some general research on the workings of human memory and how misinformation can result in misremembering. One of the first general demonstrations of the misinformation effect was provided by Loftus & Palmer (1974). In Experiment 2 of their seminal study, participants watched a short video clip of a car accident. After watching the video, participants were asked to give an estimate of the speed of the cars that were involved in the accident. Half of the participants were asked to estimate the speed of the cars when they “smashed into” each other, while the other half estimated the speed for when the cars “hit” each other. The subtle change of the verb used in the question resulted in a difference in the reported speed: participants estimated the speed to have been higher when they were in the “smashed” condition. More importantly, one week after giving the speed estimates, participants returned and were asked to indicate whether they remembered seeing broken glass in the video. Interestingly, participants in the “smashed” condition were much more likely to report having seen broken glass, even though none was visible in the video.
The results from the Loftus & Palmer study are often cited in the context of the reliability of eyewitness testimonies (and the effects that leading questions can have on misremembering). More importantly, the results are also taken as evidence in support of the idea that human memory is reconstructive in nature. During the retrieval of information from memory we reconstruct what we have previously experienced. When we have previously been exposed to some form of misinformation, this reconstruction process can result in substantial misremembering of previous experiences.
The misinformation effect in the context of MC questions
In the Loftus & Palmer (1974) study, the degree to which participants were exposed to misinformation was rather subtle (i.e., a small change of verb in the leading question). However, if we now consider the situation of an MC exam, the degree of exposure to misinformation seems much more extreme. A typical MC question will often have four alternatives for students to choose from, of which the majority (usually three) are incorrect. Thus, by using MC exams, we are intentionally exposing our students to misinformation. MC exams are designed to do just that. Surely, you could argue that the negative consequences of MC exams might be less severe, because students are aware that they are being exposed to misinformation. They go into the exam expecting this. However, in preparing the exam, the teacher has also taken care to phrase erroneous answers in such a way that they are plausible. Teachers are instructed to formulate alternatives that students are likely to mistakenly select as the correct one. By exposing students to misinformation in the context of MC exams, teachers might very well be sacrificing student learning for the sake of fast and reliable grading.
In a later study, Roediger & Marsh (2005) investigated the consequences of MC testing on student learning. In their experiment, participants studied short prose passages (or not) and were subsequently tested on the materials (or not) using MC questions with between two and six alternatives. One week later participants returned and received an open-ended short-answer test. Going into the test, participants were given explicit instructions not to guess. First of all, the results on the one-week test showed that the consequences of MC testing were not all bad: taking an MC test increased the retention of (correct) information. This finding, also referred to as the testing effect, is well established in the literature and has often been replicated across different test formats and settings (e.g., Rowland, 2014). On the other hand, however, being exposed to misinformation in the MC test also increased the production of erroneous answers on the one-week short-answer test. The degree to which participants produced erroneous (MC) answers tended to increase with the number of alternatives in the MC test. Note that this was the case even though participants had received explicit instructions not to guess on the short-answer test. Clearly, the misinformation effect is not just relevant in the context of eyewitness testimonies, but also in the context of assessment in higher education. MC exams can have an adverse effect on student learning in the sense that students can mistakenly recall incorrect answer options at a later point in time. Later research (Butler & Roediger, 2008) has shown that the misinformation effect resulting from MC testing can be reduced by giving students direct feedback (either after each individual question or after taking an entire test). However, in my experience, summative MC exams in higher education usually don’t provide immediate feedback to students.
In the absence of corrective feedback, students may remain under the impression that their erroneous responses on a test were correct.
To end on a positive note, there are promising alternatives to MC exams that teachers are exploring. For instance, at the Leiden University Medical Centre (LUMC) some teachers have started using Very Short Answer Questions (VSAQs) on exams as a substitute for MC questions. Among others, Dr. Alexandra Langers (Leiden University Teaching Fellow) and her PhD student Elise van Wijk have started investigating the consequences of the VSAQ exam format. VSAQs require students to generate short (1–4 word) answers to exam questions. Compared to MC questions, VSAQs require retrieval of correct answers rather than simple recognition, and as such these types of questions can be more conducive to student learning. Because answers are short, VSAQs still allow for some degree of automatic scoring (for some predetermined “correct” responses). This can keep grading time acceptable even for teachers with large classes. Some of the findings of the VSAQ research project have recently been published in an article in PLOS ONE. Replicating previous findings (Sam et al., 2018), van Wijk et al. (2023) demonstrate that VSAQ exams can have added benefits over MC tests in terms of higher reliability and discriminability. In addition, van Wijk et al. found that the average grading time per individual VSAQ was around two minutes. This seems very acceptable considering the cohort in the study consisted of more than 300 students. Hopefully, initiatives like the one at LUMC will pave the way for other teachers to start using assessment types that can be more supportive of student learning.
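To give a feel for how such automatic scoring of predetermined “correct” responses might work, here is a minimal sketch. All function names and the normalization rules are my own assumptions for illustration, not the actual LUMC grading system: answers that match an accepted response (after trimming and lower-casing) are graded automatically, and everything else is set aside for manual review by the teacher.

```python
# Hypothetical sketch of automatic VSAQ pre-scoring (not the LUMC system).
# Responses matching a predetermined list of accepted answers are graded
# automatically; all other responses are flagged for manual grading.

def normalize(answer: str) -> str:
    """Lower-case and collapse whitespace so trivial variations still match."""
    return " ".join(answer.lower().strip().split())

def prescore(responses, accepted):
    """Split (student, answer) pairs into auto-correct and needs-review lists."""
    accepted_norm = {normalize(a) for a in accepted}
    auto_correct, needs_review = [], []
    for student, answer in responses:
        if normalize(answer) in accepted_norm:
            auto_correct.append(student)
        else:
            needs_review.append((student, answer))
    return auto_correct, needs_review

# Example: a question whose accepted answers were fixed in advance
accepted = ["coeliac disease", "celiac disease"]
responses = [("s1", "Celiac disease "), ("s2", "Crohn's disease")]
auto, review = prescore(responses, accepted)
# auto == ["s1"]; ("s2", "Crohn's disease") is left for the teacher to grade
```

The point of the sketch is that only the unmatched responses require manual grading, which is how short-answer formats can keep grading time acceptable for large cohorts.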
References
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604–616. https://doi.org/10.3758/MC.36.3.604
Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning & Verbal Behavior, 13, 585–589. https://doi.org/10.1016/S0022-5371(74)80011-3
Roediger, H. L., & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1155–1159. https://doi.org/10.1037/0278-7393.31.5.1155
Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140, 1432–1463. https://doi.org/10.1037/a0037559
Sam, A. H., Field, S. M., Collares, C. F., van der Vleuten, C. P. M., Wass, V. J., Melville, C., Harris, J., & Meeran, K. (2018). Very-short-answer questions: Reliability, discrimination and acceptability. Medical Education, 52, 447–455. https://doi.org/10.1111/medu.13504
van Wijk, E. V., Janse, R. J., Ruijter, B. N., Rohling, J. H. T., van der Kraan, J., Crobach, S., de Jonge, M., de Beaufort, A. J., Dekker, F. W., & Langers, A. M. J. (2023). Use of very short answer questions compared to multiple choice questions in undergraduate medical students: An external validation study. PLOS ONE, 18, e0288558. https://doi.org/10.1371/journal.pone.0288558
How often have you referred to a teacher as ‘good’ or ‘bad’ in your school career? When assessing teacher quality, we often struggle with subjective judgments and varying criteria. Is a teacher considered good because they explain well? Or because the teacher’s students obtain high scores? Perhaps it depends on positive evaluations from students. In this research blog, we aim to redefine the way we assess the quality of university educators and propose a shift toward embracing teacher agency. We argue that educators should be seen as experts in their field, who can not only meet the needs of students but also foster innovation. Educators’ willingness to take responsibility and contribute to institutional progress can significantly foster a transformative educational environment. In this regard, educators transcend their traditional role as mere providers of education and instead become facilitators of educational innovation and development.
Shortcoming of current methods
The current methods of assessing the quality of university educators have been widely criticized. For instance, several studies raise concerns about the interpretation and usefulness of Student Evaluation of Teaching (SET) ratings. These studies revealed that SET ratings were significantly influenced by students’ perceptions of their educators, thereby calling into question the validity of this specific assessment tool (see Shevlin et al., 2000 and Spooren et al., 2013 for examples).
There are also educational concerns, for example, that current assessment methods do not contribute to educators’ professional development. Among other things, assessment methods are often criticized for providing little or no constructive feedback. Without this, educators may find it difficult to improve their teaching methods or address weaknesses. Moreover, critics argue that current assessments often fail to take into account the teaching context, such as subject matter, class size, level (bachelor’s or master’s), and student diversity and background. Each of these factors can significantly affect teaching methods and outcomes and should be considered when assessing educators. Finally, current assessment methods neglect broader purposes of teaching, such as the value of mentorship and creating an inclusive learning environment.
In some institutions, however, there is already a focus on a more holistic approach that integrates different sources of feedback, such as peer evaluations and self-reflection, to gain a more accurate understanding of teacher effectiveness, for example during the University Teaching Qualification track. The ability to reflect on what works and what does not work, and to understand why, is invaluable to teacher quality. Universities therefore play a crucial role in promoting these skills, and policy makers should recognize and value them by reflecting them in assessments. A shift to holistic assessment of educators emphasizes effective teaching and the long-term effect educators can have on student growth. Educators need to actively pursue their own development and make informed choices in any given situation, highlighting the significance of teacher agency in discussions about teacher quality.
Embracing Teacher Agency
Embracing teacher agency in the evaluation of teacher quality is crucial for fostering a culture of innovation, growth, and student-centered education. Teacher agency refers to the ability of educators to make intentional choices and take purposeful actions in their teaching practice. It involves educators’ capacity to initiate and control the learning environment, make informed pedagogical decisions in any given situation, and collaborate with colleagues and students. Teacher agency is often seen as a key factor in promoting effective teaching and learning in higher education. By recognizing and valuing teacher agency, universities can tap into the expertise and unique perspectives of their educators.
Moreover, teacher agency encourages continuous professional development. When educators have the autonomy to explore and experiment with different instructional strategies, they are more inclined to seek out new research, attend workshops, collaborate with colleagues, and reflect on their own teaching practices. This proactive approach to professional growth ultimately benefits both educators and students, as it promotes a culture of lifelong learning and innovation within the educational institution.
Embracing teacher agency also cultivates a sense of trust and collaboration between faculty members and administration. Rather than imposing rigid evaluation criteria, universities can create opportunities for open dialogue, feedback, and collaboration, allowing educators to take an active role in shaping their own professional growth and the overall direction of the institution.
Conclusion
In conclusion, embracing teacher agency is a powerful facilitator for elevating teacher quality in universities. By empowering educators to exercise their expertise, make informed decisions, and engage in continuous professional development, universities can foster a dynamic and student-centered educational environment that nurtures innovation, growth, and excellence in teaching and learning. It is essential to engage in an active debate about the assessment of educational quality and to challenge the predominant reliance on quantitative evaluations such as SET. Recognizing the complexity of assessing teacher quality, we propose a paradigm shift toward valuing teacher agency in universities. By involving educators in the assessment process and valuing their expertise, autonomy, and professional judgment, we can create a more meaningful evaluation system. Practically, this can be achieved through collaborative goal setting, self-reflection and self-assessment, peer observations and feedback, diverse assessment methods, and continuous professional development. By recognizing educators as professionals and empowering them to take an active role in their own assessment, we create a comprehensive and empowering process that benefits both educators and students. Embracing teacher agency thus not only benefits individual educators, but also fosters a dynamic, student-centered educational environment that promotes innovation, encourages growth, and strives for excellence in both teaching and learning. And that’s what we call: good teaching!
References
Biesta, G., Priestley, M., & Robinson, S. (2017). Talking about education: Exploring the significance of teachers’ talk for teacher agency. Journal of Curriculum Studies, 49(1), 38-54.
Cherng, H. Y. S., & Davis, L. A. (2019). Multicultural matters: An investigation of key assumptions of multicultural education reform in teacher education. Journal of Teacher Education, 70(3), 219-236.
Harris, A., & Jones, M. (2019). Teacher leadership and educational change. School Leadership & Management, 39(2), 123-126.
Imants, J., & Van der Wal, M. M. (2020). A model of teacher agency in professional development and school reform. Journal of Curriculum Studies, 52(1), 1-14.
Kusters, M., van der Rijst, R., de Vetten, A., & Admiraal, W. (2023). University lecturers as change agents: How do they perceive their professional agency?. Teaching and Teacher Education, 127, 104097.
Shevlin, M., Banyard, P., Davies, M., & Griffiths, M. (2000). The validity of student evaluation of teaching in higher education: love me, love my lectures?. Assessment & Evaluation in Higher Education, 25(4), 397-405.
Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598-642.
Tao, J., & Gao, X. (2017). Teacher agency and identity commitment in curricular reform. Teaching and Teacher Education, 63, 346-355.
Higher education institutions have adopted blended learning to realize the benefits brought by technology. Specifically, the flexibility afforded by blended courses makes learning less dependent on a specific space and time (Graham, 2006) and on a specific instructional method (Harding, 2012). In addition, blended learning is regarded as a more cost-effective approach than fully on-campus teaching (Harding, 2012). Beyond these potential benefits, blended learning also provides a context in which students need to adapt between online and on-campus learning environments (Chiu, 2021), which at the same time places new demands on teaching. Although many factors, such as teachers’ interaction with technology, academic workload, institutional environment, interactions with students, the instructor’s attitudes and beliefs about teaching, and opportunities for professional development (Brown, 2016), have been identified as influencing teachers’ adoption of blended learning, the institutional environment often stands out in an authentic educational context. Many teachers are encouraged to use blended learning without considering whether it is aligned with their teaching goals, and they often expect much from blended learning before they enter the classroom.
Potentials of blended learning
Blended learning is a balance of online and on-campus learning activities (Siemens, Gašević, & Dawson, 2015). Compared to fully on-campus and fully online learning, blended learning (through a Learning Management System) enables students to access learning materials whenever and wherever they want (Moskal, Dziuban, & Hartman, 2013) without losing in-person support and instruction in the classroom (Graham, Allen, & Ure, 2005). Additionally, blended learning can provide more effective instructional strategies to meet course objectives than only online or fully on-campus learning (Graham et al., 2005; Morgan, 2002). This is probably because students can tailor their learning path based on the personalized options afforded by blended learning (Medina, 2018). It can also improve students’ sense of belonging more than either on-campus or online learning alone (Rovai & Jordan, 2004). Zhu, Berri, and Zhang (2021) elaborated that, on the one hand, students may feel less disconnected because, in a blended learning situation, they can meet occasionally. On the other hand, they can have interactions and immediate feedback because they are interconnected on the online platform.
Challenges in Blended Learning
However, there are also many challenges in blended learning environments. Blended learning can be problematic if not appropriately designed and utilized, and even when it is, it does not necessarily lead to the desired goal (Fisher, Perényi, & Birdthistle, 2021). Specifically, finding the optimal “blend” and coordinating all the elements is challenging for instructional designers and teachers of blended learning (Kaur, 2013). If the “blend” is not well designed, students will encounter difficulties in navigating between online and on-campus learning. Furthermore, there are technical challenges in ensuring student performance and achievement by utilizing and supporting appropriate technologies. In addition, isolation in online learning environments and distraction by non-academic online activities can hinder students’ engagement (Rasheed, Kamsin, & Abdullah, 2020). A crucial element is the amount of guidance and feedback in blended learning, which is reduced compared to on-campus learning because of the limited in-person time with peers and teachers (Heinze & Procter, 2004). In all, blended learning requires more self-regulation skills and technology literacy than on-campus learning (Rasheed et al., 2020).
Technology as a Scapegoat
Ironically, although the blended model affords both advantages and disadvantages, many stakeholders, such as academics, course designers, and teachers, readily blame technology for low engagement, increased workload, and the failure of teaching. For example, some institutions and teachers were very glad when they heard they were allowed to return to campus. One teacher mentioned that she did not like teaching online with synchronous interaction because she felt that, to a large extent, she lost control over building emotional bonds with students.
The distrust of technology and blended learning is very disappointing, especially to educational specialists who have invested huge amounts of energy in making blended learning more effective for learning. Joe O’Hara, a full professor of education at Dublin City University (DCU) and the president of the European Educational Research Association, commented “The ‘new normal’? Two years of innovation and hard work consigned to the dustbin of (educational) history” under a news item reporting that “while some schools may close this week if local conditions are poor, they have been told days must be made up by cutting non-tuition activities.”
To be honest, making education more effective has always been a challenge. In the past, teachers had no scapegoat like technology to blame, only their own course design. It is rarely the educational elements themselves that hinder the effectiveness of education; what really matters is how we use them in teaching and learning. To be specific, before deciding to use blended learning, think about how it can support the learning goals, fit specific learning tasks, and facilitate the learning process.
Conclusions
There is no way for us to go back to traditional teaching after the two-year influence of online and blended learning during the pandemic. We highly recommend that teachers and course designers be more agentic in blended courses to make the best use of blended learning.
References
Brown, M. G. (2016). Blended instructional practice: A review of the empirical literature on instructors’ adoption and use of online tools in face-to-face teaching. The Internet and Higher Education, 31, 1-10. https://doi.org/10.1016/j.iheduc.2016.05.001
Chiu, T. K. (2021). Digital support for student engagement in blended learning based on self-determination theory. Computers in Human Behavior, 124, 106909. https://doi.org/10.1016/j.chb.2021.106909
Fisher, R., Perényi, Á., & Birdthistle, N. (2021). The positive relationship between flipped and blended learning and student engagement, performance and satisfaction. Active Learning in Higher Education, 22(2), 97–113. https://doi.org/10.1177/1469787418801702
Graham, C. R. (2006). Blended learning systems. In Curtis J. Bonk, Charles R. Graham (Eds.), The handbook of blended learning: Global perspectives, local designs, 1, pp. 3-21. Wiley Publishers.
Graham, C. R., Allen, S., & Ure, D. (2005). Benefits and challenges of blended learning environments. In M. Khosrow-Pour, Encyclopedia of Information Science and Technology, First Edition (pp. 253-259). IGI Global.
Harding, M. (2012). Efficacy of supplemental instruction to enhance student success. Teaching and learning in Nursing, 7(1), 27-31. https://doi.org/10.1016/j.teln.2011.07.002
Medina, L. C. (2018). Blended learning: Deficits and prospects in higher education. Australasian Journal of Educational Technology, 34(1). https://doi.org/10.14742/ajet.3100
Morgan, K. R. (2002). Blended Learning: A Strategic Action Plan for a New Campus. Seminole, FL: University of Central Florida.
Rasheed, R. A., Kamsin, A., & Abdullah, N. A. (2020). Challenges in the online component of blended learning: A systematic review. Computers & Education, 144, 103701. https://doi.org/10.1016/j.compedu.2019.103701
Rovai, A. P., & Jordan, H. M. (2004). Blended learning and sense of community: A comparative analysis with traditional and fully online graduate courses. International Review of Research in Open and Distributed Learning, 5(2), 1-13. https://doi.org/10.19173/irrodl.v5i2.192
Joksimović, S., Kovanović, V., Skrypnyk, O., Gašević, D., Dawson, S., & Siemens, G. (2015). The history and state of online learning. In G. Siemens, D. Gašević, & S. Dawson (Eds.), Preparing for the digital university: A review of the history and current state of distance, blended, and online learning (pp. 95-131). Retrieved from http://linkresearchlab.org/PreparingDigitalUniversity.pdf
Zhu, M., Berri, S., & Zhang, K. (2021). Effective instructional strategies and technology use in blended learning: A case study. Education and Information Technologies, 26(5), 6143-6161. https://doi.org/10.1007/s10639-021-10544-w
Not all students enter university with the same economic, social and cultural capital. Therefore, access, inclusiveness and well-being for all are key concerns in higher education across the world. Higher Education Institutions (HEIs) are part of the broader social fabric and not just places where students acquire academic skills; they also help students become more resilient in the face of adversity and feel more connected with the people around them. Not least, HEIs are the first place where students experience society in all its facets, and those experiences can have a profound influence on students’ attitudes and behavior in life.
Importance of students’ sense of belonging
As higher education becomes increasingly competitive, students come under more pressure to succeed in their grades, which increases their levels of stress. Stress has been linked to mental health problems, which are highly prevalent among the student population and have been shown to impact learning and well-being (Stallman & King, 2016). A number of factors can affect student retention and well-being, including the student’s social experience within the higher education environment. Students’ sense of belonging to their institutions – personal feelings of connectedness to the institution occurring in academic and social spheres – has come to be recognized as one of the most significant factors in students’ success and retention in higher education. While individual characteristics such as personality and propensity to connect may have some impact, it is also acknowledged that institutional factors play an important role. Elements such as the culture of the university or curriculum design may affect the students’ experiences, including their sense of belonging and connection to other students, staff and the institution (Kahu & Nelson, 2018).
COVID-19 pandemic
The lockdowns in response to the COVID-19 pandemic have interrupted conventional schooling, and in many countries online teaching has become the new routine for many students in higher education, but it presents significant challenges. Many students find it hard to maintain a sense of belonging to their peers, staff and institution. Students in the most marginalized groups, who lack access to digital learning resources or the resilience and engagement to learn on their own, are at risk of falling behind. Universities around the world have been uncertain about how long the COVID-19 crisis will last and how it might affect the mental health of students and faculty.
What to do?
Students have to cope with many challenges, both inside and outside HEIs, which can have immense consequences, varying from poor access, low engagement and feelings of distress to delays and drop-out. Yet these challenges and consequences can differ depending on students’ social, cultural, economic and language backgrounds, leading to a discrepancy between inclusive access and inclusive outcomes. But how should HEIs take up this massive challenge of access, inclusiveness and well-being for all? Certainly not with the umpteenth study on how students experience higher education in COVID-19 times. What is needed is more attention to connecting students, socializing activities, and embodied social settings, and less lecturing, testing and calling to account. Let’s stay connected to take up this massive challenge!
The consultative meeting of the parliamentary Standing Committee for Education, Culture and Science on the Strategic Agenda for Higher Education has stirred up a lot of dust on social media. For instance, there is a proposal by VVD member of parliament Dennis Wiersma to digitize lectures. As many lectures as possible should become freely available online to everyone. Prospective students could then judge whether a particular programme suits them, digital education could easily be combined with a job or care duties, and people would always have access to knowledge and be able to keep up in their profession. It was above all the remark that higher education could then also become somewhat cheaper that provoked many reactions. The Minister, incidentally, does not seem very enthusiastic about this plan, partly because she estimates that it would actually cost more money.
Problems with online education
But what the discussion should really be about is whether the digitization of higher education actually improves access to knowledge. A distinction must be made between education (what is offered) and learning. From the by now extensive knowledge base on open online higher education, we know that students differ greatly in their reasons for taking a course, and these reasons also change during a course. They also differ in their prior knowledge of the subject, have different preferences for acquiring knowledge and skills, and think differently about what a student and a teacher should do in education. And none of these differences are known when open online education is created and delivered.
As much on offer as possible
The solution to all these differences between students in open online education is to make as much material available as possible, from which each student can then make a choice. But research has already shown that:
many students in open online higher education insufficiently possess the self-regulation skills needed to find their way through what is on offer;
the education offered is mainly aimed at knowledge transfer by the teacher, something we have long known to be ineffective; and
students are rarely actively engaged with the content of the education, something we have long known does work.
So…
The result is that in open online higher education the strongest students learn the most, while the weaker students disengage or drop out. The digitization of education may increase access to everything that is on offer, but it decreases access to actually acquiring more knowledge and skills. A difference between education (what is offered) and learning.
References
Hendriks, R. A., de Jong, P. G. M., Admiraal, W. F., Reinders, M. E. J. (2019). Teaching modes and social-epistemological dimensions in Medical Massive Open Online Courses: Lessons for integration in campus education. Medical Teacher, 41(8), 917-926.
Hendriks, R. A., de Jong, P. G. M., Admiraal, W. F., & Reinders, M. E. J. (2020). Instructional design quality in medical Massive Open Online Courses for integration into campus education. Medical Teacher, 42(2), 156-163.
Jansen, R. (2019). Dealing with autonomy. Self-regulated learning in open online education. Doctoral dissertation, Utrecht University.
Pilli, O., Admiraal, W., & Salli, A. (2018). MOOCs: Innovation or stagnation? Turkish Online Journal of Distance Education, 19(3), 169-181.