
A guide to scientific skepticism

Don’t take my word for it, but being a scientist is about being a skeptic.

About not being happy with simple answers to complex problems.

About always asking more questions.

About not believing something merely because it seems plausible…

… nor about reading a scientific study and believing its conclusions because, again, it all seems plausible.

“In some of my darker moments, I can persuade myself that all assertions in education:
(a) derive from no evidence whatsoever (adult learning theory),
(b) proceed despite contrary evidence (learning styles, self-assessment skills), or
(c) go far beyond what evidence exists.”
– Geoff Norman

Why you should be a skeptical scientist

The scientific literature is biased. Positive results are published widely, while negative and null results gather dust in file drawers (1, 2). This bias operates at many levels, from which papers are submitted to which papers are published (3, 4). This is one reason why p-hacking is (consciously or unconsciously) used to game the system (5). Furthermore, researchers often give a biased interpretation of their own results, use causal language when this is not warranted, and misleadingly cite others’ results (6, 7). Studies which have to adhere to a specific protocol, such as clinical trials, often deviate from that protocol by not reporting outcomes or by silently adding new ones (8). Such changes are not random, but typically favor reporting positive effects and hiding negative ones (9). This is certainly not unique to clinical trials; published articles in general frequently contain incorrectly reported statistics, with 35% containing substantial errors which directly affect the conclusions (10-12). Meta-analyses from authors with industry involvement are massively published yet fail to report caveats (13). Moreover, when the original studies are of low quality, a meta-analysis will not magically fix this (the ‘garbage in, garbage out’ principle).

Note that these issues are certainly not restricted to quantitative research or (semi-)positivistic paradigms, but are just as relevant for qualitative research from a more naturalistic perspective (14-16).

everybody lies

This list could go on for much longer, but the point has been made: everybody lies. Given the need to be skeptical, how should we read the scientific literature?


Using reflective reasoning to prevent bias

Reading is simple, but reading to become informed is much harder. This is especially the case when we are dealing with scientific theories. To aid you in this endeavor I will borrow the ‘reflective reasoning’ method from medical education. There is some evidence that it enhances physicians’ clinical reasoning, increases diagnostic accuracy, and reduces bias (17-19).

Step One. Pick a theory. This can be your own theory, or any theory present in the academic literature. We will call this theory the diagnosis.

Step Two. Now list all the symptoms which are typical of this diagnosis. In other words: which data/studies support the theory? The key step is to differentiate between findings in the following manner:

  1. Which findings support the theory?
  2. Which findings contradict the theory?
  3. Which findings are expected given the theory, but are missing?

Why can this be helpful? Because by our nature we fixate on findings which confirm what we already believe (20). These questions can help reduce confirmation bias and give you a much more balanced perspective on the literature.

If you are not aware of any contradictory or missing evidence then take this as a sign that you might have been reading a biased section of the literature.

Step Three. In addition to the initial theory, list all alternative theories which could potentially explain the same array of findings, and for each of them again list the three types of findings, like this:


Theories    Confirming findings    Contradictory findings    Expected but missing findings
Theory A    Findings 1-3           Findings 4-5              Findings 6-9
Theory B    Findings 2-5           Finding 1                 Findings 10-11
Theory C    Findings 1-4           Findings 2-3, 5           Findings 6-11

Why is this step so important? Because most findings can be explained by multiple theories, just as any given symptom can be explained by multiple diagnoses. If we only checked whether a particular theory is supported by some data, then any theory would suffice, because every theory has some support. In the above example, theories B and C have the same number of supporting findings, but differ dramatically in the number of contradictory and expected-but-missing findings.
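The tabulated comparison above is easy to mechanize. Here is a minimal sketch in Python, using the illustrative theory names and finding numbers from the table; the naive counting score is my own simplification and deliberately ignores the quality and specificity of individual findings:

```python
from dataclasses import dataclass

@dataclass
class TheoryEvidence:
    """Evidence ledger for one theory: finding IDs per category."""
    name: str
    confirming: frozenset
    contradicting: frozenset
    expected_missing: frozenset

    def support_ratio(self) -> float:
        # Naive score: share of all listed findings that confirm the theory.
        # A real evaluation must also weigh quality and specificity.
        total = (len(self.confirming) + len(self.contradicting)
                 + len(self.expected_missing))
        return len(self.confirming) / total if total else 0.0

theory_b = TheoryEvidence("Theory B", frozenset({2, 3, 4, 5}),
                          frozenset({1}), frozenset({10, 11}))
theory_c = TheoryEvidence("Theory C", frozenset({1, 2, 3, 4}),
                          frozenset({2, 3, 5}), frozenset(range(6, 12)))

# Both theories have four confirming findings, yet B scores clearly higher
# once contradictory and expected-but-missing findings are counted as well.
print(f"{theory_b.name}: {theory_b.support_ratio():.2f}")  # 4 of 7 findings
print(f"{theory_c.name}: {theory_c.support_ratio():.2f}")  # 4 of 13 findings
```

The point of the sketch is only that counting confirming evidence alone cannot separate the two theories; the denominator (all three finding types) does.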

Findings differ not only in the quality of evidence they provide (from uninformative to very convincing) but also in their specificity: does a finding support only one theory, or does it fit many models? If a theory rests mainly on findings which are equally well explained by other theories, it is not a strong theory.

In the end, a theory is more than the sum of its supporting or contradicting findings. Nevertheless, carefully reflecting on the quantity and quality of evidence for any theory is an essential step for being a critical reader.


Why you should not be a skeptical scientist

No matter how critical or reflective you are, you will always remain biased. It’s human nature. That’s why you should not be a skeptical scientist by yourself.

Step Four. Invite others to take a very, very critical look at the theories you use and write about. In other words, ask others to be a ‘critical friend’. For a truly informative experience, invite them to be utterly brutal and criticize any and every aspect of whichever theory you hold dear, and then thank them for showing you a different perspective.

Luckily, there just happens to already exist an excellent platform where academics relentlessly criticize anything that is even remotely suspect. It’s called Twitter. Get on it. It’s fun and very informative.


More tips for the skeptical scientist

In addition to the reflective reasoning procedure, here are some more tips which can help you become a more critical, or skeptical, scientist. Do you have tips of your own? Please share!

  1. Play devil’s advocate: For every finding which is used to support a theory or claim, try to argue how it could be used to contradict it and/or support a different theory.
  2. Use these wonderful (online) tools to check: whether there is evidence for p-hacking (21), whether reported statistics such as p-values are correct (22 or 23), and whether reported Likert-scale summaries are plausible (24).
  3. Check the repeatability of a finding: For every finding, find at least one other study which reports the same finding using the same procedure and/or a different procedure. Likewise, actively search for contradicting findings.
  4. Doing a review or meta-analysis? Do all of the above, plus make funnel plots (25).
  5. Read the References section.
  6. Even if you’re not a fan, try pre-registration at least once.
  7. Use the free G*Power tool to calculate the power of published studies post hoc, and use it a priori to plan your own studies (26).
  8. When reporting empirical data, strive to visualize it in the most informative way. Bar plots are easily one of the least informative visualizations. Use more informative formats instead, such as the pirate plot in the image below (27).

pirate plot
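Two of the checks above can be done in a few lines of code. The sketch below (Python, standard library only) recomputes a two-sided p-value from a reported z statistic, in the spirit of tip 2, and approximates post-hoc power for a two-sample comparison, in the spirit of tip 7. The function names, tolerance, and worked numbers are my own illustration; the cited tools are far more complete and also handle t, F, and χ² statistics:

```python
from math import sqrt
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def p_from_z(z: float) -> float:
    """Two-sided p-value implied by a reported z statistic."""
    return 2 * (1 - _Z.cdf(abs(z)))

def reported_p_consistent(z: float, reported_p: float,
                          tol: float = 0.005) -> bool:
    """Flag a paper when the reported p does not match its own statistic."""
    return abs(p_from_z(z) - reported_p) <= tol

def two_sample_power(d: float, n_per_group: int,
                     alpha: float = 0.05) -> float:
    """Normal-approximation power of a two-sided, two-sample test for
    standardized effect size d with n_per_group participants per arm."""
    crit = _Z.inv_cdf(1 - alpha / 2)   # critical value, e.g. 1.96
    ncp = d * sqrt(n_per_group / 2)    # noncentrality of the test statistic
    return (1 - _Z.cdf(crit - ncp)) + _Z.cdf(-crit - ncp)

# z = 1.96 implies p ≈ 0.05, so "p = .03" next to z = 1.96 is a red flag.
print(round(p_from_z(1.96), 3))
print(reported_p_consistent(1.96, 0.03))
# Classic benchmark: d = 0.5 with 64 per group gives roughly 80% power.
print(round(two_sample_power(0.5, 64), 2))
```

Even this crude version catches the most common reporting error, a p-value that cannot follow from its own test statistic, and shows how quickly post-hoc power drops for small samples.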


  1. Dwan, K., Gamble, C., Williamson, P. R., & Kirkham, J. J. (2013). Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PloS one, 8(7).
  2. Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505.
  3. Coursol, A., & Wagner, E. E. (1986). Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias.
  4. Kerr, S., Tolliver, J., & Petree, D. (1977). Manuscript characteristics which influence acceptance for management and social science journals. Academy of Management Journal, 20(1), 132-141.
  5. Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biol, 13(3).
  6. Brown, A. W., Brown, M. M. B., & Allison, D. B. (2013). Belief beyond the evidence: using the proposed effect of breakfast on obesity to show 2 practices that distort scientific evidence. The American journal of clinical nutrition, 98(5), 1298-1308.
  7. Van der Zee, T. & Nonsense, B. S. (2016). It is easy to cite a random paper as support for anything. Journal of Misleading Citations, 33(2), 483-475.
  9. Jones, C. W., Keil, L. G., Holland, W. C., Caughey, M. C., & Platts-Mills, T. F. (2015). Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC medicine, 13(1), 1.
  10. Bakker, M., & Wicherts, J. M. (2011). The (mis) reporting of statistical results in psychology journals. Behavior Research Methods, 43(3), 666-678.
  11. Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior research methods, 1-22.
  12. Nonsense, B. S., & Van der Zee, T. (2015). The thirty-five percent is false, it is approximately fifteen percent. The Journal of False Statistics, 33(2), 417-424.
  13. Ebrahim, S., Bance, S., Athale, A., Malachowski, C., & Ioannidis, J. P. (2015). Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. Journal of clinical epidemiology.
  14. Collier, D., & Mahoney, J. (1996). Insights and pitfalls: Selection bias in qualitative research. World Politics, 49(01), 56-91.
  15. Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The qualitative report, 8(4), 597-606.
  16. Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in nursing science, 8(3), 27-37.
  17. Schmidt, H. G., van Gog, T., Schuit, S. C., Van den Berge, K., Van Daele, P. L., Bueving, H., … & Mamede, S. (2016). Do patients’ disruptive behaviours influence the accuracy of a doctor’s diagnosis? A randomised experiment. BMJ quality & safety.
  18. Mamede, S., Schmidt, H. G., & Penaforte, J. C. (2008). Effects of reflective practice on the accuracy of medical diagnoses. Medical education, 42(5), 468-475.
  19. Van der Zee, T. & Nonsense, B. S. (2016). Did you notice how I just cited myself; How do you know I am not just cherry-picking? Journal of Misleading Citations, 33(2), 497-484.
  20. Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. The quarterly journal of experimental psychology, 29(1), 85-95.
  25. Duval, S., & Tweedie, R. (2000). Trim and fill: a simple funnel‐plot–based method of testing and adjusting for publication bias in meta‐analysis. Biometrics, 56(2), 455-463.

My Questionnaire Quest

“Quest: a journey towards a goal, serves as a plot device and (frequently) as a symbol… In literature, the objects of quests require great exertion on the part of the hero, and the overcoming of many obstacles”.

From last December until now, I have been anxiously awaiting mail in my mailbox here at ICLON. By the time it was Christmas, I was singing “All I want for Christmas, is maiiiiil”! Really, nothing could make me a happier PhD student than receiving a bunch of mail like this on my desk every week:

Because, ladies and gentlemen, my first data have arrived! Finally tangible proof that my first year of work as a PhD student has paid off.

Constructing and distributing a questionnaire was somewhat more complicated than I thought. Here are some things you should think about:


  1. Constructing the questions. You cannot just ask whatever you fancy, I discovered. You need to justify what you ask your participants, preferably supported by the literature and/or by existing questionnaires covering your topic.
  2. Online or hardcopies by mail? That was a very relevant question. Since I wanted about 2000 students to fill in my questionnaire, an online survey would have been much easier for me, but it would also have yielded a much smaller response rate… Considering that my questionnaire took about 20 minutes to complete, I chose to send it around in hardcopy. That way, students could fill it out during the corresponding teacher’s lesson and would be more likely to complete the survey.
  3. Recruiting participants. In my case, I needed schools and teachers to commit to my research, ensuring that their students and teachers would fill in my questionnaires. I already knew some teachers, but definitely not enough to cover my whole research population. So I started asking around: colleagues (did they have any contacts with schools that might be interested?), friends, acquaintances, institutions also interested in my research topic… Eventually, I even made a list of suitable schools, looked up their telephone numbers and called the specific teachers to explain my research to them and invite them to participate. Do not underestimate this step. It. Takes. Time.
  4. Logistics. After I printed and stapled over 2000 questionnaires (thank God for automatic staplers), they needed to get to the right persons in the right schools. And those right persons should also be able to send the piles of paper back to the right person: me! I am so, so lucky to have had help from people in my department and from the post office (and occasionally my boyfriend, who helped with the stapling). I kept a full record of how many questionnaires should go to which school, how many for students (white paper) and how many for teachers (orange paper). I needed piles of envelopes with the right addresses, as well as self-addressed envelopes in which the teachers could send the questionnaires back by mail.
  5. Communication. It helps to be clear about the agreement you have with the corresponding teacher. Remind them in which classes the questionnaire has to be distributed. In general: remind them. And keep track of who returned how many questionnaires to you.


And then everything needed to get back to me. I waited anxiously. At this moment my response rates luckily seem very high, although one of my greatest fears also came true: some of the envelopes got lost in the mail. While I am still waiting for the very last envelopes to be returned, the scanning and analyses can begin…

What are your experiences when constructing and distributing a questionnaire? What were the obstacles you met, and do you have any tips and tricks for others? Please let me know in the comments below!


Killing two birds with one stone: Audit trail to secure research transparency and accountability


Until recently, academic integrity within Dutch educational research was largely a matter of individual and collective responsibility rather than of specific compliance measures. University researchers are expected to conduct themselves in an ethical manner in the ways they design, carry out and report academic research. However, procedures and requirements with respect to ethical conduct and academic integrity in research projects are changing. Research funders and scientific journals require stricter procedures, incidents involving research integrity, safety and privacy call for more transparency and accountability, and recent developments in the domain of open resources raise questions about the accessibility and re-use of data.


Much of the literature on research ethics is framed in terms of misconduct or academic corruption and tends to focus on the negative framing of academic integrity as “corrupt” or “bad” practice. Fabrication and falsification of results, together with plagiarism and ethical abuses by academic researchers, are frequently highlighted. Yet ICLON, Leiden University Graduate School of Teaching, designed a procedure that builds on trusting academic researchers to conduct themselves in an ethical manner in the ways they design, carry out and report academic research.


A distinctive characteristic of the ICLON research program is its double focus on developing educational theory and practice, which means that the research projects in the program aim to simultaneously contribute to the improvement of educational practice and generate knowledge about this practice. Because of this double focus, many projects in the research program are highly contextualized and characterized by complex research processes which require many interpretations by the researchers and lack standardized procedures of analysis.


Akkerman and colleagues designed and evaluated a so-called audit procedure to ascertain whether such studies meet the criterion of trustworthiness. This procedure concerns the visibility, comprehensibility, and acceptability of the research process. A decision in the research process must be made explicit and communicated in order to be judged at all, and substantiated in order to be judged on its logic and content.

The data management procedure of ICLON builds on this audit procedure and includes an audit trail, which allows an auditor to track, understand and assess the research process from the final conclusions as reported in a paper back to the data. This data management procedure is considered a practical and useful way to secure both transparency of the research process and accountability of its researchers. It can also be understood as a way to support researchers to improve the quality of the research process and to raise their awareness of the importance of academic integrity.

Room for excellence?

Every professional needs room to function well, right?

Some professionals are even expected to create and use this room themselves. The beginning teachers in the Eerst-de-Klas track and the OnderwijsTraineeship are an example of this. The government set up these special work-study tracks within teacher education to get excellent academics enthusiastic about teaching. In secondary education, these academics are supposed to create and use optimal room to keep their job challenging for themselves and to let an innovative wind blow through the school. Fine words are used in the descriptions of these programs. The Eerst De Klas traineeship, for instance, offers a leadership program ‘shaped by the most prominent organizations in the Netherlands.’ Alongside the teacher education program, the OnderwijsTraineeship offers ‘masterclasses that give you an intensive introduction to the full breadth of the education field’.

Yet an important mechanism seems to be overlooked. The teachers in both tracks start as beginners in school organizations and need time to get to know the organization and to see their own position within the ‘established order’. A concept such as enculturation fits this phase well. The balance between creating, using and being given room is therefore an important element of these initiatives. We (Jacobiene Meirink & Anna van der Want) are very curious how the current teachers in the EDK and OnderwijsTraineeship experience the balance between creating, using and being given room! We are investigating this theme in a project funded by ProBO.

For more information, see


How to improve the speaking skills of individual pupils in secondary schools in classes of 30 pupils?


In September 2014 I started as a PhD candidate at Leiden University (ICLON) in the context of the Dudoc-Alfa program, supervised by Prof. Dr. Jan van Driel and Dr. Ir. Fred Janssen. The main purpose of the Dudoc-Alfa program is the improvement and innovation of foreign language acquisition in secondary schools.

Research focus: speaking skills
My research focuses on feedback on speaking skills in foreign languages. Many language teachers in secondary schools find it difficult to pay attention to each pupil’s performance and to adjust their feedback to each individual. What type of feedback is effective, and when and how should it be given?

Feedback: What, when, how?
If these are questions you are also interested in, as a teacher or as a researcher, please contact me to share your ideas, opinions, advice and wishes:

Academic research by teachers – a huge research capital!

Like practitioners such as clergy, lawyers or clinical psychologists, teachers are tightly linked to a practice which is mostly examined by outside researchers. Teachers have years of experience working with different instructional methods, tools and formats. They are all experts in their school subject, knowing which learning strategies their students apply and which misconceptions they have. And teachers have an accurate idea of the context in which they teach. In other words, teachers have developed practical wisdom about their practice, which is invaluable for research on this practice. And, last but not least, they have easy access to information about teaching and learning that is mostly unreachable for external researchers. Yet academic research about teaching and learning is mostly done by outside educational researchers, who do not possess these advantages. There might be two reasons why this is common practice. Firstly, educational research requires particular competencies that researchers have acquired and teachers lack. Secondly, outside researchers examine an extensive set of practices, which allows them to generalize conclusions about these teaching practices. But aren’t these actually myths? And shouldn’t we think harder about how academic research can take advantage of teachers’ experience with, and access to, their practice? And wouldn’t that be via research by teachers themselves?

