
ASReview: AI-assistance for article screening

 

If you’re a researcher, you have probably conducted literature reviews or will do so in the future. Depending on your keywords, a search in online databases can easily return several hundred or even thousands of hits. One of the most time-consuming steps is screening all those titles and abstracts to determine which articles may be of interest to your review. Isn’t there a way to speed up this process? Yes, there is!

 

Recently, we started a literature review on the use of Open Educational Resources (OER) in K-12 education, and we came across a great tool for article screening. In this blog, we introduce the tool and share our experiences.

 

Training the system

Researchers at Utrecht University have developed a free, open-source screening tool to help researchers work through the enormous digital pile of papers: ASReview LAB (see www.asreview.nl). A 2-minute introduction video explains how it works. In short, the program helps you review your documents systematically and faster than you ever could on your own by, as they put it, “combining machine learning with your expertise, while giving you full control of the actual decisions”. We just had to try that!

 

First, we made sure that all the papers we had found included a title and an abstract, as those are what we would use to screen for relevance. It was very easy to import our RIS file (exported from Zotero in our case, but any reference management system will do) with all the hits from our search query. Then it was time to teach ASReview! We provided the system with a selection of relevant and irrelevant articles, which it uses to identify potential matches and so speed up the screening. Following the ASReview documentation, we used the default settings of the AI model.
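As a small illustration of that first preparation step, a pre-import completeness check might look like the sketch below. The records and field names are made up for the example (this is our own helper idea, not part of ASReview itself): the point is simply that anything without both a title and an abstract should be fixed or dropped before import.

```python
# Toy records as they might come out of a reference-manager export
# (hypothetical data; real RIS exports have many more fields).
records = [
    {"title": "OER adoption in K-12 schools", "abstract": "We study..."},
    {"title": "Open textbooks at scale", "abstract": ""},       # missing abstract
    {"title": "", "abstract": "An abstract without a title."},  # missing title
]

def is_screenable(record):
    """A record can only be screened on title + abstract if both are present."""
    return bool(record.get("title", "").strip()) and bool(record.get("abstract", "").strip())

screenable = [r for r in records if is_screenable(r)]
incomplete = [r for r in records if not is_screenable(r)]

print(len(screenable), len(incomplete))  # prints "1 2": one usable record, two to repair
```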

 

The researcher as the oracle

Once the system was trained, the screening phase could start. At each step, we judged whether a document was relevant, adding notes to justify our decisions. In cases of doubt, where the abstract alone was not enough to make a judgment, we consulted the full text of the article. With each decision, ASReview adapts its learning model so that as many relevant papers as possible are shuffled to the top of the stack. That’s why it is important to make the ‘right’ decision. We worked in ‘Oracle Mode’ (other modes are available, but for reviews this is the best fit), which makes the researcher ‘the oracle’. ASReview stresses the importance of taking your time over decisions: “If you are in doubt about your decision, take your time as you are the oracle. Based on your input, a new model will be trained, and you do not want to confuse the prediction mode.” (ASReview, 2023). So make sure you carefully formulate your research questions and inclusion criteria before you start screening; this makes it much easier to decide whether an article is of interest.
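The loop described above (label, retrain, re-rank, repeat) can be sketched as a toy. To keep it self-contained, the sketch below scores abstracts by simple word overlap with everything labelled relevant so far; this is only an illustration of the active-learning idea, not ASReview’s actual model, and the documents and the stand-in “oracle” rule are invented for the example.

```python
# Toy active-learning loop: after each "oracle" decision, the remaining
# abstracts are re-scored against everything labelled relevant so far, so
# likely-relevant papers rise to the top. ASReview's real default model
# (feature extraction plus a trained classifier) is far more sophisticated.

def score(abstract, relevant_words):
    """Number of words shared with the relevant-so-far vocabulary."""
    return len(set(abstract.lower().split()) & relevant_words)

unlabelled = {
    "d1": "open educational resources in primary schools",
    "d2": "hospital staffing and patient outcomes",
    "d3": "teachers adopting open resources for education",
}
# Seed vocabulary from the prior-knowledge articles given during training.
relevant_words = set("open educational resources k-12".split())

labels = {}
while unlabelled:
    # Present the highest-scoring remaining document to the oracle first.
    doc_id = max(unlabelled, key=lambda d: score(unlabelled[d], relevant_words))
    # Stand-in for the human judgment (a real oracle reads the abstract).
    decision = "education" in unlabelled[doc_id]
    labels[doc_id] = decision
    if decision:  # "retrain": fold the newly relevant text into the model
        relevant_words |= set(unlabelled[doc_id].lower().split())
    del unlabelled[doc_id]

print(labels)  # d1 and d3 come up first and are relevant; d2 is screened last
```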

 

To avoid endless manual screening (which is rather the point of using this tool), it is recommended to formulate a stop rule. We based ours on the recommendations provided by ASReview and Van de Schoot et al. (2021): screening would stop once at least 33% of the documents had been reviewed AND ASReview had presented 25 consecutive irrelevant items. This prevents exhaustive screening while maintaining rigor and reliability. A tip for staying focused is to limit the time you spend screening per day (for example, a maximum of two hours). Throughout the screening process, ASReview’s dashboard gave a visual overview of progress and of the decisions made.
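For readers who want to apply a similar rule, the logic is easy to express as a small helper (our own sketch, not a built-in ASReview feature; the 33% and 25-item thresholds are the ones from our rule above):

```python
def should_stop(decisions, total_records, min_fraction=0.33, consecutive_irrelevant=25):
    """Stop rule: True once at least `min_fraction` of all records have been
    screened AND the last `consecutive_irrelevant` decisions were all
    irrelevant. `decisions` is a list of booleans (True = relevant) in
    screening order."""
    if len(decisions) < min_fraction * total_records:
        return False
    tail = decisions[-consecutive_irrelevant:]
    return len(tail) == consecutive_irrelevant and not any(tail)

# Example: 784 records in total; 300 screened, of which the last 25 were
# all judged irrelevant -> both conditions hold, so screening may stop.
decisions = [True] * 275 + [False] * 25
print(should_stop(decisions, total_records=784))  # prints True (300/784 ≈ 38%)
```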

 

In total, 460 items were excluded by the system, while 324 were manually screened, with 173 rejected for various reasons. These reasons ranged from focusing on specific educational technologies to addressing broader educational issues beyond the scope of the study. To ensure the reliability of the screening process, a second researcher independently assessed a random sample of 10% of the documents.

 

Combining AI and human judgment

After completing the screening process, it is very easy to download a file with an overview of all the decisions made, covering both relevant and irrelevant articles. The dashboard and the output files help you report why certain articles were excluded from the review. Notably, the PRISMA model already accommodates articles excluded through AI. In conclusion, ASReview offers a powerful way to streamline the literature review process, using AI to speed up screening while preserving the integrity of the review. It combines the efficiency of AI with human judgment, saving you time – something welcomed by all.

Lysanne Post & Marjon Baas

 

 

Read more: