The Statistics section in EasySLR provides quantitative data and metrics about the screening and review process. It brings together a range of statistical measures and analyses that help you understand and assess the collected screening data.
The Statistics section includes the following components:
Overview: The “Overview” section summarises your project's progress concisely. It presents key metrics such as the number of screened articles, included and excluded articles, and overall screening progress in a clear numerical format.
This layout is designed for clarity, breaking down progress across various screening stages. When you upload your RIS file, the system automatically captures and displays this data, allowing you to track your project's progression through each screening stage effortlessly.
EasySLR automatically records and updates your progress in real time based on your actions as you screen articles. This ensures you always have up-to-date information on the number of articles screened, their stage status, and those remaining.
This visibility into your project's progress lets you plan and allocate resources as needed.
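EasySLR's import pipeline isn't exposed, but the idea behind this automatic capture can be sketched in a few lines. The snippet below is purely illustrative and is not EasySLR code: it counts the records in an uploaded RIS export, which is essentially the starting figure the Overview displays before screening begins. The RIS `ER` end-of-record tag is standard; the file name and function are hypothetical.

```python
# Hypothetical sketch: derive the initial article count from an uploaded RIS file.
# This is NOT EasySLR's internal code; it only illustrates how the Overview
# figures can be captured automatically at import time.

def count_ris_records(path: str) -> int:
    """Count records in a RIS export by counting end-of-record (ER) tags."""
    count = 0
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if line.startswith("ER  -"):  # each RIS record ends with an ER tag
                count += 1
    return count

if __name__ == "__main__":
    total = count_ris_records("search_export.ris")  # file name is illustrative
    print(f"Articles imported and awaiting Title/Abstract screening: {total}")
```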
Progress by Stage: This feature offers daily insight into the progress of article reviews across stages such as Title/Abstract Screening and Full-text Review. It shows the volume of articles screened each day, aiding in the effective management and scheduling of screening tasks.
Progress by Reviewer: This section enables you to monitor the daily progress of individual reviewers in screening articles. It offers a detailed view of the number of articles each reviewer has screened on any given day. This feature is especially beneficial for project managers or team leads who aim to ensure equitable contributions, identify workload disparities, and provide targeted support as necessary. It also facilitates performance assessment of individual reviewers, empowering informed decisions to enhance the efficiency of the screening process.
*Different colors are used to distinguish between reviewers. This visual cue makes it easy to see which reviewer screened which articles.
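As a rough sketch of the aggregation behind this view, the example below groups a hypothetical screening log by reviewer and day. The log structure, names, and dates are assumptions made for illustration and do not reflect EasySLR's actual data model.

```python
# Hypothetical sketch of per-reviewer daily progress, as shown in "Progress by Reviewer".
# The screening-log structure below is assumed for illustration only.
from collections import Counter
from datetime import date

screening_log = [
    {"reviewer": "Reviewer A", "screened_on": date(2024, 5, 1)},
    {"reviewer": "Reviewer A", "screened_on": date(2024, 5, 1)},
    {"reviewer": "Reviewer B", "screened_on": date(2024, 5, 1)},
    {"reviewer": "Reviewer A", "screened_on": date(2024, 5, 2)},
]

# Count screened articles per (reviewer, day), mirroring the dashboard's daily bars.
daily_counts = Counter((e["reviewer"], e["screened_on"]) for e in screening_log)

for (reviewer, day), n in sorted(daily_counts.items()):
    print(f"{day} - {reviewer}: {n} article(s) screened")
```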
Quality: The Quality dashboard evaluates each reviewer's adherence to screening guidelines and protocols, such as considering PICOS criteria and project-specific inclusion/exclusion criteria.
This assessment is especially important when multiple reviewers are involved (for example, two human reviewers, or one human reviewer working alongside the AI reviewer).
It tracks key metrics such as reviewer consistency and decision accuracy, helping ensure the integrity of your research workflow. These data points highlight areas for process improvement, enabling teams to maintain high standards for their systematic reviews.
All metrics are calculated based on the final decision on the article. The description of how each metric is calculated is provided below, and a small worked example follows the list. You can also view this information by hovering over the 'i' icon next to each metric.
Pending final decision: All articles that are still awaiting a final decision from all required reviewers.
Included correctly: Articles that were correctly included.
Excluded correctly: Articles that were correctly excluded.
Included incorrectly: Articles that were incorrectly included (should have been excluded).
Excluded incorrectly: Articles that were incorrectly excluded (should have been included).
Conflicts Resolved: The number of decision conflicts resolved by the reviewer.
Decision match rate (%): The percentage of inclusion and exclusion decisions made by this reviewer that match the final reviewer's decisions.
Recall (%): The percentage of articles with a final decision of inclusion that this reviewer correctly included.
Conflict rate (%): The percentage of decisions that resulted in conflicts with other reviewers, excluding the lead reviewer.
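To make these definitions concrete, here is a small worked example that computes the decision match rate, recall, and conflict rate for a single reviewer against the final decisions. The data and field layout are invented for illustration, and the conflict handling is simplified (it does not model the lead-reviewer exclusion); EasySLR's internal calculation may differ in detail.

```python
# Hypothetical worked example of the Quality metrics described above.
# Decisions are compared against the final decision on each article.
articles = [
    # (reviewer decision, final decision, conflicted with another reviewer?)
    ("include", "include", False),   # included correctly
    ("include", "exclude", True),    # included incorrectly -> conflict
    ("exclude", "exclude", False),   # excluded correctly
    ("exclude", "include", True),    # excluded incorrectly -> conflict
    ("include", "include", False),   # included correctly
]

# Decision match rate: share of this reviewer's decisions that match the final decision.
matches = sum(1 for mine, final, _ in articles if mine == final)
decision_match_rate = 100 * matches / len(articles)

# Recall: of the articles whose final decision was "include", how many this reviewer included.
should_include = [a for a in articles if a[1] == "include"]
recall = 100 * sum(1 for mine, final, _ in should_include if mine == "include") / len(should_include)

# Conflict rate (simplified): share of decisions that ended in a conflict with another reviewer.
conflict_rate = 100 * sum(1 for *_, conflicted in articles if conflicted) / len(articles)

print(f"Decision match rate: {decision_match_rate:.0f}%")  # 60%
print(f"Recall:              {recall:.0f}%")               # 67%
print(f"Conflict rate:       {conflict_rate:.0f}%")        # 40%
```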