EasySLR’s AI can streamline screening by providing inclusion/exclusion suggestions based on your protocol. However, to get the best results, it's important to first calibrate the AI to ensure it aligns with your decision-making. The steps below outline how to do this effectively.
Step 1: Upload a Sample Set for Evaluation
Start by uploading a sample of 100–200 citations to your project. This smaller, controlled dataset is ideal for testing how well the AI performs in comparison to human reviewers. We recommend using a representative sample that includes both clear includes and excludes for balanced evaluation.
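If your search export is large, a short script can pull the calibration sample for you. Below is a minimal sketch, assuming the export is a CSV with one citation per row; the filenames and sample size are placeholders, not anything EasySLR requires.

```python
import pandas as pd

# Full search export, one citation per row (column names are illustrative).
citations = pd.read_csv("search_results.csv")

# Draw a fixed-size random sample; the seed keeps the sample reproducible.
sample = citations.sample(n=150, random_state=42)
sample.to_csv("calibration_sample.csv", index=False)
print(f"Sampled {len(sample)} of {len(citations)} citations")
```

A purely random sample may under-represent clear includes, so if pilot searching has already surfaced some obvious includes, add a handful of them manually to keep the evaluation balanced.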
Step 2: Use Protocol Optimiser for Targeted Suggestions
EasySLR’s Protocol Optimiser helps you strengthen your protocol to improve AI alignment:
It suggests specific edits to clarify ambiguous criteria.
Recommendations may include refining definitions, adding examples, or adjusting key inclusion/exclusion phrases (for example, replacing a vague criterion such as "adults" with "participants aged 18 years or older").
Review and apply the suggested changes to your protocol.
Step 3: Enable and Run AI Screening
Once your sample set is uploaded:
Enable Title & Abstract AI in the Settings.
Allow the AI to generate inclusion/exclusion suggestions based on the protocol.
The AI will provide a suggestion for each citation, along with a rationale for the decision.
Step 4: Manually Screen the Same Sample
Have your reviewers screen the same sample manually, giving you a human decision set to compare against the AI's suggestions in the next step.
Step 5: Compare AI and Human Decisions
Compare the AI's suggestions against your manual decisions. Look at the conflict rate (how often the two disagree) and recall (how many of the human includes the AI also included), and pay particular attention to studies a reviewer included but the AI would exclude. One way to compute these figures is sketched below.
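If you have both decision sets in a spreadsheet, the metrics are straightforward to compute yourself. A minimal sketch, assuming a CSV with hypothetical `human` and `ai` columns holding "include"/"exclude" labels:

```python
import pandas as pd

# Assumed export: one row per citation, with the human decision and the
# AI suggestion. Column names and labels are placeholders.
df = pd.read_csv("calibration_decisions.csv")

# Conflict %: share of citations where the AI and the human disagree.
conflict_rate = (df["human"] != df["ai"]).mean() * 100

# Recall: of the citations the human included, the fraction the AI also included.
human_includes = df[df["human"] == "include"]
recall = (human_includes["ai"] == "include").mean() * 100

print(f"Conflict rate: {conflict_rate:.1f}%")
print(f"AI recall on human includes: {recall:.1f}%")

# Missed includes are the costliest screening errors; count them for review.
missed = human_includes[human_includes["ai"] == "exclude"]
print(f"Missed includes to review: {len(missed)}")
```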
Step 6: Recalibrate and Rerun AI
If more protocol changes are required, update the protocol and rerun the AI on the sample to see whether performance improves.
This calibration process may take a few iterations, but it’s critical to ensure accuracy in large-scale reviews.
Once you're confident in the AI's decisions, you're ready to move on to screening the full dataset.
Checklist before moving to full dataset:
Protocol finalized and saved
Sample set reviewed manually
Conflicts between AI and human resolved
Protocol Optimiser suggestions applied (if needed)
AI performance metrics (recall, conflict %) within acceptable range
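To make the last item concrete, a small check can gate the move to the full dataset. The thresholds below are illustrative assumptions, not EasySLR defaults; pick values that match your review's risk tolerance (screening typically prioritizes high recall).

```python
# Placeholder thresholds -- not EasySLR defaults; tune them per review.
MIN_RECALL = 90.0     # %: the AI should rarely miss a human include
MAX_CONFLICT = 20.0   # %: overall disagreement should be low

def ready_for_full_dataset(recall: float, conflict_rate: float) -> bool:
    """True when calibration metrics fall inside the acceptable range."""
    return recall >= MIN_RECALL and conflict_rate <= MAX_CONFLICT

print(ready_for_full_dataset(recall=94.2, conflict_rate=12.5))  # -> True
```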