EasySLR is designed to help reviewers streamline their workflows and make evidence-based decisions faster. Two powerful features, Protocol Text Review and AI Performance Analysis, ensure your review protocol is aligned, AI-ready, and easy to evaluate.
1. Protocol Text Review
The Protocol Text Review is a smart tool that analyses your protocol and helps you improve it for AI screening. It focuses on strengthening two critical areas:
A. PICOS Refinement
The tool reviews your defined Population, Intervention, Comparator, Outcomes, and Study Design and provides suggestions to:
Clarify vague or ambiguous terms (e.g., what constitutes “elderly” or “high PD-L1 expression”)
Resolve conflicting inclusion/exclusion rules
Structure criteria more precisely so the AI can apply them accurately (see the sketch after this list)
Recommend standardised language and examples to improve reproducibility
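To make "structuring criteria more precisely" concrete, the hypothetical Python sketch below restates a vague population criterion in explicit, machine-checkable terms. The field names and thresholds are invented for illustration and are not an EasySLR format:

```python
# Hypothetical illustration only: field names and thresholds are invented
# and do not reflect any EasySLR data format.

vague_population = "elderly patients with high PD-L1 expression"

# The same criterion made explicit enough for consistent AI (and human) application:
structured_population = {
    "age": {"operator": ">=", "value": 65, "unit": "years"},  # "elderly" defined
    "pd_l1_expression": {
        "operator": ">=",
        "value": 50,
        "unit": "% tumour proportion score",  # "high" quantified
    },
}
```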
B. Hierarchy of Selection Criteria
This section ensures that your inclusion and exclusion reasons are logically ordered and clearly described, which is crucial for:
Ensuring transparent reviewer decisions
Helping the AI apply criteria consistently
Avoiding confusion when multiple exclusion criteria overlap
The tool will:
Highlight missing or overlapping decision rules
Suggest reordering criteria (e.g., excluding animal studies before design-based exclusions; see the sketch after this list)
Recommend more specific exclusion labels for edge cases (e.g., single-arm studies, subpopulation-only data)
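To show why ordering matters when exclusion criteria overlap, here is a minimal Python sketch of a first-match exclusion hierarchy. The labels, criteria, and rules are illustrative assumptions, not EasySLR's built-in logic:

```python
# Illustrative sketch of a first-match exclusion hierarchy; the labels and
# rules are assumptions for this example, not EasySLR's built-in criteria.

EXCLUSION_HIERARCHY = [
    ("E1: Animal study", lambda a: not a["human_subjects"]),          # checked first
    ("E2: Wrong study design", lambda a: a["design"] == "single-arm"),
    ("E3: Subpopulation-only data", lambda a: a["subpopulation_only"]),
]

def first_exclusion_reason(article):
    """Return the first matching exclusion label, or None if the article passes."""
    for label, rule in EXCLUSION_HIERARCHY:
        if rule(article):
            return label  # only the highest-priority reason is reported
    return None

# An animal study that is also single-arm is reported as E1, not E2:
# checking criteria in a fixed order resolves overlapping exclusion reasons.
print(first_exclusion_reason(
    {"human_subjects": False, "design": "single-arm", "subpopulation_only": False}
))
```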
How to use it:
Go to the Tools section
Select 'Protocol Text Review'
Click 'New Text Review'
Select the Stage: Title/Abstract or Full Text
Click 'Analyze Protocol'
Review suggestions for:
Refining PICOS definitions
Adjusting or reordering the hierarchy of selection criteria
Apply recommended changes manually.
Tip: Run a Protocol Text Review before launching large-scale screening or rerunning the AI to ensure optimal AI performance.
2. AI Performance Analysis
The AI Performance Analysis provides a transparent way to evaluate how well the AI is performing on your dataset. It helps build confidence in the AI by letting you validate its decisions before scaling.
What it does:
Compares AI suggestions with reviewer decisions on a screened sample
Generates recall and agreement metrics (a worked example follows this list)
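As a rough guide to what these metrics measure, the sketch below computes recall and simple percent agreement for binary include/exclude decisions. The decision lists are made-up sample data, and EasySLR may define its metrics differently:

```python
# Made-up sample data; EasySLR computes these metrics for you and may
# define them differently (e.g., Cohen's kappa instead of raw agreement).

ai_decisions       = ["include", "exclude", "include", "include", "exclude"]
reviewer_decisions = ["include", "exclude", "exclude", "include", "include"]

pairs = list(zip(ai_decisions, reviewer_decisions))

# Recall: of the articles the reviewer included, how many did the AI also include?
reviewer_includes = [(ai, rev) for ai, rev in pairs if rev == "include"]
recall = sum(ai == "include" for ai, _ in reviewer_includes) / len(reviewer_includes)

# Agreement: the fraction of all articles where the AI and reviewer agreed.
agreement = sum(ai == rev for ai, rev in pairs) / len(pairs)

print(f"recall={recall:.2f}, agreement={agreement:.2f}")  # recall=0.67, agreement=0.60
```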
How to use it:
Go to the Tools section
Select 'AI Performance Analysis'
Click 'New Performance Analysis'
Select the Stage and Scope
Click 'Run Analysis'
Review key metrics like recall and agreement
Note:
For best results, run the analysis on a representative sample of articles
If AI is disabled, performance analysis cannot be initiated
Use both together for optimal AI-guided screening: refine your protocol first, then validate AI performance before scaling up.