EasySLR allows you to assess the accuracy and reliability of its AI-driven screening by directly comparing AI decisions with those made by human reviewers. This validation step helps you understand how closely the AI aligns with your decision-making and ensures reliability before scaling the process to a larger dataset.
You can compare AI and human reviewer decisions in two ways:
A. Within the Platform
- Navigate to the "Statistics" section and click on the “Quality”.
- Evaluate AI Performance: Analyse how the AI's decisions align with human decisions to assess how effectively the AI is screening.
- Decision Match Rate: The percentage of citations where the AI's decision matched the human reviewer's.
- Recall (Sensitivity): Of all articles that were ultimately included, the proportion the AI correctly identified for inclusion.
- Conflict %: How often your decision differed from the AI's, giving insight into potential misalignments or misinterpretations.
These metrics help assess how closely the AI understands the protocol and where conflicts occur.
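If you prefer to double-check these figures yourself, they can be reproduced from any paired list of decisions. The sketch below is a minimal illustration, not EasySLR's own implementation; the decision labels, list names, and the use of the human reviewer's "Include" decisions as the final-inclusion set are assumptions for the example.

```python
# Minimal sketch of the screening metrics described above.
# Assumes paired "Include"/"Exclude" labels per citation; all names are illustrative.

def screening_metrics(ai_decisions, human_decisions):
    """Return decision match rate, recall (sensitivity), and conflict % as percentages."""
    pairs = list(zip(ai_decisions, human_decisions))
    total = len(pairs)

    # Decision Match Rate: citations where the AI and the human agreed.
    matches = sum(1 for ai, human in pairs if ai == human)
    decision_match_rate = matches / total * 100

    # Recall: of the citations the human reviewer included (used here as a
    # stand-in for the finally included set), how many the AI also included.
    human_included = [ai for ai, human in pairs if human == "Include"]
    recall = (
        sum(1 for ai in human_included if ai == "Include") / len(human_included) * 100
        if human_included else 0.0
    )

    # Conflict %: the complement of the match rate.
    conflict_pct = 100 - decision_match_rate
    return decision_match_rate, recall, conflict_pct


# Example with made-up decisions for five citations.
ai = ["Include", "Exclude", "Include", "Exclude", "Include"]
human = ["Include", "Exclude", "Exclude", "Exclude", "Include"]
match, recall, conflict = screening_metrics(ai, human)
print(f"Decision match rate: {match:.0f}%, Recall: {recall:.0f}%, Conflict: {conflict:.0f}%")
```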
B. Using the Excel Export
- Navigate to the "Articles" section> Click “Downloads”> Download All to Excel.
- The Excel file contains the following information along with additional details:
- Column R – AI Decision (Include/Exclude)
- Column S – AI Reason
- Column T – AI Notes (Rationale)
- If you are using AI as one of the reviewers, you can view conflicts between the AI and the human reviewer in Column U.
By reviewing this file, you can:
- Perform a side-by-side comparison.
- Identify articles where AI and human decisions differ.
- Understand the reasoning behind AI decisions to evaluate whether it aligns with your interpretation of the protocol.
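The comparison can also be done programmatically on the exported file. The sketch below is a rough illustration only: the file name and the human-decision column name ("Reviewer Decision") are assumptions, while the column letters follow the description above.

```python
# Side-by-side comparison on the Excel export (illustrative sketch).
import pandas as pd

df = pd.read_excel("easyslr_export.xlsx")  # assumed file name

# Column letters from the export description, addressed positionally
# (Column R is the 18th column, i.e. 0-based index 17).
ai_decision = df.iloc[:, 17]   # Column R - AI Decision (Include/Exclude)
ai_reason   = df.iloc[:, 18]   # Column S - AI Reason
ai_notes    = df.iloc[:, 19]   # Column T - AI Notes (Rationale)

# Hypothetical: assume the human reviewer's decision is in a column
# named "Reviewer Decision"; adjust to match your actual export.
human_decision = df["Reviewer Decision"]

# Identify citations where the AI and the human reviewer disagree.
conflicts = df[ai_decision != human_decision]
print(f"{len(conflicts)} of {len(df)} citations have conflicting decisions")

# Review the AI's rationale for each conflicting citation.
for idx in conflicts.index:
    print(ai_decision[idx], "vs", human_decision[idx], "-", ai_notes[idx])
```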
By following these steps, you can effectively compare AI and human decisions in EasySLR, enhancing the accuracy and efficiency of your systematic review process.