Phone: 2766-6863 (service hours)
Online Form
Contact your Faculty Librarians for in-depth research questions
Research involves managing vast amounts of information, and GenAI offers new ways to handle it effectively. This guide introduces GenAI tools that support different stages of the research process.
With the overwhelming number of GenAI tools available, it can be challenging to determine which ones to use. Once you have identified a task where AI can boost efficiency, match your goal to the right tool(s) by evaluating the following three factors:
After selecting a tool, do not forget to try it yourself. Test tools hands-on to evaluate performance and time savings, and to confirm they integrate well with your workflow. Also, keep exploring emerging tools that may outperform your current options!
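If you want to make hands-on testing a little more systematic, a simple timing harness can help you compare how long different tools take on the same task. The Python sketch below is a minimal illustration: the two `summarize_with_*` functions are hypothetical stand-ins, not real APIs, and you would replace them with calls to the tools you are actually evaluating.

```python
import time

# Hypothetical stand-ins for the GenAI tools under comparison;
# replace these with real API calls or wrappers in your own test.
def summarize_with_tool_a(text: str) -> str:
    return "summary from tool A"  # placeholder output

def summarize_with_tool_b(text: str) -> str:
    return "summary from tool B"  # placeholder output

def time_tool(tool, sample_text: str) -> float:
    """Return the wall-clock seconds a tool takes on one sample task."""
    start = time.perf_counter()
    tool(sample_text)
    return time.perf_counter() - start

# Use a passage that is representative of your own research material.
sample = "Paste a representative passage from your research here."
for name, tool in [("Tool A", summarize_with_tool_a),
                   ("Tool B", summarize_with_tool_b)]:
    print(f"{name}: {time_tool(tool, sample):.3f}s")
```

Timing alone does not capture output quality, so treat the numbers as one input alongside your own judgment of the results.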
While GenAI offers clear advantages for information searching, such as speed, accessibility, and the ability to surface diverse perspectives, it is essential to critically assess the output, as AI-generated information can be incorrect and may mislead users.
The CRAAP test is a simple tool to help you evaluate information sources, including AI-generated content. It involves asking yourself questions across five key aspects to determine whether a source is suitable for your research or decision-making. Below are some suggested questions specifically focused on evaluating AI-generated information.
| Criteria | Description | Questions |
|---|---|---|
| C - Currency | Timeliness of information | When was the model's training data last updated? Is the information current enough for your topic? |
| R - Relevance | Contextual fit | Does the output actually address your research question? Is the level of detail appropriate for your needs? |
| A - Authority | Source credibility | Does the tool cite its sources? Can the claims be traced back to credible authors or publications? |
| A - Accuracy | Reliability of content | Can you verify the information against reliable sources? Are there signs of hallucinated facts or references? |
| P - Purpose | Reason for existence | Why was this content generated? Could the prompt, training data, or tool design introduce bias? |
Adapted from Evaluating Information - Applying the CRAAP Test by Meriam Library, California State University, Chico
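One lightweight way to apply the test consistently is to record your answers in a structured checklist. The Python sketch below is a minimal example of this idea: the five criteria come from the table above, while the 1-5 scoring scale, class name, and field names are illustrative assumptions rather than part of the CRAAP test itself.

```python
from dataclasses import dataclass, field

# The five CRAAP criteria from the table above; the 1-5 scoring
# scale is an illustrative assumption, not part of the test itself.
CRITERIA = ["Currency", "Relevance", "Authority", "Accuracy", "Purpose"]

@dataclass
class CraapAssessment:
    source: str                                  # what you are evaluating
    scores: dict = field(default_factory=dict)   # criterion -> 1..5

    def rate(self, criterion: str, score: int) -> None:
        """Record your judgment for one criterion."""
        if criterion not in CRITERIA or not 1 <= score <= 5:
            raise ValueError("unknown criterion or score out of range")
        self.scores[criterion] = score

    def is_complete(self) -> bool:
        """True once every criterion has been rated."""
        return all(c in self.scores for c in CRITERIA)

    def average(self) -> float:
        """Mean score across the criteria rated so far."""
        return sum(self.scores.values()) / len(self.scores)

# Example: rating one piece of AI-generated content.
assessment = CraapAssessment("GenAI summary of a journal article")
for criterion in CRITERIA:
    assessment.rate(criterion, 4)  # replace with your own judgment
print(assessment.is_complete(), round(assessment.average(), 2))
```

Keeping assessments in a structure like this makes it easy to compare several AI outputs on the same topic side by side.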
Understanding how well an LLM performs across different functionalities enables you to select the most appropriate tool for your specific research needs. IBM describes LLM benchmarks as standardized frameworks for assessing the performance of large language models (LLMs). These benchmarks facilitate the evaluation of LLM skills in different areas, such as coding, common sense, reasoning, natural language processing, and machine translation.
However, please also be aware of the limitations of benchmarks. Increasingly, leading models are achieving similar scores and overfitting certain benchmarks, which causes some benchmarks to lose their usefulness in distinguishing LLM capabilities.
The generator below consolidates benchmark scores* for the LLMs available in PolyU GenAI. You can compare the scores to determine the most suitable model for your work.
*Data retrieved from llm-stats.com
This tool lets you compare the performance of the LLMs provided by PolyU. Select specific models and benchmarks, then click 'Generate Chart' to visualize the results. Please note that some benchmark data are unavailable; in those cases, no results will be shown.
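If you would like to build a similar comparison yourself, the Python sketch below plots benchmark scores as grouped bars with matplotlib. The model names and all scores are placeholder values, and the three benchmark names are only common examples; substitute the actual figures from llm-stats.com or the generator above.

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder data: the model names and every score below are
# illustrative only; substitute real figures from llm-stats.com.
models = ["Model A", "Model B", "Model C"]
benchmarks = ["MMLU", "HumanEval", "GSM8K"]
scores = np.array([
    [85.0, 74.0, 92.0],   # Model A
    [88.0, 81.0, 90.0],   # Model B
    [82.0, 69.0, 87.0],   # Model C
])

x = np.arange(len(benchmarks))   # one group of bars per benchmark
width = 0.25                     # width of each bar within a group

fig, ax = plt.subplots(figsize=(7, 4))
for i, model in enumerate(models):
    ax.bar(x + i * width, scores[i], width, label=model)

ax.set_xticks(x + width)         # center labels under each group
ax.set_xticklabels(benchmarks)
ax.set_ylabel("Benchmark score")
ax.set_title("LLM benchmark comparison (placeholder data)")
ax.legend()
plt.tight_layout()
plt.show()
```

A grouped bar chart like this makes it easy to spot when one model leads on some benchmarks but trails on others, which is exactly the trade-off the generator above is designed to surface.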