
Harnessing GenAI in Your Academic Journey


Research involves managing vast amounts of information, and GenAI offers new ways to handle it effectively. This guide introduces tools for three areas of the research process:

  1. Literature Discovery and Evaluation: Tools designed to retrieve relevant scholarly information, aiding the iterative literature review process.
  2. Ideas and Language Enhancement: Chatbots built on different large language models (LLMs) for various language tasks, such as refining research questions, checking grammar, polishing language, translating, or even general information searches.
  3. Other Research Utility Tools: Beyond those offered by PolyU, numerous AI-integrated tools are available to assist with research tasks such as literature comprehension, information organization, and academic writing. Explore some popular tools with free plans.

Which GenAI tool should I use?

With the overwhelming number of GenAI tools, it can be challenging to determine which ones to use. Start by considering the following three factors to guide your decision:

  • Functionality
    • Identify tasks that could be enhanced, such as writing, coding, data analysis, or literature review
    • Select a tool that specializes in features relevant to your identified task
    • Refer to benchmarks to assess the effectiveness of a large language model (LLM)
    • Test the tool yourself to evaluate its usability and how well it meets your specific needs
  • Trustworthiness
    • Evaluate the tool's methodology for generating information and its inherent limitations
    • Verify the data source to ensure its suitability for academic research; if the source is undisclosed, apply the CRAAP test to evaluate the generated content
    • Ensure the tool securely encrypts the information you input, particularly private, sensitive, or confidential data
    • Seek feedback and reviews from other users for reference
  • Cost
    • Consider the time investment needed to learn and use the tool
    • Weigh the tool's subscription fee against your budget

LLM Benchmarks

Understanding how well an LLM performs across different functionalities enables you to select the most appropriate tool for your specific research needs. IBM describes LLM benchmarks as standardized frameworks for assessing the performance of large language models. These benchmarks facilitate the evaluation of LLM skills in different areas, such as coding, common sense, reasoning, natural language processing, and machine translation.

The table below consolidates benchmark scores, provided by ITS, for the LLMs available in PolyU GenAI. You can compare the scores to determine the most suitable model for your work.

| Model | MMLU | MMLU-Pro | BBH | GSM8K | MATH | HumanEval | Aider |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen2.5-72B-Instruct | 86.1 | 71.1 | 86.3 | 95.8 | 83.1 | 86.6 | 65.4 |
| Mistral-Large-Instruct-2407 | 84 | - | - | 93 | 70 | 92 | 60.2 |
| Llama-3.1-70B Instruct | 83.6 | 66.4 | 81.6 | 95.1 | 68 | 80.5 | 58.6 |
| GPT-4o | 86.5 | - | - | - | 76.6 | 90.2 | - |
| GPT-4o mini | 82 | - | - | - | 70.2 | 87.2 | - |
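The guide pairs this table with a radar chart of the same scores. Below is a minimal sketch of how such a chart could be reproduced in Python with matplotlib, using the figures from the table above. Only the two models with complete rows are plotted, and all labels and styling are illustrative choices, not part of the original guide.

```python
import numpy as np
import matplotlib.pyplot as plt

# Scores copied from the table above (models with complete rows only).
benchmarks = ["MMLU", "MMLU-Pro", "BBH", "GSM8K", "MATH", "HumanEval", "Aider"]
models = {
    "Qwen2.5-72B-Instruct":   [86.1, 71.1, 86.3, 95.8, 83.1, 86.6, 65.4],
    "Llama-3.1-70B Instruct": [83.6, 66.4, 81.6, 95.1, 68.0, 80.5, 58.6],
}

# One axis per benchmark, evenly spaced around the circle;
# repeat the first angle so each polygon closes.
angles = np.linspace(0, 2 * np.pi, len(benchmarks), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, scores in models.items():
    values = scores + scores[:1]  # close the polygon
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(benchmarks)
ax.set_ylim(0, 100)  # benchmark scores are percentages
ax.legend(loc="lower right", fontsize="small")
plt.show()
```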

Remarks:

  • MMLU: Evaluates the breadth of knowledge, depth of natural language understanding, and problem-solving ability across multiple subjects.
  • MMLU-Pro: An enhanced version of MMLU, focusing on more challenging reasoning tasks, including math problems.
  • BBH: A set of challenging reasoning tasks.
  • GSM8K: Tests mathematical reasoning skills through word problems.
  • MATH: A dataset of competition-level math problems spanning five difficulty levels and seven disciplines.
  • HumanEval: Assesses code generation capabilities through programming challenges (see the illustrative example after this list).
  • Aider: Evaluates the ability to translate natural language coding requests into executable code.
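To make the HumanEval row more concrete, here is a hypothetical HumanEval-style task, invented for illustration and not an actual item from the dataset: the model is given only the function signature and docstring and must generate an implementation that passes hidden tests.

```python
from typing import List

def running_max(numbers: List[int]) -> List[int]:
    """Return a list where each element is the maximum value seen so far.

    >>> running_max([1, 3, 2, 5, 4])
    [1, 3, 3, 5, 5]
    """
    # A model's completion would start here; this is one correct solution.
    result: List[int] = []
    for n in numbers:
        result.append(n if not result else max(result[-1], n))
    return result
```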

Other LLM Benchmarking Websites

Evaluate AI-generated content

While GenAI excels in information searching, offering speed, accessibility, and the ability to generate diverse perspectives, it is essential to critically assess its output, as AI-generated information can be incorrect and may mislead users.

The CRAAP test is a simple tool to help you evaluate information sources, including AI-generated content. It involves asking yourself questions across five key aspects to determine whether a source is suitable for your research or decision-making. Below are some suggested questions specifically focused on evaluating AI-generated information.

C - Currency: Timeliness of information
  • Does the AI tool provide up-to-date data?
  • Is the information provided current?
  • How often is the AI model updated?
  • Does the AI tool have a knowledge cutoff date that affects the currency of its information?

R - Relevance: Contextual fit
  • Who is the intended audience? Have you specified this in your prompt?
  • Have you consulted various sources before determining that this is one you will use?
  • Would you cite this in your academic research?

A - Authority: Source credibility
  • Is the AI tool developed by a reputable organization or individual?
  • What sources does the AI tool rely on? Are they credible?
  • Does the AI tool provide evidence or references? Can you verify the information by reading the original sources?
  • If any sources are provided, does the website URL offer insights about the source?
    .gov - a government site
    .edu - an educational site
    .com - a commercial site
    .org - an organization site

A - Accuracy: Reliability of content
  • Is there a risk of hallucination, where the generated information is fabricated?
  • Can you verify the information through other sources?
  • Is the information complete for your purpose?

P - Purpose: Reason for existence
  • Is there any bias in the AI-generated content?
  • Does "garbage in, garbage out" (GIGO) apply? The quality of a response is affected by the quality of the training data and the user input. Is the prompt well-engineered and free from bias?

Adapted from Evaluating Information - Applying the CRAAP Test by Meriam Library, California State University, Chico.