PolyU Library

Harnessing GenAI in Your Academic Journey


Chatbots built on different large language models (LLMs) support a range of language tasks, such as refining research questions, checking grammar, polishing language, and translating, as well as enhanced information search.


PolyU GenAI App

PolyU ITS offers a variety of GenAI chatbots for staff and students to use in teaching, research, and work-related activities. Access requires authentication with a PolyU NetID. For technical issues with this service, please contact ITS.

Based on documentation from ITS and the official model websites, the following compares the available GenAI large language models (LLMs) to help you decide which tool to use.


Please note that the information in this section is subject to revision and may change over time.
Last updated: September 2025

 

PolyU students and staff have 1,850 credits per month to use across all foundation models and image generation models. ITS suggests the following:

  • General Chat Models: Great for text-based and quick questions
  • Reasoning Models: 
    • Ideal for mathematics, logic, and coding tasks
    • Capable of breaking down tasks into intermediate steps, generating a chain of thought internally or explicitly, allowing them to produce more accurate solutions without very detailed guidance
    • Most of the foundation models in PolyU GenAI are reasoning models or have reasoning capabilities
  • Visual Models: Best for handwriting recognition and understanding images
  • Image Generation Models (not covered in the comparisons below): Use these to create new images or edit images from text prompts

The comparisons below cover the foundation models, excluding image generation models, to aid your decision-making.


The first comparison covers the models that consume credits: the GPT series (OpenAI), Gemini (Google AI), DeepSeek-R1 and Hunyuan-T1 (Tencent).

Each entry below lists the release date, key features, maximum prompt size (in English characters, plus image/document file limits), and credit consumption.

GPT-4.1 and GPT-4.1-mini (OpenAI)
Release: Apr 2025
Max. prompt size: 3,500,000 English characters; up to 3 images, each <=7MB; 1 document, <=7MB
Credit consumption: GPT-4.1 consumes more credits than GPT-4.1-mini

GPT-5 and GPT-5-mini (OpenAI) (NEW)
Release: Aug 2025
Key features:
  • Reasoning models, with major gains in coding and long-text handling
  • Reasoning effort can be fine-tuned to adjust the speed, quality and length of responses
  • Observable reasoning summary
  • Adjustable reasoning extent
  • Training data up to October 2024
  • GPT-5 offers top-tier reasoning and coding, while GPT-5-mini balances speed and cost efficiency with good reasoning
  • GPT-5-mini outperforms GPT-4.1 in many benchmarks
  • Learn more: GPT-5 prompting guide
Max. prompt size: 1,000,000 English characters; up to 3 images, each <=7MB; 1 document, <=7MB
Credit consumption:
  • GPT-5: Input 1.25 credits/1,000 tokens; Output 10 credits/1,000 tokens, including chain of thought
  • GPT-5-mini: Input 0.25 credits/1,000 tokens; Output 2 credits/1,000 tokens, including chain of thought

Gemini 2.5 Pro and Gemini 2.5 Flash (Google AI) (NEW)
Release: June 2025
Key features:
  • Feature reasoning capabilities
  • Support web search
  • Observable reasoning summary
  • Adjustable reasoning extent
  • Training data up to Jan 2025
  • Gemini 2.5 Pro offers top-tier reasoning, while Gemini 2.5 Flash balances speed and cost efficiency with good reasoning
  • Learn more: Prompt design strategies, Gemini 2.5 Flash Model Card
Max. prompt size: 3,500,000 English characters; up to 3 images, each <=7MB; 1 document, <=7MB
Credit consumption:
  • Gemini 2.5 Pro: Input 2.5 credits/1,000 tokens; Output 15 credits/1,000 tokens, including chain of thought
  • Gemini 2.5 Flash: Input 0.3 credits/1,000 tokens; Output 2.5 credits/1,000 tokens, including chain of thought

DeepSeek-R1-0528 (DeepSeek AI), hosted on Azure AI Foundry and Alibaba Cloud
Release: May 2025
Key features:
  • Reasoning model, outstanding in mathematics, programming and general logic tasks
  • Observable chain of thought
  • Performance approaching leading models, such as o3 and Gemini 2.5 Pro
Max. prompt size:
  • Azure AI Foundry: 400,000 English characters; 1 document, <=7MB
  • Alibaba Cloud: 200,000 English characters; 1 document, <=7MB
Credit consumption (observable reasoning steps also consume tokens):
  • Azure AI Foundry: Input 1.35 credits/1,000 tokens; Output 5.40 credits/1,000 tokens
  • Alibaba Cloud: Input 0.55 credits/1,000 tokens; Output 2.20 credits/1,000 tokens

Hunyuan-T1 (Tencent)
Release: Mar 2025
Key features:
  • Reasoning model, good at Chinese and English knowledge, maths, and logical reasoning
  • Observable chain of thought
  • Utilizes a Mixture of Experts (MoE) architecture, enabling fast processing
Max. prompt size: 100,000 English characters; 1 document, <=7MB
Credit consumption: Input 0.1375 credits/1,000 tokens; Output 0.55 credits/1,000 tokens (observable chain of thought also consumes tokens)
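
As a rough illustration of how these rates add up against the 1,850 monthly credits, the sketch below estimates the cost of a single exchange as input tokens/1,000 × input rate plus output tokens/1,000 × output rate. The token counts are hypothetical; actual counts depend on each model's tokenizer and on how much chain of thought it produces (which is billed as output).

```python
# Rough credit-cost estimate for one exchange, using the published rates above.
# Token counts are hypothetical; real counts depend on the model's tokenizer and
# on how much chain of thought the model produces (billed as output tokens).

RATES = {  # credits per 1,000 tokens: (input, output)
    "GPT-5": (1.25, 10.0),
    "GPT-5-mini": (0.25, 2.0),
    "Gemini 2.5 Pro": (2.5, 15.0),
    "Gemini 2.5 Flash": (0.3, 2.5),
    "DeepSeek-R1-0528 (Azure)": (1.35, 5.40),
    "DeepSeek-R1-0528 (Alibaba)": (0.55, 2.20),
    "Hunyuan-T1": (0.1375, 0.55),
}

def credit_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated credits consumed by one prompt/response pair."""
    input_rate, output_rate = RATES[model]
    return input_tokens / 1000 * input_rate + output_tokens / 1000 * output_rate

# Example: a 2,000-token prompt with a 3,000-token response (including chain of thought)
# on GPT-5 costs about 2 * 1.25 + 3 * 10 = 32.5 of the 1,850 monthly credits.
print(credit_cost("GPT-5", 2_000, 3_000))       # 32.5
print(credit_cost("GPT-5-mini", 2_000, 3_000))  # 6.5
```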

 

Use of the models below does NOT consume the monthly credit entitlement.

Each entry below lists the release date, total parameters, key features, maximum prompt size (in English characters, plus image/document file limits), and hosting platform.

Qwen3-235B-A22B 通义千问 (Alibaba)
Release: Apr 2025
Total parameters: 235B
Key features:
  • Supports switching to Think Mode, which takes further steps and a longer time in reasoning, ideal for complex problem solving
  • Excels in coding, math and general capabilities, with benchmark performance comparable to other top-tier models
Max. prompt size: 350,000 English characters; 1 document, <=7MB
Host: PolyU ITS

Qwen2.5-VL-72B-Instruct 通义千问 (Alibaba)
Release: Jan 2025
Total parameters: 72B
Key features:
  • Capable in visual understanding, e.g. analyzing text and recognizing charts, icons, graphics and layouts within images
  • Competitive performance in various tasks, e.g. college-level problems, maths, general Q&A, visual agent tasks, etc.
Max. prompt size: 100,000 English characters; 3 images, each <=7MB; 1 document, <=7MB
Host: PolyU ITS

Magistral-Small-2506 (Mistral AI)
Release: June 2025
Total parameters: 24B
Key features:
  • Reasoning model
  • Better at logical tasks, e.g. maths and science, compared to non-reasoning models
Max. prompt size: 350,000 English characters; 1 document, <=7MB
Host: PolyU ITS

Llama-4-Scout-17B-16E-Instruct (Meta)
Release: Apr 2025
Total parameters: 109B (17B active)
Key features:
  • Reasoning model
  • Enhanced performance compared to previous Llama models
  • Utilizes a Mixture of Experts (MoE) architecture, enabling fast processing
Max. prompt size: 350,000 English characters; 3 images, each <=7MB; 1 document, <=7MB
Host: PolyU ITS

Doubao 豆包 (北京春田知韻科技)
Release: Updated constantly
Total parameters: Not disclosed
Key features:
  • Redirects to the official page https://www.doubao.com/chat/ when accessed
  • Simplified Chinese interface
  • Supports switching to Deep Think Mode, which takes further steps and a longer time in reasoning, ideal for complex problem solving
  • Specialised tabs and sample prompts for searching, writing, coding, image generation and document processing, with a web search function to draw on current information
  • Supports cross-platform use, including web, desktop application, iOS and Android
Max. prompt size: Not specified
Host: External platform

Microsoft 365 Copilot Chat (Microsoft)
Release: Updated constantly
Total parameters: Not disclosed
Max. prompt size: Not specified
Host: External platform

Please be aware that conversations within a Topic on PolyU GenAI will be cleared once they are more than 7 days old, or will wrap around after reaching 30 pairs of conversations, whichever comes first. (Details here)

Learn more: PolyU GenAI FAQ

Chatbot Categories and Prompting Techniques

How can we effectively utilize AI chatbots?

The CLEAR Framework provides a simple approach to improving interactions with General Purpose AI (GPAI) models (e.g. GPT-4o), and is ideal for beginners to follow when drafting and refining prompts.

Keep the first three components in mind when designing your initial prompts.
  1. Concise – Brevity and clarity in prompts. Purpose: remove superfluous information and allow the LLM to focus.
     ✅ Explain the process of photosynthesis and its significance.
     🚫 Can you provide me with a detailed explanation of the process of photosynthesis and its significance?
  2. Logical – Structured and coherent prompts. Purpose: help the LLM to comprehend the context and the relationships between concepts.
     ✅ List the steps to write a research paper, beginning with selecting a topic and ending with proofreading the final draft.
     🚫 Can you explain how to write a research paper? Like, start by doing an outline, and don’t forget to proofread everything after you finish the conclusion. Maybe add some quotes from experts in the middle, but I’m not sure where.
  3. Explicit – Clear output specifications. Purpose: enable the LLM to provide the desired output format, content, or scope.
     ✅ Identify five renewable energy sources and explain how each works.
     🚫 What are some renewable energy sources?

The 4th and 5th components of the CLEAR Framework help you to keep enhancing your prompts over time.

  4. Adaptive – Tweak-as-you-go for instant adjustment
  • Adjust prompts during the interaction, by experimenting with phrasing and settings, to fix immediate issues like vagueness or information overload
  • Example
    If a generic answer is obtained from the prompt “Discuss the impact of social media on mental health”, you could re-prompt with a narrower focus: “Examine the relationship between social media usage and anxiety in adolescents.”
  5. Reflective – Learn-and-evolve for long-term improvement
  • Evaluate past prompts and responses in an LLM to improve future prompts
  • Example
    • Initial Interaction:
      • Prompt: “List strategies for effective time management.”
      • Response: Generic advice like “set priorities” or “delegate tasks”
    • Insight Gained: The LLM provides generic answers if prompts lack audience specificity
    • New Scenario: User needs prompts for improving academic writing productivity
      • Reflective Prompt: “Provide time-management strategies for postgraduate students balancing thesis writing and teaching assistantships, emphasizing context-specific challenges like irregular schedules and research deadlines.”

Source: The CLEAR path: A framework for enhancing information literacy through prompt engineering


Explore more prompting techniques from the library:

📝 Interactive Online Course

DataCamp is an online learning platform that provides a wide range of data training courses. Here is a selection of interactive online courses on prompt engineering. Please note that users must register with their student or staff email in order to access DataCamp.

🎬 Video Tutorial

🔗 Ebook

📄 Journal Article


As LLMs evolve into specialized tools, such as Retrieval-Augmented Generation (RAG) systems and reasoning models, no single prompting strategy applies universally. Using these effectively requires understanding each chatbot's capabilities and limitations. Critical thinking is key to guiding AI towards desired outputs.

By recognizing chatbot types, we can better tailor prompts to leverage their strengths. Below, we compare three LLM chatbot categories available to PolyU researchers, with tips to maximize their potential:

Please note that the information in this section is subject to revision and may change over time.
Last updated: April 2025

 

  General Chat Models RAG Systems Reasoning Models
Examples
  • GPT-4o, GPT-4o-mini
  • Qwen2.5-72B-Instruct, Qwen2.5-VL-72B-Instruct
  • Llama
  • Mistral
Distinctive Strengths
  • Handle open-ended and creative tasks better
  • Versatile for simple reasoning tasks
  • Combine generative models with external data retrieval from databases or the internet (a minimal sketch of this pattern appears after this comparison)
  • Provide more accurate, domain-specific or even up-to-date answers
  • Reduce hallucinations 
  • Responses may include verifiable sources for further fact checking
  • Specialized generative models that prioritize logical reasoning over generative breadth
  • Follow complex instructions and adapt to unfamiliar contexts more effectively
  • Capable of breaking down complex tasks into manageable steps 
Academic Use Cases
  • Drafting outlines
  • Brainstorming ideas
  • Summarizing texts 
  • Paraphrasing
  • Translating

Excel at knowledge-intensive tasks, for example:

  • Research Summarizing: Aggregate insights from multiple scholarly sources
  • Fact-Checking: Allow claims to be verified against the relevant references retrieved
  • Complex reasoning, e.g. mathematical proofs, suggesting experimental designs, evaluating methodologies, interpreting statistical results, critical analysis, etc.
  • Tackling abstract or novel problems, e.g. developing solutions for hypothetical scenarios, exploring ethical dilemmas or thought experiments, etc.
Unique Prompting Tips
  • Output Formatting: Define the desired structure for clear responses,
    e.g., “List in bullet points”, “Compare in a table”
  • Structuring Complex Queries: Define contextual boundaries, such as assumptions, scope, or format to prevent overgeneralization 
  • Exploring Alternatives: Trigger broader analysis by comparing options,
    e.g. “Compare two approaches to this problem”
  • Role Prompting: Assign a specific role to guide the perspective,
    e.g. “Act as a history professor”, “Act as a data analyst”
  • Iterative Refinement: Use follow-up prompts to tweak tone, depth, or focus,
    e.g. “Make the explanation concise”, “Expand on the second point”
  • Task Decomposition: Break complex tasks into steps,
    e.g. “First define the term, then provide three examples”, “Outline the process before summarizing”
  • Setting Constraints: Specify sources and time frames,
    e.g. “Use data from 2023 onwards”, “Retrieve data from PubMed only”
  • Being Specific: Use precise queries to enhance retrieval accuracy,
    e.g. instead of “Explain climate change”, use “Identify three key mitigation strategies for climate change from recent studies”
  • Encouraging Exploration: Prompt for gaps in knowledge,
    e.g. “What are unexplored research areas in …”, “Provide diverse perspectives on …”
Prompting Remarks

Detailed and specific prompts can still boost performance

  • Maintaining Individual Context Clarity: Avoid relying on conversational history; re-specify the context in each prompt
  • Being Specific: Use precise queries to enhance retrieval accuracy,
    e.g., instead of “Explain climate change”, use “Identify three key mitigation strategies for climate change from recent studies”
  • Focus on just the most critical information to prevent the model from overcomplicating its answer
Learn More
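
To make the RAG description above concrete, here is a minimal, self-contained sketch of the retrieve-then-ground pattern. The toy corpus, the keyword-overlap retriever and the final print step are illustrative stand-ins; real RAG systems use embedding-based retrieval and send the assembled prompt to an LLM.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve the passages most relevant to a question, then ground the prompt in them.
# The corpus, the keyword-overlap scoring and the final step are placeholders;
# production systems use embedding-based retrieval and a real chat model backend.

CORPUS = {
    "doc1": "Reforestation and soil carbon sequestration are widely cited climate mitigation strategies.",
    "doc2": "Solar and wind generation costs fell sharply between 2010 and 2023.",
    "doc3": "Peer review remains the main quality-control mechanism in scholarly publishing.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question (stand-in for vector search)."""
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS.items(),
                    key=lambda item: len(q_words & set(item[1].lower().split())),
                    reverse=True)
    return [f"[{doc_id}] {text}" for doc_id, text in ranked[:k]]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt that asks the model to answer only from the retrieved sources."""
    sources = "\n".join(retrieve(question))
    return ("Answer the question using only the sources below, citing them by ID.\n"
            f"Sources:\n{sources}\n\nQuestion: {question}")

print(build_prompt("What are key mitigation strategies for climate change?"))
# The assembled prompt would then be submitted to a chat model of your choice.
```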

 

More Prompting Resources