
Harnessing GenAI in Your Academic Journey


Chatbots built on different large language models (LLMs) support a range of language tasks, such as refining research questions, checking grammar, polishing language, and translating, as well as enhanced information search.


PolyU GenAI App

PolyU ITS offers a variety of GenAI chatbots for staff and students to use in teaching, research, and work-related activities. Access requires authentication with a PolyU NetID. For technical issues with this service, please contact ITS.

The following tables compare the available GenAI large language models (LLMs), which can serve multiple purposes in research.


Please note that the information in this section is subject to revision and may change over time.
Last updated: Apr 2025

 

PolyU students and staff have 1,850 credits per month to use across various LLMs and image generation tools.

The first table compares the 4o series, O1 series, and DeepSeek, which all consume credits.

  • For quick and generic tasks – use the 4o series or the credit-free models compared in the second table.
  • For complex reasoning – the O1 series and DeepSeek perform better, though they consume more credits and take longer to respond.
GPT-4o (OpenAI)
  • Release: May 2024
  • Total parameters: Not disclosed
  • Key features:
    • Fast processing
    • Supports image-to-text
    • Supports more complex text prompts
    • Better translation between languages
  • Max. prompt size: 100,000 English characters
  • Host: Microsoft Azure Cloud
  • Credit consumption: consumes more credits than GPT-4o-mini

GPT-4o-mini (OpenAI)
  • Release: Jul 2024
  • Key features, prompt size, and host: as for GPT-4o above
  • Credit consumption: lower than GPT-4o

O1 (OpenAI) NEW
  • Release: Dec 2024
  • Total parameters: Not disclosed
  • Key features:
    • Capable of breaking down tasks logically without detailed guidance, using an internal chain of thought
    • Takes longer to reason and respond
    • Excels in complex reasoning tasks, especially in science, coding, and math
    • Can reason over images (O3-mini cannot)
  • Max. prompt size: 700,000 English characters
  • Host: Microsoft Azure Cloud
  • Credit consumption:
    • Input: 6x GPT-4o
    • Output: 6x GPT-4o
    • The internal chain of thought also consumes tokens

O3-mini (OpenAI) NEW
  • Release: Jan 2025
  • Total parameters: Not disclosed
  • Key features:
    • Same reasoning-focused design as O1, but cannot reason over images
    • Weaker in factual knowledge on non-STEM topics, only comparable to smaller LLMs like GPT-4o-mini
  • Max. prompt size: 700,000 English characters
  • Host: Microsoft Azure Cloud
  • Credit consumption:
    • Input: 20x GPT-4o-mini
    • Output: 20x GPT-4o-mini
    • The internal chain of thought also consumes tokens

DeepSeek-R1 on Azure AI Foundry (DeepSeek AI) NEW
  • Release: Jan 2025
  • Total parameters: 671B, with 37B active
  • Key features:
    • Performance comparable to the O1 models across math, coding, and reasoning tasks
    • Analyzes your problem using an observable chain of thought before responding
  • Max. prompt size: 400,000 English characters
  • Host: Azure AI Foundry
  • Credit consumption:
    • Input: 1.35 credits/1,000 tokens
    • Output: 5.40 credits/1,000 tokens
    • The observable chain of thought also consumes tokens

DeepSeek-R1 on Alibaba Cloud (DeepSeek AI) NEW
  • Release: Jan 2025
  • Total parameters and key features: as for the Azure AI Foundry deployment above
  • Max. prompt size: 200,000 English characters
  • Host: Alibaba Cloud
  • Credit consumption:
    • Input: 0.55 credits/1,000 tokens
    • Output: 2.20 credits/1,000 tokens
    • The observable chain of thought also consumes tokens
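
As a rough guide to how these rates interact with the monthly 1,850-credit entitlement, here is a minimal sketch. It uses the DeepSeek-R1 (Azure AI Foundry) rates from the table above; the token counts are hypothetical examples, and actual consumption depends on how the service tokenizes your text.

```python
# A minimal sketch, not an official ITS calculator. Rates are the
# DeepSeek-R1 (Azure AI Foundry) figures from the table above;
# the token counts below are hypothetical examples.

INPUT_RATE = 1.35 / 1000   # credits per input token
OUTPUT_RATE = 5.40 / 1000  # credits per output token (the observable
                           # chain of thought is also billed as output)

def credit_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the credits a single exchange consumes."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt answered with 1,500 tokens of
# reasoning plus final response.
cost = credit_cost(2_000, 1_500)
print(f"{cost:.2f} credits per exchange")         # 10.80
print(f"~{1850 / cost:.0f} exchanges per month")  # ~171 within 1,850 credits
```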

 

The models below do NOT consume the monthly credit entitlement. However, please note that Copilot with Bing has a quota of 30 responses per conversation; once 30 responses are reached, you can start a new conversation.

Qwen2.5-72B-Instruct 通义千问 (Alibaba)
  • Release: Sep 2024
  • Total parameters: 72B
  • Key features:
    • Excels in conversational understanding and math problems
    • Enhanced multilingual support, especially for tasks in Chinese
  • Max. prompt size: 100,000 English characters
  • Host: PolyU Campus^

Qwen2.5-VL-72B-Instruct 通义千问 (Alibaba) NEW
  • Release: Jan 2025
  • Total parameters: 72B
  • Key features:
    • Excels in visual understanding, e.g. recognizing charts, icons, graphics, and layouts within images
    • Excels in text analysis
  • Max. prompt size: 100,000 English characters
  • Host: PolyU Campus^

Llama 3.3-70B Instruct (Meta)
  • Release: Dec 2024
  • Total parameters: 70B
  • Key features:
    • Optimized for multilingual dialogue use cases
    • Supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai
  • Max. prompt size: 100,000 English characters
  • Host: PolyU Campus^

Mistral-Large-Instruct-2407 (Mistral AI)
  • Release: Jul 2024
  • Total parameters: 123B
  • Key features:
    • Excellent for complex reasoning
    • Comparable to GPT-4o in common sense and reasoning
    • Good at processing code inputs
  • Max. prompt size: 100,000 English characters
  • Host: PolyU Campus^

Copilot with Bing (Microsoft)
  • Release: Nov 2023
  • Total parameters: Not disclosed
  • Max. prompt size: 16,000 English characters
  • Host: service provided by Microsoft; Commercial Data Protection (CDP) applies, unlike with a personal Copilot account

^Information input to GenAI models hosted on the PolyU campus is encrypted and will be erased within 48 hours.

Chatbot Categories and Prompting Techniques

How can we effectively utilize AI chatbots?

The CLEAR Framework provides a simple approach to improving interactions with general generative models (e.g. GPT-4o), and is ideal for beginners to follow when drafting and refining prompts.

Keep the following three components in mind when designing your initial prompts.
  1. Concise – Brevity and clarity in prompts
  • Purpose: Remove superfluous information and allow the LLM to focus
  • ✅ Explain the process of photosynthesis and its significance.
  • 🚫 Can you provide me with a detailed explanation of the process of photosynthesis and its significance?
  2. Logical – Structured and coherent prompts
  • Purpose: Help the LLM to comprehend the context and relationships between concepts
  • ✅ List the steps to write a research paper, beginning with selecting a topic and ending with proofreading the final draft.
  • 🚫 Can you explain how to write a research paper? Like, start by doing an outline, and don’t forget to proofread everything after you finish the conclusion. Maybe add some quotes from experts in the middle, but I’m not sure where.
  3. Explicit – Clear output specifications
  • Purpose: Enable the LLM to provide the desired output format, content, or scope
  • ✅ Identify five renewable energy sources and explain how each works.
  • 🚫 What are some renewable energy sources?
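
To make the three components concrete in code, the sketch below sends a Concise, Logical, Explicit prompt through an OpenAI-compatible chat API (using the openai Python package). The base_url, api_key, and model name are placeholders, not the actual PolyU GenAI App access details; consult the ITS documentation for real access instructions.

```python
# A sketch of a Concise, Logical, Explicit prompt sent through an
# OpenAI-compatible chat API. The base_url, api_key, and model name are
# placeholders, NOT the actual PolyU GenAI App access details.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-genai-gateway/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                       # placeholder credential
)

# Concise: no filler. Logical: ordered start and end points.
# Explicit: the output format is stated.
prompt = (
    "List the steps to write a research paper, beginning with selecting "
    "a topic and ending with proofreading the final draft. "
    "Present the steps as a numbered list."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```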

The 4th and 5th components of the CLEAR Framework help you continue to enhance your prompts over time.

  4. Adaptive – Tweak-as-you-go for instant adjustment
  • Adjust prompts during the interaction, experimenting with phrasing and settings, to fix immediate issues like vagueness or information overload
  • Example
    If the prompt “Discuss the impact of social media on mental health” yields a generic answer, re-prompt with a narrower focus: “Examine the relationship between social media usage and anxiety in adolescents.” (A code sketch of this tweak-as-you-go loop follows the source note below.)
  5. Reflective – Learn-and-evolve for long-term improvement
  • Evaluate past prompts and responses in an LLM to improve future prompts
  • Example
    • Initial Interaction:
      • Prompt: “List strategies for effective time management.”
      • Response: Generic advice like “set priorities” or “delegate tasks”
    • Insight Gained: The LLM provides generic answers if prompts lack audience specificity
    • New Scenario: User needs prompts for improving academic writing productivity
      • Reflective Prompt: “Provide time-management strategies for postgraduate students balancing thesis writing and teaching assistantships, emphasizing context-specific challenges like irregular schedules and research deadlines.”

Source: The CLEAR path: A framework for enhancing information literacy through prompt engineering
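
As referenced in the Adaptive example above, here is a minimal sketch of that tweak-as-you-go loop, reusing the same placeholder client setup: the generic first answer stays in the conversation history, and a narrower follow-up steers the next response.

```python
# A sketch of the Adaptive component: the generic first answer stays in the
# conversation history, and a narrower follow-up steers the next response.
# Placeholder client setup, as in the earlier sketch.
from openai import OpenAI

client = OpenAI(base_url="https://example-genai-gateway/v1",
                api_key="YOUR_API_KEY")

messages = [{"role": "user",
             "content": "Discuss the impact of social media on mental health."}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)

# Keep the (too generic) answer in context, then tweak-as-you-go.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Examine the relationship between social media "
                            "usage and anxiety in adolescents."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```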


Explore more prompting techniques from the library:

📝 Interactive Online Course

DataCamp is an online learning platform that provides a wide range of data training courses. Here is a selection of interactive online courses on prompt engineering. Please note that users must register with their student or staff email in order to access DataCamp.

🎬 Video Tutorial

🔗 Ebook

📄 Journal Article


As LLMs evolve into specialized tools, such as Retrieval-Augmented Generation (RAG) systems and reasoning models, no single prompting strategy applies universally. Using these effectively requires understanding each chatbot's capabilities and limitations. Critical thinking is key to guiding AI toward desired outputs.

By recognizing chatbot types, we can better tailor prompts to leverage their strengths. Below, we compare three LLM chatbot categories available to PolyU researchers, with tips to maximize their potential:

Please note that the information in this section is subject to revision and may change over time.
Last updated: Apr 2025

 

Examples
  • General Generative Models: GPT-4o, GPT-4o-mini; Qwen2.5-72B-Instruct, Qwen2.5-VL-72B-Instruct; Llama; Mistral
  • RAG Systems: Copilot with Bing
  • Reasoning Models: O1, O3-mini, DeepSeek-R1
Distinctive Strengths
  • General Generative Models:
    • Handle open-ended and creative tasks better
    • Versatile for simple reasoning tasks
  • RAG Systems:
    • Combine generative models with external data retrieval, from databases or the internet
    • Provide more accurate, domain-specific, or even up-to-date answers
    • Reduce hallucinations
    • Responses may include verifiable sources for further fact-checking
  • Reasoning Models:
    • Specialized generative models that prioritize logical reasoning over generative breadth
    • Follow complex instructions and adapt to unfamiliar contexts more effectively
    • Capable of breaking down complex tasks into manageable steps
Academic Use Cases
  • General Generative Models:
    • Drafting outlines
    • Brainstorming ideas
    • Summarizing texts
    • Paraphrasing
    • Translating
  • RAG Systems excel at knowledge-intensive tasks, for example:
    • Research summarizing: aggregate insights from multiple scholarly sources
    • Fact-checking: verify claims against the retrieved references
  • Reasoning Models:
    • Complex reasoning, e.g. mathematical proofs, suggesting experimental designs, evaluating methodologies, interpreting statistical results, critical analysis
    • Tackling abstract or novel problems, e.g. developing solutions for hypothetical scenarios, exploring ethical dilemmas or thought experiments
Unique Prompting Tips
  • General Generative Models (see the sketch after this list):
    • Output Formatting: define the desired structure for clear responses, e.g. “List in bullet points”, “Compare in a table”
    • Role Prompting: assign a specific role to guide the perspective, e.g. “Act as a history professor”, “Act as a data analyst”
    • Iterative Refinement: use follow-up prompts to tweak tone, depth, or focus, e.g. “Make the explanation concise”, “Expand on the second point”
    • Task Decomposition: break complex tasks into steps, e.g. “First define the term, then provide three examples”, “Outline the process before summarizing”
  • RAG Systems:
    • Setting Constraints: specify sources and time frames, e.g. “Use data from 2023 onwards”, “Retrieve data from PubMed only”
    • Being Specific: use precise queries to enhance retrieval accuracy, e.g. instead of “Explain climate change”, use “Identify three key mitigation strategies for climate change from recent studies”
    • Encouraging Exploration: prompt for gaps in knowledge, e.g. “What are unexplored research areas in …”, “Provide diverse perspectives on …”
  • Reasoning Models:
    • Structuring Complex Queries: define contextual boundaries, such as assumptions, scope, or format, to prevent overgeneralization
    • Exploring Alternatives: trigger broader analysis by comparing options, e.g. “Compare two approaches to this problem”
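
As referenced under the general-model tips, here is a minimal sketch combining Role Prompting (a system message assigns the role) with Output Formatting (a table is requested). It reuses the same placeholder client setup as the earlier sketches; the role, query, and model name are illustrative only.

```python
# A sketch combining Role Prompting (system message) with Output Formatting
# (a table is requested). Placeholder client setup, as in earlier sketches.
from openai import OpenAI

client = OpenAI(base_url="https://example-genai-gateway/v1",
                api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Act as a data analyst reviewing research designs."},
        {"role": "user",
         "content": "Compare the two sampling approaches below in a table "
                    "with columns for cost, bias risk, and typical sample "
                    "size:\n1) Convenience sampling\n2) Stratified random "
                    "sampling"},
    ],
)
print(response.choices[0].message.content)
```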
Prompting Remarks

Detailed and specific prompts can still boost performance.

  • Maintaining Individual Context Clarity: avoid relying on conversational history; re-specify the context in each prompt (see the sketch below)
  • Focus on just the most critical information to prevent the model from overcomplicating its answer
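
To illustrate these remarks, the sketch below sends a reasoning model one minimal but self-contained prompt, restating all context instead of relying on chat history. The deployment name and the study figures are hypothetical, and the client setup is the same placeholder as before.

```python
# A sketch of the remarks above for a reasoning model: one minimal but
# self-contained prompt that restates all context instead of relying on
# chat history. Deployment name and study figures are hypothetical.
from openai import OpenAI

client = OpenAI(base_url="https://example-genai-gateway/v1",
                api_key="YOUR_API_KEY")

# Only the critical information: the data, then the question.
prompt = (
    "Study design: 40 students split into two groups of 20, with pre- and "
    "post-test scores. Group A mean gain: 12.1 (SD 4.0); Group B mean "
    "gain: 7.3 (SD 3.8). Is an independent-samples t-test an appropriate "
    "analysis, and which assumptions should be checked first?"
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder deployment name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```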
Learn More

 

More Prompting Resources