Deep Cognition and Language Research Lab

Research in Safe, Trustworthy, Multimodal AI

DeCLaRe Lab studies language, multimodal, and interactive AI systems across six themes: Safety, Trustworthiness, Multimodality, AI for Science, Efficiency, and Embodied AI.

About DeCLaRe Lab

Deep Cognition and Language Research

DeCLaRe stands for Deep Cognition and Language Research. The lab was established at the Singapore University of Technology and Design in 2019 by Soujanya Poria, with Navonil Majumder, Devamanyu Hazarika, and Deepanway Ghosal as founding members. In 2025, DeCLaRe Lab moved to Nanyang Technological University.

Today the lab works on methods, benchmarks, and open research artifacts spanning language, vision, audio, video, knowledge, and action. The goal is to build AI systems that are capable, grounded, interpretable, and robust in settings that demand more than benchmark accuracy.

Lab Identity

The meaning behind the DeCLaRe logo

The robot-like figure recalls an old computer and renders DeCLaRe in Mandarin. The colored eyes reflect the lab's interest in machines that understand affect: the logo frames cognition and language as ways to infuse machines with richer emotional and social understanding.


Research themes

Research Areas

Six connected themes that organize the lab's current work.

Publications

Hot Papers 🔥

Selected recent and highly cited papers.

Publication archive
ICLR 2026

OffTopicEval: When Large Language Models Enter the Wrong Chat, Almost Always!

Operational safety and task-boundary evaluation for LLM agents.

ICLR 2025

Measuring and Enhancing Trustworthiness of LLMs in RAG

Trust-Score, Trust-Align, grounded attributions, citations, and refusal.

ICLR 2025

MOOSE-Chem: LLMs for Rediscovering Chemistry Scientific Hypotheses

AI for Science through literature-grounded chemistry hypothesis rediscovery.

ICML 2026

Data Agent: Learning to Select Data via End-to-End Dynamic Optimization

Efficient training through dynamic data selection.

arXiv 2025

NORA: A Small Open-Sourced Generalist Vision Language Action Model

Efficient embodied AI and action grounding.

ICLR 2026

TangoFlux: Super Fast and Faithful Text-to-Audio Generation with Flow Matching

Fast, faithful text-to-audio generation via flow matching.

Research support

Funded Research Directions

A brief overview of major active support. The full grant record is maintained separately.

CNRS@CREATE / NRF · 2026-2029

Embodied Foundational Models

Total grant S$10M, of which S$3.33M is awarded to the lab. Research on embodied foundation models and generalist interactive AI.

KLASS · 2026-2028

Toward Generalist Vision Language Action Models

S$1.5M support for vision-language-action models, action grounding, and embodied evaluation.

DSO · 2023-2026

Detecting, Measuring and Mitigating Hallucinations in LLMs

S$600K support for grounded, reliable, and trustworthy language model generation.