Fine-Tuning Language Models to Know What They Know (2026)
Metacognition, the awareness of one's own knowledge, is a critical component of intelligence. While humans rely on a shared internal memory both to answer questions and to report their knowledge state, whether LLMs exhibit this dependency remains underexplored. This study proposes a framework that measures metacognitive ability (d'_type2) with a dual-prompt method, then introduces Evolution Strategy for Metacognitive Alignment (ESMA) to bind a model's internal knowledge to its explicit behaviors. ESMA generalizes robustly across diverse untrained settings, indicating an enhancement in the model's ability to reference its own knowledge. Furthermore, parameter analysis attributes these improvements to a sparse set of significant modifications.
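The abstract does not spell out ESMA's update rule, but the "Evolution Strategy" family it names shares a common core: perturb the parameters with Gaussian noise, score each perturbation, and move the parameters along the score-weighted noise. The sketch below shows that generic update on a toy scalar objective; the names (`es_step`, the hyperparameters, and the toy objective) are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def es_step(theta, objective, sigma=0.1, lr=0.05, pop=50, rng=random):
    """One generic evolution-strategies update (NOT the paper's ESMA):
    sample Gaussian perturbations of theta, score them with the
    objective, and step along the score-weighted noise, which is a
    stochastic finite-difference estimate of the gradient."""
    eps = [rng.gauss(0.0, 1.0) for _ in range(pop)]
    scores = [objective(theta + sigma * e) for e in eps]
    # Normalize scores so the step size is insensitive to the
    # objective's scale (standard fitness shaping).
    mean = sum(scores) / pop
    std = math.sqrt(sum((s - mean) ** 2 for s in scores) / pop) + 1e-8
    shaped = [(s - mean) / std for s in scores]
    grad = sum(n * e for n, e in zip(shaped, eps)) / (pop * sigma)
    return theta + lr * grad

# Toy usage: maximize -(x - 3)^2, so theta should drift toward 3.
random.seed(0)
theta = 0.0
for _ in range(300):
    theta = es_step(theta, lambda t: -((t - 3.0) ** 2))
```

In practice (and presumably in ESMA) theta would be a high-dimensional parameter vector and the objective a metacognitive-alignment score, but the update shape is the same.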
Citation:
arxiv:2602.02605, 2026.
Elliot Meyerson Ph.D. Alumni ekm [at] cs utexas edu
Risto Miikkulainen Faculty risto [at] cs utexas edu
Sangjun Park Ph.D. Student sangjun [at] cs utexas edu
Xin Qiu Collaborator xin qiu [at] cognizant com