LELA 🌙: LLM-based Entity-Linking Approach
Work-in-progress - An approach for entity linking with large language models that adapts to any domain and knowledge base without any fine-tuning.
Retrieval-Constrained Decoding Reveals Underestimated Parametric Knowledge in Language Models
Language models (LMs) encode substantial factual knowledge, but often produce answers judged as incorrect. We hypothesize that many of these answers are actually correct, but are expressed in alternative surface forms that are dismissed due to an overly strict evaluation, leading to an underestimation of models' parametric knowledge. We propose Retrieval-Constrained Decoding (RCD), a decoding strategy that restricts model outputs to unique surface forms. We introduce YAGO-QA, a dataset of 19,137 general knowledge questions. Evaluating open-source LMs from 135M to 70B parameters, we show that standard decoding undervalues their knowledge.
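As a rough illustration of the idea (not the authors' implementation), the sketch below constrains generation to a fixed set of candidate surface forms using Hugging Face's `prefix_allowed_tokens_fn` hook; the model name, candidate list, and prompt are assumptions made only for the example.

```python
# Hypothetical sketch of retrieval-constrained decoding: generation is restricted
# to a small set of allowed answer surface forms, so alternative names of the
# same entity are not penalized. Model, candidates, and prompt are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "HuggingFaceTB/SmolLM-135M"  # assumed small model for the example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Surface forms the model is allowed to produce (illustrative examples).
candidates = ["Barack Obama", "Barack Hussein Obama II", "Obama"]

prompt = "Q: Who was the 44th president of the USA?\nA: "
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
prompt_len = prompt_ids.shape[1]

# Token-id sequences for each allowed surface form, with EOS so decoding can stop.
candidate_ids = [
    tokenizer(c, add_special_tokens=False).input_ids + [tokenizer.eos_token_id]
    for c in candidates
]

def allowed_tokens(batch_id, input_ids):
    """Allow only tokens that keep the generated suffix a prefix of some candidate."""
    generated = input_ids[prompt_len:].tolist()
    allowed = set()
    for seq in candidate_ids:
        if seq[: len(generated)] == generated and len(seq) > len(generated):
            allowed.add(seq[len(generated)])
    return list(allowed) or [tokenizer.eos_token_id]

output = model.generate(
    prompt_ids,
    max_new_tokens=16,
    prefix_allowed_tokens_fn=allowed_tokens,
)
print(tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True))
```

Under this constraint, any of the listed surface forms counts as a valid completion, which is the intuition behind crediting answers that standard free-form decoding and strict string matching would dismiss.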