AA-Omniscience [2026-2] 염제원, 김학선

AA-Omniscience: Evaluating Cross-Domain Knowledge Reliability in Large Language Models

Abstract (excerpt): "Existing language model evaluations primarily measure general capabilities, yet reliable use of these models across a range of domains demands factual accuracy and recognition of knowledge gaps. We introduce AA-Omniscience, a benchmark designed to measure…" (truncated)

Links: arxiv.org · ArtificialAnalysis/AA-Omniscience-Public (dataset)

2026. 2. 16.