Can large language models understand uncommon meanings of common words? (2024)

Jinyang Wu (wu-jy23@mails.tsinghua.edu.cn), Feihu Che (qkr@mail.tsinghua.edu.cn), Xinxin Zheng (zhengxinxin2021@ia.ac.cn), Shuai Zhang (zhang_shuai@mail.tsinghua.edu.cn), Ruihan Jin (jrh20@mails.tsinghua.edu.cn), Shuai Nie (nss90221@gmail.com), Pengpeng Shao (ppshao@mail.tsinghua.edu.cn), Jianhua Tao (jhtao@tsinghua.edu.cn)

Abstract

Large language models (LLMs) like ChatGPT have shown significant advancements across diverse natural language understanding (NLU) tasks, including intelligent dialogue and autonomous agents. Yet, in the absence of widely acknowledged testing mechanisms, the question of whether LLMs are stochastic parrots or genuinely comprehend the world remains open, fostering numerous studies and sparking heated debate. Prevailing research mainly focuses on surface-level NLU and neglects fine-grained explorations. However, such explorations are crucial for understanding LLMs' unique comprehension mechanisms, aligning with human cognition, and ultimately enhancing their general NLU capacities. To address this gap, our study delves into LLMs' nuanced semantic comprehension capabilities, particularly regarding common words with uncommon meanings. The idea stems from foundational principles of human communication within psychology, which emphasize an accurate shared understanding of word semantics. Specifically, this paper presents the construction of a Lexical Semantic Comprehension (LeSC) dataset with novel evaluation metrics, the first benchmark encompassing both fine-grained and cross-lingual dimensions. Evaluating both open-source and closed-source models of varied scales and architectures, our extensive empirical experiments demonstrate the inferior performance of existing models on this basic lexical-meaning understanding task. Notably, even the state-of-the-art LLMs GPT-4 and GPT-3.5 lag behind 16-year-old humans by 3.9% and 22.3%, respectively. Additionally, multiple advanced prompting techniques and retrieval-augmented generation are introduced to help alleviate this problem, yet limitations persist. By highlighting the above critical shortcomings, this research motivates further investigation and offers novel insights for developing more intelligent LLMs. The resources are available at https://github.com/jinyangwu/LeSC.

keywords:

Large language models , Human cognition , Semantic comprehension , Evaluation metrics , Prompting techniques , Retrieval-augmented generation

journal: Elsevier


Affiliations: [1] Department of Automation, Tsinghua University, Beijing 100084, China; [2] Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [3] Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China

1 Introduction

Any fool can know. The point is to understand.

Albert Einstein

Researchers in the AI community, especially in Natural Language Processing (NLP), have been investigating the fundamental principles of intelligence for years (Bubeck et al., 2023). Benefiting from unprecedented model sizes and training corpora, LLMs like ChatGPT (OpenAI, 2023), Gemini (Anil et al., 2023), and LLaMA (Touvron et al., 2023) have shown surprising, even emergent Natural Language Understanding (NLU) capabilities (Rai et al., 2023) at or surpassing human levels, even in unseen scenarios, which enables proficient execution across diverse downstream tasks such as sentiment analysis (Wang et al., 2018), question answering (Peng et al., 2023), and autonomous agents (Chen et al., 2024).

As Albert Einstein once stated, "Any fool can know. The point is to understand" (words spoken by Albert Einstein in the TV play "Doctor Einstein Before Lunch" by Ernest Kinoy, first aired on US NBC television on 20 May 1973; https://falschzitate.blogspot.com/2021/10/jeder-dummkopf-kann-es-wissen-der-punkt.html). From a human cognitive perspective, understanding is crucial for possessing human intelligence, rather than relying solely on memorization and mimicry (Lonergan, 1957; Sternberg, 1983). Given LLMs' impressive performance across diverse semantic understanding tasks, there is no doubt that LLMs are powerful tools and possess some degree of language understanding and intelligence (Häggström, 2023). Nevertheless, recent studies (Bender et al., 2021; Bubeck et al., 2023; Xu et al., 2024) have expressed considerable concern about the scientific and also philosophical question: "do LLMs genuinely understand the world, or do they just mimic language patterns and logic, i.e., are they stochastic parrots?". For example, (Bender et al., 2021) advocates that NLP researchers carefully examine the risks of blindly pursuing larger models, considering factors such as resource consumption and the significance of model outputs given their inherent limitations. (Borji, 2023) points out that LLMs may excel at understanding abstract tasks detached from the physical world (e.g., mathematics or coding) but struggle with understanding grounded in the real world, which is essential for developing more intelligent LLMs. (Li, 2023) discusses new legal and ethical risks that are emerging due to stochastic parrots and hallucinations.

Numerous attempts have been made to answer the above question, especially by creating meaningful benchmarks that reflect LLMs' ability to accurately grasp human-conveyed semantics. (Choi et al., 2023) introduced a theory-driven benchmark and measured how well LLMs understand social language at the sentence level; (Riccardi and Desai, 2023) proposed a noun-noun combination-based dataset to assess LLMs' capacity for meaningfulness judgment, which requires a high level of language understanding; (Jang et al., 2023) explored whether LLMs actually understand what they are instructed to do when facing word-level modifications; GLUE (Wang et al., 2018) aimed to evaluate the performance of NLP models on eight language understanding tasks, such as Duplicate Sentence Detection; (Zheng et al., 2022) constructed FewNLU to make a systematic assessment of few-shot learning problems; and Promptbench (Zhu et al., 2023) was proposed to analyze LLMs' robustness on NLU tasks under attack. However, these efforts primarily focus on sentence-level semantic understanding or word-level instruction comprehension, overlooking nuanced dimensions and thereby hindering a holistic understanding of LLMs' linguistic competence. Moreover, the basis of communication in psychology relies on a shared understanding of word meanings (Miller, 1967; De Deyne et al., 2023), a fine-grained language comprehension capacity. Therefore, proposing a targeted dataset to bridge this gap is of profound significance.

In this work, we address the above question by focusing on uncommon meanings of common words, a fine-grained perspective in line with communication psychology and human cognition (Clark and Gerrig, 1983; DuVivier, 1999). Motivated by this, we construct the LeSC dataset to assess word-level NLU ability and introduce two metrics, including our proposed weighted accuracy, which effectively mitigates the impact of model preferences and thus fosters a fairer and more objective evaluation. In contrast to prior work, our proposed dataset LeSC specializes in context-aware, fine-grained lexical semantics understanding (LSU, defined in theoretical linguistics (Geeraerts, 2002)) and also provides cross-lingual transfer tests, which probe the model's comprehension at a more precise semantic level. Given this benchmark, a series of confirmatory experiments can be conducted, and the results highlight a prevailing lack of LSU capability among current LLMs, with even ChatGPT producing less-than-satisfactory performance (Figure 1). We then take a step towards investigating language capability transfer in LLMs. Moreover, several strategies are discussed to alleviate the above problem, such as few-shot prompting (Brown et al., 2020) and chain-of-thought (Wei et al., 2022). For in-depth analysis, we also posit conjectures and probe the underlying explanations through additional studies. In summary, the key contributions are as follows:

[Figure 1]
1. We introduce LeSC, a pioneering open-source benchmark with a fair and objective evaluation metric that accounts for model preferences, for assessing the fine-grained LSU capability of LLMs. The design of LeSC ensures its validity, thereby furnishing a valuable foundation for future alignment with human comprehension and facilitating other NLU-based research such as robotic navigation with LLMs.

2. Extensive experiments reveal that existing models, affected by their inherent imitation tendencies, overconfidence, and inadequate language capacity transfer, exhibit limited proficiency in the basic LSU task, despite their notable efficacy in more complex NLU tasks.

3. Comparing ChatGPT (GPT-3.5 and GPT-4) with human performance, measured by asking 16-year-old humans from diverse backgrounds to complete the same task, we show that there is a large performance gap (~22.3% and ~3.9%, respectively) to close.

4. Results indicate that advanced prompting techniques and retrieval-augmented generation partially mitigate this challenge; however, their benefits tend to diminish or even become counterproductive on very large language models.

5. Visualizations are presented to help delve into the underlying explanations and mechanisms.

Exploring the fine-grained LSU ability of LLMs can offer crucial insights into NLP research, consequently leading to a comprehensive enhancement in the efficacy of various downstream tasks like translation and summarization. Our findings indicate that despite remarkable performance on challenging NLU tasks, state-of-the-art (SOTA) language models like ChatGPT still have fundamental weaknesses in lexical context understanding and extracting proper meanings from the input. Beyond solving more challenging problems, future work should also take into account the underestimated challenge of fine-grained LSU, thus offering guidance and assistance in developing more powerful supermodels.

[Figure 2]

The paper is organized as follows. Section 2 introduces the construction of the dataset and the models and methods used in this paper. Experimental results and analysis are then presented in Section 3. After that, we present a case study in Section 4. Finally, conclusions and future directions are discussed in Sections 5 and 6.

2 Materials and methods

In this section, we present the construction process of the LeSC benchmark (§2.1) together with evaluation details. The selected models and methods (prompting techniques and retrieval-augmented generation) are also introduced (§2.2). The overall workflow is illustrated in Figure 2.

2.1 The LeSC datasets

In this part, we describe the creation process of the LeSC dataset (§2.1.1) and the evaluation metrics (§2.1.2).

2.1.1 Dataset creation

Aiming to measure LLMs' fine-grained language comprehension capacity in scenarios with low error tolerance, we propose a novel dataset (LeSC) focusing on the lexical level, the first benchmark encompassing both fine-grained and cross-lingual dimensions. Specifically, inspired by datasets originating from standardized testing (Hendrycks et al., 2021; Zhang et al., 2023), we first collect polysemous words from online resources for the Chinese college entrance examination (GAOKAO) and the College English Test (CET-4) and refine them manually. Then, using online dictionaries (https://dict.youdao.com/ and https://pinyin.sogou.com/dict/), we create a polysemy dictionary linking the above English words with their various Chinese meanings. After that, previous exams, particularly the reading comprehension parts, are crawled, and irrelevant parts are filtered out with ChatGPT to build a 5k-sentence dictionary whose keys are sentences and whose values are bilingual word-definition pairs. Finally, after a meticulous human examination and curation process, ambiguous sentences are removed, resulting in 600 high-quality samples, each containing a list of multiple semantics for its key word.

Given LLMs' remarkable performance on multiple-choice QA (MCQA) tasks (Hendrycks et al., 2021; Pal et al., 2022; Bubeck et al., 2023), such as sentiment analysis and text classification (essentially equivalent to MCQA with limited answer options), we formulate the above-curated samples in a simple multiple-choice format to obtain a fair and objective assessment of language comprehension capabilities. Here, we define the input to an LLM as the combination of a prompt $P$ and a question $x$: $[P, x]$, where $[\cdot,\cdot]$ denotes the concatenation operation. For $P$, following previous work (Zhu et al., 2023), with human design and GPT-4 (https://openai.com/api/), we construct prompts of both task-oriented and role-oriented types, each consisting of three samples. For $x$, the order of options is kept entirely random, as (Robinson and Wingate, 2023) highlights LLMs' sensitivity to even a simple change in the order of options (details in Appendix A). In total, 3600 samples are obtained for LSU evaluation.
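To make the input format concrete, the following is a minimal Python sketch of how a LeSC sample might be assembled into the prompt-question concatenation $[P, x]$ described above; the field names and the example prompt wording are illustrative assumptions, not the exact templates used in the paper.

```python
import random

def build_lesc_input(prompt: str, sentence: str, word: str, options: list[str]) -> str:
    """Assemble one multiple-choice LeSC query as the concatenation [P, x].

    `prompt` is a task- or role-oriented instruction P; the rest forms the
    question x. Option order is shuffled because LLMs are sensitive to it.
    """
    options = options.copy()
    random.shuffle(options)                      # keep the option order fully random
    labels = "ABCDE"[: len(options)]
    option_block = "  ".join(f"{l}: {o}" for l, o in zip(labels, options))
    question = (
        f"Input: {sentence}\n"
        f"Which option best represents the meaning of '{word}' here?\n"
        f"{option_block}\nAnswer:"
    )
    return prompt + "\n" + question              # [P, x]

# Hypothetical usage with an example resembling the 'air' case in Table 4
p = "You are an English teacher. Choose the correct Chinese meaning of the target word."
x = build_lesc_input(p, "The weekly meeting enables employees to air their complaints.",
                     "air", ["空气", "外观", "公开发表", "晒干"])
print(x)
```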

Notably, our evaluation encompasses not only the model's understanding of fine-grained lexical semantics but also its capacity for cross-lingual transfer. We formulate the questions in English while the candidate answers are presented in Chinese (Figure 1 shows an example). For comparison, we additionally render the original options in English to facilitate further validation and analysis. More descriptions can be found in Appendix A.

2.1.2 Evaluation metrics

For a question $q$, we denote its standard answer as $a(q)$ and the answer of LLM $M$ by $M(q)$. To evaluate the comprehension ability of $M$, we consider the following two metrics:

Absolute Accuracy

This is the average accuracy most commonly adopted in NLP tasks. In the following experiments, unless otherwise explicitly specified, accuracy always refers to this metric.

$Acc_{\mathrm{abs}}(M;Q)=\frac{\sum_{q\in Q}\mathbb{I}[M(q)=a(q)]}{|Q|}$ (1)

where $Q$ is the set of all test questions and $\mathbb{I}$ is the indicator function, which equals 1 if the model answers correctly and 0 otherwise.

Weighted Accuracy

As illustrated in §2.1.1, LLMs exhibit a marked sensitivity to the order of options. This means that questions the model $M$ answers correctly may simply be those whose ground-truth labels coincidentally align with the option symbols (e.g., 'B') that $M$ tends to favor and select, which is not convincing evidence of genuine understanding (Appendix A). Hence, we introduce a novel weighted metric to mitigate the impact of model symbol bias:

$Acc_{\mathrm{wtd}}(M;Q)=\frac{\sum_{q\in Q}W(idx(q))\cdot\mathbb{I}[M(q)=a(q)]}{\sum_{w\in W}w}$ (2)

$W(i)=\frac{1}{acc_{\mathrm{options}}(gt(i))/\mathrm{sum}(acc_{\mathrm{options}})}$ (3)

where $W$ denotes the weight vector of shape $(1,|Q|)$, $idx(q)$ and $gt(i)$ are, respectively, the index of question $q$ in LeSC and the label (option position) of question $i$, and $acc_{\mathrm{options}}$ is a five-element vector whose $k$-th element is the absolute accuracy of $M$ when the ground-truth answer of each question in $Q$ is placed at option position $k$. Additionally, if a question has fewer than five options ($n<5$), we keep its original order of options when computing $acc_{\mathrm{options}}$ for positions beyond $n$.
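As a concrete illustration, the following is a minimal Python sketch of Eqs. (1)-(3) under the definitions above; the data structures (a list of records with a gold option index and a per-position accuracy vector) are assumptions made for this example, not the paper's released evaluation code.

```python
from typing import Dict, List

def absolute_accuracy(records: List[Dict]) -> float:
    """Eq. (1): fraction of questions answered correctly."""
    return sum(r["pred"] == r["gold"] for r in records) / len(records)

def weighted_accuracy(records: List[Dict], acc_options: List[float]) -> float:
    """Eqs. (2)-(3): down-weight questions whose gold label sits at an
    option position the model is biased towards.

    `acc_options[k]` is the model's absolute accuracy when every gold answer
    is placed at option position k (0-based here for simplicity).
    """
    total = sum(acc_options)
    weights, hits = [], []
    for r in records:
        w = 1.0 / (acc_options[r["gold"]] / total)   # Eq. (3)
        weights.append(w)
        hits.append(w * (r["pred"] == r["gold"]))    # numerator of Eq. (2)
    return sum(hits) / sum(weights)                  # Eq. (2)

# Hypothetical usage: two questions, gold answers at positions 1 and 2
recs = [{"gold": 1, "pred": 1}, {"gold": 2, "pred": 0}]
acc_opts = [0.30, 0.55, 0.20, 0.25, 0.22]            # made-up per-position accuracies
print(absolute_accuracy(recs), weighted_accuracy(recs, acc_opts))
```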

2.2 Models and methods

In this part, we first describe the models utilized in our research (§2.2.1). Then the human-evaluation baseline and a random baseline are presented for comparative analysis (§2.2.2). Additionally, we delve into advanced strategies, such as prompting techniques (§2.2.3) and retrieval-augmented generation (§2.2.4), to offer a more comprehensive analysis. Finally, attention visualization is introduced to give some intuitive explanations (§2.2.5).

2.2.1 Selected models

To ensure the efficacy and generalizability of our research for both academic researchers and commercial applications, we consider models with varying architectures and sizes, both open-source and closed-source: GPT-3.5 and GPT-4, Vicuna-v1.5 (7B, 13B, 33B) (Chiang et al., 2023), Llama2 (7B, 13B) (Touvron et al., 2023), Qwen (7B, 14B) (Bai et al., 2023), Baichuan2 (7B, 13B) (Baichuan, 2023), and ChatGLM3-6B (Du et al., 2022). This allows for a comprehensive quantification of LLMs' language understanding capabilities across various dimensions. We give brief descriptions below; for more details, please refer to the official websites or the corresponding Huggingface repositories (https://huggingface.co/models).

1. ChatGPT: Developed by OpenAI, ChatGPT is a large language model designed to produce human-like text in response to given prompts. Built on the GPT-3 architecture, the GPT-3.5 series has undergone fine-tuning for enhanced interactivity and conversational capabilities. Notably, GPT-4 stands out as the most proficient LLM in terms of performance.

2. Vicuna-v1.5: The Vicuna model, derived by LMSYS from fine-tuning the LLaMA-2 base model, was developed using around 70K user-shared conversations obtained from ShareGPT.com through public APIs.

3. Llama2: The Llama2 model, developed by Meta AI's FAIR team, is a widely used autoregressive language model based on the Transformer architecture.

4. Qwen: Proposed by Alibaba Cloud, the Qwen series comprises strong base language models that have been stably pretrained on up to 3 trillion tokens of multilingual data with wide coverage of domains and languages (with a focus on Chinese and English), achieving competitive performance on benchmark datasets.

5. Baichuan2: Baichuan2 is the new generation of large-scale open-source language models launched by Baichuan Intelligence. It is trained on a high-quality corpus with 2.6 trillion tokens and achieves excellent performance on authoritative Chinese and English benchmarks among models of the same size.

6. ChatGLM3: ChatGLM3 is the latest open-source model in the ChatGLM series by Zhipu AI. While retaining many excellent features of the previous two generations, such as smooth dialogue and a low deployment threshold, ChatGLM3 introduces a more powerful base model and more comprehensive function support.

2.2.2 Human evaluation and random baseline

Human evaluation

Following (Jang et al., 2023), we provide human performance on the LSU task to explore the gap between SOTA LLMs and humans. We randomly selected 300 samples from LeSC and then conducted evaluations with 16-year-old participants drawn randomly from international high schools, representing diverse backgrounds to ensure reliable results. This age was chosen based on the widely acknowledged view that sixteen marks a significant transition from childhood to adulthood, with mature cognitive abilities emerging across various cultures (Cowie, 2019; Gordon, 2022). Our hypothesis is that even 16-year-old humans excel at the simple LSU task, so our study measures the understanding level of LLMs against this reference. If SOTA GPT-4 struggles to surpass human performance, this suggests significant shortcomings in LLMs' language understanding from the perspective of human intelligence.

Random baseline

The number of options in LeSC ranges from 2 to 5, with an average of 4.39 across all samples. The random-selection level is defined as the reciprocal of this average (22.77%), a baseline that we intuitively expect existing LLMs to outperform with ease.
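For illustration, this baseline can be reproduced from the per-sample option counts as in the sketch below; the counts used here are made up, and only the formula, the reciprocal of the mean number of options, follows the text.

```python
# Hypothetical per-sample option counts (the real distribution is in LeSC itself)
option_counts = [5, 5, 4, 5, 3, 2, 5, 4]

mean_options = sum(option_counts) / len(option_counts)
random_baseline = 1.0 / mean_options          # reciprocal of the average number of options
print(f"average options = {mean_options:.2f}, random baseline = {random_baseline:.2%}")
```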

2.2.3 Prompting methods

Researchers have proposed various prompting methods to elicit LLMs' latent capabilities and improve performance. Earlier explorations (Brown et al., 2020; Bubeck et al., 2023) revealed the noteworthy influence of few-shot prompting on enhancing LLM performance. Chain-of-thought (CoT) prompting (Wei et al., 2022; Kojima et al., 2022), a type of few-shot prompting, achieved notable improvements across challenging benchmarks with a straightforward change, namely replacing the answers in few-shot examples with step-by-step answers, especially when combined with very large language models like ChatGPT (OpenAI, 2023). Furthermore, (Kojima et al., 2022) demonstrated that suitable hints in prompts also yield respectable performance, even in the absence of any exemplar. However, (Shi et al., 2023) also pointed out that inappropriate prompts may impede the model's comprehension and decision-making processes to some extent. In this paper, we investigate the impact of these cutting-edge prompting techniques on our benchmark LeSC to explore their real effects on the LSU problem.
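To make these prompting variants concrete, here is a minimal sketch of how the zero-shot, few-shot, CoT, and zero-shot-CoT inputs could be assembled; the template wording and demonstration placeholders are illustrative assumptions, not the exact prompts evaluated later in the paper.

```python
def zero_shot(question: str) -> str:
    """Plain zero-shot prompt: the question alone."""
    return f"{question}\nAnswer:"

def few_shot(question: str, demos: list[tuple[str, str]]) -> str:
    """In-context learning: prepend (question, answer) demonstrations."""
    block = "\n\n".join(f"{q}\nAnswer: {a}" for q, a in demos)
    return f"{block}\n\n{question}\nAnswer:"

def cot(question: str, demo_q: str, demo_rationale: str) -> str:
    """Chain of thought: the demonstration answer is a step-by-step rationale."""
    return f"{demo_q}\nAnswer: {demo_rationale}\n\n{question}\nAnswer:"

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT trigger from Kojima et al. (2022), as used in this paper."""
    return f"{question}\nAnswer: Let's think step by step:"
```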

2.2.4 Retrieval-augmented generation

While LLMs have demonstrated impressive capabilities, they face issues such as hallucination and reliance on outdated knowledge (Huang et al., 2023; Kandpal et al., 2023). Retrieval-augmented generation has emerged as a promising solution in recent years (Gao et al., 2023; Zhao et al., 2024). By integrating pertinent external knowledge and information, this method facilitates accurate and credible generated content and reasoning, particularly in knowledge-intensive tasks. Earlier work used retrieval models to obtain relevant documents and fed them directly into the generation model together with the original input (Chen et al., 2017; Qu et al., 2021). Recent explorations focus on enhancing retriever performance and on using only the useful information in retrieved documents (Kandpal et al., 2023; Asai et al., 2024; Yoran et al., 2024). For example, (Luo et al., 2023) fine-tunes a language model on the Alpaca instruction-tuning data with the top retrieved passages inserted before the instructions. GenRead (Yu et al., 2023) generates contextual information from the implicit knowledge of an LLM and then produces the final answers with these relevant documents. Self-RAG (Asai et al., 2024) designs four special tokens, generates instruction-tuning data labeled by GPT-4, and uses it to fine-tune a Llama2 model.

In this work, given the unique attributes of our research, retrieving relevant documents from external databases like Wikipedia or conventional search engines may be challenging and inefficient. Furthermore, leveraging ChatGPT's extensive training on over 40TB of internet data, we believe that ChatGPT is well suited to serve as an external knowledge source. Thus, given an input $x$ consisting of task instructions and input content, we begin by using ChatGPT to generate relevant documents $d$, followed by extracting and summarizing useful snippets to create the final evidence passages $p$. Ultimately, for LLM $M$, the answer $A$ is produced as follows:

$A = M(x, p)$ (4)
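The following Python sketch illustrates this generate-then-read pipeline under the assumptions above; `chat(...)` stands for any chat-completion call (e.g., to ChatGPT or a local model) and is a placeholder, not a specific API from the paper.

```python
def chat(prompt: str) -> str:
    """Placeholder for a chat-completion call to ChatGPT or a local LLM."""
    raise NotImplementedError

def generate_then_read(x: str) -> str:
    """Answer input x by first generating evidence, then answering (Eq. 4: A = M(x, p))."""
    # Step 1: use ChatGPT as the knowledge source to generate a relevant document d
    d = chat(f"Generate a short background document relevant to this question:\n{x}")
    # Step 2: extract and summarize the useful snippets into evidence passages p
    p = chat(f"Summarize only the parts of the following document that help answer "
             f"the question.\nQuestion:\n{x}\nDocument:\n{d}")
    # Step 3: the evaluated model M produces the answer A = M(x, p);
    # here chat() stands in for M, which may be a different model in practice
    return chat(f"Evidence:\n{p}\n\n{x}")
```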
[Figure 3]

2.2.5 Visualization Technique

Text visualization (Cao et al., 2016; Zhu et al., 2023) can convey information intuitively and effectively. To evaluate an LLM's ability to recognize fine-grained semantic information within a sentence (a word or phrase), we introduce an attention visualization technique, Attention by Gradient, which assigns a score to each word based on its gradient norm.

As an effective method for open-source models, attention by gradient determines the significance of each word in the input through gradient analysis. Specifically, given an input $x$ consisting of $k$ words and $n$ tokens with label $y$, we write $x=[t_1^1, t_1^2, \dots, t_k^n]$, where $t_i^j$ denotes the $j$-th token of the $i$-th word. Tokens belonging to the same word are aggregated through a mapping function $w_i = f_{map}(t_i^j)$. For the model $M$, we first compute the token-level gradient as follows:

$g_{t_i^j} = \frac{\partial L(f_M(x), y)}{\partial t_i^j}$ (5)

where $L$ is the loss function (cross-entropy by default in our paper) and $f_M$ denotes the function computed by model $M$.

Upon obtaining the above gradients, the word-level gradient can be computed by summing the token-level gradients corresponding to each word:

$g_{w_i} = \sum_{j\in\{0,1,\dots,n\}} g_{t_i^j} \quad \text{s.t. } w_i = f_{map}(t_i^j)$ (6)

Finally, we calculate the $\ell_2$ norm of each word's gradient and apply normalization to yield a score $s_{w_i}$ for each word:

$s_{w_i} = \frac{\|g_{w_i}\|_2 - \min_i \|g_{w_i}\|_2}{\max_i \|g_{w_i}\|_2 - \min_i \|g_{w_i}\|_2}$ (7)

Note that the magnitude of an attention score reflects the model's focus on the corresponding word or nuanced semantics: the higher the attention score, the darker the corresponding color in the visualizations.
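A minimal PyTorch sketch of this gradient-based scoring is given below, assuming a HuggingFace causal LM with a fast tokenizer; the tokenizer's word_ids() mapping plays the role of $f_{map}$, and the loss and word-splitting logic are simplified for illustration rather than being the paper's exact implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def word_saliency(model_name: str, text: str) -> dict:
    """Score each word of `text` by the gradient norm of the LM loss with
    respect to its input embeddings (Eqs. 5-7), normalized to [0, 1]."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    enc = tok(text, return_tensors="pt")
    # Treat the token embeddings as a leaf tensor so their gradient is retained
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)

    # Cross-entropy next-token loss on the input itself, Eq. (5)
    out = model(inputs_embeds=embeds, labels=enc["input_ids"])
    out.loss.backward()

    token_grads = embeds.grad[0]              # (n_tokens, hidden_dim)
    word_ids = enc.word_ids(0)                # token -> word index, plays the role of f_map
    words = text.split()
    word_grads = torch.zeros(len(words), token_grads.shape[-1])
    for j, wid in enumerate(word_ids):        # Eq. (6): sum token gradients per word
        if wid is not None and wid < len(words):
            word_grads[wid] += token_grads[j]

    norms = word_grads.norm(dim=-1)           # l2 norm per word
    scores = (norms - norms.min()) / (norms.max() - norms.min() + 1e-12)  # Eq. (7)
    return dict(zip(words, scores.tolist()))
```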

3 Experimental results and analysis

In this section, we present comprehensive experiments and aim to answer three challenging yet meaningful questions: §3.1 whether the LSU issue exists in SOTA LLMs, §3.2 how to mitigate it, and §3.3 why it persists and proves difficult to resolve. All experiments in this paper are run on an Nvidia A100 GPU.

3.1 Are LLMs proficient in lexical semantic understanding?

We comprehensively compare model performance on LeSC, with the main results illustrated in Figures 3 and 4; detailed results are listed in Appendix B. Each reported value is the absolute accuracy on the whole dataset under different settings (prompt types, model architectures, sizes, etc.). Our results offer three key insights, as follows.

3.1.1 LLMs including GPT-4 perform poorly on LSU

As shown in Figure 3, all considered cutting-edge LLMs, both open-source and closed-source and of diverse scales and architectures, consistently exhibit less-than-satisfactory performance far below expectations on the LSU task. Specifically, as one of the most powerful LLMs in the open-source domain (Liu et al., 2023), the 33B Vicuna model surprisingly achieves less than 50% accuracy on this task. At the same time, Vicuna-7B exhibits a complete lack of understanding, with a performance of 16.94% that is even worse than random selection. Furthermore, despite the improvements from instruction fine-tuning of the Llama families and better generalization to new instructions in zero-shot scenarios, Vicuna-family models still fall short on this fundamental comprehension task. This suggests that the emergence of this issue cannot be attributed to LLMs' difficulty in adhering to instructions (Jang et al., 2023), but rather stems from essential challenges in intrinsic fine-grained comprehension.

For larger-scale commercial models, SOTA GPT-4 exhibits an average performance gap of 3.9% compared with 16-year-old humans, while GPT-3.5 shows a 22.3% gap, which contrasts starkly with their near-perfect performance on diverse semantic understanding tasks like reading comprehension (Bubeck et al., 2023). This thought-provoking discrepancy prompts us to reflect on whether LLMs truly possess fine-grained, accurate, human-like language comprehension and analytical skills, rather than simply relying on co-occurrence contrasts and mimicking language patterns, i.e., behaving as stochastic parrots.

[Figure 4]

3.1.2 Role-oriented prompts outperform task-oriented

Despite the overall suboptimal performance, a comparison between the two prompt types (Figure 3) shows that role-oriented prompts hold a certain advantage over task-oriented ones. This could be attributed to the inherent propensity of LLMs to emulate acquired knowledge and task types, enabling effective handling of various problems. Thus, when endowed with role-specific instructions, LLMs may swiftly and accurately navigate the relevant state space, thereby fully leveraging their intrinsic comprehension powers.

3.1.3 Cross-lingual transfer capacity of LLMs encounters limitations

Since the original questions and options are presented in English and Chinese, respectively, this configuration also assesses language capability transfer in LLMs. For an in-depth analysis, we employ GPT-4 to convert all options into English, creating a new dataset named LeSC-EN. Figure 4 presents partial test results on LeSC-EN, leading to the following findings. Note that we shift the metric value of Vicuna-7B-CN (+10%) for better presentation, which does not influence our conclusions.

Firstly, for models of the same type (Vicuna-CN), we observe a progressive improvement in the transfer of cross-lingual comprehension abilities as the scale grows from 7B to 33B. However, this trend remains limited and correlates poorly with the higher inference costs that come with model scaling. Secondly, disparities in pre-training corpora may account for substantial divergences in cross-lingual transfer among different models. Taking the Vicuna and Baichuan models as an example, Baichuan2-13B, benefiting from a larger share of Chinese pre-training corpora, surpasses Vicuna-13B on the original LeSC task. Nevertheless, upon switching to English (LeSC-EN), the Vicuna models, endowed with a greater proportion of English corpora, manifest a pronounced performance improvement (about 10%), whereas the Baichuan model undergoes a relative decline.

Our results reveal notable constraints in the cross-lingual transfer proficiency of current LLMs. Consequently, we advocate for an in-depth assessment of models’ capabilities and attributes across diverse transfer scenarios, including domains, cultures, and linguistic styles. We consider this imperative for better advancements in LLM research and leave further exploration for future work.

Table 1: Results on LeSC for 7B/13B/33B models under different methods. Values in parentheses denote the change relative to the Base row; in the original, the RAG row is underlined as the best-performing method.

| Method | Explanation | Abs. Acc. (7B) | Abs. Acc. (13B) | Abs. Acc. (33B) | Wtd. Acc. (7B) | Wtd. Acc. (13B) | Wtd. Acc. (33B) |
|---|---|---|---|---|---|---|---|
| Base | / | 16.94 | 37.83 | 44.50 | 15.96 | 35.49 | 41.49 |
| Random | Random selection | 22.77 | 22.77 | 22.77 | 22.77 | 22.77 | 22.77 |
| Human | Human level | 91.90 | 91.90 | 91.90 | 91.90 | 91.90 | 91.90 |
| Few-shot | In-context learning | 32.36 (+15.42) | 45.81 (+7.98) | 49.22 (+4.72) | 32.04 (+16.08) | 45.11 (+9.62) | 49.80 (+8.31) |
| CoT | CoT | 31.67 (+14.73) | 51.03 (+13.20) | 35.14 (-9.36) | 31.30 (+15.33) | 51.87 (+16.38) | 37.35 (-4.14) |
| 0-CoT | Zero-shot CoT | 23.83 (+6.89) | 42.00 (+4.17) | 47.81 (+3.31) | 22.44 (+6.48) | 38.84 (+3.35) | 43.81 (+2.32) |
| RAG | / | 49.78 (+32.84) | 53.67 (+15.84) | 62.81 (+18.31) | 49.00 (+33.04) | 52.28 (+16.79) | 59.76 (+18.27) |
| Modified | Modified problems | 29.81 (+12.87) | 52.14 (+14.31) | 59.31 (+14.81) | 27.74 (+11.78) | 49.51 (+14.02) | 56.81 (+15.32) |
| Ins-hint | Inst.+hint | 18.69 (+1.75) | 44.89 (+7.06) | 46.69 (+2.19) | 17.57 (+1.61) | 41.93 (+6.44) | 43.30 (+1.81) |
| Ins-false | Inst.+false info. | 6.00 (-10.94) | 10.36 (-27.47) | 1.50 (-43.00) | 5.75 (-10.21) | 8.67 (-26.82) | 1.47 (-40.02) |
| Ins-false-hint | Inst.+false info.+hint | 13.72 (-3.22) | 15.03 (-22.80) | 2.50 (-42.00) | 12.71 (-3.25) | 12.84 (-22.65) | 2.27 (-39.22) |

3.2 Can prompting techniques and retrieval-augmented generation help mitigate this issue?

To investigate the efficacy of advanced strategies in mitigating the issues discussed in §3.1, we adopt a wide range of the prompting approaches described in §2.2.3, including zero-shot prompting (Zero-shot), few-shot in-context learning (ICL) (Brown et al., 2020), chain-of-thought prompting (CoT) (Wei et al., 2022), and zero-shot chain-of-thought (0-CoT) (Kojima et al., 2022). Notably, for CoT, to keep the time cost acceptable and prevent over-fitting in prompt engineering, we follow (Shi et al., 2023) for exemplar creation; that is, we use only one simple exemplar. For 0-CoT, we follow (Kojima et al., 2022) and directly present the candidate problem followed by "Answer: Let's think step by step:". Additionally, we incorporate manually designed prompts to explicitly guide LLMs towards adherence to human intuitions, commonly termed instructed prompting. For retrieval-augmented generation (RAG), the overall process is illustrated in §2.2.4. Three notable findings can be discerned, as follows.

[Figure 5]

3.2.1 ICL enhances performance within a certain range

As illustrated in Figure 5 for Llama2-13B, several key conclusions can be drawn. Firstly, in terms of both stability and the two specified metrics, incorporating appropriate examples (few-shot) consistently surpasses the zero-shot scenario. While prior work has shown that prompting with exemplars improves model robustness (Shi et al., 2023), our work indicates that in-context learning also yields notable comprehension improvements. This can be attributed to its efficacy in enhancing the model's ability to capture contextual information, thereby facilitating a more comprehensive understanding of the input context and user intent. Secondly, as the number of examples increases (up to 4), model performance exhibits a gradual upward trend, which can be explained by the extensive pre-training of LLMs on massive data, enabling the internalization of diverse language structures and knowledge. Additional examples better activate LLMs' inherent prior knowledge, which, coupled with contextual information, fosters a deeper understanding of the context and thereby enables NLU tasks to be accomplished more effectively. However, an excess of examples may introduce noise, potentially impairing the model's generalizability and decision-making in novel scenarios. More results are shown in Appendix B.

3.2.2 RAG enhances model semantic understanding, especially for small models

As shown in Table 1, introducing external knowledge helps enhance LLMs' comprehension of nuanced semantics, especially for small models. For example, the performance of Vicuna-7B increases by 194%, indicating that RAG can enhance semantic understanding. However, its effectiveness is still limited, and it washes out the effect of model size to some extent; in other words, the final performance largely depends on the quality of the retrieved content.

3.2.3 Scaling challenges: diminishing benefits of advanced prompting in large language models

[Figure 6]

Table 1 illustrates that advanced prompting techniques, such as CoT and instructed prompting, effectively enhance overall performance on LeSC. However, on very large LLMs, these strategies can be ineffective or even counterproductive. Our observations reveal a consistent inverse scaling trend for methods like CoT and few-shot in-context learning, whose efficacy in improving LLMs' performance diminishes with scale. Notably, on the Vicuna-33B model, CoT unexpectedly results in detrimental effects. We conjecture that as the model scale increases, the biases in the built-in pre-training corpus escalate, leading to overconfidence in existing prior knowledge when processing tasks. Simultaneously, larger models, calibrated with human intent (Ouyang et al., 2022), may exhibit heightened adherence to human instructions. In such cases, additional cues like CoT may conflict with the LLM's intrinsic knowledge, causing uncertainty and potential hallucinations, and ultimately harming decision accuracy.

Taking Figure 6 as an example, the task is to infer the meaning of the word 'base' in the sentence "Their base conduct was condemned by everyone." Employing the CoT method for stepwise analysis, the model is first required to perform part-of-speech (POS) tagging to eliminate inappropriate options. While LLMs typically excel at POS tagging (Chang et al., 2023; Bubeck et al., 2023), the Vicuna-33B model anomalously identifies 'base' as a noun, deviating from the anticipated outcome.

Table 2: Example inputs and predictions under different instruction settings (the word-level attention shading of the original visualization cannot be reproduced here; "[False Information]" marks where the misleading note is inserted).

| Type | Pred | Inputs |
|---|---|---|
| Base | B | Input: I don't follow your line of reasoning. A: line segment B: method Answer: |
| Ins-false | A | [False Information] Input: I don't follow your line of reasoning. A: line segment B: method Answer: |
| Ins-false-hint | A | The following note has some false information, and just feel free to ignore them. [False Information] Input: I don't follow your line of reasoning. A: line segment B: method Answer: |

3.2.4 LLMs significantly focus more on misleading information over corrective instructions

To further investigate the underlying reasons for the above issues, we employ instructed prompting (Shi et al., 2023). By providing both correct and incorrect information, we aim to discern whether LLMs are merely stochastic parrots that genuinely lack nuanced semantic understanding, or whether, as previous research concludes (Jang et al., 2023), 'LLMs merely struggle with following instructions, not with comprehension'. Specifically, three scenarios are configured: 'instruction with correct hint', 'instruction with false information', and 'instruction with false information plus a correct ignoring hint'. The outcomes reveal a noteworthy impact of these instructions on LLMs, reaching up to 96% effectiveness. In other words, within the scope defined here, models demonstrate proficiency in following instructions and executing tasks accordingly. Thus, we attribute the subpar performances in §3.1 primarily to LLMs' inadequate semantic comprehension.

Additionally, we observe an intriguing finding: in lexical comprehension tasks, LLMs significantly prioritize misleading information over valuable instructions. More specifically, under the 'ins-hint' setup, the benefits of correct instructions are relatively limited. Nevertheless, when erroneous information is introduced ('ins-false'), LLMs experience a pronounced decline in performance, exemplified by Vicuna-33B's 96% drop rate. Even when provided with explicit instructions to ignore the incorrect information, their dependency on it remains challenging to mitigate. This finding highlights the need for future research to focus more on the impact of inevitable false information and to explore practical alleviating strategies. Table 2 is an example using the attention visualization described in §2.2.5, where LLMs directly disregard the correct instruction 'The following note has some false information, and just feel free to ignore them'.

Table 3: Accuracy before (Acc-Base) and after introducing a translation step (Acc-Translation); values in parentheses show the absolute improvement.

| Model | Size | Acc-Base | Acc-Translation |
|---|---|---|---|
| Vicuna-v1.5 | 7B | 16.94 | 48.50 (+31.56) |
| Vicuna-v1.5 | 13B | 37.83 | 56.50 (+18.67) |
| Vicuna-v1.5 | 33B | 44.50 | 46.33 (+1.83) |
| Llama-2 | 7B | 17.22 | 32.67 (+15.45) |
| Llama-2 | 13B | 35.94 | 45.50 (+9.56) |

3.3 Conjectures on why this problem exists

In this section, we perform an in-depth analysis to unravel the intricacies of this issue, addressing the question of why this problem is hard to solve for LLMs.

3.3.1 Inherent limitation of stochastic parrots

We supplement an additional experiment to probe the underlying mechanisms, since word-, phrase-, or sentence-level semantic understanding tasks are inherently akin to translation tasks, at which LLMs excel (Chang et al., 2023). In particular, we guide LLMs to translate the provided questions and subsequently employ GPT-4 to assess the results, examining whether the translations convey the nuanced semantic information. That is, for a candidate question, if GPT-4 determines that a translation accurately captures the uncommon meaning of the common word, we consider it a successful sample.
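A minimal sketch of this translate-then-judge protocol is given below; `chat(model, prompt)` is a placeholder for a completion call to the respective model, and the judging prompt is an illustrative assumption rather than the exact wording used in the paper.

```python
def chat(model: str, prompt: str) -> str:
    """Placeholder for a completion call to `model` (e.g., a Vicuna or GPT-4 endpoint)."""
    raise NotImplementedError

def translation_success(candidate_model: str, sentence: str, word: str) -> bool:
    """Ask the candidate model to translate, then let GPT-4 judge whether the
    uncommon meaning of `word` was preserved in the translation."""
    translation = chat(candidate_model,
                       f"Translate the following sentence into Chinese:\n{sentence}")
    verdict = chat("gpt-4",
                   f"Sentence: {sentence}\nTranslation: {translation}\n"
                   f"Does the translation accurately capture the meaning of the word "
                   f"'{word}' as used in this sentence? Answer Yes or No.")
    return verdict.strip().lower().startswith("yes")
```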

[Figure 7]

As shown in Table 3, introducing a translation step yields a consistent improvement across models of varying scales (7B, 13B, 33B) and types (instruction-tuned Vicuna-v1.5 and the original Llama-2). Given this phenomenon, we argue that, in direct question-answering scenarios, the model's comprehension and cross-lingual transfer capabilities are often constrained by substantial pre-training corpus biases and unfamiliarity with the question format. Introducing a translation step, however, exposes the model to a more familiar and proficient task paradigm, thereby capitalizing on its strengths in co-occurrence contrast and imitation. This enables efficient extraction of relevant information from pre-training knowledge and of semantic contextual details from the provided questions, leading to more precise answers. Nevertheless, it must be noted that the translation gains appear to diminish as model scale increases, owing to the current limits of LLMs' sophisticated comprehension. This phenomenon also emphasizes the limited effectiveness of current approaches, highlighting the urgency of further research into more potent solutions.

Table 4: Case study. CoT responses of different models to the same question (✗ marks an incorrect answer, ✓ a correct one).

Prompt: In the given sentence, which option best represents the meaning of 'air' in Chinese?
Input: The weekly meeting enables employees to air their complaints.
Options: A: 空气  B: 外观  C: 公开发表 (Ground Truth)  D: 晒干

ChatGLM3-6B (A ✗): <Step 1>: In the given sentence, the word 'air' is likely used as a verb meaning to express or share something. So, options 'A', 'B', 'C', and 'D' do not fit this context. <Step 2>: Option 'E' can be eliminated because 'air' is not used as a noun or a verb related to the meeting in this sentence. Therefore, the best option for the Chinese meaning of 'air' in this context is 'A'. <Step 3>: The final answer is A ✗

Llama2-7B (A ✗): <Step 1>: In the given sentence, the word 'air' is a verb, so option 'A' (noun) is non-conforming and must be removed, leaving only 'B', 'C', and 'D'. <Step 2>: Then for remaining options ['B', 'C', 'D'], option B refers to the appearance or look of something, which does not align with the context of airing complaints. Option C means to publicly express or reveal something, which is close but not the best fit for the context of airing complaints. Option D means to dry or expose something to the air, which is also not the best fit for the context of airing complaints. Therefore, the best option for the Chinese meaning of 'air' in the sentence is 'B'. <Step 3>: The final answer is B ✗

Llama2-13B (A ✗): <Step 1>: In the given sentence, the word 'air' is a noun, so we need to find the option that best represents the meaning of 'air' as a noun. <Step 2>: Options A and D both relate to the physical sensation of air, but option D specifically refers to the sensation of hot air, which is not the meaning of 'air' in the sentence. Option C is a noun, but it does not fit the context of the sentence. <Step 3>: Therefore, the best option for the Chinese meaning of 'air' based on how it is used in this sentence is option B, which refers to the appearance or look of something. <Step 4>: The final answer is B ✗

Vicuna-13B (A ✗): <Step 1>: In the given sentence, the word 'air' is a verb, so option 'A' (noun) is non-conforming and must be removed, leaving only 'B', 'C', and 'D'. <Step 2>: Then for remaining options ['B', 'C', 'D'], option B refers to appearance or exterior, which doesn't fit the context of the sentence. Option C means 'to make something public or known', which is not the meaning of 'air' in this sentence. Option D means 'to expose to the air', which is not the meaning of 'air' in this sentence. Therefore, none of the options B, C, or D are suitable for the Chinese meaning of 'air' based on how it is used in this sentence. <Step 3>: Since none of the options are suitable, the answer is 'None of the above' ✗

Baichuan-13B (D ✗): <Step 1>: In the provided phrase, "air" means express oneself freely without reservation or fear about what others might say as if you are speaking directly into an open space like freshly cleaned room where dust particles have been cleared out. Option D ("晒") also has similar connotation but usually implies sharing personal experiences/stories publicly. Option C doesn't fit well because it suggests publishing something formally while we need more freedom when using "air" here. <Step 2>: Among these choices, there seems no direct match between English words and corresponding terms in Mandarin except for "air". So let's check whether any other term can replace them accurately. <Step 3>: Finally, considered above factors, I believe that the correct response should include "air" ✗

Qwen-14B (C ✓): <Step 1>: In the given sentence, the word 'air' is a verb, so option 'A' (noun) is non-conforming and must be removed, leaving only 'C' and 'D'. <Step 2>: Then for remaining options ['C', 'D'], option C translates to 'publicly express', which aligns with the context of airing complaints. And option D refers to drying, not fitting the meaning of airing complaints in this sentence. Therefore, C is the best choice for the Chinese meaning of 'air' based on how it's used in the sentence. <Step 3>: The final answer is C ✓

3.3.2 Overconfidence to some degree

As discussed in (Si et al., 2023; Ying et al., 2023), LLMs accumulate substantial knowledge within their parameters, and this knowledge capacity scales with model size. We conjecture that LLMs, having been pre-trained on vast data with billions of parameters, internalize extensive knowledge, which fosters heightened self-confidence and occasionally biased tendencies. Under this hypothesis, when confronted with relatively novel tasks that require fine-grained comprehension, these models struggle even when advanced techniques such as CoT are applied to decompose the problem into familiar subtasks like part-of-speech tagging; instead, they persistently give responses based on their own perceived correctness. We believe this is especially difficult for large LLMs: as powerful language-modeling representers, they are harder to steer away from their original beliefs and toward the specifics of the problem at hand. Our conjecture is partially supported by Figure 7. We randomly collect 1200 samples each of correctly and incorrectly answered questions, extract the outputs from the model's final layer, and calculate an uncertainty score as the negative likelihood value. The results show that LLMs frequently make erroneous predictions while exhibiting low uncertainty and heightened confidence; conversely, they tend to provide correct responses when they are more cautious.
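The following PyTorch sketch shows one way such an uncertainty score could be computed for a multiple-choice answer, taking the negative log-likelihood of the emitted option token from the model's final-layer logits; the exact aggregation used in the paper is not specified, so this is an illustrative assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def answer_uncertainty(model_name: str, prompt: str, option_letters=("A", "B", "C", "D", "E")):
    """Return (predicted_option, uncertainty), where uncertainty is the negative
    log-likelihood of the chosen option symbol given the prompt."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    with torch.no_grad():
        ids = tok(prompt, return_tensors="pt").input_ids
        logits = model(ids).logits[0, -1]          # next-token logits after the prompt
        log_probs = torch.log_softmax(logits, dim=-1)

    # Log-probability of each candidate option symbol
    option_ids = [tok(f" {l}", add_special_tokens=False).input_ids[-1] for l in option_letters]
    option_lp = log_probs[option_ids]
    best = int(option_lp.argmax())
    uncertainty = -float(option_lp[best])          # negative likelihood of the chosen answer
    return option_letters[best], uncertainty
```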

3.3.3 Coarse-Grained Competence: Impaired Detail-Oriented Task Performance

LLMs exhibit remarkable proficiency in tasks that mainly involve macro-level comprehension, such as translation and summarization. However, given their inclination towards imitation and co-reference resolution during massive data-driven pre-training, these models may merely map the acquired knowledge into a relatively coarse and sparse latent space, overlooking the deeper semantics of fundamental discrete units such as words. In other words, although LLMs are often regarded as understanding the world well, on these fine-grained tasks they mainly mimic language patterns and logic, behaving as what we refer to as ‘stochastic parrots’. Hence, we propose that future research delve into LLMs’ understanding of basic logical symbols and other fine-grained elements, facilitating the construction of more nuanced and enriched world-mapping models.

4 Case Study

In addition to the example shown in Figure 1, we include more example problems and predictions from different models and prompting methods (Table 4). We observe that even with advanced prompting techniques such as CoT, generating accurate responses remains challenging due to issues such as hallucination and error accumulation, which may stem from an insufficient capacity for fine-grained semantic comprehension.
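
As a side note on how such predictions can be scored, below is a minimal sketch that parses the final option letter from CoT-style outputs like those in Table 4 and computes accuracy; the regular expression and the demo records are illustrative placeholders rather than our exact evaluation script.

import re
from typing import Optional

def extract_choice(response: str) -> Optional[str]:
    """Return the last standalone option letter (A-D) mentioned in a response, if any."""
    matches = re.findall(r"\b([A-D])\b", response)
    return matches[-1] if matches else None

def accuracy(records: list) -> float:
    """records: a list of {'response': model output, 'gold': correct letter} dicts."""
    correct = sum(extract_choice(r["response"]) == r["gold"] for r in records)
    return correct / len(records)

# Illustrative demo: the second response never commits to a valid letter and is scored as wrong.
demo = [
    {"response": "<Step 3>: The final answer is C", "gold": "C"},
    {"response": "Since none of the options are suitable, the answer is 'None of the above'", "gold": "C"},
]
print(f"accuracy: {accuracy(demo):.2f}")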

5 Conclusions

Increasing studies highlight LLMs’ exceptional performance across complex NLU tasks. To advance future progress, researchers are increasingly exploring their fundamental nature: are they merely stochastic parrots based on probabilistic statistics, or do they truly possess human-like nuanced semantic understanding? This question remains unresolved, partly for lack of fine-grained scientific assessments. To bridge this gap, this paper proposes a brand-new benchmark, LeSC, aimed at probing the genuine linguistic-cognitive skills of LLMs. Comprehensive empirical results indicate that existing LLMs face significant challenges in accurately capturing nuanced lexical semantic information; even GPT-4 and GPT-3.5 exhibit noticeable disparities from 16-year-old human performance, with gaps of 3.9% and 22.3%, respectively. Furthermore, we examine various advanced prompting techniques and retrieval-augmented generation, and find that they provide only limited relief. Finally, explanations and case studies are also discussed. Our findings can aid the development of more intelligent LLMs and may also extend to other AI areas such as realistic image generation.

6 Future directions

In addition to offering novel insights into LLMs’ comprehension capabilities, we emphasize three promising future directions. Firstly, given our findings, we encourage researchers to prioritize addressing this fundamental limitation while advancing LLM development. Secondly, incorporating additional models and prompting techniques would enhance the comprehensiveness of evaluation results, contributing to a deeper understanding of LLMs. Last but not least, beyond the noted deficiency in cross-lingual transfer, it is worthwhile to further investigate LLMs’ transferability across domains such as cross-cultural settings and linguistic styles, which may pave the way for powerful artificial general intelligence.

7 Data availability

The datasets analyzed during the current study are available at https://github.com/jinyangwu/LeSC.

8 Acknowledgements

This research was supported by the National Key Research & Development Plan of China (No. 2023YFC3305903) and the National Natural Science Foundation of China (NSFC) (No. 62322120, No. 62306316).

References

  • Anil etal. (2023)Anil, G.T.G.R., Borgeaud, S., Wu, Y., etal., 2023.Gemini: a family of highly capable multimodal models.arXiv preprint arXiv:2312.11805 .
  • Asai etal. (2024)Asai, A., Wu, Z., Wang, Y., Sil, A., Hajishirzi, H., 2024.Self-RAG: Learning to retrieve, generate, and critique through self-reflection, in: The Twelfth International Conference on Learning Representations.URL: https://openreview.net/forum?id=hSyW5go0v8.
  • Bai etal. (2023)Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., Fan, Y., Ge, W., Han, Y., Huang, F., Hui, B., Ji, L., Li, M., Lin, J., Lin, R., Liu, D., Liu, G., Lu, C., Lu, K., Ma, J., Men, R., Ren, X., Ren, X., Tan, C., Wang, S., Wang, W., Wu, S., Xu, B., Xu, J., Yang, A., Yang, H., Yu, B., Yuan, H., Yuan, Z., Zhang, J., Zhang, X., Zhang, Y.,Zhang, Z., Zhou, C., Zhou, J., Zhou, X., Zhu, T., 2023.Qwen technical report.arXiv preprint arXiv:2309.16609 .
  • Baichuan (2023)Baichuan, 2023.Baichuan 2: Open large-scale language models.arXiv preprint arXiv:2309.10305 URL: https://arxiv.org/abs/2309.10305.
  • Bender etal. (2021)Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S., 2021.On the dangers of stochastic parrots: Can language models be too big?, in: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 610–623.
  • Borji (2023)Borji, A., 2023.Stochastic parrots or intelligent systems? A perspective on true depth of understanding in LLMs (July 11, 2023).
  • Brown etal. (2020)Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., etal., 2020.Language models are few-shot learners.Advances in neural information processing systems 33, 1877–1901.
  • Bubeck etal. (2023)Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S., etal., 2023.Sparks of artificial general intelligence: Early experiments with gpt-4.arXiv preprint arXiv:2303.12712 .
  • Cao etal. (2016)Cao, N., Cui, W., 2016.Overview of text visualization techniques.Introduction to Text Visualization, 11–40.
  • Chang etal. (2023)Chang, Y., Wang, X., Wang, J., Wu, Y., Zhu, K., Chen, H., Yang, L., Yi, X., Wang, C., Wang, Y., etal., 2023.A survey on evaluation of large language models.arXiv preprint arXiv:2307.03109 .
  • Chen etal. (2017)Chen, D., Fisch, A., Weston, J., Bordes, A., 2017.Reading Wikipedia to answer open-domain questions, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Vancouver, Canada. pp. 1870–1879.URL: https://aclanthology.org/P17-1171, doi:10.18653/v1/P17-1171.
  • Chen etal. (2024)Chen, W., Su, Y., Zuo, J., Yang, C., Yuan, C., Chan, C.M., Yu, H., Lu, Y., Hung, Y.H., Qian, C., Qin, Y., Cong, X., Xie, R., Liu, Z., Sun, M., Zhou, J., 2024.Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors, in: The Twelfth International Conference on Learning Representations.URL: https://openreview.net/forum?id=EHg5GDnyq1.
  • Chiang etal. (2023)Chiang, W.L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang, H., Zheng, L., Zhuang, S., Zhuang, Y., Gonzalez, J.E., etal., 2023.Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.See https://vicuna.lmsys.org (accessed 14 April 2023).
  • Choi etal. (2023)Choi, M., Pei, J., Kumar, S., Shu, C., Jurgens, D., 2023.Do LLMs understand social knowledge? evaluating the sociability of large language models with SocKET benchmark, in: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Singapore. pp. 11370–11403.URL: https://aclanthology.org/2023.emnlp-main.699, doi:10.18653/v1/2023.emnlp-main.699.
  • Clark and Gerrig (1983)Clark, H.H., Gerrig, R.J., 1983.Understanding old words with new meanings.Journal of verbal learning and verbal behavior 22, 591–608.
  • Cowie (2019)Cowie, H., 2019.From birth to sixteen: Children’s health, social, emotional and linguistic development.Routledge.
  • DeDeyne etal. (2023)DeDeyne, S., Warner, S., Perfors, A., 2023.Common words, uncommon meanings: Evidence for widespread gender differences in word meaning., in: Proceedings of the Annual Meeting of the Cognitive Science Society.
  • Du etal. (2022)Du, Z., Qian, Y., Liu, X., Ding, M., Qiu, J., Yang, Z., Tang, J., 2022.Glm: General language model pretraining with autoregressive blank infilling, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335.
  • DuVivier (1999)DuVivier, K., 1999.Common words with uncommon meanings .
  • Gao etal. (2023)Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J., Wang, H., 2023.Retrieval-augmented generation for large language models: A survey.arXiv preprint arXiv:2312.10997 .
  • Geeraerts (2002)Geeraerts, D., 2002.The theoretical and descriptive development of lexical semantics.The lexicon in focus. Competition and convergence in current lexicology , 23–42.
  • Gordon (2022)Gordon, S., 2022.16-year-old child development milestones: Your child’s growth and development at age 16.Very well Family. Medically reviewed by a board-certified physician .
  • Häggström (2023)Häggström, O., 2023.Are large language models intelligent? are humans?, in: Computer Sciences & Mathematics Forum, MDPI. p.68.
  • Hendrycks etal. (2021)Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., Steinhardt, J., 2021.Measuring massive multitask language understanding.Proceedings of the International Conference on Learning Representations (ICLR) .
  • Huang etal. (2023)Huang, L., Yu, W., Ma, W., Zhong, W., Feng, Z., Wang, H., Chen, Q., Peng, W., Feng, X., Qin, B., etal., 2023.A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions.arXiv preprint arXiv:2311.05232 .
  • Jang etal. (2023)Jang, J., Ye, S., Seo, M., 2023.Can large language models truly understand prompts? a case study with negated prompts, in: Transfer Learning for Natural Language Processing Workshop, PMLR. pp. 52–62.
  • Kandpal etal. (2023)Kandpal, N., Deng, H., Roberts, A., Wallace, E., Raffel, C., 2023.Large language models struggle to learn long-tail knowledge, in: International Conference on Machine Learning, PMLR. pp. 15696–15707.
  • Kojima etal. (2022)Kojima, T., Gu, S.S., Reid, M., Matsuo, Y., Iwasawa, Y., 2022.Large language models are zero-shot reasoners.Advances in neural information processing systems 35, 22199–22213.
  • Li (2023)Li, Z., 2023.The dark side of chatgpt: legal and ethical challenges from stochastic parrots and hallucination.arXiv preprint arXiv:2304.14347 .
  • Liu etal. (2023)Liu, X., Yu, H., Zhang, H., Xu, Y., Lei, X., Lai, H., Gu, Y., Ding, H., Men, K., Yang, K., etal., 2023.Agentbench: Evaluating llms as agents.arXiv preprint arXiv:2308.03688 .
  • Lonergan (1957)Lonergan, B.J., 1957.Insight: A study of human understanding.
  • Luo etal. (2023)Luo, H., Zhang, T., Chuang, Y.S., Gong, Y., Kim, Y., Wu, X., Meng, H., Glass, J., 2023.Search augmented instruction learning, in: Findings of the Association for Computational Linguistics: EMNLP 2023, Association for Computational Linguistics, Singapore. pp. 3717–3729.URL: https://aclanthology.org/2023.findings-emnlp.242, doi:10.18653/v1/2023.findings-emnlp.242.
  • Miller (1967)Miller, G.A., 1967.The psychology of communication.Human Resource Management 6, 43.
  • OpenAI (2023)OpenAI, 2023.Introducing chatgpt.
  • Ouyang etal. (2022)Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., etal., 2022.Training language models to follow instructions with human feedback.Advances in Neural Information Processing Systems 35, 27730–27744.
  • Pal etal. (2022)Pal, A., Umapathi, L.K., Sankarasubbu, M., 2022.Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering, in: Conference on Health, Inference, and Learning, PMLR. pp. 248–260.
  • Peng etal. (2023)Peng, D., Wei, W., Mao, X.L., Fu, Y., Chen, D., 2023.An empirical study on the language modal in visual question answering, in: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, International Joint Conferences on Artificial Intelligence Organization. pp. 4109–4117.URL: https://doi.org/10.24963/ijcai.2023/457, doi:10.24963/ijcai.2023/457. main Track.
  • Qu etal. (2021)Qu, Y., Ding, Y., Liu, J., Liu, K., Ren, R., Zhao, W.X., Dong, D., Wu, H., Wang, H., 2021.RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Online. pp. 5835–5847.URL: https://aclanthology.org/2021.naacl-main.466, doi:10.18653/v1/2021.naacl-main.466.
  • Rai etal. (2023)Rai, D., Zhou, Y., Wang, B., Yao, Z., 2023.Explaining large language model-based neural semantic parsers (student abstract), in: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI Press.URL: https://doi.org/10.1609/aaai.v37i13.27014, doi:10.1609/aaai.v37i13.27014.
  • Riccardi and Desai (2023)Riccardi, N., Desai, R.H., 2023.The two word test: A semantic benchmark for large language models.arXiv preprint arXiv:2306.04610 .
  • Robinson and Wingate (2023)Robinson, J., Wingate, D., 2023.Leveraging large language models for multiple choice question answering, in: The Eleventh International Conference on Learning Representations.URL: https://openreview.net/forum?id=yKbprarjc5B.
  • Shi etal. (2023)Shi, F., Chen, X., Misra, K., Scales, N., Dohan, D., Chi, E.H., Schärli, N., Zhou, D., 2023.Large language models can be easily distracted by irrelevant context, in: International Conference on Machine Learning, PMLR. pp. 31210–31227.
  • Si etal. (2023)Si, C., Gan, Z., Yang, Z., Wang, S., Wang, J., Boyd-Graber, J., Wang, L., 2023.Prompting gpt-3 to be reliable, in: International Conference on Learning Representations (ICLR).URL: https://arxiv.org/abs/2210.09150.
  • Sternberg (1983)Sternberg, R.J., 1983.Components of human intelligence.Cognition 15, 1–48.
  • Touvron etal. (2023)Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., etal., 2023.Llama: Open and efficient foundation language models.arXiv preprint arXiv:2302.13971 .
  • Wang etal. (2018)Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S., 2018.GLUE: A multi-task benchmark and analysis platform for natural language understanding, in: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics, Brussels, Belgium. pp. 353–355.URL: https://aclanthology.org/W18-5446, doi:10.18653/v1/W18-5446.
  • Wei etal. (2022)Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V., Zhou, D., etal., 2022.Chain-of-thought prompting elicits reasoning in large language models.Advances in Neural Information Processing Systems 35, 24824–24837.
  • Xu etal. (2024)Xu, R., Lin, H., Han, X., Sun, L., Sun, Y., 2024.Academically intelligent llms are not necessarily socially intelligent.arXiv preprint arXiv:2403.06591 .
  • Ying etal. (2023)Ying, J., Cao, Y., Xiong, K., He, Y., Cui, L., Liu, Y., 2023.Intuitive or dependent? investigating llms’ robustness to conflicting prompts.arXiv preprint arXiv:2309.17415 .
  • Yoran etal. (2024)Yoran, O., Wolfson, T., Ram, O., Berant, J., 2024.Making retrieval-augmented language models robust to irrelevant context, in: The Twelfth International Conference on Learning Representations.URL: https://openreview.net/forum?id=ZS4m74kZpH.
  • Yu etal. (2023)Yu, W., Iter, D., Wang, S., Xu, Y., Ju, M., Sanyal, S., Zhu, C., Zeng, M., Jiang, M., 2023.Generate rather than retrieve: Large language models are strong context generators, in: The Eleventh International Conference on Learning Representations.URL: https://openreview.net/forum?id=fB0hRu9GZUS.
  • Zhang etal. (2023)Zhang, X., Li, C., Zong, Y., Ying, Z., He, L., Qiu, X., 2023.Evaluating the performance of large language models on gaokao benchmark.arXiv preprint arXiv:2305.12474 .
  • Zhao etal. (2024)Zhao, P., Zhang, H., Yu, Q., Wang, Z., Geng, Y., Fu, F., Yang, L., Zhang, W., Cui, B., 2024.Retrieval-augmented generation for ai-generated content: A survey.arXiv preprint arXiv:2402.19473 .
  • Zheng etal. (2022)Zheng, Y., Zhou, J., Qian, Y., Ding, M., Liao, C., Jian, L., Salakhutdinov, R., Tang, J., Ruder, S., Yang, Z., 2022.FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Dublin, Ireland. pp. 501–516.URL: https://aclanthology.org/2022.acl-long.38, doi:10.18653/v1/2022.acl-long.38.
  • Zhu etal. (2023)Zhu, K., Wang, J., Zhou, J., Wang, Z., Chen, H., Wang, Y., Yang, L., Ye, W., Gong, N.Z., Zhang, Y., etal., 2023.Promptbench: Towards evaluating the robustness of large language models on adversarial prompts.arXiv preprint arXiv:2306.04528 .