A comprehensive guide to the principles and practical applications of large language models

This project introduces the basic concepts, applications, and technical evolution of large language models (LLMs). It covers statistical language models, neural language models, and Transformer-based pre-trained models such as GPT and BERT. It systematically explains the core principles of LLMs and discusses practical techniques such as model evaluation, text generation, and prompt engineering. It also showcases innovative applications of LLMs in areas such as computer vision, combining theory and practice to give readers a thorough guide to LLM technology.


Large Language Models (llms)

Figure (lms.png): an overview of large language models. Source: A Survey of Large Language Models

Content

  • What is a language model?
  • Applications of language models
  • Statistical Language Modeling
  • Neural Language Models (NLM)
  • Conditional language model
  • Evaluation: How good is our model?
  • Transformer-based Language models
  • Practical LLMs: GPT, BERT, Falcon, Llama, CodeT5
  • How to generate text using different decoding methods
  • Prompt Engineering
  • Fine-tuning LLMs
  • Retrieval Augmented Generation (RAG)
  • Ask almost everything (txt, pdf, video, etc.)
  • Evaluating LLM-based systems
  • AI Agents
  • LLMs for Computer vision (TBD)
  • Further readings

Introduction: What is a language model?

Simple definition: Language Modeling is the task of predicting what word comes next.

"The dog is playing in the ..."

  • park
  • woods
  • snow
  • office
  • university
  • Neural network
  • ?

The main purpose of Language Models is to assign a probability to a sentence, to distinguish between the more likely and the less likely sentences.

Applications of language models:

  1. Machine Translation: P(high winds tonight) > P(large winds tonight)
  2. Spelling correction: P(about fifteen minutes from) > P(about fifteen minuets from)
  3. Speech Recognition: P(I saw a van) > P(eyes awe of an)
  4. Authorship identification: who wrote some sample text
  5. Summarization, question answering, dialogue bots, etc.

For Speech Recognition, we use not only the acoustics model (the speech signal), but also a language model. Similarly, for Optical Character Recognition (OCR), we use both a vision model and a language model. Language models are very important for such recognition systems.

Sometimes you hear or read a sentence that is not clear, but using your language model you can still recognize it with high accuracy despite the noisy vision/speech input.

The language model computes either of:

  • The probability of an upcoming word: $P(w_5 | w_1, w_2, w_3, w_4)$
  • The probability of a sentence or sequence of words (according to the Language Model): $P(w_1, w_2, w_3, ..., w_n)$

Language Modeling is a subcomponent of many NLP tasks, especially those involving generating text or estimating the probability of text.

The Chain Rule: $P(x_1, x_2, x_3, …, x_n) = P(x_1)P(x_2|x_1)P(x_3|x_1,x_2)…P(x_n|x_1,…,x_{n-1})$

$P(The, water, is, so, clear) = P(The) × P(water|The) × P(is|The, water) × P(so|The, water, is) × P(clear | The, water, is, so)$

What just happened? The Chain Rule is applied to compute the joint probability of words in a sentence.


Statistical Language Modeling:

n-gram Language Models

Using a large amount of text (a corpus such as Wikipedia), we collect statistics about how frequent different words are, and use these to predict the next word. For example, the probability that a word w comes after the three words students opened their can be estimated as follows:

  • P(w | students opened their) = count(students opened their w) / count(students opened their)

The above example is a 4-gram model, and we may get:

  • P(books | students opened their) = 0.4
  • P(cars | students opened their) = 0.05
  • P(... | students opened their) = ...

We can conclude that the word “books” is more probable than “cars” in this context.

Note that we ignored any context before "students opened their".

Accordingly, arbitrary text can be generated from a language model given starting word(s), by sampling from the output probability distribution of the next word, and so on.

We can train an LM on any kind of text, then generate text in that style (Harry Potter, etc.).

<!-- ### How to estimate these probabilities? Amusing we have a large text corpus (data set like Wikipedia), we can count and divide as follows: - $P(clear |The, water, is, so) = Count (The, water, is, so, clear) / Count (The, water, is, so)$ --> <!-- Sparsity: Sometimes we do not have enough data to estimate the following: - $P(clear |The, water, is, so) = Count (The, water, is, so, clear) / Count (The, water, is, so)$ Markov Assumption (Simplifying assumption): - $P(clear |The, water, is, so) ≈ P(clear | so)$ - Or $P(clear |The, water, is, so) ≈ P(clear | is, so)$ Formally: - $P(w_1 w_2 … w_n ) ≈ ∏i P(w_i | w_{i−k} … w_{i−1})$ - $P(w_i | w_1 w_2 … w_{i−1}) ≈ P(w_i | w_{i−k} … w_{i−1})$ - Unigram model: $P(w_1 w_2 … w_n ) ≈ ∏i P(w_i)$ - Bigram model: $P(w_i | w_1 w_2 … w{i−1}) ≈ P(w_i | w_{i−1})$ -->

We can extend to trigrams, 4-grams, 5-grams, and N-grams.

In general, this is an insufficient model of language because language has long-distance dependencies. In practice, however, trigram and 4-gram models work well enough for many applications.

<!--- ### Estimating bigram probabilities: The Maximum Likelihood Estimate (MLE): of all the times we saw the word wi-1, how many times it was followed by the word wi $P(w_i | w_{i−1}) = count(w_{i−1}, w_i) / count(w_{i−1})$ Practical Issue: We do everything in log space to avoid underflow $log(p1 × p2 × p3 × p4 ) = log p1 + log p2 + log p3 + log p4$ -->

Building Statistical Language Models:

Toolkits

  • SRILM is a toolkit for building and applying statistical language models, primarily for use in speech recognition, statistical tagging and segmentation, and machine translation. It has been under development in the SRI Speech Technology and Research Laboratory since 1995.
  • KenLM is a fast and scalable toolkit that builds and queries language models.

N-gram Models

Google's N-gram Models Belong to You: Google Research has been using word n-gram models for a variety of R&D projects. Google N-Gram processed 1,024,908,267,229 words of running text and published the counts for all 1,176,470,663 five-word sequences that appear at least 40 times.

The counts for the text released through the Linguistics Data Consortium (LDC) are as follows:

File sizes: approx. 24 GB compressed (gzip'ed) text files

Number of tokens:    1,024,908,267,229
Number of sentences:    95,119,665,584
Number of unigrams:         13,588,391
Number of bigrams:         314,843,401
Number of trigrams:        977,069,902
Number of fourgrams:     1,313,818,354
Number of fivegrams:     1,176,470,663

The following is an example of the 4-gram data in this corpus:

serve as the incoming 92
serve as the incubator 99
serve as the independent 794
serve as the index 223
serve as the indication 72
serve as the indicator 120
serve as the indicators 45
serve as the indispensable 111
serve as the indispensible 40

For example, the sequence of the four words "serve as the indication" has been seen in the corpus 72 times.

<!--- Try some examples of your own using [Google Books Ngram Viewer](https://books.google.com/ngrams/) and see the frequency of likely and unlikely N-grams. ![ngramviewer.png](images/ngramviewer.png) -->

Limitations of Statistical Language Models

Sometimes we do not have enough data to estimate these probabilities reliably. Increasing n makes the sparsity problem worse; typically we cannot use n larger than 5.

  • Sparsity problem 1: what if count(students opened their w) = 0? Smoothing solution: add a small 𝛿 to the count for every w in the vocabulary (see the sketch after this list).
  • Sparsity problem 2: what if count(students opened their) = 0? Backoff solution: condition on (opened their) instead.
  • Storage issue: we need to store the counts for all n-grams seen in the corpus; increasing n or the corpus size increases the storage required.

Neural Language Models (NLM)

An NLM usually (but not always) uses an RNN to learn sequences of words (sentences, paragraphs, etc.) and hence can predict the next word.

Advantages:

  • Can process variable-length input, since the computation at step t can use information from many steps back (e.g., an RNN)
  • No sparsity problem (it can handle any n-gram not seen in the training data)
  • Model size does not increase for longer input: the same weights ($W_h$, $W_e$) are applied at every timestep, so we only need to store the weight matrices and the vocabulary word vectors

Figure (nlm01.png): an RNN-based neural language model.

As depicted, at each step we obtain a probability distribution over the vocabulary for the next word.

Training an NLM:

  1. Take a large corpus of text (a sequence of words, such as Wikipedia).
  2. Feed it into the NLM (as batches of sentences) and compute the output distribution for every step (i.e., predict the probability distribution of the next word given the words so far).
  3. The loss at each step t is the cross-entropy between the predicted probability distribution and the true next word (one-hot); see the sketch below.

Example of long sequence learning:

  • The writer of the books (is or are)?
  • Correct answer: The writer of the books is planning a sequel
  • Syntactic recency: The writer of the books is (correct)
  • Sequential recency: The writer of the books are (incorrect)

Disadvantages:

  • Recurrent computation is slow (sequential, one step at a time)
  • In practice, for long sequences, it is difficult to access information from many steps back

Conditional language model

A LM can be used to generate text conditioned on an input (speech, image (OCR), text, etc.) in applications such as speech recognition, machine translation, and summarization.

Figure (clm.png): a conditional language model generating output text conditioned on an input.

<!-- - to do [beam search](https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/slides/cs224n-2019-lecture08-nmt.pdf) --> <!-- - _Greedy decoding_: take the most probable word on each step. Has no way to undo decisions. - _Beam search decoding_: On each step of the decoder, keep track of the k most probable partial _hypotheses_ outputs (eg: translations) where k is the beam size (in practice around 5 to 10), then Backtrack to obtain the full hypothesis. Decoding: stopping criterion: - _Greedy decoding_: Usually we decode until the model produces a _END_ token. - _Beam search decoding_: different hypotheses may produce _END_ tokens on different timesteps, When a hypothesis produces _END_, that hypothesis is complete, Place it aside and continue exploring other hypotheses via beam search. Usually, we continue beam search until: 1. We reach timestep T (where T is some pre-defined cutoff), or 2. We have at least n completed hypotheses (where n is pre-defined cutoff) After we have our list of completed hypotheses, we select the top one with the highest (length normalized) score. -->

Evaluation: How good is our model?

Does our language model prefer good (likely) sentences to bad ones?

Extrinsic evaluation:

  1. To compare models A and B, put each model in a downstream task (spelling corrector, speech recognizer, machine translation system)
  2. Run the task and compare the accuracy of A and B
  3. This is the best evaluation, but it is often impractical and time-consuming!

Intrinsic evaluation:

  • Intuition: The best language model is one that best predicts an unseen test set (assigns high probability to sentences).
  • Perplexity is the standard evaluation metric for Language Models.
  • Perplexity is defined as the inverse probability of a text, according to the Language Model.
  • A good language model should give a lower Perplexity for a test text. Specifically, a lower perplexity for a given text means that text has a high probability in the eyes of that Language Model.

Formally, perplexity is the inverse probability of the test set, normalized by the number of words:

$PP(W) = P(w_1, w_2, \ldots, w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1, w_2, \ldots, w_N)}}$

Lower perplexity = Better model

Perplexity is related to the branching factor: on average, how many words could come next.


Transformer-based Language models

Instead of RNNs, let's use attention, and let's use large pre-trained models.

  • What is the problem? One of the biggest challenges in natural language processing (NLP) is the shortage of training data for many distinct tasks. However, modern deep learning-based NLP models improve when trained on millions, or billions, of annotated training examples.

  • Pre-training is the solution: To help close this gap, a variety of techniques have been developed for training general-purpose language representation models using the enormous amount of unannotated text. The pre-trained model can then be fine-tuned on small data for different tasks like question answering and sentiment analysis, resulting in substantial accuracy improvements compared to training on these datasets from scratch.

The Transformer architecture was proposed in the paper Attention Is All You Need for the Neural Machine Translation (NMT) task. It consists of:

  • Encoder: Network that encodes the input sequence.
  • Decoder: Network that generates the output sequences conditioned on the input.

As mentioned in the paper:

"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"

The main idea of attention can be summarized as mentioned in OpenAI's article:

"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."

Based on this architecture (the vanilla Transformers!), encoder or decoder components can be used alone to enable massive pre-trained generic models that can be fine-tuned for downstream tasks such as text classification, translation, summarization, question answering, etc. For Example:

  • "Pre-training of Deep Bidirectional Transformers for Language Understanding" BERT is mainly based on the encoder architecture trained on massive text datasets to predict randomly masked words and "is-next sentence" classification tasks.
  • GPT, on the other hand, is an auto-regressive generative model that is mainly based on the decoder architecture, trained on massive text datasets to predict the next word (unlike BERT, GPT can generate sequences).

These models, BERT and GPT for instance, can be considered NLP's ImageNet.

Figure (bertvsgpt.png): pre-training architectures of BERT, OpenAI GPT, and ELMo.

As shown, BERT is deeply bidirectional, OpenAI GPT is unidirectional, and ELMo is shallowly bidirectional.

Pre-trained representations can be:

  • Context-free: models such as word2vec or GloVe generate a single, fixed embedding (vector) for each word in the vocabulary, independent of the context in which the word appears at test time.
  • Contextual: generates a representation of each word based on the other words in the sentence.

Contextual Language models can be:

  • Causal language model (CLM): predict the next token based on the previous ones (GPT).
  • Masked language model (MLM): predict the masked token based on the surrounding contextual tokens (BERT); see the sketch after this list.
<!-- #### To do - Code Bert https://colab.research.google.com/drive/17sJR6JwoQ7Trr5WsUUIpHLZBElf8WrVq?usp=sharing#scrollTo=-u2Feyk5Gg7o - Code GPT - Code Falcon - Code GPT4ALL - Code CodeTF - Chat with my docs - etc. -->

💥 Practical LLMs

In this part, we are going to use different large language models.

🚀 Hello GPT2

<a target="_blank" href="https://colab.research.google.com/drive/1eBcoHjJ2S4G_64sBvYS8G8B-1WSRLQAF?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

GPT2 (a successor to GPT) is a model pre-trained on English text with a causal language modeling (CLM) objective, trained simply to predict the next word in a sentence.
