A comprehensive guide to the principles and practical applications of large language models

This project introduces the basic concepts, application scenarios, and technical evolution of large language models (LLMs). It covers statistical language models, neural network language models, and Transformer-based pre-trained models such as GPT and BERT. It systematically explains the core principles of LLMs and explores practical techniques such as model evaluation, text generation, and prompt engineering, and it also showcases innovative applications of LLMs in areas such as computer vision, combining theory and practice to give readers a thorough guide to LLM technology.


Large Language Models (llms)

Figure: lms.png (Source: A Survey of Large Language Models)

Content

  • What is a language model?
  • Applications of language models
  • Statistical Language Modeling
  • Neural Language Models (NLM)
  • Conditional language model
  • Evaluation: How good is our model?
  • Transformer-based Language models
  • Practical LLMs: GPT, BERT, Falcon, Llama, CodeT5
  • How to generate text using different decoding methods
  • Prompt Engineering
  • Fine-tuning LLMs
  • Retrieval Augmented Generation (RAG)
  • Ask almost everything (txt, pdf, video, etc.)
  • Evaluating LLM-based systems
  • AI Agents
  • LLMs for Computer vision (TBD)
  • Further readings

Introduction: What is a language model?

Simple definition: Language Modeling is the task of predicting what word comes next.

"The dog is playing in the ..."

  • park
  • woods
  • snow
  • office
  • university
  • Neural network
  • ?

The main purpose of Language Models is to assign a probability to a sentence, to distinguish between the more likely and the less likely sentences.

Applications of language models:

  1. Machine Translation: P(high winds tonight) > P(large winds tonight)
  2. Spelling correction: P(about fifteen minutes from) > P(about fifteen minuets from)
  3. Speech Recognition: P(I saw a van) > P(eyes awe of an)
  4. Authorship identification: who wrote some sample text
  5. Summarization, question answering, dialogue bots, etc.

For Speech Recognition, we use not only the acoustics model (the speech signal), but also a language model. Similarly, for Optical Character Recognition (OCR), we use both a vision model and a language model. Language models are very important for such recognition systems.

Sometimes, you hear or read a sentence that is not clear, but using your language model, you can still recognize it with high accuracy despite the noisy speech/vision input.

The language model computes either of:

  • The probability of an upcoming word: $P(w_5 | w_1, w_2, w_3, w_4)$
  • The probability of a sentence or sequence of words (according to the Language Model): $P(w_1, w_2, w_3, ..., w_n)$

Language Modeling is a subcomponent of many NLP tasks, especially those involving generating text or estimating the probability of text.

The Chain Rule: $P(x_1, x_2, x_3, …, x_n) = P(x_1)P(x_2|x_1)P(x_3|x_1,x_2)…P(x_n|x_1,…,x_{n-1})$

$P(The, water, is, so, clear) = P(The) × P(water|The) × P(is|The, water) × P(so|The, water, is) × P(clear | The, water, is, so)$

What just happened? The Chain Rule is applied to compute the joint probability of words in a sentence.


Statistical Language Modeling:

n-gram Language Models

Using a large amount of text (a corpus such as Wikipedia), we collect statistics about how frequently different words occur, and use these to predict the next word. For example, the probability that a word w comes after the three words students opened their can be estimated as follows:

  • P(w | students opened their) = count(students opened their w) / count(students opened their)

The above example is a 4-gram model, and we may get:

  • P(books | students opened their) = 0.4
  • P(cars | students opened their) = 0.05
  • P(... | students opened their) = ...

We can conclude that the word “books” is more probable than “cars” in this context.

Note that we ignored any context before "students opened their" (a Markov-style simplifying assumption).

Accordingly, arbitrary text can be generated from a language model given starting word(s), by sampling from the output probability distribution of the next word, and so on.

We can train an LM on any kind of text, then generate text in that style (Harry Potter, etc.).
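The following is a minimal Python sketch (not part of the original notebooks) that counts 4-grams in a toy corpus and samples the next word from the estimated distribution; the corpus and function names are illustrative only:

```python
import random
from collections import Counter, defaultdict

def build_ngram_counts(tokens, n=4):
    """Count how often each word follows each (n-1)-word context."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        counts[context][tokens[i + n - 1]] += 1
    return counts

def next_word_probs(counts, context):
    """P(w | context) = count(context + w) / count(context)."""
    followers = counts[tuple(context)]
    total = sum(followers.values())
    return {w: c / total for w, c in followers.items()}

def sample_next(counts, context):
    """Sample the next word from the estimated distribution."""
    probs = next_word_probs(counts, context)
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

# Toy corpus; in practice this would be a large corpus such as Wikipedia.
corpus = "students opened their books . students opened their laptops .".split()
counts = build_ngram_counts(corpus, n=4)
print(next_word_probs(counts, ["students", "opened", "their"]))
print(sample_next(counts, ["students", "opened", "their"]))
```

Sampling the next word repeatedly, each time appending it to the context, is exactly how arbitrary text is generated from an n-gram model.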

<!-- ### How to estimate these probabilities? Amusing we have a large text corpus (data set like Wikipedia), we can count and divide as follows: - $P(clear |The, water, is, so) = Count (The, water, is, so, clear) / Count (The, water, is, so)$ --> <!-- Sparsity: Sometimes we do not have enough data to estimate the following: - $P(clear |The, water, is, so) = Count (The, water, is, so, clear) / Count (The, water, is, so)$ Markov Assumption (Simplifying assumption): - $P(clear |The, water, is, so) ≈ P(clear | so)$ - Or $P(clear |The, water, is, so) ≈ P(clear | is, so)$ Formally: - $P(w_1 w_2 … w_n ) ≈ ∏i P(w_i | w_{i−k} … w_{i−1})$ - $P(w_i | w_1 w_2 … w_{i−1}) ≈ P(w_i | w_{i−k} … w_{i−1})$ - Unigram model: $P(w_1 w_2 … w_n ) ≈ ∏i P(w_i)$ - Bigram model: $P(w_i | w_1 w_2 … w{i−1}) ≈ P(w_i | w_{i−1})$ -->

We can extend to trigrams, 4-grams, 5-grams, and N-grams.

In general, this is an insufficient model of language because language has long-distance dependencies. However, in practice, trigram and 4-gram models work well for most applications.

<!--- ### Estimating bigram probabilities: The Maximum Likelihood Estimate (MLE): of all the times we saw the word wi-1, how many times it was followed by the word wi $P(w_i | w_{i−1}) = count(w_{i−1}, w_i) / count(w_{i−1})$ Practical Issue: We do everything in log space to avoid underflow $log(p1 × p2 × p3 × p4 ) = log p1 + log p2 + log p3 + log p4$ -->

Building Statistical Language Models:

Toolkits

  • SRILM is a toolkit for building and applying statistical language models, primarily for use in speech recognition, statistical tagging and segmentation, and machine translation. It has been under development in the SRI Speech Technology and Research Laboratory since 1995.
  • KenLM is a fast and scalable toolkit that builds and queries language models.

N-gram Models

Google's N-gram Models Belong to You: Google Research has been using word n-gram models for a variety of R&D projects. Google N-Gram processed 1,024,908,267,229 words of running text and published the counts for all 1,176,470,663 five-word sequences that appear at least 40 times.

The counts of the text, distributed by the Linguistic Data Consortium (LDC), are as follows:

File sizes: approx. 24 GB compressed (gzip'ed) text files

Number of tokens:    1,024,908,267,229
Number of sentences:    95,119,665,584
Number of unigrams:         13,588,391
Number of bigrams:         314,843,401
Number of trigrams:        977,069,902
Number of fourgrams:     1,313,818,354
Number of fivegrams:     1,176,470,663

The following is an example of the 4-gram data in this corpus:

serve as the incoming 92
serve as the incubator 99
serve as the independent 794
serve as the index 223
serve as the indication 72
serve as the indicator 120
serve as the indicators 45
serve as the indispensable 111
serve as the indispensible 40

For example, the sequence of the four words "serve as the indication" has been seen in the corpus 72 times.

<!--- Try some examples of your own using [Google Books Ngram Viewer](https://books.google.com/ngrams/) and see the frequency of likely and unlikely N-grams. ![ngramviewer.png](images/ngramviewer.png) -->

Limitations of Statistical Language models

Sometimes we do not have enough data to estimate these probabilities. Increasing n makes the sparsity problem worse; typically we cannot use n larger than 5.

  • Sparsity problem 1: count(students opened their w) = 0? Smoothing solution: add a small 𝛿 to the count for every w in the vocabulary (see the sketch after this list).
  • Sparsity problem 2: count(students opened their) = 0? Backoff solution: condition on (opened their) instead.
  • Storage issue: Need to store the count for all n-grams you saw in the corpus. Increasing n or increasing corpus increases storage size.
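As a rough illustration of add-𝛿 smoothing (with made-up counts and an assumed 𝛿 value, not numbers from any toolkit):

```python
from collections import Counter

def add_delta_prob(counts, context, word, vocab, delta=0.01):
    """Add-delta smoothed estimate of P(word | context): every word in the
    vocabulary gets a small pseudo-count delta, so unseen continuations
    receive a non-zero probability."""
    followers = counts.get(tuple(context), Counter())
    total = sum(followers.values())
    return (followers[word] + delta) / (total + delta * len(vocab))

# Toy counts for the context "students opened their" (illustrative numbers).
counts = {("students", "opened", "their"): Counter({"books": 8, "laptops": 2})}
vocab = {"students", "opened", "their", "books", "laptops", "cars"}

# A seen continuation keeps most of its probability mass ...
print(add_delta_prob(counts, ["students", "opened", "their"], "books", vocab))
# ... while the unseen continuation "cars" now gets a small non-zero probability.
print(add_delta_prob(counts, ["students", "opened", "their"], "cars", vocab))
```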

Neural Language Models (NLM)

An NLM usually (but not always) uses an RNN to learn sequences of words (sentences, paragraphs, etc.) and hence can predict the next word.

Advantages:

  • Can process variable-length input, as the computations for step t use information from many steps back (e.g., an RNN)
  • No sparsity problem (can handle any n-gram not seen in the training data)
  • Model size doesn’t increase for longer input ($W_h$, $W_e$): the same weights are applied at every timestep, and only the vocabulary word vectors need to be stored

Figure: nlm01.png

As depicted, at each step we have a probability distribution of the next word over the vocabulary.

Training an NLM:

  1. Use a big corpus of text (a sequence of words, such as Wikipedia)
  2. Feed batches of sentences into the NLM and compute the output distribution for every step (predict the probability distribution of every word, given the words so far)
  3. The loss on each step t is the cross-entropy between the predicted probability distribution and the true next word (one-hot); see the sketch below
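A minimal PyTorch sketch of this training step, assuming a toy LSTM-based language model (the model, vocabulary size, and hyperparameters are illustrative, not the repo's code):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

class RNNLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.out(hidden)  # logits over the vocabulary at every step

model = RNNLanguageModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A batch of token-id sequences; the targets are the inputs shifted by one step.
batch = torch.randint(0, vocab_size, (8, 20))
inputs, targets = batch[:, :-1], batch[:, 1:]

optimizer.zero_grad()
logits = model(inputs)                                   # (batch, steps, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print(float(loss))
```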

Example of long sequence learning:

  • The writer of the books (is or are)?
  • Correct answer: The writer of the books is planning a sequel
  • Syntactic recency: The writer of the books is (correct)
  • Sequential recency: The writer of the books are (incorrect)

Disadvantages:

  • Recurrent computation is slow (sequential, one step at a time)
  • In practice, for long sequences, it is difficult to access information from many steps back

Conditional language model

An LM can be used to generate text conditioned on an input (speech, image (OCR), text, etc.) in different applications such as speech recognition, machine translation, summarization, etc.

Figure: clm.png
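As a quick illustration of generating text conditioned on an input document, the snippet below uses a Hugging Face summarization pipeline; the checkpoint sshleifer/distilbart-cnn-12-6 is an arbitrary choice for this sketch, not one prescribed by this project:

```python
from transformers import pipeline

# Summarization is a conditional LM task: generate text conditioned on the input.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "Language models assign probabilities to sequences of words. "
    "Conditional language models generate text conditioned on an input "
    "such as speech, an image, or another text."
)
print(summarizer(document, max_length=30, min_length=10)[0]["summary_text"])
```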

<!-- - to do [beam search](https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/slides/cs224n-2019-lecture08-nmt.pdf) --> <!-- - _Greedy decoding_: take the most probable word on each step. Has no way to undo decisions. - _Beam search decoding_: On each step of the decoder, keep track of the k most probable partial _hypotheses_ outputs (eg: translations) where k is the beam size (in practice around 5 to 10), then Backtrack to obtain the full hypothesis. Decoding: stopping criterion: - _Greedy decoding_: Usually we decode until the model produces a _END_ token. - _Beam search decoding_: different hypotheses may produce _END_ tokens on different timesteps, When a hypothesis produces _END_, that hypothesis is complete, Place it aside and continue exploring other hypotheses via beam search. Usually, we continue beam search until: 1. We reach timestep T (where T is some pre-defined cutoff), or 2. We have at least n completed hypotheses (where n is pre-defined cutoff) After we have our list of completed hypotheses, we select the top one with the highest (length normalized) score. -->

Evaluation: How good is our model?

Does our language model prefer good (likely) sentences to bad ones?

Extrinsic evaluation:

  1. For comparing models A and B, put each model in a task (spelling corrector, speech recognizer, machine translation)
  2. Run the task and compare the accuracy for A and for B
  3. This is the best evaluation, but it is not practical and is time-consuming!

Intrinsic evaluation:

  • Intuition: The best language model is one that best predicts an unseen test set (assigns high probability to sentences).
  • Perplexity is the standard evaluation metric for Language Models.
  • Perplexity is defined as the inverse probability of a text, according to the Language Model.
  • A good language model should give a lower Perplexity for a test text. Specifically, a lower perplexity for a given text means that text has a high probability in the eyes of that Language Model.

The standard evaluation metric for Language Models is perplexity: the inverse probability of the test set, normalized by the number of words.

Figure: preplexity02.png

Lower perplexity = Better model

Perplexity is related to the branching factor: on average, how many words could occur next.
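Concretely, for a test set $W = w_1 w_2 … w_N$, perplexity is $PP(W) = P(w_1 w_2 … w_N)^{-1/N}$, usually computed in log space to avoid underflow. A small sketch (with made-up per-token probabilities) is shown below:

```python
import math

def perplexity(token_probs):
    """Perplexity = inverse probability of the text, normalized by its length.

    Computed in log space: PP = exp(-1/N * sum(log P(w_i | history))).
    """
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# Per-token probabilities assigned by some language model (illustrative values).
print(perplexity([0.2, 0.1, 0.4, 0.25]))   # worse model -> higher perplexity
print(perplexity([0.6, 0.5, 0.7, 0.55]))   # better model -> lower perplexity
```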


Transformer-based Language models

Instead of RNNs, let's use attention. Let's use large pre-trained models.

  • What is the problem? One of the biggest challenges in natural language processing (NLP) is the shortage of training data for many distinct tasks. However, modern deep learning-based NLP models improve when trained on millions, or billions, of annotated training examples.

  • Pre-training is the solution: To help close this gap, a variety of techniques have been developed for training general-purpose language representation models using the enormous amount of unannotated text. The pre-trained model can then be fine-tuned on small data for different tasks like question answering and sentiment analysis, resulting in substantial accuracy improvements compared to training on these datasets from scratch.

The Transformer architecture was proposed in the paper Attention Is All You Need for the Neural Machine Translation (NMT) task. It consists of:

  • Encoder: Network that encodes the input sequence.
  • Decoder: Network that generates the output sequences conditioned on the input.

As mentioned in the paper:

"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"

The main idea of attention can be summarized as mentioned in OpenAI's article:

"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."

Based on this architecture (the vanilla Transformer!), the encoder or decoder components can be used alone to enable massive pre-trained generic models that can be fine-tuned for downstream tasks such as text classification, translation, summarization, question answering, etc. For example:

  • "Pre-training of Deep Bidirectional Transformers for Language Understanding" BERT is mainly based on the encoder architecture trained on massive text datasets to predict randomly masked words and "is-next sentence" classification tasks.
  • GPT, on the other hand, is an auto-regressive generative model that is mainly based on the decoder architecture, trained on massive text datasets to predict the next word (unlike BERT, GPT can generate sequences).

These models, BERT and GPT for instance, can be considered NLP's ImageNet.

Figure: bertvsgpt.png

As shown, BERT is deeply bidirectional, OpenAI GPT is unidirectional, and ELMo is shallowly bidirectional.

Pre-trained representations can be:

  • Context-free: such as word2vec or GloVe, which generate a single, fixed word embedding (vector) representation for each word in the vocabulary (independent of the context of that word at test time)
  • Contextual: generates a representation of each word based on the other words in the sentence.

Contextual Language models can be:

  • Causal language model (CLM): predict the next token based on the previous ones (GPT)
  • Masked language model (MLM): Predict the masked token based on the surrounding contextual tokens (BERT)
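Both objectives can be tried directly with Hugging Face pipelines; the checkpoints below (bert-base-uncased and gpt2) are common public models used here only for illustration:

```python
from transformers import pipeline

# Masked language model (BERT): predict the masked token from both sides of the context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The dog is playing in the [MASK].")[0]["token_str"])

# Causal language model (GPT-2): predict the next tokens from the left context only.
generator = pipeline("text-generation", model="gpt2")
print(generator("The dog is playing in the", max_new_tokens=5)[0]["generated_text"])
```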
<!-- #### To do - Code Bert https://colab.research.google.com/drive/17sJR6JwoQ7Trr5WsUUIpHLZBElf8WrVq?usp=sharing#scrollTo=-u2Feyk5Gg7o - Code GPT - Code Falcon - Code GPT4ALL - Code CodeTF - Chat with my docs - etc. -->

💥 Practical LLMs

In this part, we are going to use different large language models.

🚀 Hello GPT2

<a target="_blank" href="https://colab.research.google.com/drive/1eBcoHjJ2S4G_64sBvYS8G8B-1WSRLQAF?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

GPT2 (a successor to GPT) is a model pre-trained on English text with a causal language modeling (CLM) objective, i.e., trained simply to predict the next word.
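A minimal local sketch of GPT2 generation with the transformers library (an illustrative snippet, not necessarily the notebook's exact code):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the pre-trained GPT-2 tokenizer and model.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Greedy decoding: repeatedly predict the most probable next token.
inputs = tokenizer("The dog is playing in the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```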
