Source: A Survey of Large Language Models
Simple definition: Language Modeling is the task of predicting what word comes next.
"The dog is playing in the ..."
The main purpose of language models is to assign a probability to a sentence, distinguishing more likely sentences from less likely ones.
For Speech Recognition, we use not only the acoustic model (the speech signal), but also a language model. Similarly, for Optical Character Recognition (OCR), we use both a vision model and a language model. Language models are very important for such recognition systems.
Sometimes you hear or read a sentence that is not clear, but using your language model you can still recognize it with high accuracy despite the noisy speech/vision input.
The language model computes either of: the probability of an entire sentence, $P(w_1, w_2, …, w_n)$, or the probability of an upcoming word, $P(w_n | w_1, …, w_{n-1})$.
Language Modeling is a subcomponent of many NLP tasks, especially those involving generating text or estimating the probability of text.
The Chain Rule: $P(x_1, x_2, x_3, …, x_n) = P(x_1)P(x_2|x_1)P(x_3|x_1,x_2)…P(x_n|x_1,…,x_{n-1})$
$P(The, water, is, so, clear) = P(The) × P(water|The) × P(is|The, water) × P(so|The, water, is) × P(clear | The, water, is, so)$
What just happened? The Chain Rule is applied to compute the joint probability of words in a sentence.
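To make this concrete, here is a tiny sketch of the chain-rule computation; the conditional probabilities below are invented for illustration, not estimated from any corpus:

```python
# Toy illustration of the chain rule: the joint probability of a sentence
# is the product of per-word conditional probabilities.
# NOTE: these probabilities are made up for illustration only.
conditionals = {
    "P(The)": 0.08,
    "P(water|The)": 0.02,
    "P(is|The, water)": 0.30,
    "P(so|The, water, is)": 0.05,
    "P(clear|The, water, is, so)": 0.10,
}

joint = 1.0
for name, p in conditionals.items():
    joint *= p  # multiply the factors in order

print(f"P(The, water, is, so, clear) = {joint:.2e}")
```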
Using a large amount of text (a corpus such as Wikipedia), we collect statistics about how frequent different words are, and use these to predict the next word. For example, the probability that a word $w$ comes after the three words "students opened their" can be estimated as follows:
$P(w | students\ opened\ their) = Count(students\ opened\ their\ w) / Count(students\ opened\ their)$
The above example is a 4-gram model. From such counts we may find that the word “books” is more probable than “cars” in this context.
Note that we ignored all the context before "students opened their".
Accordingly, arbitrary text can be generated from a language model given starting word(s), by sampling from the output probability distribution of the next word, and so on.
We can train an LM on any kind of text, then generate text in that style (Harry Potter, etc.).
### How to estimate these probabilities?

Assuming we have a large text corpus (a dataset such as Wikipedia), we can count and divide as follows:

- $P(clear | The, water, is, so) = Count(The, water, is, so, clear) / Count(The, water, is, so)$

Sparsity: sometimes we do not have enough data to estimate such a probability.

Markov Assumption (a simplifying assumption):

- $P(clear | The, water, is, so) ≈ P(clear | so)$
- Or $P(clear | The, water, is, so) ≈ P(clear | is, so)$

Formally:

- $P(w_1 w_2 … w_n) ≈ \prod_i P(w_i | w_{i-k} … w_{i-1})$
- $P(w_i | w_1 w_2 … w_{i-1}) ≈ P(w_i | w_{i-k} … w_{i-1})$
- Unigram model: $P(w_1 w_2 … w_n) ≈ \prod_i P(w_i)$
- Bigram model: $P(w_i | w_1 w_2 … w_{i-1}) ≈ P(w_i | w_{i-1})$

We can extend to trigrams, 4-grams, 5-grams, and N-grams.
In general, this is an insufficient model of language because language has long-distance dependencies. In practice, however, trigram and 4-gram models work well enough for many applications.
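As a minimal sketch of this count-and-sample approach, the following builds a toy trigram model from a tiny corpus and generates text from it; the corpus and variable names are placeholders, and a real model would be trained on far more text:

```python
import random
from collections import Counter, defaultdict

# Toy corpus; in practice this would be a large collection such as Wikipedia.
corpus = "the students opened their books . the students opened their laptops .".split()

n = 3  # trigram model: condition on the previous two words
counts = defaultdict(Counter)
for i in range(len(corpus) - n + 1):
    context, word = tuple(corpus[i:i + n - 1]), corpus[i + n - 1]
    counts[context][word] += 1

def next_word_distribution(context):
    """P(w | context) estimated as count(context, w) / count(context)."""
    c = counts[tuple(context)]
    total = sum(c.values())
    return {w: k / total for w, k in c.items()}

def generate(seed, length=8):
    """Generate text by repeatedly sampling the next word from the model."""
    words = list(seed)
    for _ in range(length):
        dist = next_word_distribution(words[-(n - 1):])
        if not dist:  # unseen context: stop (the sparsity problem)
            break
        ws, ps = zip(*dist.items())
        words.append(random.choices(ws, weights=ps)[0])
    return " ".join(words)

print(next_word_distribution(["opened", "their"]))  # e.g. {'books': 0.5, 'laptops': 0.5}
print(generate(["the", "students"]))
```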
### Estimating bigram probabilities

The Maximum Likelihood Estimate (MLE): of all the times we saw the word $w_{i-1}$, how many times was it followed by the word $w_i$?

- $P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1})$

Practical issue: we do everything in log space to avoid underflow:

- $log(p_1 × p_2 × p_3 × p_4) = log\ p_1 + log\ p_2 + log\ p_3 + log\ p_4$

Google's N-gram Models Belong to You: Google Research has been using word n-gram models for a variety of R&D projects. Google N-Gram processed 1,024,908,267,229 words of running text and published the counts for all 1,176,470,663 five-word sequences that appear at least 40 times.
The counts for this text, released through the Linguistic Data Consortium (LDC), are as follows:
File sizes: approx. 24 GB compressed (gzip'ed) text files
Number of tokens: 1,024,908,267,229
Number of sentences: 95,119,665,584
Number of unigrams: 13,588,391
Number of bigrams: 314,843,401
Number of trigrams: 977,069,902
Number of fourgrams: 1,313,818,354
Number of fivegrams: 1,176,470,663
The following is an example of the 4-gram data in this corpus:
serve as the incoming 92
serve as the incubator 99
serve as the independent 794
serve as the index 223
serve as the indication 72
serve as the indicator 120
serve as the indicators 45
serve as the indispensable 111
serve as the indispensible 40
For example, the sequence of the four words "serve as the indication" has been seen in the corpus 72 times.
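Given such counts, the next-word probability is simply a ratio of counts. The small sketch below uses only the nine 4-gram counts listed above, so the denominator covers just this excerpt rather than every continuation of "serve as the" in the full corpus:

```python
# 4-gram counts for the context "serve as the", taken from the excerpt above.
counts = {
    "incoming": 92, "incubator": 99, "independent": 794, "index": 223,
    "indication": 72, "indicator": 120, "indicators": 45,
    "indispensable": 111, "indispensible": 40,
}

total = sum(counts.values())  # count of "serve as the *" within this excerpt only
p_indication = counts["indication"] / total
print(f"P(indication | serve as the) ≈ {p_indication:.4f}")  # relative to the listed continuations
```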
Try some examples of your own using the [Google Books Ngram Viewer](https://books.google.com/ngrams/) and see the frequencies of likely and unlikely N-grams.

Sparsity: sometimes we do not have enough data to estimate these probabilities, and increasing n makes the sparsity problem worse. Typically we can't use n bigger than 5.
A Neural Language Model (NLM) usually (but not always) uses an RNN to learn sequences of words (sentences, paragraphs, etc.) and hence can predict the next word.
Advantages:

- The model can, in principle, use information from any number of previous words, not just a fixed window.
- The model size does not grow with the length of the context.
As depicted, at each step we have a probability distribution over the vocabulary for the next word.
Training an NLM: feed a large corpus to the network one word at a time; at each step, compare the predicted distribution for the next word with the actual next word (cross-entropy loss) and update the weights accordingly.
Example of long sequence learning:
Disadvantages:

- Recurrent computation is sequential, so training and inference are slow.
- In practice, it is hard to use information from many steps back (vanishing gradients).
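For concreteness, here is a minimal sketch of an RNN language model in PyTorch; the layer sizes, the LSTM choice, and the names are illustrative assumptions rather than the exact model in the figures:

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Minimal RNN language model: embed words, run an RNN, project to vocabulary logits."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):            # tokens: (batch, seq_len) word indices
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                # (batch, seq_len, vocab_size) logits per step

vocab_size = 10_000
model = RNNLanguageModel(vocab_size)
tokens = torch.randint(0, vocab_size, (2, 12))   # toy batch of word indices
logits = model(tokens[:, :-1])                   # predict the next word at each step
loss = nn.functional.cross_entropy(              # standard LM training loss
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
print(loss.item())
```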
A language model can be used to generate text conditioned on an input (speech, image (OCR), text, etc.) in different applications such as speech recognition, machine translation, and summarization.

Does our language model prefer good (likely) sentences to bad ones?
The standard evaluation metric for language models is perplexity. Perplexity is the inverse probability of the test set, normalized by the number of words:

$PP(W) = P(w_1 w_2 … w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 … w_N)}}$

Lower perplexity = Better model
Perplexity is related to the branching factor: on average, how many words could come next.
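A small sketch of the perplexity computation from the per-word probabilities a model assigns to a test sequence; the probabilities are made up, and the product is accumulated in log space to avoid underflow:

```python
import math

# Per-word probabilities assigned by a language model to a test sequence
# (invented numbers for illustration).
word_probs = [0.20, 0.05, 0.10, 0.30, 0.02]

# Perplexity = inverse probability of the test set, normalized by its length,
# computed in log space to avoid underflow on long sequences.
N = len(word_probs)
avg_log_prob = sum(math.log(p) for p in word_probs) / N
perplexity = math.exp(-avg_log_prob)
print(f"perplexity = {perplexity:.2f}")   # lower is better
```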
Instead of RNNs, let's use attention, and let's use large pre-trained models.
What is the problem? One of the biggest challenges in natural language processing (NLP) is the shortage of training data for many distinct tasks. However, modern deep learning-based NLP models improve when trained on millions, or billions, of annotated training examples.
Pre-training is the solution: to help close this gap, a variety of techniques have been developed for training general-purpose language representation models on enormous amounts of unannotated text. The pre-trained model can then be fine-tuned on small task-specific datasets, for tasks like question answering and sentiment analysis, resulting in substantial accuracy improvements compared to training on those datasets from scratch.
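As a quick illustration of how little code the fine-tuned result needs at inference time, the sketch below uses the Hugging Face transformers pipeline with its default sentiment-analysis checkpoint; the library choice and checkpoint are assumptions of this example, not something prescribed by the text:

```python
# Sketch: using a pre-trained, fine-tuned model via Hugging Face transformers.
# Assumes `pip install transformers` and that the default checkpoint can be downloaded.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # loads a default fine-tuned model
print(classifier("Pre-training plus fine-tuning works remarkably well."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```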
The Transformer architecture was proposed in the paper "Attention Is All You Need" for the Neural Machine Translation (NMT) task. It consists of an encoder and a decoder, both built from stacked attention and feed-forward layers.
As mentioned in the paper:
"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"
The main idea of attention can be summarized as stated in OpenAI's article:
"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."
Based on this architecture (the vanilla Transformer!), the encoder or decoder components can be used alone to enable massive pre-trained generic models that can be fine-tuned for downstream tasks such as text classification, translation, summarization, and question answering. For example, BERT pre-trains the encoder stack, while GPT pre-trains the decoder stack.
These models, BERT and GPT for instance, can be considered NLP's ImageNet.

As shown, BERT is deeply bidirectional, OpenAI GPT is unidirectional, and ELMo is shallowly bidirectional.
Pre-trained representations can be: context-free (such as word2vec or GloVe, which produce a single representation for each word regardless of its context) or contextual (where each word's representation depends on the other words in the sentence).
Contextual language models can be: unidirectional (conditioning only on the context to the left, as in GPT) or bidirectional (conditioning on both left and right context, as in BERT).
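As a quick illustration of the bidirectional case, a masked-language-model checkpoint can predict a word from both its left and right context; the pipeline call and the bert-base-uncased checkpoint are assumptions of this example:

```python
# Sketch: bidirectional (masked) language modeling with a BERT checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The dog is [MASK] in the garden.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))   # top predictions for the masked word
```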
In this part, we are going to use different large language models.

GPT-2 (a successor to GPT) is a model pre-trained on English text with a causal language modeling (CLM) objective, i.e., trained simply to predict the next word in a sequence.
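A minimal sketch of that next-word objective in action, loading GPT-2 through the transformers library and letting it continue a prompt (the sampling settings are arbitrary):

```python
# Sketch: next-word prediction / text generation with GPT-2 via transformers.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The dog is playing in the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, top_k=50,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```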

