Polish NLP Resources and Pre-trained Models
This project gathers a variety of Polish natural language processing resources, including word embeddings, language models and machine translation models. It provides Word2Vec, FastText and GloVe word vectors, as well as contextual embedding models such as ELMo and RoBERTa, along with compressed word vectors and Wikipedia2Vec embeddings. Covering everything from basic word vectors to pre-trained models, it supports Polish NLP research and applications.
This repository contains pre-trained models and language resources for Natural Language Processing in Polish created during my research. Some of the models are also available on Huggingface Hub.
If you'd like to use any of these resources in your research, please cite:
@Misc{polish-nlp-resources,
  author = {S{\l}awomir Dadas},
  title = {A repository of Polish {NLP} resources},
  howpublished = {Github},
  year = {2019},
  url = {https://github.com/sdadas/polish-nlp-resources/}
}
The following section includes pre-trained word embeddings for Polish. Each model was trained on a corpus consisting of a Polish Wikipedia dump, Polish books and articles, 1.5 billion tokens in total.
Word2Vec trained with Gensim. 100 dimensions, negative sampling. The vocabulary contains lemmatized words with 3 or more occurrences in the corpus, as well as a set of pre-defined punctuation symbols, all numbers from 0 to 10,000, and Polish first and last names. The archive contains the embeddings in Gensim binary format. Example of usage:
from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load("word2vec_100_3_polish.bin")
    print(word2vec.similar_by_word("bierut"))
    # [('cyrankiewicz', 0.818274736404419), ('gomułka', 0.7967918515205383), ('raczkiewicz', 0.7757788896560669),
    #  ('jaruzelski', 0.7737460732460022), ('pużak', 0.7667238712310791)]
FastText trained with Gensim. Vocabulary and dimensionality are identical to the Word2Vec model. The archive contains the embeddings in Gensim binary format. Example of usage:
from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load("fasttext_100_3_polish.bin")
    print(word2vec.similar_by_word("bierut"))
    # [('bieruty', 0.9290274381637573), ('gierut', 0.8921363353729248), ('bieruta', 0.8906412124633789),
    #  ('bierutow', 0.8795544505119324), ('bierutowsko', 0.839280366897583)]
Global Vectors for Word Representation (GloVe) trained using the reference implementation from Stanford NLP. 100 dimensions, contains lemmatized words with 3 or more occurrences in the corpus. Example of usage:
from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load_word2vec_format("glove_100_3_polish.txt")
    print(word2vec.similar_by_word("bierut"))
    # [('cyrankiewicz', 0.8335597515106201), ('gomułka', 0.7793121337890625), ('bieruta', 0.7118682861328125),
    #  ('jaruzelski', 0.6743760108947754), ('minc', 0.6692837476730347)]
Pre-trained vectors using the same vocabulary as above but with higher dimensionality. These vectors are more suitable for representing larger chunks of text such as sentences or documents using simple word aggregation methods (averaging, max pooling etc.), as more semantic information is preserved this way; a minimal averaging sketch follows the download links below.
GloVe - 300d: Part 1 (GitHub), 500d: Part 1 (GitHub) Part 2 (GitHub), 800d: Part 1 (GitHub) Part 2 (GitHub) Part 3 (GitHub)
Word2Vec - 300d (OneDrive), 500d (OneDrive), 800d (OneDrive)
FastText - 300d (OneDrive), 500d (OneDrive), 800d (OneDrive)
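As a sketch of the aggregation approach mentioned above, the snippet below averages the Word2Vec vectors of in-vocabulary (lemmatized) words to obtain a single sentence vector. The file name of the 300-dimensional model and the whitespace tokenization are assumptions for illustration only:

import numpy as np
from gensim.models import KeyedVectors

def sentence_vector(words, embeddings):
    # Collect vectors for words present in the vocabulary and average them.
    vectors = [embeddings[word] for word in words if word in embeddings]
    if not vectors:
        return np.zeros(embeddings.vector_size)
    return np.mean(vectors, axis=0)

if __name__ == '__main__':
    # Assumed file name of the 300-dimensional Word2Vec model downloaded above.
    word2vec = KeyedVectors.load("word2vec_300_3_polish.bin")
    print(sentence_vector("bolesław bierut być polski polityk".split(), word2vec))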
This is a compressed version of the Word2Vec embedding model described above. For compression, we used the method described in Compressing Word Embeddings via Deep Compositional Code Learning by Shu and Nakayama. Compressed embeddings are suited for deployment on storage-poor devices such as mobile phones. The model weighs 38 MB, only 4.4% of the size of the original Word2Vec embeddings. Although the authors of the article claim that compressing with their method does not hurt model performance, we noticed a slight but acceptable drop in accuracy when using the compressed version of the embeddings. Sample decoder class with usage:
import gzip
from typing import Dict, Callable

import numpy as np


class CompressedEmbedding(object):

    def __init__(self, vocab_path: str, embedding_path: str, to_lowercase: bool=True):
        self.vocab_path: str = vocab_path
        self.embedding_path: str = embedding_path
        self.to_lower: bool = to_lowercase
        self.vocab: Dict[str, int] = self.__load_vocab(vocab_path)
        # The archive stores discrete codes per word and a shared codebook of basis vectors.
        embedding = np.load(embedding_path)
        self.codes: np.ndarray = embedding[embedding.files[0]]
        self.codebook: np.ndarray = embedding[embedding.files[1]]
        self.m = self.codes.shape[1]
        self.k = int(self.codebook.shape[0] / self.m)
        self.dim: int = self.codebook.shape[1]

    def __load_vocab(self, vocab_path: str) -> Dict[str, int]:
        open_func: Callable = gzip.open if vocab_path.endswith(".gz") else open
        with open_func(vocab_path, "rt", encoding="utf-8") as input_file:
            return {line.strip(): idx for idx, line in enumerate(input_file)}

    def vocab_vector(self, word: str):
        if word == "<pad>":
            return np.zeros(self.dim)
        val: str = word.lower() if self.to_lower else word
        index: int = self.vocab.get(val, self.vocab["<unk>"])
        codes = self.codes[index]
        # Reconstruct the word vector as the sum of the codebook vectors selected by its codes.
        code_indices = np.array([idx * self.k + offset for idx, offset in enumerate(np.nditer(codes))])
        return np.sum(self.codebook[code_indices], axis=0)


if __name__ == '__main__':
    word2vec = CompressedEmbedding("word2vec_100_3.vocab.gz", "word2vec_100_3.compressed.npz")
    print(word2vec.vocab_vector("bierut"))
Wikipedia2Vec is a toolkit for learning joint representations of words and Wikipedia entities. We share Polish embeddings learned using a modified version of the library in which we added lemmatization and fixed some issues regarding parsing wiki dumps for languages other than English. Embedding models are available in sizes from 100 to 800 dimensions. A simple example:
from wikipedia2vec import Wikipedia2Vec

wiki2vec = Wikipedia2Vec.load("wiki2vec-plwiki-100.bin")
print(wiki2vec.most_similar(wiki2vec.get_entity("Bolesław Bierut")))
# (<Entity Bolesław Bierut>, 1.0), (<Word bierut>, 0.75790733), (<Word gomułka>, 0.7276504),
# (<Entity Krajowa Rada Narodowa>, 0.7081445), (<Entity Władysław Gomułka>, 0.7043667) [...]
Download embeddings: 100d, 300d, 500d, 800d.
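Because words and Wikipedia entities share the same vector space, their vectors can be compared directly. A minimal sketch using the standard Wikipedia2Vec accessors (get_word_vector and get_entity_vector), shown here only as an illustration:

import numpy as np
from wikipedia2vec import Wikipedia2Vec

if __name__ == '__main__':
    wiki2vec = Wikipedia2Vec.load("wiki2vec-plwiki-100.bin")
    # Word vectors and entity vectors live in the same space.
    word_vec = wiki2vec.get_word_vector("bierut")
    entity_vec = wiki2vec.get_entity_vector("Bolesław Bierut")
    # Cosine similarity between the word and the entity.
    cosine = np.dot(word_vec, entity_vec) / (np.linalg.norm(word_vec) * np.linalg.norm(entity_vec))
    print(cosine)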
Embeddings from Language Models (ELMo) is a contextual embedding method presented in Deep contextualized word representations by Peters et al. Sample usage with PyTorch below; for more detailed instructions on integrating ELMo with your model, please refer to the official repositories github.com/allenai/bilm-tf (TensorFlow) and github.com/allenai/allennlp (PyTorch).
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder("options.json", "weights.hdf5")
print(elmo.embed_sentence(["Zażółcić", "gęślą", "jaźń"]))
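embed_sentence returns one vector per token for each of the three ELMo layers. A common way to obtain a single vector per token is to average the layers (or learn a weighted sum, as in the original paper). A minimal sketch, assuming the usual (num_layers, num_tokens, embedding_dim) output shape:

import numpy as np
from allennlp.commands.elmo import ElmoEmbedder

if __name__ == '__main__':
    elmo = ElmoEmbedder("options.json", "weights.hdf5")
    # Shape: (num_layers=3, num_tokens, embedding_dim)
    layers = elmo.embed_sentence(["Zażółcić", "gęślą", "jaźń"])
    # Average the three layers to get one contextual vector per token.
    token_vectors = np.mean(layers, axis=0)
    print(token_vectors.shape)  # (3, embedding_dim) for this three-token sentence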
A language model for Polish based on the popular Transformer architecture. We provide weights for the improved BERT language model introduced in RoBERTa: A Robustly Optimized BERT Pretraining Approach. Two RoBERTa models for Polish are available: a base and a large model. A summary of the pre-training parameters for each model is shown in the table below. We release two versions of each model: one in the Fairseq format and the other in the HuggingFace Transformers format; a minimal loading sketch follows the table. More information about the models can be found in a separate repository.
<table>
<thead>
<tr> <th>Model</th> <th>L / H / A*</th> <th>Batch size</th> <th>Update steps</th> <th>Corpus size</th> <th>Fairseq</th> <th>Transformers</th> </tr>
</thead>
<tr>
  <td>RoBERTa (base)</td> <td>12 / 768 / 12</td> <td>8k</td> <td>125k</td> <td>~20GB</td>
  <td><a href="https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_fairseq.zip">v0.9.0</a></td>
  <td><a href="https://github.com/sdadas/polish-roberta/releases/download/models-transformers-v3.4.0/roberta_base_transformers.zip">v3.4</a></td>
</tr>
<tr>
  <td>RoBERTa‑v2 (base)</td> <td>12 / 768 / 12</td> <td>8k</td> <td>400k</td> <td>~20GB</td>
  <td><a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_base_fairseq.zip">v0.10.1</a></td>
  <td><a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_base_transformers.zip">v4.4</a></td>
</tr>
<tr>
  <td>RoBERTa (large)</td> <td>24 / 1024 / 16</td> <td>30k</td> <td>50k</td> <td>~135GB</td>
  <td><a href="https://github.com/sdadas/polish-roberta/releases/download/models/roberta_large_fairseq.zip">v0.9.0</a></td>
  <td></td>
</tr>
</table>
* L / H / A: number of encoder layers / hidden size / number of attention heads.
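A minimal sketch of loading one of the Transformers releases, assuming the archive has been extracted to a local directory and is compatible with the Auto classes; the directory name below is an assumption, not part of the release:

import torch
from transformers import AutoTokenizer, AutoModel

if __name__ == '__main__':
    # Path to the extracted roberta_base_transformers archive (assumed name).
    model_dir = "roberta_base_transformers"
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModel.from_pretrained(model_dir)
    inputs = tokenizer("Zażółcić gęślą jaźń.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Contextual token embeddings from the last encoder layer.
    print(outputs.last_hidden_state.shape)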