Large language models (LLMs) are transformer-based neural network architectures pre-trained with a language-modeling objective as the loss function on extensive amounts of text data, often hundreds of billions or even trillions of words, and with a large number of model parameters,...
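The language-modeling objective mentioned above is, at its core, the average cross-entropy of the model's next-token predictions against the actual next tokens. A minimal sketch in plain Python (the toy logits and vocabulary here are illustrative assumptions, not from the chapter):

```python
import math

def next_token_loss(logits, target_ids):
    """Average cross-entropy over a sequence: at each position the model's
    logits over the vocabulary are scored against the true next token."""
    total = 0.0
    for step_logits, target in zip(logits, target_ids):
        # log-softmax of the target token, computed stably via the max trick
        m = max(step_logits)
        log_sum = math.log(sum(math.exp(x - m) for x in step_logits))
        log_prob = (step_logits[target] - m) - log_sum
        total += -log_prob
    return total / len(target_ids)

# Toy vocabulary of 4 tokens; two prediction steps.
logits = [[2.0, 0.5, 0.1, -1.0],   # model favors token 0
          [0.0, 0.0, 3.0, 0.0]]    # model favors token 2
loss = next_token_loss(logits, [0, 2])
```

Pre-training minimizes this quantity over the corpus; because each position supplies its own prediction target, no manual labeling is needed, which is what makes training on trillions of words feasible.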
Wu, Y. (2024). Large Language Model and Text Generation (pp. 265–297). https://doi.org/10.1007/978-3-031-55865-8_10