Modern Large Language Models: A First-Principles Guide to Building and Understanding Transformer-Based Language Models
Large language models now sit at the core of modern software systems. They power search, recommendation engines, coding assistants, conversational interfaces, and autonomous agents. Yet for many engineers and practitioners, these models remain opaque—understood through fragments of code, borrowed recipes, or surface-level explanations. This book was written to change that.

Modern Large Language Models is a clear, systems-level guide to understanding how transformer-based language models actually work—starting from first principles and building upward toward complete, modern LLM systems. Rather than treating large language models as black boxes, this book explains the fundamental ideas that make them possible: probabilistic language modeling, vector representations, attention mechanisms, optimization, and architectural composition. Concepts are introduced gradually, with visual intuition and concrete reasoning before full implementations, allowing readers to develop understanding that transfers beyond any single framework or model version.

The book takes you from the foundations of language modeling to the realities of training, fine-tuning, evaluation, and deployment. Along the way, it connects theory to practice, showing how design decisions shape model behavior, performance, and limitations. This is not a collection of shortcuts or prompt recipes. It is a guide for readers who want to reason about large language models as engineered systems—systems that can be analyzed, debugged, improved, and deployed with confidence.

What You'll Learn
• How language modeling works at a probabilistic level—and why it matters
• How tokens, embeddings, and vector spaces encode meaning
• How self-attention and transformer architectures operate internally
• How complete GPT-style models are built from first principles
• How training pipelines work, including optimization and scaling considerations
• How fine-tuning, instruction tuning, and preference optimization fit together
• How embeddings, retrieval, and RAG systems extend model capabilities
• How modern LLM systems are evaluated, deployed, and monitored responsibly

What Makes This Book Different
Most books on large language models focus either on high-level descriptions or on narrow implementation details. This book takes a first-principles, systems-oriented approach, emphasizing understanding over memorization and architecture over tools. The examples use PyTorch for clarity, but the ideas are framework-agnostic and designed to remain relevant as tooling and architectures evolve. Clean diagrams, structured explanations, and carefully reasoned trade-offs replace hype and jargon.

Who This Book Is For
This book is written for software engineers, data scientists, machine learning practitioners, researchers, and technically curious readers who want to move beyond surface familiarity with LLMs. You do not need to be an expert in machine learning to begin, but you should be comfortable with programming and willing to engage with ideas thoughtfully. Readers looking for quick tutorials or platform-specific recipes may want supplementary resources; readers seeking durable understanding will find this book invaluable.

What This Book Is Not
This book does not promise instant mastery, viral tricks, or platform-specific shortcuts. It does not focus on prompt engineering in isolation, nor does it attempt to catalog every model variant or benchmark. Instead, it focuses on what lasts: the principles that explain why large language models work—and how to think clearly about them.
Publication year: 2025
Language: English