Xuanyu Chen

Topic: #llm

A collection of 2 posts tagged #llm.

How LoRA works, how quantization reduces memory, and how QLoRA combines both for efficient fine-tuning.

8 min read
#ai #fine-tuning #llm

The fundamentals of fine-tuning a large language model, from data preparation and hyperparameters to the training loop and evaluation.

6 min read
#ai #fine-tuning #llm