Large language models are trained in two stages: (1) unsupervised pretraining
from raw text, to …
Not much beyond what was in the abstract: a small number of fine-tuning examples is enough to greatly improve the [human-scored] performance of LLMs.