Jacob T. wants to read LIMA: Less Is More for Alignment by Chunting Zhou (2023, arXiv)
May 22, 2023

Abstract: Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to …

Curious to see how model fine-tuning works, as well as what detection accuracy looks like for tuned LLMs.