Distilling Knowledge from Large LLMs: Fine-tuning Mistral with LoRA

As large language models (LLMs) continue to advance, there is a growing need to distill their knowledge into smaller, more efficient models suitable for real-world applications. One promising approach is knowledge distillation via fine-tuning using techniques like LoRA (Low-Rank Adaptation). In this article, we’ll dive into best practices for fine-tuning the 7B parameter Mistral model with LoRA.

The LoRA Advantage

Traditional fine-tuning updates all the weights of a pre-trained LLM, which can be computationally expensive and data-hungry, especially for large models. LoRA circumvents this by injecting trainable rank decomposition matrices into the LLM layers, enabling efficient adaptation to new tasks without modifying the original model weights.
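
In other words, for a frozen weight matrix W of shape (d, k), LoRA learns a small down-projection A (r × k) and up-projection B (d × r) and adds their scaled product to the layer's output. A minimal PyTorch sketch of the idea (illustrative only; real implementations such as Hugging Face PEFT add dropout, weight merging, and more):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # original weights stay frozen
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)  # rank-r down-projection
        self.B = nn.Parameter(torch.zeros(d, r))         # zero init: training starts at the base model
        self.scale = alpha / r

    def forward(self, x):
        # W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```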

Compared to full fine-tuning, LoRA requires significantly less compute and data, making it well-suited for fine-tuning models like Mistral. It has been shown to match or even exceed the performance of full fine-tuning on various tasks while using orders of magnitude fewer trainable parameters.

Selecting the Optimal LoRA Rank

The LoRA rank (r) determines the number of trainable parameters and directly impacts the model’s capacity to capture task-specific knowledge. A higher rank allows the model to better approximate the ideal fine-tuned weights, potentially improving performance. However, it also increases memory requirements and the risk of overfitting.

For Mistral, commonly used ranks are r=64 or r=128, though some practitioners have experimented with higher values such as r=256, at which roughly 8% of the model's parameters become trainable. The optimal rank depends on the complexity of the task and the dataset size: simple tasks may work well with a lower r, while more complex ones may benefit from a higher r.
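
One quick way to see what a given rank costs is PEFT's trainable-parameter report; the target modules below are a common choice for Mistral's attention projections, not a prescription:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
config = LoraConfig(
    r=128,                                                    # LoRA rank
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # prints trainable vs. total parameter counts
```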

Dataset Size and Quality

While LoRA is data-efficient compared to full fine-tuning, having sufficient high-quality training data is still crucial for achieving good performance. For a 7B model like Mistral, researchers recommend at least 50,000 examples for reasonable results, with 100,000+ examples often yielding better performance.

However, even smaller datasets of 1,000 to 10,000 carefully curated examples can be effective with LoRA, and in such low-data regimes it can outperform full fine-tuning, which needs far more data. Data quality and relevance to the target task matter more than sheer quantity: high-quality, curated datasets can outperform larger, noisier ones.

Using too little data (e.g. less than 1,000 examples) may lead to overfitting or poor performance. For very large datasets (>1M examples), full fine-tuning may be more effective than LoRA, depending on available compute resources.

Putting it All Together

So, what are the best practices for fine-tuning Mistral with LoRA? Based on current research, a good starting point could be:

  • LoRA rank (r) = 128
  • 10,000 – 100,000 high-quality, task-relevant examples

During training, it’s essential to monitor performance on a held-out validation set to select the best checkpoint and avoid overfitting. Additionally, increasing the LoRA alpha (lora_alpha) can help counteract a lower rank but may introduce instability.
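
As a concrete (hedged) illustration, the checkpoint-selection advice translates into something like the following Hugging Face `TrainingArguments`; the numbers are placeholders, not tuned recommendations:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-lora",
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    num_train_epochs=3,
    evaluation_strategy="steps",   # evaluate on the held-out validation set...
    eval_steps=200,
    save_strategy="steps",
    save_steps=200,
    load_best_model_at_end=True,   # ...and keep the best checkpoint
    metric_for_best_model="eval_loss",
)
```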

Distillation Approaches

Beyond LoRA, researchers have explored various distillation approaches for transferring knowledge from large LLMs to smaller models:

  1. Reverse KL Divergence: Replacing the standard forward KL divergence loss with reverse KL can prevent the student model from overestimating low-probability regions of the teacher LLM’s distribution, making it more suitable for generative tasks (see the sketch after this list).
  2. Multi-Task Learning with Rationales: Training the student on two tasks – label prediction and rationale generation, where rationales are intermediate reasoning steps extracted from the LLM teacher. This creates an explicit connection between inputs and outputs.
  3. Data Augmentation: Leveraging data augmentation to generate context-rich, skill-specific training data from the LLM teacher. This helps the student model approximate the teacher’s contextual abilities and ethical alignment.
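
To make the forward/reverse KL distinction in point 1 concrete, here is a small PyTorch sketch over teacher and student logits (illustrative only; real distillation losses typically add temperature scaling):

```python
import torch
import torch.nn.functional as F

def forward_kl(teacher_logits, student_logits):
    """KL(p_teacher || q_student): pushes the student to cover all teacher modes."""
    p = F.softmax(teacher_logits, dim=-1)
    log_p = F.log_softmax(teacher_logits, dim=-1)
    log_q = F.log_softmax(student_logits, dim=-1)
    return (p * (log_p - log_q)).sum(-1).mean()

def reverse_kl(teacher_logits, student_logits):
    """KL(q_student || p_teacher): the student avoids regions the teacher deems unlikely."""
    q = F.softmax(student_logits, dim=-1)
    log_q = F.log_softmax(student_logits, dim=-1)
    log_p = F.log_softmax(teacher_logits, dim=-1)
    return (q * (log_q - log_p)).sum(-1).mean()
```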

The Future of LLM Distillation

As LLMs continue to grow in size and capability, techniques like LoRA and knowledge distillation will become increasingly important for making these models accessible and deployable across a wide range of applications.

By following best practices, leveraging the latest research, and adhering to legal and ethical considerations when working with LLM outputs, practitioners can effectively distill the knowledge from large models like Mistral into smaller, more efficient models tailored to their specific needs.

The possibilities for LLM distillation are vast, paving the way for a future where the power of large language models is available to everyone, regardless of computational resources.

Revolutionizing AI Efficiency: How Microsoft’s LLMLingua-2 is Changing the Game with 8x Less Memory

  • LLMLingua-2 is a novel compression technology developed by Microsoft Research, achieving state-of-the-art results with 8 times less GPU memory on tasks typically handled by models like GPT-4.
  • It introduces innovative approaches such as "Data Distillation," "Bidirectional Token Classification," and optimized compression objectives to compress prompts efficiently without losing key information.
  • The technology has shown superior performance across various language tasks and demonstrated remarkable generalization across different LLMs and languages, from GPT-3.5 to Mistral-7B and from English to Chinese.
  • Compared to existing prompt compression methods, LLMLingua-2 is 3 to 6 times faster, accelerates end-to-end inference by 1.6 to 2.9 times, and significantly reduces GPU memory usage by a factor of 8.
  • This advancement represents a significant step forward in making language AI more practical and scalable for real-world applications, demonstrating Microsoft Research’s leadership in the field.

https://arxiv.org/pdf/2403.12968.pdf

Sample model:

https://huggingface.co/microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank
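
A hedged usage sketch with the open-source llmlingua package and the model linked above (the constructor and method names follow the project's README at the time of writing; treat the exact signatures as indicative):

```python
from llmlingua import PromptCompressor

# LLMLingua-2 compressor backed by the multilingual BERT model linked above
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank",
    use_llmlingua2=True,
)

long_prompt = "... a long meeting transcript or retrieved context ..."
result = compressor.compress_prompt(long_prompt, rate=0.33)  # keep roughly a third of the tokens
print(result["compressed_prompt"])
```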

Unified Time Series Model

UniTS is a unified time series model that can process various tasks across multiple domains with shared parameters and does not have any task-specific modules.

Foundation models, especially LLMs, are profoundly transforming deep learning. Instead of training many task-specific models, we can adapt a single pretrained model to many tasks via few-shot prompting or fine-tuning. However, while current foundation models handle sequence data such as text, they do not extend to time series, which pose unique challenges due to the inherently diverse, multi-domain nature of time series datasets, diverging task specifications across forecasting, classification, and other task types, and the apparent need for task-specialized models.

We developed UniTS, a unified time series model that supports a universal task specification, accommodating classification, forecasting, imputation, and anomaly detection tasks. This is achieved through a novel unified network backbone, which incorporates sequence and variable attention along with a dynamic linear operator and is trained as a unified model. 

Across 38 multi-domain datasets, UniTS demonstrates superior performance compared to task-specific models and repurposed natural language-based LLMs. UniTS exhibits remarkable zero-shot, few-shot, and prompt learning capabilities when evaluated on new data domains and tasks. We will release the source code and datasets.

https://arxiv.org/pdf/2403.00131v1.pdf

https://zitniklab.hms.harvard.edu/projects/UniTS/

https://github.com/mims-harvard/UniTS

Renumics Spotlight

Spotlight helps you understand unstructured datasets fast. You can create interactive visualizations from your dataframe with just a few lines of code, and you can leverage data enrichments (e.g., embeddings, predictions, uncertainties) to identify critical clusters in your data.
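
For instance, a minimal sketch based on the project's documented `spotlight.show` entry point (the dataframe columns are invented for illustration):

```python
import pandas as pd
from renumics import spotlight

# Hypothetical dataframe mixing raw data with enrichments (predictions, uncertainty)
df = pd.DataFrame({
    "text": ["a short review", "another review"],
    "label": [0, 1],
    "prediction": [0, 0],
    "uncertainty": [0.12, 0.57],
})
spotlight.show(df)  # opens the interactive exploration UI in the browser
```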

https://spotlight.renumics.com/

Revolutionizing AI Reading Comprehension: ReadAgent’s Breakthrough in Handling Documents with 20 Million Tokens

  • Introduction to ReadAgent by Google DeepMind
    • ReadAgent is an AI agent able to understand texts far longer than its underlying language model’s context window.
    • It uses a human-like reading strategy to comprehend complex documents.
  • Challenges faced by language models
    • Context length limitation: a fixed token-processing capacity leads to performance decline on long inputs.
    • Ineffective context usage: comprehension degrades as text length increases.
  • Features of ReadAgent
    • Mimics human reading by forming and using "gist memories" of texts.
    • Breaks texts down into smaller "episodes" and generates a gist memory for each.
    • Looks up the relevant episodes when needed to answer questions.
  • Performance enhancements
    • Understands documents 20 times longer than its base language model can handle.
    • Shows improved performance on long-document question-answering datasets:
      • QuALITY: accuracy improved from 85.8% to 86.9%.
      • NarrativeQA: rating increased by 13-32% over baselines.
      • QMSum: rating improved from 44.96% to 49.58%.
  • Potential applications
    • Legal contract review, scientific literature analysis, customer support, financial report summarization, automated online course creation.
    • Points to the future potential of AI mastering lengthy real-world documents through human-like reading strategies.
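
In outline, the approach reduces to a short loop: paginate, gist, look up, answer. Below is a minimal Python sketch of that loop; the `llm()` callable, the chunking rule, and all prompts are placeholders of mine, not DeepMind's implementation:

```python
def read_agent_answer(document: str, question: str, llm, max_episode_chars: int = 4000):
    """Sketch of ReadAgent's read-then-look-up strategy (illustrative)."""
    # 1. Split the document into episodes (here: naive fixed-size chunks)
    episodes = [document[i:i + max_episode_chars]
                for i in range(0, len(document), max_episode_chars)]
    # 2. Compress each episode into a short "gist memory"
    gists = [llm(f"Summarize this passage in a few sentences:\n{ep}") for ep in episodes]
    # 3. Ask the model which episodes are relevant to the question
    menu = "\n".join(f"[{i}] {g}" for i, g in enumerate(gists))
    picks = llm(f"Question: {question}\nGists:\n{menu}\n"
                f"Reply with the indices of the relevant passages, comma-separated.")
    indices = [int(s) for s in picks.split(",")
               if s.strip().isdigit() and int(s) < len(episodes)]
    # 4. Answer using the gists plus the looked-up full episodes
    context = menu + "\n\n" + "\n\n".join(episodes[i] for i in indices)
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```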

https://read-agent.github.io/

LoRAX

Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

LoRAX (LoRA eXchange) is a framework that allows users to serve thousands of fine-tuned models on a single GPU, dramatically reducing the cost of serving without compromising on throughput or latency.
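
Usage follows the project's README; here is a hedged sketch with the lorax Python client (the endpoint and adapter_id are placeholders):

```python
from lorax import Client

client = Client("http://127.0.0.1:8080")  # a running LoRAX server
# The same base model serves different fine-tuned adapters, selected per request
response = client.generate(
    "Classify the sentiment of: 'the movie was wonderful.'",
    adapter_id="my-org/sentiment-lora",   # hypothetical adapter repository
    max_new_tokens=32,
)
print(response.generated_text)
```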

https://github.com/predibase/lorax

QA-LoRA: Fine-Tune a Quantized Large Language Model on Your GPU

State-of-the-art large language models (LLMs) are pre-trained with billions of parameters. While pre-trained LLMs can perform many tasks, they can become much better once fine-tuned.

Thanks to LoRA, fine-tuning costs can be dramatically reduced. LoRA adds low-rank tensors, i.e., a small number of parameters (millions), on top of the frozen original parameters. Only the parameters in the added tensors are trained during fine-tuning.

LoRA still requires the full model to be loaded in memory. To reduce the memory cost and speed up fine-tuning, a new approach proposes quantization-aware LoRA (QA-LoRA) fine-tuning.
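
QA-LoRA's exact mechanics (quantization-aware adapters that merge back into the quantized weights) are specific to the paper, but the general pattern of fine-tuning adapters over a quantized base can be sketched with a QLoRA-style setup using bitsandbytes and PEFT, shown here only as a point of reference:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA-style setup (not QA-LoRA itself): 4-bit base model + trainable LoRA adapters
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb
)
model = prepare_model_for_kbit_training(model)   # cast norms, enable grads where needed
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
```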

In this article, I explain QA-LoRA and review its performance compared with previous work (especially QLoRA). I also show how to use QA-LoRA to fine-tune your own quantization-aware LoRA for Llama 2.

https://towardsdatascience.com/qa-lora-fine-tune-a-quantized-large-language-model-on-your-gpu-c7291866706c

Meta-CoT Prompting

Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models

Meta-CoT is a generalizable CoT prompting method for mixed-task scenarios where the type of the input question is unknown. It consists of three phases: (i) scenario identification: categorize the scenario of the input question; (ii) demonstration selection: fetch the in-context learning (ICL) demonstrations for the categorized scenario; (iii) answer derivation: perform answer inference by feeding the LLM a prompt comprising the fetched ICL demonstrations and the input question.
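
The three phases map naturally onto a small pipeline. In the sketch below, the `llm` callable, the category names, and the `demo_store` mapping are hypothetical stand-ins for the paper's components:

```python
def meta_cot(question: str, llm, demo_store: dict) -> str:
    """Sketch of Meta-CoT's three phases (illustrative, not the authors' code)."""
    # (i) Scenario identification: categorize the input question
    scenario = llm(
        f"Which category does this question belong to "
        f"({', '.join(demo_store)})?\nQuestion: {question}"
    ).strip()
    # (ii) Demonstration selection: fetch ICL demonstrations for that scenario
    demos = demo_store.get(scenario, "")
    # (iii) Answer derivation: prompt the LLM with demonstrations + question
    return llm(f"{demos}\n\nQ: {question}\nA: Let's think step by step.")
```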

https://arxiv.org/abs/2310.06692

https://github.com/Anni-Zou/Meta-CoT