“RAG vs Fine-Tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture”, 2024-01-16:
There are two common ways developers incorporate proprietary and domain-specific data when building applications with Large Language Models (LLMs): Retrieval-Augmented Generation (RAG) and fine-tuning. RAG augments the prompt with external data, while fine-tuning incorporates the additional knowledge into the model itself. However, the pros and cons of both approaches are not well understood.
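To make the contrast concrete, here is a minimal, self-contained sketch of the two approaches in Python; the toy word-overlap retriever and the `llm` callables are illustrative stand-ins, not the systems studied in the paper.

```python
# Minimal sketch contrasting RAG and fine-tuning (toy retriever; `llm`
# is any text-completion callable, not the paper's implementation).
from typing import Callable

def retrieve(question: str, passages: list[str], k: int = 3) -> list[str]:
    # Toy retriever: rank passages by word overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(passages,
                    key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def rag_answer(question: str, passages: list[str],
               llm: Callable[[str], str]) -> str:
    # RAG: augment the prompt with retrieved external context at inference time.
    context = "\n\n".join(retrieve(question, passages))
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

def finetuned_answer(question: str,
                     finetuned_llm: Callable[[str], str]) -> str:
    # Fine-tuning: the knowledge lives in the weights, so the prompt stays short.
    return finetuned_llm(f"Question: {question}\nAnswer:")
```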
In this paper, we propose a pipeline for fine-tuning and RAG, and present the tradeoffs of both for multiple popular LLMs, including LLaMA-2-13B, GPT-3.5, and GPT-4.
Our pipeline consists of multiple stages, including extracting information from PDFs, generating questions and answers, using them for fine-tuning, and leveraging GPT-4 for evaluating the results. We propose metrics to assess the performance of different stages of the RAG and fine-tuning pipelines.
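As an illustration of these stages, the sketch below assumes the pypdf library for text extraction and treats the LLM steps as generic text-completion callables; the prompts and helper names are our own, not the paper's components.

```python
# Sketch of the data-generation and evaluation stages described above
# (illustrative prompts and helpers, assuming pypdf for PDF extraction).
import json
from typing import Callable

from pypdf import PdfReader  # pip install pypdf

def extract_text(pdf_path: str) -> str:
    # Stage 1: pull raw text out of the source PDF.
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def generate_qa_pairs(text: str, llm: Callable[[str], str]) -> list[dict]:
    # Stage 2: ask an LLM to draft question-answer pairs grounded in the text.
    prompt = (
        "From the following document, write 5 question-answer pairs as a "
        'JSON list of {"question": ..., "answer": ...} objects.\n\n'
        + text[:4000]
    )
    return json.loads(llm(prompt))

def to_finetune_jsonl(pairs: list[dict], out_path: str) -> None:
    # Stage 3 input: dump pairs in a prompt/completion format for fine-tuning.
    with open(out_path, "w") as f:
        for p in pairs:
            f.write(json.dumps({"prompt": p["question"],
                                "completion": p["answer"]}) + "\n")

def judge(question: str, candidate: str, reference: str,
          gpt4: Callable[[str], str]) -> str:
    # Stage 4: use GPT-4 as an evaluator against the reference answer.
    return gpt4(
        f"Question: {question}\nReference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Score the candidate from 1 to 5 and explain briefly."
    )
```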
We conduct an in-depth study on an agricultural dataset (e.g., “best time to plant X”). Agriculture as an industry has not seen much penetration of AI, and we study a potentially disruptive application: what if we could provide location-specific insights to a farmer?
Our results show the effectiveness of our dataset generation pipeline in capturing geography-specific knowledge, and the quantitative and qualitative benefits of RAG and fine-tuning.
We see an accuracy increase of over 6 percentage points (p.p.) when fine-tuning the model, and this is cumulative with RAG, which increases accuracy by a further 5 p.p. In one particular experiment, we also demonstrate that the fine-tuned model leverages information from across geographies to answer specific questions, increasing answer similarity from 47% to 72%.
Overall, the results point to how systems built using LLMs can be adapted to respond to and incorporate knowledge across a dimension that is critical for a specific industry, paving the way for further applications of LLMs in other industrial domains.
[GPT-4 fine-tuning] …Lastly, we also fine-tuned GPT-4 in this setting. Since the model is larger and more expensive, our goal was to assess whether it would benefit from additional knowledge compared to its base training. Given its complexity and the amount of available data, we used Low-Rank Adaptation (LoRA) (Hu et al., 2021) for the fine-tuning process. This technique provides an efficient way to adapt parameter-heavy models, requiring less memory and compute than traditional re-training. By tuning a reduced set of parameters in the attention modules of the architecture, it embeds domain-specific knowledge from new data without losing knowledge gained during base training. In our study, optimization was done for 4 epochs, with a batch size of 256 samples and a base learning rate of 1 × 10⁻⁴ that decayed as training progressed. The fine-tuning was carried out on 7 nodes, each with 8 A100 GPUs, over a total runtime of 1.5 days.
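Since GPT-4's fine-tuning stack is not public, the sketch below mirrors only the stated setup with open tooling (Hugging Face peft and transformers, with LLaMA-2-13B standing in for the model): low-rank adapters on the attention projections, 4 epochs, an effective batch of 256 samples, and a 1 × 10⁻⁴ base learning rate with decay. The adapter rank, alpha, and cosine schedule are assumptions.

```python
# LoRA configuration comparable to the one described above, using open
# tooling; only the stated hyperparameters are mirrored, and the rank,
# alpha, and decay schedule are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

# Freeze the base weights and learn only low-rank updates to the
# attention projections, preserving knowledge gained during pretraining.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a small fraction of the 13B parameters

args = TrainingArguments(
    output_dir="lora-out",
    num_train_epochs=4,              # 4 epochs, as stated above
    per_device_train_batch_size=8,   # per-device batch x device count x
    gradient_accumulation_steps=4,   # accumulation should total 256
    learning_rate=1e-4,              # base LR of 1e-4...
    lr_scheduler_type="cosine",      # ...decayed as training progresses
)
```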