6 Areas to Evaluate When Looking at RAG vs. Fine-Tuning

When building a new Retrieval Augmented Generation (RAG) or AI question-answering system, or improving the performance of an existing one, it's important to understand the characteristics of the problem. The six areas we look at are:

  • Knowledge Source: What source or sources of knowledge do you want to use in the system, and what are their characteristics in terms of size, privacy, and ownership?
  • Knowledge Update Cadence: How frequently is the source data updated? Are more recent records more valuable? Do historical records or versions need to be removed?
  • Availability of Training Data for Fine-Tuning: Is there an existing repository of examples or approved outputs that can be used for fine-tuning? Has this data been vetted for quality?
  • Importance of Minimizing Hallucinations: How do factual errors in the LLM output affect your users and your business?
  • Need for Auditability: Do users of the system need to be able to trace answers back to source facts and examples? (See the sketch after this list.)
  • Need to Replace LLM Tone or Behavior: Do you have custom requirements for output language, tone, format, or other characteristics that foundation models don't meet out of the box?
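
As a rough illustration of what auditability can look like in practice, the sketch below shows a hypothetical RAG answer path that keeps each retrieved passage's source identifier and returns it alongside the answer, so users can trace claims back to the original documents. The `retrieve` and `generate` functions are stand-ins for whatever vector store and LLM client you actually use; none of this is tied to a specific product.

```python
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    source_id: str  # e.g. a document URL or record ID


def retrieve(question: str, k: int = 4) -> list[Passage]:
    """Stand-in for a vector-store lookup; returns the top-k passages."""
    raise NotImplementedError("wire up your own retriever here")


def generate(prompt: str) -> str:
    """Stand-in for an LLM call."""
    raise NotImplementedError("wire up your own model client here")


def answer_with_citations(question: str) -> dict:
    """Answer a question and keep source IDs so every claim is traceable."""
    passages = retrieve(question)
    context = "\n\n".join(
        f"[{i + 1}] ({p.source_id}) {p.text}" for i, p in enumerate(passages)
    )
    prompt = (
        "Answer the question using only the numbered passages below, "
        "and cite passage numbers for each claim.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {
        "answer": generate(prompt),
        "sources": [p.source_id for p in passages],  # kept for traceability
    }
```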

Assessing your situation across these six areas can guide you toward RAG, fine-tuning, or a hybrid of the two as the right choice for your application.
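
As a rough way to picture that assessment, here is a minimal, hypothetical scoring sketch: each of the six answers nudges the recommendation toward RAG, fine-tuning, or a hybrid. The parameter names and the tallying rule are illustrative only, not a formal methodology.

```python
def recommend_approach(
    knowledge_is_large_or_private: bool,
    updates_frequently: bool,
    has_vetted_training_data: bool,
    hallucinations_are_costly: bool,
    needs_auditability: bool,
    needs_custom_tone_or_behavior: bool,
) -> str:
    """Illustrative heuristic: tally which approach each of the six areas favors."""
    rag_score = sum([
        knowledge_is_large_or_private,
        updates_frequently,
        hallucinations_are_costly,
        needs_auditability,
    ])
    ft_score = sum([
        has_vetted_training_data,
        needs_custom_tone_or_behavior,
    ])
    if rag_score and ft_score:
        return "hybrid: RAG for grounding plus fine-tuning for behavior"
    if rag_score >= ft_score:
        return "RAG"
    return "fine-tuning"


# Example: a private knowledge base that changes daily, with a strict tone guide
print(recommend_approach(True, True, True, True, True, True))
```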

Dave Greenfield
CPO