Open vs. Closed: The Fine-Tuning Divide in AI Models

From OpenAI's GPT-4 to Google's T5, fine-tuning an AI model requires a different strategy depending on whether you are working with a closed-weight or an open-weight model.

Explore the differences, using GPT-4 and T5 as examples of closed-weight and open-weight models respectively, and how you can adjust (or not) the inner workings of AI models to best meet your needs.

Everything AI can deliver comes from the foundation of the models used.

The evolution of these models has driven the need for fine-tuning, an essential step in building AI systems: it adapts a general-purpose model so it can be applied to specific needs.

However, not all fine-tuning is the same, and differences emerge when comparing large language models (LLMs) such as OpenAI's GPT-4 with Google's T5. So, what does fine-tuning look like in closed-weight (GPT-4) and open-weight (T5) models?

In this article, you'll learn the differences between fine-tuning GPT-4 using data provided in formats such as JSONL and fine-tuning a model like T5 using more traditional, fully annotated data.

Understanding Fine-Tuning

What is fine-tuning of AI models? It's the process of adjusting a pre-trained model so it performs well on more domain-specific tasks. Both GPT-4 and T5 can be fine-tuned, but the depth of customisation and the control you have over the model differ significantly.

Closed-Weight Models (e.g., GPT-4): These models have weights that are not publicly accessible. Instead of modifying the model's internal structure yourself, you adjust how the model behaves by providing examples in a format like JSONL. This type of fine-tuning shapes responses through examples but never gives you access to, or direct control over, the core model weights.

Closed-Weight Fine-Tuning Model

Open-Weight Models (e.g., T5): Open-weight models provide access to the internal weights, allowing them to be modified directly through training. These models are highly customisable and can be tuned more precisely to individual tasks.

Open-Weight Fine-Tuning Model

Fine-Tuning GPT-4 with JSONL Data

Fine-tuning GPT-4 involves supplying a dataset formatted in JSONL (JSON Lines), which consists of prompt-completion pairs. The model's behaviour adjusts based on the examples provided, but you never see or directly modify the underlying weights; any training happens behind the API.

{"prompt": "What is the capital of France?", "completion": "The capital of France is Paris."}
{"prompt": "Who wrote '1984'?", "completion": "The book '1984' was written by George Orwell."}
{"prompt": "What is the boiling point of water?", "completion": "The boiling point of water is 100°C or 212°F at sea level."}
{"prompt": "What is the largest planet in our solar system?", "completion": "The largest planet in our solar system is Jupiter."}
{"prompt": "Translate 'Hello' to Spanish.", "completion": "Hola"}

Example JSONL Data
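
The pairs above follow the classic prompt-completion style; at the time of writing, OpenAI's fine-tuning for chat models expects a messages-based JSONL format instead, so check the current documentation for the exact schema. As a minimal sketch of the workflow (assuming the OpenAI Python SDK, with the file name and model identifier as illustrative placeholders), submitting a fine-tuning job looks like this:

# A hedged sketch of submitting a fine-tuning job with the OpenAI Python SDK.
# "training_data.jsonl" and the model name are placeholders, not prescriptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file (for chat models, each line holds a
# {"messages": [...]} conversation rather than a prompt-completion pair).
upload = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the job; the provider trains a private copy behind the API,
# so you never download or edit the weights yourself.
job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder; use a model available for fine-tuning
)
print(job.id, job.status)

Example: submitting a closed-weight fine-tuning job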


Here are the pros and cons of using closed-weight fine-tuning.

Pros:

  • Easy to use: JSONL files are simple to create, especially when the task involves text you already have. There's no need to annotate or process data extensively to build the prompt-completion pairs.
  • Flexible: This method is the best option if you simply want to change how GPT-4 responds, for example making its answers more formal or more concise.
  • Quick adjustments: Because you only supply examples rather than retraining the model yourself, adapting to a new task is relatively quick.

Cons:

  • No access to model weights: You can't inspect or directly modify GPT-4's weights. The fine-tuning process influences behaviour through examples, which limits the depth of customisation.
  • Surface-level adjustments: This method is well suited to minor refinements of a model's behaviour, but it's less effective for creating a highly specialised model.

Fine-Tuning T5 with Annotated Data

In contrast, fine-tuning a model like T5 involves providing annotated datasets in which input-output pairs are tokenised, and the model's internal weights are updated based on that data. This more involved fine-tuning fundamentally changes the model, allowing it to specialise in a given domain.
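
To make the mechanics concrete, here is a minimal sketch (assuming PyTorch and the Hugging Face transformers library) that tokenises one annotated input-output pair from the JSONL example above and performs a single weight update on t5-small; a real fine-tuning run would loop over a full dataset for several epochs.

# A hedged sketch of one weight update on T5 with an annotated pair.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5TokenizerFast.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One annotated input-output pair; the target is tokenised as the label sequence.
inputs = tokenizer("Translate 'Hello' to Spanish.", return_tensors="pt")
labels = tokenizer("Hola", return_tensors="pt").input_ids

model.train()
outputs = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=labels)   # loss measured against the annotated target
outputs.loss.backward()          # gradients flow into T5's own parameters
optimizer.step()                 # the internal weights are updated in place
optimizer.zero_grad()

Example: a single training step on T5. Unlike the GPT-4 workflow, every parameter lives on your machine, which is exactly what makes deeper specialisation (and the extra cost) possible.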

Here are the pros and cons of using this method.

Pros:

  • Direct control over learning: Because T5's weights are adjusted during fine-tuning, you have much more control over how the model behaves. It's not just tweaking its responses; it's learning the task, resulting in better specialisation.
  • Task specialisation: For tasks requiring high accuracy, like machine translation or summarisation, fine-tuning T5 delivers more precise results. The model becomes an expert in the task.
  • Custom data flexibility: Annotated data allows for highly refined datasets that aren't restricted to the simpler prompt-completion pairs used with GPT-4.

Cons:

  • Complexity: Fine-tuning T5 requires more preparation. To start, you have to annotate and tokenise data. Be prepared for a more time-consuming process.
  • Longer training time: Since the model's weights are updated, fine-tuning takes longer and is more resource-intensive.
  • Data-intensive: Fine-tuning T5 often requires larger datasets to achieve optimal results, making it less practical for smaller data sources.

Open-Weight Models: A Broader Trend

T5 is part of a growing family of open-weight models, in which the weights are published so you can adjust the model's internal structure directly. Other examples include Meta's LLaMA and newer models such as those from Mistral AI. Open-weight models allow developers to customise them by adjusting their weights to perform specific tasks. This level of flexibility is highly valuable for those who want to develop specialised AI systems without relying on the constraints of closed systems like GPT-4.

However, the trade-offs are not insignificant. Open-weight models require more technical expertise and considerable computational resources to fine-tune effectively. The process is also more complex, as developers need to manage the input-output examples and the entire end-to-end process of training and fine-tuning the model.
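
As a brief illustration (assuming the Hugging Face transformers library and access to the publicly released Mistral 7B checkpoint; the model name is interchangeable), loading any open-weight model exposes its parameters in the same way T5 does, leaving decisions about what to train, freeze, or adapt entirely in your hands.

# A hedged sketch: the same transformers workflow applies to other open-weight models.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"   # illustrative open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Every parameter is visible and trainable by default; you choose which weights
# to update, freeze, or adapt (for example with LoRA) before training.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")

Example: inspecting an open-weight model's trainable parameters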

A Quick Comparison

Aspect              | GPT-4                                                | T5
--------------------|------------------------------------------------------|----------------------------------------------------------
Model Type          | Closed-Weight Model                                  | Open-Weight Model
Model Weights       | Not accessible; behaviour influenced by examples     | Model weights are updated to fit the specific task
Learning            | Learns how to respond better based on examples       | Learns the entire task, altering weights based on data
Data Format         | JSONL (prompt-completion pairs)                      | Custom dataset (input-output pairs) with tokenisation
Control Over Model  | Indirect control over behaviour through examples     | Direct control over the model's learning and performance
Usage               | Better for instructional tasks or minor adjustments  | Better suited for specific task specialisation

Evaluate Needs and Capabilities

In the era of AI model fine-tuning, no single process solves every need. There are pros and cons to both closed-weight and open-weight models: the latter enables greater customisation and more in-depth learning, while the former is quicker to set up but better suited to more straightforward tasks.

When determining which is best for you, follow these tips:

  • Define how complex the task is.
  • Determine what, if any, specialisation is necessary.
  • Identify data and resource availability.

With this quick assessment, you can choose between the simplicity and speed of closed-weight fine-tuning and the customisation and specialised learning of open-weight fine-tuning.

