
OpenAI Releases New Fine-Tuning API Features

OpenAI announced the release of new features in its fine-tuning API. The new features give model developers more control over the fine-tuning process and better insight into model performance.

The updates include the ability to create a model checkpoint after every training epoch during fine-tuning, compute metrics over the entire validation dataset, and integrate with third-party platforms such as Weights & Biases. Besides the API changes, OpenAI also updated the fine-tuning dashboard, giving developers more control over training hyperparameters and jobs as well as better insight into metrics. The model playground now has a side-by-side comparison feature that lets users enter a single prompt and compare the outputs of different base and fine-tuned models. Finally, OpenAI announced an update to its Custom Model program: assisted fine-tuning, in which OpenAI's team works with an organization to help fine-tune a model. According to OpenAI:

We believe that in the future, the vast majority of organizations will develop customized models that are personalized to their industry, business, or use case. With a variety of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impact from their AI implementations. The key is to clearly scope the use case, design and implement evaluation systems, choose the right techniques, and be prepared to iterate over time for the model to reach optimal performance.
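To illustrate how the new options surface in the API, the following sketch uses the OpenAI Python SDK to start a fine-tuning job with an explicit epoch count and a Weights & Biases integration, then lists the per-epoch checkpoints once the job has finished. The file IDs, project name, and hyperparameter values are placeholders, not values from the announcement.

    # Sketch only: file IDs, project name, and epoch count are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Start a fine-tuning job with explicit hyperparameters and a
    # Weights & Biases integration so training metrics stream to an
    # external dashboard.
    job = client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-TRAINING_ID",      # placeholder upload ID
        validation_file="file-VALIDATION_ID",  # placeholder upload ID
        hyperparameters={"n_epochs": 3},
        integrations=[
            {"type": "wandb", "wandb": {"project": "my-finetune-project"}}
        ],
    )

    # Once the job completes, each training epoch has its own checkpoint
    # that can be used or evaluated independently.
    checkpoints = client.fine_tuning.jobs.checkpoints.list(job.id)
    for ckpt in checkpoints.data:
        print(ckpt.step_number, ckpt.fine_tuned_model_checkpoint)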

Although foundation models such as GPT-3.5 and GPT-4 can perform well on a variety of tasks "out of the box," a fine-tuned model can provide better performance on specific tasks or can be made to "exhibit specific ingrained behavior patterns." Further, because fine-tuned models often require less verbose prompts, they can operate with lower cost and latency. InfoQ covered the initial launch of OpenAI's fine-tuning API in 2023. Since then, OpenAI claims the API has been used to train "hundreds of thousands of models."
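As a sketch of the shorter prompts this refers to, the call below queries a fine-tuned model through the Chat Completions API; the model ID is hypothetical, since real IDs are returned by a completed fine-tuning job, and the behavior learned during fine-tuning stands in for the long instruction block a base model would otherwise need.

    from openai import OpenAI

    client = OpenAI()

    # Hypothetical fine-tuned model ID; a real one is returned by a
    # completed fine-tuning job.
    response = client.chat.completions.create(
        model="ft:gpt-3.5-turbo-0125:acme::abc123",
        messages=[
            # No lengthy system prompt: the desired response style is
            # assumed to be baked in during fine-tuning.
            {"role": "user", "content": "Summarize this support ticket: ..."}
        ],
    )
    print(response.choices[0].message.content)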

OpenAI announced its Custom Model program at its 2023 Dev Day. In this program, "selected" organizations can work with OpenAI's researchers to modify any step of the training process and produce a bespoke model for the organization "from scratch." OpenAI claims that one customer in the program built a custom model that showed an "83% increase in factual responses." The newly announced assisted fine-tuning service does not build a completely new model. Instead, it offers customers fine-tuning features not available in the API, including "bespoke parameters and methods to maximize model performance."

In a Hacker News discussion about the release, one user pointed out:

Btw, if you've tried fine-tuning OpenAI models before January and came away unimpressed with the quality of the finished model, it's worth trying again. They made some unannounced changes in the last few months that make the fine-tuned models much stronger. That said, we've found that Mixtral fine-tunes still typically outperform GPT-3.5 fine tunes, and are far cheaper to serve.

OpenAI's YouTube channel includes a talk from the 2023 Dev Day, given by the engineering lead of OpenAI's Fine-Tuning product, that compares different performance-improving techniques, including fine-tuning and prompt engineering. The OpenAI docs also offer suggestions on alternatives to fine-tuning, including prompt engineering and function calling.
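For comparison, the sketch below shows function calling through the Chat Completions API, one of the alternatives the docs describe; the tool name and schema are invented for illustration.

    import json

    from openai import OpenAI

    client = OpenAI()

    # A single hypothetical tool definition; the model decides whether to
    # call it based on the user's message.
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_order_status",
                "description": "Look up the status of a customer order",
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            },
        }
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Where is order 42?"}],
        tools=tools,
    )

    # If the model chose to call the tool, print the call it requested.
    tool_calls = response.choices[0].message.tool_calls
    if tool_calls:
        call = tool_calls[0]
        print(call.function.name, json.loads(call.function.arguments))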
