Published: Apr 03, 2024
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
Apr 2024: Assess

We continue to caution against rushing to fine-tune large language models (LLMs) unless it's absolutely critical — it comes with significant overhead in cost and expertise. However, we think LLaMA-Factory can be useful when fine-tuning is needed. It's an open-source, easy-to-use fine-tuning and training framework for LLMs. With support for LLaMA, BLOOM, Mistral, Baichuan, Qwen and ChatGLM, it makes a complex technique like fine-tuning relatively accessible. Our teams have successfully used LLaMA-Factory's LoRA tuning on a LLaMA 7B model, so if you do need fine-tuning, this framework is worth assessing.
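
LLaMA-Factory itself is driven through its CLI and training configurations rather than hand-written training loops, so the details of a run live mostly in configuration. As a rough, hypothetical illustration of what LoRA tuning of a 7B LLaMA-family model involves, the sketch below uses Hugging Face's transformers and peft libraries directly — the kind of building blocks LLaMA-Factory wraps — and is not LLaMA-Factory's own API; the model name, dataset and hyperparameters are placeholders.

from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, TaskType, get_peft_model

base = "meta-llama/Llama-2-7b-hf"            # placeholder 7B base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adds small trainable adapter matrices to selected projection layers,
# so only a fraction of the parameters are updated during fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                     # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # attention projections to adapt
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # typically well under 1% of all weights

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-7b-lora",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=1e-4,
        fp16=True,
    ),
    train_dataset=train_dataset,             # assumed: your tokenised instruction dataset
)
trainer.train()
model.save_pretrained("llama-7b-lora")       # stores only the small adapter weights

Because only the adapter weights are trained and saved, LoRA keeps the memory footprint and resulting artifacts far smaller than full fine-tuning, which is part of what makes frameworks like LLaMA-Factory practical on modest GPU setups.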
