The first open LLM, "CLAIRE", is on Hugging Face

LINAGORA and the OpenLLM France community have published the first open LLM, "CLAIRE", on Hugging Face.

This is the Claire-7B-0.1 model, which is particularly suited to processing French dialogue data.

The selected training data are French conversational data available under open licences:

  • Some come from work carried out by LINAGORA's R&D teams to create suitable corpora;

  • Others are open corpora contributed by the natural language processing community. The datasets used are detailed in the model cards.

Claire-7B-0.1 is available in two versions, depending on the licences of the training data:

  • The first model is released under the CC-BY-NC-SA licence, because some of the data it was trained on is itself under CC-BY-NC-SA. This is the model that benefited from the largest dataset;

  • A second model is available under the Apache 2.0 open source licence. Its training uses only data under compatible licences (see the loading sketch after this list).
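For readers who want to try the model, here is a minimal loading sketch using the Hugging Face transformers library. The repository IDs and the speaker-turn prompt format below are assumptions based on the OpenLLM-France organisation's naming; check the model cards on the Hub for the exact identifiers and the recommended prompt format.

```python
# Minimal sketch: load one of the Claire variants from the Hugging Face Hub.
# Repo IDs are assumed; see the OpenLLM-France model cards for the exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLLM-France/Claire-7B-0.1"          # assumed: CC-BY-NC-SA variant
# model_id = "OpenLLM-France/Claire-7B-Apache-0.1" # assumed: Apache 2.0 variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Claire is adapted to dialogue transcripts, so a speaker-turn prompt is a
# natural fit (the exact turn format is an assumption; see the model card).
prompt = "[Intervenant 1:] Bonjour, comment allez-vous ?\n[Intervenant 2:]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```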

These models are the result of "continual pre-training" of the Falcon-7B base model, carried out to improve its behaviour on French dialogue data.
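In broad terms, continual pre-training means resuming the base model's causal language-modelling objective on new, domain-specific text rather than training from scratch. The sketch below illustrates that general recipe with the Hugging Face Trainer; the dataset file, sequence length and hyperparameters are illustrative placeholders, not the settings actually used for Claire.

```python
# Hedged sketch of continual pre-training: keep training a base causal LM on
# domain text. All file names and hyperparameters here are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Falcon has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_id)

# Hypothetical corpus: one French dialogue transcript per line.
dataset = load_dataset("text", data_files={"train": "dialogues_fr.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="claire-like",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # mlm=False gives the plain causal (next-token prediction) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```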

Congratulations to our R&D teams and partners!

Jean-Pierre LORRE, head of R&D at LINAGORA, looks back at the two base models that gave rise to CLAIRE and reports that Claire-Falcon-7B-0.1 outperforms its adapted Mistral counterpart in the fluency and relevance categories.


" For this work, we considered two basic models: Falcon 7B and Mistral 7B-v0.1, both of which we trained with our data.

After a rigorous evaluation, involving a cohort of evaluators that we will describe in a forthcoming paper, we selected the Falcon-7B model, which performed better.

To reach this conclusion, we compared the outputs of the Falcon-7B, Mistral-7B-v0.1, Claire-Falcon-7B-v0.1 and Claire-Mistral-7B-v0.1 models on conversational prompts.

Each of the four models' responses was assessed along three dimensions: interaction, fluency and relevance.

Our results confirm that continual pre-training of Falcon-7B and Mistral-7B-v0.1 improves on the base models along all three evaluation dimensions, and that Claire-Falcon-7B-0.1 outperforms its adapted Mistral counterpart in the fluency and relevance categories."
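As a rough illustration of that comparison protocol, a sketch like the one below would collect one completion per candidate model for the same conversational prompt, leaving the scoring on interaction, fluency and relevance to human judges. The Claire repository IDs and the prompt are assumptions, and loading four 7B models in one process is memory-heavy, so in practice each model would be run separately.

```python
# Illustrative only: one completion per model for the same dialogue prompt.
# Repo IDs for the Claire variants are assumed, not confirmed.
from transformers import pipeline

candidates = [
    "tiiuae/falcon-7b",
    "mistralai/Mistral-7B-v0.1",
    "OpenLLM-France/Claire-7B-0.1",           # assumed ID (Falcon-based Claire)
    "OpenLLM-France/Claire-Mistral-7B-v0.1",  # assumed ID (Mistral-based Claire)
]
prompt = "[Intervenant 1:] Que pensez-vous du projet ?\n[Intervenant 2:]"

for model_id in candidates:
    generator = pipeline("text-generation", model=model_id, device_map="auto")
    out = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
    print(f"--- {model_id} ---\n{out[0]['generated_text']}\n")
    del generator  # free memory before loading the next 7B model
```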

With Ismail Harrando, Julie Hunter, Jérome Louradour, Michel-Marie Maudet, Virgile Renard and Guokan Shang,

and Christophe Cerisara, Pierre-Carl Langlais, Anastasia Stasenko and Pierre Colombo.

The OpenLLM-France community Discord server

#IA #NLP #TALN #LLM #opensource