Paper: *FuseChat: Knowledge Fusion of Chat Models* (arXiv:2408.07990)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method, with PocketDoc/Dans-PersonalityEngine-V1.1.0-12b + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as the base.
The following models were included in the merge:
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- Elizezen/Himeyuri-v0.1-12B
- inflatebot/MN-12B-Mag-Mell-R1
- NeverSleep/Lumimaid-v0.2-12B
- cyberagent/Mistral-Nemo-Japanese-Instruct-2408
The following YAML configuration was used to produce this model:
```yaml
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
models:
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - model: Elizezen/Himeyuri-v0.1-12B
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: cyberagent/Mistral-Nemo-Japanese-Instruct-2408
merge_method: sce
dtype: bfloat16
parameters:
  normalize: true
  select_topk: 0.5
```
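The SCE method (Select, Calculate, Erase, introduced in the FuseChat paper) works on task vectors, i.e. each source model's delta from the base. It keeps only the highest-variance fraction of elements (controlled by `select_topk`), weights each source model by the squared magnitude of what survives, and erases elements whose sign conflicts with the weighted consensus before fusing into the base. A rough NumPy sketch of that idea for a single weight tensor (the function name and simplifications are mine, not mergekit's actual implementation):

```python
import numpy as np

def sce_merge(base, models, select_topk=0.5):
    """Sketch of SCE (Select-Calculate-Erase) fusion for one tensor.

    base: base weight tensor; models: fine-tuned tensors of the same shape.
    """
    # Task vectors: each source model's delta from the base weights.
    deltas = np.stack([m - base for m in models])  # shape (n_models, ...)

    # Select: keep only the top-k fraction of elements by variance
    # across source models; zero out the low-variance rest.
    var = deltas.var(axis=0)
    k = max(1, int(select_topk * var.size))
    thresh = np.sort(var.ravel())[-k]
    deltas = deltas * (var >= thresh)

    # Calculate: per-model fusion coefficients from the squared
    # magnitude of each model's surviving delta elements.
    sq = (deltas ** 2).reshape(len(models), -1).sum(axis=1)
    coeff = sq / sq.sum()

    # Erase: drop elements whose sign disagrees with the sign of the
    # coefficient-weighted consensus delta.
    consensus = np.einsum("i,i...->...", coeff, deltas)
    deltas = deltas * (np.sign(deltas) == np.sign(consensus))

    # Fuse the surviving deltas back into the base weights.
    return base + np.einsum("i,i...->...", coeff, deltas)
```

With `select_topk: 0.5`, only the half of each tensor's elements where the source models disagree most is fused; the rest stays at the base model's values.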