Commit 888ac0e
Parent(s): 3b261fa
Update README.md

README.md CHANGED
@@ -6,6 +6,9 @@ datasets:
 
 ## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
 
+📣 Curious about the performance of 🍮 🦙 **Flan-Alpaca** on the large-scale LLM evaluation benchmark **InstructEval**? Read our paper: [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs from various LLM families, including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Code and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)
+
+
 📣 **FLAN-T5** is also useful in text-to-audio generation. Find our work at [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango) if you are interested.
 
 Our [repository](https://github.com/declare-lab/flan-alpaca) contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)