# Dataset

This directory points to the training data we use to train our various models.
| Dataset | Description |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Multilingual long-document retrieval dataset covering 13 languages |
| [bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data) | Fine-tuning data used by [bge-m3](https://huggingface.co/BAAI/bge-m3) |
| [public-data](https://huggingface.co/datasets/cfli/bge-e5data) | Public data identical to that used by [e5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct) |
| [full-data](https://huggingface.co/datasets/cfli/bge-full-data) | The full dataset we used for training [bge-en-icl](https://huggingface.co/BAAI/bge-en-icl) |
| [bge-multilingual-gemma2-data](https://huggingface.co/datasets/hanhainebula/bge-multilingual-gemma2-data) | The full multilingual dataset we used for training [bge-multilingual-gemma2](https://huggingface.co/BAAI/bge-multilingual-gemma2) |
| [reranker-data](https://huggingface.co/datasets/Shitao/bge-reranker-data) | A mixture of multilingual datasets used to train our rerankers |
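
All of the datasets above are hosted on the Hugging Face Hub, so they can be pulled directly with the `datasets` library. Below is a minimal sketch, assuming MLDR exposes one configuration per language (e.g. `en`) with a `train` split; check each dataset card for the exact configurations and splits it provides.

```python
from datasets import load_dataset

# Assumption: MLDR has per-language configurations (e.g. "en") and a
# "train" split; verify on the dataset card before relying on this.
mldr = load_dataset("Shitao/MLDR", "en", split="train")

# Inspect one example to see the available fields.
print(mldr[0])
```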