Update README.md
parent ecf34d3096
commit ede10a3053
@@ -7,6 +7,7 @@ tags:
For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3
In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
@@ -183,7 +184,7 @@ The small-batch strategy is simple but effective, and can also be used to fine-tune
- MCLS: A simple method to improve the performance on long text without fine-tuning.
If you do not have enough resources to fine-tune the model with long text, this method is useful.
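The MCLS bullet above refers to a multiple-CLS trick: insert an extra CLS token at fixed intervals so each one summarizes a local segment of a long input, then average the embeddings at the CLS positions. A minimal sketch in plain Python, assuming hypothetical token-id lists and precomputed hidden states; the `interval` value and helper names are illustrative, not the repo's actual API:

```python
def insert_mcls_tokens(token_ids, cls_id, interval=256):
    """Insert a CLS token every `interval` tokens (MCLS sketch).

    Each CLS token can then attend to a local segment of the long
    input, with no fine-tuning of the model required.
    """
    out = []
    for i, tid in enumerate(token_ids):
        if i % interval == 0:
            out.append(cls_id)
        out.append(tid)
    return out


def mcls_pool(hidden_states, token_ids, cls_id):
    """Average the hidden states at CLS positions into one embedding."""
    cls_vecs = [h for h, t in zip(hidden_states, token_ids) if t == cls_id]
    dim = len(cls_vecs[0])
    return [sum(v[d] for v in cls_vecs) / len(cls_vecs) for d in range(dim)]
```

With `interval=4` on a 10-token input, three CLS tokens are inserted (at positions 0, 4, and 8 of the original sequence), and pooling averages the three CLS vectors.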
- Refer to our [report]() for more details.
+ Refer to our [report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) for more details.
**The fine-tuning codes and datasets will be open-sourced in the near future.**