Several small spelling errors in README (#48)
- Several small spelling errors in README (2f3b461246cd6b756de7c85150aabebbb2c36905)
Co-authored-by: Joel Wigton <joelwigton@users.noreply.huggingface.co>
parent 1a310852cf
commit 44eb404449
@@ -104,14 +104,14 @@ The project aims to train sentence embedding models on very large sentence level
 contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
 
-We developped this model during the
+We developed this model during the
 [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
-organized by Hugging Face. We developped this model as part of the project:
+organized by Hugging Face. We developed this model as part of the project:
 [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
 
 ## Intended uses
 
-Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures
+Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
 the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
 
 By default, input text longer than 256 word pieces is truncated.
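The "Intended uses" paragraph in the hunk above describes the model as a sentence and short-paragraph encoder that maps input text to a single vector. As a minimal illustration (not part of this commit), encoding a few sentences with the sentence-transformers library might look like the sketch below; the model id is a placeholder for whichever checkpoint this README accompanies.

```python
# Minimal usage sketch, not part of this commit. "model-id-placeholder" stands
# in for the checkpoint this README describes.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("model-id-placeholder")

sentences = [
    "How do I encode a short paragraph?",
    "Sentence vectors can be compared for similarity.",
]

# encode() returns one embedding per input; inputs longer than 256 word pieces
# are truncated by default, as the README notes.
embeddings = model.encode(sentences)
print(embeddings.shape)
```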
@@ -130,7 +130,7 @@ We then apply the cross entropy loss by comparing with true pairs.
 
 #### Hyper parameters
 
-We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
+We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
 We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
 a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
 
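The first hunk describes the training objective (given one sentence of a pair, pick its true partner out of the other sampled sentences), and this hunk's context line confirms a cross entropy loss over those comparisons. A rough PyTorch sketch of that in-batch objective follows; it is illustrative only, the similarity scale factor is an assumption, and the authoritative code is `train_script.py`.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              pair_emb: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    """Cross entropy over in-batch similarities; true pairs lie on the diagonal."""
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    pair_emb = F.normalize(pair_emb, dim=-1)
    # Score every anchor against every candidate pair in the batch.
    scores = scale * anchor_emb @ pair_emb.t()                # [batch, batch]
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```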
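The hyper parameters listed in the hunk above (AdamW, 2e-5 learning rate, 500 warm-up steps, 100k training steps, batch size 1024, 128-token sequences) could be wired up roughly as follows. The linear warm-up shape and the stand-in encoder are assumptions; `train_script.py` in the repository remains the reference.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

encoder = torch.nn.Linear(384, 384)   # stand-in for the sentence encoder

total_steps = 100_000   # 100k training steps
warmup_steps = 500      # learning rate warm up
batch_size = 1024       # 128 per core on a TPU v3-8
max_seq_length = 128    # token limit during training

optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)
# Linear warm-up to the peak rate, then constant (assumed schedule shape).
scheduler = LambdaLR(optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))
```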