Trivial: standardize single curly quotes (don't ask)

Sean Owen 2023-06-08 18:04:44 +00:00 committed by huggingface-web
parent d0aa7ea43d
commit 43925aa353

@@ -10,7 +10,7 @@ datasets:
# dolly-v2-12b Model Card
## Summary
-Databricks’ `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
+Databricks' `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
@@ -29,7 +29,7 @@ running inference for various GPU configurations.
## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
-[EleutherAI’s](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
+[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
## Usage
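The body of the Usage section is elided by this diff. For context, here is a minimal sketch of loading the model with the `transformers` pipeline; the `databricks/dolly-v2-12b` Hub ID follows from the card, while `torch_dtype`, `trust_remote_code`, and `device_map="auto"` (which requires `accelerate`) are assumptions about a typical setup rather than the card's verbatim snippet.

```python
import torch
from transformers import pipeline

# Load dolly-v2-12b as a text-generation pipeline. trust_remote_code=True is
# assumed here because the model repo ships a custom pipeline implementation;
# device_map="auto" lets accelerate place the 12B-parameter weights across
# available GPUs, and bfloat16 roughly halves memory versus float32.
generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Give the model an instruction and print its response. The list-of-dicts
# return shape with a "generated_text" key is the usual pipeline output.
res = generate_text("Explain the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```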
@@ -139,7 +139,7 @@ Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as w
### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
-- **The Pile**: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
+- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.