From 43925aa35365e40ac579bad9eeeb98b2c215bc6c Mon Sep 17 00:00:00 2001
From: Sean Owen
Date: Thu, 8 Jun 2023 18:04:44 +0000
Subject: [PATCH] Trivial: standardize single curly quotes (don't ask)

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 8c4d644..46be5c5 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ datasets:
 # dolly-v2-12b Model Card
 ## Summary
 
-Databricks’ `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
+Databricks' `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
 that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine tuning records
 [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability
 domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
@@ -29,7 +29,7 @@ running inference for various GPU configurations.
 ## Model Overview
 `dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
-[EleutherAI’s](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
+[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
 on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees
 and released under a permissive license (CC-BY-SA)
 
 ## Usage
@@ -139,7 +139,7 @@ Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as w
 ### Dataset Limitations
 Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
 
-- **The Pile**: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
+- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
 it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially
 overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or
 harmful implicit associations.
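For reference, the hunks above make one mechanical substitution by hand: the curly right single quote (U+2019) becomes a plain ASCII apostrophe. A minimal sketch of how the same normalization could be scripted is below, assuming Python 3 and that only U+2019 needs replacing, as in this diff; `normalize_quotes` is a hypothetical helper and not part of the patch.

```python
# Hypothetical helper, not part of the patch: replace the curly right single
# quote (U+2019) in README.md with a plain ASCII apostrophe, which is the
# same substitution the diff above applies by hand.
from pathlib import Path


def normalize_quotes(path: str = "README.md") -> None:
    p = Path(path)
    text = p.read_text(encoding="utf-8")
    cleaned = text.replace("\u2019", "'")
    if cleaned != text:
        p.write_text(cleaned, encoding="utf-8")


if __name__ == "__main__":
    normalize_quotes()
```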