From e2d7c0224ae473d051b0830f7d78e52e1e8eb87b Mon Sep 17 00:00:00 2001
From: Bleys
Date: Thu, 13 Jul 2023 03:14:57 +0000
Subject: [PATCH] Update README.md

---
 README.md | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index 20905ed..89610b6 100644
--- a/README.md
+++ b/README.md
@@ -45,9 +45,16 @@
 We are thrilled to announce the release of the Open Orca dataset!
 This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
 It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
 
+## Preview Model Release
+
+We have now released our first model preview!
+[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
+This model was trained in less than a day, for <$200, with <10% of our data.
+It beats current state-of-the-art models on BigBench-Hard and AGIEval, and achieves ~60% of the improvements reported in the Orca paper.
+
 
-Dataset Summary
+# Dataset Summary
 
 The Open Orca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
 Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
@@ -56,7 +63,7 @@
 The data is primarily used for training and evaluation in the field of natural language processing.
 
 
-Dataset Attribution
+# Dataset Attribution
 
 We would like to give special recognition to the following contributors for their significant efforts and dedication:
@@ -90,7 +97,7 @@
 Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atla
 
 
-Supported Tasks and Leaderboards
+# Supported Tasks and Leaderboards
 
 This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
 It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
@@ -98,24 +105,24 @@
 Further information on leaderboards will be updated as they become available.
 
 
-Languages
+# Languages
 
 The language of the data is primarily English.
 
 
-Dataset Structure
+# Dataset Structure
 
 
-Data Instances
+## Data Instances
 
 A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
 The response is then entered into the response field.
 
 
-Data Fields
+## Data Fields
 
 The fields are:
 1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
@@ -125,17 +132,17 @@
 
 
-Data Splits
+## Data Splits
 
 The data is unsplit.
 
 
-Dataset Creation
+# Dataset Creation
 
 
-Curation Rationale
+## Curation Rationale
 
 The dataset was created to provide a source of augmented text data for researchers and developers.
 The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
@@ -143,7 +150,7 @@
 This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed far worse on.
 
 
-Source Data
+## Source Data
 
 The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
@@ -157,24 +164,24 @@
 Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper.
 
 
-Dataset Use
+# Dataset Use
 
 
-Use Cases
+## Use Cases
 
 The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
 
 
-Usage Caveats
+## Usage Caveats
 
 Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
 Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
 
 
-Getting Started
+## Getting Started
 
 This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
 We recommend using streaming due to the large size of the files.
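A minimal sketch of the streaming load recommended in the "Getting Started" section above, in case a concrete snippet is wanted in a follow-up commit. The Hugging Face Hub id `Open-Orca/OpenOrca` is an assumption (the repo id is not stated in this patch), and only field names the card itself documents ('id', 'question', 'response') are used:

```python
# Sketch only, assuming the dataset lives at the hub id "Open-Orca/OpenOrca";
# substitute the actual repo id if it differs.
from itertools import islice

from datasets import load_dataset

# streaming=True returns an IterableDataset: records are fetched lazily,
# so the large underlying files are never downloaded in full.
dataset = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)

# Peek at a few augmented FLAN entries.
for record in islice(dataset, 3):
    # 'id' carries the source submix tag ('niv', 't0', 'cot', or 'flan');
    # 'question' is the FLAN prompt, 'response' the GPT-4 / GPT-3.5 completion.
    print(record["id"], record["question"][:80], "->", record["response"][:80])
```

A plain, non-streaming `load_dataset` call works as well, but it downloads everything up front, which is what the streaming recommendation above is meant to avoid.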