* Add model 2023-04-13-CyberbullyingDetection_ClassifierDL_tfhub_en (#13757)
  Co-authored-by: Naveen-004 <chinna.nk4@gmail.com>
* 2023-04-20-distilbert_base_uncased_mnli_en (#13761)
  * Add model 2023-04-20-distilbert_base_uncased_mnli_en
  * Add model 2023-04-20-distilbert_base_turkish_cased_allnli_tr
  * Add model 2023-04-20-distilbert_base_turkish_cased_snli_tr
  * Add model 2023-04-20-distilbert_base_turkish_cased_multinli_tr
  * Update and rename 2023-04-20-distilbert_base_turkish_cased_allnli_tr.md to 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_allnli_tr.md
  * Update and rename 2023-04-20-distilbert_base_turkish_cased_multinli_tr.md to 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_multinli_tr.md
  * Update and rename 2023-04-20-distilbert_base_turkish_cased_snli_tr.md to 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_snli_tr.md
  * Update and rename 2023-04-20-distilbert_base_uncased_mnli_en.md to distilbert_base_zero_shot_classifier_turkish_cased_snli
  * Rename distilbert_base_zero_shot_classifier_turkish_cased_snli to distilbert_base_zero_shot_classifier_turkish_cased_snli_en.md
  * Update 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_snli_tr.md
  * Update 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_multinli_tr.md
  * Update 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_allnli_tr.md
  Co-authored-by: ahmedlone127 <ahmedlone127@gmail.com>
* 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_multinli_tr (#13763)
  * Add model 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_multinli_tr
  * Add model 2023-04-20-distilbert_base_zero_shot_classifier_uncased_mnli_en
  * Add model 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_snli_tr
  * Add model 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_allnli_tr
  * Update 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_multinli_tr.md
  * Update 2023-04-20-distilbert_base_zero_shot_classifier_turkish_cased_snli_tr.md
  Co-authored-by: ahmedlone127 <ahmedlone127@gmail.com>
* 2023-05-04-roberta_base_zero_shot_classifier_nli_en (#13781)
  * Add model 2023-05-04-roberta_base_zero_shot_classifier_nli_en
  * Fix Spark version to 3.0
  Co-authored-by: ahmedlone127 <ahmedlone127@gmail.com>
  Co-authored-by: Maziyar Panahi <maziyar.panahi@iscpif.fr>
* 2023-05-09-distilbart_xsum_6_6_en (#13788)
  * Add model 2023-05-09-distilbart_xsum_6_6_en
  * Add model 2023-05-09-distilbart_xsum_12_6_en
  * Add model 2023-05-09-distilbart_cnn_12_6_en
  * Add model 2023-05-09-distilbart_cnn_6_6_en
  * Add model 2023-05-09-bart_large_cnn_en
  * Update 2023-05-09-bart_large_cnn_en.md
  * Update 2023-05-09-distilbart_cnn_12_6_en.md
  * Update 2023-05-09-distilbart_cnn_6_6_en.md
  * Update 2023-05-09-distilbart_xsum_12_6_en.md
  * Update 2023-05-09-distilbart_xsum_6_6_en.md
  Co-authored-by: prabod <prabod@rathnayaka.me>
  Co-authored-by: Maziyar Panahi <maziyar.panahi@iscpif.fr>

Co-authored-by: jsl-models <74001263+jsl-models@users.noreply.github.com>
Co-authored-by: Naveen-004 <chinna.nk4@gmail.com>
Co-authored-by: ahmedlone127 <ahmedlone127@gmail.com>
Co-authored-by: prabod <prabod@rathnayaka.me>
1 parent 4569b1d · commit b262eed · 6 changed files with 533 additions and 0 deletions.
105 changes: 105 additions & 0 deletions

docs/_posts/ahmedlone127/2023-05-04-roberta_base_zero_shot_classifier_nli_en.md
---
layout: model
title: RoBERTa Zero-Shot Classification Base roberta_base_zero_shot_classifier_nli
author: John Snow Labs
name: roberta_base_zero_shot_classifier_nli
date: 2023-05-04
tags: [en, open_source, tensorflow]
task: Zero-Shot Classification
language: en
edition: Spark NLP 4.4.2
spark_version: 3.0
supported: true
engine: tensorflow
annotator: RoBertaForZeroShotClassification
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---
## Description

This model is intended for zero-shot text classification, especially in English. It was fine-tuned on NLI (natural language inference) data starting from the RoBERTa Base model.

RoBertaForZeroShotClassification uses a ModelForSequenceClassification trained on NLI tasks. It is equivalent to RoBertaForSequenceClassification models, except that it does not require a hardcoded number of potential classes: the candidate labels can be chosen at runtime. This usually makes inference slower, but much more flexible.

We used TFRobertaForSequenceClassification to train this model and used the RoBertaForZeroShotClassification annotator in Spark NLP 🚀 for prediction at scale!
## Predicted Entities

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_zero_shot_classifier_nli_en_4.4.2_3.0_1683228241365.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_zero_shot_classifier_nli_en_4.4.2_3.0_1683228241365.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
## How to use

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForZeroShotClassification
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# Candidate labels are supplied at runtime; no retraining is needed to change them.
zeroShotClassifier = RoBertaForZeroShotClassification \
    .pretrained('roberta_base_zero_shot_classifier_nli', 'en') \
    .setInputCols(['token', 'document']) \
    .setOutputCol('class') \
    .setCaseSensitive(True) \
    .setMaxSentenceLength(512) \
    .setCandidateLabels(["urgent", "mobile", "travel", "movie", "music", "sport", "weather", "technology"])

pipeline = Pipeline(stages=[
    document_assembler,
    tokenizer,
    zeroShotClassifier
])

example = spark.createDataFrame([['I have a problem with my iphone that needs to be resolved asap!!']]).toDF("text")
result = pipeline.fit(example).transform(example)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val zeroShotClassifier = RoBertaForZeroShotClassification.pretrained("roberta_base_zero_shot_classifier_nli", "en")
  .setInputCols("document", "token")
  .setOutputCol("class")
  .setCaseSensitive(true)
  .setMaxSentenceLength(512)
  .setCandidateLabels(Array("urgent", "mobile", "travel", "movie", "music", "sport", "weather", "technology"))

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, zeroShotClassifier))
val example = Seq("I have a problem with my iphone that needs to be resolved asap!!").toDS.toDF("text")
val result = pipeline.fit(example).transform(example)
```
</div>
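
To inspect the predictions, the chosen label for each row can be read from the `class` output column. A minimal sketch; `result` is the DataFrame produced above, and `.result` is the standard Spark NLP annotation payload field:

```python
# Show the input text next to the label picked from the candidate list
result.select("text", "class.result").show(truncate=False)
```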
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|roberta_base_zero_shot_classifier_nli|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[token, document]|
|Output Labels:|[multi_class]|
|Language:|en|
|Size:|466.4 MB|
|Case sensitive:|true|
2023-05-09-bart_large_cnn_en.md
---
layout: model
title: BART (large-sized model), fine-tuned on CNN Daily Mail
author: John Snow Labs
name: bart_large_cnn
date: 2023-05-09
tags: [bart, summarization, cnn, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---
## Description

BART model pre-trained on the English language, and fine-tuned on [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

### Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.
## Predicted Entities

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.0_1683645394389.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.0_1683645394389.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
## How to use

You can use this model for text summarization.

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.annotator import BartTransformer

# Expects a "documents" column produced by an upstream DocumentAssembler
bart = BartTransformer.pretrained("bart_large_cnn") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")
```
```scala
import com.johnsnowlabs.nlp.annotator._

// Expects a "documents" column produced by an upstream DocumentAssembler
val bart = BartTransformer.pretrained("bart_large_cnn")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>
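
The snippet above only declares the annotator. As a fuller illustration, here is a minimal end-to-end sketch: it assumes a Spark NLP session started via `sparknlp.start()` and uses an illustrative input text; the column names mirror the snippet above.

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import BartTransformer
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Turn the raw "text" column into the "documents" column BartTransformer expects
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

bart = BartTransformer.pretrained("bart_large_cnn") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")

pipeline = Pipeline(stages=[document_assembler, bart])

data = spark.createDataFrame(
    [["PG&E stated it scheduled the blackouts in response to forecasts for high winds "
      "amid dry conditions. The aim is to reduce the risk of wildfires."]]
).toDF("text")

# The generated summary lives in the annotation's result field
pipeline.fit(data).transform(data).select("summaries.result").show(truncate=False)
```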
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bart_large_cnn|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|975.3 MB|
2023-05-09-distilbart_cnn_12_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART CNN
author: John Snow Labs
name: distilbart_cnn_12_6
date: 2023-05-09
tags: [bart, summarization, cnn, distill, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---
## Description

The Facebook BART (Bidirectional and Auto-Regressive Transformer) model is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension". It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the CNN/DailyMail summarization dataset.
## Predicted Entities

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.0_1683644937231.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.0_1683644937231.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
## How to use

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.annotator import BartTransformer

# Expects a "documents" column produced by an upstream DocumentAssembler
bart = BartTransformer.pretrained("distilbart_cnn_12_6") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")
```
```scala
import com.johnsnowlabs.nlp.annotator._

// Expects a "documents" column produced by an upstream DocumentAssembler
val bart = BartTransformer.pretrained("distilbart_cnn_12_6")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>
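
Beyond `setTask` and `setMaxOutputLength`, the BartTransformer annotator exposes further generation controls. The parameter names below are assumptions based on the Spark NLP seq2seq annotator family rather than taken from this model card, so verify them against your Spark NLP version's API reference:

```python
from sparknlp.annotator import BartTransformer

# Sketch of optional decoding controls; parameter names assumed, see note above
bart = BartTransformer.pretrained("distilbart_cnn_12_6") \
    .setTask("summarize:") \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries") \
    .setMinOutputLength(30) \
    .setMaxOutputLength(200) \
    .setBeamSize(4)  # beam search width; larger beams improve quality but cost time
```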
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_cnn_12_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|870.4 MB|

## Benchmarking

Speedup is measured against the corresponding bart-large baseline: for example, bart-large-cnn takes 381 ms versus 307 ms for distilbart-12-6-cnn, i.e. 381 / 307 ≈ 1.24x.

```bash
### Metrics for DistilBART models
| Model Name                 |   MM Params |   Inference Time (MS) |   Speedup |   Rouge 2 |   Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1       |         222 |                    90 |      2.54 |     18.31 |     33.37 |
| distilbart-xsum-6-6        |         230 |                   132 |      1.73 |     20.92 |     35.73 |
| distilbart-xsum-12-3       |         255 |                   106 |      2.16 |     21.37 |     36.39 |
| distilbart-xsum-9-6        |         268 |                   136 |      1.68 |     21.72 |     36.61 |
| bart-large-xsum (baseline) |         406 |                   229 |      1    |     21.85 |     36.50 |
| distilbart-xsum-12-6       |         306 |                   137 |      1.68 |     22.12 |     36.99 |
| bart-large-cnn (baseline)  |         406 |                   381 |      1    |     21.06 |     30.63 |
| distilbart-12-3-cnn        |         255 |                   214 |      1.78 |     20.57 |     30.00 |
| distilbart-12-6-cnn        |         306 |                   307 |      1.24 |     21.26 |     30.59 |
| distilbart-6-6-cnn         |         230 |                   182 |      2.09 |     20.17 |     29.70 |
```
2023-05-09-distilbart_cnn_6_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART CNN
author: John Snow Labs
name: distilbart_cnn_6_6
date: 2023-05-09
tags: [bart, summarization, cnn, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---
## Description

The Facebook BART (Bidirectional and Auto-Regressive Transformer) model is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension". It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the CNN/DailyMail summarization dataset.
## Predicted Entities

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.0_1683645206157.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.0_1683645206157.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
## How to use

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.annotator import BartTransformer

# Expects a "documents" column produced by an upstream DocumentAssembler
bart = BartTransformer.pretrained("distilbart_cnn_6_6") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")
```
```scala
import com.johnsnowlabs.nlp.annotator._

// Expects a "documents" column produced by an upstream DocumentAssembler
val bart = BartTransformer.pretrained("distilbart_cnn_6_6")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>
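
For ad-hoc experimentation on single documents, Spark NLP's `LightPipeline` avoids the DataFrame round-trip. A minimal sketch, assuming an active `spark` session and reusing the annotator from the snippet above; the input string is a placeholder:

```python
from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import BartTransformer
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

bart = BartTransformer.pretrained("distilbart_cnn_6_6") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")

# Fit on an empty DataFrame: the pipeline has no trainable stages
empty_df = spark.createDataFrame([[""]]).toDF("text")
model = Pipeline(stages=[document_assembler, bart]).fit(empty_df)

# annotate() takes a plain string and returns {output_col: [values]}
light = LightPipeline(model)
print(light.annotate("Put the long article text to be summarized here ...")["summaries"])
```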
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_cnn_6_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|551.9 MB|

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 |   MM Params |   Inference Time (MS) |   Speedup |   Rouge 2 |   Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1       |         222 |                    90 |      2.54 |     18.31 |     33.37 |
| distilbart-xsum-6-6        |         230 |                   132 |      1.73 |     20.92 |     35.73 |
| distilbart-xsum-12-3       |         255 |                   106 |      2.16 |     21.37 |     36.39 |
| distilbart-xsum-9-6        |         268 |                   136 |      1.68 |     21.72 |     36.61 |
| bart-large-xsum (baseline) |         406 |                   229 |      1    |     21.85 |     36.50 |
| distilbart-xsum-12-6       |         306 |                   137 |      1.68 |     22.12 |     36.99 |
| bart-large-cnn (baseline)  |         406 |                   381 |      1    |     21.06 |     30.63 |
| distilbart-12-3-cnn        |         255 |                   214 |      1.78 |     20.57 |     30.00 |
| distilbart-12-6-cnn        |         306 |                   307 |      1.24 |     21.26 |     30.59 |
| distilbart-6-6-cnn         |         230 |                   182 |      2.09 |     20.17 |     29.70 |
```