Hi there! Please star this repo if it helps you. Each star helps PyABSA go further, many thanks!
| Overview | HuggingfaceHub | ABSADatasets | ABSA Models | Colab Tutorials |
- Aspect-based sentiment classification (Multilingual: English, Chinese, etc.)
- Aspect term extraction & sentiment classification (English, Chinese, Arabic, Dutch, French, Russian, Spanish, Turkish, etc.)
```python
import requests

# Demo: aspect-based sentiment classification (APC). Aspect terms are wrapped
# in [ASP] tags, and the reference labels follow the !sent! marker.
r = requests.post(url='https://hf.space/embed/yangheng/PyABSA-APC/+/api/predict/',
                  json={"data": ["I have had my [ASP]computer[ASP] for 2 weeks already and it [ASP]works[ASP] perfectly . !sent! Positive, Positive"]})
r.json()
```
```python
import requests

# Demo: aspect term extraction & polarity classification (ATEPC).
r = requests.post(
    url='https://hf.space/embed/yangheng/PyABSA-ATEPC/+/api/predict/',
    json={"data": ['The wine list is incredible and extensive and diverse , '
                   'the food is all incredible and the staff was all very nice , '
                   'good at their jobs and cultured .']})
r.json()
```
```python
import requests

# Demo: Chinese ATEPC.
r = requests.post(url='https://hf.space/embed/yangheng/PyABSA-ATEPC-Chinese/+/api/predict/',
                  json={"data": ["这款手机真的很薄,但是颜色不太好看,总体上我很满意啦。"]})
r.json()
```
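These endpoints return JSON. A minimal sketch of reading a prediction, assuming the standard Hugging Face Spaces `/api/predict/` response layout with a top-level `data` field:

```python
import requests

# A minimal sketch; assumes the standard Hugging Face Spaces /api/predict/
# response layout, where predictions are returned under the "data" key.
r = requests.post(url='https://hf.space/embed/yangheng/PyABSA-APC/+/api/predict/',
                  json={"data": ["I have had my [ASP]computer[ASP] for 2 weeks already and it [ASP]works[ASP] perfectly . !sent! Positive, Positive"]})
print(r.json().get("data"))  # the prediction payload, if the request succeeded
```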
If you do not need the best APC or ATEPC models, you can easily try our pretrained models to save time!
To facilitate ABSA research and application, we trained a fast-lcf-bert model based on microsoft/deberta-v3-base with all the English datasets provided by ABSADatasets. The model is available at yangheng/deberta-v3-base-absa-v1.1. If your model is built on the transformers library, you can use **yangheng/deberta-v3-base-absa** to easily improve it, e.g.:
The yangheng/deberta-v3-base-absa-v1.1 and yangheng/deberta-v3-large-absa-v1.1 checkpoints are fine-tuned on the English datasets (30k+ examples) from ABSADatasets and include an output layer, so they can be used directly in the sentiment-analysis pipeline on the Hugging Face Hub.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
# model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-large-absa-v1.1")

# The aspect term ("manager") is appended to the sentence between [SEP] tokens.
inputs = tokenizer("[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP]", return_tensors="pt")
outputs = model(**inputs)
```
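For quick experiments, the v1.1 checkpoints can also be loaded through the Hugging Face text-classification pipeline. A minimal sketch, assuming the same sentence-plus-aspect input format as above:

```python
from transformers import pipeline

# A minimal sketch, assuming the "[CLS] sentence [SEP] aspect [SEP]" input
# convention shown in the example above.
classifier = pipeline("text-classification", model="yangheng/deberta-v3-base-absa-v1.1")
print(classifier("[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP]"))
```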
The yangheng/deberta-v3-base-absa and yangheng/deberta-v3-large-absa checkpoints are fine-tuned on the English datasets (180k+ examples, including the augmentation data) from ABSADatasets and have no output layer. They are more effective than the v1.1 checkpoints when used as backbone models.
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa")
model = AutoModel.from_pretrained("yangheng/deberta-v3-base-absa")
# model = AutoModel.from_pretrained("yangheng/deberta-v3-large-absa")

inputs = tokenizer("good product especially video and audio quality fantastic.", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state holds the contextual embeddings
```
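Since the backbone checkpoints return hidden states rather than sentiment logits, you need to add your own head on top. A minimal sketch of pooling the last hidden state into a sentence embedding; the mean pooling here is an illustrative choice, not prescribed by PyABSA:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa")
model = AutoModel.from_pretrained("yangheng/deberta-v3-base-absa")

inputs = tokenizer("good product especially video and audio quality fantastic.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector (illustrative).
embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
print(embedding.shape)
```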