GenLM Inference Wrapper

Library to run inference using Generative Language Models

This repository is a wrapper around the HuggingFace Transformers library. It provides a simple interface to run inference with any generative language model on any task provided as a textual description to the model.

Installation

  1. Clone the repository
git clone https://github.com/humanlab/GenLM-Inference-Wrapper.git
cd GenLM-Inference-Wrapper
  2. Install the requirements through pip
pip install '.[all]'

You can also create a conda environment and install the requirements within it.

conda create -n genlm python=3.8
conda activate genlm
python setup.py install

Usage

You can use this library in two ways:

1. Using it with a MySQL database

python mysql_interface -d db_name -t message_table -i 'Provide Instructions here' \
                        --output_table output_table_name \
                        --model_path '/path/to/model/or/hf_model_name' 

2. Using the methods inside the src folder to run inference within your own script

from src import PromptTemplator, GenLMInferenceWrapper

templater = PromptTemplator()
model = GenLMInferenceWrapper(model_checkpoint=model_path)

instruction = """Read the text thoroughly and classify the emotion of the text as one of the following: anger, fear, joy, and sadness."""
task_data = ['Words would fail to describe the feeling of being able to see the Taj Mahal for the first time. It was a surreal experience.', 
            'I don\'t know what to do. This is so frustrating that I want to break my phone to pieces.']
input_prompt = templater(input_text=task_data, instruction=instruction)
prediction_data = model.generate_outputs(input_data=input_prompt)

Note: You can override the implementation of the PromptTemplator to customize the prompts.
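The override pattern above can be sketched as follows. Since the repository's PromptTemplator interface is not fully documented here, the base class below is a hypothetical stand-in that only mirrors the call pattern shown in the usage example; the real class in src may differ.

```python
# Hypothetical stand-in for the repository's PromptTemplator; it only
# mirrors the templater(input_text=..., instruction=...) call pattern
# shown in the usage example above.
class PromptTemplator:
    def __call__(self, input_text, instruction):
        # Default template: instruction followed by the text to classify.
        return [f"{instruction}\nText: {text}\nAnswer:" for text in input_text]

# Override __call__ to customize the prompt layout, e.g. an
# instruction-tuning style template.
class ChatStylePromptTemplator(PromptTemplator):
    def __call__(self, input_text, instruction):
        return [
            f"### Instruction:\n{instruction}\n\n### Input:\n{text}\n\n### Response:"
            for text in input_text
        ]

templater = ChatStylePromptTemplator()
prompts = templater(input_text=["I am thrilled!"],
                    instruction="Classify the emotion.")
```

A subclass like this can then be passed wherever the default templater is used, keeping the rest of the inference code unchanged.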

Additional Information

Using Socialite Llama Model

Socialite Llama is an instruction-tuned version of llama2-7b trained on a collection of 20 social scientific tasks covering 5 broad domains: Emotion/Sentiment, Offensiveness, Trustworthy, Humor, and Other Social Factors. Socialite Llama performs better than the base llama model on 18 of the 20 seen tasks. The specific tasks and their instructions are available under src/assets/socialite_llama_tasks.json. These instructions can be used to run inference with the Socialite Llama model.

python mysql_interface -d db_name -t message_table -i emotion_4_class \
                        --output_table 'pred$socialite_emotion_4_class$message_table$message_id' \
                        --model_path '/path/to/socialite_llama/' 
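The task instructions file can be inspected with the standard json module, e.g. to look up the instruction behind a task name like emotion_4_class from the command above. The file layout assumed below (a mapping from task name to instruction text) is a guess for illustration; the sample file is created inline so the snippet is self-contained.

```python
import json
from pathlib import Path

# Assumed layout of src/assets/socialite_llama_tasks.json: a mapping from
# task name to its instruction text (the real file may be structured
# differently). A small sample is written here so the snippet runs on its own.
sample = {
    "emotion_4_class": (
        "Read the text thoroughly and classify the emotion of the text as "
        "one of the following: anger, fear, joy, and sadness."
    )
}
path = Path("socialite_llama_tasks.json")  # stand-in for src/assets/...
path.write_text(json.dumps(sample, indent=2))

# Load the file and look up a task's instruction by name.
tasks = json.loads(path.read_text())
instruction = tasks["emotion_4_class"]
```

The looked-up instruction string can then be passed as the -i argument to mysql_interface or used directly with PromptTemplator in a script.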

CITE US

If you use this library, please cite us:

@misc{gen_lm_wrapper,
  author = {Gourab Dey and Adithya V Ganesan and Yash Kumar Lal and Salvatore Giorgi and Vivek Kulkarni and H. Andrew Schwartz},
  title = {GenLM-Inference-Wrapper},
  year = {2023},
  publisher = {GitHub}
}
