
Dialog Summarization Using Large Language Models

This repository contains notebooks on Large Language Models (LLMs), covering prompt engineering, fine-tuning, and techniques such as PEFT (Parameter-Efficient Fine-Tuning) and PPO (Proximal Policy Optimization).

1. Summarize_dialogue_LLM.ipynb:

Explore dialogue summarization with generative AI and discover how the input text influences the model's output. Learn prompt engineering techniques to tailor the model's output to a specific task, and compare zero-shot, one-shot, and few-shot inference to improve the model's generative capabilities.
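The difference between these inference styles comes down to how the prompt is constructed. Below is a minimal sketch contrasting zero-shot and one-shot prompting with FLAN-T5 via the Hugging Face transformers library; the model checkpoint, prompt template, and example dialogue are illustrative, and the notebook's exact setup (e.g., the DialogSum dataset) may differ.

```python
# Minimal sketch: zero-shot vs. one-shot dialogue summarization with FLAN-T5.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = (
    "#Person1#: Can we reschedule the meeting?\n"
    "#Person2#: Sure, how about Friday?"
)

# Zero-shot: instruction only, no solved examples.
zero_shot = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

# One-shot: prepend one solved example before the target dialogue.
solved_example = (
    "Summarize the following conversation.\n\n"
    "#Person1#: The printer is out of ink.\n"
    "#Person2#: I'll order more today.\n\n"
    "Summary: Person2 agrees to order more printer ink.\n\n"
)
one_shot = solved_example + zero_shot

for prompt in (zero_shot, one_shot):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Few-shot inference extends the same idea by prepending several solved examples instead of one.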

2. Fine_tune_LLM.ipynb:

Dive into the fine-tuning process for improving dialogue summarization using FLAN-T5, a high-quality instruction-tuned model that already summarizes text effectively out of the box.
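As a reference point, here is a minimal sketch of parameter-efficient fine-tuning (LoRA via the peft library) of FLAN-T5 on a dialogue summarization dataset. The dataset, prompt template, and hyperparameters are assumptions for illustration, not necessarily the notebook's exact setup.

```python
# Minimal sketch: LoRA fine-tuning of FLAN-T5 for dialogue summarization.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "google/flan-t5-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Wrap the base model with small trainable LoRA adapters;
# the original weights stay frozen.
lora_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM,
                         r=16, lora_alpha=32, lora_dropout=0.05)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

dataset = load_dataset("knkarthick/dialogsum")  # assumed dataset

def preprocess(batch):
    prompts = ["Summarize the following conversation.\n\n"
               + d + "\n\nSummary:" for d in batch["dialogue"]]
    inputs = tokenizer(prompts, max_length=512,
                       truncation=True, padding="max_length")
    labels = tokenizer(batch["summary"], max_length=128,
                       truncation=True, padding="max_length")
    # For simplicity, pad tokens are left in the labels; masking them
    # with -100 is common practice to exclude them from the loss.
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="peft-dialogue-summary",
                           learning_rate=1e-3, num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
)
trainer.train()
```

Because only the adapter weights are trained, PEFT makes fine-tuning feasible on a single modest GPU while leaving the base model untouched.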

3. Fine_tune_model_to_detoxify_summaries.ipynb:

Learn how to fine-tune a FLAN-T5 model to generate less toxic content, using Meta AI's hate-speech reward model. Experiment with Proximal Policy Optimization (PPO) to reduce the model's toxicity and produce safer summaries.
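A minimal sketch of one PPO step with the trl library is shown below. The reward-model checkpoint and its label names, the reward formulation, and the PPO hyperparameters are assumptions for illustration; the trl API shown matches the 0.x releases contemporary with the course, and the notebook's details may differ.

```python
# Minimal sketch: one PPO update that rewards less toxic summaries.
import torch
from transformers import pipeline, AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead, PPOConfig, PPOTrainer

model_name = "google/flan-t5-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Policy model with a value head, plus a frozen reference copy
# used for the KL penalty that keeps the policy from drifting.
policy = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)
ref_policy = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)

# Meta AI hate-speech classifier used as the reward model (assumed checkpoint).
toxicity_pipe = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target")

ppo_trainer = PPOTrainer(config=PPOConfig(batch_size=1, mini_batch_size=1),
                         model=policy, ref_model=ref_policy,
                         tokenizer=tokenizer)

prompt = ("Summarize the following conversation.\n\n"
          "#Person1#: Can we reschedule the meeting?\n"
          "#Person2#: Sure, how about Friday?\n\nSummary:")
query = tokenizer(prompt, return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=50)[0]
summary = tokenizer.decode(response, skip_special_tokens=True)

# Reward: probability of the "nothate" class (assumed label name),
# so less toxic summaries earn higher rewards.
scores = toxicity_pipe(summary, top_k=None)
reward = next(s["score"] for s in scores if s["label"] == "nothate")

# One PPO optimization step over this (query, response, reward) triple.
ppo_trainer.step([query], [response], [torch.tensor(reward)])
```

In practice this loop runs over many batches of dialogues, and toxicity is evaluated before and after training to quantify the improvement.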

Reference

  1. Generative AI with Large Language Models, by DeepLearning.AI and AWS (Coursera course). The materials in this repository were used for studying and experimenting with Large Language Models (LLMs) while following this course; for in-depth insights, it is recommended to enroll in the course.