Quora Question Pairs

We are tasked with predicting whether a pair of questions are duplicates or not.

Business Problem

Description

Quora is a place to gain and share knowledge—about anything. It’s a platform to ask questions and connect with people who contribute unique insights and quality answers. This empowers people to learn from each other and to better understand the world.

Over 100 million people visit Quora every month, so it's no surprise that many people ask similarly worded questions. Multiple questions with the same intent can cause seekers to spend more time finding the best answer to their question, and make writers feel they need to answer multiple versions of the same question. Quora values canonical questions because they provide a better experience to active seekers and writers, and offer more value to both of these groups in the long term.


> Credits: Kaggle

Problem Statement

  • Identify which questions asked on Quora are duplicates of questions that have already been asked.
  • This could be useful to instantly provide answers to questions that have already been answered.
  • We are tasked with predicting whether a pair of questions are duplicates or not.

Real world/Business Objectives and Constraints

  1. The cost of a mis-classification can be very high.
  2. We want the probability that a pair of questions are duplicates, so that we can choose any threshold of our choice.
  3. No strict latency concerns.
  4. Interpretability is partially important.

Machine Learning Problem

- Data will be in a file Train.csv
- Train.csv contains 6 columns: id, qid1, qid2, question1, question2, is_duplicate
- Size of Train.csv - 60MB
- Number of rows in Train.csv = 404,290

Example Data Points

"id","qid1","qid2","question1","question2","is_duplicate"
"0","1","2","What is the step by step guide to invest in share market in india?","What is the step by step guide to invest in share market?","0"
"1","3","4","What is the story of Kohinoor (Koh-i-Noor) Diamond?","What would happen if the Indian government stole the Kohinoor (Koh-i-Noor) diamond back?","0"
"7","15","16","How can I be a good geologist?","What should I do to be a great geologist?","1"
"11","23","24","How do I read and find my YouTube comments?","How can I see all my Youtube comments?","1"

Mapping the real world problem to an ML problem

It is a binary classification problem: for a given pair of questions, we need to predict whether they are duplicates or not.

Performance Metric

Source: https://www.kaggle.com/c/quora-question-pairs#evaluation

Metric(s):

  • Log-loss (this is the metric reported for all the models below).

Train and Test Construction

We build the train and test sets by randomly splitting the data in a 70:30 or 80:20 ratio (whichever we choose), since we have sufficient points to work with. A scikit-learn sketch of the split follows.
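A minimal sketch of that split, assuming scikit-learn and the df loaded above; stratifying on is_duplicate keeps the ~63/37 class ratio in both sets:

```python
from sklearn.model_selection import train_test_split

# 70:30 random split, stratified on the label so train and test keep the same class balance.
train_df, test_df = train_test_split(
    df, test_size=0.30, stratify=df["is_duplicate"], random_state=42
)
print(train_df.shape, test_df.shape)
```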

Our Approach

After doing EDA we found that (a pandas sketch of these checks follows the list):

  1. There are more 0's than 1's.
  2. The 1's make up 36.92% of the pairs, whereas the 0's make up 63.08%.
  3. There are 537,933 unique questions, of which 111,780 occurred more than once.
  4. Only one question appeared as many as 157 times (the maximum repetition count).
  5. There were no duplicate rows, but 2 questions had null values, so we replaced them with a space.
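The commands below are an assumed reconstruction of how these numbers can be checked with pandas (reusing the df loaded earlier):

```python
import pandas as pd

# Class balance: percentage of duplicates (1) vs non-duplicates (0).
print(df["is_duplicate"].value_counts(normalize=True) * 100)

# Unique questions and how often they repeat across both qid columns.
qid_counts = pd.concat([df["qid1"], df["qid2"]]).value_counts()
print("unique questions:", qid_counts.shape[0])
print("questions occurring more than once:", (qid_counts > 1).sum())
print("max occurrences of a single question:", qid_counts.max())

# Duplicate rows and null question text; nulls are replaced with a space.
print("duplicate rows:", df.duplicated().sum())
print("null values:", df.isnull().sum().sum())
df = df.fillna(" ")
```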

Basic Feature Extraction (before cleaning)

Let us now construct a few features like the following (a pandas sketch of these follows the list):

  • freq_qid1 = Frequency of qid1's
  • freq_qid2 = Frequency of qid2's
  • q1len = Length of q1
  • q2len = Length of q2
  • q1_n_words = Number of words in Question 1
  • q2_n_words = Number of words in Question 2
  • word_Common = (Number of common unique words in Question 1 and Question 2)
  • word_Total = (Total num of words in Question 1 + Total num of words in Question 2)
  • word_share = (word_Common)/(word_Total)
  • freq_q1+freq_q2 = sum total of frequency of qid1 and qid2
  • freq_q1-freq_q2 = absolute difference of frequency of qid1 and qid2
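A pandas sketch of these features, assuming the column names above; note that word_Common and word_Total are computed here over the unique lower-cased words of each question, which is an assumption about the exact counting rule:

```python
import pandas as pd

def add_basic_features(df: pd.DataFrame) -> pd.DataFrame:
    """Basic features computed on the raw (uncleaned) question text."""
    df["freq_qid1"] = df.groupby("qid1")["qid1"].transform("count")
    df["freq_qid2"] = df.groupby("qid2")["qid2"].transform("count")
    df["q1len"] = df["question1"].astype(str).str.len()
    df["q2len"] = df["question2"].astype(str).str.len()
    df["q1_n_words"] = df["question1"].astype(str).apply(lambda q: len(q.split()))
    df["q2_n_words"] = df["question2"].astype(str).apply(lambda q: len(q.split()))

    def word_sets(row):
        w1 = set(str(row["question1"]).lower().split())
        w2 = set(str(row["question2"]).lower().split())
        return w1, w2

    df["word_Common"] = df.apply(lambda r: len(set.intersection(*word_sets(r))), axis=1)
    df["word_Total"] = df.apply(lambda r: sum(len(s) for s in word_sets(r)), axis=1)
    df["word_share"] = df["word_Common"] / df["word_Total"]
    df["freq_q1+freq_q2"] = df["freq_qid1"] + df["freq_qid2"]
    df["freq_q1-freq_q2"] = (df["freq_qid1"] - df["freq_qid2"]).abs()
    return df

df = add_basic_features(df)
```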

Preprocessing of Text

  • Removing HTML tags
  • Removing punctuation
  • Performing stemming
  • Removing stopwords
  • Expanding contractions, and replacing numbers and symbols with text equivalents (e.g., 1,000 to 1k, % to percent); a cleaning sketch follows the list.
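A sketch of such a cleaning step, assuming BeautifulSoup for HTML stripping and NLTK for stop words and the Porter stemmer; the specific substitutions are assumptions in the spirit of the list above:

```python
import re
from bs4 import BeautifulSoup
from nltk.corpus import stopwords   # requires: nltk.download("stopwords")
from nltk.stem import PorterStemmer

STOP_WORDS = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text: str) -> str:
    text = str(text).lower()
    text = BeautifulSoup(text, "html.parser").get_text()             # remove HTML tags
    text = text.replace("%", " percent ").replace("$", " dollar ")   # symbol expansion (assumed set)
    text = re.sub(r"([0-9]+)000000", r"\g<1>m", text)                # 1000000 -> 1m
    text = re.sub(r"([0-9]+)000", r"\g<1>k", text)                   # 1000 -> 1k
    text = text.replace("won't", "will not").replace("can't", "can not").replace("n't", " not")
    text = re.sub(r"[^a-z0-9\s]", " ", text)                         # remove punctuation
    tokens = [stemmer.stem(w) for w in text.split() if w not in STOP_WORDS]
    return " ".join(tokens)

df["question1"] = df["question1"].apply(preprocess)
df["question2"] = df["question2"].apply(preprocess)
```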

Advanced Feature Extraction (NLP and Fuzzy Features)

Definitions:

  • Token: obtained by splitting a sentence on spaces.
  • Stop_Word: a stop word as per NLTK.
  • Word: a token that is not a stop_word.

Features (a Python sketch of these follows the list):

  • cwc_min : Ratio of common_word_count to the min word count of Q1 and Q2
    cwc_min = common_word_count / min(len(q1_words), len(q2_words))

  • cwc_max : Ratio of common_word_count to the max word count of Q1 and Q2
    cwc_max = common_word_count / max(len(q1_words), len(q2_words))

  • csc_min : Ratio of common_stop_count to the min stop-word count of Q1 and Q2
    csc_min = common_stop_count / min(len(q1_stops), len(q2_stops))

  • csc_max : Ratio of common_stop_count to the max stop-word count of Q1 and Q2
    csc_max = common_stop_count / max(len(q1_stops), len(q2_stops))

  • ctc_min : Ratio of common_token_count to the min token count of Q1 and Q2
    ctc_min = common_token_count / min(len(q1_tokens), len(q2_tokens))

  • ctc_max : Ratio of common_token_count to the max token count of Q1 and Q2
    ctc_max = common_token_count / max(len(q1_tokens), len(q2_tokens))

  • last_word_eq : Check whether the last word of both questions is equal or not
    last_word_eq = int(q1_tokens[-1] == q2_tokens[-1])

  • first_word_eq : Check whether the first word of both questions is equal or not
    first_word_eq = int(q1_tokens[0] == q2_tokens[0])

  • abs_len_diff : Absolute difference in token counts
    abs_len_diff = abs(len(q1_tokens) - len(q2_tokens))

  • mean_len : Average token count of both questions
    mean_len = (len(q1_tokens) + len(q2_tokens)) / 2

  • longest_substr_ratio : Ratio of the length of the longest common substring to the min token count of Q1 and Q2
    longest_substr_ratio = len(longest common substring) / min(len(q1_tokens), len(q2_tokens))
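A sketch of these token features, assuming NLTK stop words, fuzzywuzzy for the fuzzy-matching ratios mentioned in the section title, and difflib for the longest common substring; SAFE_DIV and the choice of fuzzywuzzy functions are assumptions:

```python
from difflib import SequenceMatcher
from fuzzywuzzy import fuzz
from nltk.corpus import stopwords

STOP_WORDS = set(stopwords.words("english"))
SAFE_DIV = 0.0001  # guards against division by zero on empty questions (an assumption)

def advanced_features(q1: str, q2: str) -> dict:
    q1, q2 = str(q1), str(q2)
    q1_tokens, q2_tokens = q1.split(), q2.split()
    if not q1_tokens or not q2_tokens:
        return {}

    q1_words = {t for t in q1_tokens if t not in STOP_WORDS}
    q2_words = {t for t in q2_tokens if t not in STOP_WORDS}
    q1_stops = {t for t in q1_tokens if t in STOP_WORDS}
    q2_stops = {t for t in q2_tokens if t in STOP_WORDS}

    common_word_count = len(q1_words & q2_words)
    common_stop_count = len(q1_stops & q2_stops)
    common_token_count = len(set(q1_tokens) & set(q2_tokens))

    # Longest common substring (character-level) via difflib.
    m = SequenceMatcher(None, q1, q2).find_longest_match(0, len(q1), 0, len(q2))

    return {
        "cwc_min": common_word_count / (min(len(q1_words), len(q2_words)) + SAFE_DIV),
        "cwc_max": common_word_count / (max(len(q1_words), len(q2_words)) + SAFE_DIV),
        "csc_min": common_stop_count / (min(len(q1_stops), len(q2_stops)) + SAFE_DIV),
        "csc_max": common_stop_count / (max(len(q1_stops), len(q2_stops)) + SAFE_DIV),
        "ctc_min": common_token_count / (min(len(q1_tokens), len(q2_tokens)) + SAFE_DIV),
        "ctc_max": common_token_count / (max(len(q1_tokens), len(q2_tokens)) + SAFE_DIV),
        "last_word_eq": int(q1_tokens[-1] == q2_tokens[-1]),
        "first_word_eq": int(q1_tokens[0] == q2_tokens[0]),
        "abs_len_diff": abs(len(q1_tokens) - len(q2_tokens)),
        "mean_len": (len(q1_tokens) + len(q2_tokens)) / 2,
        "longest_substr_ratio": m.size / (min(len(q1_tokens), len(q2_tokens)) + SAFE_DIV),
        # Fuzzy-matching features (fuzzywuzzy's standard ratios; which ones were used is an assumption).
        "fuzz_ratio": fuzz.ratio(q1, q2),
        "fuzz_partial_ratio": fuzz.partial_ratio(q1, q2),
        "token_sort_ratio": fuzz.token_sort_ratio(q1, q2),
        "token_set_ratio": fuzz.token_set_ratio(q1, q2),
    }
```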

ML Models

  1. We first built a random model to establish the worst-case threshold for the log-loss.
  2. Its log-loss came out to about 0.89, which means any useful model should have a log-loss below 0.89.
  3. We featurized the dataset with TF-IDF weighted Word2Vec (tfidf w2v) and plain TF-IDF vectorizers.
  4. We then trained the following models; the results are below (a training sketch follows the list):
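A hedged sketch of the modeling step: a random baseline to establish the log-loss ceiling, plus one of the listed models (logistic regression trained via SGDClassifier, calibrated so its probabilities are suitable for log-loss). X_train/X_test are assumed to be the vectorized features and y_train/y_test the labels:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss

# Random baseline: uniformly random class probabilities give the worst-case log-loss (~0.89).
rng = np.random.default_rng(42)
random_probs = rng.random((len(y_test), 2))
random_probs /= random_probs.sum(axis=1, keepdims=True)
print("random model log-loss:", log_loss(y_test, random_probs))

# Logistic regression via SGD (use loss="log" on older scikit-learn versions),
# wrapped in a calibrator so predict_proba is well-behaved for log-loss.
sgd = SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-4, random_state=42)
clf = CalibratedClassifierCV(sgd, method="sigmoid")
clf.fit(X_train, y_train)
print("logistic regression (SGD) log-loss:", log_loss(y_test, clf.predict_proba(X_test)))
```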

TF-IDF weighted Word2Vec (tfidf w2v)

| SR | Model | Log-Loss |
|----|-------|----------|
| 1 | Random | 0.887 |
| 2 | Logistic Regression (SGD Classifier) | 0.520 |
| 3 | Linear SVM | 0.489 |
| 4 | XGBoost | 0.357 |

TF-IDF (tfidf)

| SR | Model | Log-Loss |
|----|-------|----------|
| 1 | Random | 0.890 |
| 2 | Logistic Regression with SGD Classifier | 0.492 |
| 3 | Logistic Regression | 0.488 |
| 4 | Linear SVM | 0.492 |
| 5 | XGBoost | 0.429 |
