
Word2Vec Representation Model

Quick Overview

  1. The main goal of this paper is to introduce techniques for learning high-quality word vectors from huge data sets containing billions of words, with millions of distinct words in the vocabulary.

  2. It proposes two novel model architectures, CBOW (Continuous Bag-of-Words) and Skip-Gram, for computing continuous vector representations of words from very large data sets.
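The core difference between the two architectures is the direction of prediction: CBOW predicts the center word from its surrounding context, while Skip-Gram predicts each context word from the center word. A minimal Python sketch (a toy illustration of the training-pair construction only, not the paper's actual implementation) makes this concrete:

```python
def cbow_pairs(tokens, window=2):
    """CBOW: each training example is (context words -> center word)."""
    pairs = []
    for i, target in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window),
                                  min(len(tokens), i + window + 1))
                   if j != i]
        pairs.append((context, target))
    return pairs

def skipgram_pairs(tokens, window=2):
    """Skip-Gram: each training example is (center word -> one context word)."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window),
                       min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

sentence = "the quick brown fox".split()
print(cbow_pairs(sentence, window=1))
# [(['quick'], 'the'), (['the', 'brown'], 'quick'),
#  (['quick', 'fox'], 'brown'), (['brown'], 'fox')]
print(skipgram_pairs(sentence, window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

Note that Skip-Gram produces more training pairs per sentence (one per target-context pair rather than one per target), which is part of why it tends to work better for rare words at higher training cost.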

Presentation made for the discussion

Resources

  1. Paper
  2. Video