Language Information Processing
Joseph cheng edited this page Jun 15, 2018
- Generating Sentences from a Continuous Space
- Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning
- Using the Output Embedding to Improve Language Models
- Hierarchical Multiscale Recurrent Neural Networks
- Efficient softmax approximation for GPUs
- Neural Models for Sequence Chunking
- Improved Variational Autoencoders for Text Modeling using Dilated Convolutions
- Interactive Natural Language Acquisition in a Multi-modal Recurrent Neural Architecture
- Machine Learning on Sequential Data Using a Recurrent Weighted Average
- Linguistic Knowledge as Memory for Recurrent Neural Networks
- Improved Training of Wasserstein GANs
- Learning to Skim Text
- Supervised Learning of Universal Sentence Representations from Natural Language Inference Data (InferSent)
- Poincaré Embeddings for Learning Hierarchical Representations
- Neural Embeddings of Graphs in Hyperbolic Space
- Adversarial Generation of Natural Language
- One button machine for automating feature engineering in relational databases
- Natural Language Processing with Small Feed-Forward Networks
- Dynamic Entity Representations in Neural Language Models
- A Feature-Rich Vietnamese Named-Entity Recognition Model
- Improving Language Modeling using Densely Connected Recurrent Neural Networks
- Frustratingly Short Attention Spans in Neural Language Modeling
- Improving Neural Language Models with a Continuous Cache
- Online Representation Learning in Recurrent Neural Language Models
- Neural Lattice Language Models
- A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
- Chunk-Based Bi-Scale Decoder for Neural Machine Translation
- Attention Is All You Need
- Depthwise Separable Convolutions for Neural Machine Translation
- Confidence through Attention
- Unsupervised Neural Machine Translation
- Neural Semantic Encoders
- Match-Tensor: a Deep Relevance Model for Search
- A Structured Self-attentive Sentence Embedding
- Learning to Generate Reviews and Discovering Sentiment
- Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs
- An Attention Mechanism for Answer Selection Using a Combined Global and Local View
- Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference
- Sentence-State LSTM for Text Representation
- End-To-End Memory Networks
- Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
- DMN+: Dynamic Memory Networks for Visual and Textual Question Answering
- Hierarchical Attention Networks for Document Classification
- Tracking the World State with Recurrent Entity Networks
- Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
- A Comparative Study of Word Embeddings for Reading Comprehension
- A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
- Gated-Attention Readers for Text Comprehension
- Attention-over-Attention Neural Networks for Reading Comprehension
- Question Answering from Unstructured Text by Retrieval and Comprehension
- Query-Reduction Networks for Question Answering
- Reasoning with Memory Augmented Neural Networks for Language Comprehension
- R-NET: Machine Reading Comprehension with Self-matching Networks
- Consensus Attention-based Neural Networks for Chinese Reading Comprehension
- Ruminating Reader: Reasoning with Gated Multi-Hop Attention
- Learning to Ask: Neural Question Generation for Reading Comprehension
- TableQA: Question Answering on Tabular Data
- S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
- Toward Incorporation of Relevant Documents in word2vec
- Question Dependent Recurrent Entity Network for Question Answering
- HyperQA: Hyperbolic Embeddings for Fast and Efficient Ranking of Question Answer Pairs
- Globally Normalized Reader
- Stochastic Answer Networks for Machine Reading Comprehension
- Adaptive Memory Networks
- Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension
- A Neural Conversational Model
- On-line Active Reward Learning for Policy Optimization in Spoken Dialogue Systems
- Deep Reinforcement Learning for Dialogue Generation
- Learning through Dialogue Interactions by Asking Questions
- Generating Long and Diverse Responses with Neural Conversation Models
- Learning to Decode for Future Success
- A Knowledge-Grounded Neural Conversation Model
- Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
- Composite Task-Completion Dialogue System via Hierarchical Deep Reinforcement Learning
- Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models
- Improving Frame Semantic Parsing with Hierarchical Dialogue Encoders
- Natural Language Generation for Spoken Dialogue System using RNN Encoder-Decoder Networks
- Key-Value Retrieval Networks for Task-Oriented Dialogue
- Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
- Sub-domain Modelling for Dialogue Management with Hierarchical Reinforcement Learning
- Learning to Query, Reason, and Answer Questions On Ambiguous Texts
- A Neural Attention Model for Abstractive Sentence Summarization
- CopyNet: Incorporating Copying Mechanism in Sequence-to-Sequence Learning
- Get To The Point: Summarization with Pointer-Generator Networks
- A Deep Reinforced Model for Abstractive Summarization
- Controlling Linguistic Style Aspects in Neural Language Generation
- Deep Recurrent Generative Decoder for Abstractive Text Summarization
- Generating Sentences Using a Dynamic Canvas
- Neural Word Segmentation Learning for Chinese
- Transfer Learning for Low-Resource Chinese Word Segmentation with a Novel Neural Network
- Learning Character Representations for Chinese Word Segmentation
- Fast and Accurate Neural Word Segmentation for Chinese
- Improving Semantic Relevance for Sequence-to-Sequence Learning of Chinese Social Media Text Summarization
- Dual Long Short-Term Memory Networks for Sub-Character Representation Learning
- Chinese NER Using Lattice LSTM
- An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation
- Learning to Compute Word Embeddings On the Fly
- Context Aware Document Embedding
- Neural Bag-of-Ngrams
- Learning Chinese Word Representations From Glyphs Of Characters
- One-shot and few-shot learning of word embeddings
- Numerically Grounded Language Models for Semantic Error Correction
- Iterative Multi-document Neural Attention for Multiple Answer Prediction
- Content-Based Table Retrieval for Web Queries
- On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis
- Joint Event Extraction via Recurrent Neural Networks
- End-to-End Information Extraction without Token-Level Supervision
- DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging
- Integrating Lexical and Temporal Signals in Neural Ranking Models for Searching Social Media Streams
- Learning to Attend, Copy, and Generate for Session-Based Query Suggestion