[Paper Reading] ntuer at SemEval-2019 Task 3: Emotion Classification with Word and Sentence Representations in RCNN

I have been busy with experiments for my own paper lately, so it has been a while since I read a paper properly. This one I stumbled upon on arXiv, and it happens that I had looked at this competition before.

Basic Information

  • Name: ntuer at SemEval-2019 Task 3: Emotion Classification with Word and Sentence Representations in RCNN
  • Authors: Peixiang Zhong, Chunyan Miao
  • ArXiv: https://arxiv.org/abs/1902.07867

Task & Contributions

The task of this paper is to detect emotions from textual conversations, as described in SemEval-2019 Task 3.
   The main contributions of this paper are as follows:

  1. An RCNN model extended with word and sentence representations is proposed, as the title already suggests.
  2. A series of experiments, with micro-averaged F1 as the evaluation metric, was conducted to demonstrate the effectiveness and robustness of the proposed model (a quick micro-F1 sketch follows this list). The proposed model achieved strong performance in the competition.
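Since micro-averaged F1 pools true positives, false positives, and false negatives over all classes before computing precision and recall, a tiny scikit-learn sketch may help; the labels below are toy data of my own, not anything from the paper or the competition.

```python
from sklearn.metrics import f1_score

# Toy labels only -- not data from the paper or the competition.
y_true = ["happy", "sad", "angry", "others", "happy", "angry"]
y_pred = ["happy", "sad", "others", "others", "sad", "angry"]

# Micro-averaging pools TP/FP/FN across classes first, so with a single
# label per example over all classes it reduces to plain accuracy.
print(f1_score(y_true, y_pred, average="micro"))

# If I remember correctly, the official scoring considers only the three
# emotion classes; that restriction could be expressed via the `labels`
# argument, e.g. labels=["happy", "sad", "angry"].
```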

Method

The architecture of the proposed method (they should have thought of a better name) is shown in the figure from the paper. Given a conversation consisting of three utterances, the model first concatenates them into one sentence. For the word-level representation, word embeddings pre-trained on 330M English Twitter messages with GloVe are used as the input to the RCNN. The RCNN mainly comprises a BiLSTM, a linear transformation layer (for dimension reduction), and a max-pooling layer (for extracting discriminative features). For the sentence-level representation, DeepMoji is used as the sentence embedding, which is concatenated with the output of the RCNN. Finally, the combined sentence representation is fed into a softmax layer.
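To pin down how I read the pipeline, here is a minimal PyTorch sketch; all dimensions (`embed_dim`, `hidden_dim`, `reduced_dim`, `sent_dim`, `num_classes`) are my own placeholders rather than the paper's settings, and the pre-trained GloVe and DeepMoji vectors are assumed to be supplied externally.

```python
import torch
import torch.nn as nn


class RCNNWithSentence(nn.Module):
    """Rough sketch of the RCNN + sentence-embedding classifier described above.

    Word-level path: GloVe embeddings -> BiLSTM -> linear transformation
    (dimension reduction) -> max-pooling over time.
    Sentence-level path: a pre-computed sentence vector (e.g. DeepMoji).
    The two are concatenated and fed to a softmax classifier.
    All dimensions below are placeholders, not the paper's settings.
    """

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256,
                 reduced_dim=128, sent_dim=2304, num_classes=4):
        super().__init__()
        # Embedding layer; in the paper this would be initialised with
        # GloVe vectors pre-trained on Twitter messages.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Linear transformation for dimension reduction.
        self.reduce = nn.Linear(2 * hidden_dim, reduced_dim)
        self.classifier = nn.Linear(reduced_dim + sent_dim, num_classes)

    def forward(self, token_ids, sent_embedding):
        # token_ids: (batch, seq_len) -- the three utterances concatenated.
        # sent_embedding: (batch, sent_dim) -- e.g. a DeepMoji encoding.
        x = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        h, _ = self.bilstm(x)                    # (batch, seq_len, 2*hidden_dim)
        h = torch.tanh(self.reduce(h))           # (batch, seq_len, reduced_dim)
        pooled, _ = h.max(dim=1)                 # max-pool over time
        features = torch.cat([pooled, sent_embedding], dim=-1)
        return self.classifier(features)         # logits; softmax lives in the loss
```

The key point, as I understand it, is simply that the pooled RCNN features and the sentence embedding meet only at the final concatenation before the classifier.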

Experiments

The experiments focus on three aspects: word embeddings, sentence embeddings, and hyper-parameters.

Comments

The model proposed in this paper is quite simple. The reason I read it carefully is the idea of adding a sentence representation on top of the word representations, and the experimental results prove the effectiveness of this approach. When using S-LSTM, I can obtain a global sentence representation, but how to make use of it is a question. Concatenation and addition are the simplest options (see the toy snippet below).
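To make that concrete for my own notes, here is a toy comparison of the two options; the tensor names and sizes are illustrative only, not tied to S-LSTM or to this paper.

```python
import torch

# Illustrative shapes only: a pooled word-level feature vector and a
# global sentence vector (e.g. from S-LSTM or DeepMoji).
word_repr = torch.randn(32, 128)   # (batch, word_feature_dim)
sent_repr = torch.randn(32, 128)   # (batch, sent_feature_dim)

# Concatenation keeps both views separate; the classifier's input
# dimension grows to word_feature_dim + sent_feature_dim.
combined_cat = torch.cat([word_repr, sent_repr], dim=-1)   # (32, 256)

# Addition keeps the dimension fixed, but requires the two vectors to
# live in the same (or a projected) space of equal size.
combined_add = word_repr + sent_repr                       # (32, 128)
```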
   When writing the related work, if no prior work on the exact same task can be found, we can cite work from a closely related field. By the way, the architecture figure in this paper is quite misleading.

Key Citations

  1. 【CNN】CNN-based methods can capture local dependencies, discriminative features, and are parallelizable for efficient computation.
  2. 【LSTM】LSTM-based methods can capture the word-ordering information and have achieved state-of-the-art performances on many sentiment analysis datasets.