A Neural Probabilistic Language Model

  • Updated: 2017-04-19
  • Authors: Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian

【Abstract】 A goal of statistical language modeling is to learn the joint probability function of sequences of words. This is intrinsically difficult because of the curse of dimensionality: we propose to fight it with its own weapons. In the proposed approach one learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar to words forming an already seen sentence. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach very significantly improves on a state-of-the-art trigram model.
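To make the idea concrete, the following is a minimal sketch (not the paper's exact architecture, hyperparameters, or training procedure) of a Bengio-style neural probabilistic language model: each of the n-1 context words is mapped to a learned real-valued embedding, the concatenated embeddings feed a tanh hidden layer, and a softmax over the vocabulary gives P(w_t | context). The vocabulary size, dimensions, and random initialization below are illustrative assumptions; the paper's full model also allows direct input-to-output connections and is trained by gradient ascent on the log-likelihood, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 1000      # vocabulary size (toy value, assumed for illustration)
m = 30        # embedding dimension
n_ctx = 3     # number of context words (a 4-gram model)
h = 50        # hidden units

# Parameters: embedding table C, hidden layer (H, d), output layer (U, b)
C = rng.normal(0, 0.1, (V, m))
H = rng.normal(0, 0.1, (h, n_ctx * m))
d = np.zeros(h)
U = rng.normal(0, 0.1, (V, h))
b = np.zeros(V)

def next_word_probs(context_ids):
    """Return P(w_t = i | context) for every word i in the vocabulary."""
    x = np.concatenate([C[i] for i in context_ids])   # concatenated context embeddings
    a = np.tanh(H @ x + d)                            # hidden layer activations
    logits = U @ a + b
    logits -= logits.max()                            # numerical stability for softmax
    p = np.exp(logits)
    return p / p.sum()

p = next_word_probs([12, 7, 404])
print(p.shape, round(p.sum(), 6))   # (1000,) 1.0
```

Because the embedding table C is shared across all positions and learned jointly with the probability function, an unseen word sequence can still receive high probability when its words have embeddings close to those of words seen in similar contexts, which is the generalization mechanism the abstract describes.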

【Published】 2003-01-23

【Published in】 The Journal of Machine Learning Research
