source link: https://github.com/stanfordnlp/GloVe

README.md

GloVe: Global Vectors for Word Representation

Nearest neighbors of "frog": Litoria, Leptodactylidae, Rana, Eleutherodactylus [images omitted]

Comparisons (GloVe geometry): man -> woman, city -> zip, comparative -> superlative [images omitted]

We provide an implementation of the GloVe model for learning word representations, and describe how to download pre-trained web-dataset vectors or train your own. See the project page or the paper for more information on GloVe vectors.

Download pre-trained word vectors

The links below contain word vectors obtained from the respective corpora. If you want word vectors trained on massive web datasets, you need only download one of these text files! Pre-trained word vectors are made available under the Public Domain Dedication and License.

  • Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors, 1.75 GB download): glove.42B.300d.zip
  • Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip
  • Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased, 50d, 100d, 200d, & 300d vectors, 822 MB download): glove.6B.zip
  • Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 25d, 50d, 100d, & 200d vectors, 1.42 GB download): glove.twitter.27B.zip
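The downloaded text files use a simple format: one word per line, followed by its space-separated vector components. A minimal loader can be sketched as follows (the parsing logic is shown on an inline toy sample; the filename in the comment is just one of the downloads above):

```python
import io

def load_glove_vectors(fh):
    """Parse GloVe's plain-text format: each line is a word
    followed by its space-separated float components."""
    vectors = {}
    for line in fh:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

# Toy 3-dimensional sample in the same format as the downloads;
# a real file would be opened with, e.g.,
#   open("glove.6B.300d.txt", encoding="utf-8")
sample = io.StringIO("the 0.1 0.2 0.3\nfrog -0.5 0.4 0.9\n")
vecs = load_glove_vectors(sample)
print(len(vecs["frog"]))  # 3
```

Note that the files are large; for the bigger downloads you may prefer a streaming or memory-mapped loader rather than reading everything into a dict.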

Train word vectors on a new corpus


If the web datasets above don't match the semantics of your end use case, you can train word vectors on your own corpus.

$ git clone https://github.com/stanfordnlp/glove
$ cd glove && make
$ ./demo.sh

The demo.sh script downloads a small corpus consisting of the first 100M characters of Wikipedia. It collects unigram counts, constructs and shuffles cooccurrence data, and trains a simple version of the GloVe model. It also runs a word analogy evaluation script in Python to verify word vector quality. More details about training on your own corpus can be found by reading demo.sh or src/README.md.
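The word analogy evaluation mentioned above rests on the idea that GloVe vectors encode relations as offsets: for "a is to b as c is to ?", the answer word should lie near b - a + c under cosine similarity. A self-contained sketch of that ranking step, using hypothetical toy vectors rather than real trained ones:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d vectors for illustration only; real ones would
# come from a trained model (see load instructions above).
vecs = {
    "king":  [0.8, 0.65, 0.10],
    "man":   [0.7, 0.10, 0.05],
    "woman": [0.6, 0.15, 0.70],
    "queen": [0.7, 0.70, 0.75],
    "apple": [0.1, -0.30, 0.20],
}

# man : king :: woman : ?  ->  rank candidates by similarity to
# king - man + woman, excluding the three query words themselves.
target = [k - m + w for k, m, w in
          zip(vecs["king"], vecs["man"], vecs["woman"])]
best = max((w for w in vecs if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vecs[w], target))
print(best)  # "queen"
```

The repo's actual evaluation script scores thousands of such analogy questions over the full vocabulary; this sketch only shows the core offset-and-rank computation.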

License

All work contained in this package is licensed under the Apache License, Version 2.0. See the included LICENSE file.

