Fetching Wikipedia text for training word2vec

  1. Word2vec is a common NLP model that requires a large corpus of text to train.

  2. Wikipedia is a large corpus of text.

  3. I am aware of https://github.com/portstrom/parse_wiki_text/tree/master/examples/test

  4. It appears the bulk of the problem is that MediaWiki markup evolved over time, and was never designed with being parseable by anything besides MediaWiki itself as a requirement.

  5. Anyone have experience extracting Wikipedia as a text file (or a tree)? I have no problem downloading the data; parsing it is another issue, and I would prefer not to have to hit Wikipedia.org itself for the HTML of every page I parse. (A rough sketch of the kind of extraction I mean is at the end of this post.)

Thanks!
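For concreteness, a minimal sketch of pulling plain text out of wikitext with the parse_wiki_text crate might look like the following (untested beyond toy input; it only collects top-level text nodes and skips everything nested inside links, templates, headings, and so on):

```rust
// Minimal sketch: strip markup from one article's wikitext with the
// parse_wiki_text crate. Only top-level Text nodes are collected; a real
// extractor would recurse into Link, Heading, Template, list nodes, etc.
use parse_wiki_text::{Configuration, Node};

fn plain_text(wiki_text: &str) -> String {
    let parsed = Configuration::default().parse(wiki_text);
    let mut out = String::new();
    for node in &parsed.nodes {
        if let Node::Text { value, .. } = node {
            out.push_str(value);
            out.push(' ');
        }
    }
    out
}

fn main() {
    let sample = "'''Word2vec''' is a technique in [[natural language processing]].";
    println!("{}", plain_text(sample));
}
```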

The original word2vec code has a script to clean a Wikipedia dump... could use that as a starting point:

Torrent an official dump: Wikipedia:Database download - Wikipedia
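I don't remember the exact details of that script, but the usual cleanup for word2vec corpora boils down to lowercasing and keeping only letters and spaces (the real scripts also spell out digits and strip markup first). A rough sketch of that kind of filter, written here in Rust rather than the original script, just to show the idea:

```rust
// Rough sketch of a word2vec-style normaliser (hypothetical simplification):
// lowercase everything, keep only ASCII letters, collapse the rest to spaces.
fn normalise(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut last_was_space = true;
    for c in input.chars() {
        if c.is_ascii_alphabetic() {
            out.push(c.to_ascii_lowercase());
            last_was_space = false;
        } else if !last_was_space {
            out.push(' ');
            last_was_space = true;
        }
    }
    out
}

fn main() {
    // Prints "word vec published in maps words to vectors "
    println!("{}", normalise("Word2vec (published in 2013) maps words to vectors."));
}
```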

@scottmcm : Have you worked with Wikipedia dumps in the past? DB dumps are available at https://dumps.wikimedia.org/ in XML format.
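Getting the wikitext out of that XML is the easy part. A minimal sketch (assuming the xml-rs crate and an already-decompressed pages-articles dump; the filename below is just a placeholder) that streams the dump and prints the raw contents of each <text> element:

```rust
use std::fs::File;
use std::io::BufReader;
use xml::reader::{EventReader, XmlEvent};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder path; use whatever pages-articles dump you downloaded.
    let file = File::open("enwiki-latest-pages-articles.xml")?;
    let parser = EventReader::new(BufReader::new(file));
    let mut in_text = false;
    for event in parser {
        match event? {
            // <text> holds the raw wikitext of one revision of one page.
            XmlEvent::StartElement { name, .. } if name.local_name == "text" => in_text = true,
            XmlEvent::EndElement { name } if name.local_name == "text" => in_text = false,
            // Character data may arrive in several chunks per element.
            XmlEvent::Characters(chunk) if in_text => print!("{}", chunk),
            _ => {}
        }
    }
    Ok(())
}
```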

The problem is not getting the dump. The problem is that the recent dumps are in XML, not rendered HTML.

How do you know that the torrents provide HTML instead of XML?

@jbowles : I tried that script. In the first 1.5 GB of the resulting data.txt, the text looks quite noisy. It's impressive that word2vec can work on such noisy data, but I'd prefer to extract clean text if possible.

@zeroexcuses I hear ya. I happened to see this today: Preprocessed Wikipedia for HotpotQA

Haven't yet used it, maybe it fills your need?


Forgot I also have an old experimental text parser (in Go) that would clean the Wikipedia dataset. I'll run through that and gist examples.

@jbowles : the HotpotQA Wikipedia data is excellent. Thanks!
