Wikipedia parser is a Datacol-based module that extracts content from the Wikipedia knowledge base. In this example, the data is exported to a TXT file. You can also adjust the Datacol export settings to publish content to a database, a website (WordPress, Joomla, DLE), and other targets.
The main advantages of the Datacol-based Wikipedia parser are listed below:
Step-by-step test of the Wikipedia parser
To test the Wikipedia extractor:
1. Install the Datacol trial version;
2. Choose content-parsers/wikipedia.org.par in the campaign tree and click the Start button to launch the Wikipedia extractor campaign.
Before launching content-parsers/wikipedia.org.par, you can adjust the input data: select the campaign in the campaign tree, then set up links to the Wikipedia categories you want to extract content from.
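Datacol itself is configured through its GUI, but to illustrate what a Wikipedia category link yields, here is a minimal sketch of parsing the kind of response the public MediaWiki API (`action=query&list=categorymembers`) returns for a category. The sample JSON below is hypothetical data in the documented response shape, not output from Datacol:

```python
import json

# Hypothetical sample in the shape documented for the MediaWiki API's
# list=categorymembers query (titles are illustrative only).
SAMPLE_RESPONSE = json.dumps({
    "query": {
        "categorymembers": [
            {"pageid": 736, "ns": 0, "title": "Albert Einstein"},
            {"pageid": 9316, "ns": 0, "title": "Isaac Newton"},
        ]
    }
})

def article_titles(api_json: str) -> list:
    """Extract the article titles from a categorymembers API response."""
    data = json.loads(api_json)
    return [m["title"] for m in data["query"]["categorymembers"]]

print(article_titles(SAMPLE_RESPONSE))  # -> ['Albert Einstein', 'Isaac Newton']
```

A category-driven extractor works through such member lists page by page, which is conceptually what the starting URL list feeds into the campaign.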
Please contact us if the Wikipedia parser does not collect data after you have changed the starting URL list.
3. Wait for the data extraction results to appear. Once you see the first results, you can stop the running campaign (click the Stop button).
4. After the campaign is finished or stopped, you can find the TXT files in the Documents folder.
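If you want to post-process the exported files programmatically, a short sketch like the following lists the TXT results in a folder. The folder path and file name here are placeholders, not Datacol's fixed output names:

```python
from pathlib import Path
import tempfile

def list_txt_results(folder: str) -> list:
    """Return the names of TXT result files in the given folder, sorted."""
    return sorted(p.name for p in Path(folder).glob("*.txt"))

# Demo with a temporary folder standing in for the Documents directory.
with tempfile.TemporaryDirectory() as docs:
    (Path(docs) / "wikipedia_results.txt").write_text("sample article text")
    print(list_txt_results(docs))  # -> ['wikipedia_results.txt']
```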
Datacol Trial vs. Activated Version
| Feature | Trial | License (Full version) |
| --- | --- | --- |
| Preset default configuration for data extraction | | |
| Maximum data extraction results | | |
| Free software updates | | |
| Free email tech support | | |
| Paid Skype + TeamViewer consultations | | |
If the source website blocks your IP address (after blocking you will get no more extraction results), use a proxy.
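Datacol has its own proxy settings; for readers scripting their own requests, this is a minimal sketch of routing traffic through a proxy with Python's standard library. The proxy address is a placeholder you would replace with your own server:

```python
import urllib.request

# Placeholder proxy address; substitute your own proxy server here.
PROXY = "http://203.0.113.10:8080"

# Route all HTTP/HTTPS traffic through the proxy so the target site
# sees the proxy's IP address instead of yours.
proxy_handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(proxy_handler)
# opener.open("https://en.wikipedia.org/wiki/...") would now go via PROXY
```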
Data processing options for content collected by the Wikipedia extractor:
Data export options for content collected by the Wikipedia extractor:
- Basic: CSV/TXT/Database/Excel;
- Online stores: Magento/PrestaShop/osCommerce/OpenCart/ZENCart/VirtueMart;
- Content CMS: WordPress/Joomla/DLE;
- All options.
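As a simple illustration of the basic CSV/TXT export path, the sketch below serializes collected (title, text) pairs to CSV. The sample rows and column names are hypothetical, not Datacol's internal format:

```python
import csv
import io

# Hypothetical rows as an extractor might collect them: title + article text.
rows = [
    ("Albert Einstein", "German-born theoretical physicist..."),
    ("Isaac Newton", "English mathematician and physicist..."),
]

def export_csv(records) -> str:
    """Serialize (title, text) pairs to CSV text, one article per row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["title", "text"])  # header row
    writer.writerows(records)
    return buf.getvalue()

print(export_csv(rows).splitlines()[0])  # -> title,text
```

The same record structure maps naturally onto the database and CMS export targets listed above, which is why the export format is a separate setting from the extraction itself.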
If you have any questions related to the Wikipedia extractor, please ask via the contact form.