Cola is a high-level distributed crawling framework for crawling pages and extracting structured data from websites. It provides a simple, fast, yet flexible way to achieve your data-acquisition goals. Users only need to write one piece of code, which can run in both local and distributed modes.
- Python 2.7 (Python 3+ will be supported later)
- Works on Linux, Windows and Mac OS X
The quick way:

.. code-block:: shell

    pip install cola

Or download the source code and run:

.. code-block:: shell

    python setup.py install
Documentation will be updated soon; for now, refer to the
`wiki <https://github.com/chineking/cola/tree/master/app/wiki>`_ or
`weibo <https://github.com/chineking/cola/tree/master/app/weibo>`_ application.
For the wiki or weibo app, make sure the dependencies are installed first. Taking weibo as an example:

.. code-block:: shell

    pip install -r /path/to/cola/app/weibo/requirements.txt
To let your application support local mode, just add the following code to the entry point:
.. code-block:: python

    from cola.context import Context

    ctx = Context(local_mode=True)
Then run the application. Stop the local job with ``CTRL+C``.

To run in distributed mode, start a master first:

.. code-block:: shell

    coca master -s [ip:port]
Start one or more workers:

.. code-block:: shell

    coca worker -s -m [ip:port]
Then run the application (weibo as an example):

.. code-block:: shell

    coca job -u /path/to/cola/app/weibo -r
Coca is a convenient command-line tool for managing the whole cola environment.
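The subcommand-style interface shown throughout this document (``coca master``, ``coca job``, and their flags) can be sketched with Python's standard ``argparse`` module. This is an illustrative sketch only, not coca's actual implementation; the parser structure and defaults below are assumptions.

.. code-block:: python

    import argparse

    def build_parser():
        # Illustrative sketch of a coca-like CLI; not the real coca parser.
        parser = argparse.ArgumentParser(prog='coca')
        sub = parser.add_subparsers(dest='command')

        # "coca master -s [ip:port]" / "coca master -k"
        master = sub.add_parser('master')
        master.add_argument('-s', '--start', metavar='ip:port')
        master.add_argument('-k', '--kill', action='store_true')

        # "coca job -m [ip:port] -l" / "coca job -r 8Z" / "coca job -k 8Z"
        job = sub.add_parser('job')
        job.add_argument('-m', '--master', metavar='ip:port')
        job.add_argument('-l', '--list', action='store_true')
        job.add_argument('-r', '--run', nargs='?', const=True, metavar='job_id')
        job.add_argument('-k', '--kill', metavar='job_id')
        return parser

    args = build_parser().parse_args(['job', '-r', '8Z'])
    print(args.command, args.run)  # -> job 8Z

``nargs='?'`` lets ``-r`` work both with a job id (``coca job -r 8Z``) and without one (``coca job -u /path -r``), matching the two usages shown in this document.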
Kill the master to stop the whole cluster:

.. code-block:: shell

    coca master -k
List all jobs:

.. code-block:: shell

    coca job -m [ip:port] -l

The output looks like::

    list jobs at master: 10.211.55.2:11103
    ====> job id: 8ZcGfAqHmzc, job description: sina weibo crawler, status: stopped
You can run a job shown in the list above:

.. code-block:: shell

    coca job -r 8ZcGfAqHmzc
Actually, you don't have to type the complete job name:

.. code-block:: shell

    coca job -r 8Z

A prefix of the job name is fine as long as it is unambiguous.
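Prefix resolution of this kind can be sketched in a few lines of Python. The function below is a hypothetical illustration of the technique, not cola's internal code; it resolves a prefix against the known job ids and rejects ambiguous ones.

.. code-block:: python

    def resolve_job_id(prefix, job_ids):
        """Resolve a (possibly partial) job id against known jobs.

        Returns the unique job id starting with ``prefix``; raises
        ValueError when the prefix matches none or more than one job.
        """
        matches = [jid for jid in job_ids if jid.startswith(prefix)]
        if not matches:
            raise ValueError('no job matches %r' % prefix)
        if len(matches) > 1:
            raise ValueError('ambiguous prefix %r: %s'
                             % (prefix, ', '.join(matches)))
        return matches[0]

    # '8Z' uniquely identifies the job from the listing above
    print(resolve_job_id('8Z', ['8ZcGfAqHmzc']))  # -> 8ZcGfAqHmzc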
You can check the status of a running job:

.. code-block:: shell

    coca job -t 8Z

Status information, such as runtime counters, will be printed to the terminal.
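As a rough illustration of what "counters printed to the terminal" can mean, the sketch below renders a mapping of counter names to values as aligned lines. The counter names are hypothetical and the format is not cola's actual output.

.. code-block:: python

    def format_counters(counters):
        # Render a {name: value} counters mapping as aligned terminal lines.
        width = max(len(name) for name in counters)
        return '\n'.join('%-*s %d' % (width, name, value)
                         for name, value in sorted(counters.items()))

    # hypothetical counter names, for illustration only
    print(format_counters({'pages': 1024, 'errors': 3}))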
You can kill a job with the kill command:

.. code-block:: shell

    coca job -k 8Z
You can create a new application with:

.. code-block:: shell

    coca startproject colatest
Remember, the help command will always be helpful:

.. code-block:: shell

    coca master -h