BudouX

Standalone. Small. Language-neutral.

BudouX is the successor to Budou, the machine-learning-powered line break organizer tool.

It is standalone. It works with no dependency on third-party word segmenters such as the Google Cloud Natural Language API.

It is small. It takes only around 15 KB, including its machine learning model, so it is practical to use even on the client side.

It is language-neutral. You can train a model for any language by feeding a dataset to BudouX’s training script.

Last but not least, BudouX supports HTML inputs.

Demo

https://google.github.io/budoux/

Natural languages supported by default

  • Japanese

Supported programming languages

Python and JavaScript are supported. For details about the JavaScript module, please visit the JavaScript README.

Python module

Install

$ pip install budoux

Usage

You can get a list of phrases by feeding a sentence to the parser.

import budoux
parser = budoux.load_default_japanese_parser()
print(parser.parse('今日は天気です。'))
# ['今日は', '天気です。']

You can also translate an HTML string by wrapping phrases with non-breaking markup.

print(parser.translate_html_string('今日は<b>とても天気</b>です。'))
# <span style="word-break: keep-all; overflow-wrap: break-word;">今日は<b><wbr>とても<wbr>天気</b>です。</span>

If you have a custom model, you can use it as follows.

import json

with open('/path/to/your/model.json') as f:
  model = json.load(f)
parser = budoux.Parser(model)

A model file for BudouX is a JSON file that contains pairs of a feature and its score, extracted by machine learning training.
Each score represents the significance of the feature in deciding whether to break the sentence at a specific point.
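
For illustration, the contents might look like the following. The feature names and scores here are invented for this example; the actual feature naming scheme and file layout depend on the BudouX version.

{
  "UW4:は": 1234,
  "UW4:の": 987,
  "BW2:今日": 456
}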

For more details about the JavaScript module, please refer to the JavaScript module README.

Caveat

BudouX supports HTML inputs and outputs HTML strings with markup that wraps phrases, but it’s not meant to be used as an HTML sanitizer. BudouX doesn’t sanitize any inputs. Malicious HTML inputs yield malicious HTML outputs. Please use it with an appropriate sanitizer library if you don’t trust the input.
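
As a minimal sketch, you could run untrusted input through a sanitizer first. The bleach library below is just one option and is not part of BudouX; adjust the allowed tags to your needs.

import bleach
import budoux

parser = budoux.load_default_japanese_parser()

untrusted = '今日は<b>とても天気</b>です。<script>alert(1)</script>'
# Keep only the tags you expect; strip everything else.
safe = bleach.clean(untrusted, tags=['b'], strip=True)
print(parser.translate_html_string(safe))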

Background

English text has many clues, like spacing and hyphenation, that enable beautiful and readable line breaks. Some CJK languages, however, lack these clues and are notoriously more difficult to process: without a more careful approach, line breaks can fall anywhere, often in the middle of a word or phrase. This is a long-standing issue in typography on the Web, and it degrades readability.

Budou was proposed as a solution to this problem in 2016. It automatically translates CJK sentences into HTML with lexical phrases wrapped in non-breaking markup, so as to semantically control line breaks. Budou solved the problem to some extent, but it still has trouble integrating with modern web production workflows.

The biggest barrier to applying Budou to a website is its dependency on third-party word segmenters. A word segmenter is usually a large program that is infeasible to download for every web page request, and making a request to a cloud-based word segmentation service for every sentence is undesirable in terms of speed and cost. That's why we need a standalone line break organizer equipped with its own segmentation engine, small enough to be bundled into client-side JavaScript code.

BudouX is the successor to Budou, designed to be integrated with your website with no hassle.

How it works

BudouX uses the AdaBoost algorithm to segment a sentence into phrases, treating the task as a binary classification problem: between every pair of adjacent characters, predict whether to break or not. It uses features such as the characters around the break point, their Unicode blocks, and combinations of them to make the prediction. The resulting machine learning model, encoded as a JSON file, stores pairs of a feature and its significance score. The BudouX parser takes a model file, constructs a segmenter from it, and translates input sentences into lists of phrases.
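
To make the idea concrete, here is a deliberately simplified sketch of that scoring loop. The feature templates and the zero threshold are toy choices for illustration, not BudouX's actual implementation.

# Simplified score-based break prediction (illustrative only).
def parse(sentence, model, threshold=0):
  if not sentence:
    return []
  chunks = [sentence[0]]
  for i in range(1, len(sentence)):
    # Toy features: the characters on each side of the candidate
    # break point, plus the bigram across it.
    feats = ['L:' + sentence[i - 1], 'R:' + sentence[i], 'B:' + sentence[i - 1:i + 1]]
    score = sum(model.get(f, 0) for f in feats)
    if score > threshold:
      chunks.append(sentence[i])   # break here: start a new phrase
    else:
      chunks[-1] += sentence[i]    # no break: extend the current phrase
  return chunks

# Toy model with a single feature voting for a break after 'は'.
print(parse('今日は天気です。', {'L:は': 100}))
# ['今日は', '天気です。']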

Building a custom model

You can build your own custom model for any language by preparing training data in the target language.
A training dataset is a large text file consisting of sentences whose phrases are separated by the symbol “▁” (U+2581), like below.

私は▁遅刻魔で、▁待ち合わせに▁いつも▁遅刻してしまいます。
メールで▁待ち合わせ▁相手に▁一言、▁「ごめんね」と▁謝れば▁どうにか▁なると▁思っていました。
海外では▁ケータイを▁持っていない。

Assuming the text file is saved as mysource.txt, you can build your own custom model by running the following commands.

$ pip install -r requirements_dev.txt
$ python scripts/encode_data.py mysource.txt -o encoded_data.txt
$ python scripts/train.py encoded_data.txt -o weights.txt
$ python scripts/build_model.py weights.txt -o mymodel.json

Please note that train.py takes time to complete, depending on your computer resources.
The good news is that the training algorithm is an anytime algorithm, so you get a weights file even if you interrupt the execution; in that case, you can still build a valid model file by passing the weights file to build_model.py.
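
Once mymodel.json is built, you can load it in the same way as the custom model shown in the Usage section, for example:

import json
import budoux

with open('mymodel.json') as f:
  model = json.load(f)
parser = budoux.Parser(model)
print(parser.parse('海外ではケータイを持っていない。'))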

Constructing a training dataset from the KNBC corpus for Japanese

The default model for Japanese (budoux/models/ja_knbc.json) is built using the KNBC corpus.
You can create a training dataset, which we name source_knbc.txt here, from that corpus by running the command below.

$ python scripts/load_knbc.py -o source_knbc.txt

Author

Shuhei Iitsuka

Disclaimer

This is not an officially supported Google product.
