M-LSD: Towards Light-weight and Real-time Line Segment Detection
Official TensorFlow implementation of "M-LSD: Towards Light-weight and Real-time Line Segment Detection"

Geonmo Gu*, Byungsoo Ko*, SeoungHyun Go, Sung-Hyun Lee, Jingeun Lee, Minchul Shin (* Authors contributed equally.)

@NAVER/LINE Vision

![](https://github.com/navervision/mlsd/raw/master/.github/mlsd_demo.gif =x400)

Overview

![](https://github.com/navervision/mlsd/raw/master/.github/teaser.png =x250) ![](https://github.com/navervision/mlsd/raw/master/.github/mlsd_mobile.png =x250)

First figure: Comparison of M-LSD and existing LSD methods on GPU. Second figure: Inference speed and memory usage on mobile devices.

We present Mobile LSD (M-LSD), a real-time and light-weight line segment detector for resource-constrained environments. M-LSD exploits an extremely efficient LSD architecture and novel training schemes, including SoL augmentation and a geometric learning scheme. Our model runs in real time on GPU, CPU, and even mobile devices.

Line segment & box detection demo

![](https://github.com/navervision/mlsd/raw/master/.github/demo_public.png =x500)

We prepared a line segment and box detection demo using M-LSD models. The demo is built on Python Flask, so results can easily be viewed in a web browser such as Google Chrome.

All models in the M-LSD family have already been converted to TFLite models. Because the demo uses TFLite models, it does not require a GPU to run.

Note that, when converting the TensorFlow model to the TFLite model, we make the model receive RGBA images (A is the alpha channel) as input, in order to follow the tips for optimizing for mobile GPU.

Don't worry about the alpha channel: in the stem layer of the TFLite models, an all-zero convolutional kernel is applied to the alpha channel, so the results are the same regardless of its value.
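As a rough sketch of what this looks like in code (the model path, the 512×512 input size, and the pre-processing are assumptions for illustration; see demo_MLSD.py for the actual pipeline), an RGB image can be padded with a constant alpha channel and fed to the TFLite interpreter on CPU:

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical model path; check the repository for the actual TFLite filenames.
interpreter = tf.lite.Interpreter(model_path="tflite_models/M-LSD_512_tiny_fp32.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# Load an RGB image and resize it to the assumed 512x512 input resolution.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (512, 512))

# Append a constant alpha channel; its value is irrelevant because the stem
# layer applies an all-zero kernel to it.
alpha = np.full((512, 512, 1), 255, dtype=image.dtype)
rgba = np.concatenate([image, alpha], axis=-1)

# Add the batch dimension and cast to the dtype the interpreter expects.
batch = np.expand_dims(rgba, axis=0).astype(input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], batch)
interpreter.invoke()

# Raw output tensors; decoding them into line segments and boxes is handled by
# the demo's post-processing code.
outputs = [interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()]
```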

The post-processing code for box detection is implemented in NumPy. If you plan to run the box detector on mobile devices, we recommend porting the post-processing code to Eigen3-based code.

![](https://github.com/navervision/mlsd/raw/master/.github/realtime_demo1.gif =x240) ![](https://github.com/navervision/mlsd/raw/master/.github/realtime_demo2.gif =x240) ![](https://github.com/navervision/mlsd/raw/master/.github/realtime_demo3.gif =x240)

The examples above were captured using M-LSD tiny with an input size of 512.

How to run demo

Install requirements

$ pip install -r requirements.txt

Run demo

$ python demo_MLSD.py

Colab notebook

You can jump right into line segment and box detection with M-LSD using our Colab notebook. The notebook provides an interactive UI built with Gradio, as shown below.

![](https://github.com/navervision/mlsd/raw/master/.github/gradio_example.png =x350)
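For reference, the Gradio wiring amounts to something like the sketch below; `detect_lines` is a hypothetical wrapper around M-LSD inference, and the actual notebook may expose more options (score thresholds, input size, and so on):

```python
import gradio as gr
import numpy as np

def detect_lines(image: np.ndarray) -> np.ndarray:
    """Hypothetical wrapper: run M-LSD on the input and return a copy of the
    image with the detected line segments drawn on it."""
    # ... run the TFLite interpreter and draw the segments here ...
    return image

# Image in, image out: Gradio renders an upload widget and displays the result.
gr.Interface(fn=detect_lines, inputs="image", outputs="image").launch()
```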

Pytorch demo

https://github.com/lhwcv/mlsd_pytorch (by lhwcv)

Citation

If you find M-LSD useful in your project, please consider citing the following paper.

@misc{gu2021realtime,
    title={Towards Real-time and Light-weight Line Segment Detection},
    author={Geonmo Gu and Byungsoo Ko and SeoungHyun Go and Sung-Hyun Lee and Jingeun Lee and Minchul Shin},
    year={2021},
    eprint={2106.00186},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

GitHub

https://github.com/navervision/mlsd