Teleport Ur Logs with Love
Whatever you pipe into
tull gets a unique UUID, and the data is stored locally, accessible via a Flask server with simple endpoints. You can then use ngrok or localtunnel to share it outside your LAN as well. It won't break your console, as it also transparently redirects the stream to stdout.
pip install tull
- clone the repo
- create a venv and activate it
- pip install -r requirements.txt
I have tested this on an M1 Mac running Big Sur, but it should most likely work everywhere.
Run tull web and it will print a few URLs. Open the one prefixed with TULL_WEB_URL.
For each session,
tull generates an ID, and that ID is used to associate the data of that session.
Type anything into the active terminal; it will also be reflected on the corresponding ID page on the web.
Exit with Ctrl-D. (Currently Ctrl-C stops the Flask server along with the stream capture; working on it.)
Actual Use Case
ps ax | tull ; you can see the output of your command, but the logs are also saved with a unique ID. Go to TULL_WEB_URL (found via
tull web earlier):
- your logs are stored in an organized manner for future reference
- you can share the URL with anyone who has HTTP access to your server.
What I generally do is hook it up with an ngrok tunnel. ngrok is a tool that creates secure tunnels to your local ports in a one-liner. So just run
ngrok http 17171 and you can share these logs with anyone on the other side of the internet.
This is a personal project; don't use it in production or anywhere you are unsure of the security impact. Until v1.0, everything is considered unstable. :)
- Security - add basic auth
- Better UI for the /web interface - make it easier to search/navigate/organize logs
- API pagination for /api interface
- Streaming for /raw interface - also, how to read last n lines fast!
- Make readme look good
How it works
When you run tull, it creates a .tull folder in your home directory. At the same time, if it is not already running, it starts a background process running the Flask server with some simple APIs. Then, whenever any data is piped into it or it is invoked from the command line, it creates a unique ID and starts storing the piped stream into a file under that ID, while also transparently writing it to stdout. That way it doesn't break your existing flow, saves the logs under a unique ID, and lets you browse them later. Not too fancy, but useful.