Hacker News
Asciinema – Record and share your terminal sessions (asciinema.org)
275 points by andrewla on May 11, 2018 | hide | past | favorite | 60 comments


I think I was pushing the limits of what the service could handle with my fluid simulations. They're pretty choppy on first playback, but run smoothly the second time.

Block - https://asciinema.org/a/125371

Waterfall - https://asciinema.org/a/125380


Those are worth an HN thread of their own, imo. Maybe a blog post or GitHub page explaining how you did it.


I built it to be a reference implementation for "Fluid Simulation for Computer Graphics" (1st Ed.) by Robert Bridson. The book is a cleaned up version of his free course notes. If I had the time, I'd clean up the code a little because the book already has a great explanation of how it works.

https://github.com/cgmb/euler


These remind me of the ASCII fluid submission [0] at IOCCC 2012 [1] by Yusuke Endoh. The source [2] itself can be used as an input to the program (see video [3]), and he also made a colour version [4].

[0]: https://www.ioccc.org/2012/endoh1/hint.html

[1]: https://www.ioccc.org/2012

[2]: https://www.ioccc.org/2012/endoh1/endoh1.c

[3]: https://youtu.be/QMYfkOtYYlg

[4]: https://www.ioccc.org/2012/endoh1/endoh1_color.c


That was one of the inspirations for my work. His simulation is Lagrangian (particle-based) while mine is Eulerian (grid-based).


Wow!


My co-founder wrote a bit about Asciinema as part of a comprehensive comparison of terminal recorders [1]. There's a lot to love about it (wide availability, very nice JavaScript playback -- check out the Game of Life playback at [1]), but it's a bit of a chore to self host, if that's important for you.

[1]: https://intoli.com/blog/terminal-recorders/


The difficulty in self-hosting was what prevented us from using this at my job. Proprietary source code, and whatnot.


asciinema dev here. Where did you get the impression it's proprietary? I'm really curious, because it's fully open-source, with all components open-source and usable independently, and there's no company or business model behind it. Just me and my spare hobby time.


This is really cool, and if I wanted to host a demo on a public website or something like that (attempting to keep bandwidth use down), I would use it in a heartbeat.

My use case for terminal demos is very different from that. Usually I'm just recording something short and emailing it to one person. Bandwidth savings do not accumulate across many website visitors, and simple is perfect.

The nice thing about video is I can just email it. I don't need to host a web server instance for playback. Even at very high quality settings ("ffmpeg -qscale:v 1", nothing blurry about it) the resulting video is only about 1MB/minute (80x24 term, h264 or h265).

Here's my demo workflow, in case folks are curious:

1. simplescreenrecorder to record terminal window + voice.

2. ffmpeg -i recorded.mkv -vcodec libx264 -qscale:v 1 -acodec aac encoded.mp4 # or s/264/265/ for slightly better compression, if the receiver has new enough codecs

3. Email the video or whatever.

That's it.


The main killer feature, in my opinion, is that you can pause and copy and paste the text from any given point. Not that you can record and share a terminal session, which you could do with basically any video recording software.

For example, at 00:11 of https://asciinema.org/a/139514

    Puzzle: 54-14343245-14134544-48EW.53NE                                          
        1 4 1 3 4 5 4 4            1 4 1 3 4 5 4 4                                  
       ┌─┬─┬─┬─┬─┬─┬─┬─┐          ┌─┬─┬─┬─┬─┬─┬─┬─┐                                 
     8 │ │ │ │━│ │ │ │ │ 5      8 │ │┏│━│━│┓│ │ │ │ 5                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     7 │ │ │ │ │ │ │ │ │ 4      7 │ │┃│ │ │┗│━│┓│ │ 4                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     6 │ │ │ │ │ │ │ │ │ 2      6 │ │┃│ │ │ │ │┃│ │ 2                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     A │━│ │ │ │ │ │ │ │ 3      A │━│┛│ │ │ │ │┃│ │ 3                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     4 │ │ │ │ │ │ │ │ │ 4      4 │ │ │ │ │┏│━│┛│ │ 4                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     3 │ │ │ │ │┗│ │ │ │ 3      3 │ │ │ │ │┗│ │ │ │ 3                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     2 │ │ │ │ │ │ │ │ │ 4      2 │ │ │ │ │ │ │ │ │ 4                               
       ├─┼─┼─┼─┼─┼─┼─┼─┤          ├─┼─┼─┼─┼─┼─┼─┼─┤                                 
     1 │ │ │ │┃│ │ │ │ │ 1      1 │ │ │ │┃│ │ │ │ │ 1                               
       └─┴─┴─┴─┴─┴─┴─┴─┘          └─┴─┴─┴─┴─┴─┴─┴─┘                                 
        1 2 3 B 5 6 7 8            1 2 3 B 5 6 7 8


That is cool, but not usually something I need from a demo video.


It depends what you demo and what hat you wear: if you're on the receiving end of a tutorial, being able to copy the text you just saw on the screen is nice. If you're the one giving the demo (and/or the thing you're demoing doesn't have anything useful to copy, e.g. it's not a tutorial), then you probably don't care about it.


You can send the asciinema recording (a JSON file) too, and it's significantly smaller in size. The receiver does need to have asciinema installed to play it back, though.


Sometimes you wonder how something that is so widely known and has been posted so many times [1] can reach No. 1... but yeah, it's cool.

[1]: https://hn.algolia.com/?query=asciinema&sort=byPopularity&pr...


Perhaps because it has lots of fans who like to spread the word?

I use it only very occasionally, like once a year, but love the way it works. My first experience was perfect, I really couldn't think of a single thing to improve (and that's very rare), so that left a very good impression.


Don't forget you can take the recording and turn it into an animated GIF: https://github.com/asciinema/asciicast2gif

Super useful for sharing because then the other parties don't need this software installed.


The Visual Studio Code LiveShare extension also has terminal sharing, but without recording; you simply work with the remote's terminal.


Tried out asciinema earlier this year, and it is super duper cool.

TLDR: Wish it included some functionality to get finished usable media artifacts out.

In the end I gave up after failing to find a way to convert the resulting asciinema data files to a GIF or any other HTML-compatible media file. The open-source projects ended up not working for various reasons: broken shell scripts, abandoned projects, node.js programs that require a fully working phantomjs installation and still don't work, requiring Docker (come on, really? Spin up containers just to convert to a GIF?). And these are only the problems I remember offhand.

Some breadcrumbs:

https://www.google.com/search?q=asciinema+convert+to+gif

https://unix.stackexchange.com/questions/314235/converting-a...

"asciinema doesn't provide this natively, there are tools out there that can facilitate that for you"

Right.


I've used ttyrecord[0] along with ttygif[1] a while ago to create this animated gif:

https://raw.github.com/laurent22/massren/animation/animation...

As far as I remember it was easy to use and the result was pretty good. Back then I also tested Asciinema but gave up for the same reasons.

[0] http://0xcc.net/ttyrec/ [1] https://github.com/icholy/ttygif


Wouldn't animated svgs be a better output format for something like that?


Yup. There's a project called svg-term-cli which generates an animated SVG from an asciicast file.


This has been an issue for me. My solution so far has been to just make a wrapper html page that uses their library.


I swear I can't be the only one who's bugged by the fact that the link to the recording [0] doesn't actually play out the same way as the session you just watched on the front page! [1]

[0] https://asciinema.org/a/17648

[1] https://asciinema.org/


Yep, this looks like the link it should be:

https://asciinema.org/a/113463


Haha, yeah, you're not the first one: https://github.com/asciinema/asciinema/issues/279


how on earth does copy/paste work from inside a video?

i'm way more confused than disappointed and wonder how easy it would be to rip this for use in a non-public environment?


This is a JSON-based container format for timestamped terminal sequences, including the terminal dimensions and the shell and terminal used (i.e., the values of SHELL and TERM). See this minimal example:

  {"timestamp": 1526123655, "version": 2, "height": 55, "env": {"SHELL": "/bin/bash", "TERM": "screen"}, "width": 179}
  [0.046457, "o", "namibj@nb:~/git/namibj/ind3xlite$ "]
  [3.021128, "o", "l"]
  [3.116894, "o", "s"]
  [3.468747, "o", "\r\n"]
  [3.472283, "o", "\u001b[0m\u001b[01;32mind3xlite\u001b[0m  LICENSE  main.c  main.h  Makefile  README.md  sqlite3.c  sqlite3.h  sqlite3.o\r\n"]
  [3.472869, "o", "namibj@nb:~/git/namibj/ind3xlite$ "]
  [4.092707, "o", "e"]
  [4.340945, "o", "x"]
  [4.548934, "o", "i"]
  [4.653126, "o", "t"]
  [4.859118, "o", "\r\n"]
  [4.859605, "o", "exit\r\n"]
As you can see, it was easy for me to redact the user and hostname from the prompt, without you even seeing it.

The canonical implementation is in Python (3, iirc), but you should be able to hook up a non-X11 terminal emulator and make it spew out frames for each individual timestamp, then create a variable-framerate mkv from those, with uncompressed video. That is trivial to feed into ffmpeg for transcoding into your favourite codec, possibly even muxing in a separate audio stream.

I might, if I get a reason to do this before other projects, take an existing terminal emulator and teach it to do this (or possibly to output lossless VC-1 directly, removing the need to search for matching glyphs later in the process, which should yield reasonably small files straight from the conversion, without quality loss). So, if someone wants me to do this, email's in my profile.
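If you want to script against that format yourself, here's a minimal sketch of a parser for the v2 layout shown above (the function names `read_cast` and `output_text` are my own, not from the asciinema codebase):

```python
import json


def read_cast(path):
    """Parse an asciicast v2 file: one JSON header object on the first
    line, then newline-delimited [timestamp, event-type, data] triples."""
    with open(path) as f:
        header = json.loads(f.readline())
        events = [json.loads(line) for line in f if line.strip()]
    return header, events


def output_text(events):
    """Concatenate the data of all output ("o") events, i.e. everything
    that was written to the terminal during the recording."""
    return "".join(data for ts, kind, data in events if kind == "o")
```

Feeding the redacted example above through `read_cast` would give you the header dict plus a list of events you can filter, retime, or render however you like.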


I discovered that my Docker demonstration isn't updating the background colour correctly through asciinema. It looks fine in my terminal, but maybe I have old-man eyes.

https://asciinema.org/a/tlpahVHp6cAUsdznznBCExGDo


I just wish they added a "live" presentation feature like the unmaintained "play it again sam" (https://github.com/rfk/playitagainsam) had.


Support for the necessary recording has been available since version 2.0.0.

This can be done via jq [0]. See the simple script [1] I made. It relies on things like tab-expansion yielding the same results, so beware.

[0]: https://github.com/stedolan/jq [1]: https://0x0.st/sj0G.txt (License: AGPLv3, credit to my handle)


Wow, this happens rarely, but I really can't understand your answer. I never would have guessed jq to have any use for presentation. Can you give a short example of how to apply your script?


Uh, just feed it the resulting file from

  asciinema rec --stdin
and it will execute whatever was in there. If some programs read more than they actually wanted, it would cause trouble, because they would likely get fed further commands/input, which would then no longer go to the shell/later program.

Regarding the script, it's just using jq because asciinema uses a JSON-based format, of which some data needs to be extracted. It should be self-evident from looking at what jq is doing and what an interactive shell does different from a non-interactive one (hint: it's related to autocompletion), as well as a sample file created by the above command. If you still don't understand, tell, and I'll try to explain better.

And sorry for the delay, I wasn't expecting a timely response that asks me for something.


Thanks for the reply. It doesn't happen every day that someone answers your sorrows with a two-liner. I don't really see at what point it is checking for my pressed keys. I was just able to try your snippet. Unfortunately I can't get it to work; I just end up with lines like `ls: no such file or directory`.


You might want to check what your shell does, as this relies on auto-completion doing the same thing if you replay the same keystrokes, even if that happens faster. Specifically, try to intercept the output of the last pipe into a

  less -U

and look whether that seems to be what you were typing. If you want, you can replay the special chars by pressing Ctrl+<whatever symbol is followed by the caret>. Try not to use a fancy auto-completion like fish or fzf provides, as they tend not to behave the same when doing a replay.

This should not be hard; I used bash successfully. I hope you did not forget to run it on the same filesystem state, as far as auto-complete behaviour during the asciicast is concerned.

Also, this could be improved by delaying the replay as long as specified by the asciicast. But I couldn't quickly figure out how to get that done without further dependencies, so I won't provide that (now).

This script filters for input keys from the asciicast with

  select("i" == .[1]?)

and then extracts only the string itself with

  .[2]

I'd like to understand what is going wrong, but this style of debugging isn't working. I won't debug it on your system unless you find a way to make that sufficiently productive for me. And I really hope you created the file with --stdin, but judging from ls barfing on a non-existent file/directory, I assume the shell it spawned just did not end up calling ls with the same name, due to auto-complete changing its behaviour.
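For anyone who doesn't read jq, the same filter is easy to mirror in Python (a sketch; `input_keys` is my name for it, not part of the script):

```python
import json


def input_keys(cast_lines):
    """Mirror of the jq filter  select("i" == .[1]?) | .[2]  --
    keep only stdin ("i") events and return their recorded keystrokes."""
    keys = []
    for line in cast_lines:
        try:
            ev = json.loads(line)
        except ValueError:
            continue  # malformed line; skip, as jq's ? would
        # The header is a JSON object, not an array; jq's .[1]?
        # silently skips it, so we do the same here.
        if isinstance(ev, list) and len(ev) >= 3 and ev[1] == "i":
            keys.append(ev[2])
    return "".join(keys)
```

Run over a `--stdin` recording, this yields exactly the character stream the shell originally received, which is why replaying it depends on auto-completion behaving identically.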


I appreciate your guidance. I don't expect you to debug it.

But it seems to me we were using the term "live" differently. I think I understand now that the code you provided runs the recording on the current machine, similar to a macro. The live mode I was referring to from pias takes the recording but times the keystrokes to whatever you type during the presentation, so one doesn't need to check. It also makes a nice trick for showing off, as one can do fun stuff like type with one's feet. Ah, I see: they call this "manual typing", and have a "Live Replay" just as you suggested. So it was me being imprecise; sorry.

You were right that the errors I was experiencing had to do with the shell. I use fish; forcing everything to use bash made the previous errors disappear. However, in my test recording it got stuck in vim; I suppose escape didn't work.


Escape should work; you'd wanna check the keys around that tho. It might be a timing issue. The requirement to not use fish in that mode is just due to fish exhibiting hidden non-constant state.

I did think about how to do that timing, but I could not figure out how to do it. It should be a very short python script or so tho: just replace the 'jq ksjdfkasjdfk |' with a script that reads the newline-delimited JSON, waits until the next timestamp, sends that data to the shell, and parses the next JSON. You'd just have to take a system time at the beginning, and not simply sleep for the offset between the timestamps in the JSON, as you'd otherwise drift late by the inaccuracy of the system sleep call plus the processing needed to extract the data from the JSON. Otherwise this should yield timing predictable enough for such a live replay.

The escape char seems to be kinda weird indeed. I'm not sure where that issue is tho.
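A sketch of that timing idea: compute each delay against the absolute start time, so sleep inaccuracy and parsing overhead don't accumulate per event (the function name and the injectable clock/sleep parameters are my own construction, for testability):

```python
import json
import sys
import time


def replay_input(cast_lines, write=sys.stdout.write,
                 now=time.monotonic, sleep=time.sleep):
    """Replay the "i" (stdin) events of an asciicast v2 stream.

    Delays are computed against the absolute start time, so clock
    error from sleep() and JSON parsing doesn't compound per event.
    """
    start = now()
    for line in cast_lines:
        ev = json.loads(line)
        if not isinstance(ev, list) or ev[1] != "i":
            continue  # skip the header object and output events
        delay = ev[0] - (now() - start)
        if delay > 0:
            sleep(delay)
        write(ev[2])
```

In practice you'd point `write` at the stdin of the shell you're driving rather than at stdout.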


With all the terminal recordings I have seen, the content eventually ends up at the bottom. It would be awesome to have the cursor always stay in the middle... any idea if that's possible with Asciinema?


Why do you want to have it in the middle? Then you waste half the screen space right?

If you want it to push from the top, I could understand... but in the middle?


Also see ttyrec, for a standalone terminal recording/playback utility:

http://0xcc.net/ttyrec/


I last used Asciinema when presenting my thesis. It certainly gave the presentation a nice vibe, but I still had to fire up a real terminal window when answering questions.


This fixes my main grievance with ttyrec: no seeking!


tty-player [1] gives you seekability, scrolling, and copy/paste with standard ttyrec files. Much easier for self-hosting than Asciinema and permits the use of other existing tools that work with ttyrec output.

[1] http://tty-player.chrismorgan.info


Super cool!

IMO, the killer feature is how easy it is to embed recordings into your project readme or issues/comments in markdown/html.


Still doesn't support Windows =(


There's a “Contributing” link at the bottom of the page.


People who contribute to FOSS usually don't use Windows anyway, though.


Do you have stats at hand about people who whine software isn't available for their platform?


I read this as "Ascii enema". Yikes


Me too. What's the correct pronunciation? "Ass-cinema"?


I've always read 'as-ski-nuh-mah', like 'ascii' + 'nema' of 'cinema' - but with the timing of 'as' + 'cinema', stress on 'scii'.


From the website: "asciinema [as-kee-nuh-muh]"


My brain did it as ascii-cinema, which may be non-standard.


Read it the exact same way.

This is a perfect example of why engineering has a separate marketing division.


I mean, O's wouldn't be so bad. #'s would hurt.


Or just the `script` command?

http://man7.org/linux/man-pages/man1/script.1.html

Still, that's pretty sick that you can pause the "video" and highlight/copy/paste in-browser


The script command doesn't record timing data, so it's not very useful for playback. It's designed for transcripts, not for videos.


Per the man page, `script -t` captures timing data, which you can play back easily with `scriptreplay -t`

http://man7.org/linux/man-pages/man1/scriptreplay.1.html


Yes, but the script command on *BSD and macOS doesn't support this. A long, long time ago asciinema wrapped script and then did the upload, but that worked only on Linux, and I wanted it to work for users of other Unix-like systems, so here we are.


On Linux it does have timing recording; try:

  script -tshell_recording.time shell_recording.script bash

  scriptreplay -t shell_recording.time shell_recording.script



