I think I was pushing the limits of what the service could handle with my fluid simulations. They're pretty choppy on first playback, but run smoothly the second time.
I built it to be a reference implementation for "Fluid Simulation for Computer Graphics" (1st Ed.) by Robert Bridson. The book is a cleaned-up version of his free course notes. If I had the time, I'd clean up the code a little; the book already has a great explanation of how it works.
These remind me of the ASCII fluid submission [0] at IOCCC 2012 [1] by Yusuke Endoh. The source [2] itself can be used as an input to the program (see video [3]), and he also made a colour version [4].
My co-founder wrote a bit about Asciinema as part of a comprehensive comparison of terminal recorders [1]. There's a lot to love about it (wide availability, very nice JavaScript playback -- check out the Game of Life playback at [1]), but it's a bit of a chore to self-host, if that's important to you.
asciinema dev here. Where did you get the impression it's proprietary? I'm really curious, because it's fully open-source, with all components open-source and usable independently, and there's no company or business model behind it. Just me and my spare hobby time.
This is really cool, and if I wanted to host a demo on a public website or something like that (attempting to keep bandwidth use down), I would use it in a heartbeat.
My use case for terminal demos is very different from that. Usually I'm just recording something short and emailing it to one person. Bandwidth savings do not accumulate across many website visitors, and simple is perfect.
The nice thing about video is I can just email it. I don't need to host a web server instance for playback. Even at very high quality settings ("ffmpeg -qscale:v 1", nothing blurry about it) the resulting video is only about 1MB/minute (80x24 term, h264 or h265).
Here's my demo workflow, in case folks are curious:
1. simplescreenrecorder to record terminal window + voice.
2. ffmpeg -i recorded.mkv -vcodec libx264 -qscale:v 1 -acodec aac encoded.mp4 # or s/264/265/ for slightly better compression, if the receiver has new enough codecs
The main killer feature, in my opinion, is that you can pause and copy and paste the text from any given point, not that you can record and share a terminal session, which you could do with basically any video recording software.
It depends what you demo and what hat you wear: if you're on the receiving end of a tutorial, being able to copy the text you just saw on the screen is nice. If you're just giving the demo (and/or the thing you're demoing doesn't have anything useful to copy, e.g. it's not a tutorial), then you probably don't care about it.
You can send the recording (JSON file) of asciinema too, and it's significantly smaller in size. The receiver does need to have asciinema installed to play it back, though.
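For example (demo.cast is just an arbitrary filename):

asciinema rec demo.cast    # record to a local file instead of uploading
# ...send demo.cast to the recipient, who then runs:
asciinema play demo.cast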
Perhaps because it has lots of fans who like to spread the word?
I use it only very occasionally, like once a year, but love the way it works. My first experience was perfect, I really couldn't think of a single thing to improve (and that's very rare), so that left a very good impression.
Tried out asciinema earlier this year, and it is super duper cool.
TLDR: Wish it included some functionality to get finished usable media artifacts out.
In the end I gave up after failing to find a way to convert the resulting asciinema data files to a GIF or any other HTML-compatible media file. The open-source projects ended up not working for various reasons: broken shell scripts, abandoned projects, node.js programs that require a fully working phantomjs installation and then still don't work, projects requiring docker (come on, really? Spin up containers just to convert to a GIF?), and these are only the problems I remember offhand.
I swear I can't be the only one who's bugged by the fact that the link to the recording [0] doesn't actually play out the same way as the session you just watched on the front page! [1]
This is a JSON-based container format for timestamped terminal sequences, including the terminal dimensions, the shell, and the terminal used (i.e., the values of SHELL and TERM).
See this minimal example [0]:
As you can see, it was easy for me to redact the user and hostname from the prompt, without you even seeing it.
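For illustration, a (v2) cast is one JSON header line followed by one JSON array per timestamped event, along these lines (values made up):

{"version": 2, "width": 80, "height": 24, "env": {"SHELL": "/bin/bash", "TERM": "xterm-256color"}}
[0.1, "o", "user@host:~$ "]
[1.4, "i", "l"]
[1.4, "o", "l"]
[1.6, "i", "s\r"]
[1.6, "o", "s\r\nREADME.md\r\nuser@host:~$ "]

Each event is [elapsed seconds, "o" for output or "i" for input, the data itself], so redacting the prompt is just a matter of editing the "o" strings.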
The canonical implementation is in Python (3, iirc), but you should be able to hook up a non-X11 terminal emulator, make it spew out a frame for each individual timestamp, and create a variable-framerate mkv from that, with uncompressed video. This is then trivial to feed into ffmpeg for transcoding into your favourite codec, possibly even including a separate audio stream in the resulting mux.
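For the ffmpeg end, one way to encode such per-timestamp frames with variable durations is the concat demuxer; a sketch with made-up filenames:

# frames.txt: one file/duration pair per rendered frame
file 'frame_0001.png'
duration 0.50
file 'frame_0002.png'
duration 0.13
file 'frame_0002.png'
# (the concat demuxer wants the last frame repeated, without a duration)

ffmpeg -f concat -i frames.txt -c:v libx264 -pix_fmt yuv420p out.mp4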
I might, if I get a reason to do this before other projects, take an existing terminal emulator and teach it to do this (or possibly to output lossless VC1 directly, removing the need to search for matching glyphs later in the process, which should yield reasonably small files straight from the conversion, without quality loss). So, if someone wants me to do this, my email's in my profile.
I discovered that my Docker demonstration isn't updating the background colour correctly through asciinema. It looks good in my terminal, but maybe I have old man eyes.
Wow, this happens rarely, but I really can't understand your answer. I never would have guessed jq to have any use for presentation. Can you give a short example of how to apply your script?
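Roughly like this, assuming you recorded with --stdin (demo.cast stands for your recording):

jq -j 'select("i" == .[1]?) | .[2]' demo.cast | bash -i    # feed the recorded keystrokes to an interactive shell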
and it will execute whatever was in there. If some programs were reading more than they actually wanted, it would have trouble, because they would likely get fed further commands/input, which would no longer go to the shell/later program.
Regarding the script: it's just using jq because asciinema uses a JSON-based format, from which some data needs to be extracted.
It should be self-evident from looking at what jq is doing and what an interactive shell does different from a non-interactive one (hint: it's related to autocompletion), as well as a sample file created by the above command.
If you still don't understand, tell me, and I'll try to explain better.
And sorry for the delay, I wasn't expecting a timely response that asks me for something.
Thanks for the reply. It doesn't happen every day that someone answers your sorrows with a two-liner. I don't really see at what point it is checking for my pressed keys.
I just now was able to try your snippet. Unfortunately I can't get it to work; I just end up with lines like `ls: no such file or directory`.
You might want to check what your shell does, as this relies on auto-completion doing the same thing if you replay the same keystrokes, even if that happens faster.
Specifically, try to intercept the output of the last pipe into a
less -U
and check whether that looks like what you were typing. If you want, you can replay the special chars by pressing Ctrl+<whatever symbol follows the caret>.
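I.e., the check would be something like

jq -j 'select("i" == .[1]?) | .[2]' demo.cast | less -U    # demo.cast = your recording

to see the raw keystrokes, control characters included.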
Try not to use a fancy auto-completion like the ones fish or fzf provide, as they tend not to behave the same when doing a replay.
This should not be hard; I used bash successfully. I hope you did not forget to run it on the same filesystem state, as far as auto-completion behavior during the asciicast is concerned.
Also, this could be improved by delaying the replay as long as specified by the asciicast. But I couldn't quickly figure out how to get that done without further dependencies, so I won't provide that (now).
This script filters for input keys from the asciicast with
select("i" == .[1]?)
and then extracts only the string itself with
.[2]
I'd like to understand what is going wrong, but this style of debugging isn't working.
I won't debug it on your system unless you find a way to make that sufficiently productive for me.
And I really hope you created the file with --stdin, but judging from ls barfing on a non-existent file/directory, I assume the shell it spawned just did not result in ls being called with the same name, due to auto-complete changing its behavior.
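That is, the recording would have to have been made with something like

asciinema rec --stdin demo.cast    # --stdin records keystrokes as "i" events

otherwise there are no input events in the file to replay at all.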
I appreciate your guidance. I don't expect you to debug it.
But it seems to me we were using the term "live" differently.
I think I understand now that the code you provide runs the recording on the current machine - similar to a macro.
The live mode I was referring to from pias takes the recording but times the keystrokes to what you type during the presentation, so one doesn't need to check. It also makes a nice trick for showing off, as one can do fun stuff like typing with your feet. Ah, I see: they call this "manual typing" and have a "Live Replay", just as you suggested. So it was me being imprecise; sorry.
You were right: the errors I was experiencing had to do with the shell. I use fish; forcing everything to use bash made the previous errors disappear. However, in my test recording it got stuck in vim; I suppose escape didn't work.
Escape should work, you'd wanna check the keys around that tho. It might be a timing issue.
The requirement to not use fish in that mode is just due to fish exhibiting hidden non-constant state.
I did think about how to do that timing, but I could not figure out how to do it. It should be a very short python script or so tho: just replace the 'jq ksjdfkasjdfk |' with a script that reads the newline-delimited JSON, waits until the next timestamp, sends that data to the shell, and parses the next JSON. You'd just have to take a system time at the beginning, and not simply sleep for the offset between the timestamps in the JSON; otherwise you'd drift by the inaccuracy of the system sleep call plus the processing you have to do to extract the data from the JSON.
But otherwise this should yield a timing predictable enough for such a live replay.
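Sketched out (untested), it would be something like this, piped into bash -i:

#!/usr/bin/env python3
# Sketch: replay the keystrokes of an asciicast v2 file with their
# original timing. Usage: python3 replay.py demo.cast | bash -i
import json
import sys
import time

with open(sys.argv[1]) as f:
    next(f)                   # skip the JSON header line
    start = time.monotonic()  # take the system time once, up front
    for line in f:
        ev = json.loads(line)     # each line is [timestamp, type, data]
        if ev[1] != "i":
            continue              # only replay input events
        # Sleep until the absolute timestamp rather than for the delta
        # between events, so sleep inaccuracy and parsing overhead
        # don't accumulate.
        delay = ev[0] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        sys.stdout.write(ev[2])
        sys.stdout.flush()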
The escape char seems to be kinda weird indeed.
I'm not sure where that issue is tho.
With all the terminal recordings I have seen, the content eventually ends up at the bottom. It would be awesome to have the cursor always stay in the middle... any idea if that's possible with Asciinema?
I last used Asciinema when presenting my thesis. It certainly gave the presentation a nice vibe, but I still had to fire up a real terminal window when answering questions.
tty-player [1] gives you seekability, scrolling, and copy/paste with standard ttyrec files. Much easier for self-hosting than Asciinema and permits the use of other existing tools that work with ttyrec output.
Yes, but the script command on *BSD and macOS doesn't support this. A long, long time ago asciinema wrapped script and then did the upload. But that worked only on Linux, and I wanted it to work for users of other Unix-like systems, so here we are.
Block - https://asciinema.org/a/125371
Waterfall - https://asciinema.org/a/125380