Recently I've been experimenting with operating a live coding stream of my desktop. What this means in practice is that I focus on a single project or set of tasks for an hour or two, while streaming everything I do to my channel on justin.tv.
Here's a boring demonstration:
I'll save my thoughts on how the experiment is going for a later blog post; in this post I just want to share the how.
First, I use this script:
#!/bin/sh -xe

INFO=$(xwininfo -frame)
API_KEY="YOUR_API_KEY_GOES_HERE"
WIN_GEO=$(echo $INFO | grep -oEe 'geometry [0-9]+x[0-9]+' | grep -oEe '[0-9]+x[0-9]+')
WIN_XY=$(echo $INFO | grep -oEe 'Corners:\s+\+[0-9]+\+[0-9]+' | grep -oEe '[0-9]+\+[0-9]+' | sed -e 's/\+/,/')
FPS="15"
INRES='1680x1010'
OUTRES='1280x720'

ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0+$WIN_XY \
    -f alsa -ac 2 -i default -vcodec libx264 -s "$OUTRES" \
    -acodec libmp3lame -ab 128k -ar 44100 -threads 0 \
    -f flv "rtmp://live.justin.tv/app/$API_KEY"
There are a couple of important things to mention about this:
The xwininfo invocation just gives me a cross-hairs cursor so I can select which window area I want to record. This doesn't technically restrict ffmpeg(1) to just that window; rather, it grabs the window's offset and uses that.
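To make the geometry-parsing step above concrete, here is a small sketch that runs the same grep/sed pipeline against a sample of xwininfo output (the window id and coordinates are hypothetical, not from my setup):

```shell
# Hypothetical excerpt of `xwininfo -frame` output for a selected window.
INFO='xwininfo: Window id: 0x3c00041 "editor"
  Corners:  +100+200  -520-90
  geometry 1680x1010+100+200'

# Same extraction as the script: unquoted $INFO collapses the output onto
# one line, the first grep isolates the interesting fragment, the second
# pulls out just the numbers, and sed turns "X+Y" into the "X,Y" form
# that ffmpeg's x11grab offset syntax expects.
WIN_GEO=$(echo $INFO | grep -oEe 'geometry [0-9]+x[0-9]+' | grep -oEe '[0-9]+x[0-9]+')
WIN_XY=$(echo $INFO | grep -oEe 'Corners:\s+\+[0-9]+\+[0-9]+' | grep -oEe '[0-9]+\+[0-9]+' | sed -e 's/\+/,/')
echo "size=$WIN_GEO offset=$WIN_XY"
```

So for a window whose top-left corner sits at +100+200, the capture input ends up being `:0.0+100,200`.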
The -f alsa -ac 2 -i default flags pertain to the audio input. According to arecord -L, the PulseAudio sink I should use is called "default"; on some machines it might be called "pulse", so your mileage may vary.
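Since the device name varies between machines, one option is to probe arecord -L for it instead of hardcoding. This is a sketch of my own, not part of the original script; it prefers a device literally named "pulse" and falls back to "default" otherwise:

```shell
# Pick the ALSA capture device: use "pulse" if arecord lists a device
# with exactly that name, otherwise fall back to "default".
# (grep -x matches the whole line, so indented descriptions are skipped.)
AUDIO_DEV=$(arecord -L 2>/dev/null | grep -x 'pulse' || echo 'default')
echo "using audio device: $AUDIO_DEV"
```

You could then pass `-i "$AUDIO_DEV"` to ffmpeg in place of `-i default`.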
I use the pavucontrol (GUI) tool to direct my audio output to the PulseAudio input, which allows me to share what I'm listening to with whoever is watching the stream.
Because of the $OUTRES parameter, I am technically resizing the video on the fly, which takes up a lot of CPU power. You may or may not want to do this depending on your machine's speed, the size of your screen, and the amount of desktop space you want to stream.
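If the rescale is too heavy for your machine, one option (a variant of my own, not from the original script) is to drop the second -s flag and stream at the capture resolution, reusing the same variables as above:

```shell
# Variant: stream at the capture resolution, skipping the on-the-fly
# resize. Assumes $INRES, $FPS, $WIN_XY, and $API_KEY are set as in the
# script above; the only change is removing -s "$OUTRES" from the
# output side, so libx264 encodes the frames as captured.
ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0+$WIN_XY \
    -f alsa -ac 2 -i default -vcodec libx264 \
    -acodec libmp3lame -ab 128k -ar 44100 -threads 0 \
    -f flv "rtmp://live.justin.tv/app/$API_KEY"
```

The trade-off is more upload bandwidth for less CPU.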
That's about all there is to it. I wish I could either take credit for the script or at least attribute it, but I can do neither because I found it on a forum somewhere and then quickly lost the link. Whoops.