Saturday, September 27, 2014

Download and merge multiple video files (from twitch or youtube)

Last year I posted how to download and merge clips manually; recently I threw together a script that takes a Twitch video, downloads the mp4 files, and then merges them all.

You'll have to download and compile ffmpeg, which is a hassle, but since I use this script often it's worth the trouble.

tempfolder=$(mktemp -d --tmpdir=./)

pushd "$tempfolder"
read -p "Input the date (YYYY-MM-DD): " date
echo "files:"
youtube-dl -e "$@"    # lists the file titles
echo "choosing one"
default_name=$(youtube-dl -e "$@" 2>&1 | head -1)
read -p "Name is $default_name. Override?: " name
if [ -z "$name" ]; then
    name=$default_name
fi

youtube-dl "$@"
file=$(mktemp --tmpdir=./)
for i in *.flv; do
    echo "file '$i'";
done > "$file"
ffmpeg -f concat -i "$file" -c copy "$date - $name.mp4"
mv "$date - $name.mp4" ..
popd

rm -r "$tempfolder"

The script will actually take multiple links as well, and concatenate them in order according to the Twitch IDs.
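For reference, the list file the loop above generates is just the plain-text format ffmpeg's concat demuxer expects: one `file '<name>'` line per input, in order. A minimal sketch of that step (the helper name is mine, not from the script):

```shell
# Build the concat list that ffmpeg -f concat reads: one line per input file.
make_concat_list() {
    local f
    for f in "$@"; do
        echo "file '$f'"
    done
}

make_concat_list part1.flv part2.flv part3.flv
# emits:
# file 'part1.flv'
# file 'part2.flv'
# file 'part3.flv'
```

Because the demuxer copies streams in list order, the filenames just need to sort the way you want them played back.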

Psql Snapshot

Ever want to make some temporary changes to a (test) database that you can't undo with a simple rollback? I can't remember why I needed this, but it looks handy.

# This was to create a quick snapshot of the database, make the changes I
# wanted, and then revert the database to exactly the way it was before I
# snapshotted it.

/etc/init.d/postgresql stop
umount /var/lib/postgresql/
lvcreate --snapshot --size 1G --name postgres-snap /dev/vg0/postgres
mount /dev/vg0/postgres-snap /var/lib/postgresql/
/etc/init.d/postgresql start

read -p "Press enter to start watching. And then ^C to drop the snapshot and switch to the old database" zzz
watch lvs

/etc/init.d/postgresql stop
umount /dev/vg0/postgres-snap && /sbin/lvremove --force /dev/vg0/postgres-snap
mount /dev/vg0/postgres /var/lib/postgresql/
/etc/init.d/postgresql start
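The `watch lvs` step is there because a snapshot silently becomes invalid once its copy-on-write space (the 1G from `lvcreate --size 1G`) fills up. A small helper like this could pull the fill percentage out of `lvs` output; the column layout below is an assumption based on a typical `lvs` run, so check your own output first:

```shell
# Hypothetical helper: extract the Data% column for a named LV from `lvs`
# output, so you can see how close the snapshot is to filling up.
snap_usage() {
    # $1 = LV name; stdin = output of `lvs`
    awk -v lv="$1" '$1 == lv { print $6 }'
}

# Canned sample of `lvs` output (assumed layout) for illustration:
lvs_sample="  LV            VG  Attr       LSize Pool Origin   Data%
  postgres      vg0 owi-aos--- 20.00g
  postgres-snap vg0 swi-aos---  1.00g      postgres 12.35"

echo "$lvs_sample" | snap_usage postgres-snap
```

In practice you would pipe real output: `lvs | snap_usage postgres-snap`.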

Transwiki copying

I was working with someone to set up another copy of a wiki, and I found that there was a way to download the wiki pages (/wiki/Special:Export/), and that using this it was possible to import them as well.

So I thought I'd try scripting it.

Downloading the pages was the "easy" part. I had to toy around with some things, but I managed to throw together a wiki downloader.

It scans all the pages (from /wiki/Special:AllPages), generates a list of pages (which can be cached), and then goes through and downloads each one individually into a folder that you specify.
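The per-page download step can be sketched roughly like this, assuming a stock MediaWiki URL layout; the base URL, output folder, and helper name here are all placeholders, not the actual script:

```shell
# Hypothetical sketch of the download step. Special:Export/<title> returns
# the page's content as an XML dump fragment.
WIKI=https://example.org      # base URL is an assumption
OUTDIR=./dump

export_url() {
    echo "$WIKI/wiki/Special:Export/$1"
}

# For each title in the cached page list, fetch it into the output folder:
# mkdir -p "$OUTDIR"
# while read -r title; do
#     curl -s "$(export_url "$title")" > "$OUTDIR/$title.xml"
# done < pagelist.txt
```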

The four main chunks to download are:
- categories
- files
- templates
- pages (main)

There's a switch for each one on the script. For the complete thing, include all 4.
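The per-chunk switches could be wired up with `getopts`; the flag letters here are my invention, not the script's actual interface:

```shell
# Hypothetical switch handling: -c categories, -f files, -t templates,
# -p pages, and -a for the complete set of all four.
parse_chunks() {
    categories=0; files=0; templates=0; pages=0
    local OPTIND opt
    while getopts "cftpa" opt; do
        case $opt in
            c) categories=1 ;;
            f) files=1 ;;
            t) templates=1 ;;
            p) pages=1 ;;
            a) categories=1; files=1; templates=1; pages=1 ;;
        esac
    done
}
```

Declaring `OPTIND` local lets the function be called more than once per shell session.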

You can merge the files into one file:
(echo "<mediawiki>"; cat *.xml | grep -Ev "^</?mediawiki"; echo "</mediawiki>") > ~/xml-download.xml
And transfer that to your host. I had problems loading the large file with MediaWiki's PHP import script, so I just transferred each small file instead. (The import crashed partway through and I had to start over; I wasn't watching, so I don't know why.)
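The merge is easy to sanity-check on tiny sample files: strip the per-file `<mediawiki>` wrappers, add a single one around the whole thing, and confirm every page survives. A self-contained check (sample files and paths are mine):

```shell
# Verify the merge keeps exactly one <mediawiki> wrapper and all <page> blocks.
tmpd=$(mktemp -d)
cat > "$tmpd/a.xml" <<'EOF'
<mediawiki>
  <page><title>A</title></page>
</mediawiki>
EOF
cat > "$tmpd/b.xml" <<'EOF'
<mediawiki>
  <page><title>B</title></page>
</mediawiki>
EOF

# Same pipeline as above, pointed at the sample folder:
(echo "<mediawiki>"; cat "$tmpd"/*.xml | grep -Ev "^</?mediawiki"; echo "</mediawiki>") > "$tmpd/merged.xml"

grep -c "<page>" "$tmpd/merged.xml"    # count of surviving pages
```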

I'm doing the import on HostMonster, so I had to modify the php.ini settings, using a local copy of the configuration:

Then I changed directory to the folder with the folders I uploaded and ran this:
find . -type f | while read -r f; do echo "processing $f"; php -c /home/user/php.ini /home/user/public_html/site/maintenance/importDump.php < "$f"; sleep 1; done

I added the 'sleep' because I was afraid hammering the server might get the process killed on HostMonster. But that might not be needed.
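Since the import crashed partway through once, a resumable variant would have helped: log each file that imports cleanly, and skip logged files on a rerun. This is an after-the-fact sketch, not the loop I actually ran:

```shell
# Resumable import loop (hypothetical): record finished files in a log so a
# rerun after a crash picks up where it left off.
DONE_LOG=./imported.log
touch "$DONE_LOG"

import_all() {
    find . -type f -name '*.xml' | while read -r f; do
        grep -qxF "$f" "$DONE_LOG" && continue   # already imported, skip
        echo "processing $f"
        php -c /home/user/php.ini /home/user/public_html/site/maintenance/importDump.php < "$f" &&
            echo "$f" >> "$DONE_LOG"             # log only on success
        sleep 1
    done
}
```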

Rebuild recent changes (as requested) after loading everything.
php -c /home/user/php.ini /home/user/public_html/site/maintenance/rebuildrecentchanges.php

I also ran 'rebuildall', but I can't remember why.
php -c /home/user/php.ini ./maintenance/rebuildall.php

Import the images:
php -c /home3/user/php.ini /home/user/public_html/site/maintenance/importImages.php /home3/user/public_html/site/path_to_files/
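All of the maintenance invocations above repeat the same `php -c <ini> <path>` prefix, so a tiny wrapper keeps the custom php.ini and the install path in one place. The paths are copied from the commands above; adjust them for your host:

```shell
# Hypothetical wrapper: run a MediaWiki maintenance script with the local
# php.ini, so the ini path and install path only appear once.
PHP_INI=/home/user/php.ini
MW_DIR=/home/user/public_html/site

run_maint() {
    php -c "$PHP_INI" "$MW_DIR/maintenance/$1"
}

# Usage:
# run_maint importDump.php < dump.xml
# run_maint rebuildrecentchanges.php
# run_maint rebuildall.php
```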