Raspberry Pi + Lego Case :3

Maybe not that interesting, but I felt like this had to appear here. I got my RPi in July and recently started to build a Lego case for it. The one you see in the picture (fullsize) started out as "find as many red parts as possible and use the other colors to create a prototype".
After a while I started to replace strange colors (brown, green, pink, etc.) as well as the black and grey bricks in my prototype with similar parts in blue, white or yellow. What you see is how far I got ... a few flat parts and a bigger "window" for the LEDs are still to be found. The completely red version is far from complete. I'd need 9 1x1 flat bricks with a flat surface on top — not sure if we have that many.

Anyway — the Pi is up and running, serving mostly as an IRC client at the moment; I guess it'll have the honor of taking care of one or the other cronjob in the near future and offering some web based "services" for me.

Miscellaneous information: I got the idea to build a Lego case from Biz's LEGO case. I used panels for my case because they leave a bit more space inside. Everything located above a slot is fixed to the lid, so the Pi can be taken out once the case is opened without the need to take it apart. I noticed that my case might not leave enough space for an RCA plug, but I don't plan to use it anyway. I use Arch Linux ARM. Here's a picture of my Raspberry Pi next to a raspberry pie.

2012-08-14

JSONProxy

Before I get to what this is actually about, some words for those who aren't familiar with AniDB, MyAnimelist or XDCC bots:
AniDB and MyAnimelist are both websites that let you create a list of anime titles. AniDB focuses on the anime episodes you have stored somewhere; MyAnimelist cares more about which episodes you have seen.
XDCC bots are IRC bots that send you files on demand. They are a common distribution method for fansubbed anime and offer so-called packlists, where they list all the files they offer.

Now to the real story: I'm a user of both AniDB and MyAnimelist, but I only maintain a list on the latter. I use AniDB for keeping up to date with newly released episodes of airing series. The problem is that if you only want to see notifies for new releases from certain subgroups, you have to add episodes of the respective anime from that subgroup to your list. I, however, don't maintain a list on that site, so I'd get a ton of unnecessary notifies.

Now, since I get all my new episodes from XDCC bots, I thought I could make use of simply parsing their packlists, which was the first thing I did: write a JSON file with all the necessary info, a small PHP script to parse the packlists and a minimalistic site to display the output. With that I always had an up-to-date list of all released episodes of the series I watch, and only from the subgroups I want them from.
The problem with that approach was that I had to memorize the number of the last episode I had seen for each of those anime. So I had to come up with something different. Since MyAnimelist always knows which episode of which anime I watched last, it seemed natural to use that. Also, since I open the page with my list on it several times a day, it seemed like a good idea to simply include the information about new episodes there.
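Just to illustrate the packlist-parsing idea: the actual script was PHP, but the gist looks roughly like this in JavaScript (the packlist format, titles and function names here are made-up assumptions, not the real code):

```javascript
// A few fake packlist lines, in a typical XDCC-bot style:
const packlist = [
  '#101  12x [233M] [SomeSubs] My Anime - 05 [720p].mkv',
  '#102   9x [234M] [SomeSubs] My Anime - 06 [720p].mkv',
  '#103   2x [231M] [OtherSubs] My Anime - 07 [720p].mkv',
];

// Keep only releases from the wanted subgroup, pull out the episode
// number and report the newest one.
function latestEpisode(lines, subgroup, title) {
  const re = new RegExp('\\[' + subgroup + '\\] ' + title + ' - (\\d+)');
  return lines.reduce((max, line) => {
    const m = line.match(re);
    return m ? Math.max(max, parseInt(m[1], 10)) : max;
  }, 0);
}

console.log(latestEpisode(packlist, 'SomeSubs', 'My Anime')); // 6
```

Note how the third line is ignored: it's the right anime but the wrong subgroup, which is exactly the filtering AniDB's notifies wouldn't do for me.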

So I wanted to write a userscript that would obtain the information from both my JSON file and the packlists, look at the site itself and, if there were any episodes released that I hadn't seen, inform me about that ... well, the same origin policy says no. The userscript is written in JavaScript, and common browsers won't let it perform an XMLHttpRequest for resources hosted on a different server than the site you are viewing when the script is executed.
I had to find a workaround and thought of JSONP, a method where you create a new script tag in the head of the page with the remote resource you want to access as the src attribute of that tag. The problem is that this remote resource has to be a string containing a JavaScript method call, so that you can actually use what you got from the remote server. But I wanted to access packlists I had no control over. It then occurred to me that I could simply write a PHP script that would access and parse the packlist and return a JSONP string — a JSONP proxy, you could say. ^^ My userscript could then access that.
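To make that flow a bit more concrete, here's a minimal sketch (all names are hypothetical, this is not the actual code). In the browser the execution would be triggered by injecting a script tag; below, that step is simulated so the snippet is self-contained:

```javascript
// What the JSONP proxy would send back: not plain JSON, but a call to a
// callback the userscript has defined beforehand.
const jsonpResponse = 'handlePacklist({"title": "Some Anime", "latest": 7})';

let latestSeen = null;
function handlePacklist(data) {
  latestSeen = data.latest;
}

// In the browser, the userscript would trigger execution like this:
//   const s = document.createElement('script');
//   s.src = 'https://my.server/jsonproxy.php?id=some-anime';
//   document.head.appendChild(s);
// Here we simulate "the browser executes the response" directly:
new Function('handlePacklist', jsonpResponse)(handlePacklist);

console.log(latestSeen); // 7
```

The key point is that a script tag's src is not restricted by the same origin policy, so the data arrives as executable code instead of as an XMLHttpRequest response.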

This is what I ended up doing. :) My JSONProxy parses packlists based on my listing of URLs, titles and subgroups, and my userscript (actual code) uses that to inform me about new episodes. :3

I guess it would be quite easy to write a universal JSONProxy that takes a GET parameter (the URL of the document one wants to access) and returns a JavaScript method call with the whole site as a parameter. What I'm not sure about is whether there are restrictions on the src attribute of script tags and how messy the escaping of whole websites for the JS method call would get. ^^
Oh, and by the way: since the actual request to the site you do not control (in my case the packlists) is performed not by the browser the JS is running in, but by the PHP script on your server, this method doesn't violate the same origin policy any more than JSONP itself does — it's just a handy way to access data from other servers when you write userscripts. :)
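For what it's worth, the escaping part at least looks manageable: JSON.stringify (or json_encode on the PHP side) already turns an arbitrary page into a valid JavaScript string literal. A hedged sketch, with a made-up callback name:

```javascript
// An arbitrary fetched page, full of quotes and newlines:
const fetchedPage = '<html>\n<body class="x">some "packlist" stuff</body>\n</html>';

// JSON.stringify emits a valid JS string literal, escaping included, so a
// universal proxy could build its whole response in one line:
const jsonpBody = 'myCallback(' + JSON.stringify(fetchedPage) + ')';

// The consuming userscript gets the page back byte for byte:
let received = null;
function myCallback(page) { received = page; }
new Function('myCallback', jsonpBody)(myCallback);

console.log(received === fetchedPage); // true
```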

2012-07-19

VZ Netzwerke message backup

A few days ago I read that the VZ Netzwerke (a group of dying social networks once popular in Germany) will be renamed to or restarted as idpool. Reading about those networks alone reminded me that ever since I stopped using them I had wanted to back up all my messages. Hearing that the platform will undergo serious changes some time soon made me take immediate action.

I wrote a Perl script to extract the messages and store them in JSON files. To view the messages I wrote a simple web interface in PHP.
I ran into some problems with SSL connections, which is why the script uses a session ID instead of an e-mail address and password as a launch parameter. Additionally, when dealing with large amounts of messages, the script decided to freeze after a certain number of parsed messages, so I wrote a second version that uses threads to parse messages and, in case one freezes, simply parses the message again. This second version, however, is painfully slow because of my crappy implementation — but since it does a job that only has to be done once, I'm fine with that. :D
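The "parse it again if it freezes" trick isn't Perl-specific; the same idea, a deadline per attempt and a retry on timeout, can be sketched like this in JavaScript (function names and the demo are made up, not the real script):

```javascript
// Race each parse attempt against a deadline; a frozen attempt is simply
// abandoned and the message is parsed again.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timeout')), ms)),
  ]);
}

async function parseWithRetry(parseFn, attempts = 3, ms = 5000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(parseFn(), ms);
    } catch (e) {
      // frozen or failed attempt: just try again
    }
  }
  throw new Error('gave up after ' + attempts + ' attempts');
}

// Demo: a parse that "freezes" on the first call, then succeeds.
let calls = 0;
const flakyParse = () => new Promise((resolve) => {
  calls += 1;
  if (calls > 1) resolve('parsed message'); // first call never resolves
});
parseWithRetry(flakyParse, 3, 50).then((r) => console.log(r)); // "parsed message"
```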

Concerning the web interface, one thing was very important for me: having messages and answers that belong together displayed in the same place. Something the VZ Netzwerke never offered — you only have your in- and outbox. Separated. Absolutely terrible if you want to read an old conversation again. So ... my web interface shows all messages to and from a person in one place. : )
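The merging itself is simple once both boxes share one format. A sketch of the idea (the real interface is PHP reading the dumped JSON files; the field names here are invented):

```javascript
// Two separate boxes, the way the site stores them:
const inbox  = [{ from: 'Alice', text: 'hi',  date: '2009-01-02' }];
const outbox = [{ to: 'Alice',   text: 'hey', date: '2009-01-03' }];

// Group everything by conversation partner and sort chronologically,
// so question and answer finally appear next to each other.
function conversations(inbox, outbox) {
  const byPerson = {};
  const add = (person, msg) =>
    (byPerson[person] = byPerson[person] || []).push(msg);
  inbox.forEach((m) => add(m.from, { ...m, direction: 'in' }));
  outbox.forEach((m) => add(m.to, { ...m, direction: 'out' }));
  for (const msgs of Object.values(byPerson)) {
    msgs.sort((a, b) => a.date.localeCompare(b.date));
  }
  return byPerson;
}

console.log(conversations(inbox, outbox).Alice.map((m) => m.text)); // [ 'hi', 'hey' ]
```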

In case you'd like to back up your messages too, here you go: vz_messageparser.tar.gz
Run perl vzmp.pl studi|schueler|mein <sessid> (use vzmp_crap.pl if necessary) and view the index.php (webserver required, ofc) as soon as the script has finished.

Here is a video showing how it works.

2012-06-14
