Friday, 3 August 2012

GUADEC 2012 and Zeitgeist hackfest

Like many others, I went to the great city of La Coruña to meet up with fellow gnomies and zeitgeistians. Even though I only arrived on Sunday, I still managed to catch a couple of interesting talks, and on Monday we started the Zeitgeist hackfest, which lasted till Wednesday.

The biggest chunk of work I managed to do was reviewing RainCT's libzeitgeist2 branch (a Vala diff of more than three thousand lines), which extracts the datamodel and D-Bus interface bits from the not-that-long-ago-rewritten-in-Vala zeitgeist daemon and puts them into a library that will supersede the current libzeitgeist. The old version was conceived during my GSoC in 2010 and was purely C-based; since the daemon was written in Python back then, the library shared no code with it, and lately that has been mostly a maintenance burden for us - you can imagine it's easier to keep the library up to date when the daemon itself is built on top of it. By the end of the hackfest the branch was merged into master, and even though a few small pieces are still missing (like documentation and syntax sugar), we should finish those in a couple of days. The API is very similar to the old libzeitgeist, although we did drop the reference-stealing behaviour it used, so it's not quite as convenient to use from C as it used to be. On the other hand, it's straightforward to use from introspected languages as well as from Vala itself.
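To give a rough idea of what that means in practice, here's a small sketch of logging an event from C. It uses the old libzeitgeist API names (the new library's may differ slightly), and the application and file in it are made up; the interesting part is that with the stealing behaviour gone, the caller keeps ownership of the event and has to drop the reference itself:

    #include <zeitgeist.h>

    static void
    log_file_access (void)
    {
      ZeitgeistLog *log = zeitgeist_log_new ();

      /* illustrative event: "gedit accessed notes.txt" */
      ZeitgeistEvent *event = zeitgeist_event_new_full (
          ZEITGEIST_ZG_ACCESS_EVENT,
          ZEITGEIST_ZG_USER_ACTIVITY,
          "application://gedit.desktop",
          zeitgeist_subject_new_full (
              "file:///home/user/notes.txt",
              ZEITGEIST_NFO_DOCUMENT,
              ZEITGEIST_NFO_FILE_DATA_OBJECT,
              "text/plain",
              "file:///home/user",
              "notes.txt",
              "local"),
          NULL);

      zeitgeist_log_insert_events_no_reply (log, event, NULL);

      /* the old library's insert call used to consume the event; with the
       * stealing behaviour gone, dropping these references is our job now */
      g_object_unref (event);
      g_object_unref (log);
    }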

Apart from reviews of branches both huge and small, we also brainstormed about Zeitgeist's FTS extension (which does full-text search of the log for us, but has issues). Unfortunately it seems that all the open source search engine libraries have some problem or other, be it memory ballooning, being written in Java (which I think is pretty much unusable on the desktop), a limbo state of upstream commits, or missing features. Currently the best option seems to be LucenePlusPlus, but it falls into the "limbo state of commits" category. That being said, perhaps proclaiming our interest in it could change that? Pretty please? :)

Besides Zeitgeist, I also managed to stop by the PyGObject hackfest and bother Pitti with memory leaks we're seeing when using libdee. We didn't manage to track them down, but I have high hopes that we will. I also discussed with Ryan ways to make a library as optional a dependency as possible, and I'll apply that to the instrumentation library I'm currently working on.

One thing that pleasantly surprised me was the increased general interest in Zeitgeist from the community (at least compared to last year's GUADEC) and the number of smaller contributions. These are of course great - integrating with Zeitgeist is the way to improve the overall user experience - and it's nice to see after pushing for it for the past couple of years. Hopefully we will even see direct support for Zeitgeist in GTK soon. ;)

Last but not least, I want to thank the GNOME Foundation for sponsoring my stay.

Saturday, 21 April 2012

FTS engines - memory usage

Following up on Mathias's great post on full text search engines, I decided to take a look at the memory usage of some of the engines while performing queries. Mathias looked at Lucene++, SQLite, QtCLucene, Tracker and Xapian; I focused on only three of them - Lucene++, SQLite and Xapian (the version numbers match the ones Mathias used, as I'm also testing on Ubuntu 12.04).

The procedure was simple - I grabbed the benchmark repo (https://gitorious.org/openismus-playground/fts-benchmark), used it to build two sets of databases with 17251 and 121587 movies, and then ran valgrind's massif while performing only the queries on the already built databases. Here are the peak memory usage values:


Movies    Lucene++   SQLite    Xapian
17251     1.4 MiB    2.5 MiB   1.2 MiB
121587    3.1 MiB    2.6 MiB   5.2 MiB
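For the curious, the query side of this boils down to something like the following sketch (shown with the SQLite C API; the table and column names here are made up - the real schema lives in the benchmark repo):

    #include <sqlite3.h>

    /* Sketch of a single FTS query with the SQLite backend; "movies" and
     * "title" are illustrative names, not the benchmark's actual schema. */
    static int
    run_query (sqlite3 *db, const char *term)
    {
      sqlite3_stmt *stmt;
      int count = 0;

      if (sqlite3_prepare_v2 (db,
            "SELECT docid FROM movies WHERE title MATCH ?",
            -1, &stmt, NULL) != SQLITE_OK)
        return -1;

      sqlite3_bind_text (stmt, 1, term, -1, SQLITE_STATIC);

      /* just walk the result set, which is all the benchmark does */
      while (sqlite3_step (stmt) == SQLITE_ROW)
        count++;

      sqlite3_finalize (stmt);
      return count;
    }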

Of course, peak memory usage by itself isn't a terribly interesting number; what also matters is how the engine works with memory over time, so let's look at that as well (the images are courtesy of Milian's fantastic massif-visualizer; note that their scales are not relative to each other):


[massif-visualizer graphs for Lucene++, SQLite and Xapian - one row for the 17251-movie database, one for the 121587-movie database]

We can see that both Lucene++ and SQLite seem to build a cache on the first query and then keep using it. Xapian, on the other hand, doesn't seem to keep a cache, as the rapid drops in memory usage suggest - but maybe there's a different explanation for that.

So that's it for memory usage while performing standard queries, but what I was particularly interested in was memory usage when performing wildcard queries (as I had seen some strange behaviour here and there with Xapian). I therefore added one simple wildcard query, "T*", to the list of executed queries and ran it on the largest DBs (the ones with 121587 movies). As you can imagine, the "T*" query is really generic and matches around 110 thousand documents from the dataset, which is why I also added a limit of 10k results per query to each backend (although that shouldn't make much difference).
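With the SQLite backend the wildcard case is just a prefix query plus the 10k cap - roughly like the sketch below (table name again made up). The other engines have equivalent prefix/wildcard support, and with Xapian that term expansion step is exactly where things get interesting:

    /* The "T*" prefix query with the 10000-result cap, sketched for SQLite;
     * "movies" is an illustrative table name. */
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2 (db,
        "SELECT docid FROM movies WHERE movies MATCH 'T*' LIMIT 10000",
        -1, &stmt, NULL);
    while (sqlite3_step (stmt) == SQLITE_ROW)
      ;  /* consume at most the first 10000 matches */
    sqlite3_finalize (stmt);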

Let's look at the results:


                 Lucene++   SQLite    Xapian
Peak mem usage   4.6 MiB    7.3 MiB   442.2 MiB

[massif-visualizer graph for each engine]

Now we can clearly see that Xapian uses a huge amount of memory when expanding the wildcard query (can this be considered a bug report? :)), SQLite has a couple of peaks there but nothing to worry too much about, and Lucene++ shines with its fairly constant (and really low) memory usage.

You saw the data, so I'll leave any conclusions up to you. ;)

The small number of changes I had to make to the original benchmark repository is available as a simple diff here.

Sunday, 4 March 2012

Face detection with OpenCL

I've been meaning to write about the topic of my thesis for quite some time, but didn't really get to it until now, so even though it's almost a year late, here we go.

Before I get into the technical details, here's a YouTube video where you can see the OpenCL implementation of my detector in action:


Pretty neat, right? :) What you just saw was an implementation of a detector based on the WaldBoost algorithm (a variant of AdaBoost), running on a GPU, with a classifier trained for detecting frontal faces (and an awesome video, of course) as its input.

If you know anything about boosting algorithms, you'll know that a strong classifier is usually composed of lots of weak classifiers (which are usually very simple and computationally inexpensive functions) - in my case there are 1000 weak classifiers, each using Local Binary Patterns to extract a feature from the input texture. Unfortunately such a strong classifier is resolution-dependent, so to be able to detect objects of various sizes in the input image we need a pre-processing step.
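To make the LBP part a bit more concrete, here's a simplified CPU sketch of a single weak classifier. The structure and the 3x3 neighbourhood are illustrative (the real classifier comes from training and has its own feature layout and lookup tables), but the idea is the same: compute an 8-bit LBP code at a position inside the 24x24 window and look up the trained response for that code; the strong classifier is then just the sum of 1000 of these responses.

    /* Simplified CPU sketch of one LBP-based weak classifier. */
    typedef struct {
      int x, y;             /* feature position inside the 24x24 window */
      float response[256];  /* trained response for each of the 256 LBP codes */
    } WeakClassifier;

    static float
    eval_weak (const unsigned char *img, int stride,
               int win_x, int win_y, const WeakClassifier *wc)
    {
      /* assumes the feature never sits on the image border */
      const unsigned char *p = img + (win_y + wc->y) * stride + (win_x + wc->x);
      unsigned char c = p[0];
      unsigned code = 0;

      /* compare the centre with its 8 neighbours -> 8-bit LBP code */
      code |= (p[-stride - 1] > c) << 7;
      code |= (p[-stride]     > c) << 6;
      code |= (p[-stride + 1] > c) << 5;
      code |= (p[1]           > c) << 4;
      code |= (p[stride + 1]  > c) << 3;
      code |= (p[stride]      > c) << 2;
      code |= (p[stride - 1]  > c) << 1;
      code |= (p[-1]          > c);

      return wc->response[code];
    }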

During pre-processing we create a pyramid of images by gradually downscaling the input (oh, and we don't need colours, so we also convert it to greyscale). The detector itself still only detects faces at a resolution of 24x24, but using a mapping function we know where a detection in any of the downscaled versions lands in the original image - and there we have a resolution-independent detector. Interesting tidbit: it turned out that creating the pyramid texture by putting the downscaled images horizontally instead of vertically (which you can see in the image below) slightly improved the performance of the detector, simply because the texture cache unit had a higher hit ratio in that layout. But since the pyramid texture is then approximately 3.6 times larger than the width of the original image, and the maximum size of an OpenCL image is 4096 pixels, the detector couldn't process HD (1280x720) or Full-HD (1920x1080) videos that way. With the vertical layout, though, 1080 x 3.6 ~= 3900, so even Full-HD videos can be processed.

Left - original image, right - pyramid of downscaled images (real pyramid texture also has the original on top)
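The bookkeeping for the horizontal layout looks roughly like the sketch below. The scale step (0.75 per level here) and the maximum number of levels are made-up values - the real detector picks its own - but the important part is that each level remembers its horizontal offset and scale, so detections can later be mapped back to the original frame.

    /* Rough sketch of the pyramid layout for the horizontal arrangement. */
    #define MAX_LEVELS 16

    typedef struct {
      int   x_offset;   /* where this level starts in the pyramid texture */
      int   width, height;
      float scale;      /* level_size = original_size * scale */
    } PyramidLevel;

    static int
    build_layout (int src_w, int src_h, int min_window,
                  PyramidLevel levels[MAX_LEVELS], int *total_width)
    {
      float scale = 1.0f;
      int x = 0, n = 0;

      while (n < MAX_LEVELS)
        {
          int w = (int) (src_w * scale);
          int h = (int) (src_h * scale);
          if (w < min_window || h < min_window)
            break;                    /* smaller than the 24x24 window, stop */

          levels[n].x_offset = x;
          levels[n].width    = w;
          levels[n].height   = h;
          levels[n].scale    = scale;

          x += w;                     /* next level goes to the right of this one */
          scale *= 0.75f;             /* illustrative scale step */
          n++;
        }

      *total_width = x;               /* ends up roughly 3-4x the source width */
      return n;
    }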

Once we have our pyramid image, it's divided into small blocks which are processed by the GPU cores, and each work item (or thread, if you wish) in a block evaluates the strong classifier at a particular window position of the pyramid image. Overall we evaluate every window position - think of every pixel. (In reality it's more complicated than that - the detector uses multiple kernels, each evaluating only a part of the strong classifier. That's because WaldBoost can reject a window early without evaluating all the weak classifiers, so when a kernel finishes it has reduced the number of candidate windows, and the next kernel continues with only the windows that survived the previous steps - this also ensures that we keep most of the work items in the work groups busy.)
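Here's a CPU-side sketch of that early-rejection idea, building on the weak classifier sketch above. The per-classifier rejection thresholds come from training and are purely illustrative here; on the GPU the same loop is split across several kernels, each evaluating one chunk of the weak classifiers and compacting the list of surviving windows before the next kernel runs.

    #define NUM_WEAK 1000

    /* Returns 1 if the window survives all stages, 0 if rejected early. */
    static int
    eval_window (const unsigned char *img, int stride, int win_x, int win_y,
                 const WeakClassifier wc[NUM_WEAK],
                 const float reject_threshold[NUM_WEAK],
                 float *response_out)
    {
      float response = 0.0f;

      for (int i = 0; i < NUM_WEAK; i++)
        {
          response += eval_weak (img, stride, win_x, win_y, &wc[i]);

          /* WaldBoost: if the running sum drops below the trained threshold,
           * the window can be rejected without evaluating the rest */
          if (response < reject_threshold[i])
            return 0;
        }

      *response_out = response;
      return 1;
    }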

Once the detector finishes, we have a handful of window positions in the pyramid image together with the response value of the strong classifier at each of them, and these are sent back to the host. The CPU can then finish the detection (by simply thresholding the response values) and map the coordinates back to the input image. If you watched the video carefully, you'll have noticed that there are multiple positive responses around each face, so this would also be a good place to do some post-processing and merge them. Plus there's a false detection from time to time, so again a good place to get rid of those.
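The host-side post-processing can then be as simple as this sketch (reusing the PyramidLevel layout from above; the threshold value and the Detection struct are illustrative):

    #define FINAL_THRESHOLD 0.0f   /* illustrative threshold */

    typedef struct { int x, y, size; float response; } Detection;

    /* Threshold one response from the GPU and map its pyramid coordinates
     * back to the original frame; returns 1 if it's kept as a detection. */
    static int
    map_detection (int pyr_x, int pyr_y, float response,
                   const PyramidLevel *levels, int n_levels,
                   Detection *out)
    {
      if (response < FINAL_THRESHOLD)
        return 0;                          /* not a face after all */

      for (int i = 0; i < n_levels; i++)
        {
          const PyramidLevel *l = &levels[i];
          if (pyr_x >= l->x_offset && pyr_x < l->x_offset + l->width)
            {
              /* undo the downscaling of this pyramid level */
              out->x        = (int) ((pyr_x - l->x_offset) / l->scale);
              out->y        = (int) (pyr_y / l->scale);
              out->size     = (int) (24 / l->scale);   /* 24x24 window */
              out->response = response;
              return 1;
            }
        }
      return 0;
    }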

You're surely asking how this compares to a pure CPU implementation. As you can imagine, having to evaluate every window position in the pyramid image is very costly, and even optimized SSE implementations can't get close to the performance of a GPU (even though you need to copy a lot of data between the host and the GPU). So, a simple graph to answer that (note the logarithmic scale):

Processed video frames per second (CPU: Core2 Duo E8200 @ 2.66GHz; GPU: GeForce GTX 285 - driver ver 270)
So why am I talking about all this on my free-software-related blog? Well, of course, I'm making the source available for anyone to play with, optimize further (there's still plenty of room for that) or do whatever you feel like doing with it. But I need to warn you first - the implementation is heavily optimized for nvidia's hardware and was never really tested on anything else (the AMD CPU implementation of OpenCL doesn't support images; the Intel CPU implementation does support images, but not the image formats I'm using; so that basically leaves only the AMD GPU implementation, but I didn't have such hardware available). I'm also making assumptions that hold only on nvidia's hardware - like that there are 32 work items running at a given time (which is true for nvidia's warps). There are even some helper methods that allowed this to run on hardware without local atomic operations (so even OpenCL 1.0 was enough), but I see now that I can no longer run it on my old GeForce 9300 with nvidia's latest driver (although it did work with version 270). So I don't even know if it works at all with the compiler in the latest driver... you've been warned.

Grab the code branch from Launchpad (bzr branch lp:~mhr3/+junk/ocl-detector), or get the tarball (the only dependencies are glib and opencv, plus libOpenCL.so somewhere the linker can find it). Run it with `./oclDetector -s CAM` (and if that doesn't seem to detect anything, try `./oclDetector -r -20 -s CAM`).