Creating an application for mac

I developed my cell segmentation project bigcellbrother on linux, but it seems all the experimental collaborators use macs. So now that I have something which kind of half works, I decided it was time to compile the application for the mac and create a distributable bundle. I used the lovely macports to install all the dependencies the project needs (essentially openCV and its own dependencies).

The first step was to compile the core part of the application to a shared library, as I’d done on linux. This is easier said than done. Supposedly you only need to add a -dynamiclib flag to the compilation command, but since I am compiling on Snow Leopard, compiler and architecture issues cropped up at seemingly every step. Apple doesn’t want you to develop in C++; this much is clear. Then I compiled the Qt GUI part of the application with Qt Creator, and it automagically created an application bundle with my app. That was the easy bit.
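For reference, the build line ends up looking something like the following. This is only a sketch: the source file and library names are made up, and on Snow Leopard the exact -arch flags you need will depend on which architectures your macports libraries were built for.

```shell
# Hypothetical build command; file names and the pkg-config module
# are assumptions. -dynamiclib is the flag that makes it a .dylib,
# and -install_name is a Darwin-specific option passed to the linker.
g++ -dynamiclib -arch x86_64 \
    segmentation.cpp tracking.cpp \
    $(pkg-config --cflags --libs opencv) \
    -install_name @executable_path/../Frameworks/libbigcellbrother.dylib \
    -o libbigcellbrother.dylib
```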

Of course, all the shared libraries the app requires are scattered everywhere throughout the drive. If only I could put them all in the application bundle. This tutorial was helpful; essentially, the idea is to toss all dependencies into a Frameworks folder inside the app bundle. You can use otool -L to check which libraries are being loaded and then install_name_tool to change the library paths. Of course, the shared libraries themselves have dependencies, so this would be a few hours’ worth of trouble if not for macdylibbundler, which does this stuff automatically. yay! To bundle the Qt frameworks there’s another program called macdeployqt, which takes care of everything and comes with Qt.
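To show the mechanics that macdylibbundler automates, here is a sketch of the manual workflow. The app name, library names, and paths are all assumptions; since otool only exists on a mac, the snippet simulates a line of its output and just prints the install_name_tool command you would run.

```shell
# On a real mac you would start from:
#   otool -L MyApp.app/Contents/MacOS/MyApp
# Here we fake one line of that output to demonstrate the rewrite logic.
cat > deps.txt <<'EOF'
    /opt/local/lib/libopencv_core.2.4.dylib (compatibility version 2.4.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0)
EOF

# Keep only non-system libraries; each one gets copied into
# Contents/Frameworks and its path rewritten in the binary.
awk '{print $1}' deps.txt | grep -v '^/usr/lib' | while read lib; do
  name=$(basename "$lib")
  echo "cp $lib MyApp.app/Contents/Frameworks/"
  echo "install_name_tool -change $lib @executable_path/../Frameworks/$name MyApp.app/Contents/MacOS/MyApp"
done
```

The catch, as noted below, is that each bundled library has its own dependencies, so the same rewrite has to be repeated recursively — which is exactly the tedium macdylibbundler removes.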

Update: some libraries don’t have enough space in their headers for the new dependency paths, and dylibbundler fails on them. If you run across this, you’ll need to apply a patch to macports and reinstall the affected libraries from source: sudo port -v -s install libawesome.

Porting the rays app to processing.js

Over the past few years I have been on a quest to port my old flash web toys to an open platform. Javascript in conjunction with the HTML canvas element seemed to provide a good alternative. So I tried to port one of my favorite toys, the rays app, to processing.js. This is the result.

I have mixed feelings about the experience. On the one hand, it was extremely easy to develop for, as would be expected from javascript + processing. Since I had already implemented rays as a java applet with the original processing library, porting to javascript was mainly a matter of changing static to dynamic typing and adjusting some function calls. On the other hand, I did not manage to get keyboard shortcuts to work, even though they work perfectly in the java version. This might be some quirk of javascript I am not aware of.

The major issue is, as I feared, that there isn’t enough performance to implement a ‘glow effect’ as I did in flash (press tab to access the options). The idea in flash was to draw only the ‘dirty’ part of the image to an offscreen buffer, copy it to the main framebuffer, then blur the offscreen buffer and blend it again into the main framebuffer. Then clear the offscreen buffer and start again. Perhaps it is easier to illustrate this with the corresponding actionscript.
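The original actionscript isn’t reproduced here, but the buffer dance can be sketched in plain javascript. Everything below is an assumption for illustration: drawLines and drawToScreen are the function names, blur() stands in for flash’s BlurFilter pass, and the buffers only need the handful of methods actually called.

```javascript
// Sketch of the glow loop; the buffer interface is hypothetical,
// not the original flash code.

function drawLines(offscreen, segments) {
  // draw only the 'dirty' new segments into the sharp offscreen buffer
  for (const s of segments) {
    offscreen.line(s.x0, s.y0, s.x1, s.y1);
  }
}

function drawToScreen(screen, offscreen) {
  screen.drawImage(offscreen, 0, 0);   // sharp copy onto the framebuffer
  offscreen.blur();                    // blur the offscreen buffer in place
  screen.setBlendMode('lighter');      // additive blend produces the glow
  screen.drawImage(offscreen, 0, 0);   // blurred copy on top of the sharp one
  screen.setBlendMode('source-over');  // restore normal compositing
  offscreen.clear();                   // start the next frame clean
}
```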

Here the drawLines() function draws directly to the offscreen buffer, while drawToScreen() handles copying the buffer twice to the screen. (This code was written almost 5 years ago, which makes me feel old.) In processing.js the only option which runs fast enough is a simple direct draw to the main framebuffer. I reproduce the main loop here for the curious.

The only interesting thing to comment on in this code is how I used a bezier curve to extract a bit more precision out of each time step. Normally I would just draw a line from the past position to the present one. Here instead I draw a bezier curve whose control points are given by euler steps into the future and past, respectively. This essentially produces an interpolation which uses the first derivative at each endpoint, meaning we get a higher-order integration of the path and can use a much bigger time step to integrate the paths.
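In one dimension the construction can be written out explicitly. The function names are made up for illustration; the only substantive point is the factor of 1/3, which scales the euler steps so that the cubic’s endpoint tangents exactly match the first derivatives (this is the standard Hermite-to-bezier conversion).

```javascript
// Control points: an euler step forward from p0 and backward from p1,
// scaled by 1/3 so the bezier's endpoint derivatives match v0 and v1.
function bezierControls(p0, v0, p1, v1, dt) {
  return {
    c0: p0 + v0 * dt / 3, // step into the future from the past point
    c1: p1 - v1 * dt / 3, // step into the past from the present point
  };
}

// Standard cubic bezier evaluation.
function bezier(p0, c0, c1, p1, t) {
  const u = 1 - t;
  return u * u * u * p0 + 3 * u * u * t * c0 + 3 * u * t * t * c1 + t * t * t * p1;
}
```

Since the derivative of the cubic at t = 0 is 3·(c0 − p0) = v0·dt, the drawn segment leaves each point along the true flow direction, which is why the path stays smooth even with large time steps.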

The curve and its derivatives

SASS is awesome and you should use it

Trying to fix some bugs in the mobile layout for prospicious, I realized the stylesheets had become an unmaintainable mess. So I converted the codebase to SASS, which was a bit of a pain because I had to convert well over 800 lines by hand. Luckily there is backward compatibility, so I could leave many classes untouched. However, monstrosities like
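(The original prospicious styles aren’t shown here; the block below is a hypothetical example of the kind of repetition involved.)

```css
/* hypothetical: the same vendor-prefixed block pasted into every selector */
.card {
  -webkit-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
  -moz-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
}
.modal {
  -webkit-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
  -moz-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
}
```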

can be simplified by the use of mixins, essentially the SASS version of macros:
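A hypothetical sketch of what that looks like (the mixin and selector names are made up, not the prospicious code):

```scss
@mixin box-shadow($shadow: 0 1px 3px rgba(0, 0, 0, 0.3)) {
  -webkit-box-shadow: $shadow;
  -moz-box-shadow: $shadow;
  box-shadow: $shadow;
}

$accent-color: #3a87ad; // change once, updates everywhere

.card  { @include box-shadow; }
.modal { @include box-shadow; background: $accent-color; }
```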

By using mixins and variables you can avoid the annoying repetition of property blocks so common in large stylesheets. Better still, by defining global variables you can iterate on properties such as colors quickly, without constant find and replace, which is extremely useful when you’re at the prototyping stage of your app.

I found SASS via Foundation, the responsive css framework I used for prospicious. They suggest using SASS in conjunction with compass, a css authoring framework. I tried playing around with it, but creating a new project resulted in an insane project hierarchy in the filesystem, with html files and asset directories, when all I wanted was one folder with sass files and one with css to integrate into my existing project.

This appears to be a problem with frameworks in general: they try to do everything for you, and you end up with a bloated code base containing hundreds of unused lines. Foundation is modular enough that you can pick whichever parts of it you want and leave out the rest. Perhaps compass offers this as well, but I couldn’t immediately find it, so I stuck with ‘normal’ SASS and a handmade folder structure, which served me well.