Recently, I wanted to migrate some lightweight services from a virtual host to an account at Webfaction, since running a wiki/issue tracker (Trac) and CI server (Jenkins) for a couple of low-volume projects really shouldn’t take a whole machine of its own. Or should it?
This is when I realized that Jenkins is heavyweight in a world of cloud and shared hosts. I already kinda knew, since I’ve been administering a Jenkins installation that claims several gigs of RAM, but that’s with over thirty Maven projects and a pretty high load.
Firing Jenkins up and configuring two (Ant) projects, it claims ~150 MB of RAM – for doing nothing. On a shared host, that’s unacceptable. Under the old RAM limits on Webfaction it would have been impossible to run; now it’s just claiming 3/5 of my total memory allotment.
So yesterday I set up continuous integration using a Python script that runs the build, determines success or failure, and publishes the log and/or artifacts; it took a bit less than an hour to get working from scratch (admittedly using my own Mercurial lib for integration).
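The core of such a script fits in a handful of lines. Here is a minimal sketch of the approach – the command, log directory, and file-naming scheme are my own illustrations, not the actual script:

```python
import datetime
import subprocess
from pathlib import Path

def run_build(command, log_dir="logs"):
    """Run a build command, capture its output, and write a log file.

    Returns True on success (exit code 0), False otherwise. The log
    file name encodes a timestamp and the result, so a static web
    server can publish the directory as-is.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True
    )
    status = "success" if result.returncode == 0 else "fail"
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    path = Path(log_dir)
    path.mkdir(exist_ok=True)
    (path / f"build-{stamp}-{status}.log").write_text(
        result.stdout + result.stderr
    )
    return result.returncode == 0
```

Calling `run_build("ant clean test")` from cron (or a Mercurial hook) is then all the "server" there is.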
Now I’m thinking of maybe creating something useful out of this. Right now I publish logs as static web pages, but I could just post them as wiki pages to Trac via RPC. That would let logs tie into the ticket system and source browser, and I could show build status right there on the Kanban-ish board next to the tickets.
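Posting a log to Trac could look roughly like this, assuming the Trac XmlRpcPlugin is installed and its `wiki.putPage` call is enabled; the URL, credentials, and page-naming scheme below are hypothetical:

```python
import xmlrpc.client

# Hypothetical endpoint; XmlRpcPlugin exposes it under /login/xmlrpc.
TRAC_URL = "https://user:password@example.com/trac/login/xmlrpc"

def build_page_name(project, stamp):
    """Derive a wiki page name for a build log (naming scheme is mine)."""
    return f"BuildLogs/{project}/{stamp}"

def publish_log(project, stamp, log_text):
    """Post a build log as a Trac wiki page via XML-RPC."""
    server = xmlrpc.client.ServerProxy(TRAC_URL)
    page = build_page_name(project, stamp)
    # Wrap the raw log in a preformatted block so Trac renders it verbatim.
    content = "{{{\n%s\n}}}" % log_text
    server.wiki.putPage(page, content, {"comment": "CI build log"})
```

With the logs living in the wiki namespace, linking a failed build from a ticket is just a normal wiki link.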
I’ve got no firm design done yet, but I’m thinking about what the requirements for a minimalistic CI tool should be:
- Take no resources apart from disk space when idle
- Be able to publish fail and success logs in a useful format
- Trivial (for real) to implement support for new result-publishing targets
- Ability to limit resource usage (number of concurrent builds)
- Jobs can trigger each other