Categories
Programming

WatchDoc – an experiment in Chrome Extensions

I keep remembering that good ideas start from doing something fun, instead of doing something easy. About a month and a half ago I was reading the Google Doc that I keep with some friends, listing ideas about “Changing the World”. It collects all the things that should or could be better in the world, the “I wish…” kind of entries.

In the list of a hundred or so entries I saw one wishing for an easier way to see changes to our shared documents than going back to the Docs home page to check whether anything has happened. I thought that would be something I myself would totally use. I couldn’t find anything like it yet, but I had some time on my hands to experiment, so WatchDoc was born…

Dropdown menu of WatchDoc
The interface of WatchDoc with lots of testing documents, emails blurred out just in case.

Getting it done

I hadn’t written a Chrome Extension before, but it seemed like such a suitable fit, and I had been looking for a project to try myself on. Extensions are practically open source (since you can look into any one of them regardless of where you installed them from) and use only standard web tech, just like making a website (HTML + CSS + JavaScript). Indeed, every Chrome extension is just some web pages displayed in a special way, some JavaScript running in the background, and/or JavaScript modifying the page you are on. Very neat, I would say…
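To give a feel for the “JavaScript running in the background” part, here is a rough sketch of the shape of such a background script. This is not the actual WatchDoc code (that lives on GitHub): the feed parsing and the polling interval are made up for illustration, and a real extension would attach an OAuth token and declare the right permissions in its manifest.

// Poll the Google Docs feed from the extension's background page and show the
// number of changed documents as a badge on the toolbar button.
function countChangedDocuments(feedXml) {
  // Naive illustration: count the <entry> elements in the Atom feed.
  return (feedXml.match(/<entry[\s>]/g) || []).length;
}

function checkForUpdates() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://docs.google.com/feeds/default/private/full', true);
  // A real extension would set the OAuth Authorization header here.
  xhr.onload = function () {
    var changed = countChangedDocuments(xhr.responseText);
    chrome.browserAction.setBadgeText({ text: changed ? String(changed) : '' });
  };
  xhr.send();
}

setInterval(checkForUpdates, 5 * 60 * 1000); // check every five minutes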

Since I didn’t know where to start, I tried to find an extension that I could take apart and learn from. Fortunately there was one, straight from Google, called YouTube Feed. So the first part of the project was slowly gutting that one and replacing parts until I got something working the way I wanted. That’s easier said than done, because a feed reader has somewhat different requirements than the code I had in mind, but it was close enough that I could keep most of the internal structure.

Some notes about the journey:

  • Authentication: OAuth can be pretty troublesome, but in the end it was easier this time than my past experience led me to expect, maybe because YouTube Feed was already written reasonably well.
  • Icons: I wanted to take the icons for the different document types from Google Docs itself. It took some time, but using the developer tools (inspect element) in Chrome I could finally find the links and download them.
  • More icons: I wanted to use the icons as a CSS image sprite just like the original extension, but couldn’t find a good program to combine PNG files into a single image. In the end I wrote a quick Python script to do just that.
  • Extension icon: For the extension icon I used Google Docs’ own little icon (I think it’s kinda fair to use, especially now that they have changed it; it’s v9 of their icon theme compared to the v7 current when I was writing the extension), and I also found a matching hi-res icon on a free icon site. I just wish I could find that site again for proper attribution; it still nags me.
  • Buggy notifications: Some parts of extension writing were pretty annoying. For example, I wanted to get desktop notifications to work just the way they do in Gmail. I don’t know how they do the auto-hiding, but I’m pretty sure it’s not with the standard simple desktop notification type, because I couldn’t make that one hide itself after a certain time. Instead I had to go the more involved way of using an HTML page to define the content and then have its own JavaScript close it (see the sketch after this list). It took a while to realize that I have to pass my parameters (the data I wanted to display) as query parameters, but now it works pretty well.
  • Buggy libraries: these are always fun, because I have to either find a workaround or fix them. This time it was a jQuery URL parsing library that crapped out on corner cases the developer hadn’t thought of but I fortunately ran into. It took a while, but I fixed it up; let’s see if upstream incorporates the patch.
  • Optimization: it’s good to have things working first and then make them work well. For example, originally I re-read the user’s whole feed; now the extension reads only the part of the update feed that can contain relevant information, so there are fewer requests to the server and the updates arrive several seconds quicker.
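To illustrate the query-parameter trick from the notification point above, here is a minimal sketch, not the actual WatchDoc code: the page name, element ids and timeout are made up, and it assumes the notifications permission is declared in the manifest.

// Background page side: open an HTML notification and pass the data in the URL.
// (createHTMLNotification was the Chrome notification API available at the time.)
function showUpdateNotification(docTitle, editorName) {
  var url = 'notification.html' +
            '?title=' + encodeURIComponent(docTitle) +
            '&editor=' + encodeURIComponent(editorName);
  window.webkitNotifications.createHTMLNotification(url).show();
}

// notification.html side: read the query parameters, fill in the page, then
// close the window after a few seconds to get the Gmail-style auto-hide.
function initNotificationPage() {
  var params = {};
  location.search.substring(1).split('&').forEach(function (pair) {
    var kv = pair.split('=');
    params[kv[0]] = decodeURIComponent(kv[1] || '');
  });
  document.getElementById('title').textContent = params.title || '';
  document.getElementById('editor').textContent = params.editor || '';
  setTimeout(function () { window.close(); }, 8000); // hide after 8 seconds
}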

Release

After some work I thought it’s better to release first and ask questions later. So I registered on the Chrome Web Store (registration is a $5 one-time fee to get rid of spammers). Not much hassle: in about 20 minutes everything was done, and now you can find WatchDoc in the Chrome Web Store. You can also find the source on GitHub.

The Chrome Web Store seems to have some strange policies, though, e.g. it didn’t let me update unless I had preview pictures of the right shape, and sometimes it only told me so halfway into submitting a new release, so I had to scramble to make some test documents and nice screenshots of the right aspect ratio before I could finish the submission. Nevertheless it is pretty easy to handle and useful too.

One beef I have is that I don’t have any way to reply to reviews. Many more people post negative comments than positive ones, and once I’ve fixed something there’s no way to get in touch with the reviewers to ask for a re-evaluation. It makes for a very one-sided experience.

It is good – though addictive – to watch the number of +1s grow, as well as installs and users. The store must have some way to track people’s installations, because the total number of users did decrease after the initial increase (the high-water mark was somewhat above 1500, now it’s a bit below 1400). I guess I need to improve the quality a bit more, and maybe there aren’t as many people sharing Google Docs with others as I had thought. :) Anyway, this is still far more people than have used any other software or site I’ve made, so no grounds to complain.

Post-release

The release was picked up surprisingly quickly by other websites; I think some of them automatically monitor the latest submissions to the Web Store, and others take the news from them.

WatchDoc Google Analytics since the release
WatchDoc Google Analytics since the release, reviewed in the text below

Here’s my Google Analytics since the release. The big spikes in the beginning go up to about 500 visits/day (tiny, but much more than I have ever had), and are mostly due to Lifehacker. After that I have basically just a trickle of visitors (about 20/day), with a little spike recently that is direct traffic; I wonder where from…

Some sites that reviewed WatchDoc (the ones with the highest referral counts)

I set up a support site on GetSatisfaction as well, to be able to have a conversation with those who are persistent enough to submit feedback about bugs. I’ve used it a couple of times, and it’s quite practical. I’m finally learning how to do product support, which doesn’t mean it has become easier. It’s still surprisingly maddening to debug a problem until it happens to hit me as well, so I can check it locally. I’ve got to figure out a better remote debugging setup. At this point the extension seems to work well enough for about 1400 people, but the fact that I’ve lost about 200 shows that I still have a lot of problems to fix.

Into the future

It was a fun project to work on and it does most of the things I wanted it to, but it’s dead ugly. Either I find a designer and fix that, or I just leave it like this because it doesn’t matter much…

I might work on it a bit more, especially bugfixes, though it would be better to have some idea of what’s missing. Everything I can think of would complicate the user experience, and that’s not my plan: document previews, ignoring updates to certain documents, multiple logins… What else?

I guess it’s more likely that I will soon start to work on something else; let me know if there’s something fun you wish to see :)

Categories
Programming

Language of the Month: Javascript

Continuing the Language of the Month series after a bit of a break while I was busy with other things:

Javascript icon
Just a Javascript icon...

This one is long overdue, since I have been looking at JavaScript for… 14 years now? But I never really spent the time to understand it, because I never really needed it. I’m old enough to remember that when I started to browse the web, every time a page used JavaScript, people’s sentiment was “oh no, that’s going to be very slow, I don’t even want to check this site anymore”. Pretty much like it was (and is) with Flash later on.

Compare this to now, when 100% static sites have practically disappeared and no web developer worth their salt can skip learning it. I’m glad that browser makers spent a lot of time improving performance and that so many interesting projects came out of it.

My impression of the current state of the art is that with HTML + CSS + JavaScript it is now easier than ever to make good front-ends for programs. I’m mostly a command-line guy (and that’s pretty easy with many scripting languages, including the Python I use), but I cannot deny that I’m in the minority. Still, I can even imagine people creating local (meaning not internet-enabled) software with these tools when they need a convenient front-end.

Since more and more people have had similar ideas and started working on them, there are plenty of projects that make this even easier, like jQuery and all its plugins. It doesn’t need much of an introduction, and people can already do a lot of things more easily with it. One drawback is that many things could be done just as easily in pure JavaScript, but people quite often don’t know that. I certainly have a lot more to learn.
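A small, made-up example of that last point: hiding every element with a given class is just as easy with the plain DOM API as with jQuery.

// jQuery version
$('.notice').hide();

// Pure JavaScript equivalent, no library needed
var notices = document.getElementsByClassName('notice');
for (var i = 0; i < notices.length; i++) {
  notices[i].style.display = 'none';
}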

This month I read a few books and sites that people recommended on Hacker News, and I also used the language for a few actual projects. Now that’s a change compared to the previous Language of the Month columns.

Projects

WatchDoc: a Chrome extension that notifies you when your shared documents on Google Docs change. Chrome extensions are merely HTML+CSS+JS code, so it was a perfect way to try a few things. (Written up here.)

NowJS real-time games hackathon: NowJS is a real-time communication plugin for Node.js, the JS server. I wanted to make a game for this hackathon, but ran out of time. I spent some time working with NowJS, and it’s actually pretty awesome once I started to understand it. I do want to finish the game at a later time (it’s a multiplayer trivia game). (Will write it up here later.)

Venus & Mars: a little afternoon project using Facebook to help with my friend’s research assignment at her university. It lists people’s status updates separated by gender. It looks awfully ugly, because I just wanted to make it work, but for the fun of it it runs on Node.js, so it was good practice for my JS-foo. (Will write it up here later.)

Impressions

I’m definitely going to learn more of it, because now that I’m starting to understand it I quite like it, and I cannot imagine it going away anytime soon. Now that setting up a project on the web takes just a matter of seconds (really, Heroku, App Engine and dotCloud are all one click away), there’s no good excuse not to.

Good

  • JSON, ’nuff said. It’s just such a good data format, both human and machine readable. It seems to be pretty much the standard by now (see the small example after this list).
  • No problem (it seems) with Unicode and international characters, though I think JavaScript uses UTF-16 while much other code uses UTF-8; I’m not sure whether that makes any difference.
  • Feels quite light and flexible (from the language point of view, not necessarily in terms of the resources needed).
  • Since the source of websites is necessarily open, it is possible to learn from other people’s examples much more easily than otherwise.
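A tiny illustration of the JSON point above (the data itself is made up): the same text is easy for a human to read and trivial for the machine to parse and produce.

var text = '{"title": "WatchDoc", "users": 1400, "tags": ["chrome", "extension"]}';
var data = JSON.parse(text);        // string -> object
console.log(data.users + 100);      // 1500
console.log(JSON.stringify(data));  // object -> string, the same format back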

Bad

  • Feels like it has a lot of baggage from its long(ish) and torrid life, which makes it feel a bit inconsistent. E.g. the first day of the month is 1, but the first month (January) is 0 (see the snippet after this list).
  • People generally seem to write pretty bad JavaScript code. Because it’s so accessible, everyone can make some useful project, but those projects are often full of bugs. Fortunately it’s open source, so I can try to figure things out, and I did find a handful of upstream bugs. But the stress… huh…
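The month-indexing inconsistency from the list above, in a few lines:

var d = new Date(2011, 0, 1);  // year, month, day: month 0 is January
console.log(d.getDate());      // 1 (days of the month start at 1)
console.log(d.getMonth());     // 0 (months start at 0)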

Ugly

  • Formatting of JS code can be pretty unreadable (especially compared with Python, where formatting is not optional). It gets worse when I’m editing JS within an HTML file, since Emacs cannot handle that well.
  • Up until quite recently there weren’t any really good tools to troubleshoot things. Fortunately there’s Chrome and its JavaScript console, and Firebug for Firefox. There are still some mysterious errors, and the debugging has to be planned well ahead.
  • There are many things that are straightforward but require a lot of typing. Fortunately projects like jQuery are trying to fix that, but there’s still a long way to go.
  • Just like in Lua, quotation marks are not required around dictionary keys, and the keys are still understood as strings. This kind of magic can be convenient, but on occasion it’s confusing (see the example after this list).
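And an example of the unquoted-keys magic (the object is made up):

var config = { timeout: 15, name: 'ignite' };  // keys written without quotes
console.log(config['timeout']);                // 15, accessed via the string 'timeout'
console.log(config.name === config['name']);   // true, the two notations are equivalent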

Links

Books

Interesting Javascript projects and sites

  • jQuery: making it easier to use JS, especially with respect to HTML DOM manipulation
  • Node.js: server-side JS, which makes it possible (for the first time?) to use the same language for the front end and the back end on the web
  • jsFiddle: easy online editor, prototyping and code sharing for the web (JavaScript, MooTools, jQuery, Prototype, YUI, Glow and Dojo, HTML, CSS)

(last edited 2011-10-03)

Categories
Computers

Tech setup for an Ignite

Recently I co-organized our Ignite Taipei #2 event (see the pictures and watch the talks – the latter if you speak Chinese…). I try to be the self-proclaimed Chief Technology Officer (CTO) of the event, either because I hope the tech side of things will go down well, or more likely because if things fail then there’s no one to blame but myself. And things do indeed fail all the time, in all different ways.

So as CTO I try to make sure that everything computer-related runs well, and I have collected some useful scripts (for the mostly command-line-driven way I do things). I thought it would be useful to write them all up, mostly for me to remember: not just the scripts themselves but also the rationale behind some of the choices made along the way.

As Ignite’s motto is “Inspire us but make it quick”, I found that the Ignite organizer’s motto can be “Everything will be just fine.”

The computers I used for getting things ready for the show

Intro

Ignite, in short, is an evening of quick presentations, each one exactly 5 minutes long: 20 slides that auto-advance every 15 seconds. Altogether there are 10–16 talks in an evening. It aims to be an event that inspires people, and something that should be relatively straightforward to organize.

Computer setup

I had 2 computers for the event, but those were actually 3 systems:

  • A Linux system (Arch Linux), my own computer that I knew very well and ran the presentations off.
  • A Linux/Windows dual-boot system that I borrowed: Windows (Vista, of all things) for PPT-to-PDF conversion and Ubuntu for live streaming.

If I could, I’d ditch the Windows part altogether, but if I can’t, it would probably be better to run it in a virtual machine (so I don’t have to reboot, more on this later) or on a separate (third) computer.

Pre-event

Most of the organization was done in a shared Google Docs document, keeping tabs on who we had as speakers and what needed to be done.

Shameless plug: next time it will be even easier with WatchDoc, a Chrome extension I wrote after Ignite to get notifications when shared documents change :)

Besides keeping up with what to do, I had to take care of the presentations that the speakers sent to me. Since I’m not a fan of PowerPoint and its unreliability in displaying things the same way on different computers (and LibreOffice not being ready to handle .ppt/.pptx as well as I’d like it to), I had to convert everything to PDF. People with Macs using Keynote were easy: they have PDF export. Most people using Windows/MS Office didn’t. First I installed a “print to PDF” plugin, but that was just terrible, awful-quality photo conversion and all. In the end I had to get an Office 2007 just for this occasion and use its own Save as PDF add-in, for great good. It actually worked like a charm; I just wonder why they didn’t make the same thing for Office 2003, which seems to be much more abundant (I know, 8-year-old software gets no love). Finally I had everyone’s slides in the same, reliable format, uploaded to Google Docs and Dropbox so I could transfer them between computers easily.

One more thing I had to do to the slides: add an empty slide at the end of the 20 slides of each presentation, so it’s easy to see when they are finished. I used pdfmanipulate for that. Prepare an empty slide (in e.g. LibreOffice), export it as empty.pdf, then, with all the talk PDFs in a sub-directory called “original”, run a script like this:

#!/bin/bash
# Append empty.pdf to every talk PDF found in the "original" sub-directory.
DIR=original

for f in "$DIR"/*.pdf
do
    remf="${f%.pdf}"      # strip the .pdf extension
    newf="${remf##*/}"    # strip the directory part, keep the base name
    echo "${newf}"
    pdfmanipulate merge --output="${newf}_extra.pdf" "$f" empty.pdf
done

Slides

The tech setup in the front; From Ignite Taipei #2

There were two kinds of slides to take care of: one set with the speakers’ names and their talk titles, the other the talks themselves, prepared as described earlier.

The speaker intro slides were done in LibreOffice. I had a little problem with that one as well: if Chinese characters were set to bold, they showed up blurred. Not good looking, sometimes not even legible. So just before the start I was scrambling to change all the text from bold to normal. It’s just a pain.

The talk slides were shown using Impressive, a command-line presentation tool written in Python. It is pretty good, quite flexible, and it’s easy to set the slide show times, transitions and the total-time progress bar.

Use the following script, saved as e.g. present.sh, to show a talk as present.sh nexttalk_extra.pdf:

#!/bin/bash
# Run the presentation

PRESENTATION="$1"
SLIDETIME=15
TOTALTIME=300

./impressive.py -D 1 \
                -a ${SLIDETIME} \
                -d ${TOTALTIME} \
                -T 350 \
                -t CrossFade \
                -c persistent \
                "${PRESENTATION}"

There are some problems with it, though. Sometimes it took quite a while to show a slide that had a good-quality photo, which messed up the timing. The total time didn’t quite work out the way I expected either; I think every presentation ended up a bit longer than 5 minutes overall. Not too big a problem, but if I can get it right, I should.

Previously I used Adobe Acrobat Reader’s auto-advance in full-screen mode; that was just fine, and I might go back to it next time if I cannot fix Impressive.

Web streaming + recording

It is common to live-stream Ignite so other people can watch it too. I found it easiest to do with VLC + Justin.tv; they already have some tooling that seems to work quite well. The first time I had some sound-sync issues; this time I managed to fix that, but in the end it didn’t matter.

To stream using my Logitech QuickCam S7500 and the sound from the line-in of the venue’s sound system, start VLC with the following script:

#!/bin/bash
# http://community.justin.tv/forums/showthread.php?t=7081
# Using webcam + line-in audio
# Display + Transcode (Save + Stream)
# needs jtvlc to get it out

vlc v4l2:///dev/video0 \
     :input-slave=alsa://plughw:0,0 \
     --sout='#duplicate{dst=display,dst="transcode{venc=x264{keyint=60,idrint=2},vcodec=h264,vb=600,acodec=mp4a,ab=128,channels=1,deinterlace,audio-sync}:duplicate{dst=standard{access=file,mux=mp4,dst=/home/user/ignite.mpg},dst=rtp{dst=127.0.0.1,port=1234,caching=2000,rtcp-mux,sdp=file:///tmp/vlc.sdp}}"}'

It is one mighty long line, and I’m not sure it can really be wrapped anywhere. Basically it displays the webcam’s picture, transcodes it to x264, and sends it to an RTP socket while saving it into a file at the same time. Be careful with this, as the saved file will be overwritten every time the script is run.

The most curious thing is that in the displayed copy the sound is not in sync, but the transcoded video is actually okay.

Next step is sending the stream out to Justin.tv, using jtvlc:

#!/bin/bash
./jtvlc-lin-0.41/jtvlc ${JUSTINTV_USERNAME} ${JUSTINTV_STREAM_KEY} "/tmp/vlc.sdp" -d

Here the username and stream key have to be filled in with your own values. If everything is fine, there’s a big stream of debug information on the screen, and on the website the channel goes online.

At our event I had a problem: after a reboot the sound input somehow got borked, so all our video was without sound, and the final saved file couldn’t even be opened. Never mind; fortunately we had a recording from a proper camera.

HD recording + postprocessing

It is also essential to have a good recording of the show, since that can really make it reach a wider audience, and it’s good to be able to look back at it later as well.

This time we had a cameraman helping out, and after the talks finished I got the videos from him. He was using a Sony camera with a FAT32-formatted SD card, which was a nice pain to manage. Last time the video we got was in straightforward MP4 format, while this time it was 2 GB chunks of AVCHD, a proprietary format. I had to convert that into something I could manage.

It took a while to figure out, but in the end the result was acceptable.

1) Join the split parts with tsMuxeR. I had to make sure I was using joins, not just listing all the files and saving them as one (sounds like the same thing, but it isn’t). In the end I had some “.m2ts” files.

2) Next I had to convert those into MP4. The m2ts files actually contained H.264 video, which is fine for MP4, so I ended up just copying it. The audio had to be transcoded to AAC, because the original was AC-3, which is not an accepted audio codec for MP4. Fortunately ffmpeg can handle AVCHD files now. The only really tricky part was that the original video was 1080i – interlaced. Next time I have to make sure the cameraman sets things to progressive; it makes life so much easier.

#!/bin/bash
INFILE=$1
OUTFILE=$2
# Deinterlace the 1080i source, copy the H.264 video stream as-is,
# and transcode the AC-3 audio to AAC so it fits in the MP4 container.
ffmpeg -deinterlace \
       -i "${INFILE}" \
       -f mp4 \
       -vcodec copy \
       -strict experimental \
       -acodec aac \
       -ab 128k \
       -y "${OUTFILE}"

This took quite a short time, which is a relief; before I figured out that things could be this simple, I had transcoded hundreds of gigabytes of files, and that took hours.

3) Split the video for uploading to YouTube. I used Avidemux for that, which can – barely – handle H.264-encoded video. Somehow it couldn’t find the keyframes, so it decided that every 30th frame must be one, and I could only split there. This resulted in every video having some strange pictures at the beginning (duh, missing keyframe). It might be better to re-encode, or rather to find a better program for handling H.264 video.

Future

Fixing stuff at halftime; From Ignite Taipei #2

Of course there are some things to improve next time:

  • Better video recording that eases post-processing. Preferably have our own camera that we learn how to use, so we don’t have to figure out something new every time.
  • Once a computer is set up and tested, no more reboots.
  • Check the timing of the slides to make sure they are really 15 seconds each.
  • Improve Impressive, maybe prepare some patches: exit when the presentation is finished, show an empty screen after the last slide, fix the timing issues.
  • Switch to scripted intro slides so I don’t have to edit each of them to make sure they look the same. Maybe use one of the JavaScript web presentation frameworks and a full-screen Chrome window (a rough sketch follows after this list)…
  • Set up our own website for Ignite. This one is the big one. It should allow us to do a lot of interesting things, but I’ll try to get myself to build it with only a few features first, not with all the bells and whistles. A WordPress page with some plugins, a full-blown Django site, or something else? Time will tell…

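As a rough sketch of the scripted intro slides idea, something like the snippet below could generate uniform-looking slides from a speaker list in a full-screen browser window. The names and styling are made up, and a proper JavaScript presentation framework would replace most of this.

// Generate speaker intro slides from data; any key press advances to the next one.
var speakers = [
  { name: 'Speaker One', title: 'Inspire Us But Make It Quick' },
  { name: 'Speaker Two', title: 'Everything Will Be Just Fine' }
];
var current = 0;

function showSlide(index) {
  var s = speakers[index];
  document.body.innerHTML =
    '<div style="text-align:center; margin-top:20%; font-family:sans-serif">' +
    '<h1>' + s.name + '</h1><h2>' + s.title + '</h2></div>';
}

document.addEventListener('keydown', function () {
  current = (current + 1) % speakers.length;  // wrap around at the end
  showSlide(current);
});

showSlide(current);
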
Categories
Programming

Language of the Month: Prolog, part 2

If anything, this is a belated post. Prolog was the language of July and now it’s September. Anyway, before I completely fail, let’s just wrap it up and move on to the next language, with that one-month hiatus in August.

I really enjoyed the language, and one month is indeed barely enough to start doing something useful. So I will have to come back to it again, and maybe keep reading about it in the meantime. It’s actually quite interesting to try to figure out Prolog code on paper, without actually running anything. One of the books I was reading had plenty of exercises like this: without running the code, can you guess what it is doing? Needless to say, there were plenty of tricky bits.

From the comments

During these two months, some additional resources did show up. A dear commenter on the previous post recommended the following book to me: Richard O’Keefe’s The Craft of Prolog. I’m about a third, maybe halfway through it, and it’s interesting, written a long time ago in a style that has since gone out of fashion, unfortunately. It’s a programming book that is fun to read, and one can see that the author is very knowledgeable. Some aspects of the book didn’t age very well, though. The author keeps comparing Prolog to other languages, many of which are not very widely available either. Also, some of the language features are specific to his version of Prolog, which wasn’t the same one I was using. I would still recommend it, though!

This last part, the different implementations of the same language, can be a real problem. Of the three compilers available to me, each had specific strengths and weaknesses. I guess they are converging, but they’re not quite there yet. So far I have mostly been using SWI-Prolog, but this might change in the future.

The same commenter also recommended a cross-compiler that does Prolog-to-C magic, for portability and other goodness. You can grab it from here: http://ftp.shaw.ca/irvinsh/prolog.tar.gz (I haven’t had time to try it yet, but once I have, I’ll do a comparison).

From the web

On a friend’s recommendation I checked out a site with free textbooks. They are all advertisement-sponsored, which is an intriguing idea (for another post). The IT section has not one but two books on Prolog: Prolog Techniques and Applications of Prolog.

Two prolog book covers
Free textbooks on Prolog from Bookboon

The latter has a Hungarian author, so I’m even more intrigued; we used to have great computer scientists (John von Neumann / Neumann János, anyone?), so I hope we keep up that tradition. (Oh yeah, we had great physicists as well; maybe I can do more on that front later.)

I have only skimmed them a little bit, but it looks like they will be a good addition to my “programming for fun and efficiency” library.

I will update the original LotM: Prolog page with these links as well. Now on to September’s language; fortunately I have an idea of what I want to learn for the next ~3.5 weeks. October will be something artificial intelligence related, since I signed up for the AI-class.

Categories
Taiwan

Startup Weekend Taipei

I really should have started to write this up about two weeks ago, just after Startup Weekend Taipei happened. Better late than never (if there was ever a good excuse, this is it), so, taking some time out on this typhoon weekend, here’s my experience of those good 54 hours.

Board with the StartupWeekend Taipei logo

Start

After StartupBus and the Taiwan Entrepreneurship Challenge, I cannot deny that I have a lot of fun at these kinds of events. So I signed up for Startup Weekend Taipei quite a long time ago, especially since I had a free Startup Weekend voucher from Microsoft BizSpark.

I kept recommending the event to more and more people, and it seems there were quite a few of my friends who wanted to come but couldn’t, because it was all sold out. “Sold out” in this case means 150 participants. That’s probably about the same number of people as on all the Buses combined, though this time stationary and all in one place. I wasn’t sure what mixture of people would come, just that it would be quite different, as Taiwan still seems to have less of a hacker culture.

It turned out that more than two thirds of the people were Taiwanese, with a much smaller proportion of foreigners than I expected. This is great – for Taiwan. For me it was a bit of a roadblock.

Twenty-five people went up on stage to pitch their ideas in the hope of forming a team. Of those, only 5 pitched in English. After the pitching, everyone got three voting post-it notes and could mark the ideas they liked the most. Based on the votes, only the 12 most popular pitches were kept, and only those people could carry on with their ideas. Well, since the Taiwanese attendees were really not voting for ideas pitched in English, in the end there were only 2 ideas I could join… This time it worked out well (oops, is this a spoiler?), but next time the organizers should probably look at whether this procedure for setting up teams worked or not.

Twelve teams for 150 people also meant that every team was huge… On the Bus our team had 8 people and I felt it was pretty big. Certainly if I wanted to start my own company I would probably get going with fewer than that.

Actually, a large team is good for getting things done once we’ve agreed on what to do, but it’s pretty bad for reaching that agreement.

Our team had 7 people: 3 coders, 1 designer, 3 marketing/business planning. The one- (two-) sentence pitch: a restaurant search engine for menu items. Tell us what you wanna eat, and we show you which restaurants in the neighbourhood serve it.

Whiteboard planning for FoodJing at StartupWeekend Taipei
Whiteboard planning for FoodJing interface and functionality

Simple idea, but even so our team had quite a bit of back and forth when discussing the focus of execution. I tend to get very involved once I’m sold on an idea, maybe a little bit too involved. After a bit of discussion I took the role of back-end designer, creating the infrastructure on which all of the user-facing services could be built. I chose that one because I felt it’s the part of the architecture where I can add the most, in terms of making something that other people can rely on and build on relatively easily. I do feel that without a good back-end, no amount of front-end glitter can save things…

Of course I chose this role in part because on the Bus I worked with a great team who taught me a lot about it, and I wanted to try myself out.

Exercise for the first evening (Friday night): a long, long discussion about a name. Next up was collecting ideas for the feature set, then simplifying, cutting, reducing, and then reducing some more. It’s great, I recommend it to everyone. I think I was a bit too combative at that point (sorry, Dobes!), but at least I realized that and toned it back quite a bit. Since the venue closed at 11pm, we went home until the morning. I wanted to get some little thing done by then so I could show it off to the team. Of course I slept like a log instead.

The next morning (Saturday) I woke up quite early, earlier than I usually do on a weekday. That’s a very good sign. At the venue they already had some breakfast prepared for everyone. I was too nervous and excited to eat at the beginning. Then when I tried my bagel an hour later, it was amazing! I ran back to the table to get another one, but obviously everyone else had enjoyed them a lot too and was less nervous than me: they had all been grabbed.

Filled table with good breakfast and drinks
Breakfast time at StartupWeekend, these were totally yummy

After some more discussion we got to work on the actual thing. The technology we used:

  • Bottle, a Python micro-framework that is a single file. I like it a lot and have to check it out more later, especially because there is a swarm of Python micro-frameworks, so it’s good to know the strengths and weaknesses of each.
  • MongoDB, through MongoLab: we wanted to use it for our database and the geolocation “nearest place” lookup, but I ran into some weird Unicode bugs that I couldn’t solve in about an hour. Scratch that; I’ll check it out when there’s time.
  • Google App Engine for hosting and the database. Perfect for this kind of thing. I had some problems with data export and import (“list” datatypes are not imported back correctly), so I wrote some custom remote imports and everything was fine at that level. Oh, and Geomodel, that’s useful.
  • Lots and lots of JavaScript (jQuery, Mustache, something for the instant search,…) for the front-end. That wasn’t me, so I’m not exactly sure what else was going on there; I checked with the front-end people only as much as it affected the schema of the API response (a rough sketch of this kind of code follows below).

Most of the day was spent setting up an API, working out data lookup with the chosen database and structure, making a data input interface and some helper pages, and fixing a lot of bugs. By Saturday evening we had conceptually everything we needed working. Going home at night again meant a bit more bug fixing (let’s call it The Time of Duh).
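To give a flavour of how the front end and the back end met, here is a hedged sketch of the jQuery + Mustache side consuming such an API. The /api/search endpoint, the response fields and the template are all made up for illustration; the real schema was whatever we agreed on over the weekend.

// Ask the back-end API for restaurants serving a dish near a location,
// then render the response with a Mustache template into the page.
var template =
  '<ul>{{#restaurants}}' +
  '<li>{{name}}: {{dish}} ({{distance_m}} m away)</li>' +
  '{{/restaurants}}</ul>';

function search(query, lat, lng) {
  $.getJSON('/api/search', { q: query, lat: lat, lng: lng }, function (data) {
    // data is expected to look like {restaurants: [{name, dish, distance_m}, ...]}
    $('#results').html(Mustache.render(template, data));
  });
}

// e.g. search('beef noodle soup', 25.03, 121.56);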

Home stretch

Sunday morning I got up pretty early again; I love this kind of inspired work, when I just cannot stop myself. All the way to the venue I was thinking about how to use this experience to improve my day job (though that is pretty inspired already, so I guess I’m lucky).

Most of the day was spent fixing more and more bugs, getting the front-end right (not by me, fortunately; I have no real sense of design), getting some real data (real restaurants and menus) into the database, figuring out and polishing the pitch, working through the feedback from the surveys our marketing people had been running since Friday evening, doing some Facebook-page-based hyping… this sort of Startup Weekend stuff.

FoodJing team working
FoodJing team working last minute

I was hoping that we could get an Android app done in the end as well, but I felt the person who was working on it was over-complicating things. Then again, who knows how much the others thought I was over-complicating my own job… Anyway, I hadn’t written Android code since the Bus, but in about 2.5 hours there it was: a map interface showing real data from our real database. Slap on a search bar and you’re golden. There wasn’t any time to finish it up, but I was still satisfied – it was possible because of a good back-end. (Okay, enough of this patting myself on the back, dude.)

Then it was finally time to pitch. The panel of judges was impressive: real investors and business people from Taiwan, the US and China, about a dozen of them, maybe more. All very experienced people.

Pandey is introducing FoodJing
FoodJing team pitching to the judges at StartupWeekend

Our presentation was quite good, because the team really prepared for the questions, really answered the concerns an investor would have, and covered all our bases. Many of the other presentations emphasized “fun” more, had pie-in-the-sky models, or kept something the judges had already given them feedback about during the development phase without fixing it. It probably mostly comes down to experience. I hadn’t pitched before, so I guess I’m not the most reliable source of useful information about this.

Anyway, the punchline: we got first prize.

Foodjing team posing for photo after their win
A winning team and cheque

Of course it feels pretty good. The prize was mostly non-monetary things (mostly services from the sponsors), but there was a last-minute donation of NT$60,000 (about US$2,000) from the judges. That comes in handy; my share will run the server I’m renting and pay for some domain names for future projects…

Postscript

The event has been covered on TechOrange and Penn-Olson. Two people from the team will actually continue FoodJing. They are based in a different city, and there are some other, administrative reasons why I wouldn’t be able to take part, but it’s all fine. When they get it done, I’m sure I’ll be a user. And I have plenty of lessons to take home:

  • I really can get excited about a lot of different ideas. Most ideas do have a useful core that can be developed, so at an event like this the idea one chooses makes almost no difference. Choose the team instead of the pitch.
  • One weekend is perfectly fine for getting something done. In the end we had a working (albeit ugly) prototype. If it could be done now, it can be done any time. I’ve got to use my weekends better.
  • Talk more to the people who can help and are willing to. We had a lot of mentors with great feedback on everything.
  • This is not a hackathon. I treated it as if it were one, and my goal was getting something working. Talking to some of the organizers, it took me by surprise that this is really business, and by real I mean real: the things we are making are as real as it gets. I was just thinking in terms of fun; I have to take things more seriously, without losing the ability to have a good time.
  • This time we succeeded. I cannot let that make me so risk-averse that I don’t try anything unless I’m sure to win, and I cannot go to the next Startup Weekend with the mindset that I have to win again.
  • There will always be more ideas. For every one that fails, or succeeds but goes on without me, there will be 10 more that can be taken up. So where are my next 10?

All in all, it was a great time, and I’m looking forward to the next event like this.