Categories: Life

Retro-spectacular: 2011

The last day of the year is customarily used for reflection. It's a very useful artificial boundary that makes us think in a time span (a year) that is still quite manageable for humans. One year is long enough to create change, but still short enough that most of one's projects are only on their way and haven't reached their full potential. Taking time to think about those projects and that change makes the good parts more permanent and the bad parts more temporary.

This year was really a game changer for me. It was different in quantity and quality alike. I remember thinking at the end of 2010 that I had missed a lot of opportunities and time had just passed me by for one year. Actually, for quite a few years I think there weren't too many things to speak of – though all of them have sown the seeds of the awesomeness that was 2011.

Calendar with all days checked off except today, December 31.
The year is almost out

Let’s take some (probably incomplete) stock.

Writing

This blog itself is almost one year old; I started it in January. I haven't written as much this year as I wanted (like, how come there's nothing since October?). This is not good, and it could conveniently be blamed on being too busy with all the projects I've been up to, but not completely. It benefits me greatly to write as much as possible, sometimes in an organized way (like here), sometimes completely just in a flow. It is not really an excuse either that "I couldn't find enough topics"; anyone who talks to me offline often cannot get me to shut up about a thousand things. So better get to it.

Doing NaNoWriMo was limited to a few days of novel writing for me; I got only 10% done, which is almost nothing. I said almost, because in those few days there were indeed times when the storytelling was working, I had never before felt so good about writing, and it changed the way I now read other people's writing. So it's not a total loss.

Doing 750 Words was probably the best influence: write at least 750 words every day, about anything. I signed up for every monthly challenge to write each and every day. I made the first one, in October I think; since then I have failed every time, often in a stupid way, just forgetting about it. Next year I've got to get myself off the Wall of Shame again. One thing I learned from it is that once something is not perfect (i.e. I missed a day), I tend to let it go and miss more days: failures aggregate if one lets them. A habit worth getting rid of.

Brain stuff

I really enjoy programming, and the Language of the Month series was great for learning a bit about Scala, Lua, Prolog, Javascript, and a bit of Go that I haven't written up yet. It wasn't every month in the end, so maybe I should rethink the project, but I want to continue: there are just too many programming languages, and they are an awesome way to exercise one's brain and learn completely new ways of thinking.

I managed to create a couple of small tools and sites as well, like WatchDoc that I use myself all the time. Nothing too big, and I'm still looking for a project that I can build something substantial for. But being able to make your own tools online when you need them is just as rewarding as working with your hands offline. Computers and the internet are the next generation of Lego if you know how to talk to them.

I took part in the first online Stanford classes: Machine Learning, Artificial Intelligence, and Databases. All three were totally worth it; it's an experience I have yet to write up, but I'm already using so many things I have learned. Also, my friends' friends come to me saying "I heard you were doing the Stanford online courses this year, do you want to do it together next year?" There are many more really promising courses announced for the spring semester next year, so now I have the problem of having to choose between them. Three were kinda manageable, so right now I still have to cut down from the seven I have noted down.

Somehow I had time to read more this year as well, though I failed my Yearly Reading Challenge, totaling out at 30 books. I planned one for each week, but the challenge is just the motivation; the ones I did read this year were all really worth it. I have hundreds of books on my to-read list, many of them from friends' recommendations, so I'm looking forward to what next year's 52 books will be.

Community

The biggest change with regards to community and my drive as a force of change was riding the StartupBus. I cannot overstate how much that changed me: the people, the things we did, the travel and sights that came with it. I'm really glad to still be in touch with many of the people there – and hope to get back in touch with quite a few more. It is a community I'm proud to be part of, and I can't wait to see what else comes out of it.

The bus set a few different things in motion, one of which ended up being Ignite Taipei. It is probably the single most important thing that I did this year. Or actually we did three of them, together with the most inspiring people I could ever wish for. It is something I want to continue for the foreseeable future and want to grow, and I'm sure I'll be growing with it.

A smaller side project was Geek Dinner: getting together, eating, and being able to talk in a way where I don't have to hold back on anything. Programming, art, social networks, photography, microcontrollers, laptops, phones: all fair game, and people don't tune out within 10 seconds. It seems there's really a need for something like this in Taipei; people were really happy and eager. I hope I can carry it on.

A lot has changed about how I use communities and how I interact with people on Facebook/Google+, because I realized a few things about how I prefer to be interacted with. This led, for example, to the No-Like Manifesto, where I try whenever possible to give a (meaningful) comment and use likes/+1s only as a last resort, when that is meaningful in itself. This led me to much more discussion and let me connect with people better. I can indeed say I know my friends a bit better now than I did before, in big part because of the 24/7 interaction availability of whatever technology or network there is for it.

Looking forward

This is just a short summary; I must have missed countless things. Now, however, it is really time to look forward. I'm already feeling so excited about a lot of projects that I'm planning for next year, and if I can carry over the excitement of 2011, it will be extraordinary too. It will include films, art, electronics, and definitely a lot, a LOT of people, friends, and a lot of kicking ass.

And hopefully a lot of happiness. ^^
Cheers to You!

Categories: Life

Blog mixtape #1

Today, 'bout 5 am in the morning, I just thought I'd make a mixtape to match my mood.

I've been playing around with Grooveshark more and more recently, and today I spent half the night sorting through songs. I guess I really prefer it to YouTube in terms of finding the songs I know and want to listen to again. It's not as good for music discovery, but most of the time (say 4 out of 5) it has what I'm looking for. It could likely use some proper cleanup; most if not all artists' song lists are a mess, with loads of duplicates and all. One can upload songs too, to be able to listen everywhere, but I'm not sure what goes into the main collection. None of the things I uploaded did.

Tape clip art

So here's the mix, give it a try. Mostly things I have already known before; probably the majority of people don't. Not very well researched, just something that came naturally at this wee hour. The sky is starting to brighten by now; fortunately it's a public holiday today in Taiwan, so it won't make much difference.

Let me know what you think.

Categories: Programming

WatchDoc – an experiment in Chrome Extensions

I keep remembering that good ideas start from doing something fun, instead of doing something easy. About a month and a half ago I was reading the Google Doc that I have with some friends, listing ideas about "Changing the World". It's all the things that should/could be better in the world, the "I wish it were some way" kind of things.

In that list of a hundred or so I saw one entry wishing for a way to see changes to our shared documents more easily than going back to the Docs home page and checking whether something has happened. I was thinking that this would be something I myself would totally use. I couldn't find anything like it yet, but I had some time on my hands to experiment, so WatchDoc was born…

Dropdown menu of WatchDoc
The interface of WatchDoc with lots of testing documents, emails blurred out just in case.

Getting it done

I hadn't written a Chrome Extension before, but it seemed like such a suitable fit, and I had been looking for a project to try myself with. They are practically open source (since you can look into any one of them regardless of where you installed them from) and use only standard web tech, just like making a website (HTML + CSS + JavaScript). Indeed, every Chrome extension is just some web pages displayed in a special way, some JavaScript running in the background, and/or JavaScript modifying the page you are on. Very neat, I would say…
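To make that a bit more concrete, here is roughly the shape of a minimal extension. This is a made-up illustration, not WatchDoc's actual files: a manifest describing the pieces, and a background script doing the periodic checking.

/* manifest.json (illustrative only):
{
  "name": "Example Doc Watcher",
  "version": "0.1",
  "browser_action": { "default_icon": "icon.png", "default_popup": "popup.html" },
  "background_page": "background.html",
  "permissions": ["notifications"]
}
*/

// background.html loads a script like this one, which polls for changes
function checkForUpdates() {
  // ...fetch the feed here and compare it with what we saw last time...
}
setInterval(checkForUpdates, 5 * 60 * 1000); // check every 5 minutes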

Since I didn't know where to start, I tried to find an extension that I could take apart and learn from. Fortunately, there was one, straight from Google, called YouTube Feed. So the first part of the project was slowly gutting that one and replacing parts until I got something working the way I wanted. That is easier said than done, because the feed reader has somewhat different requirements than the code I had in mind, but it was close enough that I could keep most of the internal structure.

Some notes about the journey:

  • Authentication: OAuth can be pretty troublesome, but in the end it was easier this time than I expected from past experience. Maybe because YouTube Feed was written reasonably well already.
  • Icons: Wanted to take the icons for the different document types from Google Docs itself. Took some time, but using the developer tools (inspect element) in Chrome I could finally find the links and download them.
  • More icons: Wanted to use the icons as a CSS image sprite just like the original extension, but couldn't find a good program to combine PNG files into a single image. In the end I wrote a quick Python script to do just that.
  • Extension icon: For the extension icon I used Google Docs' own little icon (I think it's kinda fair to use, especially now that they have changed it; it's v9 of their icon theme compared to v7 at the time of my programming), and I had also found a matching hi-def icon on a free icon site. I just wish I could find it again for proper attribution; that still nags me.
  • Buggy notification: Some parts of extension writing were pretty annoying. For example, I wanted to get desktop notifications to work just the way they do in Gmail. I don't know how they do the auto-hiding, but I'm pretty sure it's not with the standard simple desktop notification type, because I couldn't make that simply hide itself after a certain time. Instead I had to do it the more difficult way: using an HTML page to define the content and then internal JavaScript to close it (a rough sketch of the approach is right after this list). It took a while to realize that I have to pass my parameters (the data that I wanted to display) as query parameters, but now it works pretty well.
  • Buggy libraries: these are always fun to have, because you have to find a way to work around them or fix them. This time it was a jQuery URL parsing library that crapped out on corner cases the developer hadn't thought of but I fortunately ran into. It took a while, but I fixed it up; let's see if upstream will incorporate it.
  • Optimization: it's good to have things working first and then have them work well. For example, originally I re-read the user's whole feed; now it reads only the part of the update feed that can contain relevant information, so there are fewer requests to the server and the updates come in several seconds quicker.
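About the notification approach mentioned above: it looks roughly like this. This is only a sketch of the idea, not the actual WatchDoc code, and the parameter names are made up; webkitNotifications was Chrome's notification API at the time.

// In the background page: create an HTML notification and pass the data
// to show as query parameters.
var docTitle = 'Some shared document';
var userName = 'someone@example.com';
var url = 'notification.html?title=' + encodeURIComponent(docTitle) +
          '&user=' + encodeURIComponent(userName);
webkitNotifications.createHTMLNotification(url).show();

// In notification.html: read the query parameters, display them, and
// close the window after a few seconds to mimic Gmail's auto-hide.
var params = {};
location.search.substring(1).split('&').forEach(function (pair) {
  var kv = pair.split('=');
  params[kv[0]] = decodeURIComponent(kv[1] || '');
});
document.body.textContent = params.title + ' was updated by ' + params.user;
setTimeout(function () { window.close(); }, 8000);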

Release

After some work I thought it's better to release first and ask questions later. So I registered on the Chrome Web Store (registration is a $5 one-time fee to get rid of spammers). Not much hassle; in about 20 minutes everything was done, and now you can find WatchDoc in the Chrome Web Store. You can also find the source on GitHub.

The Chrome Web Store seems to have some strange policies, though. E.g. it didn't let me update if I didn't have preview pictures of the right shape, and sometimes it only told me halfway into the submission of a new release, so I had to scramble to make some test documents and nice screenshots of the right aspect ratio to finish the submission. Nevertheless it is pretty easy to handle, and useful too.

One beef I have is that I don't have any way to reply to reviews. Many more people post negative stuff than positive, and once I've fixed something there's no way to get in touch with the reviewers to ask for a re-evaluation. This makes for a very one-sided experience.

It is good – though addictive – to watch the number of +1s grow, as well as installs and users. The site must have some way to check people's installations, because the total number of users did decrease after the initial increase (the high-water mark was somewhat above 1500, now it's a bit below 1400). I guess I need to improve the quality a bit more, and maybe there are not as many people sharing Google Docs with others as I had thought. :) Anyway, I think this is much more than the number of people who used any other software/site I've made, so no grounds to complain.

Post-release

The release was picked up surprisingly quickly by other websites; I think there are some automatic ones monitoring the latest submissions to the Web Store, and others take the news from them.

WatchDoc Google Analytics as of the time of writing; some review of it in the text.

Here's my Google Analytics since the release. The big spikes in the beginning go up to about 500 visits/day (tiny, but much more than I have ever had), and are mostly due to Lifehacker. After that I have basically just a trickle of visitors (about 20/day), with a little spike recently that is direct traffic; I wonder where from…

Some sites that reviewed WatchDoc (the ones with the highest referral counts)

I set up a support site on GetSatisfaction as well, to be able to have a conversation with those who are persistent enough to submit some feedback regarding bugs. I've used it a couple of times, and it's quite practical. I'm finally learning how to do product support, which doesn't mean it has become easier. It's still surprisingly maddening to debug a problem until it happens to hit me as well by chance so I can check it locally. I've got to figure out some better remote debugging setup. At this point it seems to work well enough for about 1400 people, but the fact that I have lost about 200 shows that I still have a lot of problems to fix.

Into the future

It was a fun project to work on and it does most of the things I wanted it to, but it's dead ugly. I'll either find a designer and fix that, or just leave it like this because it doesn't matter much…

I might work on it a bit more, especially bugfixes, though it would be better to have some ideas about what is missing. Everything I can think of would complicate the user experience, and that's not my plan: document preview, ignoring updates of certain documents, multiple logins… What else?

I guess it is more likely that I will soon start to work on something else; let me know if you have something fun you wish to see :)

Categories: Programming

Language of the Month: Javascript

Continuing the Language of the Month series after a little bit of a break while I was busy with other stuff:

Javascript icon
Just a Javascript icon...

It is very long overdue, since I have been looking at Javascript for, like… 14 years now? But I never really spent time understanding it, because I never really needed it. I'm old enough to remember that when I started to browse the web, every time there was a page using Javascript, people's sentiment was "oh no, that's going to be very slow, I don't even want to check this site anymore". Pretty much like it was/is with Flash later on.

Compare this to now, when 100% static sites have practically disappeared and no web developer worth their salt should skip learning it. I'm glad that browser makers spent a lot of time improving performance and that so many interesting projects came out of it.

My impression of the current state of the art is that with HTML + CSS + Javascript it is now easier than ever to make good front-ends for programs. I'm mostly a command-line guy (and that's pretty easy with many scripting languages, including the Python I use), but I cannot deny that I'm in the minority. Still, when I need convenience, I can even imagine people creating local (meaning not internet-enabled) software with these things.

Since more and more people have had similar ideas and started to work on them, there are plenty of projects that make this even easier, like jQuery and all its plugins. I don't think it needs much of an introduction, and people can already do a lot of things more easily using it. One drawback is that many things could be done just as easily in pure Javascript, but people quite often don't know that. I certainly have to learn a lot more.
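A trivial illustration of what I mean (assuming jQuery is loaded and the page has an element with id="status"): the jQuery one-liner people reach for, and its plain-Javascript equivalent.

// With jQuery:
$('#status').text('Saved!');

// The same thing in plain JavaScript (in any reasonably modern browser):
document.getElementById('status').textContent = 'Saved!';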

This month I was reading a few books and sites that people recommended on Hacker News, and I also used the language for a few actual projects. Now that's a change compared to the previous Language of the Month columns.

Projects

WatchDoc: a Chrome extension that notifies you when your shared documents on Google Docs change. Chrome extensions are merely HTML+CSS+JS code, so it was a perfect way to try a few things. (Wrote it up here.)

NowJS real-time games hackathon: NowJS is a real-time communication plugin for NodeJS, the JS server. I wanted to make a game for this hackathon, but ran out of time. I spent some time working with it, and it's actually pretty awesome once I started to understand it. I do want to finish the game at a later time (it's a multiplayer trivia game). (Will write it up here later.)

Venus & Mars: a little afternoon project using Facebook to help my friend's research assignment at her university, listing people's status updates separated by gender. It looks awfully ugly, because I just wanted to make it work, but for the fun of it it's in NodeJS, so it was good practice for my JS-fu. (Will write it up here later.)

Impressions

I'm definitely going to learn more of it, because now that I'm starting to understand it I quite like it, and I cannot imagine it going away anytime soon. Now that it is just a matter of seconds to set up a project on the web (really, Heroku, App Engine, and dotCloud are all one click away), there's no good excuse not to.

Good

  • JSON, 'nuff said. It's just such a good data format, both human and machine readable. It seems to be pretty much the standard by now (a tiny example is right after this list).
  • No problem (it seems) with Unicode and international characters. Though I think it uses UTF-16 while much other code uses UTF-8; not sure if that makes any difference.
  • Feels quite light and flexible (from the language point of view, not necessarily the resources needed)
  • Since the source of websites is necessarily open, it is possible to learn from others' examples much more easily than otherwise.
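A two-line illustration of the JSON point: the same text is valid data for the machine and perfectly readable for a human.

var text = '{"title": "Language of the Month", "year": 2011, "tags": ["js", "json"]}';
var post = JSON.parse(text);        // string -> object
console.log(post.tags[0]);          // "js"
console.log(JSON.stringify(post));  // object -> string again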

Bad

  • Feels like it has a lot of baggage from its long(ish) and torrid life, which makes it a bit inconsistent. E.g. the first day of the month is 1, but the first month (January) is 0 (see the small snippet after this list).
  • People generally seem to write pretty bad Javascript code. Because it's so easy, everyone can make some useful project, but those projects are often full of bugs. Fortunately it's open source, so I can try to figure things out, and I did find a handful of upstream bugs. But the stress… huh…
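The Date quirk from the first point above, as a tiny snippet, because it trips me up every time:

// Months are 0-based, days are 1-based: this is January 31st, not February.
var d = new Date(2011, 0, 31);
console.log(d.getMonth()); // 0  (January)
console.log(d.getDate());  // 31 (the 31st)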

Ugly

  • Formatting of JS code can be pretty unreadable (especially compared with Python, where formatting is not optional). It is made more difficult when I'm editing JS within an HTML file, since Emacs cannot handle that well.
  • Up until quite recently there weren't any really good tools to troubleshoot things. Fortunately there's Chrome with its Javascript console, and Firebug for Firefox. There are still some mysterious errors, and the debugging has to be planned well ahead.
  • There are many things that are straightforward but require a lot of typing. Fortunately projects like jQuery are trying to fix that, but still there’s a long way to go.
  • Just like in Lua, quotation marks are not required for keys in object literals, and they are still understood as strings. This kind of magic can be convenient, but on occasion it is confusing (a small example follows this list).
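What I mean by the unquoted-keys magic, in a small example:

// These two objects are the same: keys in object literals are strings
// whether you quote them or not...
var a = { name: 'WatchDoc', users: 1400 };
var b = { 'name': 'WatchDoc', 'users': 1400 };
console.log(a.name === b['name']); // true

// ...which is convenient, until the key you want is held in a variable:
var key = 'name';
var c = { key: 'oops' };  // the key here is literally "key", not "name"
console.log(c.name);      // undefined
console.log(c.key);       // "oops"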

Links

Books

Interesting Javascript projects and sites

  • jQuery: making it easier to use JS, especially with respect to HTML DOM manipulation
  • Node.js: server-side JS, thus it is possible, for the first time (?), to use the same language for the front and back end on the web (a hello-world server is shown after this list)
  • jsFiddle: easy online editor, prototyping and code sharing for the web (JavaScript, MooTools, jQuery, Prototype, YUI, Glow and Dojo, HTML, CSS)
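To show what "the same language on both ends" means in practice, here is the canonical Node.js hello-world server; this is the standard example, nothing specific to my projects.

// server.js: run with `node server.js` and open http://localhost:8080/
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from the back end, also in JavaScript\n');
}).listen(8080);

console.log('Server running at http://localhost:8080/');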

(last edited 2011-10-03)

Categories: Computers

Tech setup for an Ignite

Recently I was co-organizing our Ignite Taipei #2 event (see the pictures and watch the talks – the latter if you speak Chinese…). I try to be the self-proclaimed Chief Technology Officer (CTO) of the event. Either because I hope the tech side of things goes down well, or more likely so that if things fail, there's no one else to blame but myself. And things do indeed fail, all the time, in all different ways.

So as the CTO I try to make sure that everything computer-related runs well, and I did collect some useful scripts (for the mostly command-line-driven way I do things). I thought it would be useful to write them all up, mostly for me to remember, not just the scripts themselves but the rationale of some choices made along the way.

As Ignite's motto is "Inspire us but make it quick", I found that the Ignite organizer's motto can be "Everything will be just fine."

The computers I used for getting things ready for the show

Intro

Ignite, in short, is an evening of quick presentations, each of them exactly 5 minutes: 20 slides that auto-advance every 15 seconds. Altogether there are 10-16 talks in an evening. It aims to be an event for inspiring people, and something that should be relatively straightforward to organize.

Computer setup

I had 2 computers for the event, but those were actually 3 systems:

  • A Linux system (Arch Linux), my own computer that I knew very well and ran the presentations off.
  • A Linux/Windows dual-boot system that I borrowed: Windows (Vista, of all things) for PPT-to-PDF conversion and Ubuntu for live streaming.
If I could, I'd ditch the Windows part altogether, but if I can't, it would probably be better to run it in a virtual machine (so I don't have to reboot, more on this later) or on a separate (third) computer.

Pre-event

Most of the organization was done on Google Docs with a shared document, keeping tabs on who we have as speakers and what needs to be done.

Shameless plug: next time it will be even easier with WatchDoc, a Chrome extension I wrote after Ignite to get notification when shared documents change :)

Besides keeping up with what to do, I had to take care of the presentations that the speakers sent to me. Since I'm not a fan of PowerPoint and its unreliability in displaying things the same way on different computers (and LibreOffice is not yet ready to handle .ppt/.pptx as well as I'd like it to), I had to convert everything into PDF. People with Macs, using Keynote, were easy: they have PDF export. Most people using Windows/MS Office didn't. First I installed a "print to PDF" plugin, but that was just terrible, awful-quality photo conversion and all. In the end I had to get an Office 2007 just for this occasion and use its own Save as PDF add-in, for great good. Actually, it worked like a charm; I just wonder why they didn't make the same thing for Office 2003, which seems to be much more abundant (I know, 8-year-old software gets no love). Finally I had everyone's slides in the same, reliable format, uploaded to Google Docs and Dropbox so I could transfer them between computers easily.

One more thing I had to do to the slides: add an empty slide at the end of the 20 slides of each presentation, so it's easy to see when they are finished. I used pdfmanipulate for that. Prepare an empty slide (e.g. in LibreOffice), export it as empty.pdf, then, having all the talk PDFs in a sub-directory called "original", run a script like this:

#!/bin/bash
# Append an empty slide to each talk PDF found in $DIR
DIR=original

for f in $DIR/*.pdf
do
    remf="${f%.pdf}"      # strip the .pdf extension
    newf="${remf##*/}"    # strip the directory part
    echo "${newf}"
    pdfmanipulate merge --output="${newf}_extra.pdf" "$f" empty.pdf
done

Slides

The tech setup in the front; From Ignite Taipei #2

There were two kinds of slides to take care of: one with the speakers' names and their talk titles, the other the talks themselves, prepared as described earlier.

The speaker intro slides were done in LibreOffice. I had a little problem with that one as well: if Chinese characters were set to bold, they showed up blurred. Not good looking, not even legible sometimes. So just before the start I was scrambling to change all the text from bold to normal. It's just a pain.

The talk slides were shown using Impressive, a command-line presentation program written in Python. It is pretty good, quite flexible, and it's easy to set the slide timing, transitions, and total-time progress bar.

Use the following script, saved as e.g. present.sh, to show a talk as present.sh nexttalk_extra.pdf:

#!/bin/bash
# Run the presentation

PRESENTATION="$1"
SLIDETIME=15
TOTALTIME=300

./impressive.py -D 1 \
                -a ${SLIDETIME} \
                -d ${TOTALTIME} \
                -T 350 \
                -t CrossFade \
                -c persistent \
                "${PRESENTATION}"

There are some problems with it, though. Sometimes it took quite a while to show a slide that had a good-quality photo, which messed up the timing. The total time didn't quite work out the way I expected; I think every presentation was a bit longer than 5 minutes overall. Not too big a problem, but if I can do it right, I should.

Previously I used Adobe Acrobat Reader's auto-advance in full-screen mode; that was just fine, and I might go back to it next time if I cannot fix Impressive.

Web streaming + recording

It is a common thing to live-stream Ignite so other people can watch it too. I found it easiest to do with VLC + Justin.tv; they already have some software that seems to work quite well. The first time I had some sound sync issues; this time I managed to fix that, but in the end it didn't matter.

To do the stream using my Logitech QuickCam S7500 and sound on the line-in from the venue’s sound system, start VLC with the following script:

#!/bin/bash
# http://community.justin.tv/forums/showthread.php?t=7081
# Using webcam + line-in audio
# Display + Transcode (Save + Stream)
# needs jtvlc to get it out

vlc v4l2:///dev/video0 \
     :input-slave=alsa://plughw:0,0 \
     --sout='#duplicate{dst=display,dst="transcode{venc=x264{keyint=60,idrint=2},vcodec=h264,vb=600,acodec=mp4a,ab=128,channels=1,deinterlace,audio-sync}:duplicate{dst=standard{access=file,mux=mp4,dst=/home/user/ignite.mpg},dst=rtp{dst=127.0.0.1,port=1234,caching=2000,rtcp-mux,sdp=file:///tmp/vlc.sdp}}"}'

It is some mighty long line and I'm not sure if it can really be wrapped anywhere. Basically it displays the webcam's picture, transcodes it to H.264, sends it to an RTP socket, and saves it into a file at the same time. You might want to be careful with this, as the saved file will be overwritten every time the script is run.

The most curious thing is that in the copy that is displayed, the sound is not in sync, but actually the transcoded video is okay.

Next step is sending the stream out to Justin.tv, using jtvlc:

#!/bin/bash
./jtvlc-lin-0.41/jtvlc ${JUSTINTV_USERNAME} ${JUSTINTV_STREAM_KEY} "/tmp/vlc.sdp" -d

Here the username and stream key have to be filled in with your own values. If everything's fine, there's a big stream of debug information on the screen, and the channel goes online on the website.

At our event I had a problem: after a reboot the sound input was somehow borked and all our video was without sound; the final saved file couldn't even be opened. Never mind, fortunately we had a recording from a proper camera.

HD recording + postprocessing

It is also essential to have some good recording of the show, since that can make it really reach a wider audience, and it’s good to look back at it later as well.

This time we had a cameraman helping out, and after the talks finished I got the videos from him. He's using a Sony camera with a FAT32-formatted SD card, which was a nice pain to manage. Last time the video we got was in straightforward MP4 format, while this time it was 2 GB chunks of AVCHD, a proprietary format. I had to convert that into something I can manage.

It took a while to figure out, but in the end the result was acceptable.

1) Join the split parts with tsMuxeR. I had to make sure I was using joins, not just listing all the files and saving them as one (sounds like the same thing, but it isn't). In the end I had some ".m2ts" files.

2) Next I had to convert those into MP4. The m2ts files actually had H.264 video, which is fine for MP4, so I ended up just copying it. The audio had to be transcoded to AAC, because the original was AC-3, which is not an accepted audio codec for MP4. Fortunately ffmpeg can handle AVCHD files now. The only really tricky part was that the original video was 1080i – interlaced. Next time I have to make sure that the cameraman sets things to progressive; it makes life so much easier.

#!/bin/bash
INFILE=$1
OUTFILE=$2
# Start recreating video
ffmpeg -deinterlace \
       -i ${INFILE} \
       -f mp4 \
       -vcodec copy \
       -strict experimental \
       -acodec aac \
       -ab 128k \
       -y ${OUTFILE}

This took quite a short time, which is a relief; before I figured out that things could be this simple, I had transcoded files hundreds of gigabytes in size, and it took hours.

3) Split the video for uploading to YouTube. I used Avidemux for that, which can – barely – handle H.264-encoded video. Somehow it couldn't find the keyframes, so it decided that every 30th frame must be one, and I could only split things there. This resulted in every video having some strange pictures at the beginning (duh, missing keyframe). It might be better with re-encoding, or rather with finding a better program to handle H.264 video.

Future

Fixing stuff at halftime; From Ignite Taipei #2

Of course there are some things to improve next time:

  • Better video recording that eases post-processing. Preferably have our own camera that we learn how to use and not have to figure out something new every time
  • Once a computer is set up and tested, no more reboots
  • Check the timing of the slides to make sure they are really 15 seconds each
  • Improve Impressive, maybe prepare some patches: exit when presentation finished, show empty screen after last slide, timing issues
  • Switch to scripted intro slides so I don't have to edit each of them to make sure they look the same. Maybe use one of the Javascript web presentation frameworks and a full-screen Chrome window (a rough sketch of the idea is after this list)…
  • Set up our own website for Ignite. This one is the big one. It should allow us to do a lot of interesting things, but I'll try to get myself to build it first with a few features only, not with all the bells and whistles. A WordPress page with some plugins, a full-blown Django site, or something else? Time will tell…
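Just to sketch the scripted intro slides idea (not any particular framework, only the bare mechanism I have in mind): a full-screen HTML page with two placeholders, say an element with id="name" and one with id="title", and a bit of JavaScript that fills them in from a list, so every slide is generated from data and looks the same.

// Hypothetical intro.js: the speaker list would come from our planning doc.
var speakers = [
  { name: 'Speaker One', title: 'Five minutes about something great' },
  { name: 'Speaker Two', title: 'Another inspiring topic' }
];
var current = 0;
function showSlide(i) {
  document.getElementById('name').textContent = speakers[i].name;
  document.getElementById('title').textContent = speakers[i].title;
}
showSlide(current);
// Any keypress moves on to the next speaker's intro slide.
document.addEventListener('keydown', function () {
  current = (current + 1) % speakers.length;
  showSlide(current);
});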