Categories
Computers

Editing talk videos for Ignite Taipei

As an organizer of Ignite Taipei, I have plenty of technical things to take care of: managing people's slides (plenty of fun there), running the presentations (they need the Ignite-style 15-second auto-advance), setting up the web stream (where everything can go wrong), and recording the talks themselves. One thing I hadn't done yet was editing the videos.

For all previous events we got some more experienced people to help out. The results were okay, nothing spectacular but perfectly usable. It still took a lot of time, though, and I don't like asking people for favours. So this time, for Ignite Taipei #7, after almost a month of procrastination, I decided to do it myself. Ignite is an experimental ground for me, where I try out a lot of the ideas I have about startups (even if it is a non-profit, it can be run like a startup), and I figured that if I were the CEO, I'd want the buck to really stop with me: I have to experience the work myself to be able to find better people to do it later. And I like video editing anyway (skills from the good old DVD-ripping days came in really handy this time as well).

Recording

The recording was done with my Canon EOS 550D. When I got the camera, I could have gotten a 600D, but I thought "I won't be using this for videos!" Duh! Now I have to put up with freaking large files, because the 550D cannot compress them on the fly. I also need to check whether I can get better SD cards, because at 1080p the recording broke last time (that's why the Ignite #6 videos are still not edited). 720p (at 50 fps) seemed to work somehow, and looks good too; I will still try to fix 1080p in the future. I now also have a GoPro Hero3, which can do 1080p without breaking a sweat, and I plan to use that one for the next Ignite.

Title screen

A video starts with a good title screen. Previous editors could manage animations and all the works, which I had no idea how to do this time, so I kept it simple: a static page. But I still had to add my own spice: not just a plain title and header, but something good-looking that expresses the spirit of our organization. So I got a cool HDR photo of Taipei City (well, actually Zhonghe, a suburb to the south) from my friend, Chia-Da Hsu, tuned it a bit to work as a background image, and pulled it into Inkscape.

Editing window of Inkscape
Creating the video titles in Inkscape

I like Inkscape a lot: open source, capable, fun.

Put in the background, position the text. The font used by Ignite, and thus for most of our own text, is Insignia LT. To make the text (set to 20% gray) more readable, give it a black outline: Copy, Paste in Place, set the stroke paint to black, make it thick (20 px for us), blur it with a suitable value (2 for us), then move that copy one layer down. Voila: outlined, dapper text. I only had to make one, then replace the text (and recreate the outline) for each talk.

Editing

I needed to put the video together with the slides, side by side, with synchronized timing, so it would look similar to the original talk. First, I needed to get the slides in image format, which I keep doing with a small ImageMagick script.
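
The core of it is something like this (a minimal sketch: the file names, density, and output size here are illustrative, not the exact values from my script):

    # Rasterize each page of the slide deck PDF into a PNG image;
    # -density sets the DPI used for rendering the vector pages.
    convert -density 150 slides.pdf -resize 1280x720 slide-%02d.png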

Unfortunately, the ImageMagick on my Arch Linux seems to be choking on this script, which used to work so well 6 months ago: it looks like it outputs only the first slide. I had another computer with an older version of ImageMagick, and there it worked like a charm. I'll have to investigate – or wait until the next Ignite and hope the problem goes away.

Next I had to find suitable software for editing. As much as I looked, Linux didn't have many good editors, or even many of any kind. Avidemux doesn't like MP4 and has bitten me in the butt before, Cinelerra didn't work for some other reason, and those are the only two I could find in the official Arch Linux repo. I had to look further to get to Flowblade, an editor written in Python. I've got to say, it's such an amazing bugfest that it drove me nuts sometimes, but it got the work done, and that's more than I can say about the rest of the candidates.

The setup: the video on one track and the slide images on another, a translation to move them side by side, then finally a blending compositor so that the layers don't overlap.

Flowblade window during editing
Combining the video and slides in Flowblade

It looks quite good. It was a rocky start figuring out how everything worked, but once it did, I got one talk done in about 10 minutes, which is great! It helps that the Ignite format is predictable (15 s per slide): I could just add an image and find the insertion point for the next one in a very short time. Though now I see that Acrobat Reader, which I use to display the slides, doesn't usually keep accurate timing.

Then came the rendering. Originally I used a pre-compressed version of the videos (down from 2 GB for 5 minutes to about 200 MB), but the quality just wasn't that good after the joint rendering. I went back to the original source and tweaked the parameters. In the end I discarded the stock parameter settings entirely and set my own manually. To get a good result, the important ones are preset (from ultrafast to veryslow: the slower, the better the compression) and crf (a lower number means better quality; 18 is good, 17 is a bit of an overkill that I like, at about 20% larger files than 18).
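
As a standalone command line, the equivalent would be something like this (a sketch: the preset and crf values are the ones above; the file names and audio settings are illustrative assumptions):

    # Render with x264: -preset trades encoding speed for compression,
    # -crf sets the quality target (lower = better quality, larger file).
    ffmpeg -i talk-edit.mp4 -c:v libx264 -preset veryslow -crf 18 \
           -c:a aac -b:a 192k talk-final.mp4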

Too bad that Flowblade cannot do 2-pass encoding; maybe I can do that differently another time, for better quality and smaller file size. I also had to make sure to mark the beginning and the end of the video well and render just that part, otherwise it crashed right at the end of the job. Not good, considering that each render took about 40 minutes.

Flowblade unfortunately uses absolute paths to remember things, and the settings cannot be edited very easily, so I couldn't move the rendering onto a more powerful machine. Maybe I could make a tool for save-file conversion; it's all Python anyway…

When that was done, I remembered from listening to a lot of YouTube videos that the sound left something to be desired, and all I had was the puny DSLR microphone recording, so I ran it through normalization to fix the audio level.
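
With ffmpeg, that can look something like this (a sketch: the loudnorm filter is one way to do it, and the file names are illustrative):

    # Normalize the audio to a standard loudness while copying the
    # video stream through untouched.
    ffmpeg -i talk-final.mp4 -af loudnorm -c:v copy talk-normalized.mp4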

Upload

Time to upload to YouTube. Fortunately the network is great here in Taiwan, so it took very little time; sometimes the post-processing took longer than the actual upload.

The YouTube video info editing window
Editing all the options for the uploaded video on YouTube

Among the settings I always go for Creative Commons, not that anyone has remixed our videos yet. I also set some tags, which might improve our visibility in search, as well as the recording date and location. Writing the description is tricky: most of our videos are found through recommendations on the YouTube website, so it had better be good. I wonder what would make it work better?

Ready to share

All that done, I just created a playlist to tie the videos together, and found this new (to me at least) look, which is not too bad:

YouTube playlist window with all the uploaded videos
Finally, got the playlist ready

Now, after 2 days, we have over 200 views, which is not bad considering that I have barely advertised yet, just friends sharing on Facebook and such (and you know how Facebook hides everyone's precious updates, so it's a surprise we got even this many).

Finally, the results can all be enjoyed.

Categories
Programming

The Gift of Code

I like advent calendars a lot. They can bring a lot of surprise, preparation, focus, and joy. They can come in many shapes and forms, and they encourage DIY – make your own calendar, count the things that are important.

This year, I got to play with a very interesting "advent calendar" called 24PullRequests. It is the kind of thing that I don't understand why nobody has done before. The mission: help out open source projects by submitting enhancements and fixes (i.e. "pull requests"), and do that for 24 days, counting down to Christmas.

my 24 Pull Requests calendar

I had to take part in that one, and while the result wasn't as successful as I had wanted, it was my best contribution to open source so far.

My pull requests

Instead of 24, I managed to make 4 enhancements that were ready to be sent off. It's no consolation that nobody seems to have managed 24, but never mind. Here are the things I made:

  • SmoothieCharts: make the charts use newer browser animation technology, which performs better and saves battery as well. This one was prepared somewhat earlier than December, but the final version was pushed within the right time frame. It's being tested, not merged yet.
  • OpenHack: I'm organizing the event in Taipei, and noticed that another location's page had a broken image link. I hunted down the same picture in Google Cache and set it up again.
  • Python Guide: added some info about installing certain Python packages on Arch Linux and Ubuntu. This is an embarrassingly tiny fix; there's so much more to do here.
  • AngularJS: this fixes the build script, which one couldn't run if the system Java can't run in 32-bit mode. I didn't know that this was a Google project until they sent me a request to sign a contributor agreement. I feel strangely humbled.

Lessons learned

Even four contributions were a lot of experience, because all of them were so different. Here are some lessons learned:

  • Write good pull requests – and that starts with writing good commit messages! People keep saying this, but seriously, there is no excuse not to do it.
  • Once the changes have been sent in, don't mind that they are not accepted yet. Every project has its own pace. Keep working on whatever you like.
  • I was looking for low-hanging fruit, but one still has to dig in to make a meaningful contribution.
  • The issue tracker is a good place to start looking for things to fix, but it is not always helpful: it can be difficult to understand the problem others are trying to describe if you are new to the project. On the other hand, try to use the code yourself; I'm sure you'll find some pain points right away (that's how AngularJS went). Also, the busiest issue trackers are not the best ones – they are full of things that would side-track you for a long time. Projects with a medium issue count are good for such an improve-and-run contribution.
  • Don't be afraid to do things, but still do them the best you can. Your contribution won't always feel meaningful, but even a little improvement is more than most people do (just like the Python Guide fix was).
  • Keep things simple – easier to do, easier to pull – even if that sometimes takes longer to write (the AngularJS contribution shrunk to a quarter of its size while I was trying to figure out the simplest way to achieve what I wanted).
  • If interested, don't worry if the project uses a programming language you don't know. You can pick up new things more easily than it seems. Also, many projects give you feedback on your contribution to help you improve it.
  • 24PullRequests doesn't encourage working on your own stuff, but that doesn't matter: there are another 11 months for that, or every day after these contributions are done.
  • How to do this for the whole year? A regular bug-squashing day? I still need to get deeper into projects, but the point is to go and explore. See also CodeTriage and ContribHub, linked from 24PullRequests.
  • If you get stuck on a fix but the problem is interesting, don't worry if it doesn't fit in the 24 days. Keep working on it; the recipients will be happy any time (I have 1 or 2 such patches).

Now let’s be a better coder in 2013.

Categories
Computers Thinking

The Internet hates programming

I was annoyed by something the other day (well, by meetings in general), and wanted to check whether other people are as annoyed by it as I am. So I went to Google and started to type "meetings are…", and since search corrects what I type according to what people around the Internet look for, I got this:

Google's suggestions for what meetings are
Google’s suggestions for what meetings are according to the Internet

The results were really to my liking, and I started thinking about whether I could use this to get some insights into other fields of knowledge. For example, what does the Internet think about different programming languages?

I started with LabVIEW, since that's the one I'm struggling with at work at the moment (but that's for some other stories). I don't want to overly generalize, and I have a lot to learn about it, but still, I am not that surprised by the number one suggestion:

Labview is crap
Labview is….. crap, according to the Internet

So let's see how the other languages and software engineering keywords are treated.

Positive sentiments

There were some keywords with positive sentiments – never really among the top suggestions, and never too many, but they did exist.

Neutral sentiments

There are some languages that come up only with language-related suggestions. I guess that's mostly because people don't have strong negative or positive feelings about them, or maybe they are more confusing than infuriating? Or people just don't know much about some of these?

Mostly negative sentiments

Now onto the good parts…

I think my favorites are "worse than crap" and "an exceptionally bad idea" – with such explicit phrases, people must feel very strongly about those. Also, I'm surprised that with this many "dead" languages there's still any programming going on! Of course, this is not an exhaustive list; I would love to see more examples and add them to it.

Anyways, let’s head back to coding now that I have cheered myself up. Programming is hard, and the Internet knows that too.

Categories
Admin Lab Programming

Laboratory 2.0 – a monitoring system

It looks like one of my specialties as a physicist, and my contribution to the labs where I have worked so far, is bringing different kinds of programming techniques and technologies to the table. I'm not saying I'm any better than the many professors, post-docs, and students I've met so far (there are plenty of ingenious ones); it's more that I experiment with different tools, have tried more of the cutting-edge or recent technologies, have done some web programming, and can whip up something quickly – something that might not work very well at first, but does broaden the horizon for the rest of the people.

Also, I'm a lazy person, so I want to automate as much as possible. That was on my mind recently while we were preparing for a vacuum-system bake-out. It's essentially a procedure in which a delicate experimental system, mostly made up of steel, glass, and stuff like that, is closed off from the atmosphere, all the air is pumped out, and the whole thing is heated up to a high temperature (~150-300°C). One has to be careful, because things can break: some materials have temperature limits, and there are limits on how quickly the temperature can change, so the status of the system requires careful monitoring. And the whole thing takes something like two weeks or more. A perfect setting for automation.

Set up the electronics

The pressure measurements are done by some expensive dedicated equipment, so I didn't have to bother with those yet, and set to work first on the temperature monitoring. Before, it was a bunch of thermocouples and multimeters, requiring manual intervention and lots of labour. Instead, I got some inspiration from Adafruit's Thermocouple Breakout Board, which uses the MAX31855 chip, and also from the Thermocouple Multiplexer Shield. The breakout board can handle only one channel, but another chip can be used together with it to switch between the different thermocouples, so they can be read out one by one. The multiplexer shield used an older measurement chip that I could not buy anymore, but in the end I found a good analog multiplexer that is sold in the computer market here in Taipei, the CD4067B, and it works pretty well.

Breadboard setup for temperature monitoring Arduino
Breadboard setup for temperature monitoring with Arduino

Of course, setting it all up was quite a bit of fun, as there were way too many gotchas along the way.

  • The MAX31855 is a surface-mount component, and I hadn't worked with one before. Not too bad, though the result could be much neater; it just takes some practice.
  • The MAX31855 is a 3.3V circuit, so the CMOS voltage levels used by my Arduino Mega ADK had to be level-shifted.
  • Unlike the older chip, the MAX31855 really needs differential input, and it's much more sensitive to the environment. This required a different kind of analog multiplexer than the one on that shield.
  • The Arduino Mega is a new model for me, and it had some strange behaviour in terms of serial communication.
  • Surprisingly, there are not many options for 3.3V voltage regulators over here, just the LM1117, which is different from what others are using elsewhere.
  • There were lots of noise and stability issues until I figured out what goes where. For example, the thermocouple should under no circumstances touch conducting surfaces, and ground loops must be avoided.
  • While the MAX31855 is advertised as "cold-junction compensated", meaning that it accounts for the chip's local temperature when measuring the thermocouple, it doesn't appear to be completely compensated: the measurement can change unexpectedly when the chip heats up, for example inside a closed box.
  • Figuring out the right amount of time to wait between switching channels (375 ms seems to be good enough; 500 ms is totally fine).

In the end, though, we did have a nice 16-channel thermocouple multiplexer, sending the measurements to an LCD screen and to the computer over a USB cable.

Temperature monitoring board soldered
Temperature monitoring board in its lab setting with 16 thermocouple channels
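
The core of the sketch running on the Arduino is a loop over the 16 channels, roughly like this (a sketch of the idea rather than the repo's actual code: the pin numbers and the serial output format are illustrative assumptions, and it leans on Adafruit's MAX31855 library):

    // Select each CD4067B channel in turn, let the analog lines settle,
    // then read the thermocouple through the MAX31855.
    #include <Adafruit_MAX31855.h>

    const int SELECT_PINS[4] = {4, 5, 6, 7};     // CD4067B address lines
    Adafruit_MAX31855 thermocouple(13, 10, 12);  // SCK, CS, MISO (software SPI)

    void setup() {
      Serial.begin(9600);
      thermocouple.begin();
      for (int i = 0; i < 4; i++) pinMode(SELECT_PINS[i], OUTPUT);
    }

    void loop() {
      for (int ch = 0; ch < 16; ch++) {
        for (int b = 0; b < 4; b++)              // set the 4-bit channel address
          digitalWrite(SELECT_PINS[b], (ch >> b) & 1);
        delay(375);                              // settling time between channels
        Serial.print(thermocouple.readCelsius());
        Serial.print(ch < 15 ? "," : "\n");
      }
    }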

The readings are then saved in a database, and can be accessed from elsewhere.

Visualize!

The thing that my co-workers were most amazed by wasn't the electronics: sure, they hadn't worked with Arduinos before, but they had done similar stuff. What they liked much more was the monitoring interface, the one in the picture right here (click to enlarge).

Bakeout Monitor  interface showing the vacuum system, temperatures, pressures and long term graphs
Bakeout Monitor interface (click image for full view)

It's the schematic layout of our equipment, with the temperatures positioned where the actual sensors are. The change of the measured values over time is also displayed, with live scrolling.

I'm not saying it's great. Thinking about it, the major insight that made it work for the rest of the people is that people understand visual data so much better: placing the values at the corresponding locations on the schematic. That's the only thing.

So inside it's a MongoDB database (having learned from previous mistakes, it's at least a replica set), with Python scripts talking to the sensors and saving the data, Node.js / Smoothie Charts for visualization (and plain old CSS positioning of <input> tags for the readings display), and nginx's upstream module for running two monitoring servers, just in case. It's mostly in the GitHub repo of the monitoring code, together with the Arduino sketch for talking to the electronics.
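
The sensor-side logging script has roughly this shape (a minimal sketch, not the repo's actual code: the serial port, database and collection names, and the line format are illustrative assumptions):

    # Read one line of comma-separated temperatures from the Arduino
    # over USB serial and store it in MongoDB with a timestamp.
    from datetime import datetime, timezone

    import serial                 # pyserial
    from pymongo import MongoClient

    readings = MongoClient("mongodb://localhost:27017")["bakeout"]["temperatures"]

    with serial.Serial("/dev/ttyACM0", 9600, timeout=5) as port:
        while True:
            line = port.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue          # read timed out, try again
            readings.insert_one({
                "time": datetime.now(timezone.utc),
                "channels": [float(v) for v in line.split(",")],
            })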

It was actually quite fun to write it all: the gradual improvements, trying new tech, trying not to lose too much data, being amazed at how well it works. I especially had a good time learning about the database side: scaling, fault tolerance, performance…

Of course, there is still room for a lot more improvement.

  • My failover-restart bash scripts are awful, though they do seem to work more or less and counteract the USB unreliability (a sketch of the idea follows this list).
  • There are some changes to Smoothie Charts that I could contribute back: logarithmic plotting, some display enhancements; I also wonder if it can be optimized further for performance.
  • More efficient data loading: 12 hours of data is about 30 MB in JSON format. I send it compressed, which apparently gets it down to ~5% of the size, but it still takes quite a bit of time to process on the frontend.
  • The layout can now be changed from config files when the sensors change, so co-workers can do that without programming knowledge. I wonder if that can be simplified even more.
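
For the record, those failover scripts are just variations on a watchdog loop like this (a minimal sketch; the script name and the delay are illustrative):

    #!/bin/bash
    # Restart the sensor-logging script whenever the flaky USB link
    # makes it exit, pausing a bit so a wedged port can recover.
    while true; do
        python read_sensors.py
        echo "$(date): sensor reader exited, restarting in 5s" >&2
        sleep 5
    done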

Of course, I'm a person who generally overengineers stuff, so maybe it's good to stop somewhere. And that somewhere might be the point where I got my Kindle to show the monitor (it craps out on 1-hour data already, but some of the real-time displays work well enough).

Bakeout Monitor interface running on Kindle
Bakeout Monitor running on a Kindle 3: not perfect, but it does work

Get on with it

I did learn a lot along the way, and I'm sure that with this experience I will be allowed to do a little bit more in the lab in terms of programming ideas. I don't like that the rest of the system is currently forced to be LabVIEW, but that's for another post, and there are so many things that can be improved in general as well. Let's just go and do that.

Categories
Computers Startups

A different smart dressing up

I was out with a few friends the other day; they were forming a team to go to StartupWeekend Taiwan Hardware next month. I have been to one or two previous StartupWeekends and they are good fun. I haven't made up my mind about the hardware one, but since it's their first experience and they asked me how they could prepare for it, I did try to gather some advice. I'm not sure how much better off they are with it, but I hope at least a little. Looking at the previous events, a little bit of experience and knowledge can put people way ahead, because Taiwan is just learning startups, and every bit of experience is golden.

Later, though, I started to think about what I had told them. One particular piece of advice I had to examine: don't start with a tech that you find interesting; start with a story instead, and choose your tech for that.

Now I'm not that sure this is good advice for every occasion, especially because I was brainstorming about one particular tech that really excites me, and if I was right earlier, then I'm wrong now. And the more I thought about it, the more I felt that there can be a case (especially for StartupWeekends) for starting with a tech, even if it is probably the harder way to get something awesome out of the process. Still, given that limitations make one more creative, it's worth doing at least some proper brainstorming about it.

Lilypad Arduino

The tech I was thinking about this time was the Lilypad Arduino. I haven't got one yet (it's sitting in my Sparkfun basket, ready to be ordered), but I really want to make a project using it.

Lilypad Arduino Ebroidery
Lilypad Arduino Ebroidery by Bekathwia (click picture for Flickr link)

The Lilypad Arduino is a microcontroller (a very tiny computer) that you can stitch onto fabric (using conductive thread), and people most often use it for wearable computing / smart clothes. It has a lot of peripherals: you can use it with LEDs, accelerometers, buzzers, buttons, light sensors, temperature sensors, wireless communication, vibration motors, and whatnot… Really, the imagination is the limit.

So now the question is: what kind of projects could I come up with that would use this? In the last few days of brainstorming (and now writing this), I came up with a couple of ideas; I'm not sure if any of them has been done. Not sure about their quality either.

Project ideas

Smart bag: figure out some simple/small way of communicating with little tags that can be attached to the items one always wants to bring along: keys, wallet, phone… The bag would sense when they are out of its range and warn the user. Never leave stuff behind at home or at a cafe again.

Visual turn-by-turn navigation for cyclists: use a smartphone to get navigation directions to where you want to go with your bicycle or motorcycle. A jacket outfitted with lights on both arms would have a Lilypad communicating with the smartphone, signaling which way to turn and when.

Movement direction clothes: smart clothes that would detect the position and posture of the person wearing them, and use light or vibration to signal what the next movement should be. They could maybe correct choreography or teach karate katas.

Communicating clothes: this could be done in multiple ways – wireless or infrared (like a TV remote control, just two-way), consciously controlled or running in the background. Ultimate spy clothes: send messages between people in a crowd without them visibly doing anything. Local businesses could send out signals, and the clothes receiving them could prompt you with a deal, or order ahead your favorite if you are a regular (though this could very easily get spammy). Also, the different units could synchronize with each other – cue super-visual flashmob.

Smart bedding: what if your bed could monitor your sleep and wake you up when morning comes? A bit like Wakemate, but for the bed.

People tracking: built into work clothes at a factory, it could log people in and out just by sensing them. Made part of the ticket at an amusement park, it could show which rides are too popular, communicate back to advise about waiting times, and suggest other ways to pass the time. It could help find people in a crowded place in an emergency, and check whether anyone is left behind. Though these can get very 1984 if done badly…

Health sensing: monitor vital body functions for people who are somehow at risk: older people, people partying in town (drink responsibly), people doing sports… Warn them when a critical situation is predicted.

Style guide: clothes having smart tags with "style", "colour", and "pattern"; either all of them collectively, or a central piece of clothing, would check whether the things you wear match each other. It could also make recommendations on what to wear: "I have these trousers, these shoes, and so on – which shirt should I wear with them?" Press a button, and the right choice lights up or vibrates in the closet.

Well, that's it for now. Most of these, I realize, are quite tentative; many of them seem to miss some key ingredient, or have a (technical) problem to solve before they would work. Which one would I bring to a StartupWeekend if I went – which one is ready to be made, and could be made in 54 hours? Tough one…

Found anything interesting here, or have some more ideas? I’d love to hear them, let me know in the comments!