
Personal phone server, or Can you hear me now?

Ever since someone donated an IP phone to the Taipei Hackerspace, I’ve been trying to find time to set up an internal phone network between the hackerspace members. It should be fun to make our own infrastructure. Recently I did some research and got started. Since when I get into something I dive deep for a while, this was an intense week. This post summarizes where I’ve got in this time.

Asterisk & FreePBX

A bit of searching turned up Asterisk, a PBX (“private branch exchange”, aka telephone network) software. It looked interesting because it came with a story: a guy building something awesome because he didn’t know it was supposed to be difficult. It has also been open source from the start, with a successful company built on top of the project.

Also found that there’s a graphical control panel called FreePBX that makes the all-command-line-and-config-files Asterisk easier to use. Both projects had seemingly very detailed wikis, a long track record, and a strong following, which made them worth checking out.

The Server

Judging from the original install instructions on the FreePBX wiki, installing Asterisk & FreePBX looked like a complex (or rather, many-step) process. Didn’t want to litter my own computer with broken installation artefacts, so enter VirtualBox: a virtual machine is easy to wipe and restart.

There’s a dedicated, preinstalled FreePBX distro based on CentOS, but I’d had enough of CentOS for a while. Instead I just took Ubuntu 12.04.3 as a base, with FreePBX 2.11 and Asterisk 11 following the wiki. The install instructions were clear enough, though occasionally there were small differences that needed a fix. Nothing major, but I had to play around. After 5-6 reinstalls with increasing experience I got the basic functionality working, calls placed between extensions and such, but the performance and sound quality weren’t really that good. After thinking about what I could improve, I decided to take the next step: get out of the virtual machine, and up the version numbers (I’m an Arch Linux user for a reason, living on the bleeding edge).

Enter DigitalOcean, a hosting provider that I have used for other projects before (cheap, fast with SSD, good service). I set up a machine (aka “droplet”) in their Singapore data center, since that’s probably the closest one to Taiwan. I chose the 1GB memory instance, because in my VirtualBox experience Asterisk+FreePBX maxed out at around that much with a few test accounts.

Upping the version numbers, I went with FreePBX 12.0 (from git) and Asterisk 12.1.1 (from the download page), both testing versions. Asterisk had an extra dependency, libjansson-dev, compared to version 11; I didn’t check whether any of the earlier dependencies are no longer required.

The information control panel of FreePBX, the web interface for Asterisk

Got the whole system working (after a few droplet wipes), and played with the installation with a bit more confidence. From this initial experience, Asterisk is a bit like the Linux kernel: it’s modular, complex, focused on reliability, and its “make menuselect” is a familiar environment after years of “make menuconfig” for compiling my own kernels. FreePBX, on the other hand, is a bit like WordPress: it has its own auto-updater (just like updating plugins in WordPress), loads and loads of menus, focuses on configuration, and tries not to let any faulty module take down the system (I found quite a few buggy behaviours, so that’s a good idea). The kernel and WordPress are two familiar environments, so somehow I felt at home here too.

Asterisk has a bunch of vocabulary that I’m so far barely familiar with, and lots of functionality that I haven’t had a chance to test yet. FreePBX has a lot of functionality too, and it’s still a bit difficult for me to tell where an Asterisk function (module? resource?) ends and a FreePBX function (plugin?) starts. The fact is that I got excited about programmable phone routing (with Lua), fax-to-PDF, hotel-style wake-up calls, voicemail recording, call tracing, speaking time, simple conference calls, intercom functionality, regardless of whether it’s a module or a plugin…

Some additional server notes: voicemail requires outgoing email for notifications, which I set up with Mandrill and postfix. For this kind of testing it might not be important, but it’s good to secure the server at least a bit with fail2ban and ufw (Uncomplicated Firewall), and probably other things I don’t do well yet. Just sayin’.
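
As a quick sanity check that outgoing mail actually works before trusting the voicemail notifications to it, a few lines of Python talking to the local postfix are enough (a rough sketch; the addresses below are placeholders):

# Quick test of outgoing mail through the local postfix instance.
# The addresses are placeholders, substitute real ones.
import smtplib
from email.mime.text import MIMEText

msg = MIMEText("If you can read this, voicemail notifications should work too.")
msg["Subject"] = "Asterisk mail test"
msg["From"] = "asterisk@example.com"
msg["To"] = "me@example.com"

server = smtplib.SMTP("localhost")   # local postfix on the default port 25
server.sendmail(msg["From"], [msg["To"]], msg.as_string())
server.quit()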

Accounts / Extensions

Accounts on the server are extensions: a numerical value at which someone (or something) can be reached. The vocabulary and concepts are also new to me, so it took a while to understand how things are supposed to interoperate. Asterisk has a bunch of different kinds of extensions, of which I have tried the two main ones: SIP and IAX.

SIP

SIP stands for Session Initiation Protocol. As far as I can see it is basically a messaging protocol to set up a connection between two parties, and it also provides some other services, for example presence information (Online, Away, Busy…), messaging, and what not. The actual data of the call (voice or video) goes through RTP (Real-time Transport Protocol).

The voice data in the transmission is compressed with one of the many codecs available:

  • ulaw and alaw (G.711) are a pair of standard codecs, okay quality, one of the basic ones to have in any client
  • speex is a variable bitrate codec, haven’t used it much
  • gsm is lower bitrate, but lower quality too (think of crappy cell phone reception voice)
  • G.722 is a hi-def (HD) voice codec, really good! I think it beats Skype, and is on par with good Google Hangouts quality
  • G.729 is a non-free codec, shows up here and there, but I haven’t had a chance to try it; this is the other codec I’ve seen recommended besides G.722

Much of the learning curve in testing was here: how to set up the clients, and the server, so that they can communicate with each other. Who chooses the codec (caller, callee, server)? How to prioritize the codecs in different clients? What does it look like (or sound like) when there’s a problem in this area? How to debug and fix it?

Asterisk 12 has two different SIP channels or components: the classic one (chan_sip), and a rewritten one (chan_pjsip). The latter is built on a standalone library that can be used for other purposes as well. chan_sip usually works over UDP, while PJSIP can do UDP/TCP/WebSockets too, and it feels stable and fast. I would definitely use that if I had to choose between the two. Still, it is in a testing phase (both in Asterisk and FreePBX), so not without headaches.

There are a bunch of different clients that I tried:

On Linux:

  • Ekiga is nice and simple, and can sign into multiple accounts on multiple networks. Presence information, well integrated into the desktop with notifications and such. It does not seem to be able to handle non-standard SIP ports, though (which will be an issue further down)
  • Linphone is really multiplatform (Linux, Win, OSX, smartphones…), but it was crashing on me quite a bit, doesn’t integrate into the desktop (no notification, just sound on a call), and can be confusing with its many settings (the control panel looks a mess). It can handle non-standard ports, though.
  • SFLPhone is good, works pretty well, simple, and can do IAX communication besides SIP.

On Android:

  • Android has actually had full SIP handling capabilities built in for a while now (under “internet phone”). That would be awesome if there were more information on how to set it up and use it, but in theory a SIP account can be fully integrated into the system.
  • CSipSimple really impressed me, probably the best working client I found. It integrates with the system (calls are handled as ‘calls’ with all the icons, history, and so on), has good sound quality (it can use the G.722 codec), and so on.
  • SipDroid, VIMPhone, Linphone… I don’t even remember these other clients, all of them fell short somehow
  • Zoiper stands out as well, not just because it’s multiplatform, but because it’s pretty much the only one I found that can do both SIP and IAX. Its Android system integration is not as tight as CSipSimple’s, but it’s quite okay.

One of (the many) good things about SIP is that it is well known and pretty well supported. If there’s a “softphone” (a phone in software), it’s quite likely to have SIP communication capabilities.

The bad thing about SIP, though, is that it is well known and pretty well resented by the phone service providers. Many of those providers block SIP messages on their network, or sabotage the connection in some other way. On my own cell phone / 3G provider’s network, I couldn’t connect to Asterisk. In the forums some suggested that changing the port number that Asterisk listens on for SIP connections can solve things – and indeed, after moving away from 5060/5061 to somewhere else, I could connect. The celebration was short-lived though, because even though calls could now reach their destination, RTP communication (the part that actually transports the voice) was still broken. I don’t want to use a VPN all the time (though I might need to soon), and I want to keep the moving parts and settings to a minimum if others in the Hackerspace are to join this network as well, so SIP looks like a no-go because of the phone companies (darn).
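
One thing that helped while figuring out whether it really was the provider eating SIP (rather than a misconfigured client) was a crude reachability probe: fire a bare SIP OPTIONS request at the server over UDP and see whether anything comes back at all. This is a rough sketch and not a proper SIP stack, the host and port values are placeholders, and it only exercises the signalling path, so a broken RTP leg will still look “fine” here:

# Minimal SIP reachability probe: send an OPTIONS request over UDP and
# check whether any reply comes back. Host and port are placeholders.
import socket
import uuid

HOST, PORT = "pbx.example.com", 5060   # try 5060 first, then the non-standard port

request = (
    "OPTIONS sip:{host} SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 0.0.0.0:5060;branch=z9hG4bK{branch}\r\n"
    "From: <sip:probe@{host}>;tag={tag}\r\n"
    "To: <sip:probe@{host}>\r\n"
    "Call-ID: {callid}@probe\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Max-Forwards: 70\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
).format(host=HOST, branch=uuid.uuid4().hex, tag=uuid.uuid4().hex[:8],
         callid=uuid.uuid4().hex)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(request.encode(), (HOST, PORT))
try:
    reply, _ = sock.recvfrom(2048)
    print("Got a reply:", reply.decode(errors="replace").splitlines()[0])
except socket.timeout:
    print("No reply, the port is likely blocked or filtered")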

IAX

Looking around, I found another type of channel, using the IAX (Inter-Asterisk eXchange) protocol. The bad thing about it is that it is much less supported, but in turn it is not blocked by the phone companies either (since they don’t know about it).

Using SFLPhone and Zoiper I could successfully talk over 3G! Still, it is not all good; the devil is in the details.

Zoiper on Android showing an incoming call from an IAX test user
  • It’s good that there’s no need for custom ports
  • It’s bad that the IAX channel seems to be more unstable on Asterisk (or maybe I messed up my install after a while?): some extensions have trouble logging on for a while until the server is restarted, and the wake-up call plugin misbehaved with IAX extensions.
  • The lesser support also means less choice in clients. The ones I found cannot do G.722, so no HD voice anymore
  • There is a security setting (requirecalltoken) that not all clients support, not sure if there are any implications.

It’s also good that Asterisk can route incoming SIP calls onto IAX extensions (i.e. the caller doesn’t have to care what technology the callee is using). On the note of routing, I could set things up such that outside calls can be routed into the system. E.g. every hackerspace could have their own Asterisk server and interoperate to call members at other spaces – sounds like a lot of work and might not be worth it, but it also sounds awesome.

Summary & Future

I had a lot of fun playing with Asterisk. On the surface phone networks are familiar to everyone, but going deeper both makes things more confusing and opens my eyes to how many possibilities there are for making something useful.

There are a lot of things that I thought about, but haven’t tried yet:

  • Programmable dialplans (“what happens when a call is received”), via Lua. Lua is an awesome language, and probably a lot more large pieces of software have it embedded than I’d guess (since embeddability is one of its strengths)
  • Could script a lot more too via the Asterisk Gateway Interface (AGI) – a minimal sketch of the idea follows right after this list
  • There are a bunch of other protocols and acronyms in Asterisk, for example the Secure Real-time Transport Protocol (SRTP) and ZRTP, that could be worth figuring out for a deeper understanding and better security
  • There’s an Asterisk on Raspberry Pi project that looks interesting (if nothing else, then how do they squeeze the memory usage into the RPi’s <512MB?). Since Asterisk can be used with multiple servers in a network, the RPi could provide one kind of service (e.g. a GSM gateway) while other servers with more resources do other stuff
  • Using physical phones in the network: for example, having the traditional phone network come into the server, and an IP phone to ring out. Maybe setting up a fax endpoint (and sending faxes out as PDF or printing them). Basically anything that works on the threshold between physical and digital.
  • Should check out how the likes of Line, Voxer, and Viber are doing VoIP on Android – do they have any interoperability?
  • How about Twilio, can their system be a similar PBX on a much larger scale?
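
To make the AGI bullet point above concrete: an AGI script is just a program that reads the channel variables Asterisk hands it on stdin and writes commands back on stdout. The following is a minimal Python sketch of the idea, not something from my actual setup; “hello-world” is just the stock Asterisk prompt:

#!/usr/bin/env python
# Minimal AGI script: read the variables Asterisk passes on stdin,
# then answer the call, play the stock "hello-world" prompt and hang up.
import sys

def agi_command(cmd):
    sys.stdout.write(cmd + "\n")
    sys.stdout.flush()
    return sys.stdin.readline().strip()   # Asterisk replies e.g. "200 result=0"

# Asterisk sends "agi_*: value" lines, terminated by an empty line.
env = {}
while True:
    line = sys.stdin.readline().strip()
    if not line:
        break
    key, _, value = line.partition(": ")
    env[key] = value

agi_command("ANSWER")
agi_command('STREAM FILE hello-world ""')
agi_command("HANGUP")

Hooked into a dialplan with the AGI() application, this is roughly the shape any fancier routing script would take.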

The funny thing is that it looks like the original IP phone that started this whole adventure does not work with Asterisk. Never mind, it will be good for another project.


Switched to SPDY and now Google’s confused

Out of interest, I recently switched this site to SPDY, partly because I like to try out new things, and partly because I want to make things better and faster. So far it’s a mixed experience, with some puzzling changes that I cannot make heads or tails of.

The first step for the switch was bringing everything onto HTTPS, which I did with a free SSL certificate from StartSSL. I redirected everything from HTTP to the secure connection with a 301 HTTP code, so I thought Google would be able to follow it and replace the addresses in their index. Then I enabled the SPDY module in Nginx, and checking the result, it looked like I was in business.
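
Before blaming Google, it’s worth double-checking that the redirect itself behaves as intended; a few lines of Python are enough for that (a quick sketch, using my own hostname):

# Check that the plain-HTTP side answers with a 301 pointing at the https:// address.
import http.client

conn = http.client.HTTPConnection("gergely.imreh.net")
conn.request("HEAD", "/")
response = conn.getresponse()
print(response.status, response.getheader("Location"))   # expect something like: 301 https://gergely.imreh.net/
conn.close()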

Some time has passed, and a scary graph started to manifest itself in Google Analytics:

Google Analytics impression count, the site has changed around May 8.

Right after I made the changes, my impression count on Google dropped like a brick, now being exactly 0. That’s not really the change I wanted to see. Digging more into it, though, it looks like I still have a constant stream of visitors from Google Search:

Visitor numbers from Google Search, same time interval as the impression count.

How can I have zero impressions, but still half a dozen visitors from Search? The results in Webmaster Tools mirror this: dropping impression count, no crawl errors, the same or even better indexed count, and relatively good stats:

Google crawler stats, with a big spike when I switched over to HTTPS/SPDY and everything needed to be reindexed

The crawl seemed to have gotten a bit slower (the bottom plot of the three), but more consistent.

I wonder what the change could be – does the impression count depend on the method of access (HTTP/HTTPS)? Or did I make some breaking changes? If so, then why the conflicting information?

Being a scientist, my main concern is not actually the raw value of any visitor count, but understanding the reactions to my actions, and consistency of the “experimental results”.  I wonder what kind of technique I could use to debug all this?

Update 2013/May/28: 

Following some recommendations from the comments, it looks like the https:// version of my URL has to be added to Webmaster Tools separately. Now there’s a http://gergely.imreh.net and a https://gergely.imreh.net section as well. In the latter section, I can see that there are some impressions reported. Some weird things still exist: the sum of impressions from both is less than the number of visitors I reportedly get from Google Search; the crawl stats are shared between the two sections (i.e. the https version reports a lot of crawl stats even from the time when https wasn’t enabled), while most other data is separate for the two sections (e.g. impressions, search queries, sitemaps). Still, this is probably on the right path.

The impression count after adding a https version of my site’s records to the Webmaster Tools

After the Webmaster Tools changes, I have just switched the Google Analytics association from one WMT property to the other. Hopefully this will freak me out less, though it will likely take some days to see the changes in the result.


Fighting forum spam

As one of the managers of Ignite Taipei, I’m trying to come up with new ways to let the community communicate, new ways to share information, advice and all. A while ago I set up a forum at http://bbs.ignitetaipei.tw/ and thought it would be an interesting experiment. Well, so far it is useless for communication, but it turned out to be a very interesting experience from a sysadmin point of view.

I used FluxBB, because it looked simple enough, seemed to be quite fast (for low traffic volume at least), and is well configurable. Except that within a very short time I ran into a spam problem: lots of fake users registered, posting algorithmically generated garbage text with a bit of advertisement here and there.

First I looked into FluxBB’s own solutions, and it looks like it might not have been a great choice, because many of the spam-fighting plugins are out of date, not supported anymore, or just a real pain to set up. The immediate practical step I could take was updating my security questions, rolling my own version of “written with words, how much is 5 + 4?”, the regular low-tech captcha on FluxBB. It looks like the answers to the stock questions are already in spammer databases everywhere, so I had to write my own set, which seemed to work for a while, cutting down on red-flagged registrations. But it’s not ideal, since I want to make this a dual-language forum (Ignite Taipei has both English & Chinese as official languages).

Instead I turned on email confirmation: when someone registers, the password is sent to their email and they have to use that to sign in. It was okay for a tiny bit, then a crazy registration boom happened. I think I might be the only real member of the board (I did say it is a failure for communication so far :) and there are 500 other spam members. Looking at their email addresses, it seems all of them use Hotmail. That kinda suggests a giant failure at Hotmail to restrict automatic registration, which is probably a problem overall. I cannot just throw out Hotmail addresses either, because it’s a popular mail provider here in Taiwan too (my first email was Hotmail as well, but that was a looooong time ago, before it was Microsoft property).

So captchas don’t work, email doesn’t work. What to do instead? At the time I was playing around with Cloudflare, to act as an easy-to-use CDN. I had tried it before for our Ignite Taipei blog, which is hosted on Tumblr, and that unfortunately doesn’t play well with Cloudflare. I couldn’t use it for this blog before because of my DNS provider, but now that I’ve switched, I started playing with it again.

Cloudflare dashboard stats snapshot (parts of it)

Instead of enabling Cloudflare for the entire ignitetaipei.tw domain, I just turned it on for the forum, since it’s hosted elsewhere. And that totally did it. The spam stopped that very moment, and hasn’t returned since. I think what happens is that Cloudflare globally knows a lot of web/forum/email spam hosts, and can challenge them or generally ignore them. I can even see where those spammers are coming from.

List of captured threats on the Cloudflare Threats console

One weird (but actually not that surprising) thing is that the most active web crawler on the site (Cloudflare gives that info as well) was by far Baidu, so I guess more people knew about the site in China than elsewhere. Why’s that? Some forums that share vulnerable sites, or something like that? I barely had any Chinese content at that time, so it cannot be that. And since I turned on the threat control part, Baidu seems to have dropped off quite a bit (I submitted the site to Google, so now that’s the busiest crawler).

All in all, Cloudflare is an interesting experiment. I can really mess up my DNS with it, and could block my own site for several hours, but in general it’s worth it. Just have to be careful. For example, when testing, use their own name servers to check the information, and maybe instead of “automatic” time-to-live, set some very short time first. I usually use Google’s 8.8.8.8, and they pick up the first wrong setting really quickly, then it takes hours to pick up the correction I made just minutes after the first one.

After a bit of playing around, at least I have no spam anymore (keep fingers crossed). Now just have to get people to use the forums. :)


Laboratory 2.0 – a monitoring system

Looks like one of my specialties as a physicist, and my contribution to the labs where I have worked so far, is bringing different kinds of programming techniques and technologies to the table. I’m not saying I’m any better than many of the professors, post-docs, and students I’ve met so far (there are plenty of ingenious ones), it’s more like I experiment with different tools, have tried more of the cutting-edge or recent technologies, did some web programming, and can whip up something quick – which might not work very well at first, but does broaden the horizon for the rest of the people.

Also, I’m a lazy person, so I want to automate as much as possible. That was on my mind recently when we were preparing to do a vacuum-system bake-out. It’s essentially a procedure where a delicate experimental system, mostly made up of steel, glass, and stuff like that, is closed up from the atmosphere, all the air is pumped out, and then it is heated up to a high temperature (~150-300°C). One has to be careful, because things can break: there are temperature limitations for some materials, and also on how quickly that temperature can change, requiring careful monitoring of the status of the system. And the whole thing takes something like two weeks or more. A perfect setting for automation.

Set up the electronics

The pressure measurements are done by some expensive other equipment, so I didn’t have to bother with that part yet, and set to work first on the temperature monitoring. Before, it was a bunch of thermocouples and multimeters, requiring manual intervention and lots of labour. Instead, I got some inspiration from Adafruit’s Thermocouple Breakout Board, using the MAX31855 chip, and also from the Thermocouple Multiplexer Shield. The breakout board can handle only one channel, but another chip can be used together with it to switch between the different thermocouples, so they can be read out one by one. The multiplexer shield, on the other hand, was using an older measurement chip that I could not buy anymore. In the end, I found a good analog multiplexer that is sold in the computer market here in Taipei, the CD4067B, and it works pretty well.

Breadboard setup for temperature monitoring with Arduino

Of course, setting it all up was quite a bit of fun times, as there were way too many gotchas along the way.

  • The MAX31855 is a surface-mount component, and I hadn’t worked with those before. Not too bad, and it could be much neater, it just takes some practice
  • The MAX31855 is a 3.3V circuit, so the CMOS voltage levels used by my Arduino Mega ADK had to be level-shifted
  • Unlike the older chip, the MAX31855 really needs differential input, and it’s much more sensitive to the environment. This required a different kind of analog multiplexer than that shield had
  • The Arduino Mega is a new model for me, and had some strange behaviour in terms of serial communication
  • Surprisingly there are not too many options for 3.3V voltage regulators over here, just the LM1117, which is different from what others are using elsewhere
  • Lots of noise and stability issues until I figured out what should go where. For example, under no circumstances should the thermocouple touch conducting surfaces, and ground loops should be avoided
  • While the MAX31855 is advertised as “cold-junction compensated”, meaning that it accounts for the chip’s local temperature when measuring the thermocouple, it doesn’t appear completely compensated: we can get unexpected measurement changes because the chip is heating up, for example by being in a closed box.
  • Figuring out the right amount of time to wait between switching channels (375 ms seems to be good enough, 500 ms is totally fine)

In the end, though, we did have a nice 16-channel thermocouple multiplexer, sending the measurements to an LCD screen and to the computer over a USB cable.
Temperature monitoring board, soldered and in its lab setting with 16 thermocouple channels

This is then saved in a database, and can be accessed from elsewhere.

Visualize!

The thing that my co-workers were most amazed by wasn’t the electronics. Sure, they hadn’t worked with Arduinos, but they have done similar stuff. Instead, they liked the monitoring interface much more, the one in the picture right here (click to enlarge).

Bakeout Monitor interface showing the vacuum system, temperatures, pressures and long-term graphs (click image for full view)

It’s the schematic layout of our equipment, with the temperatures positioned where the actual sensors are. The change of the measured values in time is also displayed, with live scrolling.

I’m not saying it’s great. Thinking about it, the major insight that made it good for the rest of the people is that I realized how much better people understand visual data: the placement of the values at the corresponding locations on the schematic. That’s the only thing.

Inside it’s a MongoDB database (learned from previous mistakes, using a replica set at least), with Python scripts talking to the sensors and saving the data, NodeJS / Smoothie Charts for visualization (and plain old CSS positioning of <input> tags for the readings display), and nginx’s upstream module for running two monitoring servers just in case. It’s mostly in the Github repo of the monitoring code, along with the Arduino sketch for talking to the electronics.
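
The sensor-to-database path is simple enough to sketch; this is just the idea rather than the exact code in the repo, and it assumes the Arduino prints one “channel,temperature” pair per line (the serial port and database names are placeholders):

# Read "channel,temperature" lines from the Arduino over serial and store
# them in MongoDB. Port and database/collection names are placeholders.
from datetime import datetime

import serial                    # pyserial
from pymongo import MongoClient

port = serial.Serial("/dev/ttyACM0", 9600, timeout=2)
readings = MongoClient()["bakeout"]["temperatures"]

while True:
    line = port.readline().strip()
    if not line:
        continue                 # read timed out, keep polling
    try:
        channel, temperature = line.decode().split(",")
        readings.insert_one({
            "time": datetime.utcnow(),
            "channel": int(channel),
            "temperature": float(temperature),
        })
    except (ValueError, UnicodeDecodeError):
        pass                     # ignore the occasional garbled serial line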

It was actually quite fun to write it all, making the gradual improvements, trying new tech, trying not to lose too much data, amazed at how well it works. I especially had a good time learning about the database, scaling, fault tolerance, performance…

Of course there could be room for a lot more improvements.

  • My failover-restart bash scripts are awful, though they do seem to work more or less and counteract the USB unreliabilities
  • There were some changes to Smoothie Charts that I could improve on: logarithmic plotting, some display enhancements, and I wonder if it can be optimized more for performance
  • More efficient data loading: 12h of data is about 30MB in JSON format, which I send compressed; apparently it gets down to ~5% in size, but it still takes quite a bit of time to process on the frontend
  • The layout can now be changed from config files if the sensors change, so co-workers can do that without programming knowledge. I wonder if that can be simplified even more

Of course, I’m a person who generally overengineers stuff, so maybe it’s good to stop somewhere. And that somewhere might be the point where I got to use my Kindle for monitoring (it craps out on 1h of data already, but some real-time things are good enough).

Bakeout Monitor running on a Kindle 3, not perfect but it does work

Get on with it

I did learn a lot along the way, and I’m sure that with this experience I will be allowed to do a little bit more in the lab in terms of programming ideas. I don’t like that the rest of the system is currently forced to be LabVIEW, but that’s for another post, and there are so many things that can be improved in general as well. Let’s just go and do that.


There’s a war out there

Since I set up my little virtual private server about two months ago, I keep reading and learning more about its administration. In particular I’m trying to make it more secure, since nobody likes losing data or having their things used behind their back. I know that the Internet is a tough place. Most computer users are nicely isolated behind their routers and internal networks; nevertheless, I once had a freshly installed WinXP machine get infected less than 5 minutes after being connected to the Net. (Well, since then I don’t install anything Microsoft, and the first thing I take care of is security, so things are much better.)

Thwarting brute force attacks

One of the first things is securing remote login access to the machine: disabling root login for SSH is always a good idea. But since I’m interested in cleverer methods, I wanted to do something more potent and general. I found this blog post about how to limit brute-force attacks with iptables, so I set out to implement it. The basic idea is that if another computer is trying to connect too many times in short succession, then it is likely an attack. Use the firewall to see how many connections are made in a specific time interval to the sensitive ports, and if a threshold is passed then ban that host from connecting for a while. I liked it and had to implement it.

The information on the linked page is quite detailed and very useful. Just save the current iptables rules, edit them, and then restore.

# iptables-save > myrules
.... edit them rules ....
# iptables-restore < myrules

For remote servers one thing to be extra careful about is not to block the SSH connections completely: keep the current connection open, try to make a new connection and if you can log in, then things should be fine.

The only thing I have changed compared to the other site is the log level, so I can separate the messages better. In the following line there was originally --log-level 7 (debug); I’m using --log-level 4 (warning):
-A ATTACKED -m limit --limit 5/min -j LOG --log-prefix "IPTABLES (Rule ATTACKED): " --log-level 4

Then update the line in /etc/syslog.conf to:
kern.warning   /var/log/warnings

Of course this might vary somewhat from Linux distro to distro: the above is for my CentOS install with syslog.

From the logs

Well, not sure if my host was particularly busy or not – I assume it wasn’t since I don’t rank high in Google so fewer attackers would find my little “home”. Still, in the last month there’s a nice little collection of IP addresses which triggered that ATTACKED rule of the firewall.

Using Python I extracted the IP addresses from the logs, ran them through the GeoIP Python API to get their locations, and fed that into the Google Maps Static API to get this picture:

Location of hosts that triggered my ATTACKED iptables rules. Red: once, blue: 2-9 times, yellow: 10+ times

Altogether, in about 1 month, I logged 110 ATTACKED triggers from 47 different hosts. Most of them tried only once; there was one that did it 48 times. According to the GeoIP database, it is from Varna, Bulgaria. Well, if there is one good thing that came out of this, it’s that Varna actually looks quite good and I’d be interested to visit it. :) Talk about my strange reactions to things…

It seems Europe and China are up to no good. Not sure if American baddies are fewer, or just targeting mostly Americans. Might investigate the regional differences some time later. Though this is just for curiosity and fun; if I were serious, then I could set up a proper honeypot.

Some technical notes on making this picture:

  • The GeoIP Python API looks like one of the worst documented pieces of code I’ve ever seen. I found a tutorial that helped me get the results I wanted: cities and locations, not just countries.
  • Static maps are quick, dirty and limited. Will try to figure out how to use the Google Maps API for a proper zoomable, scrollable, annotated map. Could imagine making a heat-map of threats, or better colour-coding of the number of attempts from each IP/city.

Anyways, at least there’s no sign of unauthorized entry so far, since most of these attacks are not sophisticated at all. I wonder if I’d recognize if I ever was targeted by a sophisticated attack, but that’s not something to fret over. Just keep the automated backups going and it will be all fine. :D

Update:

The Python script I used to get that map can be found over here.
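
For the record, the gist of that script is roughly the following, a sketch rather than the real thing, with the log path, GeoIP data file and map size as placeholders:

# Pull attacker IPs out of the iptables log, look up their locations with
# the legacy MaxMind GeoIP bindings, and build a Google Static Maps URL
# with one colour-coded marker per host. Paths and sizes are placeholders.
import re
from collections import Counter

import GeoIP

hits = Counter()
with open("/var/log/warnings") as log:
    for line in log:
        if "IPTABLES (Rule ATTACKED)" in line:
            match = re.search(r"SRC=(\d+\.\d+\.\d+\.\d+)", line)
            if match:
                hits[match.group(1)] += 1

gi = GeoIP.open("/usr/share/GeoIP/GeoLiteCity.dat", GeoIP.GEOIP_STANDARD)

markers = []
for ip, count in hits.items():
    record = gi.record_by_addr(ip)
    if not record:
        continue
    colour = "red" if count == 1 else ("blue" if count < 10 else "yellow")
    markers.append("markers=color:%s|%f,%f"
                   % (colour, record["latitude"], record["longitude"]))

url = ("http://maps.googleapis.com/maps/api/staticmap?size=640x400&"
       + "&".join(markers))
print(url)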