Friday, October 4, 2013

Ubuntu on early 2009 Mac Mini (3,1)

Well that was a pain, but at least it seems to be working now. I wasted so much time on this that I figured it would be worthwhile leaving some notes.

Here's what didn't work:

  • Creating bootable USB drives using the following method http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx
    • The default 64 bit 12.04.3 server image
    • The default 64 bit 12.04 desktop image
    • The default 64 bit 13.04 desktop image
    • All three of these show up as 3 options when booting and fail to start the installer
  • Then I discovered that there are Mac-specific images for 12.10 and 13.04 - I chanced upon these and never once saw them mentioned in any guide I read elsewhere. Check the releases site, e.g. http://releases.ubuntu.com/raring/
    • Still, I had problems with the 13.04 desktop image - it installed but would not boot into the desktop unless I started it in recovery mode. I almost got caught up in trying to make this work, but then remembered I only wanted a headless server install in the first place
So here's what did work:
  • The 64-bit Mac (AMD64) server install image worked fine and now I have what I wanted (almost - I would have preferred the 12.04.3 LTS version but I can live with 13.04)

BuildItAndTheyWillCo.me part 2

Well, I don't think they will. I got this running and along the way I learnt a bit more about AWS, Chef and Knife. But now I've kinda lost interest :o and will take my new knowledge on to the next project.

For now there's a blank WordPress installation running at test.builditandtheywillco.me but it won't be there long. I'll keep hold of the domain name for a while though and add it to the list (along with IKnowTh.is).

Saturday, September 21, 2013

AWS Revisited - the birth of BuildItAndTheyWillCo.me

I've been playing with Chef a lot recently, mainly with Vagrant to separate and control development environments, but now I feel the need to get fully into the whole DevOps thing. With that in mind, I'm going to see if I can't get something running in a whole continuous deployment kind of way with AWS.

What to build though? To be honest I don't really care, so at first I figured: how about a personal web site? Then I realised that I might like to blog about the experience and remembered this blog. Then I had a moment and thought: what about a blog site that documents the process of building itself (Wooooah!).

What about a name then? Well, as I'm building something and hope that maybe people will visit it, I've gone for 'BuildItAndTheyWillCo.me' (now duly registered) in honour of the Kevin Costner classic 'Field of Dreams'. Ghosts are not welcome though ;)

Saturday, October 20, 2012

Gah! TCP won't work the way I want :(

Well, I've been having more fun messing around with Node.js and allowing myself to be distracted by interesting problems. The latest of these was triggered by my desire to integrate the BrowserStack beta API for cross-browser testing. This is a nice service that will fire up any number of different versions of browsers and point them at a URL that you specify. Integrating this with Testacular and Mocha means that I can run all my browser JavaScript tests in all browser variants and get the results right back in my shell immediately, without having to run a myriad of browser versions locally :) This even includes mobile platforms :D

So what's the catch?

Well, in order for BrowserStack to connect to my Testacular server it needs to hit a public URL. Unfortunately my development machine is not reachable on a public URL (nor do I want it to be, at least not really public). The solution suggested by BrowserStack was to use a simple service called LocalTunnel. This service provides a client with which you can create an SSH tunnel to a local port that you specify. The service then allocates a random subdomain of localtunnel.com from which it will forward HTTP requests to your local port. Very useful and sounds easy, right? Unfortunately, when I tried the client it didn't work, and the only clues were leading me into a world of SSH keys, etc.

Hence the distraction. As I probably want to fire up my tunnel and browsers programmatically, I'm not so fond of relying on command line interfaces - really I want a node module to do it. What's more, if I'm going to dig around in secure connections, why not take the opportunity to expand my knowledge in a direction that I want it expanded? So I decided I would implement my own tunnel service and client solution in node, and thus the tls-tunnel package was born.

Early on I figured I didn't want to mess about with generating random subdomains and trying to route based on the subdomain on which a connection was made, so instead I decided to assign ports on the server to satisfy client connections. This way, whenever a new client connects and requests a tunnel, the server allocates a port from a predefined range of available ports and starts listening on it.
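Something along these lines, where the names and the port range are illustrative rather than the actual tls-tunnel code:

var net = require('net');

var FIRST_PORT = 8080;
var LAST_PORT = 8089;
var available = [];
for (var port = FIRST_PORT; port <= LAST_PORT; port++) {
  available.push(port);
}

function openTunnel(callback) {
  var port = available.shift();
  if (!port) {
    return callback(new Error('no free ports in range'));
  }
  var server = net.createServer(function (connection) {
    // forward this connection down the requesting client's tunnel...
  });
  server.listen(port, function () {
    callback(null, port, server); // push the port back on close to reuse it
  });
}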

My plan was to use a free Heroku or Nodejitsu instance to then deploy my tls-tunnel server when I needed it.

This is where I learnt a hard lesson in the problems of bottom up development. Although I am applying TDD principles, I did in fact fail to validate one of my initial assumptions - that I could use multiple ports! Both Heroku and Nodejitsu will only expose one port to your application... this could/should have been a red flag. I realised this early on but ploughed ahead anyway, thinking that at a later date I could apply a small change to my tunnel and instead use the random subdomain solution to differentiate between tunnels.

So I got my tunnel working using TLS (hence the name), with clients and servers authenticating each other with their own self-signed SSL certificates. I was pretty proud of myself for implementing something that was in theory protocol agnostic - I had noticed that other similar solutions were limited to HTTP traffic... this should have been a red flag!
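The mutual authentication part boils down to the ca and requestCert options on Node's tls module. A hedged sketch, with illustrative certificate file names:

var tls = require('tls');
var fs = require('fs');

// The server trusts only the client's self-signed certificate...
var server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: [fs.readFileSync('client-cert.pem')],
  requestCert: true,         // ...asks clients to present a certificate...
  rejectUnauthorized: true   // ...and drops them if it doesn't verify
}, function (stream) {
  // only authenticated clients get this far
});
server.listen(8000);

// The client mirrors the arrangement with its own key pair
var client = tls.connect(8000, {
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),
  ca: [fs.readFileSync('server-cert.pem')]
}, function () {
  console.log('tunnel up, server authenticated:', client.authorized);
});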

I next turned to the problem of making it all work on one port. Having already learnt quite a bit about the TLS/SSL problem domain, I now learnt a hard lesson about the TCP domain, or more specifically the Node.js net domain.

I had made the assumption that when a raw TCP socket was connected to a server I would be able to read out the domain name that it had used... Wrong!!!

What LocalTunnel is doing is using the HTTP protocol to get the domain name that was used for the connection. GAH!! And what do you know, this is the same reason that Heroku and Nodejitsu limit access to a single port. Double GAH!!!
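To make the lesson concrete: TCP delivers nothing but bytes, so the domain name only exists inside the application protocol. A sketch of peeking at the HTTP Host header on a raw socket (illustrative, not LocalTunnel's actual code):

var net = require('net');

net.createServer(function (socket) {
  socket.once('data', function (chunk) {
    // The Host header is part of the HTTP payload; with any other
    // protocol there is simply nothing here to route on
    var match = /^Host:\s*(\S+)/m.exec(chunk.toString());
    if (match) {
      console.log('connection for', match[1]);
      // look up the tunnel registered for that subdomain and pipe...
    } else {
      socket.end(); // not HTTP, so no way to tell tunnels apart
    }
  });
}).listen(8080);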

So now I'm left with a choice. My solution can still work but I'm going to have to put it on an Amazon EC2 instance or something (I can get one for free for now). Or I can bite the bullet and implement the same HTTP restriction (boo) and do subdomain based tunnelling.

It's not such a simple choice though. On the one hand, it's easy to integrate Heroku and Nodejitsu into my development and testing process (and even share that), as opposed to the hoops I will have to jump through to get it up and running on an EC2 instance. But on the other, I don't want to limit my solution to HTTP, and I haven't actually verified yet that I can use random subdomains on either service (once bitten, etc.).

Perhaps there is a third way though - maybe if I only support one tunnel at a time I can use a single port...

That said, I'm leaning towards the EC2 solution for flexibility ("lean"-ing might be a bad choice of word here though - if you'll excuse the pun ;))

Saturday, July 21, 2012

Scheduling for the internet

While trying to figure out the best way to synchronise scheduled event start times across different users in different time zones, I managed to work my way around in a bit of a circle yesterday.

I have an app that allows users to schedule events and obviously specify start times. In my initial hacking I was specifying those start times and storing them in the database as strings (not even validated as dates, really just a placeholder to mock up the site). So yesterday I thought I would tackle this to make it more functional.

I added a date picker widget and a time picker widget and fixed things so that only dates and times could be specified which I then stored in my database.

But wait, I thought, how do I know which timezone the user intends? After all, when I publish this event to other users I will want to give the start time in their timezone. Hmm... So I started my research on timezones.

I started out trying to put a timezone picker on my event scheduler page which would default to the current timezone of the client's browser. This actually isn't so simple.

A major complication is that I don't really want the timezone, I actually want the locale of which the timezone is only a feature. The other feature is daylight savings time (DST). There are only so many time zones (which is quite manageable) but there are lots of variations in the treatment of DST (not so manageable). Unfortunately for me I need to consider DST if I am to know what real time an event organiser is actually aiming for (they will always be working in local time I presume and would not care to specify start times in UTC).

Here are a few of the interesting libraries that I looked at to get a handle on this.
  • On the client to detect and select a timezone
    • Josh Fraser provided the best hope for something simple with his client-side timezone and DST detection algorithm in JavaScript. But he does mention that folks should instead use...
    • Jon Nylander's jsTimezoneDetect solution (usage sketched after this list). This is apparently much more advanced and works off the complete list of time locales from the Olson zoneinfo database. Unfortunately it would be tricky to integrate into my web page and would present users with a huge number of options. I've seen those selectors before on the internet and they are annoying.
  • Then on the server to get a nice unix time in my database
    • node-time looked interesting
    • moment.js seemed to talk the talk but on further analysis I wasn't sure if it knew about DST or if I would have to tell it
    • timezone-js may have been the most promising
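For reference, jsTimezoneDetect's documented usage is at least pleasantly small (jstz being the global its script exposes in the page):

// Detect the browser's time locale client-side
var tz = jstz.determine();
console.log(tz.name()); // e.g. "Europe/London"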
But then came my small eureka moment... Why am I doing all this work? Well, pretty much because my client side controls give me strings and not Date objects. However, the browser does know what time locale it's in and how to present dates. So this is where I returned almost to my starting point.

I ripped out all the timezone selector stuff from my page and instead used the client side date functions to generate a Date object there and transmit a nice simple unix time back to the server for storage. For those that don't know, JavaScript timestamps are the number of milliseconds since 00:00:00.000 01/01/1970 (UTC) - classic unix time, just measured in milliseconds rather than seconds. They don't care about time locales. So now I do all the locale specific formatting and parsing in the browser. Seemed obvious after I'd done it :)
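In code the whole round trip collapses to a few lines (the picker variables here are hypothetical):

// In the browser: the Date constructor interprets these values in the
// user's own time locale, DST included
var start = new Date(pickedYear, pickedMonth, pickedDay, pickedHour, pickedMinute);

// Send a plain number to the server for storage - no timezone needed
var unixTime = start.getTime(); // ms since 1970-01-01T00:00:00.000 UTC

// Displaying a stored time later converts back to local time for free
var display = new Date(unixTime).toLocaleString();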

I may add a widget to my pages to let the user know which zone the times are being displayed in but I'm not sure even that is worth the effort. It would only catch a few issues with people working on devices with the wrong timezone set.

Friday, July 20, 2012

Grunt watch and the Node.js require cache revisited

Still inspired by James Shore's series "Let's Code: Test-Driven JavaScript", I've been continuing with my endeavours to get the Grunt watch stuff working flawlessly(?).

In my last post I mentioned some niggles that were remaining from my previous workaround.
  • The workaround only addresses the Mongoose issue
  • The workaround assumes intimate knowledge of Mongoose
  • Grunt watch still explodes silently when unhandled errors are encountered in tests
    • undefined references
    • nonexistent requires
    • etc.
The good news is that I think I have addressed all of these. In addition to that I've figured out some stuff about how to extend grunt and how to manipulate the Node.js require cache.

First off I thought I'd take a look at Mocha to see if it handled things better. After all, Mocha also has a watch function.
  • Mocha watch does not explode on undefined references (which is nice)
  • Mocha watch does still explode on nonexistent requires (actually I didn't find this out till much later on when integrating with grunt)
  • Mocha watch still failed to handle my Mongoose issue
  • Unfortunately Mocha watch doesn't integrate with JSHint and actually I'd quite like to lint my code on file changes too
So, despite only having the small advantage of not falling over so much, I thought Mocha showed more promise than NodeUnit, and as James noted it is much more active on GitHub. In fact it's under the same banner as Express and Jade, which are definitely very popular and well maintained frameworks for Node.js.

Next thing was to integrate Mocha with Grunt so that I can use the Grunt watch function to both lint and run tests on file changes.

The nice thing about writing my own task to run Mocha instead of NodeUnit is that it was then quite easy to fix the issue of exploding on nonexistent requires... it just needed a try/catch around the mocha.run call (see the task module below). In retrospect I could probably have added this to the existing NodeUnit task, but by the time I got to this point I'd already ported all my tests to Mocha.

[A short interlude on Mocha and Should...]

James noted in his videos that Mocha is targeted as a BDD test framework and as such he is not so keen on its verbosity. I can see what he means but, to be honest, I don't find it much of an issue and in fact quite like it, so for a while at least I think I'll stick with it.

I also tried the should.js assertion library, which provides an interesting take on asserts by making them a bit more like natural language. Things like: thing.should.have.property('things').with.length(5);

On first take I thought cool, and went full steam ahead in making all my asserts like this. Currently, though, I'm not sure I like it.

For one, I keep thinking that I should be able to write something in a natural way, only to find that it's not really supported - it kinda feels like I'm being teased. I guess this will lessen as I really learn the idioms.

A more annoying problem though is related to the way JavaScript handles types and comparisons. I keep finding comparisons that I think should work and don't, and then comparisons that I think shouldn't work and do! I think this is made worse by hiding the comparisons inside assert functions. As a result I'm starting to come to the opinion that not only is the should framework more trouble than it's worth, but in fact any assert framework that hides comparison logic is not such a good idea to use in JavaScript tests. This includes very standard things like: assert.equal(object1, object2);

I may revert to just a single check function that will better reflect how comparisons would actually be written in production code, i.e.: assert(conditionalCodeThatResolvesToTrueOrFalse);
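A couple of the surprises I mean, using Node's built-in assert module:

var assert = require('assert');

// Non-strict equality coerces types, so both of these pass...
assert.equal(1, '1');
assert.equal(0, false);

// ...while structurally identical objects fail (reference comparison):
// assert.equal({a: 1}, {a: 1}); // throws AssertionError

// The single check function style keeps the comparison in plain sight
assert([1, 2, 3].length === 3);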

[...interlude over]

So there I have it: I can now run my tests as files change and rely on the watch task to keep going no matter what happens (so far!). Just the Mongoose problems to resolve then, and actually I added another.
  • If a unit test's beforeEach function falls over, then the after functions are not run (the pattern is sketched after this list)
    • This means that, as I open a database connection in before and close it in after, once I get such an error I continue to get failures on every file change because the database can no longer be opened (it's already open)
    • Not as serious as the silent failures, as at least the watch process keeps pinging me and I can restart it. But still a little annoying
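The shape of the problem, sketched as a Mocha test file with illustrative connection details:

var mongoose = require('mongoose');

describe('events', function () {
  before(function (done) {
    // if this (or a beforeEach) blows up, the after below never runs
    // and the connection is left open for the next watch-triggered run
    mongoose.connect('mongodb://localhost/test', done);
  });

  after(function (done) {
    mongoose.disconnect(done);
  });

  it('stores a start time', function () {
    // ...
  });
});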
This new issue got me thinking again about the require cache. My previous investigations here had proven fruitless, but then perhaps I had been led astray by some dubious comments on StackOverflow. Beware, this code does not work:

for (var key in Object.keys(require.cache)) {delete require.cache[key];}

(for..in over the array returned by Object.keys gives the array's indices - '0', '1', etc. - not the module paths, so nothing relevant ever gets deleted)

So now I was thinking about the Mongoose module.
  • The problem isn't that the changed module is still in cache
  • The problem is that the Mongoose module is still in cache
  • In fact the problem is that any modules are still in cache
  • I must clear the cache completely before running my tests!
    • Actually I had tried this and it didn't seem to work
    • However, I had tried it in my tests themselves; now I could try it in my new grunt task :)
      • I had already needed to add code that dropped all my own files from cache to make things work. Come to think of it, it made sense to drop the rest too.
So I fixed the code above:

for (var key in require.cache) {delete require.cache[key];}

I tidied up my Mocha task, adding support for options, and this is what I have in a new module...
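A minimal sketch of the shape of it, assuming the grunt 0.3 era API and Mocha's programmatic interface (file patterns and names are illustrative):

// tasks/mocha.js
var Mocha = require('mocha');

module.exports = function (grunt) {
  grunt.registerTask('mocha', 'Run tests with Mocha', function () {
    var done = this.async();

    // Clear the require cache completely so that stateful modules
    // like Mongoose start fresh on every watch-triggered run
    for (var key in require.cache) {
      delete require.cache[key];
    }

    // Pick up options from the 'mocha' config property
    var mocha = new Mocha(grunt.config('mocha') || {});
    grunt.file.expandFiles('test/**/*.js').forEach(function (file) {
      mocha.addFile(file);
    });

    // The try/catch stops a nonexistent require in a test file from
    // silently killing the watch process
    try {
      mocha.run(function (failures) {
        done(failures === 0);
      });
    } catch (error) {
      grunt.log.error(error.stack || String(error));
      done(false);
    }
  });
};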


To use this I dropped it in a grunt tasks directory and updated my grunt.js file...
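Again just a sketch of the wiring, with illustrative paths:

// grunt.js
module.exports = function (grunt) {
  grunt.initConfig({
    lint: {
      all: ['grunt.js', 'lib/**/*.js', 'test/**/*.js']
    },
    // Options handed to Mocha by the custom task
    mocha: {
      reporter: 'spec'
    },
    watch: {
      files: ['grunt.js', 'lib/**/*.js', 'test/**/*.js'],
      tasks: 'default'
    }
  });

  // loadTasks takes the directory containing the custom task
  grunt.loadTasks('tasks');

  // Override grunt's built-in NodeUnit 'test' task with the mocha task
  grunt.registerTask('test', 'mocha');

  grunt.registerTask('default', 'lint test');
};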


Note that the call to loadTasks takes the directory name. Also note that I overrode the built-in NodeUnit test task, and that the options to pass into Mocha are given in the mocha config property.

So that's it: I no longer have to use my Mongoose workaround, as the Mongoose module is cleaned up along with everything else before I run the tests :)

I hope this will save me from similar gotchas in other modules too, but I guess I'll just have to code and find out :D