Saturday, March 19, 2016

Is it defensible to write industrial software in JavaScript?

I've been working for some time on an industrial instrumentation system. The software is predominantly written in JavaScript. That might seem an obscure choice - why not a Microsoft language, or a more typical PLC framework?

The "firmware" that runs on the sensor boards has always been written in C. This is very traditional for programming microcontrollers. Some other options are emerging, including embedded JavaScript, but none seems to come close to C yet.

The first version of the software that runs on the server was written in C#. This was my first real project in C#, but I think that anyone with Java experience can be proficient at C#. C# seemed a responsible choice - something that anyone should be able to pick up after me. The web client code was necessarily in JavaScript. I suppose Flash, Silverlight, Java applets (remember them?) or even CoffeeScript would also have been possibilities, but it's not much of a simplification to say that JavaScript is the only option for browser-side programming.

There's respectable advice from extremely intelligent people suggesting that big re-writes are rarely the best approach.

But I did a rewrite anyway. I switched from C# on .NET to Node.js. There were a few reasons.

  • The code was becoming harder to maintain. I hadn't been refactoring enough. I could claim it was "technical debt" but I think it was just a mess.
  • I was drawn to the emerging Node.js bandwagon/fad.
  • I felt myself becoming more proficient at JavaScript, and enjoying the lightness and simplicity of being able to do everything from a simple text editor and browser, without the "heavy" IDE required for C#.
  • It seemed like JavaScript would become a serious language. There was so much happening in the JavaScript community: so many nice open source packages, test frameworks, and clever people offering advice.

As the luminaries above would have predicted, the rewrite didn't solve all my problems. The new code is certainly not a model of programming virtue. But there is no question that it is better now than it had been. Most importantly, there are unit tests. It would be ridiculous to say that C# doesn't allow or even encourage unit tests. But somehow the JavaScript test frameworks (Mocha and Jasmine) feel simpler, lighter, and easier to use. Continuous integration servers like Jenkins can run these tests easily, without relying on that enormous TFS monolith.
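To illustrate what I mean by lighter, here is roughly what a Mocha test looks like. This is only a sketch: the add() function and the lib/maths.js path are invented for the example, and it assumes mocha has been installed with npm.

    // test/maths.test.js
    var assert = require('assert');
    var maths = require('../lib/maths');

    describe('maths.add', function () {
      it('adds two numbers', function () {
        assert.equal(maths.add(2, 3), 5);
      });
    });

Running ./node_modules/.bin/mocha from the project root picks up anything under test/ - no project file, no IDE.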

The event-driven aspects of JavaScript took some time to get used to, and it took even longer to reach what I feel is a level of proficiency. Again, it's entirely feasible to be event-driven in C#, but it's just more natural in JavaScript. My software is now even broken into separate self-contained services. Clearly I can't leave that without calling them "microservices". (If only I can get Docker into the picture, I'll be up there with all of the people who speak at conferences these days!)
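For anyone who hasn't met the event-driven style, the core of it in Node is the EventEmitter. A minimal sketch (the sensor and event names here are invented for illustration):

    var EventEmitter = require('events').EventEmitter;

    // A hypothetical sensor source that announces each sample as an event.
    var sensor = new EventEmitter();

    sensor.on('reading', function (value) {
      console.log('new reading:', value);
    });

    // The acquisition code doesn't know or care who is listening; it just emits.
    sensor.emit('reading', 42);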

I've just seen the results of the 2016 stackoverflow.com survey. It's interesting, gratifying, and perhaps a relief to see that JavaScript is again the most popular language. I doubt that this would be true of the subset of people developing industrial instrumentation systems, but I feel that IT/OT convergence is happening whether we like it or not, and I'm not afraid to be helping it along. In looking at language popularity, it's also useful to look at some other surveys:
TIOBE has JavaScript at number 8 (behind Java, C, C++, C#, Python, PHP, and Visual Basic).

An interesting aside: the Stackoverflow survey found Visual Basic to be the most hated language!
There are similar results from Stephen Cass in IEEE Spectrum and PYPL. Two GitHub analyses put JavaScript at the top. Clearly all these surveys are estimates, and the concept of popularity is vague anyway. But the message is: I don't think JavaScript was too obscure a choice for a modern software system.

I mentioned the openness and momentum in the JavaScript world, which was one of the points of attraction. People have recently been speaking of JavaScript Fatigue - difficulty in trying to keep up with all the alleged best practices that are emerging so quickly. I have experienced this. But I don't believe it will be fatal, or even serious. It may encourage a degree of extra caution before adopting the Latest New Thing.

For anyone overwhelmed by the apparent complexity of the JavaScript world, there are some good recommendations to get you started. I'm a bit behind, since I haven't made it into React yet. Maybe one day. It won't take a rewrite!

I now have the opportunity to bring some new people into my project. It feels a bit overdue. But it makes me realise how many giants' shoulders I'm standing on. There are so many things for a newcomer to learn, that it's hard to know where to start. Maybe that's why I had to write this blog.

Sunday, October 18, 2015

Windows Nano with vagrant, virtualbox, and powershell

I'm attracted by the vision of immutable infrastructure and phoenix servers. Interactively logging into a server and working on it seems rather non-repeatable, and a source of potential errors. I'm not very far along the journey, but I am moving forward slowly.

Prompted by Matt Callanan's presentation of Expedia's "primer" platform, and Hashicorp's recent release of otto, I wanted to make some more progress. Reading about Microsoft's Nano Server (also this and this) was the trigger.

On my Windows 7 home PC, I've already played with Vagrant and VirtualBox, so it was easy to find Matt Wrock's box to try. I thought I'd jot down a few of the obstacles I came across, in case it helps someone (perhaps me) in the future.

As a preface, I recommend installing chocolatey and then patheditor.

Install Vagrant and Virtualbox

Download and install the Vagrant msi package. I have version 1.7.4. Add the bin directory to your path. For me, it was C:\HashiCorp\Vagrant\bin

Download and install the Virtualbox installer. I have version 5.0.6. Add the directory containing VBoxManage.exe to your path. For me it was C:\Program Files\Oracle\VirtualBox
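If you took the chocolatey suggestion above, you can probably skip both manual downloads with a single command instead (I installed from the downloads myself, so treat the package names as unverified):
choco install vagrant virtualbox -y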

Clear Virtualbox Network Adapters

If you have already been using Virtualbox, it may be best to remove any network adapters that have been created. It took me several steps to get this right. There must be a nice command line way to do this, but I have resorted to the GUI. Start Virtualbox, and in File > Preferences, choose the Network entry in the list on the left. In the Host-only Networks tab (perhaps the NAT networks tab too), select any network, and click on the "-" button on the right.
If it seems that nothing is happening, check the task bar to see if there's a UAC shield requesting permission for Virtualbox to modify the system. That will happen for each network adapter you need to delete, and also again when the VM first starts.
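For the record, VBoxManage appears to offer a command-line route as well; something along these lines should list and then remove host-only interfaces (the adapter name below is only an example - use whatever the list command reports):
VBoxManage list hostonlyifs
VBoxManage hostonlyif remove "VirtualBox Host-Only Ethernet Adapter"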

Set up directory

Make yourself a fresh directory somewhere to do this test, and then open a cmd prompt to that directory. To set up the Vagrantfile, type
vagrant init mwrock/WindowsNano
Then, to download the box file (only 327MB for a Windows distribution!) if you don't already have it, and to build and boot the VM, type:
vagrant up --provider virtualbox
This will probably take several minutes. For me, the script finishes with a window containing the VM, and the cmd window showing a multiline error starting with "An error occurred executing a remote WinRM command." 

> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'mwrock/windowsNano'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'mwrock/windowsNano' is up to date...
==> default: Setting the name of the VM: vagrant_default_1445065064697_71724
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 5985 => 55985 (adapter 1)
    default: 5986 => 55986 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: WinRM address: 127.0.0.1:55985
    default: WinRM username: vagrant
    default: WinRM transport: plaintext
An error occurred executing a remote WinRM command.

Shell: powershell
Command: hostname
if ($?) { exit 0 } else { if($LASTEXITCODE) { exit $LASTEXITCODE } else { exit 1 } }
Message: [WSMAN ERROR CODE: 2150859072]: The WinRS client cannot process the request. The server cannot set Code Page. You may want to use the CHCP command to change the client Code Page to 437 and receive the results in English.

This error can be ignored: it's a known issue for this box. On my PC, it isn't important to use the "--provider virtualbox" arguments, but it may be for you.

If you can't resist logging into the shiny new VM, type the username "vagrant", press tab twice to skip over the domain line, type the password "vagrant", and press enter. You will see details of the network adapters and addresses.

This is the extent of the user interface available with Nano Server. The only options available are to shut down or reboot.

The network interface that this VM has is a NAT type, with two ports (5985, 5986) routed to the host. I wanted to have a "hostonly" network so that I could (eventually) interact with a variety of applications on the server.

Add hostonly network 

Getting a workable hostonly network was the part that took me the most work. Assuming that your priority is to get it working quickly, rather than see how little I know about virtualbox and vagrant, I won't describe my failed attempts. There may be better ways, but this one worked for me.

The first step is to remove the last VM and ensure there are no host-only interfaces lying around. Shut down the virtual machine with ctrl-F12 then enter. In the Virtualbox management GUI, right-click on the vagrant_default machine in the list, and choose Remove... and then Delete all files.
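If you'd rather stay at the command line for the VM removal, running this from the directory containing the Vagrantfile should achieve the same thing (I used the GUI here):
vagrant destroy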

In File > Preferences, choose Networks and if there are any host-only adapters, remove them (again noting that this may require UAC permission).

With a text editor, edit the file Vagrantfile that was created in your directory. I found a section at lines 27-29 that looked like this
  # Create a private network, which allows host-only access to the machine
  # using a specific IP.                
  # config.vm.network "private_network", ip: "192.168.33.10"
Add a line below this one saying:
config.vm.network "private_network", type: "dhcp"
Save and close the editor, and then in your command window, type again:
vagrant up 
You may need to approve one or two UAC permission requests. This time, when the machine starts up and you log in with vagrant/vagrant (no domain, again), you should see that the VM now has an extra network adapter and address.

At this point, you may look at your (host) computer's network adapters control panel, and see that there's one called "VirtualBox Host-Only Network". When I double click on it, choose Properties, and then double click on Internet Protocol v4, I see that the adapter address is 192.168.33.1. This initially seemed disappointing, since the VM address of 172.28.128.3 is not in the right subnet. Indeed, trying to ping 172.28.128.3 doesn't work.

But, in the Virtualbox manager GUI, look at the network adapter properties: File > Preferences, select Networks, then the Host-Only tab, select "VirtualBox Host-Only Network", and then click on the screwdriver icon. It shows me that the adapter's IP address is 172.28.128.1, which is in the same subnet as the machine. Out of interest, the next tab (DHCP server) shows that the VM has been given the first available address from the range indicated.

So why doesn't the ping work? I don't know. Maybe ping isn't enabled on the Nano box, or maybe there's a firewall. Perhaps I'll find out one day.

Connect Powershell to new VM

Start an elevated powershell. I do this by right-clicking on a CMD icon and choosing "Run as administrator", and then typing powershell.

To start working with the new VM, it's necessary to add it to your host computer's list of trusted hosts. To see if there are any there already, type
get-item wsman:\localhost\client\trustedhosts
If there are none, then you can add the new VM with
set-item wsman:\localhost\Client\TrustedHosts -value 172.28.128.3
If you already have some trusted hosts, and need to append this one to the list, the details here should help. The entries stored here will persist across reboots of your host computer, so you may not need to repeat this step.
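From memory, the WSMan provider also understands a -Concatenate switch that appends rather than overwrites, something like:
set-item wsman:\localhost\Client\TrustedHosts -value 172.28.128.3 -Concatenate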

The following commands (taken from Channel 9) don't need to be in an elevated shell, but perhaps it's easier to keep using the one you have. These two commands will set useful variables:
$ip = "172.28.128.3

$s = New-PSSession -ComputerName $ip -Credential $ip\vagrant
You'll be asked for the password in a separate dialog box. You know it's "vagrant". Then you can start working on your new VM with:
Enter-PSSession -Session $s
At last! You are now connected to the new machine, and you can use a variety of powershell (or cmd) commands to navigate around.

At this point, I recommend watching Rickster's 4-minute video to learn about copying files to and from the VM, and even editing and debugging scripts. It's on MSDN Channel 9. I discovered that I had to upgrade to Powershell version 5. The first download link I found (KB2908075) contained outdated certificates, but this one (KB3066439) seems ok.
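Two one-liners that may save you a moment (the script name is just an example): check which PowerShell version your host has, and - once you're on version 5 - copy a file into the VM over the session created above:
$PSVersionTable.PSVersion
Copy-Item -Path .\test.ps1 -Destination C:\ -ToSession $s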

Thanks to those whose work helped me get this far. If anything here didn't work for you, please leave a comment in case it helps others.

Saturday, April 19, 2014

Developing OpenShift Node apps with local Grunt

The last episode didn't really have an ending. I decided that perhaps it wasn't the right approach. While I enjoyed what Yeoman angular-fullstack offered, it may be a bit too opinionated/confining. I still think that OpenShift is a platform worth playing with, and my new aim is to find a way to make local development (perhaps with livereload) play nicely with OpenShift. So my new approach is to start with OpenShift, and gradually load other parts in. Ideally, I'd like this to work from my Windows machine without having to run a local OpenShift Origin.

If you want to follow along, you'll need to create an OpenShift account.

I started by creating a new node.js app on OpenShift Online, which I called nodule. OpenShift gives me the command line to clone the app locally:
> git clone ssh://5350ba6be0b8cd0c52000024@nodule-yesberg.rhcloud.com/~/git/nodule.git/
> cd nodule/
The app consists of 5 files and an empty directory.
nodule> ls -l
total 47
-rw-rw-rw-   1 user     group         178 Apr 18 15:51 README.md
-rw-rw-rw-   1 user     group         457 Apr 18 15:51 deplist.txt
-rw-rw-rw-   1 user     group       39855 Apr 18 15:51 index.html
drwxrwxrwx   1 user     group           0 Apr 18 15:51 node_modules
-rw-rw-rw-   1 user     group         701 Apr 18 15:51 package.json
-rw-rw-rw-   1 user     group        4790 Apr 18 15:51 server.js
I want to be able to develop this code locally, so I need to be able to run it. I tried node server.js, but it gave an error at (ironically) "throw err;". Time to delve a little more.

The README.md file points to the OpenShift documentation for the nodejs cartridge. The deplist.txt file contains a message noting that it's deprecated and that dependencies should be described in package.json. The package.json file shows that the app has a single dependency, express.js 3.4.4. To install that, I used npm.
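For reference, the dependency entry in package.json looks roughly like this (the exact version range may differ):

{
  "dependencies": {
    "express": "~3.4.4"
  }
}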

nodule> npm install
A screen full of npm http GET commands rolled past, and there is now an express directory inside node_modules, and another dozen inside express\node_modules. Now I can successfully run the app:

nodule> node server.js
No OPENSHIFT_NODEJS_IP var, using 127.0.0.1
Warning: express.createServer() is deprecated, express applications no longer inherit from http.Server, please use:
  var express = require("express");
  var app = express();
Fri Apr 18 2014 16:01:40 GMT+1000 (E. Australia Standard Time): Node server started on 127.0.0.1:8080 ...
I pointed Chrome to http://127.0.0.1:8080 and saw the familiar OpenShift app boilerplate index.html. That's a good start.

Before I make a change to index.html and commit and push, I want to check out what the local file system is like on the server. I would like to be able to avoid committing the node_modules, so I want to understand how that works. After using ssh to connect, and tree to see the file system hierarchy, it seems that the server has a bunch of node_modules available (async, connect, express, formidable, generic_pool, mime, mkdirp, mongodb, mysql, node-static, pg, and qs) at /dependencies/nodejs/node_modules.

So the next step is to make a small change to index.html, and to see if I can see that in the browser. Refresh. Refresh. Change isn't appearing. Stop the node server and restart - works. Well it's good to see, but it's not satisfactory for a development environment. Can't wait to get the livereload going! But ideally it will be part of the development environment only, and not the OpenShift one. It would be nice to have a staging/testing server in the cloud, and it might not matter if such a server had dev-dependencies loaded. But I want to make sure that I can configure a production-shaped system there.

Well perhaps it's best to exercise the commit/push process once before playing with connect-reload. I'm pretty new at git...

nodule>git status
# On branch master
# Changed but not updated:
#   (use "git add ..." to update what will be committed)
#   (use "git checkout -- ..." to discard changes in working directory)
#
#       modified:   index.html
#
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#       node_modules/.bin/
#       node_modules/express/
no changes added to commit (use "git add" and/or "git commit -a")
nodule>git add index.html
nodule>git commit -m "Modify index"
[master c25d1a4] Modify index
 1 files changed, 263 insertions(+), 270 deletions(-)
 rewrite index.html (83%)
nodule>git push
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 313 bytes, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Stopping NodeJS cartridge
remote: Fri Apr 18 2014 03:55:55 GMT-0400 (EDT): Stopping application 'nodule' ...
remote: Fri Apr 18 2014 03:55:55 GMT-0400 (EDT): Stopped Node application 'nodule'
remote: Saving away previously installed Node modules
remote: Building git ref 'master', commit c25d1a4
remote: Building NodeJS cartridge
remote: npm info it worked if it ends with ok
remote: npm info using npm@1.2.17
remote: npm info using node@v0.10.5
remote: npm info preinstall OpenShift-Sample-App@1.0.0
remote: npm info trying registry request attempt 1 at 03:56:01
remote: npm http GET https://registry.npmjs.org/express
remote: npm http 200 https://registry.npmjs.org/express
remote: npm info retry fetch attempt 1 at 03:56:01
remote: npm http GET https://registry.npmjs.org/express/-/express-3.4.8.tgz
remote: npm http 200 https://registry.npmjs.org/express/-/express-3.4.8.tgz
remote: npm info shasum aa7a8986de07053337f4bc5ed9a6453d9cc8e2e1
remote: npm info shasum /tmp/npm-452368-ggNewZcR/1397807761356-0.1033226354047656/tmp.tgz
remote: npm info shasum b9556fdb117f47bb5a97bc61ab5af7fc2dad8928
remote: npm info shasum /var/lib/openshift/5350ba6be0b8cd0c52000024/.npm/express/3.4.8/package.tgz
remote: npm info install express@3.4.8 into /var/lib/openshift/5350ba6be0b8cd0c52000024/app-root/runtime/repo
remote: npm info installOne express@3.4.8
remote: npm info /var/lib/openshift/5350ba6be0b8cd0c52000024/app-root/runtime/repo/node_modules/express unbuild
remote: npm info preinstall express@3.4.8
remote: npm info trying registry request attempt 1 at 03:56:02
remote: npm http GET https://registry.npmjs.org/connect/2.12.0
And about 10 screenfuls later,

remote: npm info ok
remote: Preparing build for deployment
remote: Deployment id is d0a8dd36
remote: Activating deployment
remote: Starting NodeJS cartridge
remote: Fri Apr 18 2014 03:56:18 GMT-0400 (EDT): Starting application 'nodule' ...
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://5350ba6be0b8cd0c52000024@nodule-yesberg.rhcloud.com/~/git/nodule.git/   
105dafe..c25d1a4  master -> master
Yes, it seems that the OpenShift Online page now shows my update --- good. When I login via ssh, it seems that the /app-root/runtime/repo/node_modules directory now has express and below that all its dependencies. I don't understand. Why did it work before with express elsewhere? Was it something I did that made node want to install express in the application itself? That's a mystery for another time, I suppose.

Now to start with the development environment. It seems that for now the essential tool is Grunt. That will run karma tests, jshint, and the live reloading that I find so neat. It took me quite a while to get the live reloading working. There are so many plugins that all seem to do similar things. I found it hard to understand how they should work together - each one seems to publish only a fraction of a gruntfile on its own readme.

I found that Romaric Pascal's tutorial at Rhumaric was the best way to get started with live reloading.

To start with, I need to install Grunt and some plugins.

> npm install grunt grunt-contrib-watch grunt-express grunt-open load-grunt-tasks --save-dev
Then I created a Gruntfile.js to get things started:

'use strict';

var path = require('path');

module.exports = function (grunt) {

  // Load grunt tasks automatically
  require('load-grunt-tasks')(grunt);

  // Define the configuration for all the tasks
  grunt.initConfig({

    express: {
      options: {
        port: 8080
      },
      devServer: {
        options: {
          bases: path.resolve('.'),
          livereload: true
        }
      },
    },
    open: {
      server: {
        url: 'http://localhost:<%= express.options.port %>'
      }
    },
    watch: {
      all: {
        files: 'index.html',
        options: {
          livereload: true
        }
      }
    },

  });

  grunt.registerTask('default', [
      'express:devServer', 'open', 'watch'
  ]);
};

This file sets up three tasks (express, open, and watch), and then runs them all as the default target. It's very basic for the moment, with no karma or jshint, and only serving the index.html as a static file (rather than through the express app). I saved that file at the top level of my project and started it all up from the command line.

nodule>grunt
Running "express:devServer" (express) task

Running "express-server:devServer" (express-server) task
Web server started on port:8080, no hostname specified [pid: 10184]

Running "open:server" (open) task

Running "watch" task
Waiting...

It was nice to see that a new tab opened on my browser (Chrome) and showed the index.html page. I edited and saved the file, and magically the page in my browser updated! The console showed

>> File "index.html" changed.
Completed in 0.001s at Sat Apr 19 2014 15:25:25 GMT+1000 (E. Australia Standard Time) - Waiting...
Now, I need to git add the package.json and Gruntfile.js, git commit, and git push, and see what OpenShift makes of it all. There's heaps of line noise as OpenShift tries to install all the devDependencies, but it finally all breaks because grunt-cli isn't installed. There are other third-party cartridges you could use if you wanted grunt on OpenShift, but I don't. What I want is to make OpenShift install only the production dependencies. And it seems this is now possible.

nodule>rhc env set NODE_ENV=production -a nodule
Password: *********
Setting environment variable(s) ... done
I touched the package file and added, committed, and pushed, and OpenShift deployed the system successfully.
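My understanding is that with NODE_ENV set to production, npm simply skips the devDependencies during the build - the same behaviour you get locally with:
npm install --production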

That's all I have time for at the moment. Next steps will be to improve the Gruntfile to add some karma and jshint, and to make sure that it's running the express app, rather than just serving static html.

Monday, January 27, 2014

Adventures with the MEAN Stack and Openshift

I'm rewriting an application. The existing user experience isn't too bad, but it could be better. And the code... But I'm also trying to stay up-to-date, or at least not-too-far-behind. So my aim is to use some modern technologies to make a Good single page application. Technologies of interest include:
  • Node. I'm learning more and more about just how great JavaScript can be.
  • Angular. Before choosing angular, I read a few comparisons with Ember, Knockout, and others. I think Angular was a fairly easy choice, but I wanted to be sure that I would have company.
  • Mongodb. I haven't used a NoSQL for anything more than a tutorial or two. This one seems to be pretty popular. My application will certainly not qualify as Big Data, so I'm sure that just about any SQL/NoSQL system would be fine. But at least I'll learn. Mongoose seems to be the obvious object-database bridge.
  • Bootstrap. I've played with YUI and Google Web Toolkit, but never used Less or any serious CSS framework.
  • Karma and Grunt for rapid testing and development.
  • Openshift: Red Hat's PaaS offering.

MEAN stack running Locally

I decided that the Yeoman angular-fullstack generator would be a good ramp to get me going. There are a few pre-requisites (Node, Yeoman, Bower) which I had already installed before documenting this. If I get time, I'll try on a fresh VM to clarify these. But to start my application, I made myself a nice fresh directory and typed
yo angular-fullstack
I declined the offer to use Sass and Compass for now, but did choose Twitter Bootstrap, all the Angular components (resource, cookies, sanitize, and route), and Mongo & Mongoose. The screen filled with all sorts of downloads. I'm sure I won't even see most of these, let alone learn what they actually do or how to use them in anger. But that's layering (complexity management) for you. After a few minutes, my scaffolded app was ready, and all I had to do was type
grunt serve
and the Node server started, warned me that it couldn't connect to a Mongo on localhost, and then my app page appeared in my browser. It was a little short on detail, but I didn't know that at the time.

I installed mongo in c:\mongodb, and created a config file which simply included a line "dbpath = c:/mongodb/db". I started mongo daemon with the command
bin\mongod --config conf\mongodb.conf
Now I can terminate grunt (ctrl-c) and restart it. A new page appeared, this time with a list of "awesomeThings" retrieved from the database. But that wasn't the most awesome thing: if I edit one of the files, such as app/views/index.html, and save, the browser refreshes. And if I edit server-side javascript and save, the server restarts, and then the browser refreshes. Thank you connect-livereload!

After experimenting with, and learning a little about, angular, express, and bootstrap, I wanted to show what I had to a friend. I decided I'd look for a cheap cloud host. Openshift (PaaS) was the winner (small app for free), closely followed by DigitalOcean (IaaS). (I've also been playing around with Vagrant and noticed that Packer.io supports DigitalOcean nicely.)

MEAN on Openshift

The question was: how to get the Yeoman app onto Openshift. On Openshift, I created an account, and then started a new app with Node 0.10.0 and Mongo 2.2. It came with a default starter app, which I could clone with git. I could also log in using Putty, which is useful for debugging. It's great that something as simple as "git push" can stop, rebuild, and restart the Openshift app. Similar to Jenkins, but still great.

So after cloning the repository to my local pc, I can download all the dependencies and start the system with 
npm install
node server.js
Then I can point my browser to http://localhost:8080 and there's the app (a single page with some helpful links to Openshift/Node info). And for fun, I can look at the other route installed: http://localhost:8080/asciimo.

The Openshift default app is much smaller than the Yeoman angular-fullstack one. It only has a single npm dependency in the node_modules directory (express), compared to the Yeoman's 45 (including a variety of Grunt support, Karma testing support, and mongoose). So perhaps I should simply copy the Yeoman app into the Openshift directory and commit & push. That way I could have the Karma and Grunt benefits locally.

One significant challenge with this approach is that Openshift installs all those dependencies - even the ones which are for development only. It's true that Openshift has Jenkins support, so running tests is quite feasible. But it means that the post-git-push build step takes fifteen minutes (in stark contrast to the 2s Grunt connect-livereload!), and it uses 75MB of storage and more than 12,000 files! Not ideal for a free system, even if there's 1GB quota per gear, with the first 3 gears free. I'd like to be able to tell my Openshift npm to only install the production npms. I decided (at least for this test) not to commit the node_modules directory. The advice seems to be that node_modules should be committed, but only for long term stability.

So I commit and push, and wait the 15 minutes for all the npm activity (which shows up as part of the git push) to finish. It finishes with 
remote: npm info ok
remote: Preparing build for deployment
remote: Deployment id is a6d28b2f
remote: Activating deployment
remote: Starting MongoDB cartridge
remote: Starting NodeJS cartridge
remote: Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://52e60ad95004463c5b000384@test2-yesberg.rhcloud.com/~/git/test2.git/
   809f542..5897269  master -> master  

The words "Status: success" sound good. But when I go to my app page, I get a 503 Service Temporarily Unavailable. To locate the problem, I need to login with Putty. (Or I could use the rhc app: rhc ssh). To see the log files, I need to use the tail_all command. (Note that this only works from the home directory.) Every couple of seconds, it seems there's a new copy of the following error:
Error: listen EACCES
    at errnoException (net.js:884:11)
    at Server._listen2 (net.js:1003:19)
    at listen (net.js:1044:10)
    at Server.listen (net.js:1110:5)
    at Function.app.listen (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/application.js:533:24)
    at Object. (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/server.js:39:5)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
DEBUG: Program node server.js exited with code 8
DEBUG: Starting child process with 'node server.js'
It looks like the listen() call on line 39 of server.js is failing. The default with Yeoman had been 9000 (although the server.js code has 3000 as another possibility), whereas the Openshift port was 8080. And if I type "export" at the Openshift shell, there is an environment variable 

 OPENSHIFT_NODEJS_PORT="8080"

On systems I'm used to, listening on the wrong port wouldn't give an EACCES error. But I notice that there are SELINUX environment variables - perhaps this is all part of multitenant hosting (assuming that's what Red Hat does with Openshift). I decided to try to adjust my local app to use 8080. And then I need to ensure that the livereload facility doesn't cause problems - it uses websockets on a high numbered port to ask the browser to reload.


I don't really understand exactly how it's all working under the hood at this stage. But I notice the following snippet in server.js
// Start server
var port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('Express server listening on port %d in %s mode', port, app.get('env'));
});
It looks like something must be setting the PORT environment variable to 9000 - otherwise we'd be using 3000. So if I grep for 9000, I find the Gruntfile.js includes
    express: {
      options: {
        port: process.env.PORT || 9000
      },
      dev: {
        options: {
          script: 'server.js',
          debug: true
        }
      },
      prod: {
        options: {
          script: 'server.js',
          node_env: 'production'
        }
      }
    },
So I could just change the 9000 to 8080 in the Gruntfile, but then Openshift isn't using Grunt. So I decided to use a console.log just before the listen command (instead of just after it) to display the value of port. And given the length of time it takes to deploy (only 5 minutes this time), I thought I'd add in another "or":
var port = process.env.OPENSHIFT_NODEJS_PORT || process.env.PORT || 3000;
console.log("Attempting to listen on port %d",port);
The push, deploy, and activation succeed again, but I still get a 503 error. The log shows the following every 2-3s.


connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
Attempting to listen on port 8080
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: listen EACCES
    at errnoException (net.js:884:11)
    at Server._listen2 (net.js:1003:19)
    at listen (net.js:1044:10)
    at Server.listen (net.js:1110:5)
    at Function.app.listen (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/application.js:533:24)
    at Object. (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/server.js:40:5)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
DEBUG: Program node server.js exited with code 8
DEBUG: Starting child process with 'node server.js'
So it is now attempting to use 8080, but still throwing the EACCES error. An answer on Stackoverflow suggests that it might be the hostname, rather than the port. I guess I need to add another argument to the listen().
var port = process.env.OPENSHIFT_NODEJS_PORT || process.env.PORT || 3000;
var ip = process.env.OPENSHIFT_NODEJS_IP || "127.0.0.1";
console.log("Attempting to listen on port %d on IP %s", port, ip);
app.listen(port, ip, function () {
  console.log('Express server listening on port %d in %s mode', port, app.get('env'));
});
While I'm there, it seems that I should adjust the database connection details. Rather than hard coding, I can use an environment variable. This is an extract from lib/db/mongo.js
var uristring =
  process.env.OPENSHIFT_MONGODB_DB_URL ||
  process.env.MONGOLAB_URI ||
  process.env.MONGOHQ_URL ||
  'mongodb://localhost/test';
The 503 error has now disappeared, but is replaced by:
Error: Failed to lookup view "index" in views directory "/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/app/views"
    at Function.app.render (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/application.js:493:17)
    at ServerResponse.res.render (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/response.js:798:7)
    at exports.index (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/lib/controllers/index.js:18:7)
    at callbacks (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:164:37)
    at param (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:138:11)
    at pass (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:145:5)
    at Router._dispatch (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:173:5)
    at Object.router (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:33:10)
    at next (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/node_modules/connect/lib/proto.js:193:15)
    at Object.methodOverride [as handle] (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/node_modules/connect/lib/middleware/methodOverride.js:48:5)
GET / 500 46ms - 1.52kB
Using Putty, it seems that there's no "app/views" directory. I am more comfortable with hg than git, but could I have forgotten to commit the views somehow? Then I noticed that the .gitignore file provided by Yeoman included a line "views". I don't understand why that would be appropriate, so I deleted the line, committed and pushed.

This time, when I refresh my Openshift app page, the result is a blank white page. View Source shows that the index.html file is there. The problem is that all the css and javascript files refer to the bower_components folder, which is absent.

When Grunt starts up the system locally, it runs Bower, which downloads the appropriate js & css components according to bower.json. But Grunt isn't running on Openshift. I decided to check to see whether the browser was receiving a 404 error for those files. But it wasn't. In fact, it was receiving a copy of index.html for each one of the files. The server.js file was routing any default GET to the index.index function:

// Angular Routes
app.get('/partials/*', index.partials);
app.get('/*', index.index);

So I need to choose one of the following approaches:
  • run bower on Openshift
  • install all the framework files (.js and .css) into the repository,
  • or point to the CDN instead of bower_components
(To be continued...)

Saturday, June 30, 2012

Unity TDD with MPLAB C18

I'm lucky enough to be involved in a project at work that requires some microcontroller development. Being a convert to Test Driven Development for enterprise-style code, I was keen to take advantage of a testing framework for my embedded code. I bought James Grenning's book Test-Driven Development for Embedded C to get myself started. The Unity framework seems to be just what I need.

I chose the Microchip PIC family of microcontrollers because they appear to be very popular, flexible, and easy to program; there are evaluation boards, and there are so many different variants that it should be possible to find just the right one for any occasion. The MPLAB IDE doesn't come close to the IntelliJ or Eclipse benchmarks, but I'm comfortable doing TDD in a simple text editor (Notepad++), so that's ok.

I downloaded Unity and installed the three source files into my MPLAB project. I wrote a simple test:

    #include "unity.h"

    void setUp(void) { }

    void tearDown(void) { }

    void test_demo()
    {
        TEST_ASSERT_EQUAL_INT(2,3);   
    }

and a simple test runner:

    #include "unity.h"
    void test_demo(void);

    void main(void)
    {
        UnityBegin();
        RUN_TEST(test_demo,1);
        UnityEnd();   
    }

Compiling showed that unity_internals.h was looking for a stdint.h header file, but not finding it. I had to define the constant UNITY_EXCLUDE_STDINT_H in my build. To do that, I used Project > Build Options > Project, and on the MPLAB C18 tab, clicked "Add..." to add it as a preprocessor macro.

The next problem was the lack of a putchar() function. I added this function to my test runner:
int putchar(int c)
{
    /* Unity reports through putchar(); send each character to stdout,
       which MPLAB SIM displays in the UART1 output window. */
    putc((char)c, stdout);
    return c;
}

I selected the MPLAB debugger (Debugger > Select Tool > MPLAB Sim), and enabled the UART output tab (Debugger > Settings > Uart 1 IO > Enable, and choose Window for output). When I clicked Run, my SIM Uart1 window filled up with trash. I decided to check putc:

     void main(void)
    {
         putc('a',stdout);  // try this one
         while(1);
         UnityBegin();
         RUN_TEST(test_demo,1);
         UnityEnd();   
     }

Yes, when that runs, the SIM Uart1 window shows an 'a'. Each time I press reset, I get an extra 'a'. Try puts:

    void main(void)
    {
        putc('a',stdout);
        puts("abcde"); // try this one
        while(1);
        UnityBegin();
        RUN_TEST(test_demo,1);
        UnityEnd();   
    }

Yes, that works too. I looked into UnityBegin() (not much there) and then UnityEnd(). That starts with a UnityPrint. Let's try that one.

    void main(void)
    {
        putc('a',stdout);
        puts("abcde");
        UnityPrint("Does this print?");
        while(1);
        UnityBegin();
        RUN_TEST(test_demo,1);
        UnityEnd();   
    }

Looks like there's a problem with UnityPrint(). After some searching, I discovered (in the C18 C Compiler Getting Started manual) that the C18 compiler puts string constants in the code section, so they are const rom char*, rather than just const char *.

After a lot of experimentation, I decided that I needed to have two versions of the UnityPrint() function: one with a const rom char* parameter, and the original one with const char *. The two versions only differ in the signature and the first line:

    void UnityPrint(const char* string)
    {
        const char* pch = string;

    void UnityPrintRom(const rom char* string)
    {
        const rom char* pch = string;
   
When I changed my code to use UnityPrintRom, it worked. By the way, I don't really understand why it should help to have that apparently unused char c. It seems to be necessary to convince the compiler to use the right addressing.

Now I had to make Unity use UnityPrintRom at the appropriate times. I did this by two global search and replaces in unity.c. The first was to change all occurrences of UnityPrint(" to UnityPrintRom(" (there were 8 of these) and the second was to change UnityPrint(UnityStr to UnityPrintRom(UnityStr (there were 50 of these). The last two to change were UnityPrintRom(file) and UnityPrintRom(Unity.CurrentTestName). And I added a couple of prototypes into unity.h:

    void UnityPrintRom(const rom char* string);
    int putchar(int c);

Now, it works.

    testDemo.c:8:test_demo:FAIL: Expected 2 Was 3
    -----------------------
    1 Tests 1 Failures 0 Ignored
    FAIL

And when I fix up the assertion, I get:

    testDemo.c:1:test_demo:PASS
    -----------------------
    1 Tests 0 Failures 0 Ignored
    OK

The last step was to surround these changes with some #ifdef UNITY_MPLAB directives. The resulting code is now in a fork of the original repository at https://github.com/johnyesberg/Unity.

Sunday, May 13, 2012

Windows Development VMs - a long adventure through Virtualbox, Vagrant, VeeWee, Ruby, RVM, Cygwin

It's a deep rabbit hole, I admit. But the prospects of gold at the end of the mixed-metaphor rainbow are enough to make it worth burrowing all the way. Here's the story so far:

In the last five years or so, I've been enjoying a journey towards continuous delivery, with a number of tour-guides and companions (Erik Dörnenburg, Glenn Moy, Steve Dalton, Craig Aspinall, Rob Nielsen, various Yow! conference attendees and presenters, and many bloggers, especially Uncle Bob and lots of Thoughtworkers). It started with pair programming, and then progressed to test-driven development, and most recently, continuous integration, test, and deployment. At each stage, I found that I had acquired additional armour that provided increasing levels of courage, confidence, freedom, and protection. It's now hard to go back.

My latest adventure is towards DevOps. The gradual accretion of software packages into my development environment can make it unclear what our actual dependencies are. Rather than creating (or building, or deploying) new projects on existing machines, I want to be able to spin up a shiny new vanilla virtual machine, and install a clean set of known software on it. In the Windows world, I don't want to forget to install the right version of the .Net Framework, or the ASP.NET component, or msdeploy, or all the other little bits that seem to be needed.

I've been running a couple of VMs on Virtualbox for some time. Vagrant is a very neat way to start, configure, and stop various VMs - especially when you need multiple VMs to work together. (I also like cumberbatch for testing such configurations.) All I need is a way to create the right base VM. All that interactive clicking to install Windows, SQLServer, SharePoint, etc is ok, but it's hard to document. A script would be better. And VeeWee is the tool for that. It's been working well on linux-related machines for some time, and support has recently been added for Windows. The VeeWee installation guide recommends using RVM (Ruby Version Manager). So to build RVM on Windows, I first had to install (well, add a few packages to) Cygwin.

So how did it all go? Well, there were a few errors. I'm running on a Windows 7 (64 bit) machine, in case that helps anyone.

First, the RVM on Cygwin script did cause a couple of dialog boxes "expr.exe has stopped working" to pop up for each version of Ruby that it installed (1.9.3-p194 and 1.9.2-p320). The log file suggested problems with the compiler, but it seems to have finished...

John@margaux ~/.rvm/log/ruby-1.9.3-p194/yaml
$ cat configure.log
]  ./configure --prefix="/cygdrive/c/Users/John/.rvm/usr"
checking for a BSD-compatible install... config/install-sh -c
checking whether build environment is sane... yes
./configure: line 2562: /cygdrive/c/Program: No such file or directory
configure: WARNING: `missing' script is too old or missing
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in `/c/Users/John/.rvm/src/yaml-0.1.4':
configure: error: C compiler cannot create executables
See `config.log' for more details
c:\Users\John\Downloads\UnxUtils\usr\local\wbin\sed.exe: -e expression #1, char 1: Unknown command: ``C''

John@margaux ~/.rvm/log/ruby-1.9.3-p194/yaml
I configured RVM to use 1.9.2 as the default, which is what the VeeWee installation suggested.
rvm --default use 1.9.2
 The second error was in the VeeWee installation "bundle install" step.

John@margaux ~/repos/veewee
$ bundle install
Fetching gem metadata from http://rubygems.org/......
Using rake (0.9.2.2)
Installing CFPropertyList (2.1.1)
Installing Platform (0.4.0)
Installing ansi (1.3.0)
Installing archive-tar-minitar (0.5.2)
Installing builder (3.0.0)
Using bundler (1.1.3)
Installing ffi (1.0.11) with native extensions
Installing childprocess (0.3.2)
Installing diff-lcs (1.1.3)
Installing json (1.5.4) with native extensions
Installing gherkin (2.10.0) with native extensions
Installing cucumber (1.2.0)
Installing erubis (2.7.0)
Installing excon (0.9.6)
Installing formatador (0.2.1)
Installing mime-types (1.18)
Installing multi_json (1.0.4)
Installing net-ssh (2.2.2)
Installing net-scp (1.0.4)
Installing nokogiri (1.5.2) with native extensions
Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        /cygdrive/c/Users/John/.rvm/rubies/ruby-1.9.2-p320/bin/ruby.exe extconf.rb
checking for libxml/parser.h... no
-----
libxml2 is missing.  please visit http://nokogiri.org/tutorials/installing_nokogiri.html for help with installing dependencies.
-----
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
        --with-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/cygdrive/c/Users/John/.rvm/rubies/ruby-1.9.2-p320/bin/ruby
        --with-zlib-dir
        --without-zlib-dir
        --with-zlib-include
        --without-zlib-include=${zlib-dir}/include
        --with-zlib-lib
        --without-zlib-lib=${zlib-dir}/lib
        --with-iconv-dir
        --without-iconv-dir
        --with-iconv-include
        --without-iconv-include=${iconv-dir}/include
        --with-iconv-lib
        --without-iconv-lib=${iconv-dir}/lib
        --with-xml2-dir
        --without-xml2-dir
        --with-xml2-include
        --without-xml2-include=${xml2-dir}/include
        --with-xml2-lib
        --without-xml2-lib=${xml2-dir}/lib
        --with-xslt-dir
        --without-xslt-dir
        --with-xslt-include
        --without-xslt-include=${xslt-dir}/include
        --with-xslt-lib
        --without-xslt-lib=${xslt-dir}/lib


Gem files will remain installed in /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/nokogiri-1.5.2 for inspection.
Results logged to /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/nokogiri-1.5.2/ext/nokogiri/gem_make.out
An error occured while installing nokogiri (1.5.2), and Bundler cannot continue.
Make sure that `gem install nokogiri -v '1.5.2'` succeeds before bundling.
So I had to install libxml2 to get nokogiri working. I went back to my cygwin setup, added libxml2 and libxml2-devel, and tried again. A similar error, reporting that libxslt is missing. I installed libxslt and libxslt-devel with cygwin setup. Then the nokogiri and the rest of the VeeWee installation went smoothly.

At this point, I was ready to start VeeWee-ing for Vagrant, using these directions. I started by creating a new directory ~/boxes to put my VMs in. But veewee wouldn't work from there. I looked into RVM a little, but didn't find anything obvious, so I decided to leave that as an exercise for later. I could make boxes inside ~/repos/veewee for now.
John@margaux ~/repos/veewee
$ veewee vbox define '2008r2' 'windows-2008R2-serverstandard-amd64'
The basebox '2008r2' has been succesfully created from the template 'windows-2008R2-serverstandard-amd64'
You can now edit the definition files stored in definitions/2008r2 or build the box with:
veewee vbox build '2008r2'
$ veewee vbox build  '2008r2'
Error: We executed a shell command and the exit status was not 0
- Command :VBoxManage -v.
- Exitcode :127.
- Output   :
sh: VBoxManage: command not found

Wrong exit code for command VBoxManage -v

John@margaux ~/repos/veewee
$ VBoxManage
bash: VBoxManage: command not found

John@margaux ~/repos/veewee
$
Looks like my virtualbox directory isn't in the path. Added it to the windows environment variable, and restarted my Cygwin bash. Hmm - not there. C:\Program Files\Oracle\Virtualbox shows up in a Cmd window echo %PATH%, but not in a Cygwin bash echo $PATH. How do I get Cygwin to reload the windows environment? Hooray for Stackoverflow, which already had the answer.
John@margaux ~
$ export PATH="$PATH:$(cygpath -pu "`reg query 'HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' /v PATH| grep PATH | cut -c23-`")"

John@margaux ~
$ VBoxManage -v
4.1.12r77245

John@margaux ~
$
So would my build work now? No.
$ veewee vbox build  '2008r2'
/cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/openssl.rb:1:in `require': no such file to load -- openssl (LoadError)
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/openssl.rb:1:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/buffer.rb:2:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/buffer.rb:2:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/algorithms.rb:1:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/algorithms.rb:1:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/session.rb:7:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/session.rb:7:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh.rb:10:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh.rb:10:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:20:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:20:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:18:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:4:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:3:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:2:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:1:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/box.rb:2:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/box.rb:2:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/virtualbox/box.rb:1:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/virtualbox/box.rb:1:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/provider.rb:34:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/provider.rb:34:in `get_box'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/command/virtualbox.rb:17:in `build'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/task.rb:22:in `run'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/invocation.rb:118:in `invoke_task'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor.rb:263:in `dispatch'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/invocation.rb:109:in `invoke'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor.rb:205:in `block in subcommand'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/task.rb:22:in `run'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/invocation.rb:118:in `invoke_task'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor.rb:263:in `dispatch'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/base.rb:389:in `start'
        from /cygdrive/c/Users/John/repos/veewee/bin/veewee:18:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/bin/veewee:23:in `load'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/bin/veewee:23:in `
'
How to install openssl for ruby? I tried a few things, but I think what I needed to do was install openssl-devel (as well as the openssl I already had) using Cygwin setup. Then I had to
rvm remove 1.9.2
rvm install 1.9.2
rvm --default use 1.9.2
gem install bundler
bundle install
Finally, I'm at the point where running veewee vbox build '2008r2' will actually download the iso from the Internet!

John@margaux ~/repos/veewee
$ veewee vbox build  '2008r2'
Downloading vbox guest additions iso v 4.1.12 - http://download.virtualbox.org/virtualbox/4.1.12/VBoxGuestAdditions_4.1.12.iso
Creating an iso directory
Checking if isofile VBoxGuestAdditions_4.1.12.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso
Moving /tmp/open-uri20120513-4468-1cp4y7g to /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso  |  48.4MB 357.7KB/s ETA:   0:00:00
Building Box 2008r2 with Definition 2008r2:
- postinstall_include : []
- postinstall_exclude : []

We did not find an isofile in /iso.

The definition provided the following download information:
- Download url: http://care.dlservice.microsoft.com//dl/download/7/5/E/75EC4E54-5B02-42D6-8879-D8D3A25FBEF7/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
- Md5 Checksum: 4263be2cf3c59177c45085c0a7bc6ca5


Download? (Yes/No) Yes
Checking if isofile 7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
|  34.9MB 306.6KB/s ETA:   2:46:10

I decided to interrupt this download and to modify my definitions/2008r2/definition.rb by commenting out the iso filename line and adding one I had prepared earlier:
    #:iso_file => "7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso",
    :iso_file => "en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso",
The next problem seems to be that when running VBoxManage, veewee is passing a Cygwin path, rather than a Windows path.
John@margaux ~/repos/veewee
$ veewee vbox build  '2008r2'
Downloading vbox guest additions iso v 4.1.12 - http://download.virtualbox.org/virtualbox/4.1.12/VBoxGuestAdditions_4.1.12.iso
Checking if isofile VBoxGuestAdditions_4.1.12.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso

The isofile VBoxGuestAdditions_4.1.12.iso already exists.
Building Box 2008r2 with Definition 2008r2:
- postinstall_include : []
- postinstall_exclude : []

The isofile en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso already exists.
Received port hint - 59856
Found port 59856 available
Creating vm 2008r2 : 384M - 1 CPU - Windows2008_64
Creating new harddrive of size 10140
Mounting cdrom: /cygdrive/c/Users/John/repos/veewee/iso/en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso
Error: We executed a shell command and the exit status was not 0
- Command :VBoxManage storageattach "2008r2" --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium "/cygdrive/c/Users/John/repos/veewee/iso/en_windows_server_2008_r2_standard
_enterprise_datacenter_web_x64_dvd_x15-50365.iso".
- Exitcode :1.
- Output   :
VBoxManage.exe: error: Could not find file for the medium 'C:\cygdrive\c\Users\John\repos\veewee\iso\en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso' (VERR_PATH
_NOT_FOUND)
VBoxManage.exe: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Medium, interface IMedium, callee IUnknown
Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, AccessMode_ReadWrite, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 210 of file VBoxManageDisk.cpp
VBoxManage.exe: error: Invalid UUID or filename "/cygdrive/c/Users/John/repos/veewee/iso/en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso"

Wrong exit code for command VBoxManage storageattach "2008r2" --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium "/cygdrive/c/Users/John/repos/veewee/iso/en_windows_server
_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso"

John@margaux ~/repos/veewee
$
At this point, I googled to see what could be done, but found little in the way of advice. (Maybe next time, this blog entry will come up.) I didn't see anything that talked about VeeWee converting from the Cygwin path to a Windows path, so I decided to take matters into my own hands and modify the VeeWee code. I've read a little about Ruby, but I'm not an experienced Ruby programmer. I wasn't sure how to write code to detect whether it was running under Cygwin, so someone else may have to help with that soon, and perhaps contribute the results back to the project (a rough sketch of what I had in mind is below). But my changes were limited to two files.
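Here is that sketch: nothing more than a guess at a reasonable check, not something I've wired into veewee. It relies on RbConfig reporting the host OS that a Cygwin-built Ruby identifies itself with.
    require 'rbconfig'

    # True when this Ruby interpreter was built for Cygwin
    def running_under_cygwin?
      !!(RbConfig::CONFIG['host_os'] =~ /cygwin/i)
    end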

In the file ~/repos/veewee/lib/veewee/provider/virtualbox/box/helper/create.rb I added a new method at the beginning of the BoxCommand module (at line 6):
        # Convert a Cygwin-style path into a Windows path via the cygpath utility
        def cygpath(s)
          `/bin/cygpath -w "#{s}"`.chomp
        end
This was inspired by Kevin Kleinfelter's code in this thread, although I had to add the -w flag to make sure the result was a Windows path rather than a Cygwin path, and add quotes so that it would still work when there were spaces in the pathname (VirtualBox puts a space in its "VirtualBox VMs" folder name). I then had to call this method four times: in the attach_disk, attach_isofile, attach_guest_additions, and attach_floppy methods, so that the resulting construction of the command variable looked like this:
command ="#{@vboxcmd} storageattach \"#{name}\" --storagectl \"SATA Controller\" --port 0 --device 0 --type hdd --medium \"#{cygpath(location)}\""
I modified some of the ui.info lines (including moving them below the command assignment) to print the whole command, rather than just the filename.
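As a quick sanity check on the helper itself (the exact output depends on your Cygwin mount table; this is what a default install gives for one of my paths):
    puts cygpath("/cygdrive/c/Users/John/repos/veewee")   # => C:\Users\John\repos\veewee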

The second file that needed changing was ~/repos/veewee/lib/veewee/provider/core/box/floppy.rb. It seemed that the construction of the floppy image was not working properly, for the same reason as above. So I added the same cygpath(s) method. But that wasn't enough either. I saw errors like this:
Unable to access jarfile C:UsersJohnreposveeweelibjavadir2floppy.jar
It seemed that Java was devouring an extra backslash, so I added another method, escape, to floppy.rb.
       def escape(s)
         s.gsub('\\','\\\\\\\\')
       end
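To be concrete about what that quadruple-escaped gsub does (a Ruby quirk: single-quoted strings and gsub replacement strings both interpret backslashes, so the eight backslashes in the source become two in the output), here is roughly how it behaves on a made-up path:
       puts escape('C:\Users\John\repos\veewee')
       # prints C:\\Users\\John\\repos\\veewee  (every backslash doubled)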
I also added a new line assigning jar_file, updated the assignment to command, and added a log message:
           jar_file=File.join(javacode_dir,"dir2floppy.jar")
           command="java -jar #{escape(cygpath(jar_file))} \"#{escape(cygpath(temp_dir))}\" \"#{escape(cygpath(floppy_file))}\""
           ui.info("Creating floppy: #{command}")
           shell_exec("#{command}")
Getting all this right took several iterations of running
$ veewee vbox build  '2008r2' --force
I had to add the --force to make veewee really construct the box again. I also had to manually delete the folder C:\Users\John\VirtualBox VMs\2008r2 each time before running that command. But after all these modifications, I eventually saw a VirtualBox window come up and install Windows (with no user intervention), rebooting a few times along the way. VeeWee then seemed to be waiting for sshd to respond, but nothing appeared to be happening on the VM:
$ veewee vbox build  '2008r2' --force
Downloading vbox guest additions iso v 4.1.12 - http://download.virtualbox.org/virtualbox/4.1.12/VBoxGuestAdditions_4.1.12.iso
Checking if isofile VBoxGuestAdditions_4.1.12.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso

The isofile VBoxGuestAdditions_4.1.12.iso already exists.
Building Box 2008r2 with Definition 2008r2:
- postinstall_include : []
- postinstall_exclude : []
- force : true

The isofile en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso already exists.
Received port hint - 59856
Found port 59856 available
Creating vm 2008r2 : 384M - 1 CPU - Windows2008_64
Creating new harddrive of size 10140
Mounting cdrom: C:\Users\John\repos\veewee\iso\en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso
Mounting guest additions: C:\Users\John\repos\veewee\iso\VBoxGuestAdditions_4.1.12.iso
Attaching disk: C:\Users\John\VirtualBox VMs/2008r2/2008r2.vdi
Creating floppy: java -jar C:\\Users\\John\\repos\\veewee\\lib\\java\\dir2floppy.jar "C:\\cygwin\\tmp\\d20120513-6476-np2ws7" "C:\\Users\\John\\repos\\veewee\\definitions\\2008r2\\virtualfloppy.v
fd"
Received port hint - 59856
Found port 59856 available
Changing ssh port from 5985 to 59856
Waiting 1 seconds for the machine to boot

Typing:[1]:
Done typing.

Skipping webserver as no kickstartfile was specified
Starting a webserver :
Waiting for ssh login on 127.0.0.1 with user vagrant to sshd on port => 59856 to work, timeout=10000 sec
........................
I noticed that there was a C:\cygwin folder on my new VM, but it was empty. No wonder sshd wasn't connecting. The floppy (A:) contained install-cygwin-sshd.bat, which was supposed to install Cygwin, but it didn't contain cygwin-setup.exe as it should. (I ran the bat file inside a command prompt so that I could see where the first error occurred.) So I copied that file to C:\Users\John\repos\veewee\definitions\2008r2 and added a reference to it in ~/repos/veewee/definitions/2008r2/definition.rb, as below.
    :floppy_files => [
      "Autounattend.xml", # automate install and setup winrm
      "install-cygwin-sshd.bat",
      "install-winrm.bat",
      "cygwin-setup.exe",
      "oracle-cert.cer"],
Then I shut down and deleted my VM, and re-ran the veewee build. Now I can see Cygwin downloading various packages. It will be nice when it can use the package repository already on my host machine, rather than downloading from the Internet again; I know it should only have to do this once, but that adds up while you're developing...

So now I have a server sitting there with Cygwin installed, but it's not answering the ssh login that my host is trying to make. It seems install-cygwin-sshd.bat may still not be completing. I didn't want to run the whole thing again, since that would reinstall Cygwin, so I checked the parts individually.
  • The cygrunsrv command worked. 
  • The /etc/group and /etc/passwd files are there.
  • The ssh-host-config command seems to run ok.
  • Although the ssh rule is in the firewall configuration, the SSHD one isn't. When I try to run that command manually, it complains. It seems the word "SSHD" near the end might be causing the problem, since the command works if I leave that bit out. I must look into that a bit more.
  • Running "net start sshd" returns " "System error 1069 has occurred. The service did not start due to a logon failure". Checked in the services control panel, and saw that there was an entry for Cygwin sshd, that indeed was not running. Trying to start it caused the same error.
And I'm out of time for today. I hope I'll be able to finish this journey soon and describe the rest of the experience. I'd be pleased to hear from anyone who can suggest better ways to do all this.