Saturday, April 19, 2014

Developing OpenShift Node apps with local Grunt

The last episode didn't really have an ending. I decided that perhaps it wasn't the right approach. While I enjoyed what Yeoman angular-fullstack offered, it may be a bit too opinionated/confined. I still think that OpenShift is a platform worth playing with, and my new aim is to find a way to make local development (perhaps with livereload) play nicely with OpenShift. So my new approach is to start with OpenShift, and gradually load other parts in. Ideally, I'd like this to work from my Windows machine without having to run a local OpenShift Origin.

If you want to follow along, you'll need to create an OpenShift account.

I started by creating a new node.js app on OpenShift Online, which I called nodule. OpenShift gives me the command line to clone the app locally:
> git clone ssh://5350ba6be0b8cd0c52000024@nodule-yesberg.rhcloud.com/~/git/nodule.git/
> cd nodule/
The app consists of 5 files and an empty directory.
nodule> ls -l
total 47
-rw-rw-rw-   1 user     group         178 Apr 18 15:51 README.md
-rw-rw-rw-   1 user     group         457 Apr 18 15:51 deplist.txt
-rw-rw-rw-   1 user     group       39855 Apr 18 15:51 index.html
drwxrwxrwx   1 user     group           0 Apr 18 15:51 node_modules
-rw-rw-rw-   1 user     group         701 Apr 18 15:51 package.json
-rw-rw-rw-   1 user     group        4790 Apr 18 15:51 server.js
I want to be able to develop this code locally, so I need to be able to run it. I tried node server.js, but it gave an error at (ironically) "throw err;". Time to delve a little more.

The README.md file points to the OpenShift documentation for the nodejs cartridge. The deplist.txt file contains a message noting that it's deprecated and that dependencies should be described in package.json. The package.json file shows that the app has a single dependency, express.js 3.4.4. To install that, I used npm.
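For reference, that dependency declaration sits in a package.json block like the following. Apart from the express version the post mentions, the field values here are illustrative rather than copied from the generated file:

```json
{
  "name": "nodule",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "express": "~3.4.4"
  }
}
```

A `~3.4.4` range (rather than an exact pin) would also explain why the server-side build later fetches 3.4.8.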

nodule> npm install
A screen full of npm http GET commands rolled past, and there is now an express directory inside node_modules, and another dozen inside express\node_modules. Now I can successfully run the app:

nodule> node server.js
No OPENSHIFT_NODEJS_IP var, using 127.0.0.1
Warning: express.createServer() is deprecated, express applications no longer inherit from http.Server, please use:
  var express = require("express");
  var app = express();
Fri Apr 18 2014 16:01:40 GMT+1000 (E. Australia Standard Time): Node server started on 127.0.0.1:8080 ...
I pointed Chrome to http://127.0.0.1:8080 and saw the familiar OpenShift app boilerplate index.html. That's a good start.
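That "No OPENSHIFT_NODEJS_IP var" message comes from the sample's environment-variable fallback: prefer the injected OPENSHIFT_* values, fall back to localhost for local runs. The pattern can be exercised on its own with plain Node (resolveBinding is a hypothetical helper name, not part of the sample app):

```javascript
// Resolve ip/port the way the OpenShift sample server.js does:
// the injected OPENSHIFT_* variables win; otherwise use local defaults.
// resolveBinding is a hypothetical helper for illustration.
function resolveBinding(env) {
  return {
    ip: env.OPENSHIFT_NODEJS_IP || '127.0.0.1',
    port: env.OPENSHIFT_NODEJS_PORT || 8080
  };
}

// Locally there are no OPENSHIFT_* variables, so we get the defaults.
console.log(resolveBinding({}));
// On the gear, the injected values win.
console.log(resolveBinding({ OPENSHIFT_NODEJS_IP: '10.0.0.1', OPENSHIFT_NODEJS_PORT: '9000' }));
```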

Before I make a change to index.html and commit and push, I want to check out what the file system is like on the server. I would like to avoid committing node_modules, so I want to understand how that works. After connecting with ssh and running tree to see the file system hierarchy, it seems that the server has a bunch of node_modules available (async, connect, express, formidable, generic_pool, mime, mkdirp, mongodb, mysql, node-static, pg, and qs) at /dependencies/nodejs/node_modules.

So the next step is to make a small change to index.html, and to see if I can see that in the browser. Refresh. Refresh. Change isn't appearing. Stop the node server and restart - works. Well it's good to see, but it's not satisfactory for a development environment. Can't wait to get the livereload going! But ideally it will be part of the development environment only, and not the OpenShift one. It would be nice to have a staging/testing server in the cloud, and it might not matter if such a server had dev-dependencies loaded. But I want to make sure that I can configure a production-shaped system there.

Well perhaps it's best to exercise the commit/push process once before playing with connect-reload. I'm pretty new at git...

nodule>git status
# On branch master
# Changed but not updated:
#   (use "git add ..." to update what will be committed)
#   (use "git checkout -- ..." to discard changes in working directory)
#
#       modified:   index.html
#
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#       node_modules/.bin/
#       node_modules/express/
no changes added to commit (use "git add" and/or "git commit -a")
nodule>git add index.html
nodule>git commit -m "Modify index"
[master c25d1a4] Modify index
 1 files changed, 263 insertions(+), 270 deletions(-)
 rewrite index.html (83%)
nodule>git push
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 313 bytes, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Stopping NodeJS cartridge
remote: Fri Apr 18 2014 03:55:55 GMT-0400 (EDT): Stopping application 'nodule' ...
remote: Fri Apr 18 2014 03:55:55 GMT-0400 (EDT): Stopped Node application 'nodule'
remote: Saving away previously installed Node modules
remote: Building git ref 'master', commit c25d1a4
remote: Building NodeJS cartridge
remote: npm info it worked if it ends with ok
remote: npm info using npm@1.2.17
remote: npm info using node@v0.10.5
remote: npm info preinstall OpenShift-Sample-App@1.0.0
remote: npm info trying registry request attempt 1 at 03:56:01
remote: npm http GET https://registry.npmjs.org/express
remote: npm http 200 https://registry.npmjs.org/express
remote: npm info retry fetch attempt 1 at 03:56:01
remote: npm http GET https://registry.npmjs.org/express/-/express-3.4.8.tgz
remote: npm http 200 https://registry.npmjs.org/express/-/express-3.4.8.tgz
remote: npm info shasum aa7a8986de07053337f4bc5ed9a6453d9cc8e2e1
remote: npm info shasum /tmp/npm-452368-ggNewZcR/1397807761356-0.1033226354047656/tmp.tgz
remote: npm info shasum b9556fdb117f47bb5a97bc61ab5af7fc2dad8928
remote: npm info shasum /var/lib/openshift/5350ba6be0b8cd0c52000024/.npm/express/3.4.8/package.tgz
remote: npm info install express@3.4.8 into /var/lib/openshift/5350ba6be0b8cd0c52000024/app-root/runtime/repo
remote: npm info installOne express@3.4.8
remote: npm info /var/lib/openshift/5350ba6be0b8cd0c52000024/app-root/runtime/repo/node_modules/express unbuild
remote: npm info preinstall express@3.4.8
remote: npm info trying registry request attempt 1 at 03:56:02
remote: npm http GET https://registry.npmjs.org/connect/2.12.0
And about 10 screenfuls later,

remote: npm info ok
remote: Preparing build for deployment
remote: Deployment id is d0a8dd36
remote: Activating deployment
remote: Starting NodeJS cartridge
remote: Fri Apr 18 2014 03:56:18 GMT-0400 (EDT): Starting application 'nodule' ...
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://5350ba6be0b8cd0c52000024@nodule-yesberg.rhcloud.com/~/git/nodule.git/   
105dafe..c25d1a4  master -> master
Yes, it seems that the OpenShift Online page now shows my update --- good. When I log in via ssh, it seems that the /app-root/runtime/repo/node_modules directory now has express and, below that, all its dependencies. I don't understand. Why did it work before with express elsewhere? Was it something I did that made node want to install express in the application itself? That's a mystery for another time, I suppose.

Now to start on the development environment. For now the essential tool is Grunt: it will run the karma tests, jshint, and the live reloading that I find so neat. It took me quite a while to get live reloading working. There are so many plugins that all seem to do similar things, and I found it hard to understand how they should work together - each one seems to publish only a fraction of a gruntfile in its own README.

I found that Romaric Pascal's tutorial at Rhumaric was the best way to get started with live reloading.

To start with, I need to install Grunt and some plugins.

> npm install grunt grunt-contrib-watch grunt-express grunt-open load-grunt-tasks --save-dev
Then I created a Gruntfile.js to get things started:

'use strict';

var path = require('path');

module.exports = function (grunt) {

  // Load grunt tasks automatically
  require('load-grunt-tasks')(grunt);

  // Define the configuration for all the tasks
  grunt.initConfig({

    express: {
      options: {
        port: 8080
      },
      devServer: {
        options: {
          bases: path.resolve('.'),
          livereload: true
        }
      },
    },
    open: {
      server: {
        url: 'http://localhost:<%= express.options.port %>'
      }
    },
    watch: {
      all: {
        files: 'index.html',
        options: {
          livereload: true
        }
      }
    },

  });

  grunt.registerTask('default', [
      'express:devServer', 'open', 'watch'
  ]);
};

This file sets up three tasks (express, open, and watch), and then runs them all as the default target. It's very basic for the moment, with no karma or jshint, and only serving the index.html as a static file (rather than through the express app). I saved that file at the top level of my project and started it all up from the command line.

nodule>grunt
Running "express:devServer" (express) task

Running "express-server:devServer" (express-server) task
Web server started on port:8080, no hostname specified [pid: 10184]

Running "open:server" (open) task

Running "watch" task
Waiting...

It was nice to see that a new tab opened on my browser (Chrome) and showed the index.html page. I edited and saved the file, and magically the page in my browser updated! The console showed

>> File "index.html" changed.
Completed in 0.001s at Sat Apr 19 2014 15:25:25 GMT+1000 (E. Australia Standard Time) - Waiting...
Now, I need to git add the package.json and Gruntfile.js, git commit, and git push, and see what OpenShift makes of it all. There is heaps of line noise as OpenShift tries to install all the devDependencies, but it finally all breaks because grunt-cli isn't installed. There are other 3rd-party cartridges you could use if you wanted grunt on OpenShift, but I don't. What I want is to make OpenShift only install the production dependencies. And it seems this is now possible.

nodule>rhc env set NODE_ENV=production -a nodule
Password: *********
Setting environment variable(s) ... done
I touched the package file and added, committed, and pushed, and OpenShift deployed the system successfully.
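With NODE_ENV=production set, npm skips the devDependencies at install time. The same variable can also guard dev-only wiring inside the app, so one entry point behaves correctly both locally and on the gear. A minimal sketch (shouldLoadDevTools is a hypothetical helper, not something from the OpenShift sample):

```javascript
// Gate dev-only tooling (livereload, verbose logging, ...) on NODE_ENV.
// npm treats an unset NODE_ENV as development, so mirror that default here.
// shouldLoadDevTools is a hypothetical helper for illustration.
function shouldLoadDevTools(env) {
  return (env.NODE_ENV || 'development') !== 'production';
}

console.log(shouldLoadDevTools({ NODE_ENV: 'production' })); // false
console.log(shouldLoadDevTools({}));                         // true
```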

That's all I have time for at the moment. Next steps will be to improve the Gruntfile to add some karma and jshint, and to make sure that it's running the express app, rather than just serving static html.

Monday, January 27, 2014

Adventures with the MEAN Stack and Openshift

I'm rewriting an application. The existing user experience isn't too bad, but it could be better. And the code... But I'm also trying to stay up-to-date, or at least not-too-far-behind. So my aim is to use some modern technologies to make a Good single page application. Technologies of interest include:
  • Node. I'm learning more and more about just how great Javascript can be. 
  • Angular. Before choosing angular, I read a few comparisons with Ember, Knockout, and others. I think Angular was a fairly easy choice, but I wanted to be sure that I would have company.
  • Mongodb. I haven't used a NoSQL for anything more than a tutorial or two. This one seems to be pretty popular. My application will certainly not qualify as Big Data, so I'm sure that just about any SQL/NoSQL system would be fine. But at least I'll learn. Mongoose seems to be the obvious object-database bridge.
  • Bootstrap. I've played with YUI and Google Web Toolkit, but never used Less or any serious CSS framework.
  • Karma and Grunt for rapid testing and development.
  • Openshift: Red Hat's PaaS offering.

MEAN stack running Locally

I decided that the Yeoman angular-fullstack generator would be a good ramp to get me going. There are a few pre-requisites (Node, Yeoman, Bower) which I had already installed before documenting this. If I get time, I'll try on a fresh VM to clarify these. But to start my application, I made myself a nice fresh directory and typed
yo angular-fullstack
I declined the offer to use Sass and Compass for now, but did choose Twitter Bootstrap, all the Angular components (resource, cookies, sanitize, and route), and Mongo & Mongoose. The screen filled with all sorts of downloads. I'm sure I won't even see most of these, let alone learn what they actually do or how to use them in anger. But that's layering (complexity management) for you. After a few minutes, my scaffolded app was ready, and all I had to do was type
grunt serve
and the Node server started, warned me that it couldn't connect to a Mongo on localhost, and then my app page appeared in my browser. It was a little short on detail, but I didn't know that at the time.

I installed mongo in c:\mongodb, and created a config file which simply included a line "dbpath = c:/mongodb/db". I started mongo daemon with the command
bin\mongod --config conf\mongodb.conf
Now I can terminate grunt (ctrl-c) and restart it. A new page appeared, this time with a list of "awesomeThings" retrieved from the database. But that wasn't the most awesome thing: if I edit one of the files, such as app/views/index.html, and save, the browser refreshes. And if I edit server-side javascript and save, the server restarts, and then the browser refreshes. Thank you connect-livereload!

After experimenting with, and learning a little about, angular, express, and bootstrap, I wanted to show what I had to a friend. I decided I'd look for a cheap cloud host. Openshift (PaaS) was the winner (small app for free), closely followed by DigitalOcean (IaaS). (I've also been playing around with Vagrant and noticed that Packer.io supports DigitalOcean nicely.)

MEAN on Openshift

The question was: how to get the Yeoman app onto Openshift. On Openshift, I created an account, and then started a new app with Node 0.10.0 and Mongo 2.2. It came with a default starter app, which I could clone with git. I could also log in using Putty, which is useful for debugging. It's great that something as simple as "git push" can stop, rebuild, and restart the Openshift app. Similar to Jenkins, but still great.

So after cloning the repository to my local pc, I can download all the dependencies and start the system with 
npm install
node server.js
Then I can point my browser to http://localhost:8080 and there's the app (a single page with some helpful links to Openshift/Node info). And for fun, I can look at the other route installed: http://localhost:8080/asciimo.

The Openshift default app is much smaller than the Yeoman angular-fullstack one. It only has a single npm dependency in the node_modules directory (express), compared to the Yeoman's 45 (including a variety of Grunt support, Karma testing support, and mongoose). So perhaps I should simply copy the Yeoman app into the Openshift directory and commit & push. That way I could have the Karma and Grunt benefits locally.

One significant challenge with this approach is that Openshift installs all those dependencies - even the ones which are for development only. It's true that Openshift has Jenkins support, so running tests is quite feasible. But it means that the post-git-push build step takes fifteen minutes (in stark contrast to the 2s Grunt connect-livereload!), and it uses 75MB of storage and more than 12,000 files! Not ideal for a free system, even if there's 1GB quota per gear, with the first 3 gears free. I'd like to be able to tell my Openshift npm to only install the production npms. I decided (at least for this test) not to commit the node_modules directory. The advice seems to be that node_modules should be committed, but only for long-term stability.

So I commit and push, and wait the 15 minutes for all the npm activity (which shows up as part of the git push) to finish. It finishes with 
remote: npm info ok
remote: Preparing build for deployment
remote: Deployment id is a6d28b2f
remote: Activating deployment
remote: Starting MongoDB cartridge
remote: Starting NodeJS cartridge
remote: Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://52e60ad95004463c5b000384@test2-yesberg.rhcloud.com/~/git/test2.git/
   809f542..5897269  master -> master  

The words "Status: success" sound good. But when I go to my app page, I get a 503 Service Temporarily Unavailable. To locate the problem, I need to login with Putty. (Or I could use the rhc app: rhc ssh). To see the log files, I need to use the tail_all command. (Note that this only works from the home directory.) Every couple of seconds, it seems there's a new copy of the following error:
Error: listen EACCES
    at errnoException (net.js:884:11)
    at Server._listen2 (net.js:1003:19)
    at listen (net.js:1044:10)
    at Server.listen (net.js:1110:5)
    at Function.app.listen (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/application.js:533:24)
    at Object. (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/server.js:39:5)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
DEBUG: Program node server.js exited with code 8
DEBUG: Starting child process with 'node server.js'
It looks like the listen() call on line 39 of server.js is failing. The default with Yeoman had been 9000 (although the server.js code has 3000 as another possibility), whereas the Openshift port was 8080. And if I type "export" at the Openshift shell, there is an environment variable 

 OPENSHIFT_NODEJS_PORT="8080"

On systems I'm used to, listening on the wrong port wouldn't give an EACCES error. But I notice that there are SELINUX environment variables - perhaps this is all part of multitenant hosting (assuming that's what Red Hat does with Openshift). I decided to try to adjust my local app to use 8080. And then I need to ensure that the livereload facility doesn't cause problems - it uses websockets on a high numbered port to ask the browser to reload.


I don't really understand exactly how it's all working under the hood at this stage. But I notice the following snippet in server.js
// Start server
var port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('Express server listening on port %d in %s mode', port, app.get('env'));
});
It looks like something must be setting the PORT environment variable to 9000 - otherwise we'd be using 3000. So if I grep for 9000, I find the Gruntfile.js includes
    express: {
      options: {
        port: process.env.PORT || 9000
      },
      dev: {
        options: {
          script: 'server.js',
          debug: true
        }
      },
      prod: {
        options: {
          script: 'server.js',
          node_env: 'production'
        }
      }
    },
So I could just change the 9000 to 8080 in the Gruntfile, but then Openshift isn't using Grunt. So I decided to use a console.log just before the listen command (instead of just after it) to display the value of port. And given the length of time it takes to deploy (only 5 minutes this time), I thought I'd add in another "or":
var port = process.env.OPENSHIFT_NODEJS_PORT || process.env.PORT || 3000;
console.log("Attempting to listen on port %d",port);
The push, deploy and activation succeeds again, but still gives a 503 error. The log shows the following every 2-3 seconds:


connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
Attempting to listen on port 8080
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: listen EACCES
    at errnoException (net.js:884:11)
    at Server._listen2 (net.js:1003:19)
    at listen (net.js:1044:10)
    at Server.listen (net.js:1110:5)
    at Function.app.listen (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/application.js:533:24)
    at Object. (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/server.js:40:5)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
DEBUG: Program node server.js exited with code 8
DEBUG: Starting child process with 'node server.js'
So it is now attempting to use 8080, but still throwing the EACCES error. An answer on Stackoverflow suggests that it might be the hostname, rather than the port. I guess I need to add another argument to the listen().
var port = process.env.OPENSHIFT_NODEJS_PORT || process.env.PORT || 3000;
var ip = process.env.OPENSHIFT_NODEJS_IP || "127.0.0.1";
console.log("Attempting to listen on port %d on IP %s", port, ip);
app.listen(port, ip, function () {
  console.log('Express server listening on port %d in %s mode', port, app.get('env'));
});
While I'm there, it seems that I should adjust the database connection details. Rather than hard coding, I can use an environment variable. This is an extract from lib/db/mongo.js
var uristring =
  process.env.OPENSHIFT_MONGODB_DB_URL ||
  process.env.MONGOLAB_URI ||
  process.env.MONGOHQ_URL ||
  'mongodb://localhost/test';
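That precedence chain can be sanity-checked with plain Node to confirm which URI wins in each environment (pickMongoUri is a hypothetical name wrapping the same expression):

```javascript
// Same precedence as the uristring expression in lib/db/mongo.js:
// the first defined environment variable wins, with a localhost fallback.
// pickMongoUri is a hypothetical helper for illustration.
function pickMongoUri(env) {
  return env.OPENSHIFT_MONGODB_DB_URL ||
         env.MONGOLAB_URI ||
         env.MONGOHQ_URL ||
         'mongodb://localhost/test';
}

console.log(pickMongoUri({})); // mongodb://localhost/test
console.log(pickMongoUri({ MONGOHQ_URL: 'mongodb://example.com/db' }));
```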
The 503 error has now disappeared, but is replaced by:
Error: Failed to lookup view "index" in views directory "/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/app/views"
    at Function.app.render (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/application.js:493:17)
    at ServerResponse.res.render (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/response.js:798:7)
    at exports.index (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/lib/controllers/index.js:18:7)
    at callbacks (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:164:37)
    at param (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:138:11)
    at pass (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:145:5)
    at Router._dispatch (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:173:5)
    at Object.router (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/lib/router/index.js:33:10)
    at next (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/node_modules/connect/lib/proto.js:193:15)
    at Object.methodOverride [as handle] (/var/lib/openshift/52e60ad95004463c5b000384/app-root/runtime/repo/node_modules/express/node_modules/connect/lib/middleware/methodOverride.js:48:5)
GET / 500 46ms - 1.52kB
Using Putty, it seems that there's no "app/views" directory. I am more comfortable with hg than git, but could I have forgotten to commit the views somehow? Then I noticed that the .gitignore file provided by Yeoman included a line "views". I don't understand why that would be appropriate, so I deleted the line, committed and pushed.

This time, when I refresh my Openshift app page, the result is a blank white page. View Source shows that the index.html file is there. The problem is that all the css and javascript files refer to the bower_components folder, which is absent.

When Grunt starts up the system locally, it runs Bower, which downloads the appropriate js & css components according to bower.json. But Grunt isn't running on Openshift. I checked whether the browser was receiving a 404 error for those files. But it wasn't. In fact, it was receiving a copy of index.html for each one of the files. The server.js file was routing any default GET to the index.index function:

// Angular Routes
app.get('/partials/*', index.partials);
app.get('/*', index.index);

So I need to choose one of the following approaches:
  • run bower on Openshift
  • install all the framework files (.js and .css) into the repository,
  • or point to the CDN instead of bower_components
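To make the symptom concrete, here is a sketch of the dispatch order that produces it: express tries routes in declaration order, so with no static handler in front of it, the '/*' catch-all swallows the asset URLs and answers with index.html. routeFor is a hypothetical stand-in for the dispatcher, not express's actual API:

```javascript
// Why missing bower_components assets come back as index.html:
// routes are tried in order, and '/*' matches anything that an
// earlier route (or static middleware) didn't already handle.
// routeFor is a hypothetical stand-in for illustration.
function routeFor(urlPath, hasStaticHandler) {
  if (urlPath.startsWith('/partials/')) return 'index.partials';
  if (hasStaticHandler && urlPath.startsWith('/bower_components/')) return 'static file';
  return 'index.index'; // the '/*' catch-all serves index.html
}

console.log(routeFor('/bower_components/angular/angular.js', false)); // index.index
console.log(routeFor('/bower_components/angular/angular.js', true));  // static file
```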
(To be continued...)

Saturday, June 30, 2012

Unity TDD with MPLAB C18

I'm lucky enough to be involved in a project at work that requires some microcontroller development. Being a convert to Test Driven Development for enterprise-style code, I was keen to take advantage of a testing framework for my embedded code. I bought James Grenning's book Test-Driven Development for Embedded C to get myself started. The Unity framework seems to be just what I need.

I chose the Microchip PIC family of microcontrollers because they appear to be popular, flexible, and easy to program; there are evaluation boards; and there are so many different variants that it should be possible to find just the right one for any occasion. The MPLAB IDE doesn't come close to the IntelliJ or Eclipse benchmarks, but I'm comfortable doing TDD in a simple text editor (Notepad++), so that's ok.

I downloaded Unity and installed the three source files into my MPLAB project. I wrote a simple test:

    #include "unity.h"

    void setUp(void) { }

    void tearDown(void) { }

    void test_demo()
    {
        TEST_ASSERT_EQUAL_INT(2,3);   
    }

and a simple test runner:

    #include "unity.h"
    void test_demo(void);

    void main(void)
    {
        UnityBegin();
        RUN_TEST(test_demo,1);
        UnityEnd();   
    }

Compiling showed that unity_internals.h was looking for a stdint.h header file, but not finding it. I had to define the constant UNITY_EXCLUDE_STDINT_H in my build. To do that, I used Project > Build Options > Project, and on the MPLAB C18 tab, clicked "Add..." to add it as a preprocessor macro.

The next problem was the lack of a putchar() function. I added this function to my test runner:
int putchar(int c)
{
    return putc((char)c, stdout);
}

I selected the MPLAB debugger (Debugger > Select Tool > MPLAB Sim), and enabled the UART output tab (Debugger > Settings > Uart 1 IO > Enable, and choose Window for output). When I clicked Run, my SIM Uart1 window filled up with trash. I decided to check putc:

     void main(void)
    {
         putc('a',stdout);  // try this one
         while(1);
         UnityBegin();
         RUN_TEST(test_demo,1);
         UnityEnd();   
     }

Yes, when that runs, the SIM Uart1 window shows an 'a'. Each time I press reset, I get an extra 'a'. Try puts:

    void main(void)
    {
        putc('a',stdout);
        puts("abcde"); // try this one
        while(1);
        UnityBegin();
        RUN_TEST(test_demo,1);
        UnityEnd();   
    }

Yes, that works too. I looked into UnityBegin() (not much there) and then UnityEnd(). That starts with a UnityPrint. Let's try that one.

    void main(void)
    {
        putc('a',stdout);
        puts("abcde");
        UnityPrint("Does this print?");
        while(1);
        UnityBegin();
        RUN_TEST(test_demo,1);
        UnityEnd();   
    }

Looks like there's a problem with UnityPrint(). After some searching, I discovered (in the C18 C Compiler Getting Started manual) that the C18 compiler puts string constants in the code section, so they are const rom char*, rather than just const char *.

After a lot of experimentation, I decided that I needed to have two versions of the UnityPrint() function: one with a const rom char* parameter, and the original one with const char *. The two versions only differ in the signature and the first line:

    void UnityPrint(const char* string)
    {
        const char* pch = string;

    void UnityPrintRom(const rom char* string)
    {
        const rom char* pch = string;
   
When I changed my code to use UnityPrintRom, it worked. By the way, I don't really understand why it should help to have that apparently unused char c. It seems to be necessary to convince the compiler to use the right addressing.

Now I had to make Unity use UnityPrintRom at the appropriate times. I did this by two global search and replaces in unity.c. The first was to change all occurrences of UnityPrint(" to UnityPrintRom(" (there were 8 of these) and the second was to change UnityPrint(UnityStr to UnityPrintRom(UnityStr (there were 50 of these). The last two to change were UnityPrintRom(file) and UnityPrintRom(Unity.CurrentTestName). And I added a couple of prototypes into unity.h:

    void UnityPrintRom(const rom char* string);
    int putchar(int c);

Now, it works.

    testDemo.c:8:test_demo:FAIL: Expected 2 Was 3
    -----------------------
    1 Tests 1 Failures 0 Ignored
    FAIL

And when I fix up the assertion, I get:

    testDemo.c:1:test_demo:PASS
    -----------------------
    1 Tests 0 Failures 0 Ignored
    OK

The last step was to surround these changes with some #ifdef UNITY_MPLAB directives. The resulting code is now in a fork of the original repository at https://github.com/johnyesberg/Unity.



Sunday, May 13, 2012

Windows Development VMs - a long adventure through Virtualbox, Vagrant, VeeWee, Ruby, RVM, Cygwin

It's a deep rabbit hole, I admit. But the prospects of gold at the end of the mixed-metaphor rainbow are enough to make it worth burrowing all the way. Here's the story so far:

In the last five years or so, I've been enjoying a journey towards continuous delivery, with a number of tour-guides and companions (Erik Dörnenburg, Glenn Moy, Steve Dalton, Craig Aspinall, Rob Nielsen, various Yow! conference attendees and presenters, and many bloggers, especially Uncle Bob and lots of Thoughtworkers). It started with pair programming, and then progressed to test-driven development, and most recently, continuous integration, test, and deployment. At each stage, I found that I had acquired additional armour that provided increasing levels of courage, confidence, freedom, and protection. It's now hard to go back.

My latest adventure is towards DevOps. The gradual accretion of software packages into my development environment can make it unclear what our actual dependencies are. Rather than creating (or building, or deploying) new projects on existing machines, I want to be able to spin up a shiny new vanilla virtual machine, and install a clean set of known software on it. In the Windows world, I don't want to forget to install the right version of the .Net Framework, or the ASP.NET component, or msdeploy, or all the other little bits that seem to be needed.

I've been running a couple of VMs on Virtualbox for some time. Vagrant is a very neat way to start, configure, and stop various VMs - especially when you need multiple VMs to work together. (I also like cumberbatch for testing such configurations.) All I need is a way to create the right base VM. All that interactive clicking to install Windows, SQLServer, SharePoint, etc is ok, but it's hard to document. A script would be better. And VeeWee is the tool for that. It's been working well on linux-related machines for some time, and support has recently been added for Windows. The VeeWee installation guide recommends using RVM (Ruby Version Manager). So to build RVM on Windows, I first had to install (well, add a few packages to) Cygwin.

So how did it all go? Well, there were a few errors. I'm running on a Windows 7 (64 bit) machine, in case that helps anyone.

First, the RVM on Cygwin script did cause a couple of dialog boxes "expr.exe has stopped working" to pop up for each version of Ruby that it installed (1.9.3-p194 and 1.9.2-p320). The log file suggested problems with the compiler, but it seems to have finished...

John@margaux ~/.rvm/log/ruby-1.9.3-p194/yaml
$ cat configure.log
]  ./configure --prefix="/cygdrive/c/Users/John/.rvm/usr"
checking for a BSD-compatible install... config/install-sh -c
checking whether build environment is sane... yes
./configure: line 2562: /cygdrive/c/Program: No such file or directory
configure: WARNING: `missing' script is too old or missing
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in `/c/Users/John/.rvm/src/yaml-0.1.4':
configure: error: C compiler cannot create executables
See `config.log' for more details
c:\Users\John\Downloads\UnxUtils\usr\local\wbin\sed.exe: -e expression #1, char 1: Unknown command: ``C''

John@margaux ~/.rvm/log/ruby-1.9.3-p194/yaml
I configured RVM to use 1.9.2 as the default, which is what the VeeWee installation suggested.
rvm --default use 1.9.2
The second error was in the VeeWee installation "bundle install" step.

John@margaux ~/repos/veewee
$ bundle install
Fetching gem metadata from http://rubygems.org/......
Using rake (0.9.2.2)
Installing CFPropertyList (2.1.1)
Installing Platform (0.4.0)
Installing ansi (1.3.0)
Installing archive-tar-minitar (0.5.2)
Installing builder (3.0.0)
Using bundler (1.1.3)
Installing ffi (1.0.11) with native extensions
Installing childprocess (0.3.2)
Installing diff-lcs (1.1.3)
Installing json (1.5.4) with native extensions
Installing gherkin (2.10.0) with native extensions
Installing cucumber (1.2.0)
Installing erubis (2.7.0)
Installing excon (0.9.6)
Installing formatador (0.2.1)
Installing mime-types (1.18)
Installing multi_json (1.0.4)
Installing net-ssh (2.2.2)
Installing net-scp (1.0.4)
Installing nokogiri (1.5.2) with native extensions
Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        /cygdrive/c/Users/John/.rvm/rubies/ruby-1.9.2-p320/bin/ruby.exe extconf.rb
checking for libxml/parser.h... no
-----
libxml2 is missing.  please visit http://nokogiri.org/tutorials/installing_nokogiri.html for help with installing dependencies.
-----
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
        --with-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/cygdrive/c/Users/John/.rvm/rubies/ruby-1.9.2-p320/bin/ruby
        --with-zlib-dir
        --without-zlib-dir
        --with-zlib-include
        --without-zlib-include=${zlib-dir}/include
        --with-zlib-lib
        --without-zlib-lib=${zlib-dir}/lib
        --with-iconv-dir
        --without-iconv-dir
        --with-iconv-include
        --without-iconv-include=${iconv-dir}/include
        --with-iconv-lib
        --without-iconv-lib=${iconv-dir}/lib
        --with-xml2-dir
        --without-xml2-dir
        --with-xml2-include
        --without-xml2-include=${xml2-dir}/include
        --with-xml2-lib
        --without-xml2-lib=${xml2-dir}/lib
        --with-xslt-dir
        --without-xslt-dir
        --with-xslt-include
        --without-xslt-include=${xslt-dir}/include
        --with-xslt-lib
        --without-xslt-lib=${xslt-dir}/lib


Gem files will remain installed in /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/nokogiri-1.5.2 for inspection.
Results logged to /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/nokogiri-1.5.2/ext/nokogiri/gem_make.out
An error occured while installing nokogiri (1.5.2), and Bundler cannot continue.
Make sure that `gem install nokogiri -v '1.5.2'` succeeds before bundling.
So I had to install libxml2 to get nokogiri working. I went back to my Cygwin setup, added libxml2 and libxml2-devel, and tried again. A similar error reported that libxslt was missing, so I installed libxslt and libxslt-devel with Cygwin setup. Then nokogiri and the rest of the VeeWee installation went smoothly.

At this point, I was ready to start VeeWee-ing for Vagrant, using these directions. I started by creating a new directory ~/boxes to put my VMs in. But veewee wouldn't work from there. I looked into RVM a little, but didn't find anything obvious, so I decided to leave that as an exercise for later. I could make boxes inside ~/repos/veewee for now.
John@margaux ~/repos/veewee
$ veewee vbox define '2008r2' 'windows-2008R2-serverstandard-amd64'
The basebox '2008r2' has been succesfully created from the template 'windows-2008R2-serverstandard-amd64'
You can now edit the definition files stored in definitions/2008r2 or build the box with:
veewee vbox build '2008r2'
$ veewee vbox build  '2008r2'
Error: We executed a shell command and the exit status was not 0
- Command :VBoxManage -v.
- Exitcode :127.
- Output   :
sh: VBoxManage: command not found

Wrong exit code for command VBoxManage -v

John@margaux ~/repos/veewee
$ VBoxManage
bash: VBoxManage: command not found

John@margaux ~/repos/veewee
$
Looks like my VirtualBox directory isn't in the path. I added it to the Windows environment variable and restarted my Cygwin bash. Hmm - not there. C:\Program Files\Oracle\VirtualBox shows up in a cmd window's echo %PATH%, but not in a Cygwin bash echo $PATH. How do I get Cygwin to reload the Windows environment? Hooray for Stack Overflow, which already had the answer.
John@margaux ~
$ export PATH="$PATH:$(cygpath -pu "`reg query 'HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' /v PATH| grep PATH | cut -c23-`")"

John@margaux ~
$ VBoxManage -v
4.1.12r77245

John@margaux ~
$
So would my build work now? No.
$ veewee vbox build  '2008r2'
/cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/openssl.rb:1:in `require': no such file to load -- openssl (LoadError)
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/openssl.rb:1:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/buffer.rb:2:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/buffer.rb:2:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/algorithms.rb:1:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/algorithms.rb:1:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/session.rb:7:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh/transport/session.rb:7:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh.rb:10:in `require'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/net-ssh-2.2.2/lib/net/ssh.rb:10:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:20:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:20:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:18:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:4:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:3:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:2:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/helper/ssh.rb:1:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/box.rb:2:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/box.rb:2:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/virtualbox/box.rb:1:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/virtualbox/box.rb:1:in `'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/provider.rb:34:in `require'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/provider/core/provider.rb:34:in `get_box'
        from /cygdrive/c/Users/John/repos/veewee/lib/veewee/command/virtualbox.rb:17:in `build'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/task.rb:22:in `run'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/invocation.rb:118:in `invoke_task'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor.rb:263:in `dispatch'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/invocation.rb:109:in `invoke'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor.rb:205:in `block in subcommand'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/task.rb:22:in `run'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/invocation.rb:118:in `invoke_task'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor.rb:263:in `dispatch'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/gems/thor-0.14.6/lib/thor/base.rb:389:in `start'
        from /cygdrive/c/Users/John/repos/veewee/bin/veewee:18:in `'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/bin/veewee:23:in `load'
        from /cygdrive/c/Users/John/.rvm/gems/ruby-1.9.2-p320@veewee/bin/veewee:23:in `
'
How to install openssl for ruby? I tried a few things, but I think what I needed to do was install openssl-devel (as well as the openssl I already had) using Cygwin setup. Then I had to
rvm remove 1.9.2
rvm install 1.9.2
rvm --default use 1.9.2
gem install bundler
bundle install
Finally, I'm at the point where running veewee vbox build '2008r2' will actually download the iso from the Internet!

John@margaux ~/repos/veewee
$ veewee vbox build  '2008r2'
Downloading vbox guest additions iso v 4.1.12 - http://download.virtualbox.org/virtualbox/4.1.12/VBoxGuestAdditions_4.1.12.iso
Creating an iso directory
Checking if isofile VBoxGuestAdditions_4.1.12.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso
Moving /tmp/open-uri20120513-4468-1cp4y7g to /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.isooooooooooooooooooooooooooooooooooooooooooooooo|  48.4MB 357.7KB/s ETA:   0:00:00
Building Box 2008r2 with Definition 2008r2:
- postinstall_include : []
- postinstall_exclude : []

We did not find an isofile in /iso.

The definition provided the following download information:
- Download url: http://care.dlservice.microsoft.com//dl/download/7/5/E/75EC4E54-5B02-42D6-8879-D8D3A25FBEF7/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
- Md5 Checksum: 4263be2cf3c59177c45085c0a7bc6ca5


Download? (Yes/No) Yes
Checking if isofile 7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
|  34.9MB 306.6KB/s ETA:   2:46:10

I decided to interrupt this download and modify my definitions/2008r2/definition.rb by commenting out the iso filename line and adding one I had prepared earlier.
    #:iso_file => "7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso",
    :iso_file => "en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso",
The next problem seems to be that when running VBoxManage, veewee is passing a Cygwin path, rather than a Windows path.
John@margaux ~/repos/veewee
$ veewee vbox build  '2008r2'
Downloading vbox guest additions iso v 4.1.12 - http://download.virtualbox.org/virtualbox/4.1.12/VBoxGuestAdditions_4.1.12.iso
Checking if isofile VBoxGuestAdditions_4.1.12.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso

The isofile VBoxGuestAdditions_4.1.12.iso already exists.
Building Box 2008r2 with Definition 2008r2:
- postinstall_include : []
- postinstall_exclude : []

The isofile en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso already exists.
Received port hint - 59856
Found port 59856 available
Creating vm 2008r2 : 384M - 1 CPU - Windows2008_64
Creating new harddrive of size 10140
Mounting cdrom: /cygdrive/c/Users/John/repos/veewee/iso/en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso
Error: We executed a shell command and the exit status was not 0
- Command :VBoxManage storageattach "2008r2" --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium "/cygdrive/c/Users/John/repos/veewee/iso/en_windows_server_2008_r2_standard
_enterprise_datacenter_web_x64_dvd_x15-50365.iso".
- Exitcode :1.
- Output   :
VBoxManage.exe: error: Could not find file for the medium 'C:\cygdrive\c\Users\John\repos\veewee\iso\en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso' (VERR_PATH
_NOT_FOUND)
VBoxManage.exe: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Medium, interface IMedium, callee IUnknown
Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, AccessMode_ReadWrite, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 210 of file VBoxManageDisk.cpp
VBoxManage.exe: error: Invalid UUID or filename "/cygdrive/c/Users/John/repos/veewee/iso/en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso"

Wrong exit code for command VBoxManage storageattach "2008r2" --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium "/cygdrive/c/Users/John/repos/veewee/iso/en_windows_server
_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso"

John@margaux ~/repos/veewee
$
At this point, I googled to see what could be done, but found little in the way of advice. Maybe next time this blog entry will come up. I didn't see anything that talked about VeeWee converting from a Cygwin path to a Windows path. So I decided to take matters into my own hands and modify the VeeWee code. I've read a little about Ruby, but I'm not an experienced Ruby programmer. I wasn't sure how to write code to detect whether it was running under Cygwin, so someone else may have to help with that soon, and perhaps contribute the results back to the project. But my changes were limited to two files.
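Something like the following might serve as that Cygwin check (a sketch of my own, not from the VeeWee source): Ruby's RUBY_PLATFORM constant includes "cygwin" when the interpreter was built for Cygwin, so the path conversion could be applied only where it's needed.

```ruby
# Sketch only - not from the VeeWee source. RUBY_PLATFORM is e.g.
# "i386-cygwin" under Cygwin and "x86_64-linux" elsewhere.
def running_under_cygwin?
  !(RUBY_PLATFORM =~ /cygwin/).nil?
end

# Convert a path with cygpath only when running under Cygwin;
# on other platforms the path is already native and is returned as-is.
def native_path(path)
  running_under_cygwin? ? `/bin/cygpath -w "#{path}"`.chomp : path
end
```

With a guard like this, the cygpath calls could presumably be left in place permanently without breaking Linux or Mac hosts.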

In the file ~/repos/veewee/lib/veewee/provider/virtualbox/box/helper/create.rb I added a new method at the beginning of the BoxCommand module, i.e. line 6:
        def cygpath(s)
          `/bin/cygpath -w "#{s}"`.chomp
        end
This was inspired by Kevin Kleinfelter's code in this thread, although I had to add the -w flag to make sure the result was a Windows path, not a Cygwin path, and add quotes to ensure that it would work when there were spaces in the pathname (VirtualBox puts a space in the "VirtualBox VMs" folder name). I then had to call this method four times (in the attach_disk, attach_isofile, attach_guest_additions, and attach_floppy methods), so that the resulting construction of the command variable looked like this:
command ="#{@vboxcmd} storageattach \"#{name}\" --storagectl \"SATA Controller\" --port 0 --device 0 --type hdd --medium \"#{cygpath(location)}\""
I modified some of the ui.info lines (including moving them below the command assignment) to print the whole command, rather than just the filename.

The second file that needed changing was ~/repos/veewee/lib/veewee/provider/core/box/floppy.rb. It seemed that the construction of the floppy image was not working properly, for the same reason as above. So I added the same cygpath(s) method there. But that wasn't enough either. I saw errors like this:
Unable to access jarfile C:UsersJohnreposveeweelibjavadir2floppy.jar
It seemed that Java was devouring an extra backslash. So I added another method, escape, to floppy.rb.
       def escape(s)
         s.gsub('\\','\\\\\\\\')
       end
And I added a new line assigning jar_file, updated the assignment to command, and added a log message:
           jar_file=File.join(javacode_dir,"dir2floppy.jar")
           command="java -jar #{escape(cygpath(jar_file))} \"#{escape(cygpath(temp_dir))}\" \"#{escape(cygpath(floppy_file))}\""
           ui.info("Creating floppy: #{command}")
           shell_exec("#{command}")
Getting all this right took several iterations of running
$ veewee vbox build  '2008r2' --force
I had to add --force to make veewee really construct the box again. I also had to manually delete the folder C:\Users\John\VirtualBox VMs\2008r2 each time before running that command. But after all these modifications, I eventually saw a VirtualBox window come up and install Windows (with no user intervention), rebooting a few times along the way. My VeeWee seemed to be waiting for sshd to work, but there seemed to be nothing happening on the VM:
$ veewee vbox build  '2008r2' --force
Downloading vbox guest additions iso v 4.1.12 - http://download.virtualbox.org/virtualbox/4.1.12/VBoxGuestAdditions_4.1.12.iso
Checking if isofile VBoxGuestAdditions_4.1.12.iso already exists.
Full path: /cygdrive/c/Users/John/repos/veewee/iso/VBoxGuestAdditions_4.1.12.iso

The isofile VBoxGuestAdditions_4.1.12.iso already exists.
Building Box 2008r2 with Definition 2008r2:
- postinstall_include : []
- postinstall_exclude : []
- force : true

The isofile en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso already exists.
Received port hint - 59856
Found port 59856 available
Creating vm 2008r2 : 384M - 1 CPU - Windows2008_64
Creating new harddrive of size 10140
Mounting cdrom: C:\Users\John\repos\veewee\iso\en_windows_server_2008_r2_standard_enterprise_datacenter_web_x64_dvd_x15-50365.iso
Mounting guest additions: C:\Users\John\repos\veewee\iso\VBoxGuestAdditions_4.1.12.iso
Attaching disk: C:\Users\John\VirtualBox VMs/2008r2/2008r2.vdi
Creating floppy: java -jar C:\\Users\\John\\repos\\veewee\\lib\\java\\dir2floppy.jar "C:\\cygwin\\tmp\\d20120513-6476-np2ws7" "C:\\Users\\John\\repos\\veewee\\definitions\\2008r2\\virtualfloppy.v
fd"
Received port hint - 59856
Found port 59856 available
Changing ssh port from 5985 to 59856
Waiting 1 seconds for the machine to boot

Typing:[1]:
Done typing.

Skipping webserver as no kickstartfile was specified
Starting a webserver :
Waiting for ssh login on 127.0.0.1 with user vagrant to sshd on port => 59856 to work, timeout=10000 sec
........................
I noticed that there was a C:\cygwin folder on my new VM, but it was empty. No wonder sshd wasn't connecting. The floppy (A:) contained install-cygwin-sshd.bat, which was supposed to install Cygwin, but it didn't contain cygwin-setup.exe like it should. (I ran the bat file inside a command prompt so that I could see where the first error occurred.) So I copied that file to C:\Users\John\repos\veewee\definitions\2008r2 and added a reference to it in the ~/repos/veewee/definitions/2008r2/definition.rb file, as below.
    :floppy_files => [
      "Autounattend.xml", # automate install and setup winrm
      "install-cygwin-sshd.bat",
      "install-winrm.bat",
      "cygwin-setup.exe",
      "oracle-cert.cer"],
Then I shut down and deleted my VM, and re-ran the veewee build. Now I see Cygwin downloading various packages. It would be nice if it could use the repository from my host machine, rather than downloading from the Internet again. I know it should only have to do this once, but if we're developing...

So now I have a server sitting there with Cygwin installed, but it's not answering the sshd requests that my host is making. It seems install-cygwin-sshd.bat may still not be completing. I didn't want to run the whole thing, since that would install Cygwin again, so I checked parts individually.
  • The cygrunsrv command worked.
  • The /etc/group and /etc/passwd files are there.
  • The ssh-host-config command seems to run ok.
  • Although the ssh rule is in the firewall configuration, the SSHD one isn't. When I try to run the command manually, the SSHD one complains. It seems the word "SSHD" near the end might be causing problems, since it works correctly if I leave that bit out. Must look into that a bit more.
  • Running "net start sshd" returns "System error 1069 has occurred. The service did not start due to a logon failure". I checked in the services control panel and saw that there was an entry for Cygwin sshd that indeed was not running. Trying to start it caused the same error.
And I'm out of time for today. I hope I'll be able to finish this journey soon, and describe further experience. I'd be pleased to hear from anyone who can suggest better ways to do all this.

Thursday, June 30, 2011

St Hallett Old Block Shiraz

Three of us visited Craig's place last night. It was supposed to be a night for board games, but since it had been a fair while since the last one, we had a bit of chat to get through. And the tour of his house. The tour ended with a visit to the wine fridge. Then we had to pore over the wine list and choose something.

Craig was fairly confident that the 1991 St Hallett Old Block shiraz (acquired at auction 6 years ago) would be past its best. That was just the excuse I needed to encourage him to choose it - no point waiting longer!

The cork was very soft, and less than half came out with the corkscrew. After some digging, the only option was to push the cork in, and to sieve and decant. The bottle was fairly free of lees, the wine was on the brickish side of red, and the scent was a cause for optimism. After decanting, we left it for about 30 mins.

The first tastes revealed a rather solid depth with central smoothness, although there were still some aromatics and a hint of acetic acid. But after it had warmed slightly in the glass, and perhaps with slightly recalibrated palates thanks to the Mersey Valley cheese, the tannic complexities began to reveal themselves, and worries about the wine being too old seemed to evaporate. No unripe peppery flavours here; it was just plums with a hint of barnyard. The only things I could have asked for more of would have been some touches of tobacco and a bit more length.

We concluded in the end that the wine may well have been better a few years ago, but it was definitely still on the delicious side of good last night.

Thanks Craig!

Sunday, June 26, 2011

Binary Political Prisoners

I get jealous when I see that other governments are making all their data freely available. I can understand why, in the old economy, it could make sense to have user-pays access to all that data. But these days, it's so much less appropriate to keep that data locked up like a political prisoner. I've written to my state Member of Parliament and the relevant minister to ask for the LNP and Labor positions.

Dear ...
I believe that as much state government information as possible (such as property sales data) should be free. While the government raises some revenue from selling access to such information, I think that making such information publicly available would reduce the bureaucracy.  Most importantly though, it would create an environment that would allow people to create useful innovative services for the public. There are precedents:
Does (your party) support such a position?

Best regards,

John.
If I get any answers, I'll put them here.

Thursday, May 26, 2011

Monad tutorials

I studied some computer science when I was at uni. It was pretty interesting. Perhaps I didn't do enough, though. I didn't learn about compilers, nor much about functional languages. I've been redressing both of these deficits a little, recently. I've been reacquainting myself with the Antlr compiler compiler (20 years ago it was yacc and lex!), and looking at monads - not because I have an urgent need, or because I've had a Haskell conversion, but because it seems like an interesting intellectual challenge. I've even started reading SICP.

I'm not tempted to tell you that a Monad is like a Burrito, nor to write my own monad tutorial. I can't say I've put in the effort to write a historical analysis of a whole bunch of them. Instead, all I can do is tell you that the three best descriptions I've found are:

Wednesday, February 2, 2011

Using Apache httpclient through an NTLM authenticating proxy with ftp

I needed to (programmatically) retrieve a file from an FTP server out on the Internet. In this example, the URL is ftp://site.com/dir/file.txt. My computer can only access the Internet through proxies: an HTTP proxy called web-proxy.local, and an FTP proxy called ftp-proxy.local.

I noticed that I could retrieve the file using my browser, but not using command-line ftp. I determined that the ftp-proxy was slightly misconfigured and didn't believe that my host was a legitimate user. But how did the browser fetch the file, using that URL above? A little work with Wireshark showed that the browser makes an HTTP connection to the proxy and passes the HTTP command:
GET ftp://site.com/dir/file.txt HTTP/1.1
When I tried to use java.net.URLConnection with this URL, it wouldn't connect to the web-proxy. That seemed reasonable - it was probably trying to connect to it with FTP. But somehow I needed to create an HTTP connection to a URL starting with ftp://.

I decided to try Apache HttpComponents HttpClient - Apache code is always great. After a little difficulty getting the right version (eventually 4.1), I found that I was getting
java.lang.IllegalStateException: Scheme 'ftp' not registered.
at org.apache.http.conn.scheme.SchemeRegistry.getScheme(SchemeRegistry.java:71)
at org.apache.http.impl.conn.DefaultHttpRoutePlanner.determineRoute(DefaultHttpRoutePlanner.java:111)
at org.apache.http.impl.client.DefaultRequestDirector.determineRoute(DefaultRequestDirector.java:710)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:356)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
...
I peered into the source and javadoc of the DefaultHttpRoutePlanner and SchemeRegistry, and discovered that I could add the 'ftp' scheme like this:
DefaultHttpClient hc = new DefaultHttpClient();

// Register a scheme so that we can ask the proxy to use ftp
Scheme ftp = new Scheme("ftp", 80, new PlainSocketFactory());
hc.getConnectionManager().getSchemeRegistry().register(ftp);
In this case, I don't think it matters what port number (80) I use, or even which type of socket factory, since the connection will be sent via the proxy anyway - the system doesn't really need to know how to create an ftp socket, or which port to use.

The next issue I had was making it work with the proxy. I had copied the example HttpClient code for using authenticating proxies, but it didn't work. Again, Wireshark helped. When the browser fetched the file, I could see the 3-phase NTLM negotiation, but not when my software ran. A spot of googling showed me that instead of using UsernamePasswordCredentials, it would be better to use NTCredentials. And now it works. The final code looks like this:
DefaultHttpClient hc = new DefaultHttpClient();

// Register a scheme so that we can ask the proxy to use ftp
Scheme ftp = new Scheme("ftp", 80, new PlainSocketFactory());
hc.getConnectionManager().getSchemeRegistry().register(ftp);

// Set up NT(LM)Credentials for use with the proxy.
hc.getCredentialsProvider().setCredentials(AuthScope.ANY, new NTCredentials("myUsername", "myPassword", "", ""));

// Set up the proxy
HttpHost proxy = new HttpHost("web-proxy.local", 8080);
hc.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);

// Set up the URL to fetch
HttpGet hg = new HttpGet("ftp://site.com/dir/file.txt");
HttpResponse hr = hc.execute(hg);

HttpEntity entity = hr.getEntity();
InputStream instream = entity.getContent();
...

Friday, January 7, 2011

Grails proxy settings

I always forget these.

1. When setting the details from the (Windows) command line, use quotes everywhere:
> grails add-proxy someProxyName "--host=the.host.name" "--port=8080" "--username=myUserName" "--password=myPassword"

2. Don't forget to set-proxy:
> grails set-proxy someProxyName

3. The ProxySettings.groovy file is stored in the top level of .grails - not in the 1.3.5 subdirectory etc.

Wednesday, January 5, 2011

Online book shopping

For some professional development, and to see what Uncle Bob is on about, I decided to buy Structure and Interpretation of Computer Programs (SICP).

I was impressed that QBD Online had a price (including delivery) of $53.95. This was comparable to Amazon ($53.18), and Book Depository ($57.54). (It's available for free online, but I think I'd rather pay for a hard copy.) I ordered through QBD. Eleven days later (admittedly some were public holidays), they informed me that they were "temporarily out of stock", and the publisher couldn't advise when more stock would be available. I wasn't happy, and asked them to cancel the order. I was pleasantly surprised to see that they responded very quickly (cancel requested 9pm, response at 6am). It means that I'm more likely to go back to them in future.

It was simple to order the book at Amazon. They said that the book was in stock, and twelve hours later I had an email saying that the book had been shipped. Nice contrast.