Monday, 12 December 2011

Kobo Touch eReader -it's not a Kindle!

Touch Me!
I decided that it is time to join the 21st century and get myself an eReader. I read a lot, often on the commute, and if I go on holiday half the weight is books, so it makes sense to look for a smaller and lighter solution. Earlier readers were generally pretty ugly, but the latest ones, like the Kobo and the Kindle, look pretty good. So what to buy? I did a bit of research and ruled out a tablet straight away; although tablets are more versatile devices, the screens aren't as good for reading and they are considerably heavier and pricier.

The major player is the Kindle: it's small, light and decent looking -but at the time of writing has no touch screen in the UK. Other drawbacks are the closed format and the fact that Amazon get enough of my money anyway; a market is only a market if there is competition, otherwise it is a monopoly. There's the Sony, but that's expensive at £129 compared to the Kobo Touch at under £100.

The Kobo Touch is less than a ton -a mere tenner more than the non-touch Kindle- looks pretty decent, especially in the matt black, and can be picked up in the high street from Smiths or Argos. The spec is much the same as the Sony and the Kindle, with a 6 inch Pearl E-ink display, battery life of around a month and 1 GB of free storage, although on the Kobo you can insert a microSD card to increase this.

The Kobo, unlike the Kindle, reads EPUB books, so you are not restricted to the Kobo shop. I haven't had a good poke around the shop yet -but it looks well enough stocked. Kobo also supports DRM-locked books via Adobe Digital Editions, which means that I can read books from the county library; I've downloaded Flashman on the March to try it out.

It will also cope with PDFs -so I have copied the G3 manual onto it, and it displays well enough. This is a bit of a boon, as the manual comes as a PDF, which means either printing it out or lugging a laptop if you want to refer to it away from home.

In use -it hasn't had a real test yet, but so far so good. The display is clear and pleasant, and the touch screen works well, although it can be a little slow compared to the phone. The Kobo Touch is light and easy to hold in the hand, and the quilting on the back is solid and feels quite pleasant. My eyes are still adjusting to it, probably because I'm holding it at a different distance and angle than I would a book, but the font, size and spacing are all selectable. It strikes me that these devices might be good for older readers (which I am rapidly becoming one of), as they are a lot lighter and handier than large print books.

The software works well and the device is pretty intuitive for someone who hasn't used a reader before -as it should be, there isn't much to it. It is easy to bring up the table of contents for a book and navigate around it. I was halfway through 'The Castle of Otranto' on my Android phone, and I'm now at the same place on the eReader. If you use the Kobo Reader software on your devices, then what you're reading should be kept in sync, but I haven't tried this. Words can be checked in a dictionary and highlighted, pages can be annotated -you can even browse the web if you are desperate enough. I'd advise sticking to what the Kobo does best -just read books on it; it's very good at that.

In conclusion -right now, this Christmas, in the UK- the Kobo is a Kindle killer. When Amazon ship us the Touch and open up their shop in the same way that they have in the States then it won't be -except that the Kindle Touch may well be 30% more expensive, and that's a good few books.

Monday, 5 December 2011

Zuiko Your Way -Old Lenses On New Bodies.

The time has come, the walrus said,
to speak of many things,
of shoes and ships and sealing-wax,
of OM adapter rings.

Couldn't resist the rhyme, so now I'll have to write a post about it. One of the issues with switching camera makers is that you are left with all the old kit. One thing to do with all the old glass is to reuse it, and Olympus OM fit Zuiko lenses are supposed to be good for this, especially on the micro 4/3 format of the Panasonic G3. There are two reasons for this: firstly the lenses are quick, due to the width of the OM fitting, and secondly they are light -it was part of the OM way- and so should put less strain on the mounting ring of the G3.

One thing to consider, though, is that the effective focal length of the lens doubles when used on the 4/3 system (a 2x crop factor), so the 50mm standard lens becomes a 100mm portrait lens, and the 500mm mirror lens is now a 1000mm bird spotter's delight (maybe).

Another thing to note is that you can pay more or less what you want for the adapter: this one was fourteen quid from eBay, the best price on Amazon seemed to be double that, and the official Olympus MF-2 is £130! So far this one seems to be well manufactured, and it came from a UK supplier (all the Hong Kong sellers wanted more).

Finally, these lenses won't integrate with the camera's electronics -unsurprisingly, as the OM lenses don't have any. This means that there is no auto-focus, and you have to put the G3 into 'shoot without lens' mode and use Aperture priority (A) on the top dial to shoot. When you do that the camera seems to work pretty well, although at long focal lengths it seems to over-expose -but maybe I need to read the manual.

I'll perhaps talk about the individual lenses later -but for now here's a shot from the Sirius mirror lens.

Birdie -watch me!

Saturday, 3 December 2011

Panasonic G3 -a walk in the country.

Will you lot get away from that tree!

This is the first time I have taken the new camera out to play, and it's done pretty well. I was a bit worried that colours would be too cool; this wasn't the best day to prove it one way or another, as it was getting near to sunset and the mist was rising, but all in all I am pretty pleased. I did play around with the camera settings a little, but I wasn't organised enough to write down what I used.


Mister friendly poses for the Panasonic

Generally I used the viewfinder rather than the screen; it has a dioptre adjuster so that you can compensate for any optical shortcomings you have and bring the menu into focus. It's all pretty intuitive -which is just as well, as I have yet to open up the CD with the manual on it. The viewfinder is good enough that I'm pleased I chose this camera rather than, say, the Olympus without one.
I think I used the flash

One problem that I did have was with the buttons on the back of the camera by the screen, which I found myself hitting by accident, sending the camera into some strange modes. Another is that the touch screen, whilst usable, isn't the most responsive -it's quite a bit worse than the one on my LG Optimus phone.

Colours look pretty good here

Moody Views

Monday, 28 November 2011

New Toy -Panasonic G3 Camera.

Panasonic Lumix G3
We're off on holiday next year, and it's the sort of holiday where it's worth taking a decent camera. I have long been a fan of the Olympus OM cameras -currently I have an OM4- but the cost of E6 (slide to you) processing is going through the roof, £15 plus per roll with a decent scan, plus the price of the film, and slides have never been the most user-friendly medium. Medium -that term does bring a smack of the spiritualist about it: gathering in a darkened room to look at grainy images of the past sounds like many a photo club meeting from the seventies, and at least with 'the other side' you don't get the cheesy music (unless your spirit guide is Barry Manilow -what do you mean he's not dead!).

I have had a couple of digital cameras before: a Kodak DC120, which took surprisingly good pictures, and an Olympus CAMEDIA 4000Z, which you can still get memory for. But it is time for something that could produce quality shots rather than just snaps.

Why the G3? Firstly, I didn't want anything bigger and heavier than the OM4 -small is beautiful- and that brought me to the small system cameras from Olympus, Panasonic and Sony. Secondly, call me old fashioned, but I wanted a viewfinder -that got me straight to the G3 (although some of the others offer one as an add-on). Lastly, price -it seems to offer the most bang for your buck. There's other good stuff to wibble on about too: the tilting touch screen, the 16 megapixel sensor, and the cross-manufacturer micro four thirds system, which means more lenses. The G3 looks like a proper camera without trying to fake one from 30+ years ago. If I wanted a rangefinder I'd use my Nikonos or Voigtlander -come to think of it the Nik doesn't have a rangefinder, just 'f8 & be there'.

First impressions out of the box? The camera looks and feels great -the lens looks and feels cheap. Charge it up, turn the camera on, and the G3 works pretty much as you'd expect; I was getting some sort of pictures out of it at the first try.

I'll take it for a walk and let you see the results.

Saturday, 22 October 2011

Machine Learning -Week 2 Multivariate Linear Regression

Introducing Octave

Fitted line graph -the blue line is derived from the red crosses
If you got past the title you are doing well! More line fitting, but we're now into multi-dimensional space, captain! Put simply, this means that you have more parameters (different types of information) to match. So in the house price example used in the course: the simple version matches area (sq ft) to the price of the house; the multivariate version might use area, number of rooms and age to get you a more accurate fit. Fortunately, it works as you'd hope, in that you just add up the effects of the different terms -or in programming terms, you just have two loops :

for i = 1 to number_of_houses
  for j = 1 to number_of_things_i_know_about_a_house

Cost function plot from Octave
Cost function plot from Octave
Which leads nicely to Octave, a programming language designed specifically for doing maths, which combines with gnuplot to draw pretty pictures of your data. The sort of data we are talking about now comes in tables -whether printed, or as database tables or spreadsheets doesn't really matter; as a programmer you would normally bring this stuff in as arrays of whatever dimension and do the above. Octave, however, understands matrices and tables natively and can manipulate them directly, so the above might become :

number_of_houses * work_it_out(number_of_things_i_know_about_a_house)
A lot simpler and, because Octave is built for this sort of job, much quicker.
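The same contrast shows up in any array language. Here is a sketch in Python with NumPy rather than the course's Octave -the house data and weights are entirely made up for illustration- showing the two-loop version and the one-line matrix version producing the same answers:

```python
import numpy as np

# Hypothetical data: [area (sq ft), rooms, age] for three houses,
# with invented fitted weights -purely for illustration.
X = np.array([[2104.0, 5.0, 45.0],
              [1416.0, 3.0, 40.0],
              [ 852.0, 2.0, 36.0]])
theta = np.array([0.1, 25.0, -1.0])
bias = 50.0

# Two-loop version: outer loop over houses, inner loop over the
# things we know about each house.
preds_loop = []
for i in range(X.shape[0]):
    total = bias
    for j in range(X.shape[1]):
        total += X[i, j] * theta[j]
    preds_loop.append(total)

# Matrix version: one matrix-vector product does both loops at once.
preds_vec = X @ theta + bias

print(np.allclose(preds_loop, preds_vec))  # → True
```

The `X @ theta` product runs both loops in optimised native code, which is where the speed comes from.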

Onwards and upwards. The top graph represents the point of all this: if you know the area of your house you can estimate its cost. The advantage of getting a computer to do this is that you can have enormous training sets -all house prices in the country- and do lots of subset plots -all semis, all semis in Suffolk, etc.

Friday, 21 October 2011

AI - week 2 Bayes Networks, probably the best networks in the world

What are the chances of that?

After an easy introduction, via tree searching, last week, we're into the thick of it with Bayes Networks and stochastic reasoning.

The good reverend Bayes

Bayes networks deal with uncertainty; they can answer questions such as: given that my cancer test was positive, what are the chances that I have cancer? I'm not going to go into the details here -there are several resources on the web- but the answer is of the form

P(C|T) = ( P(T|C) . P(C) ) / P(T)

and likely to be lower than you think. P(C|T) is the chance of you having cancer given the positive test; P(T|C) is the chance of the test being positive if you have cancer -which I guess would come from the testing of the test; P(C) is the chance of you having cancer in general -which would come from actuarial tables or the like; and P(T) is the chance of the test being positive whether or not you have cancer.
The course goes into more depth than this, showing how to reason from one test result to another, say, and how to chain probabilities.
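To see why the answer is 'lower than you think', here is the formula worked through in Python with some made-up but plausible numbers -a rare cancer and a fairly accurate test:

```python
# Assumed numbers, purely for illustration:
p_c = 0.01              # P(C): 1% of people have the cancer
p_t_given_c = 0.9       # P(T|C): the test catches 90% of real cases
p_t_given_not_c = 0.08  # false positive rate on healthy people

# P(T): total probability of a positive test, sick or not
p_t = p_t_given_c * p_c + p_t_given_not_c * (1 - p_c)

# Bayes' rule: P(C|T) = P(T|C) * P(C) / P(T)
p_c_given_t = p_t_given_c * p_c / p_t

print(round(p_c_given_t, 3))  # → 0.102
```

So even with a 90% accurate test, a positive result only means about a 10% chance of actually having the disease -the rarity of the condition dominates.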

All good stuff -but my brain aches.

Monday, 10 October 2011

A.I. -Week 1, the Prologue

In which the first video goes live and we meet Profs. Thrun and Norvig.

They had a bit of a struggle, but Stanford have got the introductory lectures up on their site (via YouTube). The profs. seem affable and there's nothing too scary in the first video series -mainly just definitions of terms and illustrations of some problems- but the reading list promises a hard climb.

Stanford seem to be encouraging these courses to be social -and for the videos to be viewed communally. It turns out that there's a group in London, so I'll trot along and take a look.

With apologies to Lurcio

Friday, 7 October 2011

Machine Learning - Intro and Gradient Descent

Week 1 - In which we meet Prof. Ng and experience the delights of cost functions and Linear Regression with One Variable.

This is my first week on the Stanford University machine learning course, and so far so good -as the man said when he jumped off Nelson's Column. Prof. Ng seems to be a pretty decent lecturer who doesn't overestimate the ability of his students, and there are plenty of examples and explanations to drive the points home. The lectures come as videos with a few embedded questions to keep you awake, and they are, thankfully, split into bite-sized chunks of 10-15 minutes.

In the intro we get introduced to terminology -the difference between supervised and unsupervised learning. Supervised learning requires a training set; the main example given is of house price history vs. house size. Unsupervised learning is just trying to make inferences from a bunch of data -say clustering news stories. We are also given the difference between regression problems (line fitting, trending) and classification ones (sorting into buckets, true/false &c.).

Thence we arrive at Linear Regression with One Variable -which is basically line fitting on a 2D graph. The maths arrives, but not too brutally -calculus shows its head, but fortunately you don't have to understand the whole of calculus to use the little bit we want, so all good.
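As a rough sketch of what the lectures build up to -this is not the course's code, just a toy Python version of fitting a line by gradient descent, on invented data points:

```python
# Invented points lying near y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.9]

theta0, theta1 = 0.0, 0.0  # intercept and slope, starting from zero
alpha = 0.05               # learning rate
m = len(xs)

for _ in range(2000):
    # Partial derivatives of the mean squared-error cost function
    grad0 = sum(theta0 + theta1 * x - y for x, y in zip(xs, ys)) / m
    grad1 = sum((theta0 + theta1 * x - y) * x for x, y in zip(xs, ys)) / m
    # Update both parameters simultaneously
    theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

print(round(theta1, 2), round(theta0, 2))  # → 1.98 1.08
```

After enough small downhill steps the parameters settle at the least-squares fit, close to the slope of 2 and intercept of 1 the data was invented from.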

As well as the videos there are also online review questions -which I believe contribute to the final score. The good news is that you not only can, but are actively encouraged to, retake them until you get a perfect score. The real benefit of this is that they become a learning tool rather than just a test; answers are given as well as scores, and I think that they have helped me understand what's going on rather better than I otherwise would.

I'm now most of the way through the optional linear algebra review -but I won't post on that.

Free university courses -computing at Stanford

This is a bit of a find: free Artificial Intelligence and Machine Learning courses at Stanford University in the US. These two courses are examined (optionally, in the Advanced Track) and you can get a certificate, but there are other lecture series on more 'normal' programming topics too.

I think the 'Seven Languages in Seven Weeks' project is going to take a bit of a knock, as I have signed up for both the AI and ML courses; neither has officially started yet, but the first week's lectures are available for machine learning.

I'll post on how my attempts at this go -suffice to say week one reveals the course to be fairly academic, and a friend is struggling with the mathematical approach, more because he finds it off-putting than difficult. I'm no Turing, but so far so good -and there's always Khan Academy to fill in the gaps.

Thursday, 29 September 2011

Seven in Seven - Prolog is doing my head in.

It's day 2 of the Prolog section and my brain is suffering; I had forgotten how unlike other languages Prolog is, and we haven't even got to backtracking yet. Things got so bad that I dug out my original copy of Clocksin and Mellish -it is so old the pages have actually yellowed and the reference implementations are for a DEC-10 and PDP-11! It seems to help to think of Prolog as a goal-seeking database, i.e. a collection of facts that you can ask questions of; it's quite cute that when you run a program Prolog answers 'yes' or 'no'.

Anyway, the exercises. Trivially, they can be answered with the built-ins thus :

| ?- reverse([12,4,5,7],SL).

SL = [7,5,4,12]

| ?- min_list([12,4,5,7],SL).

SL = 4

| ?- sort([12,4,5,7],SL).  

SL = [4,5,7,12]

(1 ms) yes

But we ought to try a bit harder, so :

revappend([], Ys, Ys).
revappend([X|Xs], Ys, Zs) :- revappend(Xs, [X|Ys], Zs).

rev(Xs,Ys) :- revappend(Xs,[],Ys).

I pinched this in the end, after repeated attempts to use 'is' on something that isn't an arithmetic expression.

smallest(S,[S]).    % base case: the smallest of a one-element list is that element
smallest(S,[H|T]) :- smallest(Ss,T), H >= Ss, S is Ss.
smallest(S,[H|T]) :- smallest(Ss,T), H < Ss, S is H.
This one is all mine, alas; the first clause is the base case that stops the recursion at a single-element list. It's not tail-recursive, so it would be inefficient. Here I have added a rule for each case -but it might be possible to use an 'if' statement.

Finally the sort -I can't imagine that this is very efficient either. It works by taking the smallest member off the original list and putting it at the head of a new list; this happens repeatedly (through recursion) until the original list is empty :-

jsort([], Ys, Ys).
jsort(Xs, Ys, Zs) :- min_list(Xs,S), delete(Xs,S,X1s), jsort(X1s, [S|Ys], Zs).

sortj(Xs,Ys) :- jsort(Xs,[],Ys).

Monday, 26 September 2011

Seven in Seven - Prolog Day 1

I know it's out of sequence, but Io won't build on the work Mac (it's fine on Fedora), so I'll jump ahead to Prolog for now. Prolog -it's nearly 30 years since I last had a play with it, and I eventually developed a soft spot for it. Let's see if I still feel this way nowadays, or whether the soft spot is the bottom of the compost heap.

From memory, Prolog stands for 'Programming in Logic', and it was popular with the Japanese AI community back in the 1980s, when they were taking over the world. It is based around first-order predicate calculus, and if I had my notes I'd tell you what that meant. One of the main things to take from it, though, is that Prolog is a formal language based on mathematics, and thus (in some sense) provable. Another formal language that you may have heard of? SQL is based on set theory and in an ideal world is provable -E & OE, YMMV, not all implementations are equal, shares may go down as well as up!

First black mark, likes(X, Y) isn't the same as likes (X, Y) -the first is correct- in GNU Prolog.

Exercise : Create a knowledge base representing musicians and instruments. Also represent musicians and their genre of music.

I did this using tuples :
musician('Richard Thompson','Guitar','Folk') .
musician('Kathryn Tickell','Uillean Pipes','Folk') .
musician('John Lee Hooker','Guitar','Blues') .
 Which allows me to write queries like :

| ?- musician(Who,'Guitar',Genre).

Genre = 'Folk'
Who = 'Richard Thompson' ? a

Genre = 'Blues'
Who = 'John Lee Hooker'


Very pretty, but at the end of the day it's just a database query.

Friday, 23 September 2011

Seven in Seven - Ruby Day 3

This is the day that the light is supposed to dawn -unfortunately I was in the pub last night and it feels like midnight in Svalbard on December the 21st. I think I'm also somewhat underwhelmed because Ruby is familiar territory: I did a little bit a few years ago and I've just carried out a project in Python, so I'm just left thinking 'and...'

Exercise -extend the RubyCsv class to iterate and return CsvRow objects which you can index via 'method missing'

Again I'm not going to post the complete code; aside from anything else there's a notice at the top saying that it can't be used in articles. For the iteration I mixed in Enumerable, changed @csv_contents to an attr_reader and defined the 'each' method :

def each
  @csv_contents.each { |i| yield i }
end

@csv_contents is now an array of CsvRow objects :

class CsvRow
        attr_accessor :contents, :index

        def initialize idx, row
                @index = idx
                @contents = row.split(', ')
        end

        def method_missing name, *args
                # look the named column up in @contents (body elided -see the notice above)
        end
end

I don't really like having the header with each object; there might be a way around this by creating the class on the fly and populating a class variable with the header list, but see the first paragraph.

What have I learned
Mixin is a powerful concept -but not unique to Ruby. method_missing is a powerful idea -and bloody dangerous. I have achieved the same thing in PHP by overriding __call() and I think that was safer.

Ruby syntax is nice; at least you don't have to scatter ':'s everywhere as you do in Python, it's more consistent than PHP, and it's prettier than Perl 5, which generally looks as though a demented spider has been dancing on your keyboard.

Thursday, 22 September 2011

Seven in Seven, Ruby Day 2

Here we're starting to get a little deeper into Ruby, coming across a few nice features, such as code blocks (closures) and mixins. Still not really sure what the fuss is about though.

What did I learn?

Ruby is starting to look quite like Python. Duck typing can catch you out: a regex pattern looks like a string, quacks like a string, but doesn't waddle like a string. Ruby has nicked some of Perl's pattern-matching syntax, which is a good thing, as it's great for rule-based input processing.

Exercise : Can we translate a hash to an array, and can we iterate over a hash?

Well, yes we can :


numbers = {1=>'one', 2=>'two', 3=>'three', 4=>'four'}
num_array = []

numbers.each do |num, str|
        num_array.push(num)   # push each key and value onto the array in turn
        num_array.push(str)
end

p 'My class is ' + num_array.class.to_s
p 'Element class is ' + num_array[0].class.to_s
puts num_array

 Of course we could have just used numbers.flatten()

This exercise is to print the numbers 1 to 16 in groups of 4

#Old school
i = 0
ary = []
(1..16).each do |num|
        if i % 4 == 0
                if i > 0
                        p ary
                end
                ary = []
        end
        ary.push(num)
        i = i + 1
end
p ary

# One liner -using a code block
(1..16).each_slice(4) {|ary| p ary}

Exercise 3 : Populate the Tree class from a hash.

I'm not going to put all the code up for this, and I struggled a bit to get it going. The solution that I settled on was to have two classes: a Tree class, which took the hash argument, extending a Node class that provided the basic structure.

The Node initializer looks like this :
def initialize(name, node = {})
    @children = []
    @node_name = name
    node.each { |key, child| @children.push(, child)) }
end

So it just builds the tree recursively.

Exercise 4 : implement grep

Grep implementation, matches a string against lines from a file,
outputs the line number and contents of the matched line

usage : rb_grep pattern file

i = 1[1],"r") do |f|
        pattern =[0])
        while line =  f.gets 
                line =~ pattern and printf("%d : %s", i,line)
                i = i + 1

Tuesday, 20 September 2011

Seven in Seven : Ruby Day 1.

Here we go, day 1, Bonus challenge -guess a number.


guess = 1

while guess > 0
        num = rand(10)
        guess = gets.to_i
        if guess > num
                puts "Too high was : " + num.to_s
        elsif guess < num
                puts "Too Low was : " + num.to_s
        else
                puts 'Just Right'
        end
end

Bonus tip: the interactive Ruby interpreter is called irb, and it doesn't necessarily come with the ruby package.

Monday, 19 September 2011

The Road to Oxiana

by Robert Byron

Byron was a pre-WWII travel writer -indeed he died when his ship was torpedoed in 1941. The Road to Oxiana is the diary of a trip he made to Afghanistan in 1933-34; it can be read on many levels, from 'travelogue of an upper-class Brit abroad' to the 'wanderings of an archaeological aesthete'.

One of the main joys of the book is that Byron's writing is always vivid and forthright; for instance, on the Venice Lido, from the first paragraph :-
 The bathing, on a calm day, must be the worst in Europe: water like hot saliva, cigar-ends floating into one's mouth and shoals of jellyfish.

Lifar came to dinner. Bertie mentioned that all whales have syphilis.
The journey is travel in the old style: things happen when they will, cars break, rivers flood roads, visas are granted and denied on a whim -his whole trip seems to run several months over its allotted time, and in the end he never does see the Oxus.

Byron's true passion is for architecture -I advise you to cultivate a knowledge of the squinch; it'll be useful for Scrabble if nothing else. Again the best I can do is to quote; this is about the Friday Mosque in Isfahan :

...But while the larger lacked the experience necessary to its scale the smaller embodies that precious moment between too little and too much, when the elements of construction have been refined of superfluous bulk, yet still withstand the allurements of superfluous grace; so that each element, like the muscles of a trained athlete, performs its function with winged precision, not concealing its effort as over refinement will do, but adjusting it to the highest degree of intellectual meaning.
All in all it's worth taking the journey with Byron and listening to his storytelling on the way. You wouldn't want to be on the sharp end of his tongue, though -even if he does dish it out fairly impartially.

Wednesday, 7 September 2011

Haskell - 7 languages in 7 weeks, Day 2

Haskell Day 2 Exercises

Write a Haskell function to convert a string to a number. The string should be in the form of $2,345,678.99 and can possibly have leading zeros.

As I mentioned in the last post, there was a better way to solve this than tail recursion.

strToNum :: String -> Float
strToNum(str) = let num = [x|x <- str,  (x >= '0' && x <= '9') || x == '.']
       in read num :: Float

This function uses a list comprehension to filter the string for valid characters and then uses read to convert the filtered string into a floating point number. As you can see it's very concise, and when you are used to the form, very readable. It says: num is all the valid characters (0-9 and '.') from the string str.

Write a function that takes an argument x and returns a lazy sequence that has every third number, starting with x. Then, write a function that includes every fifth number, beginning with y. Combine these functions through composition to return every eighth number, beginning with x + y.

stepList x n = take 5 [x,x+n..]

threeList x = stepList x 3
fiveList x = stepList x 5

eightList x y = zipWith (+) (threeList x) (fiveList y)

So, the lazy sequence is [x,x+n..]; to stop things running off into infinity I've used take to limit the lazy evaluation. Again, Haskell can be very concise -look at zipWith in eightList.

Use a partially applied function to define a function that will return half of a number and another that will append \n to the end of any string.

half = (*) 0.5

This is quite cunning (of Haskell). Inside Haskell all functions are, I believe, curried -that is, they take only one argument. As a bit of syntactic sugar Haskell supports infix operators, but you can force them to their curried form by surrounding the operator in brackets. Since the functions are curried they are quite happy to have a dangling second argument; in fact we have here a function that is 'multiply by 0.5'.

backAppend x y = y ++ x
append = backAppend "\n"

The same applies to the append function here, only we have had to define backAppend to get the arguments the way round we want them.

I'm going to leave the second set for now, and maybe come back to them at the end.

Tuesday, 6 September 2011

Haskell, where vs let in guards

I'm making a stab at learning Haskell, a functional programming language, and as part of it I'm going through the section in Seven Languages in Seven Weeks. One of the exercises is to convert a string representation of a number to a real (floating point) number, so "$2,345,678.99" becomes 2345678.99.
Here's my first stab :

import Char

strToNum :: String -> Float

sToN :: (Float, Int, String) -> Float
sToN (acc, dec_place, []) = acc
sToN (acc, dec_place, xs)
        | head xs == '$' = sToN (acc, dec_place, tail xs)
        | head xs == ',' = sToN (acc, dec_place, tail xs)
        | head xs == '.' = sToN (acc, 1, tail xs)
        | dec_place > 0 =
                let  newacc = acc + (fromIntegral (digitToInt (head xs)) )  / (10.0  ^ dec_place)
                in sToN (newacc, dec_place + 1, tail xs)
        | otherwise =
                let newacc = acc * 10.0 + (fromIntegral (digitToInt (head xs)) )
                in sToN (newacc, dec_place, tail xs)

strToNum "" = 0.0
strToNum(xs) = sToN (0.0, 0, xs)

It's a bit brute force and ignorance, but it works (I've just thought of a more elegant solution though). The point of this post is the use of 'let' rather than 'where' in the guard expressions. The guards (the '|' expressions) handle the various characters and states of the string, and the two that alter the accumulator define the new value via a 'let'. Initially I tried using a 'where' for this, but it seems that you can only have one per pattern sequence -this would seem to be because 'let' is a proper expression whilst 'where' is not.

BTW I also tried this with lambdas, but fell foul of the type system.

I quite like this pattern-matching approach in general -I have used it in Perl a lot- as it is quite easy to see each 'case' and the logic that goes with it.

Sunday, 14 August 2011

Want to Learn About Computers?

From scratch? Then get hold of Charles Petzold's book Code.

Unfortunately Code wasn't written when I was doing wire wraps on breadboards trying to build a half adder using BC109s (or whatever transistors we were using) -if it had been, the cybernetics weekend I spent as a teenager might have made a whole lot more sense.

Petzold starts right at the beginning with light bulbs and switches, spends a long time with relays, and ends up with those classic microprocessors the 8080 and the 6800. If you stick with him -not a hard task- you will understand how a modern computer works. I haven't quite finished it, and the book is restricted to classic von Neumann machines -there's no discussion of massively parallel architectures, RISC processors or data-flow engines, never mind quantum computers- but you can go on from here knowing all the basics and with the understanding to tackle more exotic designs.

The book isn't all hardware; it also covers numbering systems, text encoding, machine code and assembler at the lower end of abstraction, with a brief look at operating systems and languages at the higher end.

Monday, 28 March 2011

City Life

An example of London life that happened to me in the week.

I'm heading back to Liverpool Street to catch the train home -just another worker ant focused on the clock, when :

'Can I ask you a question'

Oh god, I think. I take a look at the questioner: no clipboard, that's a good sign; scabs on the face and swaying a bit -not so good. Still, I've got 5 minutes to spare; it might be quicker to play along with him.

'Go on then -if you're quick'

'Do you remember Dudley wossname -was in that film with that bird'

Ok, that came from nowhere. However, I'm of the vintage that Bo Derek made a big impression on (well, the male half anyway -and if the number of cornrow hairdos about on blondes at the time was any guide, the females weren't entirely immune).

'Uh, yes Dudley Moore'

'Who was it who was his partner, you know in the act, do you know?'

'Peter Cook' I reply.

'Brilliant mate -I had this bet with my mate that he was wrong and you've won me 20 quid'

'Who did he think it was then?'

'Peter O'Toole, wanker. I told him he was Dracula.'

Made me smile. And it's an excuse to put up a picture of Bo.

Wednesday, 9 March 2011

Turing got there first -again.

Alan Turing

I was doing some research into test driven development and came across Tony Hoare's paper The Emperor's Old Clothes. Hoare is a big name in the history of computer science, inventing Quicksort at an early age and then CSP (Communicating Sequential Processes) -I have had the misfortune to program in Occam (and Ada), which implement this; nothing wrong with the concept, or really the language, but Occam simulated on a vintage PC wasn't good.

Hoare's paper is well worth a read, both from a historical perspective and for the all too familiar view of projects going horribly wrong -in this case Algol 68, PL/1 and Ada. Helped by mandated use from the military, Ada did finally take off -but so did the Bristol Brabazon. The language failed to achieve the popularity hoped for and we all ended up using C++ for our sins (which must have been many).

Anyway, back to Turing. It seems he wrote a paper in 1949 (yup) entitled 'Checking a Large Routine', whose opening paragraph reads:
"How can one check a large routine in the sense of making sure that it's right? In order that the man who checks may not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows"
Sounds like the basics of Test Driven Development to me.

Sunday, 6 March 2011

Why I hate setters and getters (especially in PHP).

Whatever you think about object oriented programming (and the phrase ‘Emperor’s New Clothes’ has been known to pass my lips, although I’m not so dogmatic now) there’s a point where the syntactic sugar turns to syntactic saccharine -and that point is accessor methods.

Whilst I acknowledge the usefulness of encapsulation and limited visibility, I loathe having to write reams of setThis() and getThat() for every blasted variable I want to use. I was going through the Zend Framework tutorial and there’s a simple four-field model in there which generates eight methods; life is too short. The model already exploits __set and __get, which allow controlled access to hidden properties, so why not go the whole hog? Here’s a version that calls an accessor method if there is one (you might need to do some extra processing or formatting in it) but otherwise returns the protected variable.

class SomeClass {
    protected $_comment;
    protected $_created;
    protected $_email;
    protected $_id;

    public function __get($name)
    {
        $method = 'get' . $name;
        if ('mapper' == $name) {
            throw new Exception('Invalid guestbook property');
        }
        if (method_exists($this, $method)) {
            return $this->$method();
        }
        $attr = "_$name";
        if (property_exists($this, $attr)) {
            return $this->$attr;
        }
        throw new Exception('Invalid guestbook property');
    }
}

Instead of defining a setId() method and calling $someObj->setId(3), you don’t define the method and just write $someObj->id = 3.

Personally I’d probably turn this into an abstract base class to support the model classes, leaving the individual classes to supply specific accessor methods if needed -thus getting rid of reams of code that I’d need to maintain. There are times when you wouldn’t use this approach, but for the database mapping example given it’s a bit of a no brainer.
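For comparison, here's a hedged sketch of the same trick in Javascript using a Proxy: call a getX() accessor if one is defined, otherwise fall back to the hidden _x property. The field names and makeModel helper are made up for illustration, not part of the PHP example above.

```javascript
// A Proxy analogue of PHP's __get: custom accessors win,
// otherwise read the underscored "protected" field directly.
function makeModel(fields) {
    return new Proxy(fields, {
        get: function (target, name) {
            var method = 'get' + name;
            if (typeof target[method] === 'function') {
                return target[method]();
            }
            if ('_' + name in target) {
                return target['_' + name];
            }
            throw new Error('Invalid property: ' + String(name));
        }
    });
}

var entry = makeModel({
    _id: 3,
    _email: 'someone@example.com',
    getemail: function () { return this._email.toUpperCase(); } // custom accessor
});

console.log(entry.id);    // 3 -falls through to _id
console.log(entry.email); // "SOMEONE@EXAMPLE.COM" -custom accessor wins
```

As in the PHP version, the per-field boilerplate disappears and you only write an accessor when there is genuinely extra work to do.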

Why especially in PHP? Because it’s not an O.O. language, it’s a scripting language, I like the ability to use O.O. syntax when I think it adds clarity to the program and I like the fact that I don’t have to if I think it makes life harder. Additionally libraries like Zend Framework should make life quicker and easier, it is the point of them after all, and not slower, harder and more verbose.

BTW don’t let this put you off Zend Framework, it’s a good piece of work and I’ve built a few systems using it.

Tuesday, 15 February 2011

The Blind Watchmaker

by Richard Dawkins

The book that describes the biomorphs. It is much more than that: it aims to explain how a complex world can arise from blind chance and natural selection with no need for an 'Intelligent Designer'. I think that this is an excellent book; whether or not you believe in evolution, it explains concepts that are key in several areas. A prime example is complexity and how it can arise from simplicity in small steps. The relationship with probability is explored too, together with some estimates of how likely some outcomes, such as life in the universe, are.

Dawkins is a good writer with a clear style who uses good examples to illustrate his points. From bats, to crocodiles; from ants to eyes; he takes examples from across the natural world. Consider the eye, one of the claims of intelligent design is that something as complex as an eye couldn't have arisen from natural selection and an 'incomplete' eye is no good. This is patent rubbish, at some point in evolutionary history there were no eyes and then the ability to tell light from dark evolved -that's a useful thing to have, if you're a worm you don't really need any more than that. In addition there are eyes at different stages of completeness within nature -the nautilus has an eye like ours, but with no lens; our eye is wired up backwards - that of the octopus is wired forwards; the point is that you don't need a perfect eye, just one that is good enough and better than your competitors.

Which reminds me of the joke about the wildlife film makers who are watching a cheetah when it spots them and puts the team at the head of the menu. As it charges towards them, the sound recordist starts swapping his boots for trainers, and the cameraman points out that he still won't be able to outrun a cheetah simply because he's wearing Reeboks. 'Mate,' replies the recordist, 'I don't have to run faster than the cheetah -I just have to run faster than you.'

There is lots of other good stuff in here: genetic algorithms, cooperating genes, discussions around Lamarckism and Punctuated Equilibrium, genetic explosions and spirals, the tail of the peacock and convergent evolution. All in all very highly recommended.

Thursday, 10 February 2011

First Fix

A bicycle crash is only a matter of time.

First Impressions -I like it, it's like being a kid again: the pedals go round all the time, whack you on the back of the leg if you're pushing the bike, and you lose them going down steep hills. The bicycle itself needed some setting up of the saddle, seat post and handlebars, but now that is done it feels pretty comfortable. Compared to my tourer the Atlantis is quite wobbly and twitchy; I think that this is because the frame is a bit shorter.

The main difference to a normal bicycle is that you can't coast; even when you've just got on you have to keep your feet moving. Secondly, you can slow yourself down by resisting the pedals as they go round -sort of back-pedalling, but your legs still go forwards. Then there's the gears, or rather gear: being unable to change gears, or coast, means the pedals spin like the fabled dervish going downhill and that you have to get out of the saddle and stand on the pedals to go up. Once you trust the bike, up is easier than down -so long as it doesn't go on too long.

This should be good for getting the legs into shape -there's a plan, okay vague idea, to do the C2C this summer, and up north they have hills! And it is fartlek training for free.

My longest run so far is only 8 miles, I'd like to get to double this at some point, but any longer than that and gears are the way to go.

Overall it's a more physical ride and also a smoother one, as there's not the clunking gear change -probably especially noticeable on my tourer with its down tube shifters and mongrel drive train. Since your legs are always turning you are always thinking about pace, whether it's recovery, a sprint downhill to get you up the other side, or resistance to stop the bike running away with you. Cycling becomes more involving, and thus more fun.

Monday, 7 February 2011

Biomorphs and Javascript - a marriage made in purgatory.

Well it definitely isn't heaven, and it's not really hell (6502 assembly programming on a machine with no permanent memory and a dodgy power supply comes close).

I'm not sure I'm ready to devote a whole post to how crap Javascript is; I imagine that there are whole sites, probably complete universes, devoted to just that topic. But here's a handy hint: your variable isn't local unless you stick var in front of it, yet your program will run anyway, perhaps almost correctly. Having been corrupted by languages where you don't declare variables, just use them, I got some unexpected side effects with the Biomorphs -the variables became global, even though they were only ever used locally, inside a function scope. Enough, I've got to go and fill in the hole in the plaster where I was banging my head on the wall.

The biomorph program has had an overhaul and now produces, occasionally, things that look like biomorphs. The main changes have been to switch from polar to Cartesian coordinates and to introduce randomness. The coordinate switch allows the introduction of a 'gene' as a string of x and y coordinates, and having this gene means that it can be populated randomly, which gives us the potential for 'evolution'. Here's the guts:

function bio_morph() {
        this.gene = new Array(9);

        // Generate a random gene: gene 7 is the number of generations
        // and gene 8 is the stem length
        for (var i = 0; i < 7; i++) {
                this.gene[i] = Math.round((Math.random() * 20) - 10);
        }
        this.gene[7] = Math.random() * 10;
        this.gene[8] = Math.round(Math.random() * 10);
}

Multiplying by 20 and taking away 10 allows us to have negative numbers in our random sequence, so the creatures can grow up and down, left and right.

All that remains is to produce an array of the critters and allow you to choose the one to start new generations from.

As always take a look at the main biomorph page.

Thursday, 3 February 2011

KISS My Cog. Fixie Bicycle Conversion.

Atlantis Rising

Keep It Simple Stupid -about the only useful principle I learned at university. It applies to software, life -and bicycles. Fixed gear bicycles are the new black (or rust red), although not themselves new: a penny-farthing would have been fixed, and I remember getting smacked on the back of my legs by my pedals at a young age. Out of nostalgia, or trend following, I've been thinking about building a fixie; then a friend donated an old Dawes Atlantis with a Reynolds 501 frame, and off we go.

The main selling point for fixed gear bikes is simplicity. Only one gear, so no derailleur, shifters and associated nastiness; arguably you don't even need brakes (although this would not be legal in the U.K.) -so you get a simpler and lighter bike. Secondary points are that it makes you a better cyclist: you don't have to think about gear changes, your pedalling style is improved, and your legs get stronger. Since I have the family sparrow legs this can only be good.

KISS in action

Firstly some research: I can recommend the late, lamented Sheldon Brown and also Charlie the Bike Monger -he's more single-speed oriented, but offers good advice and is a good source of parts.

The original rear wheel on the Dawes had a floppy axle and an obsolete SunTour cassette, so I decided to take the line of least resistance and buy a new wheel with a flip-flop hub from Charlie. A flip-flop hub takes a track cog on one side, which is screwed directly to the hub with a counter-threaded lock ring (both supplied with the wheel); the other side takes a BMX style cog with an integral freewheel. The idea, from Sheldon, is to have an easier cog on the freewheel side and use this to get you home when the going gets a bit too tough. Since the wheel came with a 16 tooth cog I got a cheapo 18 tooth freewheel. Aside from being tight, I might need to find the right cog and chainring (that's the big cog attached to the pedals) combination by experimentation, so cheap cogs first. You also need rim tape for the wheel; I forgot this and live nowhere near a bike shop -the last pair of wheels I bought came with the tape installed- so put a note on your page Charlie, some of us are thick!
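Some of the guesswork in the cog and chainring experimentation can be taken out with a quick gear-inches calculation -a sketch where the 42 tooth chainring and 27 inch wheel are assumed example figures, not measurements from the Dawes:

```javascript
// Gear inches = wheel diameter (in inches) * chainring teeth / cog teeth.
// A bigger number means a harder gear.
function gearInches(chainring, cog, wheelDiameter) {
    return wheelDiameter * chainring / cog;
}

console.log(gearInches(42, 16, 27)); // fixed side (16t): 70.875
console.log(gearInches(42, 18, 27)); // freewheel side (18t): 63
```

Something in the high 60s to low 70s is often quoted as a sensible all-round fixed gear, which is roughly what these example numbers give.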

Whilst waiting for the bits to arrive I sorted out the brake cables, using a funnel of insulation tape to get oil down them: start off wrapping the tape tightly around the cable outer, then looser as you get to the end.

When the wheel arrived, cog installation was simple -just screw it on; the lock ring is similar but threads the wrong way. Next the chain. It is important that this is taut but not tight -there's no derailleur to take up the slack- and that the run between chainring and cog is straight. Luckily it seems to be straight on the inner ring of the Dawes, so all good. Measure up the chain for both cogs by offering it up, and remove any surplus links; I bought a half link to get the length just right. Then get the chain taut and nip up the chain side nut first, straighten the wheel, then the other side. Make sure that the wheel nuts are bastard tight: unfortunate things can happen to one's manhood if the chain comes off the cog.

Then it was on with new Aztec brake blocks (twice the braking area of the '50s clones that were on there), adjust the saddle -which was at a rather alarming angle- and off we go for a spin around the block. And spin you do: no freewheel, no coasting, your legs turn all the time, and you can even slow yourself down by back pedalling. Strange but fun, as the actress said to the lady-boy bishop.

I'll let you know how I get on.

Sunday, 30 January 2011

Watching the English by Kate Fox

Subtitled 'The Hidden Rules of English Behaviour' this book purports to tell us why we are like we are -if you're English that is, sorry I shouldn't have assumed that. We do apologise a lot, something to do with our social dis-ease and hypocrisy -see, it's true, I wasn't really sorry.

The main fun that comes from the book is comparing your own behaviour -and that of your friends, significant others and work colleagues- with the behavioural rules laid out in the book. Kate has cunningly played to the English preoccupation with class by giving codes of behaviour for the various classes under each section. The other half lets herself down with a liking for napkin rings and coasters.

She also attempts to put us into context by comparing us to other nations, for instance picking on our dislike (unless we're from Yorkshire -but the north is another country) of talking about money, and our ubiquitous use of humour.

The book is engagingly and amusingly written, but the style can pall and one can get fed up with being called a hypocrite and a social autistic, so it is probably best kept as a 'toilet book' -oops, 'lavatory book', my lower-middle class slip is showing.

Tuesday, 25 January 2011

Recursion -tree walkers

There was these tree walkers fell off a cliff. Boom, boom -splat!  I'll get my coat.

Much data in computers (including this web page) is stored in tree structures (or graphs -a tree is a particular kind of graph).
Tree Structure of an HTML Page

In the diagram above the <html> element is the parent of the <head> element and the <title> element is one of the children of that <head> element.

So how do you find elements in a tree? Say I want to do something to all the <p> paragraph tags (which are at different levels); the only way I can manipulate them is to check every element, see if it's a <p> tag, and then do what I want to do.

Recursion is a classic method of walking a tree elegantly and with minimal knowledge of the size and configuration of that tree. Recursive functions are ones that call themselves again and again to process data -but (and it's a big but, bigger than Mr Creosote's) they know when to stop. A simple algorithm to process <p> elements might look like:

    function find_p($element) {
        if ($element->name == 'p') {
            // do whatever we want to do to the <p> element here
        }
        while ($child = $element->get_child()) {
            find_p($child);
        }
    }

You would kick this off by calling it with the <html> element; it's not a <p>, so it won't do any processing, just get all of the <html> element's children in turn and call itself for each of them. For each of those children it will check if they are a <p> and then get each of their children....

If an element has no children then the function just returns, effectively going back up a level.

Hopefully you can see that the function will go down the tree left side first, so <html> -> <head> -> <title>; <title> has no children, so the function returns to the calling while loop and <head>'s next child <link> ...

At no point in the program have I had to know how many children an element has or how wide or deep the tree is.
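The same algorithm in runnable Javascript, using a hand-rolled tree of plain objects rather than a real DOM (the node shape and findTags name are made up for illustration):

```javascript
// Each node is { name, children }; findTags recursively collects
// every node with the given name, however deep it sits in the tree.
function findTags(node, tag, found) {
    if (node.name === tag) {
        found.push(node);
    }
    for (var i = 0; i < node.children.length; i++) {
        findTags(node.children[i], tag, found);
    }
    return found;
}

// A miniature version of the HTML tree in the diagram above.
var tree = {
    name: 'html', children: [
        { name: 'head', children: [{ name: 'title', children: [] }] },
        { name: 'body', children: [
            { name: 'p', children: [] },
            { name: 'div', children: [{ name: 'p', children: [] }] }
        ] }
    ]
};

console.log(findTags(tree, 'p', []).length); // 2 -one <p> at each depth
```

Note that the function never asks how wide or deep the tree is; the structure drives the traversal.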

You can see recursion in action in the Biomorph program, where the method draw_morph() calls itself recursively to create the branching morphs.

NB in the real XML/HTML world you will probably find that the hard work is done for you; if I wanted to process all the <p> tags in a document I'd perhaps use XPath. If you do need to parse an XML document element by element (and I've had to) then recursion would be the way to go.

A word of warning: recursion can be resource heavy. All those function calls will reserve resources that won't be freed up until the functions return; if you have a lot of depth and are passing around a lot of data the effect can be significant and your computer could freeze.

Sunday, 23 January 2011

Biomorphs - software genetics in HTML5 and Javascript.

'Pull the switch Igor -we're frying tonight!' Today we're going to create artificial life MWHAHAHA! Well, not really, but we are going to have a play with genetics and software development -less fun than lightning and a methane atmosphere, but quicker and easier to arrange than a suitable planet.

Example Biomorphs

Richard Dawkins describes biomorphs in his book The Blind Watchmaker (chapter 3). Biomorphs are shapes that change by mutation, and it is possible to build up creatures by selecting from the generated shapes. Pretty much a game really, although the book does carry a good description of n-dimensional spaces (9 in this instance) -maybe I'll finally get it.

As an exercise in brushing up my Javascript -and playing with HTML5 I am going to have a go at creating a version of his concept. The reason for Javascript and HTML5 is that I can host a page with the program on this blog and not have to set up a server somewhere.

In Dawkins' original, a biomorph is a tree structure with 9 genes controlling its growth; these genes control things like stem length, branch angle, number of iterations of breeding &c. There isn't an algorithm as such in the book, so this will be an 'Agile' (or suck it and see) development: I'll basically solve small problems and hope that I can bolt them all together for an overall solution.

Finally, as Biomorphs are tree structures the natural algorithm to draw them will be recursive, it'll be interesting to see if Javascript and the browsers can handle that.

The program will appear on a separate page available from the top menu, our first design decision!

Thursday, 20 January 2011

CodeIgniter PHP Framework.

As part of a job hunt, I am having to take a look at CodeIgniter, a PHP development framework. The idea of a framework is that it saves you time by already implementing a lot of the basic code that you would otherwise write for every application, yet doesn't constrain you as much as using a CMS like Drupal would.

I have looked at several frameworks in the past and developed projects using a couple of them, Zend and Rails. Zend is quite good in that you can treat it as either a library or a full blown MVC framework. Rails is also good for prototyping sites and web developments -but if you want to get a fairly vanilla website up and running quickly then a CMS like Drupal would probably be first choice (perhaps followed up with a bespoke/framework version of the site for version 2).

Anyway, CodeIgniter seems to be popular and bills itself as 'light weight' -not something you could accuse Zend of. It's an MVC system that says you don't need a templating language or, necessarily, a database!

I'm trying this on Fedora Linux running under VMware on my laptop. Interestingly the minimum PHP requirement is 4.3.2, although it runs under PHP 5 -which has been out for several years now.

Initial installation was pretty simple: unzip the file in your web root and edit the config as per the instructions (N.B. the config file path should be system/application/config/config.php from the installation folder). That's it -you get a welcome page. Obviously there is no database integration at this stage.

What do you get?
Initially a View and a Controller -but no Model; fair enough, we haven't an app yet. Take a look at the feature list: there's nothing there that's particularly outstanding, it looks like an MVC system with some bog standard libraries, most of which seem to be available in native PHP or PEAR/PECL; perhaps that's why there seem to be quite a lot of articles on using bits of Zend Framework within CodeIgniter.

Out of the box the URLs all contain index.php, e.g. http://localhost/CodeIgniter/index.php/blog/. I don't like this, but it is easy enough to remove using an Apache rewrite rule.
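Something along these lines in a .htaccess file in the CodeIgniter folder should do it -a commonly quoted sketch rather than an official recipe, and it assumes mod_rewrite is enabled and AllowOverride permits it:

```apache
RewriteEngine On
# Pass any request that isn't a real file or directory to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
```

You then set the index_page entry in config.php to an empty string so generated links drop index.php too.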


Controllers

These seem to work much as one would expect, and support a folder structure which should make for good code organisation. For instance you should be able to have http://site/blog/post and http://site/blog/list in different files in a directory called blog, rather than lumping them into one big blog controller. Parameters are passed raw rather than as name-value pairs (as far as I can see). There are pros and cons to this, the pro being shorter URLs, the con being that it's less obvious from the URL what the function arguments are. Given the dynamic (ok, chaotic) nature of most websites I can see bugs arising. Oh yeah -they live in the controller folder.


Views

Views live in the view directory! Whilst there is a templating language, this seems to be regarded as unnecessary and, from the manual, PHP is used directly. Fine by me, PHP is a templating language. It seems that views have to be called explicitly. Boring -I like a default that says if I have a 'post' controller I'll call a 'post' view, if it exists; but I realise that this is just syntactic sugar. Beyond that it's just standard PHP front end programming; data can be passed from the controller to the view as an array (of however many dimensions), just like Smarty, so that's all good. You can also call multiple views from one controller function, thus mimicking Zend layouts, but this is starting to push display logic into the controller, breaking the MVC pattern.

Models

These seem to be a bit of an afterthought and are described as optional. As an ageing database hacker, the model is always where I start my development and design from, as it embodies the essence of an application and forces you to describe the things that you are dealing with. So one black mark there. Models implement a cut-down Active Record pattern; tables aren't implemented as classes (as per Rails and Zend). I think this is a shame, as you lose some of the name-based goodness that you get with the one table/one class model (meta model?). On the other hand there is a quite steep learning curve with the Rails/Zend version when it comes to more complex queries.

Models are abstracted from the underlying database engine -as it's PHP4 compatible I'm guessing not through PDO. You may still need to understand the underlying database though, e.g. transactions aren't going to work on a MySQL MyISAM table.

A couple of nice features: method chaining with PHP5 to produce readable code:
$this->db->select('title')->from('mytable')->where('id', $id)->limit(10, 20);
and cache control that allows you to cache part of a query, as well as result caching.
Transactions, meta data and adapter function calls are all supported.
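Method chaining works because each call returns the object itself. Here's a hedged Javascript sketch of the idea -a toy query builder of my own, not CodeIgniter's actual implementation:

```javascript
// Each method pushes a clause onto a list and returns `this`,
// so calls can be strung together fluently.
function Query() {
    this.parts = [];
}
Query.prototype.select = function (cols) { this.parts.push('SELECT ' + cols); return this; };
Query.prototype.from   = function (tbl)  { this.parts.push('FROM ' + tbl); return this; };
Query.prototype.where  = function (c, v) { this.parts.push('WHERE ' + c + ' = ' + v); return this; };
Query.prototype.limit  = function (n, o) { this.parts.push('LIMIT ' + o + ', ' + n); return this; };
Query.prototype.toString = function ()   { return this.parts.join(' '); };

var sql = new Query().select('title').from('mytable').where('id', 5).limit(10, 20);
console.log(String(sql)); // "SELECT title FROM mytable WHERE id = 5 LIMIT 20, 10"
```

The same trick works in any language with objects; in PHP each method just ends with return $this.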

And there's more!
There are the usual helper, plugin and library mechanisms. There are limited hooks to allow you to call your own code within the flow of control through CodeIgniter. There's an AutoLoad mechanism, although the description sounds more like a pre-load, so not on-demand like Zend. It would seem that scaffolding has been dropped. You can override the routing and perform output caching.

What do I think?
That I'm knackered now. I think I might like CodeIgniter; it seems less heavyweight than the Zend Framework and easier to get to grips with, yet still retains most of the essentials. If you are desperate for a bit of Zend (maybe Lucene Search) my guess is that you could still use it from within CodeIgniter (although I think that I'd run Solr and write a class to talk to that -if there isn't one already).

So CodeIgniter, my first framework, and maybe you won't ever grow out of it.

Sunday, 9 January 2011

Pumpkin Soup

I normally try and grow a few pumpkins on the allotment for Halloween, and any spares can be made into soup for Guy Fawkes Night and frozen for the rest of the winter. This year I was able to use the stock from boiling up a gammon (sometimes it's too salty) -I generally make this with apple juice as well as water, and it made the soup great.

Recipe :
1.5 lbs (675g) pumpkin flesh
2 large onions
8oz carrots
1 can chopped tomatoes
juice of a lemon
4oz red lentils
3.5 pints stock
pepper, nutmeg and salt to taste
0.5 pint milk (optional)

Chop and fry the onions and carrots until the onions are soft, add the rest of the ingredients (except the milk), bring to the boil and then turn down the heat, put the lid on the pan and simmer for half an hour or so until the veg and lentils are soft. I then blend it in the pan using one of those wand blenders. If you are planning to eat the soup straight away then you can add the milk now (if wanted, it makes the soup taste smoother) and reheat; otherwise you can leave it until you are ready to eat -a dash in a microwaved mug full works just as well.

Saturday, 8 January 2011

Cooking Stock.

I'm tight, so I really like making stock as a by-product of cooking other things. Generally I make it from the remains of chicken, or turkey at Christmas, and you can sometimes get away with ham too, although it can be too salty. Anything with bones will do though, except maybe lamb (soon after I wrote this I came across a recipe needing lamb stock).

Stock recipe, makes around 1 litre/2 pints:
  • Chicken carcass
  • 1 Carrot, cut into 2 or 3 pieces.
  • 1 onion, quartered. (Or leek tops if you've been cooking leeks)
  • 3 cloves
  • 4-6 peppercorns
  • 3 cardamom pods (optional)
  • herbs -parsley, thyme, bay, rosemary, anything that you've got. (Parsley stalks are good too)
Break up the carcass, place in a pan -it should be a snug fit, add the above and then water to cover. Bring the stock up to a simmer on a low heat and keep it on a slow simmer for a couple of hours. I stick the oven gloves on the pan lid as a bit of insulation. Turn the ring off and leave to cool. I decant the stock into jugs and chill it so that any fat solidifies on the top and can be scraped off.

That's it, either use it or freeze until ready, great for soups, casseroles, risottos &c.

Friday, 7 January 2011

Experiments With A Simple Evolutionary Algorithm (2)

Why is this interesting?

It's the way that the problem is solved. Normally with a program you work out what the problem is that needs to be solved and write code to solve that specific problem. With this algorithm less understanding is needed, you just need to work out if one child is 'fitter' than another, everything else is solved randomly.

Mama doesn't know best

In my original version of the algorithm I allowed cloning: if no child was better than the parent then the parent would be passed back to start the next generation. When I took this out, it seemed to make little or no difference to the number of generations taken to solve the problem.
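For the curious, here's a hedged reconstruction of the sort of algorithm described -the target phrase (borrowed from Dawkins' famous weasel example), the 5% mutation rate and the brood of 100 children are my own assumptions, not the original program's:

```javascript
var TARGET = 'METHINKS IT IS LIKE A WEASEL';
var CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ';

// Fitness = how many characters match the target phrase.
function fitness(s) {
    var n = 0;
    for (var i = 0; i < s.length; i++) {
        if (s.charAt(i) === TARGET.charAt(i)) n++;
    }
    return n;
}

// Each character has a small chance of flipping to a random character.
function mutate(s, rate) {
    var out = '';
    for (var i = 0; i < s.length; i++) {
        out += Math.random() < rate
            ? CHARS.charAt(Math.floor(Math.random() * CHARS.length))
            : s.charAt(i);
    }
    return out;
}

// Breed 100 children per generation and keep only the fittest;
// with cloning on, the parent survives if no child beats it.
function evolve(allowCloning) {
    var parent = mutate(TARGET, 1); // rate 1 = a completely random start
    for (var gen = 1; gen <= 10000; gen++) {
        var best = allowCloning ? parent : mutate(parent, 0.05);
        for (var c = 0; c < 100; c++) {
            var child = mutate(parent, 0.05);
            if (fitness(child) > fitness(best)) best = child;
        }
        parent = best;
        if (parent === TARGET) return gen; // solved
    }
    return -1; // did not converge within the cap
}
```

With cloning on this converges in a few hundred generations; turning it off usually makes surprisingly little difference, since the best of 100 children is almost always at least as fit as the parent.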


This is a very simple genetic algorithm, it relies solely on mutation, rather than sexual breeding, and it only breeds from one, the very best, child.

I haven't tried it yet, but I suspect that it would be easy for the program to vanish up a blind alley.

Say that, instead of a phrase, the goal was to solve a maze, or to find a route on a road map between two points, the fitness test would probably be distance from the goal. In either of the two examples the test could easily drive the program into a dead end -and it has no way of backing out of it.

How to get out of it? What happens in life? In short I haven't a clue. A few  possible approaches would be :

  1. Relax the breeding algorithm to let more than the very best breed, perhaps by having multiple children, their number determined by fitness.
  2. Sex (imagine Clarkson saying it): this would shake things up a bit.
  3. Backtracking -valid in AI (and politics) but maybe not in evolution.
  4. Competition, more starting places, so more routes to the goal.
I may try some of these out, but I want to play with Biomorphs in HTML5 as well -oh, and find a job.

Tuesday, 4 January 2011

The Fabric of Reality

by David Deutsch

This is a frustrating book, I feel that there is some good stuff in there, but that the arguments aren’t that well made.

Deutsch’s theme seems to be that we can construct a theory of everything from four current scientific theories: the Turing principle of a universal computer, the multiverse version of quantum physics, Karl Popper’s view of knowledge and Dawkins’ update of Darwinian evolution -and the greatest of these is quantum theory. The argument is that all these theories can be related, and that as quantum theory is the most basic -in terms of physics and mathematical description- it is the most important. The other three theories are emergent, that is they can’t be connected directly, mathematically, to the base theory in the way that, say, large parts of chemistry can be, yet they still provide good working explanations of the universe (multiverse!) as it is.

That word explanation is key to the book, he explains (following Popper) that all scientific theories are explanations of problems that we use until we find a better one. I struggled with the word problem, but if I read it more as a conundrum, or define it as something that needs an explanation it helps. An explanation is deemed to be good if it is the simplest one to fit all the known facts, reference William of Occam. So whilst we can describe planetary motion in terms of angels moving heavenly bodies about the earth so as to appear that the earth rotates about the sun, it’s much simpler just to treat the earth as moving around the sun -which we would have to, angels or no- than to worry about the motives of the angels as well.

It is also central to the book that we should treat explanations as serious and true until we find a problem with them that needs a new explanation. So Newtonian mechanics was believed in and treated as true, and the world advanced using Newton for two or three centuries until problems arose (such as the orbit of Mercury being slightly different to that predicted) that required a new explanation. That explanation was General Relativity. Deutsch argues that this is not the case with his chosen theories: although they are the best explanations that we currently have, they are not taken seriously, at least in part because we just don’t like them. Quantum theory is downright weird -you either have to believe in the Copenhagen Interpretation, that the world changes because you look at it, or the multiverse theory, where new universes are created at every decision point: that damn cat is dead and alive. The Selfish Gene is a cold and amoral theory and doesn’t need a god. Not a worry when you’re talking about rocks and atoms, but this is personal and we have emotional problems dealing with it. The Turing principle is fine -but largely ignored; we’re too busy playing with computers to think much about them, and perhaps the same is true of Popper, although it seems pretty similar to the scientific method as I was taught it in school.

So what do I think? It’s not very coherent -perhaps because three of the theories are emergent, so we can’t formally tie everything together. There are links there between the theories, obvious ones between Turing and Dawkins, and computers are, in a limited sense, quantum devices anyway. So worth reading, lots to think about, but for me it lacked a defined central argument, and some of the logic chopping seemed to get close to ‘All elephants are grey, the battleship is grey, therefore the battleship is an elephant’; although I do realise that this is because we are supposed to assume the four theories to be absolutely true, dream up ideas around that assumption, and then test those ideas.