
Notes on Code, Design and Faith, by Fabian Tamp.

A Tidy Little Towers of Hanoi Implementation in C++

I’ve got an interview coming up in the next couple of weeks for a fairly large software company. Something that’s regarded as a bit of a classic problem is the Towers Of Hanoi puzzle. The story goes something like this:

There is a history about an Indian temple in Kashi Vishwanath which contains a large room with three time-worn posts in it surrounded by 64 golden disks. Brahmin priests, acting out the command of an ancient prophecy, have been moving these disks, in accordance with the immutable rules of the Brahma, since that time. According to the legend, when the last move of the puzzle will be completed, the world will end. (Wikipedia)

The “immutable rules of the Brahma” are:

  • Only one disk may be moved at a time.
  • Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
  • No disk may be placed on top of a smaller disk.

You can actually buy these as children’s toys, and the thing is, kids start to pick it up pretty quickly. There’s a pattern to it, and there’s only one way of doing it, so it’s something that’s easy to learn intuitively. It’s a little bit harder to codify, however. Here’s the solution I came up with:

  1. Start with all of the discs on pillar A.
  2. To move all the discs to pillar B:
    1. Move the stack of discs less the bottom disc to pillar C
    2. Move the last disc to pillar B
    3. Move the stack of discs on pillar C back to pillar B.

You’ll notice that this is a recursive solution – it works out to only be a few lines of code. Here’s what I came up with:

//moves n discs from stack 1 to stack 2, using stack 3 as an intermediate.
template <typename T>
void MoveDisksHanoiStyle(Stack<T>& s1, Stack<T>& s2, Stack<T>& s3, size_t n) {
    // base cases.
    if (n == 0)
        return;
    if (n == 1) {
        s2.push(s1.pop());
        PrintHanoiTowers<T>();
        return;
    }
    MoveDisksHanoiStyle(s1, s3, s2, n - 1);
    s2.push(s1.pop());
    PrintHanoiTowers<T>();
    MoveDisksHanoiStyle(s3, s2, s1, n - 1);
}

Whilst the algorithm wasn’t too bad, I found setting up the infrastructure fairly difficult. I was using my own custom Stack implementation (another practice question), and printing a stack is either destructive and O(n), or an O(2n) operation (copy all the elements to another stack, then copy them back). So, I needed a copy constructor that copied the linked list internal to the stack. Headache! The other little bit of trickery here is the PrintHanoiTowers<T>() function: because we’re swapping s1, s2, and s3 on each recursion, we can’t just print s1, s2, and s3 per se – the parameter names don’t consistently refer to the same pillars. Here’s the function:

#include <iostream> // cout, endl
#include <iomanip>  // setw, setfill
using namespace std;

template <typename T>
void PrintHanoiTowers(Stack<T>* s1_save = 0, Stack<T>* s2_save = 0, Stack<T>* s3_save = 0) {
    static Stack<T>* s1_ptr = 0;
    static Stack<T>* s2_ptr = 0;
    static Stack<T>* s3_ptr = 0;
    if (s1_save) s1_ptr = s1_save;
    if (s2_save) s2_ptr = s2_save;
    if (s3_save) s3_ptr = s3_save;

    Stack<T> s1(*s1_ptr);
    Stack<T> s2(*s2_ptr);
    Stack<T> s3(*s3_ptr);

    while (s1.count() || s2.count() || s3.count()) {
        size_t s1c = s1.count();
        size_t s2c = s2.count();
        size_t s3c = s3.count();
        if (s1c >= s2c && s1c >= s3c)
            cout << setw(4) << s1.pop();
        else
            cout << setw(4) << "";
        cout << " ";
        if (s2c >= s1c && s2c >= s3c)
            cout << setw(4) << s2.pop();
        else
            cout << setw(4) << "";
        cout << " ";
        if (s3c >= s1c && s3c >= s2c)
            cout << setw(4) << s3.pop();
        else
            cout << setw(4) << "";
        cout << endl;
    }
    cout << setfill('-') << setw(14) << "" << setfill(' ') << endl;
}

What we’re doing here is:

  • Saving pointers to s1, s2, and s3 when we call with non-default parameters, and then
  • Copying the stacks on every call,
  • Then destructively removing and printing elements from all the stacks.

You can find my full working code for this solution (including a linked-list implementation, and a stack implementation) at my github exercises repo.

IPCTK: Run Code Out of Process, Use Like a Normal Object

Multithreading and the Singleton Pattern

In December of last year, as part of my undergraduate thesis, I developed a piece of software that statistically extrapolated electrical network data. This software required a lot of simulations – I needed a year’s worth of data at 15 minute intervals.

Doing the math:

365 days * 24 hours/day * 4 measurements/hour = 35040 simulations

Each simulation took 2.5 seconds to run, and so those 35040 simulations were going to take just over a day to run.

I was at a stage about a week out from the end of the project, and a spare day just to run simulations wasn’t really viable.

So, I did what any programmer would do – I tried to multi-thread my application.

The simulator I was using, however, wasn’t particularly keen on the idea – turns out, the API implements a singleton pattern. Just for the record – it was not an appropriate choice in this situation (I’m not really sure it’s ever a great choice actually).

If I was going to multi-thread this application, I was going to have to multi-process it.

Inter-process Communication

The basic procedure is:

  • Spawn a child process (in this case, the simulator)
  • Connect some anonymous pipes between the child and parent processes
  • Use stream-based I/O for communication between processes.

That’s fine, but it’s a lot of boilerplate code - what if there was a way to automate it?

Aspect-Oriented Programming with PostSharp

PostSharp is a great developer tool, based on the idea of applying attributes to .NET code that somehow modify the behaviour of the attribute target. I wondered if it was possible to write a ‘RunOutOfProcessAttribute’ that could just be slapped on top of a class to force it to run in another process?

Turns out it is: PostSharp has an aspect (MethodInterceptionAspect) that pulls the method body out of a method, replaces it with your custom code, and then allows you to call the method body later.

For our purposes, what we need to do in that custom code is:

  • Test to see if this code is running in-process or out-of-process
  • If it’s running in-process, forward the call to our external process
  • If it’s running out-of-process, run the code, and forward the result back to the calling process.

Use Procedure

The basic use procedure is:

  1. Add a new Exe project to your solution.
  2. Add classes to the Exe.
  3. For each class that you want to run out-of-process:
    1. Mark the class with the [RunOutOfProcess] attribute.
    2. Make sure the class implements IDisposable.
    3. Make sure that no initialisation logic is in the constructor – instead, just put in a call to an Init() function with the same parameters as the constructor (this is a workaround for a PostSharp quirk).
    4. Mark your Init() function with the [Initializer] attribute.
  4. In the Exe entry point, simply add a call to IPCTK.OutOfProcessHandler.Loop(arg0, arg1);, where arg0 and arg1 are the first and second command-line parameters respectively.

Then, just add a reference to your new Exe project and start using the classes. Any classes marked with [RunOutOfProcess] will be… erm, Run Out Of Process.

A Couple of Useful Bits and Pieces

Attribute Targeting

MethodInterceptionAspects apply only to methods, not to classes. To ensure that we can put [RunOutOfProcess] on classes and obtain the desired behaviour, we define the RunOutOfProcessAttribute as follows:

[Serializable] //Required by PostSharp
[MulticastAttributeUsage(MulticastTargets.Method)] //target all methods of class...
[AttributeUsage(AttributeTargets.Class)] //...but only allow the attribute to be applied to the class
public sealed class RunOutOfProcessAttribute : MethodInterceptionAspect

We then add the following CompileTimeValidate hook, which silently ignores private/protected member functions:

//Applies the attribute only to:
// classes implementing IDisposable,
// only on public methods,
// or on methods marked with [Initializer]
public override bool CompileTimeValidate(MethodBase method)
{
    if (method.DeclaringType.GetInterface("IDisposable") == null)
        throw new InvalidAnnotationException("Class " + method.DeclaringType + " must implement IDisposable");

    if (!method.Attributes.HasFlag(MethodAttributes.Public) && //if method is not public
        !MethodMarkedWith(method, typeof(InitializerAttribute))) //and is not marked [Initializer]
        return false; //silently ignore.

    return true;
}

Serialisation

IPCTK works by serialising/deserialising the arguments to and return values from methods and sending these along anonymous pipes. As such:

  • The toolkit is most useful when the ratio of expected compute time to input/output size is high.
  • If anything ever needs to be optimised, it’s the serialisation.

NetSerializer would probably offer a good performance improvement, at the expense of being unable to serialise implementers of ISerializable. Maybe I’ll look into this in a future version.

More Info

For more information, take a look at the IPCTK project page on github.

NuGet: Adding Install Scripts to Project-based Packages

ElecNetKit is bundled into a set of NuGet packages in order to simplify the setup procedure. The ElecNetKit.Engines.OpenDSS package, however, requires the OpenDSS COM component to be installed in order to run.

Rather than leave my users hanging without any idea as to why they couldn’t use the OpenDSS engine within ElecNetKit, I added a PowerShell script that notifies the package installer if OpenDSS is not installed:

try {
    $obj = New-Object -ComObject OpenDSSengine.DSS
}
catch {
    $null = [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
    $yesno = [System.Windows.Forms.MessageBox]::Show(
        "It looks like you don't have OpenDSS installed. Head over to " +
        "http://sourceforge.net/projects/electricdss/ and grab a copy?", "Hey!", 4)
    if ($yesno -eq "YES")
    {
        start 'http://sourceforge.net/projects/electricdss/'
    }
}

NuGet can run PowerShell scripts upon package installation, so long as they’re located inside the package’s ‘tools’ directory. When you’re dynamically generating packages from project output, however, it’s not immediately clear how to add such files.

If you take a look at the NuGet.exe source file Commands/ProjectFactory.cs, at about line 155:

// if there is a .nuspec file, only add content files if the <files> element is not empty.
if (manifest == null || manifest.Files == null || manifest.Files.Count > 0)
{
    // Add content files
    AddFiles(builder, ContentItemType, ContentFolder);
}

Turns out you can just add the <files> element to your .nuspec file and the files get included along with the project outputs. Here’s the relevant fragments from my .nuspec:

<files>
    <file src="NugetScripts\*.ps1" target="tools"></file>
</files>
</package>

On a side note, open-source tools are the best!

Git: Undoing a Fast-forward Merge When You Should’ve Just Used a New Branch

The Problem

Here’s the scenario:

I had a git repo that looks like this:

A-B         master
   \
    C-D     *develop

I was on the develop branch. Here’s the foolish thing that I did:

  • I forgot that develop differed from the master branch,
  • Made a change on develop, committed it, and merged it into master, and
  • Pushed the change to my remote (called publish).

There were no changes after B on the master branch, and as such, when I merged, git did a fast-forward. My repo now looked like this:

A-B-C-D   master, develop, remotes/publish/master, remotes/publish/develop.

I wanted to keep develop at D, but revert master to B, and then merge the changes made at D into master.

What I Should’ve Done to Start With

What I should’ve done was:

git checkout master     #change to `master` branch
git checkout -b hotfix  #make a new branch called `hotfix` from `master`
#make file changes
git commit              #commit the `hotfix`
git checkout master     #switch back to `master`
git merge hotfix        #pull in the hotfix changes
git checkout develop    #switch to `develop`
git merge hotfix        #pull in the hotfix changes
git push publish        #push everything to my remote.

Resulting in a diagram that looks like this:

    D--     hotfix
   /   \
A-B---- E   master
   \    |
    C---F   *develop

Fixing the Mistake

Turns out the solution wasn’t really that hard, but git can be really scary if you’re not sure what you’re doing. The develop branch is already in the right spot, so we just need to:

git checkout master         #make sure we're on the `master` branch
git reset --hard sha-of-B   #move `master` back to revision `B`, deleting all other changes in the working directory.
git push publish --force    #because we've reverted some changes, we need to force the remote to accept the reversion.

That reverts the master branch to before the merge. To then pull in the changes made in the hotfix:

git cherry-pick sha-of-D    #pull in only the changes made in revision D.
git push publish

And we’re done!

Thanks to Simon Boudrias for his help on this one. See the StackOverflow question here.

Rethinking the `static` Keyword in Object-Oriented Programming (OOP)

abstract static

A friend lamented to me last week about the absence of abstract static members from class definitions in Java:


god damn why can’t you have abstract static methods in java. #sometimesstupidlanguage

— Samuel Killin (@SamuelKillin) December 13, 2012

@samuelkillin When would you need to do that?

— Fabian Tamp (@capnfabs) December 13, 2012

@capnfabs eg: superclass Car with abstract method getName(). Shouldn’t have to instantiate a Porsche to call Porsche.getName()… any soln?

— Samuel Killin (@SamuelKillin) December 13, 2012

@samuelkillin Why do you want to call Porsche.getName(), if you don’t have an actual Porsche?

— Fabian Tamp (@capnfabs) December 13, 2012

@capnfabs All Porsche’s have the name “Porsche” – it’s a property of the type, not a particular instance.

— Samuel Killin (@SamuelKillin) December 13, 2012


Turns out you can’t do it in C# either. The common understanding is that the absence of such a feature from these languages is a limitation of the language implementations (Java, C#). I’ve come to the realisation that there might be a very good reason why you can’t have abstract static members in these languages, and I think it’s due to a problem with the way we understand the static keyword.

I think we might be thinking about the static keyword wrongly. The static keyword is used for instance-independent functions and data. Instance-independence, however, doesn’t really make sense in an object-oriented context. I’ll explain this in a second, but fundamental to understanding it is an understanding of the way that programmers view class definitions.

What do class definitions really represent?

The tweet presented at the start of this article perfectly represents the way that many programmers think about class definitions. There are two main components here:

  1. Class definitions represent a type of object, and
  2. A class definition defines a set of properties common to all objects of that type.

With such an understanding, it’s easy to see where the need for an abstract static member comes from. Like in the Porsche example earlier, sometimes, each subclass of a class will share a common property, and this property should be implemented to be common across all instances of the subclass.

The difficulty in this understanding is that it’s very nearly accurate. What I think a lot of programmers get wrong when it comes to OOP is that class definitions do not represent a type of object, but represent a contract of characteristics that each instance is guaranteed to fulfil.

In the same way that classes implement interfaces, instances (objects) implement class definitions. In their purest form, class definitions specify specific data and functions that class instances will hold – no more, no less. They make no guarantee of type or commonality between instances of the same class.

This is wholly consistent with the notion of classes as objects – our superclass Car simply requires all instances to fulfil the contract of having a Make and Model, and cares not for the implementation details, including whether there are subsets of Cars with common Makes or Models. Asking the superclass to understand this by marking its members with abstract static requires a tighter coupling than should be used in such scenarios.

Implications for the static keyword

If a class definition is simply a contract of characteristics, or an interface that instances are expected to fulfil, then the notion that static members represent common properties of the type is wrong.

What are static members actually for, then?

I’d argue that static members are simply a convenience to the programmer, somewhat divergent from true object-oriented design. In my own experience, I’ve only ever used static members when I’ve somehow needed to manage or coordinate multiple instances of a class. Take a look at the following class definition, defining a member contract for an animal on a farm (example in C#):

class Animal {
    String _uniqueID;
    public String UniqueID {get { return _uniqueID; }}

    public Animal() {
        _uniqueID = getUniqueID();
    }

    public static HashSet<string> _uniqueIDs;
    static Animal() { //static constructors can't take access modifiers
        _uniqueIDs = new HashSet<string>();
    }
    public static String getUniqueID() {
        //code to generate ID and check it's not already in use,
        // i.e. in _uniqueIDs
    }
}

Our Animal class is split into two parts – the non-static members define the actual data pertaining to the Animal contract, and the static members simply manage the integration of the Animals within their broader context.

You could just as easily express the same functionality like this:

class Animal {
    String _uniqueID;
    public String UniqueID {get { return _uniqueID; }}

    public Animal(String UniqueID) {
        _uniqueID = UniqueID;
    }
}

class Farm {
    public static HashSet<string> _uniqueIDs;
    static Farm() { //static constructors can't take access modifiers
        _uniqueIDs = new HashSet<string>();
    }
    public static String getUniqueID() {
        //code to generate ID and check it's not already in use,
        // i.e. in _uniqueIDs
    }

    public Animal MakeAnimal() {
        return new Animal(getUniqueID());
    }
}

Using this architecture, Animals are still guaranteed to have unique IDs within their broader context – Farm objects – without having coordination logic embedded within the Animal class.

If you’re not expressing your class in this format, you’re probably violating the Single-Responsibility Principle – you’ve simultaneously got a class that’s used for SomethingOrOther-ing, and the static aspects of that class that coordinate SomethingOrOthers.

Here’s a challenge for you – try re-writing the static keyword out of some of your old code. It may take a little getting used to, and it’s certainly not always necessary, but it will be helpful for understanding that static is a convenience keyword only, and not an integral part of Object-Oriented design.

A final note on Porsches

As for our opening example – “all Porsches have the name ‘Porsche’” – I disagree; you’re thinking about classes wrong. “Porsche” is the make of the individual car, and the fact that it is a value common to a subset of Cars should merely be a coincidence to the Car superclass. I’ve got no problem with thinking about this specific issue without the use of the static keyword, with:

class Porsche : Car {
    public override string GetMake() {
        return "Porsche";
    }
}

Information Technology, Data and Context

Information Technology – it’s an awfully un-glamourous pair of words. When people think of IT, they think of business applications that don’t really work as well as you’d expect them to, database administration, and ugly-looking spreadsheet-y things on badly designed websites.

I’m moving into a software development job next year, and I so badly want to rush to tell people that IT is not what I do. The thought makes me cringe, and I’m not sure why. It’s not the stereotypes (ok, it’s a little bit the stereotypes); it’s a deep desire to be known for doing something that’s meaningful.

It’s surprising that information technology makes me cringe because information is awesome! Information helps us make decisions. If you told me it was going to rain tomorrow and I somehow remembered to pack an umbrella, hooray! Information saves the day! If I told you that a guy on the street outside was giving away iPhones, and he was, woohoo! Information! If you call the fire brigade, and tell them exactly where the fire is, and they turn up in time, boo-ya! Information!

So why does information technology suck so hard?

Information ≠ Data

The problem is that the phrase ‘Information Technology’ has somehow come to refer to data technology.

Let me stress:

Data is little bits and pieces of truth (hopefully). Tiny little factoids, interesting at best, useless at worst.

Information is relevant data: data that is useful for making decisions. As such, all information has context, and needs to be communicated effectively. Without fulfilling these two criteria, it’s just data.

Information has Context

Information has context and gets interpreted in context. Because information is interpreted in context, it needs to be underpinned by shared assumptions, and it needs to be timely and relevant.

Take our umbrella example before. It was fantastic when you told me that it was going to rain tomorrow, but we both assumed that we were talking about the same place. If instead, you were referring to the weather in Madrid, the data is useless – it’s not information.

Take our guy on the street giving out iPhones – if I waited until a week from now to tell you about him, you’d probably just be annoyed. Maybe it would make for a good story, but it certainly doesn’t affect the way you live. Without timeliness, data is an interesting story at best.

Take our fire-brigade example. If instead, I picked up my phone, dialled somebody at random, and told them exactly where the fire was, it wouldn’t be particularly useful. Delivering data to the wrong people stops it from becoming information.

A good test to see whether data is information is to see whether you could legitimately ask “why are you telling me this?”. If you can, you’re working with data, not information.

Information is Well-communicated

If information is useful data, then it needs to be communicated in a fashion that empowers people to make decisions.

Take this data for example:

35% more men than women like to wear pirate hats on Tuesdays.

This information might be useful to pirate hat salespeople, for example, who might reason that they’d be better off targeting men instead of women on Tuesdays. That’s a valid conclusion, but this single figure doesn’t tell the whole story. The above statistic came from the following data:

1% of women like to wear pirate hats on Tuesdays.

1.35% of men like to wear pirate hats on Tuesdays.

Not looking so good for our pirate hat salespeople any more! (Especially if interpreted within the context of a salesperson’s knowledge that 10% of all people like to wear pirate hats on Saturdays).

The problem with the first fact isn’t the truthfulness of the data – it’s that it over-summarises: it throws away too much data for the sake of brevity, because of a lack of context-awareness.

In most situations, a well-designed graph will give you the desired level of simplicity, whilst retaining the richness of the data you started with.

The pirate hat salespeople would find a graph of the underlying percentages far more powerful for effective decision-making.

…and back to Information Technology

All of this should be fairly intuitive (if a little difficult to articulate) – what I’ve presented here is just a few key principles of effective communication. ‘Information Technology’ often falls short when it comes to effective communication, and that’s why people still think about it as ‘data technology’.

Information is awesome – there’s little doubt about that. As an Information Technologist, it’s my job to help people to make good decisions by communicating meaningful, context-appropriate data. That’s something I’d be happy to do.

Thesis, Part II

My undergraduate thesis is finally done. Hurrah! I made a toolkit that makes power networks much simpler to conceptualise, and thus make informed decisions about.

The thesis is called “A Sensitivity Analysis Toolkit for Mitigation of Distributed Generation Voltage Rise”. Sounds like a mouthful, right?

That’s why I made a poster, which explains everything in ordinary English!

It’s also available in a PDF format if you find it doesn’t work in your browser.

If you think that this project might be of some interest or use to you, download the entire thesis report (PDF, ~50 pages) or get in touch.

The web version is built with CSS3/SVG, and comic images are courtesy of Grant Teunissen of ‘Anything But Real Life’.

Selling Recommendations

My undergraduate thesis has two main outcomes. I’m not going to bore you with the details here – if you’re really interested, you can read my thesis comic here – but I had a couple of thoughts about two different kinds of products/goals/outcomes that I’ll be working on.

Outcome One is a tool that provides a lot of information to businesses. This tool doesn’t directly save anyone any money, but it enables people to make decisions that could.

Outcome Two is a tool that provides recommendations to businesses. Recommendations that, if followed, will start saving money immediately (hopefully).

Which do you reckon would be easier to sell?

I’m thinking it would be the first outcome. And not because it’s more useful, necessarily, but because people making decisions in business like to have the evidence to support their decisions – whether they have to defend their decisions or not, it’s important to have the evidence around. People need to feel like they own their decisions.

As such, I think there’s a difficulty selling Outcome Two-style expert systems to businesses – unless you frame them in such a way that they accelerate people through a decision-making process.

The Decision-Making Process

The classic, rational model of decision-making looks something like this: recognise/define the problem, generate alternatives, evaluate the alternatives, choose the best one, and implement it.

Side Note: Is this model valid? It’s called the rational decision model because it assumes that people are thinking… well, rationally. My hunch is that for engineers, it’s a pretty valid assumption – you’re supposed to at least try to think rationally as an engineer.

The problem with selling a recommendation system is that you’re going straight from the ‘Recognise/define problem’ stage to the ‘Implement’ stage.

And as a consequence, it feels utterly wrong to the engineer. Nobody (thinking rationally) would recommend something without being sure that it was the best alternative.

Side Note: Actually, it doesn’t even really matter what decision making model you use: every model has some kind of step in between the ‘Problem recognition’ and ‘Implementation’ stage.

Framing your Recommendations

If you’re going to sell a recommendation system, you need to hold the hand of the decision maker through steps 2-4. They need to be convinced that the alternative they pick is the best, and be convinced that they’re choosing from a fairly broad solution space.

The ‘hack’ is at step two. Don’t sell a recommendation system – sell a design tool. Make the design tool adept at highlighting the steps that you followed to come to the recommendation. Tell the user a convincing story as to how you got your answer, and allow them to come to the same conclusion.

I’ve heard it said before that a good teacher is one that gives you the tools to learn, but at the end of the day, allows you to learn the lesson for yourself. Sorry if that’s a little too ‘Dead Poets Society’, but that’s pretty much exactly what we’re aiming to do here.

Thanks to Phil Ciufo for some of the ideas here.

Abstraction, Humanity and Design

A couple of months ago, I did a presentation on “re-humanising people through design” for a job interview. It’s an idea I got partially from the work of Bret Victor and a lot from the first issue of The Manual.

The core idea is that a lot of aspects of our society, and especially the way we share information, are based on abstraction. To abstract something is to make its representation less specific, so that it’s easier to work with, to understand, or to communicate.

We abstract all the time in our everyday lives. Recipes will often tell us to “cook the vegetables”, instead of “cook the carrots, onions and broccoli”; we’ll tell people to turn “left” at the roundabout, when really, there’s two exits that are left, just one is very “left”, and the other is only slightly “left” (ever notice how GPSes only ever say “take the second exit?”). We’ll refer to someone as “that guy over there”.

Our ability to abstract is part of what makes us able to reason as humans, but ironically, it enables us to dehumanise others. When we start abstracting people, we become so quick to forget about their humanity.

xkcd’s “Internet Argument”: “It’s easier to be a jerk to words than people”

And that’s the problem: so often on the Internet, we forget that we’re talking to people, and not just to an idea, or a collection of words, or a username.

Everything changes when we realise that we’re interacting with people. Take the average comment you read on YouTube and compare it to the average comment you read on Facebook.

Design can help people to remember that they’re dealing with human beings: not just words, not just ideas.


Design can also stop people, real people, with brothers, mothers, children, husbands and wives, from simply becoming statistics.

“If I say to you there were 99 murders in San-Francisco in 2008, you’d think it was deplorable for the three seconds before you completely forgot about it, but if I show you the actual map it makes you think this person died here and that one over there; you see a pattern; you see that most of them are on the east side of town, it becomes particular and specific, it becomes places you could have been – it could have been you, you could go to them, you could bring flowers…”

Rebecca Solnit, on 99% Invisible, Dec 2010

Every one of those 99 people was a human being, with stories, with family, with memories. As designers, we have the power to ensure we’re emphasising individuals’ humanity, not abstracting it away from them.

Scientific Entrepreneurship

I heard it argued today that entrepreneurship has historically been neglected by business schools, as it was considered unimportant.

I’d argue that the reason why entrepreneurship wasn’t really discussed until recently is simply that it’s difficult to codify, and to examine. Other professions (engineers, lawyers, doctors) teach hard skills, and test on hard skills – I don’t think I’ve once had an exam that’s deliberately tested my tacit knowledge.

I suppose Scientific Management only really came about at the start of the 1900s, and innovation-driven entrepreneurship theory only really became popular with Schumpeter’s writings in the 1930s. It’s not really too surprising that we’ve only just started thinking about entrepreneurship as a teachable, scientific process.

There’s a bit of a problem here: Entrepreneurs are typified as wanting to be the best; the best entrepreneurs have a ‘gift’ that can’t easily be communicated or replicated, and as such, the ‘scientific’, repeatable-process entrepreneurship probably gets neglected a bit.