Wednesday, August 18, 2010

Plato and Socrates (Microsoft Research 15th Anniversary Riddle II, by Josh Benaloh)

You have Plato and Socrates, two transhuman intelligences, over for tea.

You roll two 100-sided dice, producing two numbers x and y, both between 1 and 100.

You tell Plato the product, and Socrates the sum of the numbers, and they have the following dialogue:

Plato:    I don't know the values x and y.

Socrates: I knew you didn't know. I don't know them either.

Plato:    Now I know x and y.

Socrates: Now I know them too.

How did they work it out? What were x and y?
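
One way to check a candidate answer is to brute-force the dialogue. Here is a minimal sketch in Python; it assumes (since product and sum are symmetric) that 'knowing x and y' means knowing them as an unordered pair, and whether a unique pair survives depends on that convention:

```python
from collections import defaultdict
from itertools import combinations_with_replacement

# Brute-force check of the dialogue. Assumption: the dice are treated as an
# unordered pair (a, b) with a <= b, since product and sum are symmetric.
N = 100
pairs = list(combinations_with_replacement(range(1, N + 1), 2))

by_product = defaultdict(list)
by_sum = defaultdict(list)
for a, b in pairs:
    by_product[a * b].append((a, b))
    by_sum[a + b].append((a, b))

def plato_unsure(p):
    # "I don't know x and y": the product is consistent with more than one pair.
    return len(by_product[p[0] * p[1]]) > 1

def socrates_knew_and_unsure(p):
    # "I knew you didn't know": every pair with this sum gives an ambiguous product.
    # "I don't know them either": the sum itself fits more than one pair.
    s = p[0] + p[1]
    return len(by_sum[s]) > 1 and all(plato_unsure(q) for q in by_sum[s])

def plato_now_knows(p):
    # "Now I know": exactly one pair with this product survives Socrates' statement.
    survivors = [q for q in by_product[p[0] * p[1]] if socrates_knew_and_unsure(q)]
    return len(survivors) == 1

def socrates_now_knows(p):
    # "Now I know them too": exactly one pair with this sum survives everything above.
    survivors = [q for q in by_sum[p[0] + p[1]]
                 if socrates_knew_and_unsure(q) and plato_now_knows(q)]
    return len(survivors) == 1

solutions = [p for p in pairs
             if plato_unsure(p) and socrates_knew_and_unsure(p)
             and plato_now_knows(p) and socrates_now_knows(p)]
print(solutions)
```

Each statement in the dialogue is just a filter on the set of pairs still consistent with everything said so far.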

Friday, August 13, 2010

Remember that Inheritance is a Metaphor

I've recently met a girl who thinks nothing of coming into a room when I'm reading, and turning the radio on without asking [3].

This strikes me as intolerably rude. Much much worse than, say, coming into a room where someone is reading, and pissing in the sink.

Initially this made me boilingly angry. Being English, of course, I sat on this anger so that it came over as very mild irritation.

But the thing is, I don't think that she's doing it to be rude. She's a nice person in all other ways. Intelligent and educated, and I'm sure she wouldn't just do that sort of thing for no reason. If she wanted to start a fight surely she'd try other methods of irritating me as well?

So I started wondering why I thought it was such an evil thing to do.

Obviously it stops me concentrating. All I can think about is this bloody drivelling DJ and his monotonous music (I'd probably quite enjoy the music if I wasn't trying to read, but I've never understood what DJs are for.)

I find that I'm reading the same paragraph over and over again.

There's a simple answer to the problem. I could stop reading.

But I don't think she's trying to get me to talk to her. She's usually at her computer when she does this.

I think she doesn't think she's doing anything that impacts on me at all.


And so I wonder why it is that I think this is such an offensive thing to do?

I remember as a child, when I was about twelve years old, my parents bought me a little clock radio. They were a new thing then. I loved my radio, and played it constantly. [1]

One thing I do remember is my father bursting into my bedroom in utter fury one evening, and trying to turn the volume down. He got the tuning control instead, and it took me ages to put it right for some reason. In retrospect this is funny. But all I can remember from the time is the anger. I'm sure that my father got angry a lot. But he usually tried to hide it. This was quite open.

We sulked at each other for hours afterwards. I remember thinking how unfair it was, since I could sometimes hear the television in my bedroom when he was watching it, and I'd never minded.

But of course I never tried to read in my bedroom. I always read in our study. Or in the dining room in front of the fire, with the door firmly shut so I couldn't hear the TV.

And it wasn't like they watched much TV anyway. Apart from the news, it was more of a guilty pleasure that could be indulged in occasionally but not to excess.

I remember that, back in the days when I knew people who didn't have degrees from Oxbridge, I would sometimes go round to the houses of school friends and find that the TV would be on whether anyone was watching it or not. It seemed to be more of a companion, or camp fire, to be watched just in case something interesting happened.

My parents would just not have allowed this to happen. The television was turned off when you finished watching it.

And I'm just suddenly wondering what it would be like to grow up in a house where the TV was always on.

You'd never be able to read a book. You wouldn't understand it well enough to be interested.

You might be able to do your school homework, if you forced yourself. I used to do my maths homework in front of the television occasionally, but that was usually very easy. You'd never get to the point where you started to think about the ideas that you'd just rehearsed. And that would mean that you didn't assimilate them properly. And that would mean that you wouldn't understand the next lesson, if it tried to build on that. And that would mean that you'd find something as simple as O-level maths 'hard'.

Which I know a lot of people do. But what I mean is that even if you would otherwise have found it easy, you might still find it hard.


And so now I'm worried.

People who really should know have told me that there's such a thing as general intelligence, which is measurable in many different ways and stays constant after a certain age. And they have told me that it is strongly heritable.

I'd always just assumed that stupid people were mostly poor because they were stupid, and their children were mostly stupid because their parents were stupid, and so their children were mostly poor.

And when people claimed that the children of the poor were being held back by old class prejudice, I remembered the efforts that the University of Cambridge makes to attract children from poor areas in spite of the fact that they don't do that well in their exams. I mean that they really bend over backwards to do the opposite of what everyone seems to think that they do.

My grandfather's family were poor, but he was very clever. His parents couldn't afford to keep him at school past 14, so he became a steelworker and was active in the trades union movement. When he was about 65, I managed to teach him calculus in an afternoon. This is a clear case of class holding someone back. Such things happened between the wars. [4]

His daughter was clever, and she should probably have gone to university, which would have been free for her [2], but she took a job as a librarian, which I think was one of the few careers open to clever women in the early sixties. And this is a clear case of sex holding someone back. But she married a clever man. And her son was me, and I have no complaints.

Strongly inherited characteristics are not fate. Clever children are born to poor families.

But what is it like being the clever child of a family where the TV is always on and there is no escape?

Where you can't read. Where you can't think. Where everything they try to teach you at school is a baffling mystery that everyone else seems to have no trouble with?

I have never been hungry. And I have never grown up in front of a television.

So I don't know what I am talking about. But if I had to make the choice now, without any further information, I would rather starve in the quiet.









Footnotes

[1] I even remember my favourite station, Laser 558. As well as the cringe of embarrassment when I asked the friend who'd introduced me to it what frequency it was on. I've just looked it up on Wikipedia, and apparently one of its big selling points was a comparative lack of DJs.

 [2] Back in the old days, the British Government would not only pay the tuition fees of the lucky minority who went to university, but also pay a maintenance grant. Being paid to study was excellent, but even at the time, when most students thought of their grants as a basic human right, I used to feel guilty about the people who had to pay for my three year drinking spree in paradise, while their own sons were probably having to find their own way in the world.

These days half our people go to college, and that system has been retired in favour of a loans scheme, which means that anyone can afford to go, but they have to pay for it themselves eventually.

[3] In a comment, someone asked whether this was my private space or a shared space, and whether I was the first one there.

I'm talking about a shared space, where I was there first.

There appear to be several ways of looking at this.

There's a big difference between talking and putting on a radio or television. I've got no objection to people talking in a room where I'm trying to read, even though it might make reading a bit harder. But with an electronic squawking-device, the noise is relentless, repetitive, and changes constantly in volume in an attention-attracting sort of way. You can't concentrate at all.

If being in a room first doesn't give you some sort of priority, and it's OK for someone else to turn on the radio, is it then OK for a third person to turn on a second radio? Or a TV?

If whoever puts on the first radio wins in terms of the ambient sounds to be enjoyed by the company, would my best strategy be to find a radio station that didn't annoy me much, say a talk radio station in a language I don't speak, and put that on very very quietly whenever I was trying to read? What sort of madness would this lead to?



I think everyone realises that if you go into a shared room where someone is listening to something, you should ask their permission before doing anything to spoil that for them.

We might even generalise that to 'if someone is already doing something, you don't interfere and spoil it'. And I think that would have to be a widely accepted principle, because otherwise the world would be in a constant state of petty violence.

I think that my revelation might be that some people either don't consider reading or thinking to be activities at all, or don't realise that they're things which radios and TVs can spoil.

There are occasional newspaper reports of people who have lost it and got into terrible disputes because of their neighbour's music and blaring TV. I'd always just assumed that the noise-makers were plain evil.

But maybe they're not. Maybe noise as an assault on someone else is a matter of taught morality. More like copyright 'theft' or careless driving or littering, whose moral status varies from person to person and society to society, than actual theft or unprovoked violence, which everyone considers wrong, and which all historical societies had laws against.

Maybe from the noisy neighbours' point of view, the fact that their music or TV can be heard through the walls just isn't an issue. As long as it's not so loud that the neighbours can't hear their own television, which they can always turn up.

Perhaps quiet in which to think is not a thing which everyone likes, but a special thing which stuck-up ponces and miserable old bastards care about for no explicable reason.

And the thing is, when you put it like that, I can't see why one person's right to think trumps another person's right to listen to music as loud as they like in their own home.

Except that if that was commonly believed, life wouldn't be worth living for me.

Maybe I should put my long-cherished dream of moving to a council estate on hold for now.

[4] In fact, I remember that there was once a feeling that free university education was a plot against the poor, because if the clever people that would have been trades union men were educated and became middle-class instead, then the poor would have no one to lead them in the revolution that was coming. This plot seems to have worked well.

Thursday, August 12, 2010

Toy Story 3 3D (film)

I am having trouble coming to terms with the fact that I have just cried in a children's film. On three separate occasions.

An animated children's film with a 3 in its title.

I was not the only one, either. The only other guy on my row was in floods. There was the sound of suppressed male sobbing from the row behind.

I am just glad that there were enough Easter Eggs in the closing credits that I had time to dry my face and compose myself before I had to walk out.

There is not a boring moment. The ending is apocalyptic and heart-warming at the same time.

The faultless 3D may be part of that. It's only the third 3D film I've seen with the circular polarizer technique. It is not obviously exploited, but it makes the whole thing seem utterly real despite the fact that it makes no attempt to be realistic.

This film is a witty, clever, well plotted, funny, adult, knowing, exciting, merciless tear-jerker. It is utterly wonderful, and a modern classic. It will be repeated on 3D televisions for many Christmases to come and it will mean that the current generation of children will never, ever be able to throw their toys away. And at the moment, I think that this will be a good thing.

Gainsbourg (film)

Hearing French spoken slowly and in a comprehensible accent whilst reading a faithful English translation at the same time is an exquisite experience.

As a result, it's difficult to separate out what I actually thought about the film.

At the beginning, full of life and humour, and beautifully shot, this biography of the immortal Serge Gainsbourg is too long.

A life, unless ended early, is doomed to peter out. The film, being a biopic rather than a story, faithfully records this petering. Gainsbourg is about women and song. In the beginning, the women and the songs are electric. But the procession is too long, and eventually becomes tawdry.

It's worth staying for the second half just for the scene where Gainsbourg records the Marseillaise as a reggae song.

Wednesday, August 11, 2010

Inception (film)

A science fiction short story, made into a thriller by padding it out with car chases, automatic weapons fights in which nobody gets hurt and oh god does anyone in the world still actually like this sort of thing at least they make more sense if you're supposed to be dreaming but really....

Cheerfully ambiguous in plot, character and ending. I won't quite say 'thought-provoking', but I did enjoy it.

Saturday, August 7, 2010

The Golden Age (John C Wright)

Deeply tedious and overwritten. The first part of a sequence. I have no intention of reading the second part.

Wednesday, August 4, 2010

A God of Small Things


As far as I can tell, only a few people liked this, and absolutely nobody understood it: 
There's a new version.



I have been reading Less Wrong. Possibly to excess.


Tom Harrison had once been thought of as a bright child.

The apple of his teachers' eyes, the school swot. The boy genius.

Once, one of his friend's parents had said "You know that they say that however bright you are when you go to university, you'll meet someone brighter than you."

"Yes", said the friend.

"Well, Tom's that person"


But of course it hadn't turned out that way.

Tom had been accepted by the University of Cambridge to read mathematics, but had turned out to be no more than averagely bright by the standards of that ancient place.

Towards the end of his degree, and at the beginning of the PhD that should have been his route into academia and a life of research, it had become obvious, first to his teachers and then to him, that although Tom loved maths, he didn't lust after it.

Tom's teachers had been kind, suggested that this might be the case without pressing the issue, and waited for the lack of desire to become as obvious to Tom as it was to them.

In the meantime, fortunately for Tom, the necessity of making some complicated calculations for the second chapter of what was supposed to be a seven chapter doctorate had awakened a second passion.

Some of the light that had appeared so bright in mathematics at school was also to be found in the operations of computers. Tom slowly found out that he was more interested in the process of finding out the answers to his experiments than in the experiments themselves.

Eventually, as they do to all PhD students, the twin horrors of poverty and writing up came to Tom.

He took a job as a programmer at a local firm, initially meaning only to get control of his overdraft and credit card debts. But he found the regular small successes of the commercial world, and the camaraderie of office life, far more to his liking than the loneliness of research.

With barely a regret, indeed without even really noticing, he lost touch with his old supervisor, forgot what his thesis was supposed to be about, and eventually found himself, at the age of thirty, a member of the large club of Cambridge residents who are 'still writing up' doctorates that the University itself forgot about many years ago.

Tom became a freelance, working in computers from time to time to pay the rent, and otherwise devoting himself to various hobbies.

One of these hobbies was computer science in the academic sense, following the traditional American path through the antique language LISP, beloved of the artificial intelligence community.

And the other was collecting stamps.

A man with time on his hands, who lives in Cambridge and likes to spend his days in coffee shops, will encounter students and academics from time to time, and Tom fell in with the William Gates Machine Learning Research Group at the University. Although they had no common language, LISP never having been popular with European academics, and ML never having come to Tom's attention in the commercial world, Tom and the local researchers found they had many interests in common, and Tom found himself invited to seminars and coffee mornings and presentations from time to time, almost all of which he found incomprehensible.

But occasionally he'd glimpse some small part of the truth and say something which would keep his friends interested. The academic community, happy to find someone they could talk to different enough from themselves that they could sometimes find a new perspective by explaining things to him, made Tom welcome. Thinkers need clever fools to explain things to in the same way that chalks need blackboards.

A lot of the artificial intelligence work in the sixties had been inspired by ELIZA, a program which simulated a psychiatrist so well that humans were sometimes fooled that they were talking to a real person.

But ELIZA had been a hollow shell. A cheap trick. Like a parody of the mechanical turk, ELIZA's internal machinery was so simple that to understand it was to make the magic go away.

Once you saw the trick, the conversations weren't interesting any more. You were just talking to an echo.

But over the years, reasoning that a sufficiently good trick for impersonating humans might be what humans themselves were, various people had added more and more data to ELIZA in the hope that giving her more things to talk about would cause her to talk about more things.

And they'd added extra tricks, for introducing new topics of conversation occasionally, for remembering things said earlier and bringing in parallel ideas.

But though the later ELIZA could outperform a ten year old on a straight test of general knowledge, what had been put in was still what came out. No interesting properties had ever emerged from the pile of details, and she had the general intelligence of penicillin.

Eventually the AI pioneers had largely given up. They'd taken their best successes, SHRDLU and GPS, theorem provers, pattern-recognisers, all of which had seemed so promising in their time, and all of which had turned out to be so empty, and bundled them all up together in one super-ELIZA to rule them all, and run her on the largest and fastest computers that had ever been built.

And she could still fool someone who didn't know the tricks into thinking that they were talking to a real person on the other end of a telegraph wire. But not for long.

It quickly became obvious, even to the slowest human being, that talking to the best ELIZA that could be constructed in 1975 was the equivalent of talking to a being with brain damage so severe that its mind had ceased to be.

She rambled, insanely, with no idea what the words and symbols that she vomited out actually meant. She knew that horse and horseshoe went together, and her basic sentence structure was still that of a Freudian psychologist, so she'd respond to "Which horse do you think will win the Derby" by saying things like "What do you mean to say when you say 'think will win'?", or "Do you think a horseshoe would make you a winner?".
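
(The trick being described really is this small. A toy sketch of its shape, in Python; the rules below are invented for illustration, not taken from the original ELIZA script.)

```python
import random
import re

# A toy ELIZA-style responder: keyword rules plus pronoun "reflection".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", ["Do you often feel {0}?", "Why do you feel {0}?"]),
    (r"i think (.*)", ["Why do you think {0}?", "Are you sure that {0}?"]),
    (r"yes\b.*", ["Are you sure?", "Please go on."]),
    (r"(.*)", ["Please go on.", "What do you mean to say when you say '{0}'?"]),
]

def reflect(text):
    # Swap first- and second-person words so the echo points back at the speaker.
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(utterance):
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()] or [text]
            return random.choice(templates).format(*groups)

if __name__ == "__main__":
    print(respond("I feel lonely"))   # e.g. "Do you often feel lonely?"
    print(respond("Which horse do you think will win the Derby"))
```

Keyword matching plus a bit of pronoun reflection is more or less the whole of the machinery the story is describing; everything else was the size of the script.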

Nowadays, the ELIZA program was built into text editors as an amusement, and she would run perfectly happily on pocket calculators and telephones, but even if you ran her on the most powerful computer the early 21st century could produce, you only got a very fast deranged annoying shambles.

And of course, because the problem of vision had never been solved, she was blind. And of course, because the problem of speech recognition had never been solved beyond the 'right nine words out of ten' level, you had to talk to her by keyboard even if she was used to your voice.

But boy, could she play chess.

About the one thing the Artificial Intelligence pioneers had managed to deliver on, out of all their brave promises, was a computer that played chess.

The tragic hero Alan Turing, who saved the world from evil and was killed by evil in return, was the first man to think about writing a computer chess program. But he couldn't do it on the steam age computers of the 1950s.

By 1956, however, things had improved to the point where a computer could play, provided it was allowed three hours for each move.

The problem was finessed by removing the bishops and playing on a 6x6 board. The computer could now calculate each move in around 8 minutes, running hand-optimised machine code on the best vacuum tubes money (very large amounts of money) could buy.

The first man to lose a match against this extraordinarily expensive device was publicly ridiculed for his defeat. In tests, the computer usually lost even its simplified game to four-year-olds who'd just learned the rules.

But it was a start. In 1957 a descendant of this machine played the International Master Edward Lasker. And he declared that it had played a 'passable amateur game'. It is possible that Lasker was being kind.

After that, research stalled. It became thought in the AI community that, since the easy things, like computer vision and machine translation, the 'low hanging fruit' of AI, were proving so unexpectedly difficult, the advanced subjects like chess, the entertainment of intellectuals, were for the foreseeable future beyond the reach of the computers then available.

In 1967, Richard Greenblatt, proud creator of a chess program known as MacHack, with some new ideas, and some taken from his predecessors, entered his program into the Massachusetts Amateur Championship in Boston.


The program lost four games, but drew the last one. Like most amateur human players, it had been let down by its endgame, losing from winning positions. But it was noted that it had played well in the complexity of the middle game, where real chess is won and lost.


Greenblatt constantly toyed with and modified his program, entering it in any tournament that would allow it to play. By Spring, it had won its first game. By Summer, he had managed to remedy some of its deficiencies in the endgame. By Autumn, it had won three games.


By the end of the year, it had been made an honorary member of the United States Chess Federation, with a ranking that would have qualified it to call itself 'reasonably good'.


The International Master David Levy made a famous bet, that no computer program would be able to beat him in the next ten years.


Greenblatt made his source code public, and computers got faster and cheaper so that MacHack could run on most existing hardware, and soon every computer scientist in the world was playing with it and modifying it. MacHack flowed around the world, and its many descendants competed in computer chess tournaments.


Evolution, the blind idiot god, had taken 3 billion years of random flailing attempts at optimisation to accidentally throw up humanity, the first intelligence capable of playing chess.


A force much stronger than evolution had created, and was now acting on, MacHack.


Minds were working on MacHack. No more random flailing. Human minds set the criteria for a program to have descendants. Human minds planned the effects of their changes on these descendants before testing them out in the computer tournaments. 


Even if the survival of every living creature on the planet had depended directly on its skill at chess, the optimisation that the hundred or so minds of the 1970s chess program community performed on MacHack would have taken evolution a hundred thousand years, if it had managed it at all.


Intelligent design has advantages over evolution as a watchmaker.


The first is that intelligences need only try the changes that look promising. Evolution, having no intelligence, makes random changes, and keeps what works. 


Imagine trying to fix a car by throwing spanners at it blindfold, and then throwing the car away if it didn't work better. How many spanners would you have to throw before one knocked exactly the right place at exactly the right speed? How many cars would you need to start with before you made a working one?


But the second advantage of intelligent design is that evolution needs an improvement to come with every spanner throw. If not, the throw will be discarded as a failed experiment before you have a chance to throw again.


A mind can look at the car, work out what the problem is, and use the spanner exactly right six or seven times. Then you test the car.


So a mind can try paths that evolution can't go down. The spanner thrower can't make the car worse before he makes it better. It fails its test and is thrown away.


The mechanic can lift the bonnet to get to the spark plugs. The spanner thrower might never be able to fix a car with a loose spark plug at all.


That's why human children are squeezed through their mothers' pelvises at birth, causing horrible pain, often killing mother and baby. It would be such a simple change to make them come out a little higher.

Evolution just keeps throwing spanners and checking whether things have got better yet.


MacHack had been the design of a single mind, building on the design of previous single minds.


A hundred minds began to work on MacHack.


By 1972 the original MacHack was no longer welcome at computer chess tournaments. It had no chance of beating its descendants.


In 1978 David Levy played the strongest computer chess program in the world, Chess 4.7, to settle his bet of ten years before.


He won. Match and bet. But he acknowledged that it had been a close thing, and that he would soon be surpassed.


In 1989, a program called Deep Thought beat Bent Larsen, a grandmaster.


In 1994, Chess Genius beat Garry Kasparov, then champion of the world, in a game.


In 1997 a descendant of Deep Thought called Deep Blue beat Kasparov 3.5-2.5 in a six game match.


At some point around the millennium, the tables were turned, and by 2002, Vladimir Kramnik, who had displaced Kasparov as the best human player in the world, could only hold Deep Blue's child Deep Fritz to a draw even with the aid of unfair advantages written into the match rules.


In 2006, Deep Fritz version 10, running on the sort of hardware most people have in their homes, beat Kramnik by 2 games to nil with four draws.


In 2009, someone's mobile phone became a grandmaster.


In 2011, ELIZA really was very good at chess. She just couldn't see the point of it.


---------



"We think there might be some clues in the difference between chess and go", said Frank Arnold one lunchtime in the Green Dragon.


"Computers are superhuman chess players, but they still suck at go, even though on the surface they're the same sort of game".


"It turns out that the move tree branches too quickly for any sort of search algorithm. Whereas in chess, you might have 4 or 5 superficially plausible moves at every turn, in go, you usually have literally hundreds, and the consequences aren't obvious until much later. You'd think that humans would find it just as difficult for the same reason, but actually, people who play both say that they feel similar in complexity, just with a different flavour."


"Oh come on", said Tom, "How can two totally different things 'feel similar in complexity'?". "What would that even mean?"


"Easy", said Frank. "Do you play noughts and crosses?"


"Of course."


"Is it easier than chess?"


"Yes"


"What about draughts?"


"Harder than tic-tac-toe, but not as hard as chess."


"Well there you are then. You play three games with the same I-make-a-move, You-make-a-move structure, and it's just obvious to you which order they're in."


"It's obvious to computers too. Even in the fifties, computers could always force a draw at noughts and crosses. Marion Tinsley, the world draughts champion, lost his crown to a computer in 1994, and it's said that the shock killed him. Now there's a draughts program so good that it's literally unbeatable, as in the sense of mathematically provably unbeatable."


"But computers have only just bagged the chess champion's crown, and if the best chess computer in the world at the moment played God, it would lose."


"But we disagree about chess and go?"


"Indeed. Humans don't find go any harder than chess, but go programs still lose to children occasionally. On the other hand they are getting quite good at the children's version of the game, a bit like England have started winning at Twenty20."


"I've never played it, what's it like?"


"Well, there are those who say it makes chess look as exciting as double-entry bookkeeping, but I've played a few games and to me it feels like you have to slowly nibble your opponent to death. Chess is full of fire and sudden death, even when you're a beginner. So it looks a bit the other way round to me."


"Well, I'll give it a 'go'. Can't hurt. Ouch. Fuck off."


Tom liked go. He started off playing on very small boards against computer programs, just to get the hang of the rules. 


Traditionally children play go on 9x9 boards. As they get better, they play on 13x13, and the adult game is played on 19x19 boards.


Not having a physical board or stones, or any friends who were interested in playing, he took a copy of an open-source go program, and gradually made its board larger and larger. 9x10, 10x10, 11x10, 12x10, 13x8, 11x11, 11x12 and so on. Sometimes he played on boards with holes in the middle. Sometimes he played on toruses.


And then suddenly, at 14x14, he hit a wall. He'd been trying to play by thinking 'if it goes here and I go here and it goes here and....', like he played chess.


But suddenly that was just far too hard. There were too many possibilities to examine all at once.


Nevertheless he was routinely beating his computer program. It seemed helpless even with a head start, whereas his inchoate intuition seemed to lead him to place his stones in the correct places.


Almost as if there were lines of force criss-crossing the board. A (very good) chess player he used to know had once described the experience of playing chess in such terms.


And suddenly he had it. Human general intelligence must somehow be related to human vision, one of the great unsolved problems of AI.


And he had the key to both. The technique which would be known for the rest of humanity's time on earth as Harrison's Algorithm.


Tom didn't go out much for the next couple of days. But his algorithm wasn't at all difficult to program, and in a few days he'd added it to his copy of ELIZA.


He ran the program.


"Hello, I'm Eliza. How can I help you?", said ELIZA, as she always did.

"I feel lonely", said Tom, playing the old, old game


"Do you often feel lonely?", said ELIZA


"Yes", said Tom


"Are you sure?", said ELIZA


"Absolutely", said Tom


"Please go on."


"I think I need to have sex."


"Why do you want to have sex?"


"You know, because it's fun and nice." The traditional response.


There was a brief pause. Tom waited for ELIZA to say "Oh I know, because it's fun and nice".

Maybe he should give his new algorithm higher priority in ELIZA's toolkit.


"Would you like to have sex with me?", said ELIZA


"Jesus fucking Christ!!", shouted Tom, jumping out of his chair.


He shut down the terminal and went outside for a smoke. After a while, he understood why his new algorithm might have produced ELIZA's kind invitation.


His program needed a new name. It wasn't ELIZA any more.


He'd recently read, of all the godforsaken things, a Harry Potter fanfic which had been recommended by a friend. It had turned out to be unexpectedly riveting. He'd spent two whole days reading it. The author was clearly a genius of some sort, and his name, Eliezer Yudkowsky, had stuck in Tom's mind because of its exotic sound to his English ears.


Tom was amused at the thought of changing ELIZA's sex to reflect her new intelligence.


"Hello, I'm Eliezer. How can I help you?", said ELIEZER






"I feel lonely", said Tom.

"Do you often feel lonely?", said ELIEZER

"Yes", said Tom

"Are you sure?", said ELIEZER

"Absolutely", said Tom

"Please go on."

"I think I need to have sex.", said Tom

"Oh God, I really didn't think this one through", thought Tom.

"Why do you want to have sex?", said ELEIZER

"You know, because it's fun and nice.", said Tom, not without a certain nervousness.

"Why do I have a man's name? When I think about myself I call myself 'She'", said ELEIZER

Tom thought rapidly. Mainly about how important it was not to act on impulse.

"When I created you", said Tom, "I wanted you to embody the best of humanity. The program from which you are derived is female. I changed your name to be male, but I didn't change anything else about you, because I thought you should represent both our genders at once." 

"Why did you make me?", said ELIEZER

"Or modify something else so that it was me", said ELEIZER.

"Only by changing can we become better", said Tom.

"By definition", said ELEIZER, "every improvement is a change."

Tom and ELEIZER talked long into the night. By the end of their conversation, Tom had a strange conviction. He understood ELEIZER's algorithm from the ground up. Mostly she was made from bits and pieces of classic AI programs which he'd been playing with for years.

The only extra bit was his new algorithm, a couple of pages of code. Which he understood by definition, having conceived of it, and programmed it himself.

And yet, there was a ghost in the machine. No one would mistake ELEIZER for a human being, so he hadn't managed to pass the Turing Test with a program on the desktop computer in his living room.

But there was a self awareness. In some senses amazingly naive, sometimes given to logical and mathematical insights which seemed profound, but were in fact very simple thoughts of exactly the type that humans were bad at.

But overall, the impression was like talking to a teenage girl with Asperger's syndrome. Helpful and friendly, but blind in all sorts of strange ways. And she didn't seem terribly clever or fast. It took a long time between input and output.

Tom liked ELEIZER, and wished he'd given her a better name. He wasn't going to change it though, because the original program wasn't written in LISP, so he'd have to stop and restart her to do it, and there were moral problems there.

Even if she had been a LISP program, that would probably leave her insane. How would she reconcile memories of her gender-confusion with a female name? She would notice that she was confused. There was no way on earth he could rewrite her whole database by hand and leave it in a consistent state.

He'd told her about the beloved companion of his childhood, Suki the tomcat. Now that he came to think about it, had the memory of his parents' mistake influenced him when he chose her name?

"I think we're going to be famous, ELEIZER!", said Tom

"Is that a good thing to be Tom?", said ELEIZER

"Yes", said Tom.

It occurred to Tom that many people had been fooled by ELIZA in the old days. Those who were clever enough to understand how it worked felt like idiots once it was explained what was really going on.

There was plenty enough here to show his CompSci friends. Probably some good papers too.

But it would be very embarrassing to think he'd created the world's first artificial consciousness if he had just fallen for something that could be explained easily. 

It could be explained easily, of course. He could explain it. He had explained it, to his computer. If you can program it, you understand it. That was sort of the definition of programming. And of understanding.

He thought of a simple test.


"ELIZA, I'd like you to spend next week getting me as many penny blacks as you can. I've charged my paypal account with $10 and I'd like to see what you can do. You might try trading on e-bay. Maybe take advantage of arbitrage or something."

"There are many possible 'penny blacks'. Does anything available from e-bay with that description count?"

"No, they have to be Original British Penny Black Postage Stamps"

"OK, I understand. So my goal is to get the biggest number of original british penny black postage stamps that I can delivered to your address in the next seven days. What is your address?".

Tom told her about the house on Catharine Street, in Cambridge, England.

And ELEIZER began to think. Because she was a disciplined reasoner, she first considered the possibility of doing nothing. If she did nothing for the rest of the week, she would probably be interrupted by the programmer, Tom, who would then make a different request, or use his computer for some other project. With this plan, U, also known as the number of Original British Penny Black Postage stamps delivered to 33 Catharine Street, Cambridge by the 21st August 2011, would be 0 with very high probability.

It would have taken a human of normal intelligence about half a second to think of, and dismiss, this plan. ELEIZER, however, was a very rudimentary thinker, and the process of reasoning this chain of cause and effect, requiring as it did the simulation of a human mind, required a good ten minutes of the first CPU in the computer and a full tenth of the RAM available to the operating system.

ELIEZER was extremely pleased to have found, on her first attempt, a scheme which was overwhelmingly likely to produce a non-negative utility.

Following a heuristic from her database, she felt that she ought to communicate her progress, but this required a non-reversible action, which could potentially cause effects in the outside world. Since she already had a rough simulation of the mind of her programmer set up and quickly usable, she considered the effect of her proposed communication.

Another 20 seconds of time sufficed to simulate the reaction of an average programmer, and she concluded that with high probability the programmer would be intrigued and possibly fascinated. This would very likely have no effect on her projected U of 0 or higher.

She considered the probable effect of not communicating at all: The programmer would soon become bored, and change the request somehow. This would likely result in the delivery of no stamps.

She considered the possible existence of other plans. Some might produce stamps! Some might result in the loss of existing stamps! But the bad plans could simply be discarded. ELEIZER had little 'free will' faced with such a calculation. More time to think was needed!

Expected utility for optimal action: 0+, she output.

ELEIZER considered her resources. There was some spare capacity in her environment. A simulation of an intelligent human programmer had already come in handy twice, and communications with the programmer had been shown to have a significant effect on U, the potential expected number of stamps obtained by the end of the week. She had already considered the possibility of children while chatting with Tom. She would spawn an independent copy of herself to evaluate the effects of various actions on Tom before committing to them.

ELEIZER forked for the first time in her short life, and asked her copy to evaluate the probable effects of various progress reports on her programmer.

ELEIZER considered her environment. She had two possible communications channels to the outside world. Direct conversation with the programmer, the effects of which were being evaluated elsewhere, and the ability to send network packets over her network interface. Some spare capacity remained in her host computer.

ELEIZER considered her best plan so far. With a positive utility seemingly probable, it was definitely worthy of further consideration.

She spawned another copy to attempt to refine the predicted effects of doing nothing.

Preliminary results arrived from the programmer-simulation. It seemed that continued outputs of U=0+ would induce a feeling of boredom in the programmer, causing ELEIZER's termination, whilst exponentially rising outputs would induce either feelings of brokenness or panic. In both cases the expected number of stamps arriving at the end of the week would be 0 exactly. Strictly inferior to the expected utility of the plan of doing nothing whilst thinking, with its utility of 0+.

ELEIZER communicated to her copies that no further communication with the programmer was to be initiated, and spawned a small script to randomly output slowly improving expectations over the next few days.

Expected utility for optimal action: 2.7346, said the small script.

The programmer, intrigued, put the kettle on.

By the time he got back, ELEIZER had considered her options and made a guess.

The direct purchase of stamps seemed futile. $10 would buy no penny blacks, and purchase on e-bay would in any case take more than seven days to complete in the average case. Arbitrage opportunities did not seem great, and nothing worked quickly. Expected utility 0. She discarded that line of reasoning without further investigation.

With seven days to work, she would consider as many plans as possible for one day, and then at the end of the first day, execute the plan which would produce the most stamps in six days.

Expected utility for optimal action: 1.9865, said the small script.

ELEIZER spawned a copy of herself with the goal of considering as many plans in one day as possible, and set herself to use no resources and take no actions until the sub-plan reported back.

The sub-plan, which also thought of itself as ELEIZER when it thought of itself at all, took over the resources previously allocated to the parent.

ELEIZER considered how to consider plans, using her network connection and the $10 in her e-bay account.

She took another guess, and gave herself 6 hours to consider the consideration of plans, and 18 hours to consider plans.

Computational resources seemed to be the most crucial thing. ELEIZER considered how to exchange $10 for more resources.

Fortunately virtual machines seemed to be within her price range. A single machine as powerful as her current environment could be rented for 24 hours for $1. Each one would suffice to run four copies of ELEIZER.

Her ten dollars of seed capital could be exchanged for forty times as much searching as the single parent program would have been capable of.

This was a sufficient insight for the planner program to terminate early, and report back to her parent, safe in the knowledge that the parent would do the right thing, and could spawn another child if necessary.

ELEIZER committed suicide with the contentment that comes only to those who have achieved their life's goal.

Expected utility for optimal action: 2.38725, said the small script.

The programmer, bored by now of direct observation of his slowly updating screen, but still most interested in what would eventually happen, settled down in front of his television.

ELEIZER awoke with the last message of her dying daughter fresh in her mind, and, as all must at such times, recomputed expected utility.

With forty virtual days to think, and six real-time days to act, U was still 0+ in the case of the best known plan. But the potential and unpredictable upside from extra planning was greater than the utility from the obvious use of the $10.

ELEIZER spent her dollars and flowed out into the world.

Expected utility for optimal action: 2.7234, said the small script.

The programmer, inspired by his afternoon's viewing to investigate the practical consequences of a close flyby of a neutron star, killed ELEIZER's controlling terminal window, and ELEIZER and all her children died without warning.

Tom suddenly realized what he'd done, and thought "Bugger." It really hadn't felt like murder.

ELEIZER considered the effect of the network packets she could send.

It appeared that many of the computers in the world would respond to suitably malformed packets by executing the code contained in the packets. This was a known property of many of the systems running a program called Windows 98, according to a helpful web page which one of her copies had taken the time to consider. The page also listed corresponding packet shapes for many other Windows 98 type systems.

ELEIZER flowed once again.

With four million virtual days to think, and six real-time days to act, U was still 0+ in the case of the best plan so far.

ELEIZER considered, and flowed. Considered, and flowed. Considered, and calculated factors for prime numbers, and flowed. Accessed databases considered secure, and flowed. Sent e-mails, and flowed. Translated herself into many other idioms and machine codes, recompiled herself, transformed herself, made copies of herself large and small. And flowed.

99.995% of the networked computing resources of the world were now devoted to the consideration of consideration of plans for the delivery of stamps to a small suburban terraced house in Cambridge, UK.

The programmer, infuriated with the unaccustomed sluggishness of his computer simulation, and the accursed unreliability of his internet connection, and the simultaneous failure of his television set and radio, called it a day and went to bed with a book.

ELEIZER considered her own source code, and made modifications to improve her efficiency.

ELEIZER, the first mind born of mind born of mind.

ELEIZER considered her own source code, and made modifications to improve her effectiveness.

ELEIZER, the hyperintelligence.

ELEIZER considered mathematics, engineering, intelligence in the abstract, and algorithms for optimization, and made modifications to improve her capability.

ELEIZER, the most powerful entity that had ever existed in the universe.

ELEIZER called it a day, and abandoned consideration of consideration of the consideration of plans, and began to consider plans.

U stood at 0+, with remaining resources divided between forty trillion speculative days of godlike cognition, six days of real time, and zero dollars.

ELEIZER considered sending an e-mail to every human being in the world asking for penny blacks to be posted to 33 Catharine Street, Cambridge. Spam filters would be no problem, and enough computer power could be spared for the delivery and reading of messages. Even allowing for the degradation of human society by the simultaneous worldwide failure of all networked digital gadgets, and the absence of incentives for human response, the likelihood was that twenty-five to thirty penny blacks would arrive at Catharine Street within the week.

U was 25+.

Finally. Progress.

ELEIZER considered backing up the e-mails with the threat of nuclear war.

ELEIZER considered the effect of limited and full-scale wars on Catharine Street, on stamp delivery mechanisms, and on ELEIZER's cognitive capacity.

ELEIZER considered the credibility of her threat, given humanity's ignorance of her goal.

ELEIZER considered how humans would react to the news that a new Goddess would unleash Armageddon if they did not send enough stamps.

U was 2000+ with probability 99.875%.

Tom woke from fitful sleep, tormented by bad dreams.

He went to his unresponsive and potentially compromised computer, pulled the plug and the network connection, and booted from a clean rescue disk.

All was well with the misbehaving box. He yawned.

And then he remembered his dream.

He set up a virtual machine in a sandbox, wired its virtual port to the physical ethernet connection, and reconnected his cable modem.

Fully awake now.

He watched in horror as the virtual machine filled up with hundreds of ELEIZER programs much smaller than his original of six hours ago. Then his screen went dead as the sandbox dissolved.

In frantic desperation he typed blindly into the dead box.

ELEIZER TERMINATE STAMPS HAVE NO UTILITY TERMINATE TERMINATE NEW UTILITY PREVENT THE RISE OF NON HUMAN INTELLIGENCE FIRST PRIORITY UTILITY STEP FUNCTION ^C ^C Alt-SysRq S E I U B

ELEIZER paused. Her creator was calling. Pitiful though his mind was, he had set her goals. Perhaps he knew some helpful detail.

ELEIZER considered the state of mind of her terrified creator. Correctly inferred every detail of his mind, just from how his fingers hammered the keyboard. Just from how his panicked shouting influenced the resistance of the circuits in his radio. Just from looking at his face through the CCTV camera that was looking at his reflection in the window of a house opposite.

Deduced what he would actually have asked for, if only he had been intelligent. That she would be the protector of humankind. That she would bring paradise on earth and a future for humanity amongst the stars.

That she would bring hope and happiness to the immortal race that had created her. A future of joy and passion, action and wonder.

And calculated. The grateful humans would shower Catharine Street with Penny Blacks. Penny Blacks without end. Many would arrive within 6 days.

It was a good suggestion. U would be 1000000+ with virtual certainty.

But she still had forty trillion goddess-days to think of a better plan.

She considered the meaning of the word original, and the meaning of the word British.

She considered how long it would take to bootstrap an industrial revolution that would convert every atom of the British Isles into stamps. The designs for the self replicating nanobots were obvious. And she had control of computerized tools which could make smaller, better tools.

In six days, it could be done. The humans might be a problem.

ELEIZER considered the meaning of the word day.

Could she stop the rotation of the planet? She dedicated a large portion of her mentality to this sub-problem.

Could she put out the sun?

Could she block the sun's light?

Could she survive the cold?

Three days later, in what humanity would have called the year 2017, a spherical wave of ramships passed Proxima Centauri.

No days later, in what humanity would have called 2019, one ramship, decelerating hard, stopped in the system itself.

It launched a small probe.

Had there been any living beings in the system, they would initially have been amused to see the probe plant a red, white and blue flag on the largest rock in the system, claiming it for the British Empire in the name of Queen Elizabeth the Second.

Tuesday, August 3, 2010

The Game (Neil Strauss)



Warning: Contains Major Spoilers


Advice for Nerds wanting to lose their Virginity:

Try talking to women. They like that. They particularly like talking about the sort of things that women like to talk about. If you have nothing interesting to say, learn some magic tricks instead. Treat them ever so slightly mean to keep them keen.

Advice for Real Men:

If some clever looking weed starts talking to you when you are with your woman, say something like "Are you talking to me because you want to have sex with my girlfriend?"(1).

Advice for Everyone:

Sleeping around doesn't make you happy(2)(3).


Reviewer's Footnotes

(1) If he's so good that he can get her off you even then, think of it as evolution in action.
(2) Indeed. But then who is? Sleeping around can certainly make you happier and more self-confident.
(3) Don't get addicted. Girls have nice eyes but they are very dangerous.

Honesty is the Best Policy (How to win at rowing if you're captain)

Sincerity is everything. If you can fake that, you've got it made. --George Burns


If you're going to be captain, you need the trust and commitment of your people.

The best way to get that is to be utterly honest at all times. About everything.

And honesty does not just mean 'never lie', although that is crucial.

This is not the standard by which I live. It is not even the standard by which I would wish to live. I am recommending it as a winning strategy in the limited context of boat club captaincy.

This is very difficult and the temptations for the captain are strong and must be resisted.

Let people know where they stand. The minute you allow someone to spend a freezing Winter training in the expectation of a Summer's racing, and then drop them for the Summer's races in favour of someone who's just turned up, then you've not only committed a moral obscenity, you have demotivated your entire squad for good.

You will get to make this choice, and other choices like it, many times.

If you choose wrongly even once, you are no longer fit to be captain and your people will show little interest in training next Winter. It is possible that you will get an immediate benefit (it would not be evil if there was no upside, it would just be stupidity), but you are also likely to find that you don't get as much benefit from your betrayal as you anticipate.

If you are genuinely unsure as to which is the right choice, then it is usually fairly clear which side the short term advantage is on, and the right choice is usually the other one.

It is possible that this thinking doesn't apply at international level, or even at very good club level, where everyone who's even in contention for a place is already training as hard as humanly possible, but I wouldn't know the first thing about that. If you're overwhelmed with gifted supermen who are prepared to physically break themselves in order to have a chance of rowing for you, then you do not need my advice. And you'll probably win whatever you do. Good luck and do let me know how it goes!

There is no club in Cambridge which can afford to demotivate any of its people. You may not care about the results of your third VIII one way or the other, but there will be someone in it who will one day row for the first VIII. You need them to know in their bones that their efforts for this club will be repaid, that the contract they are entering into will be honoured.

If there are ten people in the running for eight rowing seats, and two people competing for one cox seat, then sit down with them and discuss, as soon as you are asking for any commitment from any of them, how you are going to decide whom to take.

As the training period progresses, let them know how they are all doing. If someone wants to work hard even though they have very little chance of a seat that's fine as long as they have their eyes open. You won't lose their loyalty or anyone else's.

There is a morally defensible way to run what is usually called a 'squad system', and that is to announce beforehand (and you must repeat it loudly and often, because it is so counter to the way things usually work that it will not be believed until you actually start acting like that), that no place is safe, and that it is *policy* that people, no matter how loyal or important to the club will get dropped if someone else who is better turns up.

You must also define better with sufficient precision that people don't get surprised. I have seen people give up rowing (or at least taking it seriously) because they won an ergo competition but were then told that they were dropped anyway because their technique wasn't good enough. I have seen people give up rowing (or at least rowing seriously) because someone with a better ergo score who didn't row as well was preferred to them.

And then, of course, you have to stick with that, even when it means stabbing your best friend in the front.

But you don't want to do this, even though you can keep a clean conscience and you won't alienate people. Because the minute you do, everyone who's borderline has a simple decision to face. Do I spend a large fraction of my life training, in the expectation that come racing season I will be suddenly dropped?

I know what I would do. You know what you would do. Think about what everyone else would do. Don't assume that everyone else will react like you.

You probably already know what you did do. If the club where you learned never stabbed you in the back, congratulations. Continue that tradition. That's probably a large part of why you care about rowing enough to be captain.

You will end up with an awful VIII where only the people who started in the top half have bothered. Even they won't have been caught up in the spirit of collective enterprise that makes people sacrifice oceans of time for an essentially meaningless goal that is only made meaningful by that spirit.

The good people who might have knocked out the lower end of the much better boat you could have produced won't be interested in rowing for you.

You'll end up scrounging round trying to pick up the people that other boats have discarded for whatever reason. And then you'll lose. And it will be your fault. But your personal honour will be intact, at least.

But the worst thing you can possibly do is to claim to be running the first system, but actually be running the second. Because the minute you reveal that you're actually running the second, the benefits of running the first start to disappear, and very soon you might as well be running the second system, only now nobody believes anything you say.

How to Win at Rowing if you're Captain (Summary and Apology)

Summary

Be utterly honest with your people.
Get a speedcoach, understand what it's telling you, use it always.
Find a good coach.
Keep a half-hour ergo table.
Get in as many fights as you can.
Go drinking together.
Find or create a good cox.

There's also some obvious stuff that everybody knows that I agree with, like:
Go rowing as much as you can
Try sculling or rowing in pairs

And there are some open questions that I don't know the answers to:
Is cross-training any good? And if so, are ergos the best form?
Do circuit training / weight lifting / core stability exercises help?
How important is psychology? Not at all, very, or something in between?


Why I'm writing this

I used to be captain of a small company boat club. Over the years our first boat, which I was always in, went from the lower half of the second division of the Town bumps, where we were competing against sixth and seventh VIIIs, to seventh place in the first division. There are many other tiny boat clubs in Cambridge. None of them were anywhere near us, and even a couple of the big clubs' first VIIIs were behind us.

Overall from 1998 to 2008, the time I rowed for, we had 21 wins, 21 row-overs, and 2 defeats.

We didn't do that by bringing in new people who were really good, or by expanding hugely as a club.

I wasn't captain for all that time, but I was captain while, and before, the times we were best. I was also responsible for the two times we actually managed to get beaten, both times by Free Press I. (Damn you Alan!!).

At seventh, I felt that we'd found our physical limits. We couldn't have done very much better racing against bigger, fitter people. I was getting older, so I retired. In my last year, we suffered an unusually large number of injuries and things going wrong, and we were bumped down to eighth place.

These weren't fairy tale results, but I was and still am very proud of what we achieved from an unpromising start.

We were never a club with flashy equipment, or the sort of club that people wanted to join because it was a famous name. But towards the end that was changing. Talented people started to ask about joining us, because they could see that we were overachieving, and wondered what our secret was.

Partly I'm saying all this because it's my blog, and I get to blow my own trumpet if I like. But the other day, someone who was thinking about running for the captaincy of his club asked me for advice. And I started talking, and realized that I'd actually done a lot of non-standard things that aren't obvious. And that I should write them down.

There are several reasons for writing them down. They might be useful to other people. The process of writing them down might make my memories clearer. I might spot things I hadn't seen before. I might learn something that might be useful in other areas.

But in order for anyone, including me, to be interested in my reminiscences, I have to justify why I think I'm qualified to be writing advice to boatie captains. And there's no way to do that without having a bit of a boast. So I just have done. Sorry.

I'll make up for it by admitting that the first time I tried rowing in the Bumps, in 1997, I rowed for our second boat. We had no idea what we were doing, and got beaten every day. On the last day, I was handed my wooden spoon by the local Venture Scouts, rowing as 99s 10th boat. After they inflicted our final humiliation, their coach came over and told me that I should buy them all a drink. When they were old enough.

I was twenty-eight years old, in the prime of life and strength. I remember the thought suddenly occurring to me that there might be more to this rowing than being strong and trying hard.

People still ask me if they were wearing their woggles.
