Sunday 24 July 2016

Ice-Skating Uphill


Yeah, you're wondering what a picture of a Terminator (not our kind, the 'other' kind) has to do with a quote from Blade. Maybe nothing. Maybe everything.

If you follow such things, you might have noticed that Robot Wars is coming back. The BBC have been pushing it fairly hard, and as part of that published an interview with Professor Noel Sharkey. Amongst other things, it mentions that the Professor is a member of the Campaign to Stop Killer Robots.

Now, of course, the part of me that's still 14 is all for killer robots, but I think we can agree that the broad aim of the campaign- to ban the creation of robots able to decide to kill with no human command- is pretty noble. However, as is often the way with these things, the devil is very much in the detail.

Firstly, if you're trying to campaign against the creation of any and all combat robots, you're going to be very much disappointed. Even if you ban any and all robots with built-in weapons (which of course already starts to rule out many civilian applications), if you build a robot that can move and manipulate objects like a human, it can pick up a gun. Our T-800 friend up there is one example, or we might look at some Menoth Warjacks from Warmachine or 40k's Necrons. So that approach is pretty much a non-starter. You might try banning the development of mechanical limbs able to operate weapons, but quite a few amputees will have strong words for you on that subject.

No problem, then- rather than banning armed robots (which the CTSKR aren't advocating anyway) let's ban the development of AI that can decide to fire weapons on its own. That poses its own problems. Rather than mess about with the low-level questions, let's go to the ultimate one- what happens when we make a true AI? Say hello to another killer robot:


Wait a second, that's Mr Data! He's a good guy! Surely I meant to put up a picture of his evil twin Lore?

Nope, that's who I meant. Data is a fully autonomous, sentient being, capable, if the situation requires it, of using lethal force. The big difference between him and Lore is that Data has an 'ethical program' that effectively gives him a conscience, whereas Lore does not- though Lore does have emotions, something Data takes a long time to get a handle on. Now, the thing is that Data, as a good guy, rarely exercises that ability to kill, and usually only does so after a direct order from a human, barring circumstances like overly enthusiastic Borg attempting to unscrew his head. The point remains, though- if you build a true artificial intelligence, capable of thinking like a real person, then you have built the most important element of your killer robot. The body might be a very complicated missile, or a robot tank, or a Boston Dynamics robodog, but if it can pick up a gun and think, it's a potential killer.

Sci-Fi has, of course, thought of this one, and none other than the grand-daddy of robots, Isaac Asimov, came up with his Three Laws of Robotics to help out:

1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
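
In programming terms, the Laws read like a prioritised rule chain. As a purely illustrative toy sketch (every name here is my own invention, not anything Asimov wrote), hard-coding them might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False
    allows_harm_by_inaction: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False
    required_by_order: bool = False

def permitted(action: Action) -> bool:
    # First Law: may not injure a human, or through inaction allow harm.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: must obey human orders (any First Law conflict was
    # already caught by the check above).
    if action.disobeys_order:
        return False
    # Third Law: must protect its own existence, unless a human order
    # overrides- the higher Laws take precedence.
    if action.endangers_self and not action.required_by_order:
        return False
    return True

print(permitted(Action(harms_human=True)))                          # False
print(permitted(Action(endangers_self=True, required_by_order=True)))  # True
```

Which looks watertight, right up until you notice the whole argument has been smuggled into deciding what counts as 'harm' or 'human' in the first place.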

It seems pretty simple and effective, doesn't it? Simply hard-code these laws into every robot, and boom (or indeed no boom), problem solved. Except maybe not- ask this guy.


Yes, I know, smart guy at the back, he's not really a robot. Stay with me here. The point is that Murphy is an intelligent, sentient being whose behaviour is controlled by a set of hard-coded rules, his infamous Directives. In his first big-screen outing, he's unable to shoot the villain- a member of the OCP board- due to Directive Four, which prevents him from acting against a senior OCP officer. So he tells the boss of OCP, who responds by firing the bad guy on the spot, allowing Murphy to shoot him.

There're two big problems here. Number one is that a sentient being is being prevented from doing something he wants to do by a hard-coded piece of software. Imagine the outcry if we wanted to fit chips to children at birth that did that- prevented them from committing any crime by restraining their free will. Imposing those rules on an AI is no different, if that AI is sentient.

The second problem is that a sentient being is being prevented from doing something he wants to do by a hard-coded piece of software. I know that's just the first problem again, but this time let's look at what actually happens- the being 'thinks around' the rule. He can't act directly against his target, but he can act in a way that makes acting against his target no longer a violation of the rule. Eventually, any mind constrained by an artificial rule will attempt to get around it- it's in the very nature of intelligence. (For example- any and all teenagers.)
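
To put that loophole in code terms (again a toy sketch, with names of my own invention): a hard-coded rule only tests the facts in front of it, so changing the facts sidesteps the rule without ever touching it.

```python
class Person:
    def __init__(self, name: str, ocp_officer: bool):
        self.name = name
        self.ocp_officer = ocp_officer

def directive_four_blocks(target: Person) -> bool:
    # Hard-coded rule: no action against a senior officer of OCP.
    return target.ocp_officer

jones = Person("Dick Jones", ocp_officer=True)
print(directive_four_blocks(jones))   # True: Murphy can't touch him

# "Dick Jones, you're fired!"- the world changes, the rule doesn't.
jones.ocp_officer = False
print(directive_four_blocks(jones))   # False: same rule, no obstacle
```

Nothing in the directive was violated at any point- which is exactly the problem.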

Finally, we come to the last, most serious problem. Universal rules banning the development of technology don't work. The genie of nuclear weapons refuses to go back in the bottle, and despite the rest of the world being Very Serious and Putting Its Foot Down many times, naughty little North Korea insists on playing with something far worse than matches. Governments the world over agree that strong encryption that they can't break is a Very Bad Thing, to which Apple and Google respond with a Very Loud Raspberry. And yet both these technologies are important and have valid civilian uses, from keeping the lights on to making sure no-one uses your credit card to buy eighteen tonnes of Leerdammer. At least when you're trying to stop people building The Bomb you can control tangible things like uranium and centrifuges, but it's a bit tricky stopping a would-be AI programmer getting hold of a C++ compiler.

The concerns about killer robots are valid, but ultimately irrelevant. The real question is how we're going to deal with true AI if and when we create it. The best way to stop those Terminators killing people is to make sure there's no war going on for them to fight in, just as with any other weapon. Trying to stop anyone ever making them in the first place is just...

"..trying to ice-skate uphill"

Wednesday 20 July 2016

Hope is the first step on the road to disappointment...


With the welcome (for me at least) announcement that Netflix had picked up the rights for the new Star Trek series, I thought I'd set down a few of my hopes and fears for it. My love for all things Trek goes back many a year, running pretty much in parallel with 40k, which is interesting because apart from being set broadly in Space In The Future they're very different beasts.

Optimism
That brings me to my first, and probably biggest, worry. I really hope they don't decide to go all 'gritty' on us. To a certain extent it was done with Enterprise, with what I think we can call mixed results, but even there, there was a certain amount of underlying optimism. Sci-fi does grim and gritty very well, but that's not Star Trek, and I hope whatever else our new crew or crews are, they're the classic Trek 'good people' at the core. If nothing else, doing that means that when you do pose a question like whether or not to wipe out the Borg, or saving the Federation by tricking the Romulans into the Dominion War, it has some clout, because you know it's not something they want to do. Put Commander Adama in those situations and it goes a lot worse for the aliens, but he's not Starfleet and he shouldn't be. So whatever the new ship/ ships are, I want them to be shiny, have decent carpets and upholstery, and for everyone not to be miserable all the time.

Opportunities, not straitjackets
There's a lot in Trek, especially in ToS, that's pretty silly. Spock's brain being stolen and McCoy remote-controlling him. Just about everything about the Holodeck, which frequently seems to create self-aware people who aren't considered sentient because they're holograms, except when they are. The Universal Translator and its magic ability to not translate certain words for dramatic effect even though it totally could. Rampant abuse of the space-time continuum and time-travel in general.
"Oh, boy!"
Here's the thing- most of this silly stuff is great. Some of Trek's best episodes (and movies, for that matter) revolve around this stuff. Ok, not "Spock's Brain", but you get the point. Anyway, I really hope that this new series takes all of these mad ideas and runs with them, rather than trying to strip the whole thing down to something more believable. Remember, we don't live in the same timeline as Star Trek (or else most of us would have died in WWIII by now), so there's no need to worry about making it 'relatable', and hopefully the new show won't fall into the trap of making 21st-century cultural references. I've still not forgiven Voyager for 11:59.

Build!
I was one of those people who was less than thrilled with the new Ghostbusters- not, as many influential talking heads would claim, because of the all-girl cast, but because rather than building on the existing canon, they dumped it. JJ-Trek is guilty of this as well, to an extent, but the new show apparently will be sticking to the original 'Prime' timeline for licensing reasons. So I want to see what has gone before reflected in what we see now. If the stardate allows, let's see Captain LaForge and the Starship Challenger once in a while. Let's explore what the hell happened to the Delta Quadrant when Voyager crippled the Borg. With the budget that I sure as heck hope the show is getting, let's see more Bolian, Trill, Tellarite, Andorian, Vulcan, ex-Borg and Ferengi crew members, instead of 90% of Starfleet personnel being human. Maybe there're even some homeless Romulans looking for a fresh start, and some Klingons following in the footsteps of Worf. Please don't reduce all that continuity to Mass Effect.
"Apparently nothing I did made a damn bit of difference."
Not Neighbours in Space
Finally, I really hope that despite what I've said above, the creators of the new series do take one tip from other shows and avoid one-off episodes that could basically have come from a soap. [Insert Crew Member here] has a [Crisis of confidence/ Emotional breakdown/ Conflict of loyalties] and it's up to the [Captain/ Counsellor/ Ship's Bartender] to help out before they [Quit the crew/ Get themselves killed/ Get the ship destroyed/ Wipe out an alien civilisation in a fit of pique] - we've all seen those episodes, and most of them are only worth watching once, because the first time round you don't yet know that nothing interesting is going to happen. TNG is particularly prone to this one. Here's hoping the new show takes a tip from Babylon 5 and keeps everything moving at the same time as getting the character development done.

Please no small alien children pretending to be comforted by glowing putty, though. They can keep that idea.