Friday, May 29, 2015

Myths of competence and specialization

An idea has been going around for a while that science fiction, more than anything, is a literature of competence — the protagonists of science fiction are competent people who can be trusted to do the right things under the circumstances (given their knowledge of the situation), and their mistakes can generally be traced back to withheld information or the effects of external forces that manipulate their mental state (like drugs or mind control). This is true of a lot of golden age science fiction (wherein, generally speaking, the protagonists were also respectable, if not amiable — think Asimov & Heinlein), and is generally less true of new wave science fiction (think of Ellison, wherein occasionally our protagonists are mad or naive or belong to a culture with alien values) and first-generation cyberpunk (think of Neuromancer, wherein every character who isn’t mad is varying degrees of self-loathing and self-destructive). But, a fiction of competence is also the lens through which many people see the real world — and some of them are probably drawn to golden-age science fiction for this reason.
I have a friend who is, like me, a software engineer. He clearly sees the world through this lens. He sees people as, generally speaking, professionals; what I consider to be design errors he considers to be some unfortunate but inevitable product of circumstance that must have very good and acceptable reasons behind it. He acknowledges the occasional genuinely poor decision, when it’s undeniable that there’s no good excuse for it, but he considers such things rare and rarely acknowledges poor decisions made by people he respects. When faced with a problem, he prefers to theorize about it rather than probe it experimentally, and is willing to spend more time generating an elaborate mental model of a problem than experimentally discovering its contours. In other words, he has confidence in the integrity of his mind and the minds of others, and considers the production of mental models to be a generally foolproof method for exploring the world.
Although I respect him a great deal, and although I admit that his knowledge of many fields is deeper than mine, I consider his attitude naively optimistic.
My model of the world is compatible with the rule of the blind idiot god. The universe is complex enough that few elements can be modeled perfectly by human beings. Because competence is difficult to achieve, few people achieve it — incompetence and poor decisions are the rule, rather than the exception. Furthermore, even competent people have little reason to exercise their competence — the illusion of competence is rewarded more than actual competence, and exercising one’s competence takes time and energy that pretending to exercise one’s competence does not — and society rewards behaviors that are incompatible with the production and maintenance of genuine competence.
Human beings tend to value confidence in themselves. I consider this a major failure. Because the world cannot be perfectly modeled, all models are by definition imperfect — and confidence is faith in the predictive success of one’s mental model for situations upon which it has not yet been tested. Confidence is valued in oneself in part because confidence (i.e., lack of hesitation) is valuable in genuine emergencies — if you are being chased by a bear, spending mental effort determining whether the bear genuinely exists or is an illusion produced by a trickster god is detrimental to your expected lifespan. Genuine emergencies are more rare now than they were when the adrenal and peripheral nervous system first developed in our distant forebears, and they are less important to the survival of our genetic line — we are more likely to fail to reproduce out of a bias against children or financial instability or a lack of attraction to the opposite sex than out of actually being killed by something we could run away from (like a bicycle, an enemy, or a wild animal); as a result, in today’s world, it is generally more risky to be sure than to be unsure. The same confidence in the correctness of your mental model of the world that will save you from a wild animal will get you run over by a truck, because change blindness is part of the same set of energy-saving heuristics that allow human beings to do things faster and with less effort by introducing errors into our models of the world; the same confidence that would allow a human being in a nomadic-band-of-hunter-gatherers situation to fight effectively against another band trying to use the same resources will lead a modern person to fight and die in a religious war.
Human beings also value confidence in leaders. This is for a similar reason — if you are in a nomadic band of fewer than 150 other people, and you are being attacked by another group of approximately the same size, your odds are about even so long as your hesitation level is about even, but lack of hesitation gives you a tiny advantage. Your leader, because he is in charge of coordinating tactics, is the bottleneck — his hesitation is your hesitation. This is the context where leaders are useful — when discounting planning time your odds are 50/50, but when every second of hesitation counts against you, fortune favors fools who rush in over the ones who consider the situation carefully. But, few genuinely important situations today depend upon split-second decision-making. Unless you’re in the military, your ability to make poor decisions quickly will never be more important to your lifespan than your ability to make good decisions (although the ability to make good decisions quickly is beneficial in a wide variety of situations, it’s not really practical to develop), and unless you play professional sports the same is true of your livelihood. A good leader in typical modern circumstances is someone who takes minutes or hours to think a decision through, and who knows when to back off and reconsider a decision that has proven to be flawed — in other words, exactly the kind of person who appears unconfident to the point of neurosis. Because our heuristics are stuck in the stone age, to become a leader you must appear confident, but in order to be a good leader your apparent confidence must be an illusion.
This is not to say that I don’t believe in competence. In fact, I think competence is undervalued and under-sold. Take, for instance, the polymath.
A lot of people these days say that polymaths can no longer exist — that the world has gotten too complex. Bullshit. Our models of the world have gotten better — which means that our ability to predict the world has gotten better. It’s easier to be a polymath today than ever before, because being a polymath means being competent in a variety of fields, and great strides have been made in every field with regard to our ability to learn to become competent in them. The world has not gotten more complex, but instead, through human endeavours, it has gotten slightly simpler — not because we have changed the world but because we have changed our minds, developing mental tools for organizing the massive clusterfuck that is reality into more and more useful predictive models, wherein the complexity of the model grows more slowly than its predictive utility.
The same narrative that claims that there can be no more polymaths tells us that specialization is desirable, or at worst an unfortunate necessity. If we can’t learn a variety of mental models because the models have gotten more complex, then we need to stick to our lane and go deep into one silo, solving the problems that fit into that domain.
But, all problems are in reality multidisciplinary. Disciplines and problem domains are inventions of human beings, and reality has no interest in them. The specialist is blind to this. The specialist sees the portions of the problem that fall into his domain, and perhaps slightly foggily sees the portions that fall into neighbouring domains; the remainder is some vast undifferentiated miasma that must be left to other people to figure out. As a result, the specialist can be very confident about his results — because he has chopped off everything in the universe that he doesn’t know how to model, and has applied a model to the tiny portion that has been left over. His model may not yield useful results, because he has ignored most of the universe, and he really can’t effectively isolate his subject that way.
The generalist, on the other hand, sees the universe and applies several different models that apply to different aspects of the subject (as well as sections of the world immediately surrounding it). The polymath, who is a generalist upgraded with the knowledge of several specialists, does the same thing with better results because he has a wider variety of useful models and the experience to determine which models are appropriate. The polymath can do this because he realises that each specialized field is a pattern recognition machine, and because some patterns can be found in the world wherever you look, many disciplines have independently reinvented the same or very similar models with different terminology. He can combine the similar models to form superior hybrid models, and when the models are exactly the same he can learn the new terminology or use the shared model to synthesize its sister models across domains. And, since models build upon each other through shared patterns, he can use models from one discipline to learn models from another, unrelated discipline more efficiently whenever the two happen to share a pattern. Because of the polymath’s wider scope, he also is aware of the common failure modes of the many variants of these models — he is aware that the failures can compound, and so despite having better predictive results at a lower cost, he also has lower confidence; he has eliminated the artificially inflated confidence of the specialist and is left with a level of confidence more appropriate to the actual situation.
I feel like this myth of competence and confidence — the Captain Kirk character voyaging into the unknown and believing that he already knows it, confidently applying human biases to non-human situations and considering himself to be morally superior to cultures that don’t share his values — is not merely naive and optimistic, but actually regressive and dangerous. Any confident leader and man of action can be perceived, with a minor shift of perspective, as an arrogant fool who acts without thinking; any crusade against evil people doing evil things can be reframed as an intolerant bigot battling a system of values he doesn’t understand. This kind of literature transplants into the space age the kind of leader who hasn’t really been appropriate for a leadership role since the dawn of agriculture.

Originally posted at https://medium.com/@enkiv2/myths-of-competence-and-specialization-c5d27506c2bf

Saturday, May 23, 2015

Against certain naive varieties of transhumanist sentiment

A frequent idea that I run across when speaking to technophiles with transhumanist leanings these days is the superiority of tech over biology. People will say, "I'll upload my brain and live forever", or "I'll replace my arm so that I can be stronger", or "I'll get a wetwire to the internet so I can read faster". This isn't a new idea; I said variations on the same thing when I was fifteen. But, it's absolutely stupid.

We have never built a machine with a lifespan and resilience comparable to a human being. Machine failure rates fall along a bathtub curve, but while an expected human lifespan is between seventy and eighty years these days, it's the rare freak of a machine that still functions after ten or twenty years -- let alone thirty, let alone continuously. Biological systems have insanely complex self-repair and self-maintenance systems, and the reason we live for eighty years is that our parts are continuously being maintained, rather than undergoing routine maintenance on a human scale of weekly-monthly-yearly. The very first programmable electromechanical computers were built in the 30s (depending on who you ask and how you define it, you can push it forward or back about ten years), meaning that a human being born at the same moment as the very first programmable computer in the modern sense, and living an average lifetime, would be dying *right now*; drum and disk storage is twenty years younger (along with transistors, ram that's not based on relays/mercury tubes/CRTs, programming languages other than machine code, and interactive terminals), and the internet is about fifteen years younger than *that* (along with email, pipes, directories, and asymmetric key cryptography). Someone born at the moment the first packet was sent over the internet would be middle-aged. Nevertheless, all these systems have changed drastically many times over the course of their lifetime, in incompatible ways. All of the component parts have been replaced many times over. At various points in the past, all these systems have had *complete* failures (yes, including the internet). These systems are not at the point where they could be expected to safeguard the lifetime of a rat, let alone extend the lifetime of a human being.

Likewise, with prosthetic arms. Prosthetic arms are great -- for people who are missing their arms. Cochlear implants aren't competing with people's real ears; they're competing with being deaf. The prosthetic eyes that have finally gotten FDA approval have approximately the same resolution as a TRS-80 Model 100 -- they aren't competing with real eyes, but with total blindness.

Wetwires are in an even worse position. The current state of the art in brain implants can, with incredibly invasive and dangerous brain surgery, temporarily hook your brain up to 200 I/O lines, each of which neurons *might* decide to grow on. Wetwires are competing with reading: a four thousand year old technology that's constantly being improved upon, that takes advantage of the human eye and optic nerve -- a pipe so fat that the eye is considered to be a part of the brain, a pipe so fat that the eye does complex processing independently of the visual cortex and can independently trigger signals to the amygdala about emotionally striking scenes before the visual cortex can even receive the image data. Furthermore, reading is a technology that the government of every developed nation spends huge amounts of money on installing into its citizens! Wetwires can't compete with that.

That said, this isn't the end of transhumanism, or even of grinding. Implants aren't going to go away. It's just that we aren't looking at them correctly.

Implants are a *long* way away from replacing the things that human beings already do well, like living and thinking and reading and moving. Generally speaking, to the extent that it's trivial to do so, when there's a problem with scale, we invent an external technology to handle it -- when we need to turn bolts, we build wrenches that fit in our hands instead of hacking our arms off and replacing them with wrenches. If we depend upon implant tech (and other varieties of transhuman tech) to fund itself by being an improvement over what humans already are capable of doing, then the whole field will go bankrupt. But, there are two fields in which this kind of tech can excel. One is doing a worse job than the human body at tasks the body already does -- prosthetics for people with missing limbs, and replacement parts for people whose parts are faulty or missing. The other is allowing human beings to do things they've never done before -- not by increasing scale, but by qualitative change.

The cochlear implant kind of sucks. When it's installed, wires are stuck to the cochlea -- a snail-shaped fluid-filled organ in the inner ear that does the heavy lifting in hearing (the rest of the ear is basically involved in amplification and protection). In normal hearing, vibrations from outside the ear are amplified by a set of bones that operate like a cam assembly, before pressing on a flexible membrane on the big end of the cochlea, and the movement of tiny hairs inside the cochlea produces the perception of sound, with the position of the hairs that are most stimulated determined by the frequency of the sound. In a cochlear implant, the electrodes stimulate the auditory nerve directly with electrical pulses, standing in for the hairs, and so the number of electrodes installed corresponds to the frequency resolution available. We do not have the technology to produce CD-quality sound. We don't even have the technology to produce speak-and-spell-quality sound. People with cochlear implants are stuck trying to decode speech based on fewer distinct frequencies than there are bars on a child's xylophone. But the cochlear implant, as an accident of its nature, has one improvement over the ear -- it has a built-in headphone jack. Cochlear implant-like technologies are far from being an improvement over ears, but when combined with throat mics or other mechanisms for producing the fundamental elements of speech from subvocalizations, they might be an improvement over the walkie-talkie. At the point at which this technology has enough demand to make people voluntarily submit to brain surgery, I expect that this is exactly how it will be used (and I expect the first market to be military or paramilitary -- people who, on a life or death basis, need to communicate without using their hands and without being heard by other people nearby).

There's another trend going on, as well. Just as desktops became laptops and laptops became smartphones, smartphones are on the cusp of becoming wearables, and wearables will become implants.

However, this change-over is very rarely quick, and even more rarely complete. Before desktops, we had minicomputers, and before minicomputers, mainframes; however, neither is quite gone (IBM still sells mainframes running z/OS, although most of that market is dying), and desktops are hardly going anywhere.

We haven't yet reached the point where it's reasonable to develop software on a smartphone -- which means that smartphones are about where personal computers were in 1979, but beyond where laptops were in 1989 (when the now-famous New York Times article suggesting that the age of the laptop had come and gone was written, but also, paradoxically, when the HP-95 palmtop was released -- which, as it so happens, was perfectly capable of having software developed on it if you could get along with a calculator-style chiclet keyboard). Smartphones are currently being used for light-duty applications while the heavy-duty applications like video editing, compiling, and triple-A gaming are being done on laptops and desktops -- a mirror of about ten years ago, when laptops were being used for lighter-duty applications. Meanwhile, wearables are at the same stage that smartphones were in the Palm Treo era, or that PDAs were in the Newton era -- in other words, just close enough to seem potentially commercially viable, but not quite far enough along yet to be useful as anything more than an expensive toy. Expensive toys can be industry-changing, if they're popular enough, but to do that you need to pull a Sony and sell under cost at scale. (Sony is the only company I can think of that has pulled off making its new technologies ubiquitous and cheap by selling under cost at scale multiple times -- it did it with 3.5 inch floppy disks by supplying Apple with them under cost in the mid-80s, again a few years later with CDs, and again by shipping every PS3 with a blu-ray drive. But, it's also failed multiple times with that technique -- with mini-discs, with the cell processor...)

Every few years, the entire tech industry pulls a twenty-year-old project out of the MIT Media Lab or CMU or PARC or somewhere and collectively decides to shit its pants over it. Recently, we've been hitting a quadruple-whammy: wearable computers, the Internet of Things, 3d printing, and virtual reality.

The current wearable computer boom started with Google Glass taking pretty much equally from the work that Thad Starner and Steve Mann were doing in the early 90s; appropriately, Starner was brought onto the Glass project, while Mann was completely uncredited despite the fact that they took the name from him. And, despite the fact that Glass was a complete PR disaster, Google definitely decided what parts of Starner's work to borrow with an eye toward PR -- Starner's most interesting idea, subliminal reminders, was omitted from Glass, and the Glass UI standards and Glass UI frameworks were written in such a way that subliminal reminders should be completely impossible. Now, in an almost hilariously ironic turn of events, Microsoft has taken essentially the same technology, made it stereoscopic, reframed it in terms of geolocation-centric AR (something Glass was never going to be capable of doing, by design), and turned it into a massive PR success.

In comparison, the current Internet of Things boom seems to be driven entirely by industry-wide amnesia. That amnesia is not entirely the industry's fault, since it has, until now, had a very hard time figuring out what to call the field. The current term of art is the Internet of Things, but from around 1995 to around 2005, everybody was calling it Ubiquitous Computing. The IoT is hitting a lot of the same media roadblocks as VR did in the early 90s, which makes me think that it's probably around the same point in the hype cycle, although technologically, it's definitely further along.

Ten years ago, when I was an unemployed teenager, I had two big projects that were lighting up my eyes. One of them was a wearable computer project. The other was a UbiComp project -- what you'd now call the Internet of Things. At the time, the wearable computer project was by far the less feasible of the two; displays were expensive, cpus were expensive, and making either one run off a battery while getting the thing small enough and light enough to fit on your body meant lowering its capabilities to an extreme. I designed several prototype wearable computers around the AT90S8515 -- an 8-bit microcontroller that cost $10 and had 127 bytes of ram -- and various LED-based displays, but it was clear that unless I was willing to either buy thousand-dollar equipment or strap a laptop to my back and make do with audio cues as an interface, wearable computers were really infeasible. (I ended up strapping a laptop to my back and using audio cues, in the end.) The UbiComp project, on the other hand, was completely within the realm of possibility -- I had a working prototype for a system for communal cooperative use of a single computer, based on identifier tokens stored on a cheap wiimote knockoff that doubled as an input device; the cost of the system was the cost of a random desktop computer, a projector, and a $20 wiimote knockoff. If I had had steady disposable income, I could have formed a corporation and finished my prototype and become yet another failed IoT startup -- the technology was there, solid, and absolutely trivial.

Today, IoT is even easier. My potentially-$300 computer could be replaced with a $20 raspberry pi. Wiimote knockoffs don't even cost $20 anymore. The projector costs more than the rest of the system in total, and my homebrewed account-sharing system could be replaced with the kind of cloud-based thing that newbies whip up in minutes and brag about on hacker news. A couple years ago, I did a wearable computer, too -- with about $350 worth of parts (a raspberry pi, a twiddler, a $100 head mounted display, and a USB battery pack), I built something that, while not comparable in usability to a laptop, beat the pants off the absolute best I could do with that kind of money in 2005 -- mostly because of economies of scale provided by the popularity of smartphones. PDAs manufactured in 2005 couldn't really run 800x600 color VGA, or even 320x200 color VGA -- too slow. (Maybe you could do it if you were especially clever. I wasn't clever enough to make up for my lack of riches -- wagering the cost of disassembling an expensive PDA on my ability to make it drive a display was too rich for my blood.) A single-board computer capable of running Linux in 2005 was a fucking high-end single-board computer. But, the iPhone came out -- a single board computer running BSD shoved into a PDA -- then the Android phones started appearing a couple years later -- cheaper single board computers running Linux and Java shoved into PDAs. Now the chips that run Linux in smartphones are cheap enough that Texas Instruments will give away a handful of free samples to anybody with a university-affiliated email address, complete with specialized circuitry for fast video decoding. Single board computers running Linux can be sold for $20 and make enough money to prop up a non-profit organization. Meanwhile, some nerds figured out that a series of cheap wifi chips could be reflashed, and now you can buy complete postage-stamp-sized wifi-enabled systems that can run Lua for $5.

So, we're at the point now where you can stick the guts of a smartphone on the side of your head and have a head-mounted smartphone with a battery life of about two hours, or you can stick the guts of your smartphone on your wrist and have a smartphone with a battery life of about a day if you barely ever have the screen on. Or, you can stick the guts of a smartphone in your pocket and stick a screen on your head, and actually have a reasonable battery life with reasonable usage. We aren't at the point where we can start making fully wearable never-take-em-off computers with reasonable battery life and reasonable capability, although I think that if we take a page out of the MIT Media Lab book and combine this with IoT, we might be able to make do with what we have for a little longer. This has problems -- centralized IoT is the domain of natural monopolies, with most of them fated to go the way of AppleTalk (although centralized IoT is all the rage now, with every consortium of manufacturers competing to make their own incompatible standards on the off chance that theirs will be the one to take off); meanwhile, decentralized IoT is the stuff of IT nightmares, where failures in logistics and/or security can lead to a lightbulb DDoSing your house and/or the White House. My own design, which was based on a federated model with an open protocol and a market for competing vendors, has unfortunately been obviated by time -- it was based on the assumption that the normal use would be an evolution of the cyber-cafe, and it probably would have worked in 2005, but no longer makes sense in the same universe as widespread smartphone ownership and devices like chromecast. Offloading computing from wearables onto IoT nodes will require an extreme of either security or naivete -- and because security is complicated, I fully expect a future hellworld of incredibly insecure wearable/IoT mesh networking comparable to the amazing terror of running Windows 9x on the internet in the 90s. Welcome back to an era where anybody with a modicum of knowledge can remote control your computer and nobody can patch it for five years; except this time, the computer is strapped to your face.

This is a problem that *must* be solved before the wearables become implantables. Implants need to be smaller than wearables. Right now, the state of medical device security is pretty low -- while medical device software, along with airplane control software and nuclear power plant software, has higher quality standards under normal operating conditions, it's largely no better than normal consumer-grade software when it comes to resisting actual planned attacks, and sometimes worse. We already have computers in all sorts of things -- horrible, insecure computers; our airplanes can be hijacked through the in-flight wifi network, our cars can be hijacked through the CD player, our pacemakers can be remote-controlled over wifi, and our routers are already sustaining self-replicating botnets. When these devices are on our bodies, the threats become more visible; when they are in our bodies, they become potentially fatal -- not necessarily because of malice (it takes a special kind of person to actually shut down somebody's heart by exploiting their pacemaker) but because of incompetence (it doesn't particularly take a special kind of person to try to make a botnet out of every exploitable wifi-enabled device, including pacemakers, and then not check available memory and crash the pacemakers because he's just written to an address that doesn't exist).

Implants are coming, and wearables are coming first. Implants will come both faster and slower than we expect, because they won't be used how we expect. They won't make us live longer or read faster, but instead will let us do things we haven't imagined yet. Let's fix our shit before we've got buffer overflow vulnerabilities that'll take actual brain surgery to patch.

Wednesday, May 13, 2015

Utopianism and sci-fi as machine-lit

There are several popular ways to look at science fiction as a genre. I have my own preferences. That said, the major opposing perspective -- what I'd term the 'machine-lit' school of thought -- has its merits, insomuch as it highlights a set of common tendencies in science fiction. I'd like to take this space to highlight the basic premise of machine-lit, the tendencies it breeds, and why I find most machine-lit to be relatively uninteresting.

(The third major perspective, what I call the spaceship-on-the-cover style, I find wholly uninteresting; it is the subject of other essays. However, it has lately become historically important because of some drama surrounding the Hugo awards being gamed by groups who prefer this style, so it's worth mentioning in passing.)

Machine-lit is, in a general sense, the construction of a narrative around a concept invented by the author, as a capsule intended to introduce the reader to the concept. Lots of early science fiction is machine-lit for actual machines (Ralph 124C41+ being an ideal example of how this can go wrong yet still be very influential). The works of Ayn Rand are machine-lit for the Objectivist philosophy. Big-idea science fiction novels tend to be machine-lit for the ideas they represent.

One failure mode of machine-lit is that, because the narrative is intended as a delivery mechanism for the concepts, the narrative can itself be weak or nearly nonexistent if the author thinks the ideas themselves are interesting enough. (Ayn Rand, again, and Gernsback, again -- but also major dystopian novels like Zamyatin's We and 1984). Likewise, style can be a major issue in machine-lit, with The Unincorporated Man's borderline-fanfic-quality prose depending upon its intended audience of libertarians to forgive lack of technical skill in writing because the ideas are sufficiently in-line with the ideology, and PKD's writing leaning so heavily on the ideas (not to mention the amphetamines) to pull it through (outside of rare stylistically-polished books like A Scanner Darkly).

There are definitely instances where books intended as machine-lit end up having well-developed plot and characters and a coherent and polished writing style (pretty much every Neal Stephenson book meets these criteria, as does Brave New World), but to some extent, doing so depends upon a kind of imagination and intellectual honesty that brings the book into the middle-ground between machine-lit and the world-building-based style of science fiction that I tend to champion, whose most extreme and visible example is seen in the post-Neuromancer works of William Gibson.

Another major failure mode of machine-lit is that, because of the dependence upon the central conceit of the book, if that conceit is uninteresting or unoriginal, the book as a whole fails along with it. With big-idea novels related to politics (Rand again) or philosophy (a handful of PKD books that lean too heavily on solipsism or philosophical zombies, and nearly every film adaptation of a PKD work), interest in these works falls evenly along either political-ideological or philosophical-education lines -- a communist is, largely, going to find The Fountainhead or Anthem uninteresting; someone who is familiar enough with the idea of solipsism to find it fairly uninteresting will likewise find The Matrix uninteresting, while someone who rejects Searle's Chinese Room paradox and the idea of philosophical zombies as based on an erroneous deification of consciousness will find the host of films about robots being incapable of emotion or of morality to be uninteresting. When the same idea is recycled into dozens of machine-lit works, the popularity of the idea itself can suffer, because while no longer wholly novel it will often be framed in similar ways, with similar changes based on the needs of the story or premise, by nearly identical stories (The Matrix has more in common with Simulacron-3 and its major film adaptations, World on a Wire and The Thirteenth Floor, than it does with Plato's Allegory of the Cave, from which all of them were derived). Today, talking about solipsism will make people think of The Matrix rather than, say, Descartes' "evil genius" -- and despite my general feeling that The Meditations failed to be adequately convincing, we as a society are favoring an action franchise with major and obvious plotholes over a fairly heavily considered work by a brilliant philosopher.

Again, if a text develops its characters and plot adequately, the central conceit can essentially be ignored -- a good ghost story is good even to people who don't believe in ghosts, while a bad ghost story will fail to entertain enough to motivate people to suspend their disbelief.

Machine-lit shares with the rest of speculative fiction a basis in a counterfactual model of the world. That is to say, we start our world-building by setting some axioms that, in our world, are not true, and work from there. The difference is that machine-lit, by definition, performs the basic world building then immediately jumps to narrative, then stops as soon as something resembling a completed text is produced. Within world-building-based science fiction, a much more complex world is built, and the narrative and characters stem from that world organically.

This requires a dedication to completeness and intellectual honesty, in part because genuinely following the logical progression of the central mechanism of a counterfactual world can point out flaws in its structure.

In cryptography, the first and most important rule is never to roll your own crypto -- always use a well-known and well-tested algorithm, at the very least, and ideally also use a well-known and well-tested implementation. The reason is that nobody who wants their crypto to succeed introduces flaws intentionally -- designers are blind to the flaws they introduce by accident -- and thus fatal flaws can only be identified by other people; the more people there are looking for flaws in an algorithm, the faster such flaws are found (and the longer it takes to find fatal flaws in an algorithm, the more likely it is that such flaws are difficult to find). Everyone who designs crypto professionally is also skilled in trying to break crypto: you learn to avoid the flaws that you have discovered how to exploit. Likewise in computer security -- the research arm of the computer security community consists of people who figure out how to break security and then figure out how to patch those holes.

In fact, this is a common pattern in legitimately serious enterprises. The scientific method is exactly this: suggest a model of the world, and then recruit people to attack it. The adversarial justice system is based on two groups of people presenting different models of the world and attacking each others' models. Even in philosophy, philosophers engage in critiques of the ideas of other philosophers, rather than ignoring any idea they don't agree with.

Any functional member of any of these communities will attempt, before putting their ideas out into the world, to stress-test them personally -- formulate simple attacks, determine which portions of the idea are weak and whether they can be strengthened without complete restructuring.

Machine-lit, by and large, fails to perform these sanity checks. Machine-lit is the domain of people who are so in love with their ideas that they cannot bear to test their mettle before pushing them out into the world.

An ideology at the core of machine-lit, if properly investigated, would collapse upon itself or mutate such that it fails to be an ideology. A utopia at the core of machine lit would, upon close inspection, become a dystopia; a dystopia, upon close inspection, would yield some happy and fulfilled people, making the message of the book ambiguous. An actual machine at the core of machine-lit, if properly and rigorously tested, would become at worst a patent application but possibly an actual invention.

I'm perfectly in favor of optimism in science fiction. Nothing is to be gained from keeping the genre grimdark as a rule, in the same way that nothing is to be gained from keeping superhero movies grimdark. However, utopian science fiction represents a failure to take the medium seriously -- and a shallow dystopia or cozy apocalypse is no better. Science fiction should be a genre of ideas, but there's no point if we allow our ideological biases and our love of shiny toys to turn it into a genre of shallow ideas shielded from unforgiving reality. The real world has problems, and while escapism is fine, a work cannot simultaneously be an escapist fantasy and a serious analysis presenting a serious solution to the problems it fantasizes about escaping from.

Science fiction always starts as machine-lit. But, machine-lit is a larval stage that adult science fiction works outgrow.

Saturday, January 18, 2014

On Neuromancer (a rant)

(Note: this is actually from a thread on Tumblr. After I wrote it, I liked it so much that I figured it should stand on its own. The context was that someone was comparing Neuromancer unfavorably to 1984 and Brave New World, which appears to be a fundamental misreading of its genre and its place in history. Neuromancer's setting is only accidentally dystopian, as Count Zero and Mona Lisa Overdrive demonstrate by focusing on characters of different social status.)


Neuromancer isn’t notable for its depiction of a dystopian future, but instead for its depiction of a socially realistic future wherein intent is just as meaningless as it is in reality. In both 1984 and Brave New World, the dystopias depicted are perfectly functional: dissent is systematically squashed or subverted by a mechanism that basically never fails, and the people in power are of one mind; the failure of both is that they cease to be realistic by way of being too tidy (the same way that conspiracy theories fail). In Neuromancer (and other elements of the Cyberspace Trilogy & its shared universe), obsolete future technology is being used for purposes other than that for which it was designed (something that happens in reality but rarely in science fiction, because science fiction historically has very often been about wanking over the author’s designs for a technically-driven world rather than taking into account the diversity of human desires and social circumstances), and the elites and antagonists remain in their positions despite being realistically dysfunctional.
As an example, in Johnny Mnemonic (which takes place in the same universe, set before Neuromancer by a few years), a group of primarily-black anarchoprimitivists called the Lo-Teks use the availability of cheap medical equipment to replace their own teeth with dog teeth as a fashion statement and signifier of cultural identity. In the end, nothing the Lo-Teks did had a fundamental effect on the state of the universe — they were, canonically, a failed revolutionary group with some interesting cultural attributes who in a minor way helped a naive rich white dude once to escape an unpleasant corporation, then faded into obscurity never to be mentioned again. This kind of texture violates Poe’s rules for writing — everything is supposed to support the main plot, either thematically or materially — but Gibson re-popularized a kind of non-Poe-based science fiction as social realism (and this is the punk aspect of cyberpunk — the punk sensibility is one brimming with awareness of corruption and dysfunctionality but that nevertheless would rather succumb to pragmatism and absurdism than to nihilism). Gibson writes science fiction as future history, but he writes it as a future history of regular people and he does so with full awareness of the importance of cultural texture and the tendency of human beings to expend personal resources on social posturing. Gibson himself would probably agree that Neuromancer is overrated — he considers it to be an ‘adolescent attempt’, and says that he merely took the standard caper storyline and dropped it into the universe in which several of his short stories were set. Plot-wise, it wasn’t anything special. In terms of prediction of the internet — bullshit; Gibson doesn’t really care about the internet, and didn’t know anything about it when he wrote the book. The real draw of Neuromancer (which would be all the more potent for readers who have been choking on Asimov, Heinlein, and other Golden Age writers where everything is smooth and metallic and nothing ever breaks) is the depiction of a universe that has dirt in the corners and scratches on the silverware, where there are prostitutes and drug dealers and suburban kids and soap operas, and where the protagonists and antagonists alike are fundamentally understandable people with selfish motivations and embarrassing gaps in their knowledge and secret crushes and hang-nails. The Cyberspace Trilogy is full of non-humans or semi-humans (Wintermute, Josef Virek, the Dixie Flatline) with these same flaws and obsessions.
Second- and third-generation cyberpunk loses this; fundamentally, everyone in Neuromancer is at some level a loser and a poseur, and to the extent that there’s a lack of outwardly shown emotion there’s clearly a seething rage or cynicism related to the crapsack world and crapsack circumstances they live in. When second-generation cyberpunk shows a ‘hacker’ in a black leather jacket, a blank mask of a face, and carefully bleached hair, it shows an empty shell (or at best an archetype of competence — no different fundamentally than the worst of the golden-age heroes and less interesting than the ones Heinlein wrote); the same character in first generation cyberpunk would have an inferiority complex, rips in his leather jacket, and secretly fear that he was incompetent. First generation cyberpunk is first generation Netrunner, complete with hand-drawn cards and jokes about weasels; second-generation cyberpunk is the Battleship movie: too much budget, not enough content, everything focus-grouped to hell.
Gibson didn’t invent this punk angle of cyberpunk. An obsession with social and cultural ephemera and the perversions thereof is the calling card of J. G. Ballard, to whom Gibson attributes some of his early style; a tendency toward vignette for the sake of texture may well have come out of W. S. Burroughs, although it’s fundamentally part of both normal speech and real life; a focus on the texture of ordinary reality was very popular in the socially conscious writing of early twentieth century realists, and arguably came back into vogue with the ‘new wave’ of science fiction in the 1960s. Fundamentally, all of the attributes I’ve mentioned for Gibson apply to PKD’s writing as well, and so there’s a good reason that whenever PKD’s writing gets adapted to the screen it resembles cyberpunk. But, on a certain level, PKD just wasn’t a good stylist and rarely wrote a book whose prose flowed. Other first-generation cyberpunk authors had similar problems: Rudy Rucker peppers his writing with hippie-ish neologisms even sillier than PKD’s (where PKD stopped at ‘kipple’, Rucker’s proto-cyberpunk novel Software has a brain-eating cult called the Little Kidders, calls robots ‘boppers’, and he now edits a magazine named ‘flurb’); John Shirley’s arguably-not-cyberpunk-but-definitely-punk series A Song Called Youth goes in the opposite direction and borders on golden-age-style moral clarity (while not succumbing to the tendency for golden age science fiction to be always either economically or socially conservative but never both at the same time). Where Gibson succeeded was in taking PKD’s proto-punk sensibilities and giving them the smoothness of beat poetry and the obsessiveness with minutiae that has dominated contemporary art since the end of the second world war. Gibson made a socially conscious, realistic, pop-art science fiction with a subversive edge that nevertheless went down like honey. It had all the humor of the best situationist manifestos, all the intricacy of an episode of The Simpsons, and all the professionalism of a book on typography for professional photolithographic technicians. That is why Neuromancer deserves respect.
(Pedantic note: Neuromancer came out in 1984)

Source: http://enki2.tumblr.com/post/73752574522/neuromancer

Saturday, January 11, 2014

(Off topic) Wearable computer project build details, part 1

I've had an interest in wearable computing for a long time -- and after Glass was announced and it became clear that Google wanted to produce wearable computing technology that was limited to gluing powerpoint to your face, I decided to get my act together and build something practical for once.

Since this has been a fairly long-running project, compared to my previous attempts, I figured that I might as well document it.

Step 1: Monocular Display


So, a friend sent me a partially disassembled head mounted display. Having read about the use of monocular displays, and mechanisms by which people had turned binocular displays into monocular displays (mostly tutorials for MyVu-brand displays wherein the wires were simply cut), I decided to take what was essentially a freebie display and turn it monocular.




Disassembling the display

I had a pair of goggles that I bought at an anime convention (steampunk is really big at anime conventions), so I actually just mounted the disassembled display on one of the thin clear plastic lenses with a combination of screws and velcro, then mounted the lens on the inside using velcro.


Mounting the display board and one of the control boards to the lens

The control box with its battery has been opened up and mounted with velcro on the side of the goggles
The lens is mounted to the inside with velcro
The completed monocular display mod, for this revision

Step 2: Panic, Worry, Waste Money


I was always planning to use the Raspberry Pi as the heart of the wearable computer, but I actually had quite a hard time getting the software to work. In the end, it turned out that I was using a mislabeled SD card -- it claimed to be one of the brands that was supported, but it was actually a card from another manufacturer that had been re-labeled and re-sold. SD cards, despite adhering to the SD standard, can vary significantly enough between manufacturers that a project like the Raspberry Pi doesn't actually support arbitrary brands, particularly for booting. I got a different card that was branded the same, and it worked just fine.


Initially, I thought that I had burned out the control box for the display I had. I was missing the adapter for charging it, so I opened it up and attached a USB cord directly to it (because I couldn't find an appropriately sized barrel connector); only later did I actually attempt to hook it up, and found that it was displaying a white screen regardless of what I was doing with the Pi. Later on, when I got a duplicate display, I discovered that this was actually the fault of the composite video adapter I was using -- I was attempting to use a composite audio adapter for composite video, and getting no signal. The duplicate display came with both the charger and the appropriate adapter, so in the end I didn't need to modify that box at all, nor did I need to replace it.

Anyhow, between this and lacking a handful of other components, it took a while before I got around to hacking together the rest of the project.

Wearing the display 
From the side

Step 3: Software


I wrote this software using a library that I had written for another project, and that is supposed to be released as open source some time in the near future. It's not released yet, so I'm avoiding really talking about it. Suffice it to say that it's similar to a previous project.

Software written for this project -- a zzstructure navigator -- running on the Pi, but connected to a normal TV

Step 4: Hook Shit Together


In the end, since my monocular display worked just fine, all I needed to do was connect it to the Pi. I ended up making things a little bit neater, for the sake of my sanity.

I took the separate audio and video cables leading from the Pi to the adapter and wrapped them with a bit of spiral wrap I had lying around from earlier projects. Having these catch on things individually would be extremely annoying, once I started actually wearing the thing -- I've had wires from earlier wearable computer projects catch on things like chairs, pets, and moving vehicles, with all the expected consequences. The number of cables required to attach a first generation twiddler to a laptop manufactured ten years later may well be the reason that particular unit broke (and it's very hard to find twiddlers on ebay).

I wrapped everything in a small zippered leather case. This case was, when I obtained it, marketed as a wallet -- although it doesn't actually fit in a normal-sized front pocket -- and explicitly intended for users of PDAs. As a wallet, it failed -- it was too large and couldn't hold much of anything in the way of bills or cards. As a PDA carrier, it failed -- it was actually too big for any of the various models of palm pilot or blackberry I had since I obtained it, and the velcro patch on the inside never managed to keep them in place. However, it's perfectly sized for the Pi, complete with plugs sticking out of the side and an SD card sticking out of the back. The wifi dongle I attached stuck out the top, making it impossible to zip the thing up completely, but it didn't cause enough of a gap to radically risk the hardware in the case of a sudden downpour.

I also used sugru to seal up a couple of the holes in the plastic case my Pi uses that I didn't think I was likely to use, and to insulate the unused side of the two-in-one USB cord I'm using to power the Pi. 

The full layout, at the time of writing

 
 The monocular display, functioning

Recent Changes


Using velcro for holding sensitive optics in place is a huge pain -- every time I get things displaying right, the lens moves as soon as I move my head. The velcro was self-adhesive, and it had trouble consistently sticking to the lens housing and to the plastic pane used by the goggles, so the lens would sometimes even physically fall out. So, I mounted some sugru on the pane where the velcro was, and I intend to use small screws and bolts to adjust the lens distance.

Update: the sugru didn't work -- I couldn't get an appropriate distance with it, and it was hard to trim down just with the tools at hand. However, I was in luck: despite the lens being much closer to the eye in the goggle mount than in the original system, it turned out that if I flipped the lens around in the housing and then flipped the housing around, I got perfect focus at the intended distance. So, I just superglued the housing onto the pane of the goggles, and clipped the lens in backwards:
The lens housing being glued in. The screws are just there as extra anchor points -- they are flush with the back of the housing and glued on as well. The lens has small protrusions that clip it into the housing with either orientation, so this made it pretty easy; I now have the larger half of the lens facing my eye from the outside of the housing. Keep in mind that this puts the lens very close to the eye (so, watch out for infections), and that superglue produces fumes that are irritating to the eye (so wait for it to air out before using the display for long periods). The housing won't actually fit square with the display because there are some screws that poke through from the other side holding the display itself onto the pane, so I have it shifted to one side; one side of the screen is slightly blurry, so perhaps I should have aligned the display the other way.


The unhoused control box had some problems. It has a giant lithium-ion battery pack held on with some not-very-sticky double-sided tape, which on several occasions fell off with enough force to pull free the power wires. It also was too heavy to stick on to the velcro on the goggle's vent (this is where it is mounted in the side photo, which is why it is tilted to the side). I had kept it around just by chance, but because I was so sure I had burnt it out, I misplaced some pieces of the housing and could not re-house it in the original case. So, I just wrapped it in electrical tape and taped it to the band of the goggles, after cutting some holes for connectors and for the power switch. The cable leading from the control box to the display is fairly long and stiff, so I looped it up and taped that to the control box as well, leaving just enough slack to connect the display's control board.



Future work


I still haven't solved the problem of input. Many people use tiny bluetooth keyboards of the type intended for powerpoint presentations for this, but I consider full QWERTY keyboards to be best used with two hands and unwieldy with one -- and what's the point of a wearable computer if you need two hands to operate it? I'm considering building a chordite derivative, or perhaps a clone of the twiddler -- both are fairly popular one-handed chording keyboards intended to be held in the hand (as opposed to some chording keyboards that are only good if put flat on a table). I'd like to have a joystick or a direction pad for moving the mouse pointer, unlike the twiddler (which uses an accelerometer). I have some teensys I can use for emulating a genuine keyboard and mouse, but I'm circumspect about adding more programmable hardware to this project; I may simply use the Pi's GPIO port. UPDATE: turns out that the twiddler hasn't actually used accelerometers for mousing since the 2.0, about ten years ago. I dropped $200 on a Twiddler 2.1, so I will update when I can test it. It seems like everything will work fine with it aside from using the SUPER key as a meta key; since I am running Awesome with the default configuration on the Pi, I'll need to reconfigure the command key to be CTRL-ALT instead.
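If I do end up going the GPIO route for a homebrew chording keyboard anyway, the core loop is simple enough to sketch out now. This is a minimal sketch, assuming five switches wired between ground and arbitrarily chosen BCM pins with the Pi's internal pull-ups enabled; the chord table is a toy placeholder, not the Twiddler's or the Chordite's actual layout:

    # chord_input.py -- minimal sketch of a one-handed chording keypad on the
    # Pi's GPIO header. Assumes five momentary switches wired between the
    # listed (hypothetical) BCM pins and ground; the chord-to-character table
    # below is a placeholder layout, not a real one.
    import time
    import RPi.GPIO as GPIO

    PINS = [17, 27, 22, 23, 24]          # one pin per finger switch (assumed wiring)
    CHORDS = {0b00001: 'a', 0b00011: 'e', 0b00101: 't', 0b01001: 'o'}  # toy layout

    GPIO.setmode(GPIO.BCM)
    for pin in PINS:
        GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # pressed reads LOW

    def read_chord():
        """Wait for at least one key, let the chord build up, report it on release."""
        while True:
            held = 0
            for i, pin in enumerate(PINS):
                if GPIO.input(pin) == GPIO.LOW:
                    held |= 1 << i
            if held:
                chord = held
                # keep sampling until every key is released; keep the widest chord seen
                while any(GPIO.input(p) == GPIO.LOW for p in PINS):
                    for i, pin in enumerate(PINS):
                        if GPIO.input(pin) == GPIO.LOW:
                            chord |= 1 << i
                    time.sleep(0.005)
                return chord
            time.sleep(0.005)

    try:
        while True:
            print(CHORDS.get(read_chord(), '?'), end='', flush=True)
    except KeyboardInterrupt:
        GPIO.cleanup()

Turning those characters into events that X actually sees as keystrokes is the part I'd still want a teensy (or the kernel's uinput interface) for.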

Another possible input system is eye tracking. There was an interesting student project at Cornell using cheap phototransistors and LEDs for eye tracking by bouncing infrared light off the sclera, and it seems like that should be easy to incorporate.

I've been interested in Thad Starner's presentations on a project at Georgia Tech wherein glove-mounted vibrating motors are used for non-intentional learning. In his experiment, subjects were taught to play simple melodies on the piano using these vibrating gloves, which simulated the muscle memory of rote practice. There are many things I'd like to be able to do with my hands, and some of them can probably be accelerated with this, so I've bought five vibrating motors and intend to wire them up to the Pi's GPIO port and train myself with them, as soon as I can find a suitable glove. Update (2014-01-20): Started on the glove, but I don't really have the appropriate equipment so I'm having my girlfriend finish it for me. Here's the beginning of it:
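The driving code, at least, is trivial compared to the sewing. Here's a rough sketch of the trainer side as I picture it, assuming the five motors hang off transistors on arbitrarily chosen BCM pins (the GPIO pins can't source enough current to drive a motor directly); the finger sequence is a placeholder rather than a real melody:

    # glove_trainer.py -- rough sketch of the glove's driver side. Assumes five
    # vibrating motors driven from the listed (hypothetical) BCM pins through
    # transistors; the "melody" is a made-up finger pattern, not a real piece.
    import time
    import RPi.GPIO as GPIO

    MOTOR_PINS = [5, 6, 13, 19, 26]      # thumb..pinky (assumed wiring)
    MELODY = [0, 1, 2, 1, 0, 3, 4, 3]    # finger indices, placeholder pattern

    GPIO.setmode(GPIO.BCM)
    for pin in MOTOR_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    def pulse(finger, duration=0.2, gap=0.3):
        """Buzz one finger's motor, then pause before the next note."""
        GPIO.output(MOTOR_PINS[finger], GPIO.HIGH)
        time.sleep(duration)
        GPIO.output(MOTOR_PINS[finger], GPIO.LOW)
        time.sleep(gap)

    try:
        while True:                       # loop the pattern for passive repetition
            for finger in MELODY:
                pulse(finger)
            time.sleep(2)
    except KeyboardInterrupt:
        GPIO.cleanup()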


On a less exciting note, I still haven't received the battery pack I intend to use. So, I have that to look forward to in a few days, at which point I'll be able to test how long the unit can run. I was surprised at how long the display ran on a full charge, although I expect to need to break out a cable to quickly charge it up from the main battery pack. I wouldn't like to hook it up to the Pi itself, since it doesn't take too kindly to sudden drops in board voltage; instead, I might simply attach a USB splitter to the battery pack and run one line directly to the display charger, ready to be plugged in whenever power gets too low. I'm not really sure how long the battery will last when running the Pi with hardware hooked up and also charging the display, so I may end up upgrading to this battery pack instead.

In the end, I intend to stick both the battery pack and whatever boards I need to bridge the Pi's GPIO with my own I/O devices inside the leather case, although I may not be able to fit them. The battery pack looks pretty water-resistant, so maybe I'll just keep it in another pocket.

Update (2014-01-19):

I appear to have somehow fried one of my Raspberry Pi boards. I initially thought that the power supply wasn't putting out enough amperage to run the board, but it turns out (luckily) that I was wrong -- the supply works just fine with my other Pi. I have no idea what caused the failure. Unfortunately, it's clear that the wifi dongle (which did work once upon a time) is no longer functional; I may need to replace it, or it may simply be that the battery can't supply enough power to run it properly.

The second Pi came with its own (rather nice) housing, and I'm avoiding putting it in the same box as the first one on the off chance that the box contributed to the damaged board. A likelier cause, though, is that there's some damage to the SD card holder, which may be from stress on the card itself -- having your SD card stick out of the housing by more than an inch is a design flaw, and perhaps one of the most glaring in the Pi. I suspect I will need to build a small extension to the housing for the express purpose of accommodating a normal-sized SD card, or purchase a much shorter card and transfer the image to it.

My Twiddler still hasn't arrived, although I am increasingly hopeful about its capabilities. It looks like I'll be able to get away with using it for everything, but I will need to change all my keybindings from SUPER to something else, because the Twiddler doesn't support SUPER as a meta key. If it's easy enough (as this teardown implies), I may add a hold switch, because I believe every portable device with buttons needs hold functionality.

Update (2014-01-23):
The main problem with the monocular display is that the goggles fog up quickly. Even completely removing the vent covers doesn't help much, so I've been considering dehumidification mechanisms.

One option (the easy one) is to put a bunch of desiccant packages into the goggles. Unfortunately, desiccants tend to be toxic, and they also need to be replaced frequently. In the best case, you can use salt (which isn't toxic, but which you still wouldn't want in your eyes) and bake it to drive the water out. I'd like to avoid this.

The other option is to use cooling to condense water onto a heat sink or coil outside the goggles and collect or otherwise discard it (which is how both air conditioners and dedicated dehumidifiers work). The catch is that most such systems have moving parts, which you don't really want attached to your face.

Luckily, there are solid-state semiconductor cooling devices: Peltier junctions. Better still, all a Peltier junction produces is a temperature difference between its two sides -- meaning we can also produce air movement by exploiting the pressure difference between hot air and cold air.

I'm not fully convinced that this will work, but the idea is that the expanding air on the top (hot) side of the Peltier pump will push through the tube into the heat sink on the cool (bottom) side, and the movement of the air will create just enough suction to pull the air in the goggles through the cool heat sink, where it will condense some of its water, which will then drip off. I will probably need to mount a thermistor on the top side and build an automatic shut-off that kills power to the heat pump when it gets too hot.
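The shut-off itself could be very simple -- a sketch assuming a 10k NTC thermistor on the low side of a voltage divider, read through an MCP3008 over SPI, with the Peltier's power switched by a MOSFET or relay on a GPIO pin; the component values and cutoff temperature are assumptions:

    # Hypothetical thermal cutoff: a 10k NTC thermistor on the low side of a
    # voltage divider is read through an MCP3008 over SPI, and a GPIO pin
    # drives the MOSFET/relay powering the Peltier. All values are assumptions.
    import math
    import time
    import spidev
    import RPi.GPIO as GPIO

    spi = spidev.SpiDev()
    spi.open(0, 0)
    spi.max_speed_hz = 1000000

    def read_adc(channel):
        """Read one 10-bit sample from an MCP3008 channel (0-7)."""
        raw = spi.xfer2([1, (8 + channel) << 4, 0])
        return ((raw[1] & 3) << 8) | raw[2]

    PELTIER_PIN = 21        # assumed GPIO driving the MOSFET gate
    THERM_CH = 2            # assumed ADC channel for the thermistor divider
    SERIES_R = 10000.0      # fixed divider resistor (ohms)
    NOMINAL_R = 10000.0     # thermistor resistance at 25 C
    BETA = 3950.0           # typical beta for a 10k NTC
    CUTOFF_C = 60.0         # kill power above this hot-side temperature

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PELTIER_PIN, GPIO.OUT, initial=GPIO.HIGH)   # start with the pump on

    def adc_to_celsius(raw):
        """Convert a 10-bit divider reading to temperature via the beta equation."""
        resistance = SERIES_R * raw / (1023.0 - raw)
        inv_t = 1.0 / 298.15 + math.log(resistance / NOMINAL_R) / BETA
        return 1.0 / inv_t - 273.15

    try:
        while True:
            temp = adc_to_celsius(read_adc(THERM_CH))
            GPIO.output(PELTIER_PIN, GPIO.LOW if temp > CUTOFF_C else GPIO.HIGH)
            time.sleep(1)
    finally:
        GPIO.output(PELTIER_PIN, GPIO.LOW)
        GPIO.cleanup()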

Update (2014-02-01):
I received the Twiddler, and last Tuesday I took the unit out for a test at the local hackerspace:

and also at work:
The battery life seemed pretty OK, but I hadn't gotten a replacement wifi adapter yet, so I didn't put much trust in that figure.

Today, I installed the new wifi adapter (an Edimax EW-7811Un, based on the RTL8188CUS and very common among Pi users) and took the machine for a walk. I wore it, kept it powered on, and stayed connected to wifi from around 11 AM to around 5 PM without recharging, which I count as fairly successful.

There are two caveats. The first is that the adapter doesn't seem to like mixed 802.11b/g networks (which is what my router provides): an unsecured mixed-mode network shows up as a WEP-enabled ad-hoc network. Switching my router into b-only mode fixed that. The second is power: even after the switch I still couldn't connect -- it turned out that if I plugged in the battery pack, I could connect just fine.

The SD card did fine as well, which was somewhat unexpected, since the previous one had its holder shear off in short order after much less wear. I credit a combination of the extra support in the laser-cut box I'm now using and the unexpectedly good performance of the hacked-together SD-card protector I stuck on. I had a bunch of rubber belts lying around that were just slightly smaller than the perimeter of the box, and I wrapped one around the end of the box so that it extends out just past the end of the SD card.
The belt wraps around the end, and juts out just past the edge of the SD card. The battery is held on with rubber bands, for now.

The wifi dongle is just below the keyboard connector here, so it can't be seen. The battery's wire trails around.
As a side note to the de-fogging saga, I've noticed that fogging doesn't happen nearly as much outside in the cold (for all the obvious reasons), but that this doesn't really help because outside light coming through the left side of the goggles actually overpowers the light from the display. In addition to defogging mechanisms, I will need to figure out a way to block out light. I may roll that into attempts to encase the electronics and protect them from the elements.

Update 2014-02-14:
A few weeks ago, while I was installing an opaque mask on the lens, the connection between the two lens-mounted boards broke. On the display side there's a socket, but on the other board -- the one the (USB-lookalike) connector from the controller plugs into -- the wires are surface-soldered to vias. Anyone trying to replicate this project should probably desolder these wires and re-solder them through the other side of the vias: the torque from the cable leading to the controller is enough to pull the velcro off, so it was only a matter of time before surface-mount joints like that pulled apart.

I made the mask out of thin cardboard and electrical tape, using a spare lens as a template and cutting out a rectangle where the display is affixed. It sits between the lens and the screw-on abutment that holds replacement lenses in.

After adding the mask and redoing the solder, I folded velcro around the edge opposite the wires I just soldered in and affixed the board to the velcro vertically; I also changed the position of the cable to avoid producing so much torque on the wires.


The next step for this project is a protective casing for the lens-mounted boards. I wanted the casing to be transparent, but barring finding a glass blower or an expert Lexan-bender to produce a small box, that doesn't look feasible, so I may have to settle for an opaque printed case.

Update 2014-04-25:
Adafruit published a tutorial for an extremely similar design. Their version is slicker and easier to follow, so if you are trying to build something like this, I recommend primarily following their directions (particularly for the disassembly portion). If you are trying to build this unit exactly, follow their disassembly instructions and then follow mine beginning with step #4.

A future path I'd like to explore is using something like this for a true see-through overlay. My general idea is to remove the goggles entirely, use the Adafruit version of the mount with the elbow either inverted (so the display faces the opposite direction) or straight (so the display juts straight out), and then mount an extra Fresnel lens (if necessary) and a half-silvered mirror at a 45-degree angle in front of the eye:

Friday, December 13, 2013

RahXephon as a tortured metaphor for filmmaking

(Author's note: I will assume that you have seen all of Evangelion and RahXephon. I will assume that if you haven't, you don't particularly care about spoilers. I may well introduce something you consider to be a spoiler about some other film, too. If you're that sensitive, go fuck yourself; you have no business being on the internet without having watched to the end of every obscure movie in existence. Have a nice day.)

Films as a metaphor for filmmaking have a long history in the West, and filmmaking metaphors in film (intentional or otherwise) are even more common. Even when the metaphor is fairly direct (as in David Lynch's Mulholland Drive or Epic), there is an element of the surreal that muddies interpretation: films about film are strange loops, particularly when they cover topics like artistic integrity (Mulholland Drive) or advances in special effects (Inception, The Thirteenth Floor, FX, F for Fake). Further confusing matters, any film about illusion or trickery can be interpreted as a film about film (because the trickery in any film is implemented via film trickery -- something audiences are fully aware of, and something that often gets lampshaded). On top of this, inversions of the rule of film trickery reinforce the maze-like nature of illusion, even as they are laughed off as fourth-wall gags (Freakazoid).

Evangelion is an interesting (if probably well-worn) case, insofar as it's widely acknowledged that the surreal content in the original series rose at approximately the same rate as the extent to which events in the series mirrored the political and financial circumstances surrounding its production. It's also a paradoxical case: when our sound stage and filming equipment are revealed (in the last two episodes of the series, and again in the movie), the iconography of filmmaking equipment is purely metaphorical -- no such equipment was used in the making of the series itself. In fact, this reveal is a kind of joke; in addition to being a reference to the French New Wave films that inspired some of the unusual cinematography (Anno is a fan of unusual camera positions, which he admired in live-action New Wave films -- which in some notable cases broke to showing a sound stage with lighting and filming equipment), it's an acknowledgement that (like the psychodrama that accompanies it) there is a second layer of illusion. A straightforwardly cheeky use of this gag would be to show pencils drawing the characters, and to show frames degrading into cels (as was done in Gainax's earlier Otaku no Video -- also semi-autobiographical); instead, they show a simulacrum of filming equipment surrounding their simulated characters, with the implied simulated pencils and inks still hidden.

But, what about RahXephon?

To the extent that RahXephon is discussed at all, it's often dismissed as a rip-off of Evangelion. This is at least somewhat reasonable: both take the giant mech genre and deconstruct it by painting an unflattering picture of the kind of organization that would employ such weapons; both add a heavy dose of surreal imagery and psychodrama; both put a largely unwilling and cowardly protagonist in the pilot seat, surround him with more dominant female characters, and take away his agency almost any time his actions are relevant to the plot. Both even employ secret societies of German origin, dream sequences involving trains in the late afternoon, episodes wherein the main character unknowingly kills a friend who has been transformed against his or her will into an enemy, and episodes wherein the pilot and the mech are trapped alone in an isolated pocket universe where time moves differently; furthermore, both hinge on the crucial point that the protagonists and their enemies alike are puppeted by a single organization toward a larger goal. On the level of aesthetic decisions, both have soundtracks that lean heavily on slightly modified classical works in the public domain, and both have opening themes featuring a female vocalist whose performance is broken up by a staccato backing rhythm.

I see one other similarity, but one that probably does more to legitimize RahXephon as its own work: where Evangelion introduced its filmmaking metaphors late, RahXephon was always and entirely about the medium of film.


RahXephon begins by introducing us to the main character in what we take for a representative portion of the world in the near future. He lives in Tokyo, has a pair of close friends, and goes to school on the train. We learn in the first episode that most of the world has been destroyed, and only Tokyo is left. Everything aside from this reveal seems quite normal, so as viewers we accept it: this is a science fiction show set in a world where only Tokyo survived some cataclysm that destroyed the rest of the world.

The first indication that anything is abnormal from our main character's perspective comes when the city is attacked -- but this is a normal reveal; after all, there would be no mech if there were no threat to the city. We get a second revelation of abnormality when our main character's friend spontaneously disappears, a third when an unfamiliar figure knows our main character's name, and a fourth when it is revealed that the mysterious men in black attacking her have blue blood (probably a reference both to Blue Christmas and to Evangelion's "pattern blue", signifying an angel). By the end of the first episode, the worldview shared by the main character and the viewer has been built up and already systematically subverted, and every cliche reveal hides another behind it: when the attack first occurs, the audience expects an Evangelion-like sequence (supported by the physical similarity between Haruka Shitow and Misato Katsuragi, the similarity of Mishima Reika's unexpected appearance and disappearance during the first attack in RahXephon to Rei Ayanami's unexpected momentary appearance during the first attack in Evangelion, and other structural and cinematographic echoes); when we first see the men in black, we may connect them to Section 2 in Evangelion, but once they are revealed to be nonhuman we can link their fight scene to similar fights in X/1999, The Matrix, and the Animatrix between a mysterious character and shade-wearing non-human MIBs. But those expectations are dashed again -- Kamina misses the train (another inverted Evangelion reference, probably), and we later discover that the cold, distant mother figure is the orchestrator of the attack (and in a position of power within some government organization, unbeknownst to her son). Then we get entirely into surrealist territory (quite literally), and the Xephon awakens.

This is all to say that the entire first episode is about trickery, allusion, and the systematic creation and frustration of expectations. While any film or TV show has some of this, RahXephon is extremely heavy-handed with it during the first episode (and far less so for most of the remainder of the series). While it may initially seem as though the density of plot in the first episode is a flaw on the writers' part, it becomes clear in retrospect (as many less cluttered episodes pass) that this was a conscious decision.

In the following episode, we discover that Ayato and Haruka have broken free of Tokyo Jupiter. Our entire first episode, which focuses on trickery and deception, took place inside a pocket universe formed for an unknown reason and ruled through systematic deception. The world outside Tokyo has not been destroyed -- instead, it has been made artificially inaccessible by the very people who teach that it has been destroyed.

A second piece of information is revealed about this pocket universe (one that fans of Inception may latch onto): time moves differently there.


A film, for practical reasons, must operate outside the realm of real time. Otherwise, films whose events did not take between one and two hours would not fit the feature-film format, and films whose events take place over days or weeks would be impossible to watch. Films are an artifice, created by a group of people, shown to an audience in an artificially isolated setting (a movie theatre), and reliant upon trickery to immerse viewers in a false but internally consistent world.


Later, Ayato is told that he is an 'Orin'. It's unclear to me what this means -- it is never translated. It may be a pun: one of the recurring symbols is a bird, and 'Orin' echoes the Greek root for bird ('ornis', as in 'ornithology'). However, he is told that an Orin by definition has artistic talent, and that the Orin is destined to "tune the world". As much as this mirrors End of Evangelion, X/1999, Ideon: Be Invoked, and other similar works to which RahXephon could be unfavorably compared, I don't think it's another throwaway cliche. A running theme is that both Mishima Reika and Quon talk to Ayato about the abilities and duties of the Orin in musical terms, and he replies that he doesn't understand, because he is a visual artist with no musical talent. The relatively derivative (albeit striking and formally excellent) visual style of RahXephon stands in contrast to its spectacular sound design; in addition to being formally superior to other shows of the time and having a much better sense of detail (both in the use of BGM and in sound effects), there's an intense classical influence that makes Evangelion's musical references seem poseurish, and the creators' knowledge of the physics of sound and of music is reflected in the content of the plot (particularly in Quon, but also in the in-show descriptions of how Dolem attacks operate). At the end of every episode, where Evangelion told us to expect "more fanservice", RahXephon informs us that "the world is saturated with sound".

The themes of artistic ability and its effect on the world come to a peak at the end, when Ayato fulfills the prophecy, ascends to godhood, and re-creates the world in his ideal image. The leitmotif shared by Mishima Reika, Quon, and the tuning of the world is a portion of the melody from the Polovtsian Dances, which became popular in the 1950s after it was used as the melody of 'Stranger in Paradise', a song from the musical 'Kismet'. This melody appears in several variations on the soundtrack, including a distorted and discordant version called "Way to the Tune". The title of that most famous adaptation is worth pointing out, because the goal of the entire Xephon project was to create a paradise. As Ayato approaches this goal, the artificial structure of hostility between TERRA and the government of Tokyo Jupiter breaks down; characters who had formerly communicated only in secret now meet in the open, and the old man who always appeared to be in charge reveals that this was his plan all along.

It is my hypothesis that RahXephon is wholly about the production of a film or TV show. Tokyo Jupiter resembles the world of a film in more ways than I have mentioned, without even getting into the philosophical arguments about film and dream that drove Inception. Our shadowy governing structures, bifurcated but joined by personal connections even as they are artificially separated, resemble the red-tape network of unions and responsibilities familiar to filmmakers and students of film. Our old man, the producer, funds the whole thing. Thrown out of his imagined universe and artificially kept outside it, our Orin/director fulfills the prophecy that he must edit the world to his liking, using his aesthetic sensibilities, while semi-incomprehensible sound designers (Quon and Mishima Reika) speak riddles past him. A bit player breaks down over the artificiality of his role (D is for Designer Children, D is for Defective). Crazy social and family ties are suspended for the duration of the production, explicitly in order to play artificially adversarial roles.

You could probably even argue that RahXephon is about the creation of Evangelion.