Monday, August 17, 2015

Peter Watts & p-zombies

I was surprised, upon listening to a two-part interview with Peter Watts, to find him tentatively supporting Chalmers's positions on qualia and the hard problem. Part of the reason for my surprise is that Watts is an (ex-)scientist with a background in biology and neuroscience, and is also both very intelligent and spectacularly good at not avoiding unpleasant trains of thought. The other reason is that I had read Blindsight and interpreted it as an amazingly good takedown of the Chalmers philosophical zombie idea, along much the same lines as Dennett's.

This essay will contain spoilers for Blindsight, probably. Also, spoilers for the epistemology of Chalmers and Dennett. If you don't like to learn things in orders not officially sanctioned by the establishment, I recommend you at least read Blindsight first -- it's a great read, and Watts has been nice enough to put it online for free.

Chalmers presents the idea of consciousness as indicated by qualia -- a representation of the subjective feeling of the outside world. His position, in my understanding, is that subjective feeling is a more difficult thing to model than other properties of the world. While I'm not sure about Chalmers himself, other people have used this idea that qualia is a "hard problem" as an excuse for reintroducing Cartesian dualism into the world of epistemology -- by claiming that qualia is so difficult to model that not even straight-up neurons can model it, and that we therefore need to bring in quantum microtubules or some other structure as a stand-in for the soul.

A lot of people have been suspicious of the idea of qualia. After all, isn't a representation a representation? Isn't a subjective representation just a second-order representation? I agree with Dennett when he argues that qualia is an unnecessary complication with no evidence for it. I would furthermore argue that it's a case of preferring a mysterious answer to a mysterious question. Complex behavior can be difficult to predict not because it's irreducible -- not because each piece is complex -- but because lots of simple pieces combine in a complex way. There is, however, a general tendency among people to try to keep emotional parity with explanations: mysterious things need to be explained in a way that retains the mystery, or else you've lost the mystery; negative events can't be explained as an interaction between purely positive intentions, or else where did the negative essence come from? But reality doesn't deal in emotional valences, and so feelings of mystery do not need to be conserved.

Chalmers came up with a fascinating thought experiment in order to "prove" the existence of qualia. He suggested the idea of a 'philosophical zombie': a person physically and behaviorally identical to a regular person, but without qualia. Because qualia cannot be tested for, such a person would be completely indistinguishable from a regular person.

Somehow, a lot of otherwise intelligent people thought that this was a good argument. I can't see the invisible dragon in my garage, and therefore it must exist.

In Blindsight, Watts plays with a few variations on the philosophical zombie idea. He puts forth vampires who are said to lack qualia -- along with other cognitive anomalies that benefit a humanoid with a very different position in the food chain. Certain optical illusions and cognitive biases don't work on them. They have some differences in social behavior. They are largely lacking in empathy, without having the problems with impulse control that tend to be comorbid with lack of empathy in human sociopaths. A vampire, along with a split-brain patient, a personality collective, a person with extreme sensory modifications, and assorted other neurodivergents, takes a space trip to meet a colony of intelligent starfish/squid-like aliens that are determined to have neither qualia nor a sense of identity.

But, the ideas about qualia don't line up here. I assumed it was on purpose.

Rather than 'qualia', each of these neurodivergent characters has some facility or attribute missing or strongly modified that is very clearly defined and very clearly not the same as qualia. And furthermore, each of these characters has very different behaviors based on their divergence from the norm. (This is along the same lines as the Rifters trilogy, particularly Starfish -- we're basically talking about circumstances where people who are psychologically and neurologically maladapted to normal life in a normal society end up being very well adapted to a fundamentally different environment.)

In other words, it's a strong argument against philosophical zombies.

At the end of Blindsight, our protagonist gets back within radio range of Earth and can tell that it has been taken over by the vampires, because Earth has stopped broadcasting music and entertainment in favor of utilitarian communications. The vampires aren't philosophical zombies, because they can be distinguished from humans: the particular kinds of things that they don't experience lead them to live in a more utilitarian manner.

Indeed, no novel could deal with philosophical zombies. Because, by definition, philosophical zombies could not be distinguished from normal people. A novel about philosophical zombies could not be distinguished from a novel with no philosophical zombies in it.

Now, the argument for qualia is that, while human beings can experience something through their senses (like the color green), that experience cannot be identified in the brain itself. There is no neuron for 'green', and even if there were, the neuron itself wouldn't be 'green' or contain the concept of 'green'.

This argument has a handful of big flaws, some of which have been dissected elsewhere, so I'm going to dispatch it as efficiently as possible. First off, while some things do seem to have dedicated neurons (this is the 'Grandmother Neuron' model), most things don't -- but this is not terribly unusual; we are very accustomed to another system for modeling the world in which some configurations of state have single symbols and others have sets of meaningfully interconnected symbols: language. The word 'green' is not necessarily green -- in fact, it might be printed in red -- and does not contain the concept of green, but instead gains its meaning from its relationships to other things. Ultimately, we can say that it gains real meaning by standing in relationships with other symbols in a manner that represents some configuration of the outside world as perceived through some people's sensory apparatus, and gains utility insomuch as it allows us to communicate and make predictions. However, we can have syntactically meaningful configurations of symbols that could not have any semantic meaning -- colorless green ideas sleep furiously -- or syntactically and semantically meaningful configurations of symbols that could not represent our universe -- Maxwell's demon mounted the pink unicorn's dragon-skin saddle and rode off at six times the speed of light in order to find some anti-entropic material and transmogrify it into orgone. Since language does this, there's no reason for the brain to be incapable of it; since the brain makes language, the brain must be capable of doing it. It's also not mysterious -- even toy languages with heavily simplified grammars designed for computers to manipulate can do this kind of thing (think RDF, or PROLOG).
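
To make the "meaning lives in the relationships" point concrete, here is a minimal sketch of an RDF-style toy knowledge base. It's my own illustration, not anything from Watts or Dennett, and the vocabulary is made up; the point is just that the token 'green' is a plain string whose 'meaning' is nothing more than the web of relations it participates in.

```python
# A toy triple store, loosely in the spirit of RDF -- purely illustrative.
# The symbol "green" is not green and contains no greenness; what it "means"
# here is exactly the set of relationships it participates in.
triples = [
    ("green", "is_a", "color"),
    ("grass", "has_color", "green"),
    ("emerald", "has_color", "green"),
    ("green", "evoked_by", "medium-wavelength light"),
]

def relations(symbol):
    """Every triple in which the symbol appears as subject or object."""
    return [t for t in triples if symbol in (t[0], t[2])]

for subject, predicate, obj in relations("green"):
    print(subject, predicate, obj)
```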

As someone who has a background in biology and neurology, who works with words and language professionally, and who thinks deeply and clearly about most things, I would expect Watts to make these same judgments. If he has a counterargument in favor of qualia, I'd like to hear it. But, my general position is that to the extent that something that behaves similar to qualia exists, it is symbol manipulation, and to the degree that something like consciousness exists, it is something like self-simulation.

Wednesday, August 12, 2015

On rationalist fiction

(Originally posted here)
Let’s ignore for a moment the association between ‘rationalist fiction’ and HPMOR, LessWrong, etc. Let’s ignore the cultural associations here. Let’s ignore that most existing rationalist fiction by word count is HPMOR and most existing rationalist fiction by title count is MLP fanfic. 
Because the techniques and guidelines given for rationalist fiction are a toolset not uniquely tied to the cultural and political atmosphere that gave rise to them, and they can be used in a different context.
One problem with popular fiction – a problem that negatively impacts representation, and one that Storming the Ivory Tower & others have mentioned critically – is that characters make dumb or out-of-character decisions in order to avoid interfering with the plot’s existing ideological constraints. This is a killer of meaningful representation – because non-white-male characters, being typically supporting characters, are the first to be compromised for dramatic effect – and it’s a killer of immersion.
Let’s look at the toolset, and look at how these tools can be used to prevent flaws in characterization.
1) Level-1 intelligent characters consistently behave in ways that are realistic for people with interiority. In other words, they act in their own self-interest unless they have abiding reasons not to. (This can be taken too far, in the sense that you can easily drop realistic character attributes that might cause real people to act against their own self-interest, like infatuation or various forms of mental illness. I would argue against this strong interpretation, in favor of clearly establishing character traits that might cause deviation from self-interest before plot points hinge upon them – someone with anger-control problems should be shown being enraged in a context that establishes their tendency for anger to overwhelm their other traits, for instance. Furthermore, particular circumstances can definitely make poor self-control or other self-defeating behavior reasonable. However, the general rule should be: err on the side of the characters behaving in a reasonable way, and make the justification for any self-defeating behavior clear.) This particular rule prevents us from grossly misrepresenting the decisions of minor characters in order to move the plot along – it does away with the ‘magical negro’ trope, ‘fridging’, ‘badass’ female characters who are inexplicably kidnapped, and every other case wherein a character is sacrificed to the plot; it can furthermore act against inaccurate and stigmatizing representations of mental illness, since even characters with established mental illness will act rationally within their mental framework (we cannot excuse a character’s behavior with “well, they’re crazy”, but we can point to manic or depressive behavior and explain it if the character is established to have those patterns of behavior).
2) Problem-solving tools are introduced prior to their use, and the setting and world rules are established prior to those elements being important to the plot. In other words, we cannot introduce a previously-unknown force during a dramatic moment that kills off half the characters. (It is possible for the characters to be unaware of these things and act accordingly, but the reader should not be.)
3) Stereotypical shorthand is not used to code particular types of characters; instead, the beliefs of the characters are extended out to produce their behavior. (In the original form, this was limited to intelligence – we eschew technobabble, bowties, and other nerd stereotypes in favor of spending more mental effort planning the behavior and thought processes of level-2-intelligent characters than level-1-intelligent characters. I think we can reasonably benefit from extending this to all characters.)
4) Eschew a good-versus-evil conflict in favor of a good-versus-good conflict. (In other words, avoid straw villains. If two intelligent, knowledgeable, and reasonable people cannot disagree on a subject, it does not make for a good conflict. However, to the extent that it is reasonable, two characters on opposite sides of the conflict can have access to different information.) As a corollary, avoid conflicts without an intellectual basis: while conflicts over resources are realistic, the intellectual justifications of imaginary conflicts over imaginary resources are relatively uninteresting.
5) When a primary character makes an important and well-reasoned decision, step through their reasoning. (This should probably be done for both protagonists and antagonists; furthermore, I would do this for supporting characters for any important decisions they make.) Showing your work prevents you from accidentally causing a character to make a poor decision out of laziness, and establishes the interiority of the character to the readers, exposing beliefs that their actions may not have made as explicit as the author thinks. When performed with antagonists, it is a useful tool for avoiding straw-man antagonists and for avoiding giving readers the impression that an antagonist is a straw man.
As for problems I have with the original formulation of rationalist fiction, I have a few:
1) Assuming all characters behave rationally all the time is not only unrealistic but also problematic. A clear head is a privileged position provided in part by resources like time, energy, food, and lack of mental and physiological stress. Behaving in one’s own interest in a complex and forward-thinking way when any of these resources is removed is a skill that must be learned from practice, and a skill that is rare. Furthermore, the occasional unconsidered decision – so long as it is not *transparently* stupid – injects some entropy into a story that would otherwise become a relatively predictable game of minimax. The general rule of thumb should be that poor decisions are rare and irrational decisions are both even rarer and justified, and that furthermore those decisions should lead to a more interesting world-state than could have been trivially predicted (but one that could with some effort be predicted).
2) While we all at times learn a great deal from the thought processes of others, it’s presumptuous to hold up the internal monologues of rationalist fiction as a model of rational thought. Certainly some readers will find certain ideas and strategies clever; however, an honest author will learn just as much from her own attempt to construct these trains of thought. Rather than considering these as models of rationality, consider them as interesting exercises in critical thinking and rhetoric – to benefit the author as much as the reader.
3) Trying to teach science in these stories is kind of stupid. Even if there’s a plot-justification, it’s liable to seem out of place; furthermore, you risk painting some of your characters into a corner in terms of justifying their familiarity with a topic and their ability to illustrate it – since teaching is a skill, and many characters are completely justified in not having it (particularly if it interferes with their other character traits). While Yudkowsky recommends avoiding it without sufficient plot justification, I would recommend avoiding it altogether unless it cannot be avoided.
4) Claiming that the only piece of media to do this before HPMOR is The World of Null-A betrays a lack of awareness of whole popular genres. Yudkowsky has seen Naruto, but despite his interest in rationalism and intelligent protagonists, is totally unaware of Spiral, Umineko, and every other anime that fits his ruleset to a tee? Having a highly rational protagonist engage in a battle of wits against highly rational antagonists is a pretty common pattern, and many of these examples keep even my extensions of the rules. I will admit that Spiral arguably fails in its attempt to remain rationalist fiction in its second half; nevertheless, Yudkowsky has described a set of rules defining the best of a genre that is alive and well across the Pacific, and is claiming to have invented it.

Interacting with Fiction

(Originally posted here)
This essay may be disorganized. Treat it as a brain dump on the material, rather than a serious analysis.
I’d like to discuss a few different kinds of interactive fiction, coming from different traditions and with different attributes. I’d like to discuss how the forms themselves play with ideas about constraint and agency, and how treating them seriously might change the way we think about fiction and fictional worlds. I’d also like to discuss how each of these subverts certain ideas about interactive fiction taken from non-interactive fiction, and to make connections between these forms and other related forms that I haven’t seen made, due to accidents of history and genealogy.

Dramatis Personae

I’d like to introduce our fictional forms, along with their attributes, an exemplar of each form, and a few other forms that bear similarities.
Classic IF: Also called the ‘text adventure’ genre, Classic IF (which I will use interchangeably with ‘IF’ in this essay) is written fiction in the form of a computer program that can be interacted with via free-form text input. The exemplar I choose is Colossal Cave Adventure. Usually, when people talk about ‘interactive fiction’, they mean this. Most of the attributes of classic IF carry over into the ‘point and click adventure’ genre, because historically, most creators of point and click adventures started out in text adventures; I am treating the ability to click on any object in a crowded scene as being of the same class of player agency as free-form text input for the purposes of this essay, and using IF to refer to both forms, for reasons that will become clear in the next section. Genre conventions in classic IF include difficult puzzles and a stance of habitual contempt for the player. Player habits developed by this form include exhaustive searches of possibility space (picking up all objects, trying all verbs, clicking everywhere on the screen).
Visual Novels: Also called ‘VNs’, visual novels consist of sequences of scenes interspersed with player choices. Visual novels differ from classic IF in that player choices are strictly limited — typically no more than four options are ever given, these options are clearly presented to the user (no free-form text input), and the options chosen almost always cause meaningful narrative changes. If classic IF has a maze structure, VNs have a tree structure. I’ve chosen Everlasting Summer as an exemplar of the form, because it’s free & contains many of the genre-typical attributes and features. Genre conventions include plotted routes based on romantic pairings (being associated romantically with a particular character will give you a very different sequence of choices and events than with another character) and framing devices involving time travel. Player habits include re-playing in order to play through all possible routes (or at least, get all possible endings). Many recent Twine games are similar in structure to visual novels, and so I would classify them the same way; while some FMV games are best classified as part of the point and click adventure genre, many are better grouped with VNs.
Wiki-based Choose Your Own Adventure stories: While these are not typically considered in essays like this, I think they add several interesting dimensions of possibility. My chosen exemplar is the Infictive Research Wiki Adventure. Wiki adventures have a primary method of play similar to visual novels, but differ in that players can modify scenes and options.
Fan work: Here is where we get a bit meta. Fan work, also called doujinshi, is the blanket term for any creative work related to a franchise not made by the franchise license holders. If we include fanon in this definition, we can classify it as a genuine interaction with a static fictional world that can result in apparent mutations to that fictional world. My exemplar is the fan theories subreddit.
A note on our characters: I have avoided classifying the behemoth of triple-A games as part of interactive fiction because in modern high-budget games, gameplay mechanics and visual sophistication often take priority over storytelling, and to the extent that storytelling is done it is entirely non-interactive. Unless the player character can meaningfully change the story being told (in a more complex way than winning or losing) and the story being told takes a prominent role in the experience, I would not classify a game as interactive fiction. As far as I’m aware, the only recent triple-A game franchise to meet these criteria even as well as the least suitable VN has been Mass Effect; however, that franchise also struggled with a perceived betrayal of the fanbase’s expectation for meaningful interaction with the fictional world during the end of the final game. Because our focus is on agency and constraint in interactive storytelling, my position is that games that allow the player character free and detailed movement in 3d space (or indeed 2d space) are, generally speaking, providing levels of agency superfluous to the goal of storytelling and potentially directly counter to it. The fact that these games often mimic the styles of non-interactive forms of storytelling like film for their storytelling elements, while their primary gameplay mechanics are of no use during designated storytelling portions, indicates that storytelling and gameplay are considered to be separate domains potentially at odds in this kind of game, while the genres I am focusing on have gameplay elements that directly interact with the structure of narrative.

Agency and meta-agency

In classic IF, the player is in control of a player character. His control is, generally speaking, limited to physics — he can control the player character’s geographical location in the game world, pick up and manipulate objects, and have limited interaction with characters, based on the limits of the command parser and the variety of interactions planned by the game designer. I call this physical and limited-conversational agency: the player can manipulate the physical state of the game and initiate entire pre-scripted conversations.
In a VN, the player is also in control of a player character. However, the player’s decisions are much more limited. Rather than being able to try whatever obscure sequence of words he can imagine, the set of possible options is laid out. The responsibility for enumerating the possibilities of the world has moved from player to developer, which makes for easier play — no rules are hidden. Classic IF will appear more mysterious than a VN of similar complexity, and it is possible to have options in a VN that in classic IF would make it unplayable because the player could not reasonably be expected to guess them. In both IF and VNs, the world is crystallized and all possible narrative paths through the world have been predetermined; however, in a VN, because of the requirement that these options be enumerated, we have limited the player’s agency to actions that have meaningful narrative effects. I call this narrative agency: the player’s actions directly select which path to take through the story tree.
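As a concrete illustration of narrative agency over a story tree, here is a minimal sketch in Python; it is purely hypothetical (no real VN engine is this simple, and the scene names are invented), but it shows the essential structure: every scene exposes an enumerated list of choices, and play is a walk from the root to a leaf.

```python
# Minimal sketch of a VN-style story tree (hypothetical, not any real engine).
# Each node is a scene; each outgoing edge is an enumerated, visible choice.
story = {
    "camp":             ("You arrive at the summer camp.", ["talk to the girl", "wander off"]),
    "talk to the girl": ("Her route begins here.", []),
    "wander off":       ("You find an abandoned bus stop.", ["wait", "turn back"]),
    "wait":             ("Ending A.", []),
    "turn back":        ("Ending B.", []),
}

def play(node="camp", choose=lambda options: options[0]):
    """Walk the tree; `choose` stands in for the player's narrative agency."""
    text, options = story[node]
    print(text)
    if options:
        play(choose(options), choose)

play()  # the default policy always takes the first listed choice
```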
In a wiki adventure, we have both narrative agency and meta-agency. A player can take whatever choices he likes, but can also create new narrative paths. The story is crystallized until the user decides to change it. Furthermore, there is a social element: stories are being mutated by a group, and feedback loops cause strange attractors in the group’s psychology to manifest in the fiction.
Finally, in fan work, we have only meta-agency. Fan work itself has no protagonist; the player navigates his own mental model of a narrative and creates new narratives from it. Once these narratives are released into the world they are crystallized; but, their mutability is ensured because new versions can be created by other fans. Occasionally, fan work creates a culture significantly divorced from the original and invents a very independent narrative universe, based more on trends and patterns in the fanbase than on any genuine attributes of the supposed source material — an extreme form of the feedback loops found in wiki adventure, generating narrative simulacra.

Completeness

A common habit of VN players is to get 100% completion — to visit all routes and view all possible outcomes. On one hand, this is a show of dedication, and an in-group signaling mechanism: VNs can be extremely long, so getting 100% completion is often time-consuming in addition to requiring some careful note-taking and book-keeping. Some VN engines include features to aid in keeping track of options and routes already taken, or features useful only on re-play (such as skipping over already-seen content). On the other hand, this kind of completionism is a godlike ability to model the entire work completely — akin to viewing every alternate timeline in an Everett-Wheeler MWI universe. This completionism is made possible by the enumeration of responses. It is not possible in classic IF, which can have a structure of similar complexity and choices of similar granularity, unless the player determines the set of all possible options and uses them at all possible points — and while engines that recognize only expressions of the form ‘verb noun’ can be iterated over using all possible combinations of recognized verbs and nouns, some engines support more elaborate language constructs, including embedding, which makes enumeration of all possible recognizable strings impossible.
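The enumerability claim can be made concrete with a toy sketch (my own, with an invented vocabulary): under a flat two-word parser the command space is a finite product that a completionist could exhaust in principle, while a grammar with embedding has no comparable bound.

```python
# Enumerating the command space of a hypothetical two-word ("verb noun") parser.
from itertools import product

verbs = ["take", "drop", "open", "examine"]
nouns = ["lamp", "key", "grate", "mailbox"]

commands = [f"{verb} {noun}" for verb, noun in product(verbs, nouns)]
print(len(commands), "recognizable two-word commands")  # 16: finite, exhaustible

# With embedding ("put the key that is under the mat into the box behind the
# door ..."), recognizable strings can nest arbitrarily deep, so no finite
# enumeration of them exists.
```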
However, our mutable forms (fan work and wiki adventures) are incompletable on yet another level. They change along the axis of real time as well as fictional time. While you can take a snapshot of a wiki adventure at any given time and play it to 100% completion, it can be modified the next time you play it — at any point along its timeline. Fan work is even more extreme; by its nature it forks, so any given fan work is at any given time genealogically connected to several others that differ and are themselves mutable in real time. Fan work is the most amorphous — combining the flexibility of language with mutation along time and geographic axes, yet still operating directly upon narrative without the use of a player-character intermediary. Nevertheless, fan work is a game — a game with no author and no end, created entirely by the players.

Friday, May 29, 2015

Myths of competence and specialization

An idea has been going around for a while that science fiction, more than anything, is a literature of competence — the protagonists of science fiction are competent people who can be trusted to do the right things under the circumstances (given their knowledge of the situation), and their mistakes can generally be traced back to withheld information or the effects of external forces that manipulate their mental state (like drugs or mind control). This is true of a lot of golden age science fiction (wherein, generally speaking, the protagonists were also respectable, if not amiable — think Asimov & Heinlein), and is generally less true of new wave science fiction (think of Ellison, wherein occasionally our protagonists are mad or naive or belong to a culture with alien values) and first-generation cyberpunk (think of Neuromancer, wherein every character who isn’t mad is varying degrees of self-loathing and self-destructive). But, a fiction of competence is also the lens through which many people see the real world — and some of them are probably drawn to golden-age science fiction for this reason.
I have a friend who is, like me, a software engineer. He clearly sees the world through this lens. He sees people as, generally speaking, professionals; what I consider to be design errors he considers to be some unfortunate but inevitable product of circumstance that must have very good and acceptable reasons behind it. He acknowledges the occasional genuinely poor decision, when it’s undeniable that there’s no good excuse for it, but he considers such things rare and rarely acknowledges poor decisions made by people he respects. When faced with a problem, he prefers to theorize about it rather than probe it experimentally, and is willing to spend more time generating an elaborate mental model of a problem than experimentally discovering its contours. In other words, he has confidence in the integrity of his mind and the minds of others, and considers the production of mental models to be a generally foolproof method for exploring the world.
Although I respect him a great deal, and although I admit that his knowledge of many fields is deeper than mine, I consider his attitude naively optimistic.
My model of the world is compatible with the rule of the blind idiot god. The universe is complex enough that few elements can be modeled perfectly by human beings. Because competence is difficult to achieve, few people achieve it — incompetence and poor decisions are the rule, rather than the exception. Furthermore, even competent people have little reason to exercise their competence — the illusion of competence is rewarded more than actual competence, and exercising one’s competence takes time and energy that pretending to exercise one’s competence does not — and society rewards behaviors that are incompatible with the production and maintenance of genuine competence.
Human beings tend to value confidence in themselves. I consider this a major failure. Because the world cannot be perfectly modeled, all models are by definition imperfect — and confidence is faith in the predictive success of one’s mental model for situations upon which it has not yet been tested. Confidence is valued in oneself in part because confidence (i.e., lack of hesitation) is valuable in genuine emergencies — if you are being chased by a bear, spending mental effort determining whether the bear genuinely exists or is an illusion produced by a trickster god is detrimental to your expected lifespan. Genuine emergencies are more rare now than they were when the adrenal and peripheral nervous system first developed in our distant forebears, and they are less important to the survival of our genetic line — we are more likely to fail to reproduce out of a bias against children or financial instability or a lack of attraction to the opposite sex than out of actually being killed by something we could run away from (like a bicycle, an enemy, or a wild animal); as a result, in today’s world, it is generally more risky to be sure than to be unsure. The same confidence in the correctness of your mental model of the world that will save you from a wild animal will get you run over by a truck, because change blindness is part of the same set of energy-saving heuristics that allow human beings to do things faster and with less effort by introducing errors into our models of the world; the same confidence that would allow a human being in a nomadic-band-of-hunter-gatherers situation to fight effectively against another band trying to use the same resources will lead a modern person to fight and die in a religious war.
Human beings also value confidence in leaders. This is for a similar reason — if you are in a nomadic band of fewer than 150 other people, and you are being attacked by another group of approximately the same size, your odds are about even so long as your hesitation level is about even, but lack of hesitation gives you a tiny advantage. Your leader, because he is in charge of coordinating tactics, is the bottleneck — his hesitation is your hesitation. This is the context where leaders are useful — when discounting planning time your odds are 50/50, but when every second of hesitation counts against you, fortune favors fools who rush in over the ones who consider the situation carefully. But, few genuinely important situations today depend upon split-second decision-making. Unless you’re in the military, your ability to make poor decisions quickly will never be more important to your lifespan than your ability to make good decisions (although the ability to make good decisions quickly is beneficial in a wide variety of situations, it’s not really practical to develop), and unless you play professional sports the same is true of your livelihood. A good leader in typical modern circumstances is someone who takes minutes or hours to think a decision through, and who knows when to back off and reconsider a decision that has proven to be flawed — in other words, exactly the kind of person who appears unconfident to the point of neurosis. Because our heuristics are stuck in the stone age, to become a leader you must appear confident, but in order to be a good leader your apparent confidence must be an illusion.
This is not to say that I don’t believe in competence. In fact, I think competence is undervalued and under-sold. Take, for instance, the polymath.
A lot of people these days say that polymaths can no longer exist — that the world has gotten too complex. Bullshit. Our models of the world have gotten better — which means that our ability to predict the world has gotten better. It’s easier to be a polymath today than ever before, because being a polymath means being competent in a variety of fields, and great strides have been made in every field with regard to our ability to learn to become competent in them. The world has not gotten more complex; instead, through human endeavours, it has gotten slightly simpler — not because we have changed the world but because we have changed our minds, developing mental tools for organizing the massive clusterfuck that is reality into more and more useful predictive models, wherein the complexity of the model grows slower than its predictive utility.
The same narrative that claims that there can be no more polymaths tells us that specialization is desirable, or at worst an unfortunate necessity. If we can’t learn a variety of mental models because the models have gotten more complex, then we need to stick to our lane and go deep into one silo, solving the problems that fit into that domain.
But, all problems are in reality multidisciplinary. Disciplines and problem domains are inventions of human beings, and reality has no interest in them. The specialist is blind to this. The specialist sees the portions of the problem that fall into his domain, and perhaps slightly foggily sees the portions that fall into neighbouring domains; the remainder is some vast undifferentiated miasma that must be left to other people to figure out. As a result, the specialist can be very confident about his results — because he has chopped off everything in the universe that he doesn’t know how to model, and has applied a model to the tiny portion that has been left over. His model may not yield useful results, because he has ignored most of the universe, and he really can’t effectively isolate his subject that way.
The generalist, on the other hand, sees the universe and applies several different models that apply to different aspects of the subject (as well as sections of the world immediately surrounding it). The polymath, who is a generalist upgraded with the knowledge of several specialists, does the same thing with better results because he has a wider variety of useful models and the experience to determine which models are appropriate. The polymath can do this because he realises that each specialized field is a pattern recognition machine, and because some patterns can be found in the world wherever you look, many disciplines have independently reinvented the same or very similar models with different terminology. He can combine the similar models to form superior hybrid models, and when the models are exactly the same he can learn the new terminology or use the shared model to synthesize its sister models across domains. And, since models build upon each other based on shared patterns, he can use models from one discipline to more efficiently learn models from another, unrelated discipline because they essentially accidentally share patterns. Because of the polymath’s wider scope, he also is aware of common failures in various forms of various models — he is aware that the failures can compound, and so despite having better predictive results at a lower cost, he also has lower confidence; he has eliminated the artificially inflated confidence of the specialist and is left with a level of confidence more appropriate to the actual situation.
I feel like this myth of competence and confidence — the Captain Kirk character voyaging into the unknown and believing that he already knows it, confidently applying human biases to non-human situations and considering himself to be morally superior to cultures that don’t share his values — is not merely naive and optimistic, but actually regressive and dangerous. Any confident leader and man of action can be perceived, with a minor shift of perspective, as an arrogant fool who acts without thinking; any crusade against evil people doing evil things can be reframed as an intolerant bigot battling a system of values he doesn’t understand. This kind of literature transplants into the space age the kind of leader who hasn’t really been appropriate for a leadership role since the dawn of agriculture.

Originally posted at https://medium.com/@enkiv2/myths-of-competence-and-specialization-c5d27506c2bf

Saturday, May 23, 2015

Against certain naive varieties of transhumanist sentiment

A frequent idea that I run across when speaking to technophiles with transhumanist leanings these days is the superiority of tech over biology. People will say, "I'll upload my brain and live forever", or "I'll replace my arm so that I can be stronger", or "I'll get a wetwire to the internet so I can read faster". This isn't a new idea; I said variations on the same thing when I was fifteen. But, it's absolutely stupid.

We have never built a machine with a lifespan and resilience comparable to a human being. Machine failure rates fall along a bathtub curve, but while an expected human lifespan is between seventy and eighty years these days, it's the rare freak of a machine that still functions after ten or twenty years -- let alone thirty, let alone continuously. Biological systems have insanely complex self-repair and self-maintenance systems, and the reason we live for eighty years is that our parts are continuously being maintained, rather than undergoing routine maintenance on a human scale of weekly-monthly-yearly. The very first programmable electromechanical computers were built in the 30s (depending on who you ask and how you define it, you can push it forward or back about ten years), meaning that a human being born at the same moment as the very first programmable computer in the modern sense and living an average lifetime would be dying *right now*; drum and disk storage is twenty years younger (along with transistors, RAM that's not based on relays/mercury tubes/CRTs, programming languages other than machine code, and interactive terminals), and the internet is about fifteen years younger than *that* (along with email, pipes, directories, and asymmetric key cryptography). Someone born at the moment the first packet was sent over the internet would be middle-aged. Nevertheless, all these systems have changed drastically many times over the course of their lifetimes, in incompatible ways. All of the component parts have been replaced many times over. At various points in the past, all these systems have had *complete* failures (yes, including the internet). These systems are not at the point where they could be expected to safeguard the lifetime of a rat, let alone extend the lifetime of a human being.

Likewise, with prosthetic arms. Prosthetic arms are great -- for people who are missing their arms. Cochlear implants aren't competing with people's real ears; they're competing with being deaf. The prosthetic eyes that have finally gotten FDA approval have approximately the same resolution as a TRS-80 Model 100 -- they aren't competing with real eyes, but with total blindness.

Wetwires are in an even worse position. The current state of the art in brain implants can, with incredibly invasive and dangerous brain surgery, temporarily hook your brain up to 200 I/O lines, each of which neurons *might* decide to grow on. Wetwires are competing with reading: a four thousand year old technology that's constantly being improved upon, that takes advantage of the human eye and optic nerve -- a pipe so fat that the eye is considered to be a part of the brain, a pipe so fat that the eye does complex processing independently of the visual cortex and can independently trigger signals to the amygdala about emotionally striking scenes before the visual cortex can even receive the image data. Furthermore, reading is a technology that the government of every developed nation spends huge amounts of money on installing into its citizens! Wetwires can't compete with that.

That said, this isn't the end of transhumanism, or even of grinding. Implants aren't going to go away. It's just that we aren't looking at them correctly.

Implants are a *long* way away from replacing the things that human beings already do well, like living and thinking and reading and moving. Generally speaking, to the extent that it's trivial to do so, when there's a problem with scale we invent an external technology to handle it -- when we need to turn bolts, we build wrenches that fit in our hands instead of hacking our arms off and replacing them with wrenches. If we depend upon implant tech (and other varieties of transhuman tech) to fund itself by being an improvement over what humans are already capable of doing, then the whole field will go bankrupt. But, there are two fields in which this kind of tech can excel. One is doing a worse job than the human body at tasks that the human body already does -- prosthetics for people with missing limbs, and replacement parts for people whose parts are faulty or missing. The other is allowing human beings to do things they've never done before -- not by increasing scale, but by qualitative change.

The cochlear implant kind of sucks. When it's installed, an array of electrodes is threaded into the cochlea -- a snail-shaped, fluid-filled organ in the inner ear that does the heavy lifting in hearing (the rest of the ear is basically involved in amplification and protection). In normal hearing, vibrations from outside the ear are amplified by a set of bones that operate like a cam assembly before pressing on a flexible membrane at the big end of the cochlea, and the movement of tiny hairs inside the cochlea produces the perception of sound, with the position of the hairs that are most stimulated determined by the frequency of the sound. In a cochlear implant, the electrodes stimulate the nerves along the cochlea directly, with electrical current, and so the number of electrodes installed corresponds to the resolution of sound available. We do not have the technology to produce CD-quality sound. We don't even have the technology to produce speak-and-spell-quality sound. People with cochlear implants are stuck trying to decode speech based on fewer distinct frequencies than there are bars on a child's xylophone. But the cochlear implant, as an accident of its nature, has one improvement over the ear -- it has a built-in headphone jack. Cochlear implant-like technologies are far from being an improvement over ears, but when combined with throat mics or other mechanisms for producing the fundamental elements of speech from subvocalizations, they might be an improvement over the walkie-talkie. At the point at which this technology has enough demand to make people voluntarily submit to brain surgery, I expect that this is exactly how it will be used (and I expect the first market to be military or paramilitary -- people who, on a life or death basis, need to communicate without using their hands and without being heard by other people nearby).
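
To put rough numbers on the xylophone comparison: divide a speech-relevant frequency range among a small number of electrode channels and look at how coarse each band is. The sketch below uses my own ballpark figures and a simple logarithmic spacing; it is not the specification of any real implant's processing strategy.

```python
# Rough illustration of channel coarseness (ballpark numbers, not real specs).
# Split a speech-relevant range into n logarithmically spaced bands, one per
# electrode, and report the width of the lowest and highest bands.
def channel_edges(n_channels, lo=200.0, hi=8000.0):
    ratio = (hi / lo) ** (1.0 / n_channels)
    return [lo * ratio ** i for i in range(n_channels + 1)]

for n in (8, 16, 22):  # plausible electrode counts
    edges = channel_edges(n)
    print(f"{n:2d} channels: lowest band {edges[0]:.0f}-{edges[1]:.0f} Hz, "
          f"highest band {edges[-2]:.0f}-{edges[-1]:.0f} Hz")
```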

There's another trend going on, as well. Just as desktops became laptops and laptops became smartphones, smartphones are on the cusp of becoming wearables, and wearables will become implants.

However, this change-over is very rarely quick, and even more rarely complete. Before desktops, we had minicomputers, and before minicomputers, mainframes; however, minicomputers are not quite gone (IBM still sells machines running z/OS, although most of the market is dying), and desktops are hardly going anywhere.

We haven't yet reached the point where it's reasonable to develop software on a smartphone -- which means that smartphones are about where personal computers were in 1979, but beyond where laptops were in 1989 (when the now-famous New York Times article suggesting that the age of the laptop had come and gone was written, but also, paradoxically, when the HP-95 palmtop was released -- which, as it so happens, was perfectly capable of having software developed on it if you could get along with a calculator-style chiclet keyboard). Smartphones are currently being used for light-duty applications while the heavy-duty applications like video editing, compiling, and triple-A gaming are being done on laptops and desktops -- a mirror of about ten years ago, when laptops were being used for lighter-duty applications. Meanwhile, wearables are at the same stage that smartphones were in the Palm Treo era, or that PDAs were in the Newton era -- in other words, just close enough to seem potentially commercially viable, but not quite far enough along yet to be useful as anything more than an expensive toy. Expensive toys can be industry-changing, if they're popular enough, but to do that you need to pull a Sony and sell under cost at scale. (Sony is the only company I can think of that has pulled off making its new technologies ubiquitous and cheap by selling under cost at scale multiple times -- it did it with 3.5 inch floppy disks by supplying Apple with them under cost in the mid-80s, again a few years later with CDs, and again by shipping every PS3 with a blu-ray drive. But, it's also failed multiple times with that technique -- with MiniDiscs, with the Cell processor...)

Every few years, the entire tech industry pulls a twenty-year-old project out of the MIT Media Lab or CMU or PARC or somewhere and collectively decides to shit its pants over it. Recently, we've been hitting a quadruple-whammy: wearable computers, the Internet of Things, 3d printing, and virtual reality.

The current wearable computer boom started with Google Glass, which took pretty much equally from the work that Thad Starner and Steve Mann were doing in the early 90s; appropriately, Starner was brought onto the Glass project, while Mann was completely uncredited despite the fact that they took the name from him. And, despite the fact that Glass was a complete PR disaster, Google definitely decided what parts of Starner's work to borrow with an eye toward PR -- Starner's most interesting idea, subliminal reminders, was omitted from Glass, and the Glass UI standards and frameworks were written in such a way that subliminal reminders should be completely impossible. Now, in an almost hilariously ironic turn of events, Microsoft has taken essentially exactly the same technology, made it stereoscopic, reframed it in terms of geolocation-centric AR (something Glass was never going to be capable of doing, by design), and turned it into a massive PR success.

In comparison, the current Internet of Things boom seems to be driven entirely by industry-wide amnesia. That's not entirely unfair, since the industry has, until now, had a very hard time figuring out what to call it. The current term of art is the Internet of Things, but from around 1995 to around 2005, everybody was calling it Ubiquitous Computing. The IoT is hitting a lot of the same media roadblocks as VR did in the early 90s, which makes me think that it's probably around the same point in the hype cycle, although technologically, it's definitely further along.

Ten years ago, when I was an unemployed teenager, I had two big projects that were lighting up my eyes. One of them was a wearable computer project. The other was a UbiComp project -- what you'd now call the Internet of Things. At the time, the wearable computer project was by far the less feasible: displays were expensive, CPUs were expensive, and making either run off a battery while getting the thing small enough and light enough to fit on your body meant lowering its capabilities to an extreme. I designed several prototype wearable computers around the AT90S8515 -- an 8-bit microcontroller that cost $10 and had 127 bytes of RAM -- and various LED-based displays, but it was clear that unless I was willing to either buy thousand-dollar equipment or strap a laptop to my back and make do with audio cues as an interface, wearable computers were really infeasible. (I ended up strapping a laptop to my back and using audio cues, in the end.) The UbiComp project, on the other hand, was completely within the realm of possibility -- I had a working prototype for a system for communal cooperative use of a single computer, based on identifier tokens stored on a cheap wiimote knockoff that doubled as an input device; the cost of the system was the cost of a random desktop computer, a projector, and a $20 wiimote knockoff. If I had had steady disposable income, I could have formed a corporation and finished my prototype and become yet another failed IoT startup -- the technology was there, solid, and absolutely trivial.

Today, IoT is even easier. My potentially-$300 computer could be replaced with a $20 raspberry pi. Wiimote knockoffs don't even cost $20 anymore. The projector costs more than the rest of the system in total, and my homebrewed account-sharing system could be replaced with the kind of cloud-based thing that newbies whip up in minutes and brag about on hacker news. A couple years ago, I did a wearable computer, too -- with about $350 worth of parts (a raspberry pi, a twiddler, a $100 head mounted display, and a USB battery pack), I built something that, while not comparable in usability to a laptop, beat the pants off the absolute best I could do with that kind of money in 2005 -- mostly because of economies of scale provided by the popularity of smartphones. PDAs manufactured in 2005 couldn't really drive an 800x600 color display, or even 320x200 -- too slow. (Maybe you could do it if you were especially clever. I wasn't clever enough to make up for my lack of riches -- wagering the cost of disassembling an expensive PDA on my ability to make it drive a display was too rich for my blood.) A single-board computer capable of running Linux in 2005 was a fucking high-end single-board computer. But, the iPhone came out -- a single board computer running BSD shoved into a PDA -- and then Android phones started appearing a couple years later -- cheaper single board computers running Linux and Java shoved into PDAs. Now the chips that run Linux in smartphones are cheap enough that Texas Instruments will give away a handful of free samples to anybody with a university-affiliated email address, complete with specialized circuitry for fast video decoding. Single board computers running Linux can be sold for $20 and make enough money to prop up a non-profit organization. Meanwhile, some nerds figured out that a series of cheap wifi chips could be reflashed, and now you can buy complete postage-stamp-sized wifi-enabled systems that can run Lua for $5.

So, we're at the point now where you can stick the guts of a smartphone on the side of your head and have a head-mounted smartphone with a battery life of about two hours, or you can stick the guts of your smartphone on your wrist and have a smartphone with a battery life of about a day if you barely ever have the screen on. Or, you can stick the guts of a smartphone in your pocket and stick a screen on your head, and actually have a reasonable battery life with reasonable usage. We aren't at the point where we can start making fully wearable never-take-em-off computers with reasonable battery life and reasonable capability, although I think that if we take a page out of the MIT Media Lab book and combine this with IoT, we might be able to make do with what we have for a little longer. This has problems -- centralized IoT is the domain of natural monopolies, with most of them fated to go the way of AppleTalk (although centralized IoT is all the rage now, with every consortium of manufacturers competing to make their own incompatible standards on the off chance that theirs will be the one to take off); meanwhile, decentralized IoT is the stuff of IT nightmares, where failures in logistics and/or security can lead to a lightbulb DDoSing your house and/or the White House. My own design, which was based on a federated model with an open protocol and a market for competing vendors, has unfortunately been obviated by time -- it was based on the assumption that the normal use would be an evolution of the cyber-cafe, and it probably would have worked in 2005, but no longer makes sense in the same universe as widespread smartphone ownership and devices like Chromecast. Offloading computing from wearables onto IoT nodes will require an extreme of either security or naivete -- and because security is complicated, I fully expect a future hellworld of incredibly insecure wearable/IoT mesh networking comparable to the amazing terror of running Windows 9x on the internet in the 90s. Welcome back to an era where anybody with a modicum of knowledge can remote control your computer and nobody can patch it for five years; except this time, the computer is strapped to your face.

This is a problem that *must* be solved before the wearables become implantables. Implants need to be smaller than wearables. Right now, the state of medical device security is pretty low -- while medical device software, along with airplane control software and nuclear power plant software, has higher quality standards under normal operating conditions, it's largely no better than normal consumer-grade software when it comes to resisting actual planned attacks, and sometimes worse. We already have computers in all sorts of things -- horrible, insecure computers; our airplanes can be hijacked through the in-flight wifi network, our cars can be hijacked through the CD player, our pacemakers can be remote-controlled over wifi, and our routers are already sustaining self-replicating botnets. When these devices are on our bodies, the threats become more visible; when they are in our bodies, they become potentially fatal -- not necessarily because of malice (it takes a special kind of person to actually shut down somebody's heart by exploiting their pacemaker) but because of incompetence (it doesn't particularly take a special kind of person to try to make a botnet out of every exploitable wifi-enabled device, including pacemakers, and then not check available memory and crash the pacemakers because he's just written to an address that doesn't exist).

Implants are coming, and wearables are coming first. Implants will come both faster and slower than we expect, because they won't be used how we expect. They won't make us live longer or read faster, but instead will let us do things we haven't imagined yet. Let's fix our shit before we've got buffer overflow vulnerabilities that'll take actual brain surgery to patch.

Wednesday, May 13, 2015

Utopianism and sci-fi as machine-lit

There are several popular ways to look at science fiction as a genre. I have my own preferences. That said, the major opposing perspective -- what I'd term the 'machine-lit' school of thought -- has its merits, insomuch as it highlights a set of common tendencies in science fiction. I'd like to take this space to highlight the basic premise of machine-lit, the tendencies it breeds, and why I find most machine-lit to be relatively uninteresting.

(The third major perspective, what I call the spaceship-on-the-cover style, I find wholly uninteresting and is the subject of other essays; however, this perspective is becoming historically important lately because of some drama surrounding the Hugo awards being gamed by groups who prefer this style, so it's worth mentioning in passing.)

Machine-lit is, in a general sense, the construction of a narrative around a concept invented by the author, as a capsule intended to introduce the reader to the concept. Lots of early science fiction is machine-lit for actual machines (Ralph 124C41+ being an ideal example of how this can go wrong yet still be very influential). The works of Ayn Rand are machine-lit for the Objectivist philosophy. Big-idea science fiction novels tend to be machine-lit for the ideas they represent.

One failure mode of machine-lit is that, because the narrative is intended as a delivery mechanism for the concepts, the narrative can itself be weak or nearly nonexistent if the author thinks the ideas themselves are interesting enough. (Ayn Rand, again, and Gernsback, again -- but also major dystopian novels like Zamyatin's We and 1984). Likewise, style can be a major issue in machine-lit, with The Unincorporated Man's borderline-fanfic-quality prose depending upon its intended audience of libertarians to forgive a lack of technical skill in the writing because the ideas are sufficiently in line with the ideology, and PKD's writing leaning so heavily on the ideas (not to mention the amphetamines) to pull it through (outside of rare stylistically-polished books like A Scanner Darkly).

There are definitely instances where books intended as machine-lit end up having well-developed plot and characters and a coherent and polished writing style (pretty much every Neal Stephenson book meets these criteria, as does Brave New World), but to some extent, doing so depends upon a kind of imagination and intellectual honesty that brings the book into the middle-ground between machine-lit and the world-building-based style of science fiction that I tend to champion, whose most extreme and visible example is seen in the post-Neuromancer works of William Gibson.

Another major failure mode of machine-lit is that, because of the dependence upon the central conceit of the book, if that conceit is uninteresting or unoriginal, the book as a whole fails along with it. With big-idea novels related to politics (Rand again) or philosophy (a handful of PKD books that lean too heavily on solipsism or philosophical zombies, and nearly every film adaptation of a PKD work), interest in these works falls neatly along either political-ideological or philosophical-education lines -- a communist is, largely, going to find The Fountainhead or Anthem uninteresting; someone who is familiar enough with the idea of solipsism to find it fairly uninteresting will likewise find The Matrix uninteresting, while someone who rejects Searle's Chinese Room argument and the idea of philosophical zombies as based on an erroneous deification of consciousness will find the host of films about robots being incapable of emotion or of morality to be uninteresting. When the same idea is recycled into dozens of machine-lit works, the popularity of the idea itself can suffer, because, while no longer wholly novel, it will often be framed in similar ways, with similar changes made to fit the needs of the story or premise, by nearly identical stories (The Matrix has more in common with Simulacron-3 and its major film adaptations, World on a Wire and The Thirteenth Floor, than it does with Plato's Allegory of the Cave, from which all of them were derived). Today, talking about solipsism will make people think of The Matrix rather than, say, Descartes' "evil genius" -- and despite my general feeling that the Meditations failed to be adequately convincing, we as a society are favoring an action franchise with major and obvious plot holes over a fairly heavily considered work by a brilliant philosopher.

Again, if a text develops its characters and plot adequately, the central conceit can essentially be ignored -- a good ghost story is good even to people who don't believe in ghosts, while a bad ghost story will fail to entertain enough to motivate people to suspend their disbelief.

Machine-lit shares with the rest of speculative fiction a basis in a counterfactual model of the world. That is to say, we start our world-building by setting some axioms that, in our world, are not true, and work from there. The difference is that machine-lit, by definition, performs the basic world building then immediately jumps to narrative, then stops as soon as something resembling a completed text is produced. Within world-building-based science fiction, a much more complex world is built, and the narrative and characters stem from that world organically.

This requires a dedication to completeness and intellectual honesty, in part because genuinely following the logical progression of the central mechanism of a counterfactual world can point out flaws in its structure.

In cryptography, the first and most important rule is never to roll your own crypto -- always use a well-known and well-tested algorithm, at the very least, and ideally also use a well-known and well-tested implementation. The reason is that flaws are never intentionally introduced into crypto by people who want the crypto to succeed -- a designer is blind to the flaws they introduced by accident -- and thus fatal flaws can only be identified by other people; the more people there are looking for flaws in an algorithm, the faster such flaws are found (and the longer it takes to find fatal flaws in an algorithm, the more likely it is that such flaws are difficult to find). Everyone who designs crypto professionally is also skilled in trying to break crypto: you learn to avoid the flaws that you have discovered how to exploit. Likewise in computer security -- the research arm of the computer security community consists of people who figure out how to break security and then figure out how to patch those holes.

In fact, this is a common pattern in legitimately serious enterprises. The scientific method is exactly this: suggest a model of the world, and then recruit people to attack it. The adversarial justice system is based on two groups of people presenting different models of the world and attacking each others' models. Even in philosophy, philosophers engage in critiques of the ideas of other philosophers, rather than ignoring any idea they don't agree with.

Any functional member of any of these communities will attempt, before putting their ideas out into the world, to stress-test them personally -- formulate simple attacks, determine which portions of the idea are weak and whether they can be strengthened without complete restructuring.

Machine-lit, by and large, fails to perform these sanity checks. Machine-lit is the domain of people who are so in love with their ideas that they cannot bear to test their mettle before pushing them out into the world.

An ideology at the core of machine-lit, if properly investigated, would collapse upon itself or mutate such that it fails to be an ideology. A utopia at the core of machine-lit would, upon close inspection, become a dystopia; a dystopia, upon close inspection, would yield some happy and fulfilled people, making the message of the book ambiguous. An actual machine at the core of machine-lit, if properly and rigorously tested, would become at worst a patent application but possibly an actual invention.

I'm perfectly in favor of optimism in science fiction. Nothing is to be gained from keeping the genre grimdark as a rule, in the same way that nothing is to be gained from keeping superhero movies grimdark. However, utopian science fiction represents a failure to take the medium seriously -- and a shallow dystopia or cozy apocalypse is no better. Science fiction should be a genre of ideas, but there's no point if we allow our ideological biases and our love of shiny toys to turn it into a genre of shallow ideas shielded from unforgiving reality. The real world has problems, and while escapism is fine, a work cannot simultaneously be an escapist fantasy and a serious analysis presenting a serious solution to the problems it fantasizes about escaping from.

Science fiction always starts as machine-lit. But, machine-lit is a larval stage that adult science fiction works outgrow.

Saturday, January 18, 2014

On Neuromancer (a rant)

(Note: this is actually from a thread on Tumblr. After I wrote it, I liked it so much that I figured it should stand on its own. The context was that someone was comparing Neuromancer unfavorably to 1984 and Brave New World, which appears to be a fundamental misreading of its genre and its place in history. Neuromancer's setting is only accidentally dystopian, as Count Zero and Mona Lisa Overdrive demonstrate by focusing on characters of different social status.)


Neuromancer isn’t notable for its depiction of a dystopian future, but instead for its depiction of a socially realistic future wherein intent is just as meaningless as it is in reality. In both 1984 and Brave New World, the dystopias depicted are perfectly functional: dissent is systematically squashed or subverted by a mechanism that basically never fails, and the people in power are of one mind; the failure of both is that they cease to be realistic by way of being too tidy (the same way that conspiracy theories fail). In Neuromancer (and other elements of the Cyberspace Trilogy & its shared universe), obsolete future technology is being used for purposes other than that for which it was designed (something that happens in reality but rarely in science fiction, because science fiction historically has very often been about wanking over the author’s designs for a technically-driven world rather than taking into account the diversity of human desires and social circumstances), and the elites and antagonists remain in their positions despite being realistically dysfunctional.
As an example, in Johnny Mnemonic (which takes place in the same universe, set before Neuromancer by a few years), a group of primarily-black anarchoprimitivists called the Lo-Teks use the availability of cheap medical equipment to replace their own teeth with dog teeth as a fashion statement and signifier of cultural identity. In the end, nothing the Lo-Teks did had a fundamental effect on the state of the universe -- they were, canonically, a failed revolutionary group with some interesting cultural attributes who in a minor way once helped a naive rich white dude escape an unpleasant corporation, then faded into obscurity never to be mentioned again. This kind of texture violates Poe's rules for writing -- everything is supposed to support the main plot, either thematically or materially -- but Gibson re-popularized a kind of non-Poe-based science fiction as social realism (and this is the punk aspect of cyberpunk -- the punk sensibility is one brimming with awareness of corruption and dysfunctionality but that nevertheless would rather succumb to pragmatism and absurdism than to nihilism). Gibson writes science fiction as future history, but he writes it as a future history of regular people, and he does so with full awareness of the importance of cultural texture and the tendency of human beings to expend personal resources on social posturing. Gibson himself would probably agree that Neuromancer is overrated -- he considers it to be an 'adolescent attempt', and says that he merely took the standard caper storyline and dropped it into the universe in which several of his short stories were set. Plot-wise, it wasn't anything special. In terms of prediction of the internet -- bullshit; Gibson doesn't really care about the internet, and didn't know anything about it when he wrote the book. The real draw of Neuromancer (which would be all the more potent for readers who have been choking on Asimov, Heinlein, and other Golden Age writers where everything is smooth and metallic and nothing ever breaks) is the depiction of a universe that has dirt in the corners and scratches on the silverware, where there are prostitutes and drug dealers and suburban kids and soap operas, and where the protagonists and antagonists alike are fundamentally understandable people with selfish motivations and embarrassing gaps in their knowledge and secret crushes and hang-nails. The Cyberspace Trilogy is full of non-humans or semi-humans (Wintermute, Josef Virek, the Dixie Flatline) with these same flaws and obsessions.
Second- and third-generation cyberpunk loses this; fundamentally, everyone in Neuromancer is at some level a loser and a poseur, and to the extent that there's a lack of outwardly shown emotion there's clearly a seething rage or cynicism related to the crapsack world and crapsack circumstances they live in. When second-generation cyberpunk shows a 'hacker' in a black leather jacket, a blank mask of a face, and carefully bleached hair, it shows an empty shell (or at best an archetype of competence -- no different fundamentally than the worst of the golden-age heroes and less interesting than the ones Heinlein wrote); the same character in first-generation cyberpunk would have an inferiority complex, rips in his leather jacket, and a secret fear that he was incompetent. First-generation cyberpunk is first-generation Netrunner, complete with hand-drawn cards and jokes about weasels; second-generation cyberpunk is the Battleship movie: too much budget, not enough content, everything focus-grouped to hell.
Gibson didn't invent this punk angle of cyberpunk. An obsession with social and cultural ephemera and the perversions thereof is the calling card of J. G. Ballard, to whom Gibson attributes some of his early style; a tendency toward vignette for the sake of texture may well have come out of W. S. Burroughs, although it's fundamentally part of both normal speech and real life; a focus on the texture of ordinary reality was very popular in the socially conscious writing of early twentieth century realists, and arguably came back into vogue with the 'new wave' of science fiction in the 1960s. Fundamentally, all of the attributes I've mentioned for Gibson apply to PKD's writing as well, and so there's a good reason that whenever PKD's writing gets adapted to the screen it resembles cyberpunk. But, on a certain level, PKD just wasn't a good stylist and rarely wrote a book whose prose flowed. Other first-generation cyberpunk authors had similar problems: Rudy Rucker peppers his writing with hippie-ish neologisms even sillier than PKD's (where PKD stopped at 'kipple', Rucker's proto-cyberpunk novel Software has a brain-eating cult called the Little Kidders and calls robots 'boppers', and Rucker now edits a magazine named 'Flurb'); John Shirley's arguably-not-cyberpunk-but-definitely-punk series A Song Called Youth goes in the opposite direction and borders on golden-age-style moral clarity (while not succumbing to the tendency for golden age science fiction to be always either economically or socially conservative but never both at the same time). Where Gibson succeeded was in taking PKD's proto-punk sensibilities and giving them the smoothness of beat poetry and the obsessiveness with minutiae that has dominated contemporary art since the end of the Second World War. Gibson made a socially conscious, realistic, pop-art science fiction with a subversive edge that nevertheless went down like honey. It had all the humor of the best situationist manifestos, all the intricacy of an episode of The Simpsons, and all the professionalism of a book on typography for professional photolithographic technicians. That is why Neuromancer deserves respect.
(Pedantic note: Neuromancer came out in 1984; the short stories it grew out of, like "Burning Chrome," date to 1982.)

Source: http://enki2.tumblr.com/post/73752574522/neuromancer

Saturday, January 11, 2014

(Off topic) Wearable computer project build details, part 1

I've had an interest in wearable computing for a long time -- and after Glass was announced and it became clear that Google wanted to produce wearable computing technology that was limited to gluing powerpoint to your face, I decided to get my act together and build something practical for once.

Since this has been a fairly long-running project, compared to my previous attempts, I figured that I might as well document it.

Step 1: Monocular Display


So, a friend sent me a partially disassembled head mounted display. Having read about the use of monocular displays, and the mechanisms by which people had turned binocular displays into monocular displays (mostly tutorials for MyVu-brand displays wherein the wires were simply cut), I decided to take what was essentially a freebie display and turn it monocular.




Disassembling the display

I had a pair of goggles that I bought at an anime convention (steampunk is really big at anime conventions), so I actually just mounted the disassembled display on one of the goggles' thin clear plastic lenses with a combination of screws and velcro, then mounted the display's lens on the inside using velcro.


Mounting the display board and one of the control boards to the lens

The control box with its battery has been opened up and mounted with velcro on the side of the goggles
The lens is mounted to the inside with velcro
The completed monocular display mod, for this revision

Step 2: Panic, Worry, Waste Money


I was always planning to use the Raspberry Pi as the heart of the wearable computer, but I actually had quite a hard time getting the software to work. In the end, it turned out that I was using a mislabeled SD card -- it claimed to be one of the brands that was supported, but it was actually a card from another manufacturer that had been re-labeled and re-sold. SD cards, despite adhering to the SD standard, can vary enough between manufacturers that a project like the Raspberry Pi doesn't actually support arbitrary brands, particularly for booting. I got a different card that was branded the same, and it worked just fine.


Initially, I thought that I had burned out the control box for the display I had. I was missing the adapter for charging it, so I opened it up and attached a USB cord directly to it (because I couldn't find an appropriately sized barrel connector); only later did I actually attempt to hook it up, and found that it was displaying a white screen regardless of what I was doing with the Pi. Later on, when I got a duplicate display, I discovered that this was actually the fault of the composite video adapter I was using -- I was attempting to use a composite audio adapter for composite video, and getting no signal. The duplicate display came with both the charger and the appropriate adapter, so in the end I didn't need to modify that box at all, nor did I need to replace it.

Anyhow, between this and lacking a handful of other components, it took a while before I got around to hacking together the rest of the project.

Wearing the display 
From the side

Step 3: Software


I wrote this software using a library that I had written for another project, which should be released as open source some time in the near future. It's not released yet, so I'm avoiding really talking about it. Suffice it to say that it's similar to a previous project.

Software written for this project -- a zzstructure navigator -- running on the Pi, but connected to a normal TV
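The data structure itself isn't secret, though: a zzstructure (Ted Nelson's ZigZag) is just cells linked along named dimensions, with at most one neighbor per direction, and "navigation" means walking those links. A toy sketch of the data model -- emphatically not the actual library, just an illustration -- might look like this:

    # Toy zzstructure (ZigZag) data model -- not the project's library, just an
    # illustration: cells are linked along named dimensions, with at most one
    # neighbor per direction, and navigation is just following those links.
    class Cell:
        def __init__(self, content):
            self.content = content
            self.links = {}  # dimension -> {'+': Cell, '-': Cell}

        def connect(self, other, dim):
            """Link self to other along `dim`, with self on the negative side."""
            self.links.setdefault(dim, {})['+'] = other
            other.links.setdefault(dim, {})['-'] = self

        def step(self, dim, direction='+'):
            """Return the neighboring cell along `dim`, or None if there isn't one."""
            return self.links.get(dim, {}).get(direction)

    # Three cells ranked two different ways at once, along two dimensions.
    a, b, c = Cell("groceries"), Cell("build log"), Cell("essay draft")
    a.connect(b, "d.priority")
    b.connect(c, "d.priority")
    c.connect(a, "d.date")

    cursor = a
    while cursor is not None:  # walk the priority dimension
        print(cursor.content)
        cursor = cursor.step("d.priority")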

Step 4: Hook Shit Together


In the end, since my monocular display worked just fine, all I needed to do was connect it to the Pi. I ended up making things a little bit neater, for the sake of my sanity.

I took the separate audio and video cables leading from the Pi to the adapter and wrapped them with a bit of spiral wrap I had lying around from earlier projects. Having these catch on things individually would be extremely annoying once I started actually wearing the thing -- I've had wires from earlier wearable computer projects catch on things like chairs, pets, and moving vehicles, with all the expected consequences. The number of cables required to attach a first-generation Twiddler to a laptop manufactured ten years later may well be the reason that particular unit broke (and it's very hard to find Twiddlers on eBay).

I wrapped everything in a small zippered leather case. This case was, when I obtained it, marketed as a wallet -- although it doesn't actually fit in a normal-sized front pocket -- and explicitly intended for users of PDAs. As a wallet, it failed -- it was too large and couldn't hold much of anything in the way of bills or cards. As a PDA carrier, it also failed -- it was too big for any of the various models of Palm Pilot or BlackBerry I've owned since I got it, and the velcro patch on the inside never managed to keep them in place. However, it's perfectly sized for the Pi, complete with plugs sticking out of the side and an SD card sticking out of the back. The wifi dongle I attached stuck out the top, making it impossible to zip the thing up completely, but it didn't leave enough of a gap to seriously endanger the hardware in the case of a sudden downpour.

I also used sugru to seal up a couple of holes in the Pi's plastic case that I didn't think I was likely to need, and to insulate the unused side of the two-in-one USB cord I'm using to power the Pi.

The full layout, at the time of writing

 
 The monocular display, functioning

Recent Changes


Using velcro for holding sensitive optics in place is a huge pain -- every time I get things displaying right, the lens moves as soon as I move my head. The velcro was self-adhesive, and it had trouble consistently sticking to the lens housing and to the plastic pane used by the goggles, so the lens would sometimes even physically fall out. So, I mounted some sugru on the pane where the velcro was, and I intend to use small screws and bolts to adjust the lens distance.

Update: the sugru didn't work -- I couldn't get an appropriate distance with it, and it was hard to trim down just with the tools at hand. However, I was in luck: despite the lens being much closer to the eye in the goggle mount than in the original system, it turned out that if I flipped the lens around in the housing and then flipped the housing around, I got perfect focus at the intended distance. So, I just superglued the housing onto the pane of the goggles, and clipped the lens in backwards:
The lens housing being glued in. The screws are just there as extra anchor points -- they are flush with the back of the housing and glued on as well. The lens has small protrusions that clip it into the housing with either orientation, so this made it pretty easy; I now have the larger half of the lens facing my eye from the outside of the housing. Keep in mind that this puts the lens very close to the eye (so, watch out for infections), and that superglue produces fumes that are irritating to the eye (so wait for it to air out before using the display for long periods). The housing won't actually fit square with the display because there are some screws that poke through from the other side holding the display itself onto the pane, so I have it shifted to one side; one side of the screen is slightly blurry, so perhaps I should have aligned the display the other way.


The unhoused control box had some problems. It has a giant lithium-ion battery pack held on with some not-very-sticky double-sided tape, which on several occasions fell off with enough force to pull free the power wires. It was also too heavy to stick onto the velcro on the goggles' vent (this is where it is mounted in the side photo, which is why it is tilted to the side). I had only kept the box around by chance -- I was so sure I had burnt it out that I misplaced some pieces of the housing and could not re-house it in the original case. So, I just wrapped it in electrical tape and taped it to the band of the goggles, after cutting some holes for the connectors and the power switch. The cable leading from the control box to the display is fairly long and stiff, so I looped it up and taped that to the control box as well, leaving just enough slack to connect the display's control board.



Future work


I still haven't solved the problem of input. Many people use tiny bluetooth keyboards of the type intended for powerpoint presentations for this, but I consider full QWERTY keyboards to be best used with two hands and unwieldy with one -- and what's the point of a wearable computer if you need two hands to operate it? I'm considering building a Chordite derivative, or perhaps a clone of the Twiddler -- both are fairly popular one-handed chording keyboards intended to be held in the hand (as opposed to some chording keyboards that are only good if put flat on a table). I'd like to have a joystick or a direction pad for moving the mouse pointer, unlike the Twiddler (which uses an accelerometer). I have some Teensys I can use for emulating a genuine keyboard and mouse, but I'm circumspect about adding more programmable hardware to this project; I may simply use the Pi's GPIO port. UPDATE: it turns out that the Twiddler hasn't actually used accelerometers for mousing since the 2.0, about ten years ago. I dropped $200 on a Twiddler 2.1, so I will update when I can test it. It seems like everything will work fine with it aside from using the SUPER key as a meta key; since I am running Awesome with the default configuration on the Pi, I'll need to reconfigure the command key to be CTRL-ALT instead.
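To make the GPIO option concrete: a chorder is just a handful of buttons read together, with the chord emitted once everything is released. A rough sketch of the loop I have in mind, using the RPi.GPIO library -- the pin numbers and the chord table are placeholders, and a real version would need debouncing plus something (like uinput) to inject actual keypresses rather than printing them:

    # Sketch of a one-handed chorder read over the Pi's GPIO. Pin numbers and the
    # chord table are made up; a real version needs debouncing and a way to turn
    # the output into real keypresses (e.g. uinput) rather than print().
    import time
    import RPi.GPIO as GPIO

    FINGER_PINS = [17, 27, 22, 23, 24]  # one pushbutton per finger (BCM numbering)

    CHORDS = {                           # chord (set of fingers) -> character
        frozenset([0]): 'a',
        frozenset([1]): 'e',
        frozenset([0, 1]): 't',
        frozenset([2, 3]): 's',
    }

    GPIO.setmode(GPIO.BCM)
    for pin in FINGER_PINS:
        # Buttons wired from pin to ground, so enable the internal pull-ups.
        GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    def read_fingers():
        """Return the set of finger indices currently held down."""
        return frozenset(i for i, pin in enumerate(FINGER_PINS)
                         if GPIO.input(pin) == GPIO.LOW)

    try:
        held = frozenset()
        while True:
            fingers = read_fingers()
            if fingers:
                held |= fingers          # accumulate the chord while keys are down
            elif held:                   # everything released: emit the chord once
                if held in CHORDS:
                    print(CHORDS[held], end='', flush=True)
                held = frozenset()
            time.sleep(0.01)
    finally:
        GPIO.cleanup()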

Another possible input system is eye tracking. There was an interesting student project at Cornell using cheap phototransistors and LEDs for eye tracking by bouncing infrared light off the sclera, and it seems like that should be easy to incorporate.
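The electronics side of that is simple enough to sketch, with the caveat that the Pi has no analog inputs, so the phototransistors would have to come in through an external ADC -- I'm assuming an MCP3008 on the SPI bus here, and the channel numbers and threshold are placeholders that would need per-wearer calibration:

    # Sketch of sclera-reflection eye tracking: an IR LED illuminates the eye and
    # two phototransistors (left and right of the display) are read through an
    # MCP3008 ADC over SPI; the imbalance between them gives a crude left/right
    # gaze estimate. Channels and the deadband are guesses to be calibrated.
    import time
    import spidev

    LEFT_CH, RIGHT_CH = 0, 1   # MCP3008 channels for the two phototransistors
    DEADBAND = 40              # raw ADC counts; calibrate per wearer

    spi = spidev.SpiDev()
    spi.open(0, 0)             # SPI bus 0, chip select 0
    spi.max_speed_hz = 1350000

    def read_adc(channel):
        """Read one 10-bit single-ended sample from the MCP3008."""
        reply = spi.xfer2([1, (8 + channel) << 4, 0])
        return ((reply[1] & 3) << 8) | reply[2]

    while True:
        left, right = read_adc(LEFT_CH), read_adc(RIGHT_CH)
        # The sclera reflects more IR than the iris, so the sensor on the side
        # the eye has turned *away* from sees more reflected light.
        diff = left - right
        if diff > DEADBAND:
            print("looking right", left, right)
        elif diff < -DEADBAND:
            print("looking left", left, right)
        time.sleep(0.05)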

I've been interested in Thad Starner's presentations on a project at Georgia Tech wherein glove-mounted vibrating motors are used for non-intentional learning. In his experiment, subjects were taught to play simple melodies on the piano using these vibrating gloves, which simulated the muscle memory of rote practice. There are many things I'd like to be able to do with my hands, and some of them can probably be accelerated with this, so I've bought five vibrating motors and intend to wire them up to the Pi's GPIO port and train myself with them, as soon as I can find a suitable glove.

Update (2014-01-20): Started on the glove, but I don't really have the appropriate equipment, so I'm having my girlfriend finish it for me. Here's the beginning of it:


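While the glove is being sewn, the driving side is easy to prototype. A rough sketch of the training loop -- pin numbers and the melody are placeholders, and I'm assuming each motor sits behind a transistor driver, since the Pi's GPIO pins can't source enough current to drive a motor directly:

    # Sketch of the passive-training driver: five vibration motors, one per finger,
    # pulsed in the order of a melody's fingering. Pins and the melody are
    # placeholders; each motor is assumed to be switched through a transistor,
    # since a GPIO pin can't drive a motor directly.
    import time
    import RPi.GPIO as GPIO

    MOTOR_PINS = [5, 6, 13, 19, 26]  # thumb .. pinky (BCM numbering)

    # A practice sequence as (finger index, duration in seconds).
    MELODY = [(0, 0.4), (1, 0.4), (2, 0.4), (1, 0.4), (0, 0.8)]

    GPIO.setmode(GPIO.BCM)
    for pin in MOTOR_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    def buzz(finger, duration, gap=0.15):
        """Pulse one finger's motor, then pause briefly before the next note."""
        GPIO.output(MOTOR_PINS[finger], GPIO.HIGH)
        time.sleep(duration)
        GPIO.output(MOTOR_PINS[finger], GPIO.LOW)
        time.sleep(gap)

    try:
        while True:              # loop the sequence, like rote practice
            for finger, duration in MELODY:
                buzz(finger, duration)
            time.sleep(2)
    finally:
        GPIO.cleanup()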
On a less exciting note, I still haven't received the battery pack I intend to use. So, I have that to look forward to in a few days, at which point I'll be able to test how long the unit can run. I was surprised at how long the display ran on a full charge, although I expect to need to break out a cable to quickly charge it up from the main battery pack. I wouldn't like to charge it from the Pi itself, since the Pi doesn't take too kindly to sudden drops in board voltage; instead, I might simply attach a USB splitter to the battery pack and run one line directly to the display charger, ready to be plugged in whenever power gets too low. I'm not really sure how long the battery will last when running the Pi with hardware hooked up and also charging the display, so I may end up upgrading to this battery pack instead.

In the end, I intend to stick both the battery pack and whatever boards I need for bridging the Pi GPIO with my own implemented I/O devices inside the leather case, although I may not be able to fit them. The battery pack looks pretty water-resistant, so maybe I'll just keep it in another pocket.

Update (2014-01-19):

I appear to have somehow fried one of my Raspberry Pi boards. I initially thought that the power supply wasn't putting out enough amperage to run the board, but it turns out (luckily) that I was incorrect -- the supply works just fine with my other Pi. I have no idea what caused the failure. Unfortunately, it's clear that the wifi dongle (which did work once upon a time) is now no longer functional; I may need to replace it, or it may simply be that the battery is incapable of supplying enough power to run it properly.

The second Pi came with its own (rather nice) housing, and I'm avoiding putting it in the same box as the other one on the off chance that the housing is related to the damaged board. In terms of likelier causes, though, it looks like there's some damage to the SD card holder, which may be from stress on the card itself -- having your SD card stick out of the housing by more than an inch is a design flaw, and perhaps one of the most glaring in the Pi. I suspect I will need to build a small extension to the housing for the express purpose of accommodating a normal-sized SD card, or purchase a much shorter one and transfer the image there.

My Twiddler still hasn't arrived, although I am increasingly hopeful about its capabilities. It looks like I'll be able to get away with using it for everything, but that I will need to change all my keybindings from SUPER to something else, because the Twiddler doesn't support SUPER as a meta-key. If it's easy enough (as this teardown implies), I may add a hold switch, because I believe every portable device with buttons requires hold functionality.

Update (2014-01-23):
The main problem with the monocular display is that the goggles fog up quickly. Even completely removing the vent covers doesn't help much. So, I was considering dehumidification mechanisms.

One option (the easy one) is to put a bunch of desiccant packages into the goggles. Unfortunately, desiccants tend to be toxic, and they also need to be replaced frequently. Best-case, you can use salt (which isn't toxic, but which you wouldn't want in your eyes) and bake it to get the water out. I'd like to avoid this.

The other is to use cooling to cause condensation on some heat sink or coil outside the goggles and collect or otherwise discard the condensed water (the way that both air conditioning units and professional dehumidifiers work). The catch is that most of these have moving parts (which you don't really want attached to your face).

Luckily, there are solid-state semiconductor-based systems for cooling: peltier junctions. And, better still, all the peltier junction produces is a difference in temperature between the two sides -- meaning that we can also produce air movement by exploiting the difference in pressure between hot air and cold air.

I'm not fully convinced that this will work, but the idea is that the expanding air on the top (hot) side of the peltier pump will push through the tube into the heat sink on the cool (bottom) side, and the movement of the air will create just enough suction to cause the air in the goggles to pass through the cool heat sink and condense some of its water, which will then drip off. I will probably need to mount a thermistor on the top and build an automatic shut-off that will kill the power to the heat pump when it gets too hot.
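The shut-off logic itself is trivial; the only wrinkle is, again, that the Pi has no analog inputs, so the thermistor has to come in through an external ADC. A rough sketch, assuming a 10k thermistor in a divider read by an MCP3008 and a GPIO-driven MOSFET (or relay) switching the pump -- all the component values, pins, and the cutoff temperature are placeholders:

    # Sketch of the thermal cutoff: a thermistor in a voltage divider is read
    # through an MCP3008 ADC, and a GPIO pin driving a MOSFET (or relay) kills
    # power to the peltier pump when the hot side passes a cutoff temperature.
    # Component values, pins, and the cutoff are all placeholders.
    import math
    import time
    import spidev
    import RPi.GPIO as GPIO

    PUMP_PIN = 21        # GPIO pin switching the pump's power (via a MOSFET)
    THERM_CH = 2         # MCP3008 channel wired to the thermistor divider
    R_FIXED = 10000.0    # fixed divider resistor, ohms (thermistor on the 3.3V side)
    R_NOMINAL = 10000.0  # thermistor resistance at 25 C
    BETA = 3950.0        # thermistor beta coefficient, from its datasheet
    CUTOFF_C = 60.0      # kill the pump above this temperature

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.HIGH)  # pump on

    spi = spidev.SpiDev()
    spi.open(0, 0)
    spi.max_speed_hz = 1350000

    def read_adc(channel):
        reply = spi.xfer2([1, (8 + channel) << 4, 0])
        return ((reply[1] & 3) << 8) | reply[2]

    def read_temp_c():
        """Convert the divider reading to degrees C with the simplified beta equation."""
        # Clamp the raw counts to avoid dividing by zero or taking log(0) at the rails.
        counts = min(max(read_adc(THERM_CH), 1), 1022)
        r_therm = R_FIXED * (1023.0 / counts - 1.0)
        inv_t = 1.0 / 298.15 + math.log(r_therm / R_NOMINAL) / BETA
        return 1.0 / inv_t - 273.15

    while True:
        if read_temp_c() > CUTOFF_C:
            GPIO.output(PUMP_PIN, GPIO.LOW)  # too hot: cut power and leave it off
            break
        time.sleep(1)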

Update (2014-02-01):
I received the Twiddler, and last Tuesday I took the unit out for a test at the local hackerspace:

and also at work:
The battery life seemed pretty OK, but I hadn't gotten a replacement wifi adapter yet, so I didn't trust the numbers.

Today, I installed the new wifi adapter (an Edimax EW-7811Un, based around the RTL8188CUS and very common for Pi users) and took the machine for a walk. I wore it and kept it on and connected to wifi from around 11AM to around 5PM without recharging, and this was fairly successful.

There are two caveats. One is that the adapter doesn't seem to like mixed 802.11BG networks (which is what my router provides), and an unsecured 802.11BG network shows up as a WEP-enabled ad-hoc network; I switched my router into B mode and this fixed the immediate problem. The other is power: even then I still couldn't connect -- and it turned out that if I plugged in the battery pack, I could connect just fine.

The SD card did just fine as well, which was somewhat unexpected since the previous one had its holder shear in short order after much less wear. I credit a combination of more support in the laser-cut box I'm using and the unexpectedly good performance of the hacked-together SD-card protector I stuck on. I had a bunch of rubber belts lying around that were just slightly smaller than the perimeter of the box, and I wrapped one around the end of the box such that it would extend out just past the end of the SD card.
The belt wraps around the end, and juts out just past the edge of the SD card. The battery is held on with rubber bands, for now.

The wifi dongle is just below the keyboard connector here, so it can't be seen. The battery's wire trails around.
As a side note to the de-fogging saga, I've noticed that fogging doesn't happen nearly as much outside in the cold (for all the obvious reasons), but that this doesn't really help because outside light coming through the left side of the goggles actually overpowers the light from the display. In addition to defogging mechanisms, I will need to figure out a way to block out light. I may roll that into attempts to encase the electronics and protect them from the elements.

Update 2014-02-14:
A few weeks ago, while I was installing an opaque mask on the lens, the connections between the two lens-mounted boards broke. On the display side, there's a socket, but on the other board -- the one that the (USB-lookalike) connector going to the controller plugs into -- the wires are surface-mount soldered to vias. Anyone trying to replicate this project should probably desolder these wires and solder them on the other side of the via, because the torque from the cable leading to the controller is enough to pull the velcro off (and so it was only a matter of time before surface-mount connections like that pulled apart).

I created the mask just out of thin cardboard and electrical tape, using a spare lens as a template and cutting out a rectangle for where the display was affixed. I placed it between the lens and the screw-on abutment that holds replacement lenses in.

After adding the mask and redoing the solder, I folded velcro around the edge opposite the wires I just soldered in and affixed the board to the velcro vertically; I also changed the position of the cable to avoid producing so much torque on the wires.


The next step for this project is the production of a protective casing for the lens-mounted boards. I wanted such a casing to be transparent, but it looks like, barring the help of a glass blower or an expert in lexan-bending to produce a small box, I will not be able to produce a transparent case. I may need to use an opaque printed case.

Update 2014-04-25:
Adafruit published a tutorial for an extremely similar design. Their version is slicker and easier to follow, so if you are trying to build something like this, I recommend primarily following their directions (particularly for the disassembly portion). If you are trying to build this unit exactly, follow their disassembly instructions and then follow mine beginning with step #4.

A future path I'd like to explore is to use something like this for legitimate overlay. My general idea is to remove the goggles entirely, use the Adafruit version of the mount with the elbow inverted (so that the display goes in the opposite direction) or straight (so that the display juts straight out), then mount an extra fresnel lens (if necessary) and a half-silvered mirror at a 45-degree angle in front of the eye: