Friday, May 29, 2015

Myths of competence and specialization

An idea has been going around for a while that science fiction, more than anything, is a literature of competence — the protagonists of science fiction are competent people who can be trusted to do the right things under the circumstances (given their knowledge of the situation), and their mistakes can generally be traced back to withheld information or the effects of external forces that manipulate their mental state (like drugs or mind control). This is true of a lot of golden-age science fiction (wherein, generally speaking, the protagonists were also respectable, if not amiable — think Asimov & Heinlein), and is generally less true of new wave science fiction (think of Ellison, whose protagonists are occasionally mad or naive or belong to a culture with alien values) and first-generation cyberpunk (think of Neuromancer, wherein every character who isn’t mad is self-loathing and self-destructive to varying degrees). But, a fiction of competence is also the lens through which many people see the real world — and some of them are probably drawn to golden-age science fiction for this reason.
I have a friend who is, like me, a software engineer. He clearly sees the world through this lens. He sees people as, generally speaking, professionals; what I consider to be design errors he considers to be some unfortunate but inevitable product of circumstance that must have very good and acceptable reasons behind it. He acknowledges the occasional genuinely poor decision, when it’s undeniable that there’s no good excuse for it, but he considers such things rare and rarely acknowledges poor decisions made by people he respects. When faced with a problem, he prefers to theorize about it rather than probe it experimentally, and is willing to spend more time generating an elaborate mental model of a problem than experimentally discovering its contours. In other words, he has confidence in the integrity of his mind and the minds of others, and considers the production of mental models to be a generally foolproof method for exploring the world.
Although I respect him a great deal, and although I admit that his knowledge of many fields is deeper than mine, I consider his attitude naively optimistic.
My model of the world is compatible with the rule of the blind idiot god. The universe is complex enough that few elements can be modeled perfectly by human beings. Because competence is difficult to achieve, few people achieve it — incompetence and poor decisions are the rule, rather than the exception. Furthermore, even competent people have little reason to exercise their competence — the illusion of competence is rewarded more than actual competence is, and exercising one’s competence takes time and energy that pretending to exercise one’s competence does not — and society rewards behaviors that are incompatible with the production and maintenance of genuine competence.
Human beings tend to value confidence in themselves. I consider this a major failure. Because the world cannot be perfectly modeled, all models are by definition imperfect — and confidence is faith in the predictive success of one’s mental model for situations upon which it has not yet been tested. Confidence is valued in oneself in part because confidence (i.e., lack of hesitation) is valuable in genuine emergencies — if you are being chased by a bear, spending mental effort determining whether the bear genuinely exists or is an illusion produced by a trickster god is detrimental to your expected lifespan. Genuine emergencies are more rare now than they were when the adrenal and peripheral nervous system first developed in our distant forebears, and they are less important to the survival of our genetic line — we are more likely to fail to reproduce out of a bias against children or financial instability or a lack of attraction to the opposite sex than out of actually being killed by something we could run away from (like a bicycle, an enemy, or a wild animal); as a result, in today’s world, it is generally more risky to be sure than to be unsure. The same confidence in the correctness of your mental model of the world that will save you from a wild animal will get you run over by a truck, because change blindness is part of the same set of energy-saving heuristics that allow human beings to do things faster and with less effort by introducing errors into our models of the world; the same confidence that would allow a human being in a nomadic-band-of-hunter-gatherers situation to fight effectively against another band trying to use the same resources will lead a modern person to fight and die in a religious war.
Human beings also value confidence in leaders. This is for a similar reason — if you are in a nomadic band of fewer than 150 other people, and you are being attacked by another group of approximately the same size, your odds are about even so long as your hesitation level is about even, but lack of hesitation gives you a tiny advantage. Your leader, because he is in charge of coordinating tactics, is the bottleneck — his hesitation is your hesitation. This is the context where leaders are useful — when discounting planning time your odds are 50/50, but when every second of hesitation counts against you, fortune favors fools who rush in over the ones who consider the situation carefully. But, few genuinely important situations today depend upon split-second decision-making. Unless you’re in the military, your ability to make poor decisions quickly will never be more important to your lifespan than your ability to make good decisions (although the ability to make good decisions quickly is beneficial in a wide variety of situations, it’s not really practical to develop), and unless you play professional sports the same is true of your livelihood. A good leader in typical modern circumstances is someone who takes minutes or hours to think a decision through, and who knows when to back off and reconsider a decision that has proven to be flawed — in other words, exactly the kind of person who appears unconfident to the point of neurosis. Because our heuristics are stuck in the stone age, to become a leader you must appear confident, but in order to be a good leader your apparent confidence must be an illusion.
This is not to say that I don’t believe in competence. In fact, I think competence is undervalued and under-sold. Take, for instance, the polymath.
A lot of people these days say that polymaths can no longer exist — that the world has gotten too complex. Bullshit. Our models of the world have gotten better — which means that our ability to predict the world has gotten better. It’s easier to be a polymath today than ever before, because being a polymath means being competent in a variety of fields, and great strides have been made in every field with regard to our ability to learn to become competent in them. The world has not gotten more complex, but instead, through human endeavours, it has gotten slightly simpler — not because we have changed the world but because we have changed our minds, developing mental tools for organizing the massive clusterfuck that is reality into more and more useful predictive models, wherein the complexity of the model grows more slowly than its predictive utility.
The same narrative that claims that there can be no more polymaths tells us that specialization is desirable, or at worst an unfortunate necessity. If we can’t learn a variety of mental models because the models have gotten more complex, then we need to stick to our lane and go deep into one silo, solving the problems that fit into that domain.
But, all problems are in reality multidisciplinary. Disciplines and problem domains are inventions of human beings, and reality has no interest in them. The specialist is blind to this. The specialist sees the portions of the problem that fall into his domain, and perhaps slightly foggily sees the portions that fall into neighbouring domains; the remainder is some vast undifferentiated miasma that must be left to other people to figure out. As a result, the specialist can be very confident about his results — because he has chopped off everything in the universe that he doesn’t know how to model, and has applied a model to the tiny portion that has been left over. His model may not yield useful results, because he has ignored most of the universe, and he really can’t effectively isolate his subject that way.
The generalist, on the other hand, sees the universe and applies several different models, each covering a different aspect of the subject (as well as the sections of the world immediately surrounding it). The polymath, who is a generalist upgraded with the knowledge of several specialists, does the same thing with better results because he has a wider variety of useful models and the experience to determine which models are appropriate. The polymath can do this because he realises that each specialized field is a pattern-recognition machine, and that, because some patterns can be found in the world wherever you look, many disciplines have independently reinvented the same or very similar models under different terminology. He can combine the similar models to form superior hybrid models, and when the models are exactly the same he can learn the new terminology or use the shared model to synthesize its sister models across domains. And, since models build upon each other based on shared patterns, he can use models from one discipline to more efficiently learn models from another, unrelated discipline, because the two happen to share underlying patterns. Because of the polymath’s wider scope, he is also aware of common failures in various forms of various models — he is aware that the failures can compound, and so despite having better predictive results at a lower cost, he also has lower confidence; he has eliminated the artificially inflated confidence of the specialist and is left with a level of confidence more appropriate to the actual situation.
I feel like this myth of competence and confidence — the Captain Kirk character voyaging into the unknown and believing that he already knows it, confidently applying human biases to non-human situations and considering himself to be morally superior to cultures that don’t share his values — is not merely naive and optimistic, but actually regressive and dangerous. Any confident leader and man of action can be perceived, with a minor shift of perspective, as an arrogant fool who acts without thinking; any crusade against evil people doing evil things can be reframed as an intolerant bigot battling a system of values he doesn’t understand. This kind of literature transplants into the space age the kind of leader who hasn’t really been appropriate for a leadership role since the dawn of agriculture.

Originally posted at https://medium.com/@enkiv2/myths-of-competence-and-specialization-c5d27506c2bf

Saturday, May 23, 2015

Against certain naive varieties of transhumanist sentiment

A frequent idea that I run across when speaking to technophiles with transhumanist leanings these days is the superiority of tech over biology. People will say, "I'll upload my brain and live forever", or "I'll replace my arm so that I can be stronger", or "I'll get a wetwire to the internet so I can read faster". This isn't a new idea; I said variations on the same thing when I was fifteen. But, it's absolutely stupid.

We have never built a machine with a lifespan and resilience comparable to a human being. Machine failure rates fall along a bathtub curve, but while an expected human lifespan is between seventy and eighty years these days, it's the rare freak of a machine that still functions after ten or twenty years -- let alone thirty, let alone continuously. Biological systems have insanely complex self-repair and self-maintenance systems, and the reason we live for eighty years is that our parts are continuously being maintained, rather than undergoing routine maintenance on a human scale of weekly-monthly-yearly. The very first programmable electromechanical computers were built in the 30s (depending on who you ask and how you define it, you can push it forward or back about ten years), meaning that a human being born at the same moment as the very first programmable computer in the modern sense and living an average lifetime would be dying *right now*; drum and disk storage is twenty years younger (along with transistors, RAM that's not based on relays/mercury tubes/CRTs, programming languages other than machine code, and interactive terminals), and the internet is about fifteen years younger than *that* (along with email, pipes, directories, and asymmetric key cryptography). Someone born at the moment the first packet was sent over the internet would be middle-aged. Nevertheless, all these systems have changed drastically many times over the course of their lifetime, in incompatible ways. All of the component parts have been replaced many times over. At various points in the past, all these systems have had *complete* failures (yes, including the internet). These systems are not at the point where they could be expected to safeguard the lifetime of a rat, let alone extend the lifetime of a human being.
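
A back-of-the-envelope version of that timeline arithmetic, in Python, using the commonly cited dates (exactly which machine counts as "first" shifts the numbers by a decade or so):

    # Rough arithmetic behind the lifespan comparison above. The dates are
    # the commonly cited ones; which machine you count as "first" moves the
    # answer by roughly a decade.
    FIRST_PROGRAMMABLE_COMPUTER = 1938   # Zuse Z1-era electromechanical machines
    FIRST_ARPANET_PACKET = 1969
    THIS_POST = 2015

    print(THIS_POST - FIRST_PROGRAMMABLE_COMPUTER)  # 77 -- a full average human lifetime
    print(THIS_POST - FIRST_ARPANET_PACKET)         # 46 -- solidly middle-aged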

Likewise, with prosthetic arms. Prosthetic arms are great -- for people who are missing their arms. Cochlear implants aren't competing with people's real ears; they're competing with being deaf. The prosthetic eyes that have finally gotten FDA approval have approximately the same resolution as a TRS-80 Model 100 -- they aren't competing with real eyes, but with total blindness.

Wetwires are in an even worse position. The current state of the art in brain implants can, with incredibly invasive and dangerous brain surgery, temporarily hook your brain up to 200 I/O lines, each of which neurons *might* decide to grow on. Wetwires are competing with reading: a four thousand year old technology that's constantly being improved upon, that takes advantage of the human eye and optic nerve -- a pipe so fat that the eye is considered to be a part of the brain, a pipe so fat that the eye does complex processing independently of the visual cortex and can independently trigger signals to the amygdala about emotionally striking scenes before the visual cortex can even receive the image data. Furthermore, reading is a technology that the government of every developed nation spends huge amounts of money on installing into its citizens! Wetwires can't compete with that.

That said, this isn't the end of transhumanism, or even of grinding. Implants aren't going to go away. It's just that we aren't looking at them correctly.

Implants are a *long* way away from replacing the things that human beings already do well, like living and thinking and reading and moving. Generally speaking, when there's a problem of scale, we invent an external technology to handle it rather than modifying ourselves -- when we need to turn bolts, we build wrenches that fit in our hands instead of hacking our arms off and replacing them with wrenches. If we depend upon implant tech (and other varieties of transhuman tech) to fund itself by being an improvement over what humans are already capable of doing, then the whole field will go bankrupt. But, there are two fields in which this kind of tech can excel. One is doing a worse job than the human body at tasks the body already does -- prosthetics for people with missing limbs, and replacement parts for people whose parts are faulty or missing. The other is allowing human beings to do things they've never done before -- not by increasing scale, but by qualitative change.

The cochlear implant kind of sucks. When it's installed, wires are stuck to the cochlea -- a snail-shaped fluid-filled organ in the inner ear that does the heavy lifting in hearing (the rest of the ear is basically involved in amplification and protection). In normal hearing, vibrations from outside the ear are amplified by a set of bones that operate like a cam assembly, before pressing on a flexible membrane on the big end of the cochlea, and the movement of tiny hairs inside the cochlea produces the perception of sound, with the position of the hairs that are most stimulated determined by the frequency of the sound. In a cochlear implant, the electrodes stimulate the auditory nerve directly with electrical current, doing the job of hairs that no longer work, and so the number of wires installed determines the frequency resolution available. We do not have the technology to produce CD-quality sound. We don't even have the technology to produce Speak & Spell-quality sound. People with cochlear implants are stuck trying to decode speech based on fewer distinct frequencies than there are bars on a child's xylophone. But the cochlear implant, as an accident of its nature, has one improvement over the ear -- it has a built-in headphone jack. Cochlear implant-like technologies are far from being an improvement over ears, but when combined with throat mics or other mechanisms for producing the fundamental elements of speech from subvocalizations, they might be an improvement over the walkie-talkie. At the point at which this technology has enough demand to make people voluntarily submit to brain surgery, I expect that this is exactly how it will be used (and I expect the first market to be military or paramilitary -- people who, on a life or death basis, need to communicate without using their hands and without being heard by other people nearby).
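
To get a feel for what that resolution limit sounds like, researchers simulate implants for normal-hearing listeners with a noise vocoder: split the signal into a handful of frequency bands, keep only each band's slow amplitude envelope, and use those envelopes to modulate band-limited noise. Below is a rough sketch of that idea in Python with numpy and scipy; the channel count, band edges, and filter orders are arbitrary illustrative choices, not the parameters of any real implant.

    # Crude noise-vocoder simulation of an n-channel cochlear implant:
    # keep only the slow amplitude envelope in each of a few frequency
    # bands and throw the fine structure away. Assumes a mono signal
    # sampled at 16 kHz or higher.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def vocode(signal, fs, n_channels=8, lo=200.0, hi=7000.0):
        edges = np.geomspace(lo, hi, n_channels + 1)        # log-spaced band edges
        env_lp = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
        out = np.zeros(len(signal))
        for low, high in zip(edges[:-1], edges[1:]):
            band_bp = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(band_bp, signal)                 # what this "electrode" covers
            envelope = sosfilt(env_lp, np.abs(band))        # rectify and smooth
            carrier = sosfilt(band_bp, np.random.randn(len(signal)))  # band-limited noise
            out += envelope * carrier                       # the envelope is all that survives
        return out

Feed it a speech recording at eight channels and then at forty, and the gap between an implant and a working cochlea becomes audible almost immediately.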

There's another trend going on, as well. Just as desktops became laptops and laptops became smartphones, smartphones are on the cusp of becoming wearables, and wearables will become implants.

However, this change-over is very rarely quick, and even more rarely complete. Before desktops, we had minicomputers, and before minicomputers, mainframes; however, mainframes and minicomputers are not quite gone (IBM still sells mainframes running z/OS, although that market is shrinking), and desktops are hardly going anywhere.

We haven't yet reached the point where it's reasonable to develop software on a smartphone -- which means that smartphones are about where personal computers were in 1979, but beyond where laptops were in 1989 (when the now-famous New York Times article suggesting that the age of the laptop had come and gone was written, but also, paradoxically, when the HP-95 palmtop was released -- which, as it so happens, was perfectly capable of having software developed on it if you could get along with a calculator-style chiclet keyboard). Smartphones are currently being used for light-duty applications while the heavy-duty applications like video editing, compiling, and triple-A gaming are being done on laptops and desktops -- a mirror of about ten years ago, when laptops were being used for lighter-duty applications. Meanwhile, wearables are at the same stage that smartphones were in the Palm Treo era, or that PDAs were in the Newton era -- in other words, just close enough to seem potentially commercially viable, but not quite far enough along yet to be useful as anything more than an expensive toy. Expensive toys can be industry-changing, if they're popular enough, but to do that you need to pull a Sony and sell under cost at scale. (Sony is the only company I can think of that has pulled off making its new technologies ubiquitous and cheap by selling under cost at scale multiple times -- it did it with 3.5-inch floppy disks by supplying Apple with them under cost in the mid-80s, again a few years later with CDs, and again by shipping every PS3 with a Blu-ray drive. But, it's also failed multiple times with that technique -- with MiniDiscs, with the Cell processor...)

Every few years, the entire tech industry pulls out a twenty-year-old project from the MIT Media Lab or CMU or PARC or somewhere and collectively decides to shit its pants over it. Recently, we've been hitting a quadruple-whammy: wearable computers, the Internet of Things, 3D printing, and virtual reality.

The current wearable computer boom started with Google Glass taking pretty much equally from the work that Thad Starner and Steve Mann were doing in the early 90s; appropriately, Starner was brought onto the Glass project, while Mann was completely uncredited despite the fact that they took the name from him. And, despite the fact that Glass was a complete PR disaster, Google definitely decided what parts of Starner's work to borrow with an eye toward PR -- Starner's most interesting idea, subliminal reminders, was omitted from Glass, and the Glass UI standards and frameworks were written in such a way that subliminal reminders should be completely impossible. Now, in an almost hilariously ironic turn of events, Microsoft has taken essentially the same technology, made it stereoscopic, reframed it in terms of geolocation-centric AR (something Glass was never going to be capable of doing, by design), and turned it into a massive PR success.

In comparison, the current Internet of Things boom seems to be driven entirely by industry-wide amnesia. To be fair, the industry has, until now, had a very hard time even figuring out what to call the thing. The current term of art is the Internet of Things, but from around 1995 to around 2005, everybody was calling it Ubiquitous Computing. The IoT is hitting a lot of the same media roadblocks as VR did in the early 90s, which makes me think that it's probably around the same point in the hype cycle, although technologically, it's definitely further along.

Ten years ago, when I was an unemployed teenager, I had two big projects that were lighting up my eyes. One of them was a wearable computer project. The other was a UbiComp project -- what you'd now call the Internet of Things. At the time, the wearable computer project was by far the less feasible of the two; displays were expensive, CPUs were expensive, and making the thing run off a battery while keeping it small enough and light enough to fit on your body meant lowering its capabilities to an extreme. I designed several prototype wearable computers around the AT90S8515 -- an 8-bit microcontroller that cost $10 and had 127 bytes of RAM -- and various LED-based displays, but it was clear that unless I was willing to either buy thousand-dollar equipment or strap a laptop to my back and make do with audio cues as an interface, wearable computers were really infeasible. (I ended up strapping a laptop to my back and using audio cues, in the end.) The UbiComp project, on the other hand, was completely within the realm of possibility -- I had a working prototype for a system for communal cooperative use of a single computer, based on identifier tokens stored on a cheap Wiimote knockoff that doubled as an input device; the cost of the system was the cost of a random desktop computer, a projector, and a $20 Wiimote knockoff. If I had had steady disposable income, I could have formed a corporation and finished my prototype and become yet another failed IoT startup -- the technology was there, solid, and absolutely trivial.

Today, IoT is even easier. My potentially-$300 computer could be replaced with a $20 Raspberry Pi. Wiimote knockoffs don't even cost $20 anymore. The projector costs more than the rest of the system in total, and my homebrewed account-sharing system could be replaced with the kind of cloud-based thing that newbies whip up in minutes and brag about on Hacker News. A couple years ago, I did a wearable computer, too -- with about $350 worth of parts (a Raspberry Pi, a Twiddler, a $100 head-mounted display, and a USB battery pack), I built something that, while not comparable in usability to a laptop, beat the pants off the absolute best I could do with that kind of money in 2005 -- mostly because of economies of scale provided by the popularity of smartphones. PDAs manufactured in 2005 couldn't really run 800x600 color video, or even 320x200 color video -- too slow. (Maybe you could do it if you were especially clever. I wasn't clever enough to make up for my lack of riches -- wagering the cost of disassembling an expensive PDA on my ability to make it drive a display was too rich for my blood.) A single-board computer capable of running Linux in 2005 was a fucking high-end single-board computer. But, the iPhone came out -- a single board computer running BSD shoved into a PDA -- then the Android phones started appearing a couple years later -- cheaper single board computers running Linux and Java shoved into PDAs. Now the chips that run Linux in smartphones are cheap enough that Texas Instruments will give away a handful of free samples to anybody with a university-affiliated email address, complete with specialized circuitry for fast video decoding. Single board computers running Linux can be sold for $20 and make enough money to prop up a non-profit organization. Meanwhile, some nerds figured out that a series of cheap wifi chips could be reflashed, and now you can buy complete postage-stamp-sized wifi-enabled systems that can run Lua for $5.

So, we're at the point now where you can stick the guts of a smartphone on the side of your head and have a head-mounted smartphone with a battery life of about two hours, or you can stick the guts of your smartphone on your wrist and have a smartphone with a battery life of about a day if you barely ever have the screen on. Or, you can stick the guts of a smartphone in your pocket and stick a screen on your head, and actually have a reasonable battery life with reasonable usage. We aren't at the point where we can start making fully wearable never-take-em-off computers with reasonable battery life and reasonable capability, although I think that if we take a page out of the MIT Media Lab book and combine this with IoT, we might be able to make do with what we have for a little longer. This has problems -- centralized IoT is the domain of natural monopolies, with most of them fated to go the way of AppleTalk (although centralized IoT is all the rage now, with every consortium of manufacturers competing to make their own incompatible standards on the off chance that theirs will be the one to take off); meanwhile, decentralized IoT is the stuff of IT nightmares, where failures in logistics and/or security can lead to a lightbulb DDoSing your house and/or the White House. My own design, which was based on a federated model with an open protocol and a market for competing vendors, has unfortunately been obviated by time -- it was based on the assumption that the normal use would be an evolution of the cyber-cafe, and it probably would have worked in 2005, but no longer makes sense in the same universe as widespread smartphone ownership and devices like Chromecast. Offloading computing from wearables onto IoT nodes will require an extreme of either security or naivete -- and because security is complicated, I fully expect a future hellworld of incredibly insecure wearable/IoT mesh networking comparable to the amazing terror of running Windows 9x on the internet in the 90s. Welcome back to an era where anybody with a modicum of knowledge can remote-control your computer and nobody can patch it for five years; except this time, the computer is strapped to your face.

This is a problem that *must* be solved before the wearables become implantables. Implants need to be smaller than wearables. Right now, the state of medical device security is pretty low -- while medical device software, along with airplane control software and nuclear power plant software, has higher quality standards under normal operating conditions, it's largely no better than normal consumer-grade software when it comes to resisting actual planned attacks, and sometimes worse. We already have computers in all sorts of things -- horrible, insecure computers; our airplanes can be hijacked through the in-flight wifi network, our cars can be hijacked through the CD player, our pacemakers can be remote-controlled over wifi, and our routers are already sustaining self-replicating botnets. When these devices are on our bodies, the threats become more visible; when they are in our bodies, they become potentially fatal -- not necessarily because of malice (it takes a special kind of person to actually shut down somebody's heart by exploiting their pacemaker) but because of incompetence (it doesn't particularly take a special kind of person to try to make a botnet out of every exploitable wifi-enabled device, including pacemakers, and then not check available memory and crash the pacemakers because he's just written to an address that doesn't exist).

Implants are coming, and wearables are coming first. Implants will come both faster and slower than we expect, because they won't be used how we expect. They won't make us live longer or read faster, but instead will let us do things we haven't imagined yet. Let's fix our shit before we've got buffer overflow vulnerabilities that'll take actual brain surgery to patch.

Wednesday, May 13, 2015

Utopianism and sci-fi as machine-lit

There are several popular ways to look at science fiction as a genre. I have my own preferences. That said, the major opposing perspective -- what I'd term the 'machine-lit' school of thought -- has its merits, insomuch as it highlights a set of common tendencies in science fiction. I'd like to take this space to highlight the basic premise of machine-lit, the tendencies it breeds, and why I find most machine-lit to be relatively uninteresting.

(The third major perspective, what I call the spaceship-on-the-cover style, I find wholly uninteresting and is the subject of other essays; however, this perspective is becoming historically important lately because of some drama surrounding the Hugo awards being gamed by groups who prefer this style, so it's worth mentioning in passing.)

Machine-lit is, in a general sense, the construction of a narrative around a concept invented by the author, as a capsule intended to introduce the reader to the concept. Lots of early science fiction is machine-lit for actual machines (Ralph 124C41+ being an ideal example of how this can go wrong yet still be very influential). The works of Ayn Rand are machine-lit for the Objectivist philosophy. Big-idea science fiction novels tend to be machine-lit for the ideas they represent.

One failure mode of machine-lit is that, because the narrative is intended as a delivery mechanism for the concepts, the narrative can itself be weak or nearly nonexistent if the author thinks the ideas themselves are interesting enough (Ayn Rand, again, and Gernsback, again -- but also major dystopian novels like Zamyatin's We and 1984). Likewise, style can be a major issue in machine-lit, with The Unincorporated Man's borderline-fanfic-quality prose depending upon its intended audience of libertarians to forgive a lack of technical skill in writing because the ideas are sufficiently in line with the ideology, and PKD's writing leaning so heavily on the ideas (not to mention the amphetamines) to pull it through (outside of rare stylistically-polished books like A Scanner Darkly).

There are definitely instances where books intended as machine-lit end up having well-developed plot and characters and a coherent and polished writing style (pretty much every Neal Stephenson book meets these criteria, as does Brave New World), but to some extent, doing so depends upon a kind of imagination and intellectual honesty that brings the book into the middle-ground between machine-lit and the world-building-based style of science fiction that I tend to champion, whose most extreme and visible example is seen in the post-Neuromancer works of William Gibson.

Another major failure mode of machine-lit is that, because of the dependence upon the central conceit of the book, if that conceit is uninteresting or unoriginal, the book as a whole fails along with it. With big-idea novels related to politics (Rand again) or philosophy (a handful of PKD books that lean too heavily on solipsism or philosophical zombies, and nearly every film adaptation of a PKD work), interest in these works falls along either political-ideological or philosophical-education lines -- a communist is, largely, going to find The Fountainhead or Anthem uninteresting; someone who is familiar enough with the idea of solipsism to find it fairly uninteresting will likewise find The Matrix uninteresting, while someone who rejects Searle's Chinese Room argument and the idea of philosophical zombies as based on an erroneous deification of consciousness will find the host of films about robots being incapable of emotion or of morality to be uninteresting. When the same idea is recycled into dozens of machine-lit works, the popularity of the idea itself can suffer, because while no longer wholly novel it will often be framed in similar ways, with similar changes based on the needs of the story or premise, by nearly identical stories (The Matrix has more in common with Simulacron-3 and its major film adaptations, World on a Wire and The Thirteenth Floor, than it does with Plato's Allegory of the Cave, from which all of them were derived). Today, talking about solipsism will make people think of The Matrix rather than, say, Descartes' "evil genius" -- and despite my general feeling that The Meditations failed to be adequately convincing, we as a society are favoring an action franchise with major and obvious plotholes over a fairly heavily considered work by a brilliant philosopher.

Again, if a text develops its characters and plot adequately, the central conceit can essentially be ignored -- a good ghost story is good even to people who don't believe in ghosts, while a bad ghost story will fail to entertain enough to motivate people to suspend their disbelief.

Machine-lit shares with the rest of speculative fiction a basis in a counterfactual model of the world. That is to say, we start our world-building by setting some axioms that, in our world, are not true, and work from there. The difference is that machine-lit, by definition, performs the basic world building then immediately jumps to narrative, then stops as soon as something resembling a completed text is produced. Within world-building-based science fiction, a much more complex world is built, and the narrative and characters stem from that world organically.

This requires a dedication to completeness and intellectual honesty, in part because genuinely following the logical progression of the central mechanism of a counterfactual world can point out flaws in its structure.

In cryptography, the first and most important rule is never to roll your own crypto -- always use a well-known and well-tested algorithm, at the very least, and ideally also use a well-known and well-tested implementation. The reason is that nobody who wants their crypto to succeed introduces flaws intentionally; the flaws that remain are the ones the designer couldn't see, so fatal flaws can generally only be identified by other people -- and the more people there are looking for flaws in an algorithm, the faster such flaws are found (and the longer it takes to find fatal flaws in an algorithm, the more likely it is that such flaws are difficult to find). Everyone who designs crypto professionally is also skilled in trying to break crypto: you learn to avoid the flaws that you have discovered how to exploit. Likewise in computer security -- the research arm of the computer security community consists of people who figure out how to break security and then figure out how to patch those holes.
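
To make the first rule concrete: in practice it means reaching for a vetted recipe from a mainstream library instead of wiring up ciphers, IVs, padding, and MACs by hand. Here is a minimal sketch in Python, assuming the widely reviewed cryptography package is installed (the message is invented for the example):

    # "Never roll your own crypto" in practice: use a vetted recipe
    # (Fernet, from the `cryptography` package) rather than assembling
    # AES modes, IVs, padding, and MACs yourself.
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()              # random key, urlsafe-base64 encoded
    f = Fernet(key)

    token = f.encrypt(b"attack at dawn")     # authenticated encryption in one call

    try:
        print(f.decrypt(token))              # b'attack at dawn'
    except InvalidToken:
        # decrypt() verifies the MAC before decrypting; a tampered or
        # forged token raises an exception instead of returning garbage.
        print("token was tampered with")

The security here does not come from my own cleverness; it comes from the scrutiny of the many people who have spent years attacking these primitives and this implementation.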

In fact, this is a common pattern in legitimately serious enterprises. The scientific method is exactly this: suggest a model of the world, and then recruit people to attack it. The adversarial justice system is based on two groups of people presenting different models of the world and attacking each others' models. Even in philosophy, philosophers engage in critiques of the ideas of other philosophers, rather than ignoring any idea they don't agree with.

Any functional member of any of these communities will attempt, before putting their ideas out into the world, to stress-test them personally -- formulate simple attacks, determine which portions of the idea are weak and whether they can be strengthened without complete restructuring.

Machine-lit, by and large, fails to perform these sanity checks. Machine-lit is the domain of people who are so in love with their ideas that they cannot bear to stress-test those ideas before pushing them out into the world.

An ideology at the core of machine-lit, if properly investigated, would collapse upon itself or mutate such that it fails to be an ideology. A utopia at the core of machine-lit would, upon close inspection, become a dystopia; a dystopia, upon close inspection, would yield some happy and fulfilled people, making the message of the book ambiguous. An actual machine at the core of machine-lit, if properly and rigorously tested, would become at worst a patent application but possibly an actual invention.

I'm perfectly in favor of optimism in science fiction. Nothing is to be gained from keeping the genre grimdark as a rule, in the same way that nothing is to be gained from keeping superhero movies grimdark. However, utopian science fiction represents a failure to take the medium seriously -- and a shallow dystopia or cozy apocalypse is no better. Science fiction should be a genre of ideas, but there's no point if we allow our ideological biases and our love of shiny toys to turn it into a genre of shallow ideas shielded from unforgiving reality. The real world has problems, and while escapism is fine, a work cannot simultaneously be an escapist fantasy and a serious analysis presenting a serious solution to the problems it fantasizes about escaping from.

Science fiction always starts as machine-lit. But, machine-lit is a larval stage that adult science fiction works outgrow.