A frequent idea that I run across when speaking to technophiles with transhumanist leanings these days is the superiority of tech over biology. People will say, "I'll upload my brain and live forever", or "I'll replace my arm so that I can be stronger", or "I'll get a wetwire to the internet so I can read faster". This isn't a new idea; I said variations on the same thing when I was fifteen. But, it's absolutely stupid.
We have never built a machine with a lifespan and resilience comparable to a human being. Machine failure rates fall along a bathtub curve, but while an expected human lifespan is between seventy and eighty years these days, it's the rare freak of a machine that still functions after ten or twenty years -- let alone thirty, let alone continuously. Biological systems have insanely complex self-repair and self-maintenance systems, and the reason we live for eighty years is that our parts are continuously being maintained, rather than undergoing routine maintenance on a human scale of weekly-monthly-yearly. The very first programmable electromechanical computers were built in the 30s (depending on who you ask and how you define it, you can push it forward or back about ten years), meaning that a human being born at the same moment as the very first programmable computer in the modern sense, and living an average lifetime, would be dying *right now*; drum and disk storage is twenty years younger (along with transistors, ram that's not based on relays/mercury tubes/CRTs, programming languages other than machine code, and interactive terminals), and the internet is about fifteen years younger than *that* (along with email, pipes, directories, and asymmetric key cryptography). Someone born at the moment the first packet was sent over the internet would be middle-aged. Nevertheless, all these systems have changed drastically many times over the course of their lifetimes, in incompatible ways. All of their component parts have been replaced many times over. At various points in the past, all of these systems have had *complete* failures (yes, including the internet). These systems are not at the point where they could be expected to safeguard the lifetime of a rat, let alone extend the lifetime of a human being.
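The bathtub curve mentioned above can be sketched in a few lines. This is a toy model with made-up parameters, not data from any real machine: the total hazard rate is the sum of a decreasing infant-mortality term, a constant random-failure term, and an increasing wear-out term.

```python
# Toy bathtub curve: total hazard = infant mortality (decreasing)
# + random failure (constant) + wear-out (increasing).
# All parameters are invented purely for illustration.

def hazard(t, infant=0.5, random=0.01, wearout=1e-6):
    """Failure rate of a hypothetical machine at age t."""
    return infant * (t + 1) ** -1.5 + random + wearout * t ** 2

early = hazard(0.0)    # high: manufacturing defects shake out
mid = hazard(10.0)     # low and flat: useful life
late = hazard(300.0)   # rising again: parts wearing out

assert early > mid < late  # the characteristic bathtub shape
```

Human mortality follows a similar shape, but the flat middle of our bathtub is decades long because of the continuous self-repair described above; for machines, it is rarely more than a decade or two.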
Likewise, with prosthetic arms. Prosthetic arms are great -- for people who are missing their arms. Cochlear implants aren't competing with people's real ears; they're competing with being deaf. The prosthetic eyes that have finally gotten FDA approval have approximately the same resolution as a TRS-80 Model 100 -- they aren't competing with real eyes, but with total blindness.
Wetwires are in an even worse position. The current state of the art in brain implants can, with incredibly invasive and dangerous brain surgery, temporarily hook your brain up to 200 I/O lines, each of which neurons *might* decide to grow on. Wetwires are competing with reading: a four-thousand-year-old technology that's constantly being improved upon, and that takes advantage of the human eye and optic nerve -- a pipe so fat that the eye is considered to be part of the brain, a pipe so fat that the eye does complex processing independently of the visual cortex and can independently trigger signals to the amygdala about emotionally striking scenes before the visual cortex can even receive the image data. Furthermore, reading is a technology that the government of every developed nation spends huge amounts of money on installing into its citizens! Wetwires can't compete with that.
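To put rough numbers on how fat that pipe is: the figures below are back-of-envelope approximations from general anatomy references, not measurements, but the orders of magnitude are what matter.

```python
# Rough scale comparison (approximate figures, for illustration only):
# the state-of-the-art implant's 200 I/O lines vs. the optic nerve
# that reading rides on.

implant_lines = 200
optic_nerve_fibers = 1_200_000   # ~1.2 million axons per optic nerve

ratio = optic_nerve_fibers / implant_lines
print(f"the optic nerve has roughly {ratio:,.0f}x more channels")  # ~6,000x
```

Even if every one of those 200 lines worked perfectly, the wetwire would be starting four orders of magnitude behind the channel that reading already uses for free.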
That said, this isn't the end of transhumanism, or even of grinding. Implants aren't going to go away. It's just that we aren't looking at them correctly.
Implants are a *long* way away from replacing the things that human beings already do well, like living and thinking and reading and moving. Generally speaking, when there's a problem of scale, we invent an external technology to handle it -- when we need to turn bolts, we build wrenches that fit in our hands instead of hacking our arms off and replacing them with wrenches. If we depend upon implant tech (and other varieties of transhuman tech) to fund itself by being an improvement over what humans are already capable of doing, then the whole field will go bankrupt. But, there are two fields in which this kind of tech can excel. One is doing a worse job than the human body at tasks the human body already does -- prosthetics for people with missing limbs, and replacement parts for people whose parts are faulty or missing. The other is allowing human beings to do things they've never done before -- not by increasing scale, but by qualitative change.
The cochlear implant kind of sucks. When it's installed, wires are threaded into the cochlea -- a snail-shaped fluid-filled organ in the inner ear that does the heavy lifting in hearing (the rest of the ear is basically involved in amplification and protection). In normal hearing, vibrations from outside the ear are amplified by a set of bones that operate like a cam assembly, before pressing on a flexible membrane on the big end of the cochlea; the movement of tiny hairs inside the cochlea produces the perception of sound, with the position of the hairs that are most stimulated determined by the frequency of the sound. In a cochlear implant, electrodes stimulate the auditory nerve directly with electrical current, bypassing the hairs, and so the number of wires installed corresponds to the resolution of sound available. We do not have the technology to produce CD-quality sound. We don't even have the technology to produce Speak-and-Spell-quality sound. People with cochlear implants are stuck trying to decode speech based on fewer distinct frequencies than there are bars on a child's xylophone. But the cochlear implant, as an accident of its nature, has one improvement over the ear -- it has a built-in headphone jack. Cochlear-implant-like technologies are far from being an improvement over ears, but when combined with throat mics or other mechanisms for producing the fundamental elements of speech from subvocalizations, they might be an improvement over the walkie-talkie. At the point at which this technology has enough demand to make people voluntarily submit to brain surgery, I expect that this is exactly how it will be used (and I expect the first market to be military or paramilitary -- people who, on a life-or-death basis, need to communicate without using their hands and without being heard by other people nearby).
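The frequency-resolution problem can be made concrete with a sketch. This is a deliberately simplified model, not how real implant processors actually work: every incoming frequency collapses onto the nearest of N log-spaced electrode bands, so N electrodes can only ever convey N distinct pitches.

```python
# Simplified model of electrode count limiting pitch resolution.
import bisect

def electrode_for(freq_hz, n_electrodes=16, lo=200.0, hi=8000.0):
    """Return the index of the electrode band a frequency falls into."""
    # Band edges spaced logarithmically between lo and hi, roughly
    # mirroring the cochlea's own frequency map.
    edges = [lo * (hi / lo) ** (i / n_electrodes) for i in range(1, n_electrodes)]
    return bisect.bisect(edges, freq_hz)

# Two clearly different musical notes land on the same electrode...
assert electrode_for(440.0) == electrode_for(460.0)
# ...while only widely separated frequencies stay distinguishable.
assert electrode_for(300.0) != electrode_for(3000.0)
```

With sixteen bands covering the whole speech range, distinctions that any hearing child can make simply do not exist in the signal, which is exactly the xylophone problem described above.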
There's another trend going on, as well. Just as desktops became laptops and laptops became smartphones, smartphones are on the cusp of becoming wearables, and wearables will become implants.
However, this change-over is very rarely quick, and even more rarely complete. Before desktops, we had minicomputers, and before minicomputers, mainframes; however, mainframes are not quite gone (IBM still sells machines running z/OS, although most of the market is dying), and desktops are hardly going anywhere.
We haven't yet reached the point where it's reasonable to develop software on a smartphone -- which means that smartphones are about where personal computers were in 1979, but beyond where laptops were in 1989 (when the now-famous New York Times article suggesting that the age of the laptop had come and gone was written -- and, paradoxically, just before the HP 95LX palmtop was released, which, as it so happens, was perfectly capable of having software developed on it if you could get along with a calculator-style chiclet keyboard). Smartphones are currently being used for light-duty applications while the heavy-duty applications like video editing, compiling, and triple-A gaming are being done on laptops and desktops -- a mirror of about ten years ago, when laptops were being used for lighter-duty applications. Meanwhile, wearables are at the same stage that smartphones were in the Palm Treo era, or that PDAs were in the Newton era -- in other words, just close enough to seem potentially commercially viable, but not quite far enough along yet to be useful as anything more than an expensive toy. Expensive toys can be industry-changing, if they're popular enough, but to do that you need to pull a Sony and sell under cost at scale. (Sony is the only company I can think of that has pulled off making its new technologies ubiquitous and cheap by selling under cost at scale multiple times -- it did it with 3.5 inch floppy disks by supplying Apple with them under cost in the mid-80s, again a few years later with CDs, and again by shipping every PS3 with a blu-ray drive. But, it's also failed multiple times with that technique -- with mini-discs, with the cell processor...)
Every few years, the entire tech industry pulls out a twenty-year-old project from the MIT Media Lab or CMU or PARC or somewhere and collectively decides to shit its pants over it. Recently, we've been hitting a quadruple-whammy: wearable computers, the Internet of Things, 3d printing, and virtual reality.
The current wearable computer boom started with Google Glass taking pretty much equally from the work that Thad Starner and Steve Mann were doing in the early 90s; appropriately, Starner was brought onto the Glass project, while Mann was completely uncredited despite the fact that they took the name from him. And, despite the fact that Glass was a complete PR disaster, Google definitely decided what parts of Starner's work to borrow with an eye toward PR -- Starner's most interesting idea, subliminal reminders, was omitted from Glass, and the Glass UI standards and frameworks were written in such a way that subliminal reminders should be completely impossible. Now, in an almost hilariously ironic turn of events, Microsoft has taken essentially exactly the same technology, made it stereoscopic, reframed it in terms of geolocation-centric AR (something Glass was never going to be capable of doing, by design), and turned it into a massive PR success.
In comparison, the current Internet of Things boom seems to be driven entirely by industry-wide amnesia. That amnesia is somewhat excusable, since the industry has, until now, had a very hard time figuring out what to call the field. The current term of art is the Internet of Things, but from around 1995 to around 2005, everybody was calling it Ubiquitous Computing. The IoT is hitting a lot of the same media roadblocks as VR did in the early 90s, which makes me think that it's probably around the same point in the hype cycle, although technologically, it's definitely further along.
Ten years ago, when I was an unemployed teenager, I had two big projects that were lighting up my eyes. One of them was a wearable computer project. The other was a UbiComp project -- what you'd now call the Internet of Things. At the time, the wearable computer project was by far the less feasible of the two; displays were expensive, cpus were expensive, and making either run off a battery while keeping the thing small enough and light enough to fit on your body meant lowering its capabilities to an extreme. I designed several prototype wearable computers around the AT90S8515 -- an 8-bit microcontroller that cost $10 and had 512 bytes of ram -- and various LED-based displays, but it was clear that unless I was willing to either buy thousand-dollar equipment or strap a laptop to my back and make do with audio cues as an interface, wearable computers were really infeasible. (I ended up strapping a laptop to my back and using audio cues, in the end.) The UbiComp project, on the other hand, was completely within the realm of possibility -- I had a working prototype for a system for communal cooperative use of a single computer, based on identifier tokens stored on a cheap wiimote knockoff that doubled as an input device; the cost of the system was the cost of a random desktop computer, a projector, and a $20 wiimote knockoff. If I had had steady disposable income, I could have formed a corporation, finished my prototype, and become yet another failed IoT startup -- the technology was there, solid, and absolutely trivial.
Today, IoT is even easier. My potentially-$300 computer could be replaced with a $20 raspberry pi. Wiimote knockoffs don't even cost $20 anymore. The projector costs more than the rest of the system in total, and my homebrewed account-sharing system could be replaced with the kind of cloud-based thing that newbies whip up in minutes and brag about on hacker news. A couple years ago, I did a wearable computer, too -- with about $350 worth of parts (a raspberry pi, a Twiddler, a $100 head-mounted display, and a USB battery pack), I built something that, while not comparable in usability to a laptop, beat the pants off the absolute best I could do with that kind of money in 2005 -- mostly because of economies of scale provided by the popularity of smartphones. PDAs manufactured in 2005 couldn't really drive 800x600 color VGA, or even 320x200 color VGA -- too slow. (Maybe you could do it if you were especially clever. I wasn't clever enough to make up for my lack of riches -- wagering the cost of disassembling an expensive PDA on my ability to make it drive a display was too rich for my blood.) A single-board computer capable of running Linux in 2005 was a fucking high-end single-board computer. But then the iPhone came out -- a single-board computer running BSD shoved into a PDA -- and the Android phones started appearing a couple years later -- cheaper single-board computers running Linux and Java shoved into PDAs. Now the chips that run Linux in smartphones are cheap enough that Texas Instruments will give away a handful of free samples to anybody with a university-affiliated email address, complete with specialized circuitry for fast video decoding. Single-board computers running Linux can be sold for $20 and make enough money to prop up a non-profit organization. Meanwhile, some nerds figured out that a series of cheap wifi chips could be reflashed, and now you can buy complete postage-stamp-sized wifi-enabled systems that can run Lua for $5.
So, we're at the point now where you can stick the guts of a smartphone on the side of your head and have a head-mounted smartphone with a battery life of about two hours, or you can stick the guts of your smartphone on your wrist and have a smartphone with a battery life of about a day if you barely ever have the screen on. Or, you can stick the guts of a smartphone in your pocket and stick a screen on your head, and actually have a reasonable battery life with reasonable usage. We aren't at the point where we can start making fully wearable never-take-em-off computers with reasonable battery life and reasonable capability, although I think that if we take a page out of the MIT Media Lab book and combine this with IoT, we might be able to make do with what we have for a little longer. This has problems -- centralized IoT is the domain of natural monopolies, with most of them fated to go the way of AppleTalk (although centralized IoT is all the rage now, with every consortium of manufacturers competing to make their own incompatible standards on the off chance that theirs will be the one to take off); meanwhile, decentralized IoT is the stuff of IT nightmares, where failures in logistics and/or security can lead to a lightbulb DDoSing your house and/or the White House. My own design, which was based on a federated model with an open protocol and a market for competing vendors, has unfortunately been obviated by time -- it was based on the assumption that the normal use would be an evolution of the cyber-cafe, and it probably would have worked in 2005, but no longer makes sense in the same universe as widespread smartphone ownership and devices like chromecast.
Offloading computing from wearables onto IoT nodes will require an extreme of either security or naivete -- and because security is complicated, I fully expect a future hellworld of incredibly insecure wearable/IoT mesh networking comparable to the amazing terror of running Windows 9x on the internet in the 90s. Welcome back to an era where anybody with a modicum of knowledge can remote control your computer and nobody can patch it for five years; except this time, the computer is strapped to your face.
This is a problem that *must* be solved before the wearables become implantables. Implants need to be smaller than wearables. Right now, the state of medical device security is pretty low -- while medical device software, along with airplane control software and nuclear power plant software, has higher quality standards under normal operating conditions, it's largely no better than normal consumer-grade software when it comes to resisting actual planned attacks, and sometimes worse. We already have computers in all sorts of things -- horrible, insecure computers; our airplanes can be hijacked through the in-flight wifi network, our cars can be hijacked through the CD player, our pacemakers can be remote-controlled over wifi, and our routers are already sustaining self-replicating botnets. When these devices are on our bodies, the threats become more visible; when they are in our bodies, they become potentially fatal -- not necessarily because of malice (it takes a special kind of person to actually shut down somebody's heart by exploiting their pacemaker) but because of incompetence (it doesn't particularly take a special kind of person to try to make a botnet out of every exploitable wifi-enabled device, including pacemakers, and then not check available memory and crash the pacemakers because he's just written to an address that doesn't exist).
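The incompetence failure mode at the end of that paragraph is worth making concrete. The sketch below is a toy model, not real firmware for any device: a worm payload writes to device memory without checking bounds. On a desktop that's a bug report; on an implant it's a crashed life-support device.

```python
# Toy model of the "incompetent botnet" failure mode: an unchecked
# write to an address that doesn't exist halts the whole device.

class TinyDevice:
    """A hypothetical implant with a small fixed amount of RAM."""
    def __init__(self, ram_bytes=256):
        self.ram = bytearray(ram_bytes)

    def write(self, addr, value):
        # No bounds check -- the bug that kills the patient.
        self.ram[addr] = value

device = TinyDevice()
crashed = False
try:
    device.write(4096, 0xFF)   # address far past the end of RAM
except IndexError:
    crashed = True             # on real firmware: a halt, not an exception

assert crashed
```

The attacker here doesn't need to want the device dead; the exact same sloppy write that merely reboots a router is fatal when the device it crashes is keeping a heart beating.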
Implants are coming, and wearables are coming first. Implants will come both faster and slower than we expect, because they won't be used how we expect. They won't make us live longer or read faster, but instead will let us do things we haven't imagined yet. Let's fix our shit before we've got buffer overflow vulnerabilities that'll take actual brain surgery to patch.
We have never built a machine with a lifespan and resilience comparable to a human being. Machine failure rates fall along a bathtub curve, but while an expected human lifespan is between seventy and eighty years these days, it's the rare freak of a machine that still functions after ten or twenty years -- let along thirty, let alone continuously. Biological systems have insanely complex self-repair and self-maintenance systems, and the reason we live for eighty years is that our parts are continuously being maintained, rather than undergoing routine maintenance on a human scale of weekly-monthly-yearly. The very first programmable electromechanical computers were built in the 30s (depending on who you ask and how you define it, you can push it forward or back about ten years), meaning that a human being living an average lifetime that was born at the same moment as the very first programmable computer in the modern sense would be dying *right now*; drum and disk storage is twenty years younger (along with transistors, ram that's not based on relays/mercury tubes/CRTs, programming languages other than machine code, and interactive terminals), and the internet is about fifteen years younger than *that* (along with email, pipes, directories, and asymmetric key cryptography). Someone born at the moment the first packet was sent over the internet would be middle-aged. Nevertheless, all these systems have changed drastically many times over the course of their lifetime, in incompatible ways. All of the component parts have been replaced many times over. At various points in the past, all these systems have had *complete* failures (yes, including the internet). These systems are not at the point where they could be expected to safeguard the lifetime of a rat, let alone extend the lifetime of a human being.
Likewise, with prosthetic arms. Prosthetic arms are great -- for people who are missing their arms. Cochlear implants aren't competing with people's real ears; they're competing with being deaf. The prosthetic eyes that have finally gotten FDA approval have approximately the same resolution as a TRS-80 Model 100 -- they aren't competing with real eyes, but with total blindness.
Wetwires are in an even worse position. The current state of the art in brain implants can, with incredibly invasive and dangerous brain surgery, temporarily hook your brain up to 200 I/O lines, each of which neurons *might* decide to grow on. Wetwires are competing with reading: a four thousand year old technology that's constantly being improved upon, that takes advantage of the human eye and optic nerve -- a pipe so fat that the eye is considered to be a part of the brain, a pipe so fat that the eye does complex processing independently of the visual cortex and can independently trigger signals to the amygdala about emotionally striking scenes before the visual cortex can even receive the image data. Furthermore, reading is a technology that the government of every developed nation spends huge amounts of money on installing into its citizens! Wetwires can't compete with that.
That said, this isn't the end of transhumanism, or even of grinding. Implants aren't going to go away. It's just that we aren't looking at them correctly.
Implants are a *long* way away from replacing the things that human beings already do well, like living and thinking and reading and moving. Generally speaking, to the extent that it's trivial to do so, when there's a problem with scale, we invent an external technology to handle it -- when we need to turn bolts, we build wrenches that fit in our hands instead of hacking our arms off and replacing them with wrenches. If we depend upon implant tech (and other varieties of transhuman tech) to fund itself by being an improvement over what humans already are capable of doing, then the whole field will go bankrupt. But, there are two fields in which this kind of tech can excel. One is performing a worse job at than the human body at tasks that the human body already does -- prosthetics for people with missing limbs, and replacement parts for people whose parts are faulty or missing. The other is allowing human beings to do things they've never done before -- not by increasing scale, but by qualitative change.
The cochlear implant kind of sucks. When it's installed, wires are stuck to the cochlea -- a snail-shaped fluid-filled organ in the inner ear that does the heavy lifting in hearing (the rest of the ear is basically involved in amplification and protection). In normal hearing, vibrations from outside the ear are amplified by a set of bones that operate like a cam assembly, before pressing on a flexible membrane on the big end of the cochlea, and the movement of tiny hairs inside the cochlea produces the perception of sound, with the position of the hairs that are most stimulated determined by the frequency of the sound. In a cochlear implant, the wires cause the hairs to be stimulated directly, with galvanism, and so the number of wires installed corresponds to the resolution of sound available. We do not have the technology to produce CD-quality sound. We don't even have the technology to produce speak-and-spell-quality sound. People with cochlear implants are stuck trying to decode speech based on fewer distinct frequencies than there are bars on a child's xylophone. But the cochlear implant, as an accident of its nature, has one improvement over the ear -- it has a built-in headphone jack. Cochlear implant-like technologies are far from being an improvement over ears, but when combined with throat mics or other mechanisms for producing the fundamental elements of speech from subvocalizations, they might be an improvement over the walkie-talkie. At the point at which this technology has enough demand to make people voluntarily submit to brain surgery, I expect that this is exactly how it will be used (and I expect the first market to be military or paramilitary -- people who, on a life or death basis, need to communicate without using their hands and without being heard by other people nearby).
There's another trend going on, as well. Just as desktops became laptops and laptops became smartphones, smartphones are on the cusp of becoming wearables, and wearables will become implants.
However, this change-over is very rarely quick, and even more rarely complete. Before desktops, we had minicomputers, and before minicomputers, mainframes; however, minicomputers are not quite gone (IBM still sells machines running z/OS, although most of the market is dying), and desktops are hardly going anywhere.
We haven't yet reached the point where it's reasonable to develop software on a smartphone -- which means that smartphones are about where personal computers were in 1979, but beyond where laptops were in 1989 (when the now-famous New York Times article suggesting that the age of the laptop had come and gone was written, but also, paradoxically, when the HP-95 palmtop was released -- which, as it so happens, was perfectly capable of having software developed on it if you could get along with a calculator-style chiclet keyboard). Smartphones are currently being used for light-duty applications while the heavy-duty applications like video editing, compiling, and triple-A gaming are being done on laptops and desktops -- a mirror of about ten years ago, when laptops were being used for lighter-duty applications. Meanwhile, wearables are at the same stage that smartphones were in the Palm Treo era, or that PDAs were in the Newton era -- in other words, just close enough to seem potentially commercially viable, but not quite far enough along yet to be useful as anything more than an expensive toy. Expensive toys can be industry-changing, if they're popular enough, but to do that you need to pull a Sony and sell under cost at scale. (Sony is the only company I can think of that has pulled off making its new technologies ubiquitous and cheap by selling under cost at scale multiple times -- it did it with 3 1/4 inch floppy disks by supplying Apple with them under cost in the mid-80s, again a few years later with CDs, and again by shipping every PS3 with a blu-ray drive. But, it's also failed multiple times with that technique -- with mini-discs, with the cell processor...)
Every few years, the entire tech industry pulls out twenty-year-old project from the MIT Media Lab or CMU or PARC or somewhere and collectively decides to shit its pants over it. Recently, we've been hitting a quadruple-whammy: wearable computers, the Internet of Things, 3d printing, and virtual reality.
The current wearable computer boom started with Google Glass taking pretty much equally from the work that Thad Starner and Steve Mann were doing in the early 90s; appropriately, Starner was brought onto the Glass project, while Mann was completely uncredited despite the fact that they took the name from him. And, despite the fact that Glass was a complete PR disaster, Google definitely decided what parts of Starner's work to borrow with an eye toward PR -- Starner's most interesting idea, subliminal reminders, was omitted from Glass and the Glass UI standards and Glass UI frameworks were written in such a way that subliminal reminders should be completely impossible. Now, in an almost hilariously ironic turn of events, Microsoft has taken essentially exactly the same technology, made it steroscopic, reframed it in terms of geolocation-centric AR (something Glass was never going to be capable of doing, by design), and turned it into a massive PR success.
In comparison, the current Internet of Things boom seems to be driven entirely by industry-wide amnesia. That's not entirely unfair, since the industry has, until now, had a very hard time figuring out what to call it. The current term of art is the Internet of Things, but from around 1995 to around 2005, everybody was calling it Ubiquitous Computing. The IoT is hitting a lot of the same media roadblocks as VR did in the early 90s, which makes me think that it's probably around the same point in the hype cycle, although technologically, it's definitely further along.
Ten years ago, when I was an unemployed teenager, I had two big projects that were lighting up my eyes. One of them was a wearable computer project. The other was a UbiComp project -- what you'd now call the Internet of Things. At the time, the wearable computer project was by far less feasible; displays were expensive, cpus were expensive, making either run off a battery and getting the thing small enough and light enough to fit on your body meant lowering its capabilities to an extreme. I designed several prototype wearable computers around the AT90S8515 -- an 8-bit microcontroller that cost $10 and had 127 bytes of ram -- and various LED-based displays, but it was clear that unless I was willing to either buy thousand-dollar equipment or strap a laptop to my back and make due with audio cues as an interface, wearable computers were really infeasible. (I ended up strapping a laptop to my back and using audio cues, in the end.) The UbiComp project, on the other hand, was completely within the realm of possibility -- I had a working prototype for a system for communal cooperative use of a single computer, based on identifier tokens stored on a cheap wiimote knockoff that doubled as an input device; the cost of the system was the cost of a random desktop computer, a projector, and a $20 wiimote knockoff. If I had had steady disposable income, I could have formed a corporation and finished my prototype and become yet another failed IoT startup -- the technology was there, solid, and absolutely trivial.
Today, IoT is even easier. My potentially-$300 computer could be replaced with a $20 raspberry pi. Wiimote knockoffs don't even cost $20 anymore. The projector costs more than the rest of the system in total, and my homebrewed account-sharing system could be replaced with the kind of cloud-based thing that newbies whip up in minutes and brag about on hacker news. A couple years ago, I did a wearable computer, too -- with about $350 worth of parts (a raspberry pi, a twiddler, a $100 head mounted display, and a USB battery pack), I built something that, while not comparable in usability to a laptop, beat the pants off the absolute best I could do with that kind of money in 2005 -- mostly because of economies of scale provided by the popularity of smartphones. PDAs manufactured in 2005 couldn't really run 800x600 color VGA, or even 300x200 color VGA -- too slow. (Maybe you could do it if you were especially clever. I wasn't clever enough to make up for my lack of riches -- wagering the cost of disassembling an expensive PDA on my ability to make it drive a display was too rich for my blood.) A single-board computer capable of running Linux in 2005 was a fucking high-end single-board computer. But, the iPhone came out -- a single board computer running BSD shoved into a PDA -- then the Android phones started appearing a couple years later -- cheaper single board computers running Linux and Java shoved into PDAs. Now the chips that run Linux in smartphones are cheap enough that Texas Instruments will give away a handful of free samples to anybody with a university-affiliated email address, complete with specialized circuitry for fast video decoding. Single board computers running Linux can be sold for $20 and make enough money to prop-up a non-profit organization. Meanwhile, some nerds figured out that a series of cheap wifi chips could be reflashed, and now you can buy complete postage-stamp-sized wifi-enabled systems that can run Lua for $5.
So, we're at the point now where you can stick the guts of a smartphone on the side of your head and have a head-mounted smartphone with a battery life of about two hours, or you can stick the guts of your smartphone on your wrist and have a smartphone with a battery life of about a day if you barely ever have the screen on. Or, you can stick the guts of a smartphone in your pocket and stick a screen on your head, and actually have a reasonable battery life with reasonable usage. We aren't at the point where we can start making fully wearable never-take-em-off computers with reasonable battery life and reasonable capability, although I think that if we take a page out of the MIT Media Lab book and combine this with IoT, we might be able to make due with what we have for a little longer. This has problems -- centralized IoT is the domain of natural monopolies, with most of them fated to go the way of AppleTalk (although centralized IoT is all the rage now, with every consortium of manufacturers competing to make their own incompatible standards on the off chance that theirs will be the one to take off); meanwhile, decentralized IoT is the stuff of IT nightmares, where failures in logistics and/or security can lead to a lightbulb DDoSing your house and/or the white house. My own design, which was based on a federated model with an open protocol and a market for competing vendors, has unfortunately been obviated by time -- it was based on the assumption that the normal use would be an evolution of the cyber-cafe, and it probably would have worked in 2005, but no longer makes sense in the same universe as widespread smartphone ownership and devices like chromecast. 
Offloading computing from wearables onto IoT nodes will require an extreme of either security or naivete -- and because security is complicated, I fully expect a future hellworld of incredibly insecure wearable/IoT mesh networking comparable to the amazing terror of running Windows 9x on the internet in the 90s. Welcome back to an era where anybody with a modicum of knowledge can remote-control your computer and nobody can patch it for five years; except this time, the computer is strapped to your face.
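The gap between the naive approach and the bare minimum of security is small, in code terms. As an illustrative sketch only (the message format, key, and function names here are invented for the example, not drawn from any real IoT stack), this is roughly what it looks like to authenticate a device-control message with a pre-shared key instead of trusting whatever shows up on the network:

```python
import hmac
import hashlib

# Hypothetical pre-shared key, provisioned when the device is paired.
SHARED_KEY = b"device-provisioning-secret"

def make_authenticated(message, key):
    # Append an HMAC-SHA256 tag so the receiver can check that the
    # sender actually knew the key.
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def secure_handle(packet, key):
    # Split off the 32-byte tag and verify it before acting on the command.
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted -- drop it on the floor
    return message.decode()

genuine = make_authenticated(b"lightbulb:off", SHARED_KEY)
forged = b"lightbulb:flood" + b"\x00" * 32  # attacker doesn't know the key

print(secure_handle(genuine, SHARED_KEY))  # -> lightbulb:off
print(secure_handle(forged, SHARED_KEY))   # -> None
```

The Windows-9x-era equivalent is a handler that skips the verification step entirely and executes whatever arrives -- which is exactly what most shipping IoT firmware does today.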
This is a problem that *must* be solved before the wearables become implantables. Implants need to be smaller than wearables. Right now, the state of medical device security is pretty low -- while medical device software, along with airplane control software and nuclear power plant software, has higher quality standards under normal operating conditions, it's largely no better than normal consumer-grade software when it comes to resisting actual planned attacks, and sometimes worse. We already have computers in all sorts of things -- horrible, insecure computers; our airplanes can be hijacked through the in-flight wifi network, our cars can be hijacked through the CD player, our pacemakers can be remote-controlled over wifi, and our routers are already sustaining self-replicating botnets. When these devices are on our bodies, the threats become more visible; when they are in our bodies, they become potentially fatal -- not necessarily because of malice (it takes a special kind of person to actually shut down somebody's heart by exploiting their pacemaker) but because of incompetence (it doesn't particularly take a special kind of person to try to make a botnet out of every exploitable wifi-enabled device, including pacemakers, and then not check available memory and crash the pacemakers because he's just written to an address that doesn't exist).
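That last failure mode -- a worm that never checks whether a device can actually hold its payload -- can be sketched abstractly. Everything below (class names, memory sizes) is invented for illustration; the point is only that the same careless write is a nuisance on a router and fatal on an implant:

```python
class Device:
    """Toy model of a networked device with a fixed amount of free memory."""
    def __init__(self, name, free_kb):
        self.name = name
        self.free_kb = free_kb
        self.alive = True

    def write_payload(self, size_kb):
        # On a real constrained device, writing past available memory means
        # writing to addresses that don't exist: the firmware crashes.
        if size_kb > self.free_kb:
            self.alive = False
            return False
        self.free_kb -= size_kb
        return True

def careless_worm(devices, payload_kb):
    # The worm never asks how much memory is free -- it just writes its
    # payload to every exploitable device it finds.
    for device in devices:
        device.write_payload(payload_kb)

router = Device("home router", free_kb=4096)
pacemaker = Device("pacemaker", free_kb=64)

careless_worm([router, pacemaker], payload_kb=512)
print(router.alive, pacemaker.alive)  # -> True False
```

The router shrugs off the infection; the pacemaker doesn't. No malice required -- just an author who never imagined a 64KB target.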
Implants are coming, and wearables are coming first. Implants will come both faster and slower than we expect, because they won't be used how we expect. They won't make us live longer or read faster, but instead will let us do things we haven't imagined yet. Let's fix our shit before we've got buffer overflow vulnerabilities that'll take actual brain surgery to patch.
"however, minicomputers are not quite gone (IBM still sells machines running z/OS, although most of the market is dying)"
Possibly a bad juxtaposition, but z/OS is IBM's operating system for their mainframes, not their minicomputers.
Modern z/OS systems don't really fit neatly into any of our categories, here. They're modern z/Architecture boxes with an OS/360-derived software stack. When I say mainframe I usually mean machines built at scale with discrete logic, because that's what it meant during the period when minicomputers and mainframes were competing -- minicomputers were less powerful but tended to be transistorized when mainframes were still vacuum tube machines, and tended to use ICs for logic at a time when mainframes were transistorized. The divisions are unclear at best, and at a time when a wristwatch can be expected to have more computing power than the last of the discrete-logic mainframes, the categorizations go from foggy to completely meaningless.
When I mention z/OS and minicomputers in the same sentence, what I mean is that z/OS is one of the last vestiges of the minicomputer era -- not a minicomputer OS itself, but an OS intended to compete with minicomputers. I am unaware of any genuine minicomputer-derived systems still on the market (not counting various and sundry unixes, since unix is very much a living thing and modern unixes are pretty well-adapted to their new environment). But I agree that the implication is unclear. z/OS is very obviously not a mainframe system in the way we commonly understand it, insomuch as it cannot run on mainframes as we commonly understand them, but instead runs on very modern boxes that are marketed as mainframes because their software stack is a mainframe software stack.
Also, that thing you said was $5 is actually $9.95.
The chip is available from a variety of sources for a variety of prices; I've heard $5 quoted, but also $3. Adafruit sells it with a dev board attached for ten bucks.