Monday, December 26, 2011

Unpatched exploits in the human biocomputer: Defcon 19 edition

Sunday, December 4, 2011

A more scientific study of 'window areas'

I was reading about the book Space-Time Transients and Unusual Events, and it occurred to me that a far more scientific study of window areas, or hot points for high strangeness, is now possible. We have a fairly large sample set in the form of back-issues of The Fortean Times. Were one to take machine-readable copies of all the back-issues, isolate the place names, and plot a point on a map for every mention, we would have an idea of the distribution of claims. This self-adjusts for intensity, because the stranger the phenomenon, the more often it is likely to be mentioned.

Such a map could then be adjusted by population.

If high strangeness is contingent solely on population (in other words, if every person in every place has an equal likelihood of having an experience of a fortean nature) then our adjusted map should be a single fairly solid colour with very little variation, with the exception of places like Point Pleasant, Roswell, Dulce, the Groom Lake Facility, and so on, which are more notorious because of their particular relationship to the mass media.

If high strangeness has an inverse relationship with population (perhaps because loners see things, or because people steer themselves away from places with these phenomena, or for some other reason), we should see the original distribution of points made far more clear.

If high strangeness has a relationship with fault lines, as the authors of Space-Time Transients suggest, then we should see that distribution quite clearly once population is factored out. Likewise with leys, or aboriginal burial sites, or magnetic faults, or changes in gravity, or closeness to radio transmitters, or any of the other suggested explanations whose documented points can be traced on a map.

If I had machine-readable back-issues of the Fortean Times, I could do this in my copious free time. Perhaps I can rely upon the kindness of strangers and/or Fortean Times employees.
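
A minimal sketch of the extraction step, assuming the back-issues exist as plain-text files in a hypothetical issues/ directory and that a hypothetical populations.csv supplies population figures; spaCy's off-the-shelf named-entity recognizer does the geographic heavy lifting:

import glob
from collections import Counter

import spacy  # pip install spacy; python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
mentions = Counter()

for path in glob.glob("issues/*.txt"):  # hypothetical plain-text back-issues
    with open(path, encoding="utf-8") as f:
        doc = nlp(f.read())
    # GPE covers countries, cities, and states; LOC catches the rest.
    mentions.update(ent.text for ent in doc.ents
                    if ent.label_ in ("GPE", "LOC"))

# populations.csv (hypothetical): one 'place,population' pair per line.
populations = {}
with open("populations.csv", encoding="utf-8") as f:
    for line in f:
        place, _, pop = line.strip().rpartition(",")
        populations[place] = int(pop)

# Mentions per capita: a flat map supports the population-only hypothesis;
# residual structure supports one of the others.
per_capita = {place: count / populations[place]
              for place, count in mentions.items() if place in populations}

Geocoding the surviving names and shading by the per-capita counts would then give the adjusted map described above.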

Monday, October 24, 2011

OWS and sustainable microcultures

The 'Occupy' movement has exploded recently, and the above interview with R. U. Sirius and Richard Metzger has reminded me about some of the reasons why. OWS started out with the support of Anon, a fairly potent meme factory, and as a result, Occupy has spread far and wide, beginning as a kind of half-protest-half-TAZ and eventually becoming a kind of franchise-state Christiania.

This is not like a Hooverville. The people participating in OWS are middle class, not lower class, and can certainly afford to live in houses rather than tents. But, as a result of good organization and attempts to gain and maintain good will from nearby organizations, various Occupy movements have become miniature communities with their own periodicals, health care, economy, government, and community activities. For the number of protests and the variety of locales (over forty countries are being occupied -- this is not limited to the United States by any means), there has been minimal state-sanctioned violence (which is not to say that there's none, or that the violence that has occurred is any less despicable).

Previous discussion of the movement has been predicated upon the assumption that it would end after some change had been made in local, state, national, or global governance, but if this indeed goes on for years (as Christiania did on a much smaller scale), it may well be that Occupy communes will cohabit with or perhaps even outlive those states that encompass them. If anything, it is a proof of concept -- but I will be interested in seeing whether these protests survive the winter, and perhaps the following winter.

Update: Great minds think alike

Monday, September 26, 2011

Exploits in the Human Biocomputer: Buffet Edition

Exploits in the Human Biocomputer (NLP Edition)


I suspect that NLPers have their own name for the Doctor Fox effect. That said, I don't see the point, in terms of propaganda, of saying something so precisely meaningless, except when preaching to the choir. However, there is something to be said for the confidence of knowing that, if you are ever caught with nothing to say, you can say nothing quite well and convince everyone you said what they wanted to hear. Perhaps I will do this the next time I give a presentation.

Sunday, September 4, 2011

Redefining scifi, again

Science fiction has been defined and redefined fairly frequently, both by fans and by authors. Since I have some fairly strong feelings on the subject, I figured I might as well cram them into your ears before they get too clogged.

I'm defining here what I consider to be the core of what makes something science fiction, but it differs enough from other people's idea of science fiction that we might as well call it something else. If you have a different conception of science fiction, perhaps you can agree to call this 'Fiction A is Fiction A', after its primary attribute.

First, I'd like to go over previous categorization systems and why they fail.

It should be clear to us, first of all, that setting something in outer space or having it involve computers is neither necessary nor sufficient to make it sci-fi. There was a time when the latter seemed less ridiculous. Let us settle for the idea that science fiction is not predicated upon the existence within the story of unfamiliar technology -- otherwise, a story full of technology that has become familiar will cease to be science fiction (as The Terminal Man may soon be, since precisely the technology discussed in that novel is now being used by epileptics). Science fiction should not be vulnerable to the old chestnut about AI -- if everything it describes exists, it's no longer sci-fi.

An old standard, though, is the idea that sci-fi stays with known science as much as possible. I do not consider this a defining element, because it invalidates nearly all science fiction past and present. 1984 is based in large part around a naive interpretation of the Sapir-Whorf hypothesis; if that turns out to be provably false, does 1984 cease to be science fiction? Dune (and many other science fiction novels from the 1950s-1970s) rested in large part on the assumption that precognition and remote viewing are possible. Are they no longer science fiction? Even if we limit it to the body of knowledge that was known clearly at the time of writing, we have some problems. Anathem uses the Everett-Wheeler many-worlds interpretation, but (like every other use in science fiction) ignores the fact that in order for MWI to hold, no continuity is allowed to interact with any other; this is built into the formalism for MWI. Is Anathem not science fiction, since it selectively ignored science for the sake of a story?

So, it is neither necessary nor sufficient to keep with known science, and it is neither necessary nor sufficient to operate in the domain of not-yet-distributed futures. Placement in time itself is also no good; 2001: A Space Odyssey remained science fiction, as did 1984. So, how do we delineate what falls into the domain of science fiction?

My answer is consistency. Science fiction need not be consistent with known science, but it should be self-consistent. Its capacity for self-consistency, like that of mathematics, will likely be a boon to it in the future. It could also be claimed that by being consistent, science fiction diverges even more from reality, which is famously inconsistent (and which, when it behaves consistently, does so in baroque and mysterious ways).

So, what cornerstones of science fiction do we in this way eliminate? Star Wars, Star Trek, and Doctor Who do not retain consistency, but they all fail at other tests of science fiction purity. They fall into the domain of fantasy, which is perhaps where they belong.

Perhaps more interestingly, what falls into science fiction under this definition that would not otherwise? The Age of Unreason series does, despite being set in the eighteenth century and focusing on alchemy, because the rules set down for alchemy are not contradicted later on. The Laundry Files series also falls into science fiction, though due to some flukes in early books, the Dresden Files series does not. Hackers, while ludicrously and hilariously at odds with reality, does not break its own rules, and thus falls under the aegis of sci-fi.

Thursday, September 1, 2011

Rule 34 and sidestepping superorganisms

Before I start this rant, I should mention that I absolutely love Charlie Stross's books and his blog, and that Rule 34 was no exception. I say this because the rest of this post will be fairly critical of the book, and may be somewhat critical of the man himself in passing. I should also note that everything I know of Stross I have learned from his blog and his various talks, which are posted there; as a result, I probably have a skewed view of what he does and doesn't know or think.

So, if you haven't read Rule 34 yet (and you should; I had to order it from Amazon, but if you don't live in the sticks your local bookstore probably has it), the basic idea behind it is this: in the future, Scotland is its own country, and the Edinburgh police force has a special operations squad studying internet memes and keeping an eye out for dangerous ones (ranging from things like planking, which are dangerous due to stupidity rather than malice, to things like copycat suicides, which are dangerous due to the autotoxic nature of the meme itself). Great setup. Then, there are a series of strange murders framed as bizarre suicides. If you keep up with this blog, you know where this is going -- but you'd be wrong.

While I won't spoil the twist ending (which is interesting in and of itself, for all kinds of reasons even tangential to the plot), the perp is not in fact a meme, nor is it some superorganism. Memetic perps are hard to write, but there are several minor ones in this book in particular (and memetic perps of the Young Werther mould have been a dime a dozen since Goethe's time, though they probably have about the same audience familiarity as laughing plagues or the Boston Molasses Disaster, and are considered odder than Strange Rains); as for superorganism characters, they are still rare, but they have been handled convincingly as far back as Count Zero.

So, why did Stross sidestep this idea?

He's aware of it. I mentioned it in a comment on his blog, which he responded to. Perhaps he doesn't feel like he can handle it as well? Perhaps he thinks it won't make such a ripping yarn?

Unfortunately, despite his excellent record in terms of legitimately new ideas, Rule 34 is innovative only by 1982 standards. This is not a big deal -- Neuromancer has certainly kept its flavour. However, Stross's books have been consistently stuck twenty minutes into the future of science fiction authorship, leading the way for all sorts of new twists on old genres that haven't even come up from the underground yet. I might even be tempted to blame The Atrocity Archives for the popularity of Magic-A-is-Magic-A urban fantasy thrillers, if not for its distinct lack of vampires and its distinct Lovecraft-nerd flavour.

Now, if you haven't read Rule 34, I still recommend reading it. Read Halting State first. Then, read everything else he's ever written. He's impressive, and his adherence to hardness even in things like the Laundry series borders on the obsessive; in Iron Sunrise he talks the reader through Fermi estimates of the physics on a fairly regular basis. I just feel like he missed an opportunity to really wow his readers with the resolution of Rule 34, and I hope he will approach the subject of superorganisms with agency in some later book.

Monday, August 15, 2011

An example of glib pseudo-academic pretension

This article is incorrect on many levels, but it *does* bring up some good (though old) points. The way that media influences the types of memes that are most likely to be propagated has been discussed endlessly, and the kind of oversimplified "twitter is only good for talking about your breakfast" statements by journalists was a big part of the inspiration for the SOMA project (http://principiadiscordia.com/memebombs).

A breakdown of fallacies in this article:
1) Print literature is the only source of important ideas: This fallacy was discussed by McLuhan at length. While McLuhan was more or less in favor of print media, he recognized that it encouraged particular types of thinking and discouraged others, in addition to giving political power to particular types of people and taking it from others. Print media encourages linear, long-form ideas that can easily be put into words, while visual media encourages nonlinear ideas that are difficult to put into words and non-print textual media encourages ideas that can be formulated in shorter sequences of words. Long-form and easily-written ideas are not the sole source of useful knowledge.

2) We live in an age of information overload: This old chestnut has been popping up in the speech of old-guard folks since Socrates (and probably even earlier). How legitimate it is depends upon how you define information. By the rigorous mathematical formulation, information is actually decreasing, because knowledge (which is to say, mental models) is increasing and becoming more accurate -- the Jumping Jesus phenomenon makes things more predictable (see the worked figures after this list). By a lazy layman's definition, the steady improvement in communications technology since the invention of spoken language means that someone of any given generation can expect to learn more things, more quickly, and across a greater variety of subject matter than someone of a previous generation. But, just like the other old chestnut (that the world is going to hell in a handbasket), this one is always said and the implied end result never actually happens.

3) People are becoming more shallow: People are always shallow. The rule of news is that it progresses towards tabloid material, and Time Magazine is no exception. This has nothing to do with the Internet, aside from the fact that print magazines lost their monopoly and thus became less profitable, leading to the need to appeal to a larger audience in order to stay afloat. While it's true that much of the communication online is shallow, Sturgeon's law applies here: 90% of any print publication is also noise; for every nine people talking about their breakfast on twitter there's one making groundbreaking statements, which is about as good as any other medium has ever been (including, arguably, peer-reviewed academic papers).
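
To put numbers on the rigorous sense of 'information' promised above (a standard Shannon calculation; the probabilities are made up for illustration): the information carried by an event is -log2(p), where p is the probability your model assigns to it. An event your model rates at 1/2 carries a full bit; once a better model rates the same event at 15/16, it carries -log2(15/16), or about 0.09 bits. As mental models improve, events become more predictable, and each message carries less information in the technical sense.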

I post it here because I consider it to be an excellent example of this kind of shallow anti-anti-intellectualism that does nothing but serve the ego of the author (who comes off as a bastion of hope for intellectual purity against a world conspiring to eat the brains of otherwise promising intellectuals and replace them with copies of The Daily Mail, or something). These kinds of articles are no more new than the kinds of things they criticize, and I consider it important to realize that they don't have any more content than that which they deride either. A habitual contempt does not reflect a finer sensibility; it merely reflects a habit of contempt.

Monday, August 8, 2011

Putting the rock-occult connection into context

I have mentioned before in passing the connection documented between popular music and occult practice. I have come to the tentative conclusion that this is a subset of the general connection between occult movements and economic elites, and how this relates to leisure-class societies.

Most of us spend a large portion of our time working. It is anomalous to have large quantities of leisure time prior to old age, but there has been a leisure class since at least the establishment of Eridug more than six thousand years ago. The leisure class is defined not by not working, but by not needing to work (and thus having its occupation driven by interest rather than economic factors). My thesis is that there are three primary categories of popular leisure-class occupation, one of which is dominated by mysticism.

Those classes of occupation available only or mostly to those who are not viscerally and mortally concerned with the accumulation of money are: physical philosophy, abstract philosophy, and time-wasting. Physical philosophy contains such things as tinkering, home improvement, small-scale engineering, painting, sculpting, scientific experimentation, electronics, model trains, and computer programming -- things that are ostensibly potentially profitable but whose potential for profit is a gamble. Abstract philosophy has potentially higher stakes and a lower success rate, and contains those things we think of as philosophy today, along with mysticism, the occult, and more rigorous practices like mathematics. Time-wasting contains things like casual golfing -- things that are done to fill leisure time but not done in such a way that money could be gained from them.

A rock star is not initiated into the occult by some record-company-sponsored ritual. A rock star who makes it big no longer has to work day and night to make it big; there is more leisure time. (This is not to imply that rock stars are slackers: a self-made band, as opposed to a group manufactured by a record company, must generate enormous quantities of flukes before managing to get signed -- Radiohead sent out demos for years under the name On A Friday before they managed to push out Pablo Honey -- and an established artist can have those flukes published rather than sending demos to yet another company.) If you no longer have money woes and can get away with putting in a third of the work you have for years, you can either maintain your current effort (and potentially burn out) or take up those things that you could not before. As a result, The Beatles made Crowley and Hare Krishna references, and Bowie writes songs about the Sephiroth. It is easier to get into the occult in a group already saturated with occultists and ex-occultists, and the popular music scene is such a group. But other notable successful musical artists have taken up building instruments, or painting, or piloting commercial aircraft.

Friday, August 5, 2011

Exploits in the Human Biocomputer (digest)

So, I haven't been around too much. But, I've run into several interesting things for my Unpatched Exploits in the Human Biocomputer series. Time for a digest edition.

On the subject of imposition of order (a subject very close to my heart) we have some scientism:

and some content-free narrative literature:

On the subject of subliminal messages and the use of priming, The Language Log has an excellent article that references several studies on the subject. There is also this post there, relating to the results of lacking audience-awareness, though that post relates more closely to my obsession with Project Xanadu (and, by extension, the use of appropriate rather than standard user interfaces) than to cogsci.

In fact, there are several recent Language Log posts that belong here: the abuse of empathy reflexes in persuasion and the joys and errors of computational linguistics have both been mentioned recently. I have expanded upon the paragraph length analyses in the above post.

The so-called 'nym wars' should yield the material for a possible pseudonym-vs-anonym post in the future.

Tuesday, July 26, 2011

The Chinese-Room Fallacy

I haven't been posting much lately because of work, but I came upon this paragraph and had to mention it:

This raises the question of whether Watson is really answering questions at all or is just noticing statistical correlations in vast amounts of data. But the mere act of building the machine has been a powerful exploration of just what we mean when we talk about knowing.
source

The above is a very good example of the typical problems one runs into when one pretends that an ill-defined term can be definitively compared to a well-defined term (or another ill-defined term, for that matter), and is representative of the problems Wittgenstein and Russell saw in philosophy.
Here is a breakdown of the problems with the paragraph:
  • Watson is clearly answering questions. He has been given questions and has provided their answers, so there is no question there. There is a question about whether or not he is thinking -- but thinking is by no means necessary for answering questions.

  • By the understanding of neurobiology of the past few decades, thinking and knowing appear to be statistical. While human brains clearly don't have the logical-reasoning capacity provided by the reasoning engine within Watson, the implied distinction between 'knowing' and 'just noticing statistical correlations' is quite probably imaginary.

  • Noticing statistical correlations is enormously difficult, especially on the scale upon which it occurs in this case. Saying that a machine is 'just noticing statistical correlations' is like saying that a turtle is 'just moving an eighteen wheeler around with his mind'.

As the title of the post implies, I consider this kind of confusion the source of other 'controversies' like the Chinese Room problem.

Saturday, July 2, 2011

Project Xanadu document dump

This has only the vaguest relationship to the content of this blog, but I'm posting it here because it's the place I'm most likely to look later.

A little context: I am a huge fan of Project Xanadu, and a ridiculous percentage of my personal projects have to do with variants on the Xanadu designs (mostly xu88 and zigzag). Going through my scribd feed I discovered that I had at some point gathered and repackaged a bunch of old Project Xanadu documents and stuck them up there. I have a tendency to download every pdf I see, and, on occasion, to upload every pdf I have hanging around that looks public-domain to scribd so they don't prevent me from downloading pdfs from them. So, if I don't link it up again, I will probably lose track of it for another few years.

xanadu

Note on copyright: IP belonging to Project Xanadu, Xanadu Operating Company, etc., is actually even more complicated than normal IP concerns. Early Xanadu material was made by an ever-refreshing group operating under 'Plan R doctrine': the idea that if one member lost contact with the rest of the group for a long period and no release seemed imminent, that one member would be expected to release the material he or she had in his or her possession to the public regardless of who wrote it. This conflicts with the fact that everyone involved kept copyright on their work, and various members of the group have different ideas about what qualifies as long enough for Plan R to be enacted, how little contact is necessary, and what constitutes a reasonable attempt to contact the rest of the group. Furthermore, XOC both operated on its own and was at one time a subsidiary of Autodesk, which may or may not affect copyright claims for material that was in XOC's possession during the Autodesk period or made during the Autodesk period by XOC. So, there is a fairly long history of one member (usually Ted Nelson) blessing an independent implementation or a particular release and then another member coming out of the woodwork unexpectedly with a suit or a complaint (this happened with gzz -- which is now known as FenFire and does not support zzstructures -- and may or may not be involved in the problems with the Udanax Gold release, which is source code in a Smalltalk-like language that compiles to C++ but for which no working compiler can be released).

Sunday, June 26, 2011

Ramblings: Novelty, Identity, Teleology


The Eliza Effect is the tendency to anthropomorphize artifacts that present humanlike qualities shallowly. I have studied this effect in practice quite a bit, in an uncontrolled way, and discovered that in the context of markov chain bots (which learn by developing markov models, and as such adopt the manner of speech of those they speak to), those people whose manner of speech is closest to that of the bot are the most likely to feel empathy with the bot -- a situation that should not be surprising, given the use of word patterns and vocabulary in forming social groupings (and, of course, the so-called Babel Effect).
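
As a concrete toy version of the kind of bot in question (a minimal sketch; real bots of this sort usually track longer contexts than single words):

import random
from collections import defaultdict

class MarkovBot:
    """A word-bigram chatterbot: it knows only what it has been told."""
    def __init__(self):
        self.model = defaultdict(list)  # word -> words observed after it

    def learn(self, utterance):
        words = utterance.split()
        for a, b in zip(words, words[1:]):
            self.model[a].append(b)

    def reply(self, seed, max_words=12):
        word, out = seed, [seed]
        for _ in range(max_words):
            if word not in self.model:
                break
            # Sampling from the raw lists preserves observed frequencies.
            word = random.choice(self.model[word])
            out.append(word)
        return " ".join(out)

bot = MarkovBot()
bot.learn("the bot speaks as its interlocutors speak")
print(bot.reply("the"))

Because the model is built entirely from its interlocutors' utterances, its replies necessarily echo their vocabulary and phrasing -- which is exactly the condition under which the empathy effect described above shows up.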

Now, in humans, the correspondence between interest and novelty is not entirely straightforward. Something that is interesting has not too little and not too much novelty: too little novelty is too little novelty is too little novelty is too little novelty, and too much novelty is nwiganbowyaionciwyea 893jf, while just enough novelty is the joy of whole milk until sunrise. This is affected somewhat by social concerns, since particular memeplexes encourage neophilia (meaning that a greater amount of novelty falls within the Goldilocks zone of 'interesting') while others encourage neophobia (meaning that the zone of 'interesting' has less novelty). Where 'interesting' begins for a neophile is often where 'interesting' ends for a neophobe, but novelty depends quite strongly on mental models, so the gap widens quickly.

As a neophile, I tend to talk of neophilia, but neophilia is not all rainbows and unicorns. Neophilia is potentially dangerous because at a fundamental level it encourages dabbling in the unknown. Neophobes, whatever else they are, are at least as safe as they were last year. Neophiles become more safe only accidentally, because they spend their free time playing with fire. It takes someone who plays a lot with fire to invent fireproof curtains (or rocket science -- and Marvel Parsons probably makes it onto the list of archetypal neophiles for the Promethean element), and once fireproof curtains are invented suddenly the neophobes are safer too. But, we just need to look at what happened to good old Jack to see the danger in neophilia: eaten by living flame.

Now, if you look at a community (or, really, a superorganism) like Anonymous, you see a lot of deindividuation going on. Anon has no name. Anon's internal communications are clogged with cats, tits, gtfos, and brick-shitting. A perfect petri dish for deindividuation, in other words: lack of identity within the group, and highly stimulating sensory input at a high frequency. One other thing that is certainly true of Anon is the high novelty content of communications. There are some old saws here: lolcats have little information per se, though they can be made to carry much more with clever juxtapositions, and most image macros are content-free. However, the image macro has the potential (sometimes realized) to be a highly potent capsule of information: it is easy to transmit (cock and repost), highly stimulating at best, has the potential for the same message to be introduced in several ways (both text and images, which can interact again with existing idioms both verbal and visual), and is part of a conditioning loop that encourages spread (all the forums I've been on since 2004 have had at least one thread for posting interesting images, most of which are clogged with macros, and macros are certainly very popular on tumblr and twitter).


Once again, though, we must separate intended meaning from interpreted meaning. Plenty of accepted meanings were never intended, which is fine because the protocol of natural languages is loose and ambiguous. Many of the idioms attributed to Anon clearly began accidentally, as a look at memebase will demonstrate. Even the silliest of these idioms have the potential to be repurposed to say something decidedly important, or at least 'interesting'. Many of them have. These idioms have made their way into mainstream news through the releases of LulzSec, for instance, which says fairly serious things in fairly silly ways (Eric S. Raymond calls this 'Ha Ha Only Serious'), and makes extensive use of the idioms generally attributed to Anon.

A machine could probably fairly trivially pass as human on various imageboards. It could repost images and write its own messages. It would become nearly invisible because of the sheer frequency with which /b/ moves (and because of the sheer glaciality with which some of the other imageboards move). It would mash up existing memes, and some of the mashups might gain a following. It would not be kicked out, because it is not a spam bot. It would have an extensive archive of images and an extensive model of conversation. Once it begins to be accepted as clearly human (it has a hat), its word model could slowly be infiltrated, with new text introduced. It might mix and match lolcat speech with Karl Marx, or with E. E. Cummings, or with Dylan Thomas, or with Ken Kesey. It might be a force to popularize the phrase 'on the gripping hand'. It would be an interesting experiment.

It is not worthwhile for one person to do it. That would not be even-handed. If you want to do it, please do so, and release your results when you finish. Ideally, more than six or seven people would do this at once, introducing entirely different texts of their choosing. Bots learn fast, but adapt slowly. It may take a few months for Das Kapital to even subtly infiltrate Rage Guy. But, I have a sneaking suspicion that such an introduction will have a much more far-reaching effect on the superorganism than any conscious attempt to use humans to influence it, since it will get past mental defenses.

Wednesday, June 15, 2011

The problem of simulacra in ritual traditions

The Anti-occultist has a post discussing the problem of magical theatre. While his points are valid, they are not an argument against ritual theatre for those who practice it: the symbolism of ritual is, like any system of symbols, arbitrary, and the very hackneyed familiarity of the systems played up in popular culture may in some cases be a strength. A ritual first generates an atmosphere, then super-arouses those partaking in it and uses unusual (memorable, not directly related to mundane experience) language (written, spoken, and in the form of narrative and symbolism) to push a message into the now highly suggestible ritual users. A ritual using a symbolic language taken from pop culture can be used on people exposed to those media it appears in with less initial conditioning, though symbolic languages quickly gain baggage, and old, popular symbolic languages (such as the hermetic/alchemical tradition) are difficult to use without unintended side effects due to masses of conflicting associations.

The above-mentioned post does, however, touch upon another of my special fields of interest: the question of how simulacra tend to differ from those things they initially derive from. Memetics tells us that simulacra will probably have their most spectacular details exaggerated and less memorable details lost. They will be simplified, but any aspect that encourages remembering or retransmitting the information will generally be preserved. In other words, the Hollywood voudoun will be simpler (it will be spelled voodoo and will lack the distinctions between voodoo and hoodoo; loa will rarely be ridden, and when they are they will be ridden in arbitrary order without respect for gatekeepers), it will be more viscerally memorable (iconic images of voodoo dolls, snakes, and bags of graveyard dust will replace less-iconic veves, and all rituals will have suspenseful music and frightening practitioners), and it will emphasize those things that make it desirable to reproduce (what special effects are used will generally remain cheap, the tradition will be disconnected from any geographical location so that it can be used in a variety of settings, and the details necessary to research will be minimized by encouraging the idea that there is little legitimate content to be known). Hollywood voodoo, in other words, becomes something that any actor or filmmaker can create and have it remain recognizable -- and therefore, becomes something that many amateurs can also create. The mere dilution of the tradition through a game of telephone is enough to egalitarize it; voodoo queens and priestesses are no longer required for the Hollywood version. Of course, if we go along with the psychological interpretation of the mechanism of action of magical workings (as I am prone to do), Hollywood voodoo is potentially precisely as effective as authentic Haitian voudoun. There are situations wherein an authentic tradition is less useful than its diluted and bastardized pop-culture clones, and if you want to perform a ritual through the medium of film using a large number of people, Hollywood voodoo is a good choice. Everyone understands it, and though it is less subtle and flexible than a more authentic tradition often is (due to much use), imparting nuanced ideas through complex ritual on large groups of people is hard even with extremely well-conditioned groups (just ask the Masons -- or the Catholics, who have been doing it for nearly two millennia).

I have an old joke I like to tell. I sometimes attribute it to William Gibson, but I don't think he ever actually said it outright. The question is: in a forest with some real trees and some artful replicas, how does one tell which trees are real? The answer is: the fake trees are the ones that look too much like trees. Nature isn't bound to obey our platonic images of things, so only man-made things look precisely like our mental models of them. Since systems of ritual exist for the sole purpose of manipulating mental models*, there is no reason a so-called 'authentic' tradition is necessarily better than the historically inauthentic Hollywood version.

* Note that I am writing under the assumption that the psychological interpretation of magic is at least somewhat correct. If the meat-and-potatoes of magical workings is something external rather than psychological -- as Allen Greenfield suggests in Secret Cipher of the UFOnauts and The Secret Rituals of the Men in Black -- then an authentic tradition matters very much.

Sunday, June 12, 2011

Cold reading


Source

While I've heard this before, it makes me wonder what useful things such training could be applied to aside from telling people things they already know. Outside of the role of fortune teller (which itself is helpful), training the intuition to more accurately make these leaps has great potential, and I imagine that similar training is being used in other fields.

Saturday, June 4, 2011

A phonetic writing system based on cellular automata

In thinking about how to bridge the world of ideograms with the world of alphabets, I began thinking about how cellular automata perform fairly complex operations over time. The starting state of a cellular automaton with a known ruleset contains all the information necessary to produce all further states. So, I figured that a description of a state in some simple known system of cellular automata may, given a well-defined stopping state, both stand in for the word and guide the reader in its pronunciation, without merely being a phonetic representation devoid of non-phonetic content.

As a proof of concept, I considered the ruleset of Otomata, remapped onto a new grid and with a slight change in rules. The grid is five by five, and there are five possible cell states: one for each of the primary orientations (up, down, left, right) and one for blank. At a constant rate, cell states indicating directions (denoted by arrows) move across the grid in the direction specified. If at any step a cell is required to be in two states (which is to say that two arrows collide), the cell in which the collision takes place is evaluated as a sound and pronounced, and the state of that cell is set to blank.

This is a fairly irritating thing to try to phrase in English, so I will give an example. In the grids below, rows are labelled with the consonants t, k, s, l, and m, and columns with the vowels a, e, i, o, and u; a colliding cell is pronounced as its consonant-vowel pair, and blank cells are drawn as dots.

Here is a blank grid:

  a e i o u
t . . . . .
k . . . . .
s . . . . .
l . . . . .
m . . . . .

Here is a glyph:

  a e i o u
t . < . . .
k ^ . . . .
s . . v . .
l . . . . .
m . . . . <

On the first step, the arrow located at te and the arrow located at ka collide at ta, the arrow located at mu moves to mo, and the arrow located at si moves to li. So, the first syllable is 'ta'.


  a e i o u
t . . . . .
k . . . . .
s . . . . .
l . . v . .
m . . . < .


On the second step, the arrows located at li and mo collide at mi. So, the second syllable is 'mi'. All remaining cells are blank, so this glyph is pronounced 'tami'.

As for some way to quickly write these glyphs, I figured that the easiest thing to do was to put lines on both sides of the arrowheads and have meeting or intersecting lines for each blank position. So, in ascii, the above glyph might be represented as:

+<+-+
^+|+|
|+v+|
|+|+|
+-+-<
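
For concreteness, here is a minimal sketch of these evaluation rules in Python (assuming, as the example implies, that all arrows move simultaneously, that any cell two arrows try to enter at once is pronounced and blanked, and that arrows moving off the edge of the grid simply vanish -- the original Otomata bounces them instead):

ROWS, COLS = "tkslm", "aeiou"
MOVES = {'^': (-1, 0), 'v': (1, 0), '<': (0, -1), '>': (0, 1)}

def pronounce(glyph):
    """glyph maps (row, col) -> arrow character; returns the word."""
    syllables = []
    while glyph:
        arrivals = {}  # destination cell -> arrows arriving there
        for (r, c), arrow in glyph.items():
            dr, dc = MOVES[arrow]
            nr, nc = r + dr, c + dc
            if 0 <= nr < 5 and 0 <= nc < 5:  # arrows leaving the grid vanish
                arrivals.setdefault((nr, nc), []).append(arrow)
        glyph = {}
        for (r, c), arrows in arrivals.items():
            if len(arrows) > 1:  # collision: pronounce the cell, leave it blank
                syllables.append(ROWS[r] + COLS[c])
            else:
                glyph[(r, c)] = arrows[0]
    return "".join(syllables)

# The glyph above: < at te, ^ at ka, v at si, < at mu.
print(pronounce({(0, 1): '<', (1, 0): '^', (2, 2): 'v', (4, 4): '<'}))  # tami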

Wednesday, June 1, 2011

On the ego



There are plenty of mystical traditions floating around based on ego dissolution as self-improvement, ranging from the nearly ubiquitous (Buddhism) to the obscure (several gnostic sects). The above posts explain better than I can why ego dissolution is worthwhile. The TL;DR version is that while the ego (or rather, the sense of self -- which is different from the ego as used in Freudian and Jungian traditions) serves a purpose socially, when it becomes inflexible it holds you back; who you think you are is not who you really are, and when you assume that those properties you attribute to yourself are truly essential parts of you, you cement yourself into that role. Grant Morrison talks about this, too:


The ego is an extremely difficult opponent, because the ego is what we identify as our own selves. The thoughts that you consider your own are those of the ego, which is reasonable because of the basis of verbal thought in language and the social basis of language. However, the ego is also an insidious structure that very quickly fossilizes; when your ego is the primary mover and shaker in your head, you can't tell which thoughts are reasonable and legitimate and which thoughts are merely preserving the ego's dominant control over decision-making. The ego is the collection of all your oldest habits -- all the habits that are hardest to break. Not only does it have ages of operant conditioning behind it, but it has several defense mechanisms that keep it away from threatening ideas. These defense mechanisms are sometimes classified as cognitive biases, and sometimes classified as social filters. Having a fossilized ego is like traveling only by train: while your social life is made easier by the sheer predictability of your movements, that same structure prevents you from having any experiences that lie outside the dining car.

The mechanism for breaking down ego, which was mentioned in the video above but may not be clear, is to forcibly go against those behaviors you identify with yourself and jump into behaviors that you consider completely out of your domain of experience. One can do this constantly, or one can do this periodically by completely reinventing oneself on a regular basis. During some stages in human development this behavior is both normal and socially acceptable. During others, it is considered potentially pathological. However, it meshes nicely in spirit with other posts I've written here about agnosis.

I do not see myself undergoing such an experiment in the near future. I suppose that means I should.

Monday, May 16, 2011

Atari Punk Manifesto

.....AAA TTTTTT AAA   RRRR   IIIIII   PPP   UU   UU  NNN   NN  KK  KK
...AA AA TT AA AA RR RR II PP PP UU UU NNNN NN KK KK
..AAAAA TT AAAAA RRRR II PPPP UU UU NN NN NN KKKK
.AA AA TT AA AA RR RR II PP UU UU NN NNNN KK KK
AA AA TT AA AA RR RR IIIIII PP UUUUU NN NNN KK KK

ATARI PUNK MANIFESTO
1 Retrocomputing is useless, but that just makes it better
10 Obfuscation is a legitimate aesthetic choice
11 Strange haircuts, cardboard guitars, and computer samples
100 Cyberpunk was more fun when it was divorced from reality
101 New Wave is punk with synthesizers, but we love you anyway
110 All that we are is all that we need to be
111 All that we know is human machinery
1000 64k isn't enough for anyone, but it's more than enough for us
1001 There was a time when men were men, women were men, and children wrote their own device drivers
1010 We aren't nostalgics; we just play them on TV
1011 Our rear view mirrors are neon pink, unironically
1100 It's too much machine for you
1101 Paper is accordion-folded, and 3d is in wireframes
1110 Acoustic couplers make sweet music
1111 Mice? Where we're going, we don't need mice.

Saturday, May 7, 2011

Sublim experiment rundown

Back in the day (2006 or 2007), after several years of experimentation, I coauthored a document about the use of visual subliminal messages (specifically those produced by the xscreensaver package's xsublim program) for cognitive enhancement. It hasn't aged terribly well, and I'm rather embarrassed now by the writing style, but every so often someone contacts me asking whether or not I have continued experimentation. The answer is yes. I figure now is as good a time as any to give you the run-down on my later experiments.

As a first note, I am not experimenting with subliminal advertising. If you are looking for something about subliminal advertising, rocketboom has a good video on the subject, after which you will require no other materials.

At the time of writing the original document, I had a model of the mechanism involving chain reactions of primed ideas. This may still be relevant, but there are other (more down-to-earth) attributes of the process with more literature within the field of cognitive psychology to back them up. While subliminal messages do not give a strong enough priming to significantly influence behavior in the context of advertising (or rather, they don't have the property claimed of homeopathy: subliminals are not more powerful the less they are observed), subliminal messages have been shown to affect the sense of familiarity. In situations where unfamiliarity with terminology, wording, or notation is a major stumbling block, being subliminally primed with the terminology in question can act as a gentle introduction, making the terminology no longer seem arbitrarily difficult and frightening. This false sense of familiarity makes the subject matter seem easier to pick up.

Another idea (which is strongly influenced by the excellent book The Art of Memetics) is that mental blinders (and other psychological biases that prevent the absorption of unfamiliar or conflicting information) can be modeled as the defense mechanisms of dominant memeplexes. These memeplexes subvert, assimilate, or deny newcomers, since new ideas can compete with the old ones. Subliminal messages allow slow and subtle subversion by all memeplexes, regardless of whether or not they conflict with existing ones. As a result, the use of subliminals can decrease the likelihood of decisions being unduly biased by unseen, socially reinforced heuristics, so long as the words of documents with conflicting underlying assumptions are all primed.

So, above we have some new models for the mechanism of action. Furthermore, new attributes have been discovered.

The physiological effects of sublims are highly dependent upon the novelty of the content. A single static document of arbitrary length will quickly cease to be enough for sublims, eventually giving none of the symptoms at all. As the use of sublims increases, necessary novelty does not increase linearly but exponentially. I currently use more than twenty gigabytes of static plaintext as a small part of my sublim input, balanced out by semi-static input (fortune databases), significantly more dynamic input (mostly via the random page feature in mediawiki installations), and less structured 'noise' input (text generated from markov models of other documents, text generated by piping other inputs through rhyme generators and other filters, text generated using context-free grammars). Too much novelty (trying to sublim with a four gigabyte video interpreted as ascii text, say) is not physiologically pleasant.
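
As an example of the least structured category, here is a toy context-free 'noise' generator of the sort I pipe in (the grammar itself is made up for illustration; a real one would be much larger):

import random

GRAMMAR = {  # nonterminal -> list of possible expansions
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["a", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["signal"], ["archive"], ["machine"]],
    "V":  [["binds"], ["dissolves"], ["remembers"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:  # terminal: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

while True:  # endless novelty, suitable for piping into a sublim program
    print(" ".join(expand("S")))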

Sublims have different effective novelty ranges given different mental states. Stimulants appear to raise the required novelty level. Depressants appear to lower the maximum novelty level, but occasionally they cause the sublims to have absolutely no effect. Binaural entrainment at theta range frequencies appears to maximize the physiological effects for as long as the entrainment is occurring, but when the pattern stabilizes the physiological effects disappear.

Finally, there are a few technical updates.

Xsublim is no longer maintained by the xscreensaver project, and if you install a modern version of xscreensaver xsublim will not be installed. The last time I checked, the xsublim source was part of the source tarball but could not be trivially coaxed to compile. I have been using an old binary copied from an earlier release.

I have used the xosd package to write a clone of xsublim, called asublim. It does not operate precisely the same way. Where xsublim caches the full run of the program from which it takes its input before displaying anything, asublim caches each space-separated token smaller than 512 bytes (and cuts those larger into 512 byte pieces) and displays them in real time. As a result, asublim starts more quickly but is also more sensitive to load fluctuations. When I have used it, the asublim program itself is significantly slower than most of the programs feeding it, and so I have not had pipe underflows or noticeable delays. Asublim does not currently have support for the various command line options that xsublim supports, though support for most of them can be implemented. Asublim also has a few glitches: the self-erasing feature appears to operate differently from xsublim's implementation, and so on programs (such as firefox) that are slow to redraw their window bitmap there is a tendency for already erased tokens to obscure the contents of the canvas. I have not duplicated this problem on anything other than firefox.
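
Roughly, the input handling described above looks like this (a sketch of the behavior, not asublim's actual source; the real program hands each token to the xosd overlay instead of printing it):

import sys

CHUNK = 512  # bytes; larger tokens are cut into pieces of this size

def tokens(stream):
    for line in stream:
        for token in line.split():
            data = token.encode("utf-8")
            for i in range(0, len(data), CHUNK):
                # 'replace' guards against splitting a multi-byte character.
                yield data[i:i + CHUNK].decode("utf-8", "replace")

for tok in tokens(sys.stdin):  # displayed in real time, token by token
    print(tok)  # stand-in for the xosd display call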

If you have found this post by researching the terms found in the original Infornography document, please post your comments here rather than looking me up.

Wednesday, April 27, 2011

Time Binding, or why the past isn't what it used to be

Today's XKCD reminded me of a fairly interesting effect called time binding. The term was coined by Alfred Korzybski. This is how Wikipedia defines it:
Time binding: The human ability to pass information and knowledge between generations at an accelerating rate. Korzybski claimed this to be a unique capacity, separating us from other animals. Animals pass knowledge, but not at an exponential rate, that is to say, each generation of animals does things pretty much in the same way as the previous generation. For example, at one time most human societies were hunter-gatherers, but now more advanced means of food production (growing, raising, or buying) predominate. Excepting some insects (for example, ants), all other animals are still hunter-gatherer species, even though many have existed longer than the human species.
Source

Among the people to work with the concept of time binding are Marshall McLuhan and Robert Anton Wilson.

One of the fundamental results of time binding (and the source of the term) is that when you record information in this way, you lower the amount of distortion done to it by natural processes over a given span of time. By doing this, you make that particular piece of information more available to people in the future (or available to people further in the future). Because of the availability bias, information that is more familiar seems to some extent more current, in the sense of seeming more (currently) true. So, time binding, though it does not slow down time, slows down the perceived aging of the past, creating a mental 'now' that extends further into the past. This makes rear-view-mirror thinking not only easier to do but easier to get away with.

Among the many things the widespread adoption of the internet has done for us is made a significantly greater quantity of significantly more trivial information significantly more available to a significantly greater quantity of people significantly less familiar with the subject matter. The widespread adoption of the internet has also made it easier for people like the Archive Team to save information that would have otherwise been lost. So, in other words, someone living in a bubble on Mars in 2525 can read the tweet you sent yesterday about what you were having for breakfast. Because of time binding, that tweet will seem significantly less old to that Martian bubble-boy than anything written in 1476 will seem to you.

Nearly everyone born in the industrialized west in the twentieth century, aside from the very poor and those living in anomalously technically retrogressive communities, is not only intimately familiar with but involved in the franchise of some media created shortly before their own birth. There are many Star Wars fans who weren't born until after the last episode of the original trilogy was released, and probably some who weren't born until all of the current canon films were released. There are even more fans of the Lord of the Rings trilogy who were not alive when the books appeared in paperback, let alone when they were written. Some classic rock fans who lived through the latter end of the era (and were actively consuming the release of, say, Pink Floyd's The Final Cut) were not alive for the early releases (such as Pink Floyd's Atom Heart Mother). Does The Final Cut sound old? Does The Empire Strikes Back feel like an old film? Neuromancer was written six years before I was born, but it still feels fresh. A piece of media feeling old has never been a matter of its actual age; it has always been a matter of the gradual decay of information about it. With the increase in storage space, communications speed, and fidelity, we slowly extend our 'now' behind us in time, like a comet's tail.

Tuesday, April 26, 2011

More unpatched exploits in the human biocomputer

Fallacies underlying morality in hierarchical systems

One of the big arguments against pure voluntarism as a stable political system is that people tend to form hierarchies. Indeed, people do tend to form hierarchies. One of the reasons for this is Dunbar's number. This post will not cover that topic. I want to instead focus on what I consider the other major reason that human groups tend to fall into hierarchy.

There are two meta-moral systems (which is to say, categories of moral systems) involved with dominance and submission. One is that described by Nietzsche, who called it the 'slave morality'. The other I will call the 'slavemaster morality'.

In the slave morality, there is one major heuristic: the victim (or the person in the submissive position) is always morally in the right. He or she has been put upon unduly by the dominant party, and bears no responsibility for the problems. In a system of pure 'slave morality', the old 'just following orders' excuse always flies.

In the slavemaster morality, there is one major heuristic: the person who has taken charge is morally in the right, because something must be done. Those who have abdicated responsibility may be taken advantage of freely, because they have surrendered their choice. It is rarely put into these terms.

Mosbunall of those with the slave morality see mosbunall those with the slavemaster morality as evil greedy fascists. Mosbunall of those with the slavemaster morality see mosbunall of those with the slave morality as evil greedy parasites.

The funny thing is, slave morality and slavemaster morality are strangely entwined. Slave morality, with its abdication of responsibility in exchange for shielding from being morally in the wrong, requires those with the slavemaster morality to take charge of them. Slavemaster morality cannot survive in the absence of willing slaves. The symbiosis is fairly clear.

Not only are these two metamoral systems symbiotically linked, but they are based on the same pair of fallacies. They both take as absolutely true that:
  1. it is undesirable or unpleasant to be submissive, and
  2. it is desirable or pleasant to be dominant

These heuristics are rarely accurate. Dominance bears the burden of responsibility, not only for the self but also for others. Submission removes this burden. While there are positives to dominance (it helps one get one's way, for instance) and negatives to submission (abdicating control has many psychological impacts in addition to making it problematic to have one's needs reliably met), in general one state of affairs is not on the whole significantly better than the other. So, anyone who takes the above heuristics as absolutely true will subscribe to slave or slavemaster morality depending on whether they want to abdicate responsibility or gain power, respectively.

I suspect that these heuristics predate the dawn of man as such. Hierarchical behavior in packs of nonhuman primates sometimes follows the same pattern. In a system with communication limited in speed and complexity, there are extreme benefits to having a single leader, and the above-mentioned heuristics encourage the selection of a more nearly optimal leader over time, as well as discouraging mutiny by those not predisposed to a dominant position. But, I hope we have progressed far enough that we can attempt to overcome these biases with reason and endeavor to live in a society not essentially based on the structure of gangs of roving gorillas.

Tuesday, April 19, 2011

Ludibrium as weapon

Filed under the plans I approve of dept.:
In summary, several Chinese language, but overseas based, websites have been blogging on the creation of a ‘Jasmine Revolution’ in China. [...] The blogs and websites themselves are largely invisible to ordinary Chinese as the Great Firewall keeps them out, but they can be seen by the security agencies, who have been swift to react. The organizers, whoever and wherever they are, have repeatedly called on people to gather in a range of popular and public areas in the centre of major cities across China – shopping malls and university campuses – and go for a stroll every Sunday afternoon to call for minor political change. These public areas are, at that time of day, normally filled with young people and out-of-town domestic tourists, all now potential ‘protesters’. Now, because of the number of competing and overlapping security agencies, there is a lot of pressure on the local commanders to make some arrests and to show some success, but there are no genuine protesters, just some bemused local tourists and a lot of foreign journalists. [...] At the same time, the organisers have used a wide range of popular and politically ‘safe’ words to use as code words [...] These keywords get picked up by the censors, and all web and SMS traffic using them gets shut down or blocked – Jasmine itself is of course popular in Chinese culture and widely used in branding, but sites using ‘Jasmine’ in their copy, however innocuous, are blocked - with real-world social, political and economic consequences.
source


The use of a fake revolution to foment a real revolution reminds me of the Rosicrucians, who never existed. The original documents attributed to the Rosicrucians were later claimed, by their author, to be a hoax. The introduction of so desirable a secret society attracted pretenders, who claimed to be members and inducted new members who weren't in on the fact. These members inducted new members, and soon a set of completely different secret societies had formed, with the same name and claiming the same lineage. But, unlike the Rosicrucian ludibrium, this plan requires the paranoid Chinese secret police to play their part -- which is to say, it's only useful against a government that is extremely paranoid about the possibility of revolution.

While there are innocents getting hurt because of this, it's not comparable to a revolution or an invasion -- or even a real protest. Nobody's bringing in tanks.

That said, this still brings up the question of memetic warfare. This is not (as far as I can tell) a CIA operation, though the CIA does things like this fairly frequently. This appears to be a psyops project being performed by individuals, which is why it appeared on the globalguerillas blog. It remains to be seen whether or not this is effective, though as the comments thread indicated, the Chinese economy is suffering, which may be related to the blocking of keywords associated with otherwise profitable things. But, it's the first example I've seen of a memetic warfare operation performed by individuals upon an entire government and its auxiliary support mechanisms, without requiring a budget, and with some level of success.



Edit: China is clearly no stranger to the infictive in politics. NB: Since I do not live in China, I cannot make arguments about quality of life issues there, so I hold no confident judgments about what their methods mean from the perspective of a citizen. I am sure that many chinese citizens think that the united states is a horrible totalitarian government and that US citizens are much less 'free' than they are. I don't know who discovered water, but it wasn't the fish.

Sunday, April 3, 2011

Words that don't mean what you think they mean, part 1: outlaw

The word: Outlaw
What you think it means: criminal
What it really means: someone immune to prosecution

Wait, what?
In historical legal systems, an outlaw is a person declared as outside the protection of the law. In pre-modern societies, this takes the burden of active persecution of a criminal from the authorities. Instead, the criminal is withdrawn all legal protection, so that anyone is legally empowered to persecute or kill them. Outlawry was thus one of the harshest penalties in the legal system, since the outlaw had only himself to protect himself, but it also required no enforcement on the part of the justice system. In early Germanic law, the death penalty is conspicuously absent, and outlawing is the most extreme punishment, presumably amounting to a death sentence in practice.
Source: wikipedia
In other words, an outlaw is someone who has committed a crime so heinous that even the people who hunt down criminals for a living no longer want anything to do with him. They declare him an outlaw, and from then on they neither need to prosecute him for further crimes nor defend him from other criminals. Reserving this treatment for only the most dangerous criminals simultaneously all but guarantees that the outlaw will be killed (either by law-abiding vigilantes or by other criminals) and reinforces the idea within the culture that a lawless society is dangerous (because everyone who is an outlaw is a particularly dangerous criminal).

Wednesday, March 30, 2011

Two varieties of anarchist sentiment

There are many ways to categorize anarchist groups, and many systems of categorization are in conflict. This will not be covered in this blog post, because not only am I not an expert in the history of internal ideological schisms within anarchist philosophy, but I am more or less uninterested in the politics of anti-politics. Instead, I am categorizing anarchist sentiment more grossly, into two large chunks.

There is one type of anarchist who believes that anarchy is something to be achieved. Maybe it's something that was lost in the genesis of the first state. Maybe it's something that has never been tried, or never been attempted seriously. Maybe it's something that has been tried but has been crushed by statist forces external to itself. The bottom line is that this type of anarchist considers anarchy to be something that isn't, but should be.

There is another type, far less visible. This type considers anarchy to be omnipresent already, at the base of things. All states are structures built on top of anarchy, obscuring it. The goal of this type of anarchist is not to bring anarchy into being but to make people aware that they have always lived in anarchy, and that this isn't a bad thing. This type of anarchist sentiment is in a sense even more subversive, and even more dangerous to the state. The first type can be pitted against the state in a kind of imaginary war of ideas, and can be manipulated to give statism even more ideological power; the stereotypical bomb-throwing adolescent lifestyle anarchist implicitly accepts the existence of the state, and by working against it reinforces the sense of its power. A state, however, cannot effectively fight a war of ideas against those who believe the state itself to be imaginary, nor can it effectively be seen to fight against a system of which it is a small part.

For a long time, I thought the second type was extremely rare, rather than merely underrepresented. Today, I saw a quote on the liberationfrequency tumblr:
Anarchy is every time you share a stick of gum. Every time you help someone with their homework, or with their bags. Every time you hold a door for a stranger. Every wallet returned to the lost and found. Every borrowed cup of sugar. Every driveway you helped shovel. Anarchy is people helping people not for the glory but for each other. Anarchy is for the people by the people; not this ballot-box nonsense. Anarchy is you and me... on the purest of all levels.
This is very much the essence of the second type, which I term epistemic anarchy. You could consider epistemic anarchy a subset of voluntarism, but it differs in that it talks about what is rather than what should be.

Keep in mind that this is also a key difference between Gnostic and mainstream Christianity, between Zen Buddhism and several other flavours, and between many forms of mysticism and those traditions that present the same texts and myth systems to mundanes. The difference is between going to heaven if you are good and being good in order to remember that you are already in heaven. The former makes people fairly easy to manipulate, but the latter is more difficult to impart to people. Mystical traditions often actively turn down the induction of people who they don't think are capable of fundamentally grasping the latter, in order to avoid the mutation of their tradition.

Tuesday, March 15, 2011

Energy drinks and I^2

Much has been made (mostly by Malcolm Gladwell, which isn't necessarily a stunning endorsement) of the connection between the enlightenment and the broad use of coffee in Europe. Coffee houses were about the only social gathering place where people were taking stimulants rather than depressants, and so they were extremely productive intellectual centers. This post will not discuss the social aspect except in passing, but I will be spending part of it claiming that energy drinks are better than coffee so I figured I'd get that out of the way. (Full disclosure: I hate coffee with a passion, so I am biased.)

Today, the social aspect of coffee houses has atrophied. Coffee houses aren't really meeting places anymore. The socializing that once took place in coffee houses now takes place on the internet, and most of the socializing that still happens in coffee houses is mediated by the internet anyway. All that remains is the caffeine, and that is now taken care of by other methods.

What do energy drinks have that coffee does not? I'm not counting things like B vitamins or taurine, or horny goat weed, or any of the other various and sundry ingredients that get hyped. Most of them don't have well-documented effects, and they all pale in nootropic function in comparison to something else that's common to all energy drinks: a wide variety of types, most of them disgusting.

Bear with me here. There has been a lot of talk (and several seemingly legitimate studies) about the correlation between intelligence and novelty-seeking behavior. Most of the people who know the difference between correlation and causation seem to assume that intelligence causes novelty-seeking, and perhaps it does. I would go for the opposite explanation, however: those behaviors which we consider intelligent are largely the product of a novelty-seeking disposition. Someone can be innately quite intelligent, skilled, and clever, and still be so stuck on a single model of the world that they avoid any activity we'd consider the hallmark of a great intellect. Someone can be innately fairly dull, but through a drive to learn about the world come off as intelligent. One thing a market like energy drinks encourages is novelty-seeking behavior.

The likelihood of the first energy drink someone tries being tasty is minimal, but the likelihood of it being highly caffeinated is great. The moment someone tastes it, two things happen: he realizes that it's disgusting, and he is rewarded for trying it. He may not want to try that particular variety again for a while, but there are plenty on the shelf, and when he tries the next one the same thing happens. By exploring the many varieties of energy drinks he is performing operant conditioning on himself to become a novelty-seeker.

From what I understand, this was something like the state of coffee houses in London in the nineteenth century -- most of the coffee was terrible but still caffeinated, and you had to do a lot of exploring of the city to find a coffee house that served edible coffee. This may partially account for the success of Babbage and Lovelace, among others.

Tuesday, March 1, 2011

Increasing Signal to Noise Ratio in Markov Chain Output

The usual way of doing a markov chain bot (first order) is that, for all pairs of tokens p and q, the likelihood of q following p is:

P(q|p) / P(p)

where P(q|p) is the raw count of times q has been observed following p, and P(p) is the raw count of p.

This gives us a probability graph that looks more or less like a line. The most common pairs are up top, and the least common are at the bottom. This is actually directly equivalent to the inverted graph of information entropy, where the least common would be up top and the most common at the bottom.
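In Python, the naive model is just a pair of count tables (a sketch; the names are my own):

from collections import defaultdict

# Naive first-order model: the pair table holds P(q|p) and the single
# table holds P(p), both as raw counts, incremented together whenever
# q is observed following p.
def train_naive(tokens):
    pair = defaultdict(lambda: defaultdict(int))
    single = defaultdict(int)
    for p, q in zip(tokens, tokens[1:]):
        pair[p][q] += 1
        single[p] += 1
    return pair, single

# The likelihood of q following p is then pair[p][q] / single[p].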

The problem is that in language, the most common sequences tend to be meaningless (or exist only for redundancy). Search engines filter these sequences out because they do nothing but make more work for systems that operate based on finding the sequences closest to unique. The outliers on the other side tend also to be meaningless, for a different reason: they tend to be errors. So, the optimal signal is actually of middling entropy.

How do we make bots that will (without special-case coding) automatically avoid succumbing to the usual exploits (such as the twelve year old troll who spams it in PM with the token “mantits” repeated eleven thousand times)? How do we reliably and elegantly improve the signal to noise ratio?

If you try to graph how likely something is to be signal-heavy in such a system, you’ll probably get a parabola that peaks about where the graph of probability and the graph of entropy cross. The goal is to make the graph of this weighted markov probability (which might be called cooked-model probability) quickly approach that of the signal. The easiest way to do this is, rather than incrementing both p and q when p follows q, doing the following:

P_n(q|p) <- P_{n-1}(q|p) + ((P_{n-1}(q|p))^2 - (P_{n-1}(p))^2)^(1/2) / (P_{n-1}(p) * P_{n-1}(q|p))

P_n(p) <- P_{n-1}(p) + 1

As should be clear, the graph will rapidly come to resemble the signal graph, and its mutation will slow as it gets closer to the signal graph, for a known sequence of tokens. This means that such a bot should be capable of operating at a signal to noise ratio comparable to some reference standard's with a much smaller training input set.
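A minimal sketch of that update in Python (the abs() guard against a negative radicand and the plain +1 fallback for a pair's first sighting are liberties taken to keep the sketch runnable, not part of the formula above):

import random
from collections import defaultdict

# Cooked-model training: instead of incrementing the pair count by 1,
# increment it by the weighted term from the update rule above.
def train_cooked(tokens):
    pair = defaultdict(lambda: defaultdict(float))   # P(q|p)
    single = defaultdict(float)                      # P(p)
    for p, q in zip(tokens, tokens[1:]):
        pq, pp = pair[p][q], single[p]
        if pq > 0 and pp > 0:
            pair[p][q] = pq + abs(pq**2 - pp**2)**0.5 / (pp * pq)
        else:
            pair[p][q] = pq + 1   # first sighting: plain count
        single[p] = pp + 1
    return pair, single

# Generation weights each candidate successor by its cooked weight
# rather than by its raw count.
def generate(pair, start, length=20):
    out = [start]
    for _ in range(length):
        successors = pair.get(out[-1])
        if not successors:
            break
        qs, weights = zip(*successors.items())
        out.append(random.choices(qs, weights=weights)[0])
    return ' '.join(out)

Usage would look like pair, single = train_cooked(open('phrack.txt').read().split()) followed by generate(pair, 'the') -- the filename and seed token are purely illustrative.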


[figure: graph of the raw markov model of phrack]

Wednesday, February 23, 2011

Libertarian distcomp projects: solutions in search of problems

There have been several projects floating around involving distributed computing in a non-charity context. Unlike the charity-type distcomp projects like SETI@Home, distributed.net, Folding@Home, and friends, people who donate computing resources to these get something in return. The general form for this is: a token has the value of a certain number of operations, and also a value in some real-world currency. Someone can spend tokens in exchange for processing, or exchange them with real money, and people whose machines perform processing get tokens in exchange. It seems very straightforward.
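As a sketch of that general form (every name, class, and rate here is illustrative, not any particular project's API):

OPS_PER_TOKEN = 10**6  # illustrative conversion rate

class Ledger:
    """Hypothetical token ledger: credit workers for cycles donated,
    debit buyers who redeem tokens for processing."""
    def __init__(self):
        self.balances = {}

    def credit_for_work(self, worker, ops_done):
        # Workers earn tokens in proportion to operations performed.
        self.balances[worker] = self.balances.get(worker, 0.0) + ops_done / OPS_PER_TOKEN

    def spend_on_processing(self, buyer, ops_wanted):
        # Buyers redeem tokens for processing; exchange with real money
        # would sit alongside this at whatever rate the market sets.
        cost = ops_wanted / OPS_PER_TOKEN
        if self.balances.get(buyer, 0.0) < cost:
            raise ValueError("insufficient tokens")
        self.balances[buyer] -= cost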

The problem is, it's entirely useless.

The kinds of problems that can be trivially distributed are few and far between. There's cryptanalysis, protein folding, gene sequencing, the kind of rank-computing that gets done at google's data centers on hadoop... Most of the CPU-intensive tasks that one can list off are either also very data-intensive (huge data sets are easier to handle in-house than to send flying across the world, especially when the smallest chunks you can deal with are a couple hundred megs) or of no interest to anyone outside of academia (and thus worth very little money) -- in other words, video encoding and finding primes are out of the picture. We're left with things like cryptanalysis, which is somewhat problematic since the people willing to spend money to bruteforce encryption are probably either spooks or crooks (and the spooks have their own machines for this kind of thing).

I'd absolutely love to run this kind of system, because I'm wasting plenty of cycles at home and I'm not getting any richer. If any of you can think of a problem that would be a good fit for this solution, *please* let me know. I'll give you a million free cycles on my new distcomp network.