Tuesday 27 December 2011

Timescales of hope and critique

In Pierre Teilhard de Chardin's "Some Reflections on Progress" from The Future of Man, we are confronted with a view of progress which is refreshing in the honesty with which it is proposed. This honesty and this hope are the best parts of Teilhard de Chardin, because they are all too alien to us critical secularists. As he says, "Whether from immobilist reaction, sick pessimism or simply pose, it has become 'good form' to deride or mistrust anything that looks like faith in the future."

What is refreshing here is that progress is taken as a given. On the conditions he sets out, progress is, seen in the light of life as a "phenomenon of prodigious age", hundreds of millions of years in the making. The next step, then, is to consider how such a timescale influences arguments about progress. In another essay in this collection ("The New Spirit", 1942) he describes "the immense travail of the world" as inevitably the reverse side of "an immense triumph." This is as troubling as all theodicies. One cannot any longer assert a crude polarity whereby suffering inevitably leads to its reverse; every manner of cruelty and hatred has been justified thus. This is the brutal Catholic element of his thought which must be immediately jettisoned. This type of argument leads us to an optimism of which we have been rightly suspicious ever since Voltaire's critique of Leibniz, and even more so since the Holocaust.


We must be very careful when we take a long view, because this is the timescale of institutions and the state. This is how suffering and injustice are rationalized. It is the deus ex machina to which individuals appeal when they wish to silence other individuals. It is just as troubling in this guise as it is in the philosophical apparatus of Habermas's "consensus". We should probably start (as with Hans-Georg Gadamer, pace Habermas, though this article correctly points out that the two positions are finally united somewhat by Paul Ricoeur) from what is, and proceed thence to where we actually wish to be. This might lead us to a nuanced and fluid mode of thought and action. It begins with our human, all too human foibles, and takes us from there towards something better. This is a praxis of becoming.

By contrast, the position (one might say pose) of critique starts with negative criticism. It is already hoarse with the shrill haranguing of self-righteous denunciation. It takes no time to reflect, to consider, to discuss, to debate. It reifies critique. It is praxis still-born, strangled at birth by a flawed theory. This is not to suggest that critique has no place, ever. It is to show what influence a particular conception of time has on human thought. To say that an awareness of time stopped with the insights of phenomenology is misleading, and indeed has misled thought ever since. Returning to a more fundamental, even basic awareness of how time influences thought is a necessity. 

Thursday 10 November 2011

The Luddites were right

Luddites were not objectively anti-technology, even if that is what their acts of sabotage and destruction might suggest, and even if that is how history has on the whole remembered them. What they were searching for was a way to integrate new technologies into their way of life, rather than allowing each new gadget to dictate what form society would take. In this, they were the first to encounter the problem which endures today, namely the disparity between new processes and technologies on the one hand, and on the other, social and political models which took shape long before such changes could even have been imagined. It is the divide between the human and the machine. It is an abyss of scale.


What was needed was a new social technology which could have been attendant upon the machine and organizational technologies which had been developed. As it was, the machine begat the organization, with the social and human crowded out. Concepts such as "change" and "progress" were introduced which lacked the nuance and sophistication to equip us intellectually for these inventions. This conceptual dearth impoverished our intellectual world to such a degree that only recently have we begun to exit this recession of the mind. Critics of the effects of industrial organization on the lowest levels (such as Engels and Dickens) showed that all was not well, but we still lacked the tools of thought to formulate not just the answer, but even the problem.

One definition of ideology is that which we don't know we know. It is a pervasive paradigm of heuristics, prejudices (in Gadamer's sense), assumptions, and habits. It is that 'unthought' which to some extent leads our thinking. The Victorian moralists who criticized the 'excesses' of their age were still under the dark pall that the smokestacks spread throughout all economic and intellectual life. They should, rather, have realized that the problem was the essence of their age. The entire model of industrial-scale economics allowed for a skewed accounting, such that those advocates of factories and railroads genuinely believed that theirs was the best way. They had so structured their informational world that the excesses for which they were critiqued were in fact the foundation for everything they did. Industrialization was predicated upon markets being opened at gunpoint. Industrialism and colonialism are mutually defining.


What works on the international scale works on the local too, however, and 'markets for the sake of markets' was as reprehensible to the Luddites as it is to the subjugated inhabitant of a colonized land. What was apparent even in 1811, when the Prince Regent offered a reward to any person "giving information on any person or persons wickedly breaking the frames", was that it was the new technologies that set the terms against which the actions of people might be measured. The matter was treated as mere destruction of property, and questions of why there were people attempting to be so destructive were scattered before they could be formulated, in the knee-jerk accusation of immorality: destruction of property = evil, end of. Rather than engage in a discussion, it was more convenient to allow a spontaneous order to develop, without thinking it through. It is as though it was decided that the economy to which Adam Smith's invisible hand was attached should be blindfolded too, as long as it suited those whom it benefited.

Wednesday 9 November 2011

The art of technology

"As ever more sensitive emulsions come into use, pictures could be taken in a flash of light, and there was no need for a sitter to pose for long periods of time with glazed, unnatural expressions."
What of an alternative view: that there was, on the contrary, something more honest about a process of such undeniable artifice - sitting in a studio, remaining completely still, trusting in the expertise of the practitioner? That is a form of respect paid to the total difference manifested in this radical departure of a medium. It is not simply a 'democratization of the image' whereby anyone can now have a portrait, where previously such was only within the economic means of the higher-ups. This was not an extension of the visual franchise, though it now suits all of us to call it such, and also suits the various companies who sell the equipment necessary to produce ever more snapshots.


There is a mendacity to the pervasive notion that the photograph gives us the natural, spontaneous "moment". On this, consider whether you would ever greet a friend by saying, "my, how natural and spontaneous you look today!" (If you would, then I don't know what can be done for you...) The snapshot pushes the natural and the real ever further away from us, and the technology makes it ever more difficult to see with our own eyes, instead of with the entire history of photography acting as a set of templates of acceptability. But this is a trite, clichéd criticism. I have said nothing new. What I want to consider are the techniques of seeing that we can compare. Painters and photographers see the world differently. The photographer waits for that instant that is telling, unusual, striking. The painter, according to the conditions of the medium, considers that which is to be depicted on a much longer time-scale.

For me, the above photograph by Diane Arbus does everything that photography can do. It cedes what is potentially human in a portrait to the mechanism of the technology of depiction. It reduces the interplay of facial muscles to a time-scale that serves only the camera. It captures, with all its terrifying capacity for precision and focus, a particular series of electrical impulses firing throughout the hundreds of muscles in the face, reducing the becoming of a facial expression - in all its delicacy and complexity - to the lie of a static, solid gurn. It is a commonplace of discussing technology (via Lewis Mumford) that each new invention will be set in the terms of that which it replaces; thus the motor-car was in its early days referred to as a horseless carriage. Time was needed for the new device or complex of techniques to come into its own. It saves us the effort of wasting time completely rethinking each new technology only to see it rot into redundancy. If it succeeds, then it can do so on its own terms. I do not think this is what I am doing in my comparison of photography and painting, however.

For one thing, the photograph is not cutting-edge technology. I don't think I would be extreme in considering it established. What I would question, however, is whether we ever properly considered it in dispassionate terms. It is a tool, and accordingly it is a response to a human need via a human capability. It offers a semi-permanent means of storing the visual processing that takes place via sight. It brings sight into ever smaller divisions and ever greater extensions of time via super-fast cameras and long exposures. It allows us to see in lower light. It expands sight into the infra-red and the ultra-violet. All these it can do, but what is it for? On the human, social level, in terms of objectification (in whatever regard one wishes to consider), it serves the machine and not the human. The most successful photographers, for my euro, are those who bring the two closer together (Nan Goldin springs immediately to mind), rather than reveling in the technical difference that this machine can manifest (as with Arbus, though I pass over this dichotomy in silence, as is a blogger's prerogative).


Goldin revels (as with The Devil's Playground - read the review which this links to) in the act of seeing itself, and the long relationships over many years allow us to see with her, rather than to simply see what she saw. The "absence" which the above review refers to is actually us. We see friends and relatives in the long view of many years, and in some cases of different generations. Time is made to conform to a human scale, one that gives us a beautiful intimacy. The charges of voyeurism made against Goldin might stick against some in the snap-shot generation, but in considering her own work they reveal a blindness. In her work, Goldin has thought through the act of seeing, and the technique of seeing that the photograph gives us. The effect of her work is cumulative, so that while of course she is subject to the same constraints as all other photographers, it is how she structures her own context that sets her apart. Her snapshots (as above) are something more than that. Each is one view among many, as in a fleeting memory. They are not the instant being held up as the entirety of that person. She makes her own canon.

Wednesday 26 October 2011

Information and stuff: Stephenson and DeLillo

Cryptonomicon and Underworld, two great novels in their respective literary realms, are definitely linked (as Richard Rorty noted in his blurb for the former), and they address each other's weaknesses. They both span much of the last century's history, and they both use the standard approach of the non-linear narrative. Underworld is material, replete with what Hart Crane called "America's plutonic ecstasies" - wastes nuclear, industrial, human... It also, however, misses a crucial trick by more or less accepting the standard model of material progress in the western world, and doing no more than flipping it for our delectation, presenting us with the opportunity to indulge in our bad faith, our consumer guilt. It is an inversion that lacks sophistication. Though the actual work itself is an achievement, the message is banal.
Cryptonomicon is also a mirror through which we darkly consider our recent past, except instead of how DeLillo gives us the flip-side of our material successes (the world of "Just What Is It That Makes Today's Homes So Different, So Appealing?") in muck and mire, Stephenson focuses on information, on the immaterial. He is interested in how its structures structure our lives. More than this, if the 'dark side' parallel is to be valid, then just as the structure of information in the 20th century is such that openness and freedom of digital movement are the ideals held up by theorists and advocates, Cryptonomicon astutely shows that this is more often than not perverted by those who wish to dominate via information and technologies of communication.

With the Baroque Cycle (a trilogy penned after Cryptonomicon, though serving as a kind of prelude rather than a prequel to it), Stephenson gives us what amounts to a secret history of the immaterial's ascendancy, which shows that the material world was predicated on this world that was accessible only to those with the particular inclination to learn the language of the universe, maths and physics. Where Underworld is about burying the implications of our approach to living in the world, of using our skills, ideas, tools to alter our surroundings, Cryptonomicon gives us that which we haven't even repressed, as we don't know enough about it. Encryption is the key to the whole other side of the bright, shiny story that is peddled so often by the corporate histories (which not infrequently become academic and popular history). Not all the information that is out there is available to us, and we do not make all our own information available to others. Stephenson brings us to the point where we can ask ourselves: should everything be available to all?

Saturday 8 October 2011

Do androids dream of remarkable things?

One notable difficulty which science fiction - and sci-fi - has is that while it is a discourse of possibility, it makes too few concessions to social reality for it to be regarded as a part of literature conceived as a liberal art. Literature became, via grammar and rhetoric, a liberal art in the sense of being that which the free [Latin: liber] person (originally man) would study. Now it is also something we tend to do in our free time, though the professor of literature spends remarkably little time actually reading literature, and more time keeping up with other responsibilities, writing about writing (about writing) having become more professionally rewarding than reading. Either way, it concerns us as a form via which we can be circumspect about aspects of our existence.

Science fiction is different to speculative fiction or to the fantastic (I think of Calvino's Cosmicomiche as an example of the latter, perhaps Margaret Atwood falls into the first loose bracketing) in that it is more concerned with being a type of thought experiment, than the fullness of life as is (apparently) found in realism. The most basic description you can give of a classic of science fiction starts with the sentence "Imagine if...". That goes for hard science-fiction in the line of Tau Zero or Ringworld. These are opportunities to chase after the myriad implications of an event or an idea.

There is then the question of the more nuanced texts, such as those of Philip K. Dick, Sheri S. Tepper, Dan Simmons, Walter Tevis etc. In these examples it is not the idea that is made the master of the form, and there is an interplay between character, setting, and form that makes this field more interesting as literature rather than as "ideas texts".

There is also that popular realm of Dune and Star Wars, which is (as Voegele notes above) little more than swords and sorcery at faster-than-light speeds. Star Trek I would put in a sub-group, as the United Nations and the balance of power at FTL speeds.

What all of these have in common is that the overriding ideology defining the discourse is one of willful elitism. We have an unapologetically aristocratic system (as with Tepper's Grass, and in all of the positively feudal Dune series, notably with examples such as the priestly 'Bene Gesserit' and the 'Spacing Guild'), or an elitism of apparent intellectual entitlement. Even in those examples where we supposedly encounter the underworld (as in much of Dick), there is still the idea that they are subject to some powerful capitalist or some cosmic corporation. The reason for this, according to Fredric Jameson's excellent Archaeologies of the Future, is that all literature writes about now, and that at best the future is a distancing device.

The question becomes, then: why are authors of science fiction so perversely conservative, so reactionary? The objection might be made that the elitism of the 'scientist as hero' is but the meritocracy of the universities. Even the Jedi, you could argue, do not exclude people on the basis of sex or species, but only on the basis of ability. Very well and good if that is so, but my question would be a bit distanced from all that. If we consider the bustling, space-faring civilization either on the page or on the screen, more often than not we see things from the heights, from a privileged perspective. An exception might be Blade Runner, where we are in the muck and mire of a decaying Earth, but the governing principle is still 'higher = better'. Indeed, in the text on which the film is based, the entire narrative hinges on a consumerist desire for nicer things, an android keeping up with the Joneses.

For me, one of the most frightening examples of this blindness to any kind of social inclusion comes from Star Wars, and the fact that it is the most successful series of science fiction texts in history. It is a sub-genre unto itself. In my view, all six films should not be regarded as the story of princesses and knights, and the turn of Anakin to the Dark Side is irrelevant, for to my mind there is too much grey to be entirely comfortable with a fast distinction between Dark and Light (though that is Sith talk...). That is the history of the industrialists, the war-mongers, the bureaucrats. In my mind, the entire story can be seen in the arc from Jango Fett to 'the clones'. The reason for this is that within the Galactic Empire, these are the only non-Jedi, non-diplomats we encounter. Basically, from the perspective of anybody who matters, everybody else is just clones: interchangeable, replaceable, expendable. They are us.
In all those shots of busy worlds, where the people look like ants, those tiny dots are us, and they have as much impact on their own lives as does the average North Korean. The giant farms where the clones are grown on Kamino for the empire are not so different from the nightmarish world of The Matrix. The clones are but biological robot soldiers, and there is no notion of them having any autonomy. They are, in Kantian terms, an abomination: humans designed to be a means, and not an end.

What, then, is the alternative to all this? Ursula Le Guin, as always, presents us with both sides: both the mirror of the world as we know it (as Deleuze's identity, under which I perhaps perversely also include the other three pillars of reason, namely opposition, analogy, and resemblance) and that difference that 'makes a difference'. Examples of this are the anarcho-utopia of The Dispossessed, as well as the properly alien world (though Jameson finds echoes of medieval Muscovy) of The Left Hand of Darkness.

My favourite example of an alternative is in a short story collected in Cordwainer Smith's The Rediscovery of Man. The story is "The Ballad of Lost C'Mell" and concerns the attempts by the underpeople of Smith's cycle of stories (huge in scope, spanning thousands of years) to get the franchise for themselves. It concerns them as people (though not necessarily human), and relegates the controlling apparatus of the galaxy (aptly named 'The Instrumentality of Mankind') to the status of a blocking mechanism. It is but another example of attempts to shut down the opening up of citizenship, of rights as well as obligations, of personhood. These are the ideas informing this short story, but it is the execution that elevates this text above most others in this genre, bringing it to a level of literary greatness. The conclusion is as emotionally affecting as Flowers for Algernon, and indeed anything else in science fiction.

For the next stage of science fiction, we need to pass beyond the echoes of big science (as in the 40s and 50s), the counterculture (of the 60s and 70s), of neoconservatism (of the 80s and 90s, v. Cyberpunk), and of globalization (the 90s and 00s). For science fiction to remain an important discourse for examining ideas that confront us here, now, it must step out from behind its blanket of distance, of cool examination, or of intellectual revenge. We must allow the clones, the androids, the cyborgs, the robots, the underpeople to have hearts. This is how we can bring our ideas about technology and the future into contact with the human reality of our lives now.

Tuesday 4 October 2011

Horror: the poetry of the lizard brain

To be fair, this is not specifically about Lovecraft, but actually some notes on what I consider to be weaknesses that are structurally inherent to the horror genre itself. Lovecraft just happens to be the midden onto which I toss the rotting filth of my antipathy. Reasons to hate Lovecraft: 1, 2, ∞.

Horror is in a curious position as a genre, given that it is almost entirely based on emotion, or rather a small gamut of emotional reactions, rather than a form or idea (pick your counter-examples in whatever genre you like). It relies upon stereotypes rather than tropes. Tropes are what we find in all literature, and it is in the interruption and overturning of such tropes (which are weak indicators of our expectations) that we often locate a work's originality and creative merit.

In contrast to this, horror overturns little. It wishes to trigger. It seeks to plug into our most ancient reflexes. It is the poetry of the lizard brain. Its focus is disgust, and not thought. That is its first level. If horror is to become any more "intellectual" than this (i.e., at all...), then this visceral reaction must be aligned with some cultural analogue, and this is the second level of horror. In Lovecraft, this is via some barely veiled WASP racism (do I really need to spell out what lies behind the "horror" of Shub-Niggurath, all torturous attempts at etymological rehabilitation aside?). In Song of Kali by Dan Simmons, it is through a disappointingly un-nuanced form of orientalism (yeah, I went there).

This photo is as crap as a reference to Lovecraft should be.

The final level, as I see it, is the attempt to make horror systematic, formalized. This is doomed by the very source of horror as the literature of that which cannot be expressed (quite different from the inexpressible... I am not getting into a discussion regarding Adorno and Paul Celan and how some things are such an affront that to write about them seems to put writing itself in jeopardy, though after reading Todesfuge I cannot but side with Celan). This third level of horror can be seen in Lovecraft's use of words that are the shibboleths of his oeuvre. These are meant to be some kind of etymological reaction-formers. The preeminent example of this is "eldritch". It sounds venerable, ancient (elder), with echoes of an uncanny grotesqueness (witch, ditch...).

The problem with all this is that the part of the brain to which this genre makes its appeals resists systematization, and the reader who bothers to read through Lovecraft's collected works (have mercy on my sense of taste, as I did) begins to greet each new instance of such words in the text much as a bird-watcher might greet the most scraggly pigeon in the street, that is, with something less than ecstasy. It all becomes a bit... obvious. The text waves a red flag at us, screaming "you will be afraid soon, oh, so very scarified." In actuality, the logic of horror is analogous to the logic of pornography, wherein there is a continuous need to 'make it new' (I am thinking of Gore Vidal's words in the documentary Thinking XXX).

Finally, we can indirectly return to some of the problems surrounding the notion of the inexpressible, and consider Carrion Comfort by Dan Simmons. I simply don't feel comfortable with horror that leeches off the Holocaust. It smacks of theodicy, for how and when could it ever be appropriate to introduce paranormal elements to the ferocious reality of millions of deaths?

Wednesday 21 September 2011

Facebook, my friend, you are entering a world of pain.

I quite enjoy the impromptu battles between the people getting furious over Facebook's redesign on one side, and, on the other, the "hey, keep it cool man, like, change happens, y'know?" brigade who have set themselves up as the default voice of reason. I side more with the former than the latter, since there's a difference between acceptance and acquiescence. The posture of the latter holds that we pay nothing for these online services, and basically falls in with the pre-digital mindset of "you'll take what you're given", or "any colour as long as it's #3B5998". It is the position of the Dude, who just accepts what happens. By default, I am put in the position of Walter by those who are, like, way more chilled out about it all. Facebook abides, man.
I told that Kraut a fuckin' thousand times, I don't roll on shabbos.
Well, fine. I am he. But think about what exactly it is that people find dissatisfying about these changes. It's not simply a manifestation of chutzpah (as someone following in the footsteps of a convert to Judaism, I'm assuming I can say that now) for us to point out that all is not well.
The Dude: Walter...
Donny: They already posted it.
Walter Sobchak: Well they can *fucking unpost it*! 

You know what, they can change Facebook, because we are what it runs on. It is of course not the case that we have paid for a service with cash, but do we think they are providing a service to us for free? Of course not, they get our time, they get our attention, and they get the revenue from every advertiser wanting to hit exactly the target-market that we represent.


This is the new logic of open source being brought to bear on ever more realms, and we need to expand our conceptual vocabulary accordingly. We no longer pay for services with money, but with our attention, with our time. That is as valuable as money, if not more so, because in the murky world of Facebook's revenue stream via ads, they can tell marketers that there are billions to be made in the upcoming world. In capital terms, Facebook is not worth anywhere near the numbers thrown about (such as $100bn); what is behind such fantasy figures is the concept of there being a social ecosystem which this website represents. Those billions that don't come in via direct advertising are to be found right behind our eyeballs. Too right we can complain.


Finally, of course Facebook is going to listen and make more changes, because though Bebo and Myspace etc. are dead in the water, they died because they deserved to. They were no kind of a challenge. The situation we are in now is that Facebook is in the position of AOL, a stupid monopoly built on closing off information. Creating a wall to keep information out equally keeps it in, and every information technology has proved that to be a foolhardy strategy in the long term (even the guilds only kept the printing press out of Paris for 20 years). The prize of openness and market-share is Google's to grab.

Der Untergang des Magnum Opus

Quite aside from the basic and dull point that is made ad vomitum regarding "master-narratives", one alternative way to regard the decline of "great works" is the fact that writers are coming to be regarded, and to regard themselves, as accidents. This is a fact of our social structures, in a twofold manner.

Firstly, in the narrative perspective whereby our overview of history needs actors for reasons of identification, the great writer or their text can serve this function, as a cognitive anchor. It is a way to make history less autistic, less concerned with facts. That is the "history of literature" view, but it holds true in any field, whereby an -ism is deployed like a sheepdog to round up those pesky individuals with fancy notions of freedom and independence.
Speculative Realism
Evidently, a text can serve just this function, but with each repeated deployment it has become increasingly attenuated. It got to the point where Adorno's Aesthetic Theory can, in its blurb, be referred to as his magnum opus. The fact that he did not complete it during his lifetime is related to the sapping of the ideal that such a book has, as much a result of an atmosphere in which the fragment is fetishized (need one give examples?) as of one in which we can witness its reactionary rejection (but this reaction is usually political in tone, thus Badiou's elevation of the Event, which is a tellingly Maoist move, but good god I digress...). The point, again, is that not all great texts surpass everything by their contemporaries, and so some are chosen by default, simply because others will never be considered once this process has taken place.

The second manner is that whereby fame accrues more fame. Like any standing reserve in a structure (money, information, etc.), there is a network effect in evidence. Fame can cycle endlessly, as in Bataille's "general economy" (which is in contrast to the "restricted economy" that we are led to understand is economics proper), with the surplus standing reserve left to pass down along the same well-worn channels of a text having a place in the canon, or of a thinker being a part of an -ism, until this channel is blocked. The point is that now there are so many various opportunities that it becomes a question as to whether there is sufficient force behind the flow to create more than a trickle of renown. We see this in the notion that there are "too many journals", but as noted in this link, often these journals are no more than tags attached to articles.

This is not a jeremiad against change, and I find the point made by Jan Velterop in that link more helpful, as it allows us to view the old problem in a new light. People have complained about there being too many books since there were two books. What changes is the structure that governs and facilitates our access to and interactions with these works. Accordingly, we get fields (can I please call this the "sheepdog effect"?) of influence, counter-influence, rejection, interest, and all these interact with myriad others. Returning to Adorno for a moment, we can see an analogy of this where the punning title of his work gives us a Minima Moralia rather than the Magna Moralia of Aristotle, a collection of observations which can be fashioned into a constellation by our own effort in reading him, as well as by what we bring from our reading of others. The age of the magnum opus had its dark counter-image in the dilettante, but our age needs to develop its alternative to this.

Thursday 8 September 2011

Žižek, alone.


By writing so prodigiously, and in opposition to apparently everybody else, Slavoj Žižek seeks, in terms of style, to deny affiliation with any other contemporary thinkers. Writing so much is an effort against particulars; it is an attempt to ascend to the level of the universal. In short, he seeks to deny any broader context for his work by becoming his own context.


Monday 22 August 2011

Kardashev-Syndicalists of the world unite!

Allen Tate, "The Man of Letters in the Modern World" (1952): 'What modern literature has taught us is not merely that the man of letters has not participated fully in the action of society; it has taught us that nobody else has either.'

Tate is fascinating for a modern reader because he comes from a position that is alien to most of us now, one that manages to be learned yet at the same time unapologetically moral. This is not in the often reactionary or defeated mode of T.S. Eliot, but somewhat closer to Emmanuel Lévinas. He calls for a distinction to be made between communication and communion, noting that what he terms secularism (as the death of the notion of "spirit") apparently seeks to do away with all ends in favour of the absolute of the means.

In thinking about the Hyperion Cantos by Dan Simmons, we can see how this idea is extended outward. In the "Consul's Tale" we read, in the colonization/invasion of a planet, the profound cynicism of the WorldWeb. In this space opera, set in the 28th century, the hyper-technological human society called the Hegemony has spread throughout the galaxy. Quoting from Wikipedia:
'The farcaster network (the "WorldWeb") is the infrastructural and economical basis of the Hegemony of Man and thus determines the whole culture and society. Also flowing across these portals are the structures of the datasphere (a network reminiscent of the Internet in design, but far more advanced). In that lurks the powerful, knowledgeable, and utterly inscrutable TechnoCore — the vast agglomeration of millions of AIs who run almost every piece of high technology of mankind. The unthinking hubris of man resulted in the death of the home-world (Earth), and this arrogant philosophy was carried forth to the stars, for centuries. The Hegemony itself is a largely decadent society, relying on its military to incorporate into the WorldWeb the colony planets, even unwillingly, and also to defend the Hegemony from attacks by the Ousters, "interstellar barbarians" who dwell free of and beyond the bounds of the Hegemony and shun all the works of the TechnoCore (especially farcasters)'
Anyway, in this tale (the first Hyperion book loosely follows the model of The Canterbury Tales or the Decameron) we see the profound cynicism of the WorldWeb's entire reason for existence. There is no actual thought behind all the magnificent technology described to us, and this is but a cosmic extrapolation of the circumstances which confront human society today.

We have always been tool-using creatures, but the question deserves to be posed as to whether we still use tools as we did before. Do tools now begin to use us? This is one of the classic topoi of science fiction (as in The Matrix trilogy, where technology actively takes over, or Walter Tevis's Mockingbird, where we apparently just give up, entrusting everything to our creations). Once, in conversation with a friend, I jokingly referred to my overall philosophical vision as Kardashev-Syndicalism, a mixture of a kind of alternative to state-directed socialism (because, as someone who takes the ideas of spontaneous order and emergence seriously, I would require a non-centralized form of government) with something else, a broader idea of what being human can mean. It sounds good even if you think Kardashev may have been an obscure Ukrainian labour theorist affiliated with the Mensheviks. He was not.


Nikolai Kardashev is a Russian astrophysicist who developed the idea of a scale according to which civilizations might be ranked, based on their energy use. There are three types of civilization, with Type I able to use all the energy on its home planet, Type II able to use all the energy from its nearby star, and Type III thirsty or hungry enough to devour all the energy in its home galaxy. An alternative version of this taxonomy, that of Robert Zubrin, puts the emphasis on extension throughout space, rather than energy usage, such that Type I has spread throughout the home planet, Type II has outposts and settlements on numerous other planets, and Type III the galaxy.
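As an aside, for those who do like nitty-gritty: Carl Sagan later proposed a continuous interpolation of the scale, K = (log10 P - 6) / 10, with P the civilization's power use in watts. A minimal sketch in Python (the example figures are rough, and mine rather than Kardashev's):

    import math

    def kardashev_rating(power_watts):
        """Sagan's continuous interpolation of the Kardashev scale."""
        return (math.log10(power_watts) - 6) / 10

    # Rough thresholds: Type I ~ 10^16 W, Type II ~ 10^26 W, Type III ~ 10^36 W.
    print(round(kardashev_rating(2e13), 2))  # humanity, at roughly 2*10^13 W: about 0.73
    print(round(kardashev_rating(1e16), 1))  # 1.0
    print(round(kardashev_rating(1e26), 1))  # 2.0
    print(round(kardashev_rating(1e36), 1))  # 3.0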

Now, the nitty-gritty details don't really concern me here. Nor do accusations of cosmic smugness (although I may have to call my first album that). The phrase to me is more of a tool to rethink our present in the light of all the alternative paths we might take. Our thought might be better served by allowing the dynamic and daemonic power of possible worlds to explode our ideas onto a cosmic scale. Woah.

The universe of Hyperion is more than a metaphor of capitalism's viral logic, though it is undeniably this too. The value of science fiction is that it is truly a mode of thought, rather than adjustment or technocratic problem-solving; counter-intuitively, these latter two are actually the impulses behind all realisms, such that they are basically modes of acceptance. "Let the market take care of it." Thought, in contrast, confronts the machine world which Allen Tate considers was born in the 17th century, along with modern science, the modern economy, modern technology. This world has ever since done its best to erase thought via standardization, universalization. In short, to destroy difference and the ability to try a new approach. This is not because of innate malevolence. It is rather the impulse of this machine logic to seek efficiency at all costs, even if that means saving energy by not thinking. The side effects of this are political, social, and cultural. Machine logic must be contextual, and there are even contexts where the logic of machines won't suit machines.

Tate was right when he said we haven't participated in the action of society. We have accepted what has been done before because it worked. Or we thought it worked. It worked within certain limits, and with the development of a science of ecology, of non-linear logic, of ever less authoritarian theories of government, we can see ever more clearly that what was accepted in the past is insufficient now. All thought is technology, and so we can develop new technologies of thought that do not limit us, because the limitations that we imposed (for reasons of expediency) do us no great service. Boundaries are lies.

See also Towards a Historical Cosmology and a Very Large Scale Ethics

Friday 19 August 2011

PhiloSawphy

Some films manage to provoke me to think about old ideas in a new way, and reading some jottings from a while back, when I saw Saw (zing) for the first time, I thought I would inflict them on the internets. Effectively, what these films are for me is an examination of technology and its relationship with the subject.

The killer (although this word seems too small for the character), Jigsaw, gives his victims explicit choices and instructions which are so basic as to be an affront to our autonomy. Indeed, the question of choice in its entirety is slowed down to a crawl, so that even its most elementary aspect (of a to be or not to be, to be dead or alive) ceases to mean anything. We would take the approach which would regard this film as inhabiting the universe of meaninglessness – but this would be too easy. It reduces the genuine trauma of the encounter with nihilism to the level of cliché.
The creators of this franchise describe their serial killer as anything but this, instead preferring to call him a scientist (though these are not mutually exclusive terms). Is it all one big experiment then? Is it an investigation into the... no. Short answer. No. First options like this are to be avoided, and so we must make the effort to cease considering "meaning", because it probably won't get us anywhere: this is a world without significance (namely the world inside Jigsaw's "game"), but it has real enough meaning. Indeed, this is just the point that he repeatedly attempts to make. In his attempts to redeem his players through violence, he wants to drive home – via blood and suffering – that meaning is reality, and that this has always been enough. Don't look for it in status, work, drugs. Accordingly, his world is satanic, in the original sense of the word, without the religious overtones (excusing the occasional set-piece evocative of images of Christian suffering). He is the opponent of the views of all his victims. He is the adversary of all who inhabit his creation, which the outside world never truly penetrates unless on his terms (consider Saw II and the policeman's son).

The choices he presents are those of one testing creatures to see whether they are truly worthy of life, but unlike the prologue to the Book of Job there is no implicit defender against the vicissitudes heaped upon the characters we observe. The subject posited here by the film and by Jigsaw is one that is fundamentally alone and isolated. Co-operation, when it surfaces, is exploitation. This is the political philosophy of the film. It presents a universe, as we said, that has meaning, but this meaning is anchored by evil. This gives us a theodicy. The subject never does good, but rather realizes that they have done bad. Though Jigsaw claims to be freeing his victims, he frees them into death. The choices he offers are made within the realm of psychosis. This gives us both a thanatocracy and a schizocracy. Nobody could forgive, as he asks at the end of Saw III (nor would it be forgiveness as understood within any moral-ethical tradition I can think of, but simply another game).

He punished his protege at the end of Saw III for allegedly having made impossible tests. This demonstrated to his own satisfaction that she was "unworthy" to carry on his work. More consistent would be the interpretation that she was punished for her crudity, for making explicit that the dice are loaded in his game. It shows his clockwork universe to be a vicious construction that serves only itself, and that the interaction with humans for which he uses it, some kind of perverse educational apparatus, has only one end. That end is for the machine to rend the flesh. It is beyond Kafka's In der Strafkolonie, for even in that story the punishment of the prisoner brings about an epiphany through blood. Saw is the world where the lacunae by which we are constituted as social and ethical beings are played upon with a viciousness that is troubling in its honesty. Our negative constitution, if I can call it this, is made all too obvious in Jigsaw's refrain: 'I want to play a game'. It shows the limits of all these game logics made social. It is the world where we are only ever subject to, subjected.
What we are subjected to is clear. Metal, glass, clockwork. It is low-tech. Aside from the video surveillance, most of the tortures would have been possible in the early days of industry - if Thomas Newcomen or James Watt had been completely, bat-shit insane. It is a return to a kind of simplicity, as in the Discovery Channel(s)' documentaries about steam engines, but inverted away from this techno-pastoralism. So many films attempt to convince us of our prowess, of our ability to be collectively in control. Conspiracy films especially manifest this, because somebody, somewhere holds the puppet strings. All Tom Clancy hi-tech propaganda movies say "behold, we are totally awesome"; it is pure techno-ideology. Reality proves otherwise. The mission to kill Osama (never mind the ten years it took to actually find the guy - what about all those super spy satellites?) was less Top Gun and more Hot Shots, given that they crashed a multi-squillion dollar helicopter in the process.

Jigsaw, trained as an engineer, points to the fragility of our bodies in the face of technology. And not the digital, high-tech mechanisms of social control and surveillance (which Jason Bourne shows us can be outwitted anyway), but the metal and grease industrial type, of Blake's dark satanic mills, the capital of Marx and Engels. He talks of his rules as the rules. Disembodied and superficially logical (though diabolical), he says "follow them" and little else. It is utterly cruel because we cannot follow such rules. These are the linear algorithms of the machine age, but we are inhabitants of the flexible information age. Does he perhaps have a point, noting in our political and ethical freedoms a lack of fixedness of purpose? No. It is utterly cruel, as we cannot revert to such a pre-scientific, dogmatic attitude, and using scientific tools of coercion is simply ironic. We are subjects, and Jigsaw seeks the erasure of this. Jigsaw is the inhumanity he claims to help us escape.

If nothing else, from this mess of philosophical confusion (my fault) we can note a contradiction within what still passes for a popular definition of the subject. You know the one; it rails against inauthenticity and atomization: two different, but related, issues. Atomization is a derivative of scientific thinking, the person reduced to the smallest potential actor in the petri dish of human society (in some ways identity politics [wherein I am "gay", or "a woman", or "Christian"] is a further fragmentation, the sub-atomic splitting of the person... but there may be something akin to a principle of diminishing returns in this attempt at further precision). Inauthenticity posits some perfect ideal of coherence, one which is inimical to flow and change and development. The technology of today renders both of these irrelevant.

That we can be crushed and sliced by Jigsaw's blades and hammers, vices and spikes does point to the fragility of our bodies, it is true. We are not immortal. Our medical technologies cannot solve everything. We feel pain. This, however, is banal. We do not live in fear of slipping in the shower. We assume our proper functioning. We live under the maxim that we will operate fairly efficiently, accidents notwithstanding. Jigsaw turns accident into necessity, however, and we are to take this as some sort of great lesson to be learned. But it is not. It is psychotic bullshit. Jigsaw is fucking mental. The best we can make of all this is that we are slowly leaving his machine-logic behind, and accordingly that we need to work to redefine the subject in the terms of our new technologies and scientific developments. The point about Jigsaw is that he should not be possible.

And I am only a little sorry about that pun.  

Monday 15 August 2011

WAR (hyuh, yeah)... what is it good for? Tech stocks


Any time I get into a conversation with somebody who is either interested in philosophy or involved in technology, I somehow manage to steer the conversation around to a basic problem I have, which is that technologists (Kevin Kelly, Ray Kurzweil, Barabási, et al.) often seem to have an all-too-unambiguous relationship with the applications of their chosen field to warfare. This is a primarily ethical problem, but we can displace it into the political realm if we wish to remain dispassionate. I would like to push our conceptual apparatus even further (mainly to avoid my own rhetorical excesses), and to attempt an alternative view, namely to consider this problem in even more abstract terms.

Step back for a minute, and consider G.K. Chesterton's words on the detective novel:
…keep in some sense before the mind the fact that civilization itself is the most sensational of departures and the most romantic of rebellions … It is based on the fact that morality is the most dark and daring of conspiracies. (from "A Defence of Detective Stories")
When I reflect on this, it leads me to note the quite peculiar status of nature's relationship with all things man-made. There is, however, no simple good versus evil opposition. There is no reason why we should consider nature to be more beneficial to us than something constructed or artificial. The Ebola virus comes from nature, hospitals are constructs – a crude pair of examples to be sure, but consider this as an antidote to any opposing crudity. Man-made and natural simply are, and both have different functions, derivations, etc. To valorise nature over a product of civilization is no more than, to paraphrase LCD Soundsystem, borrowed nostalgia for an unremembered time (which is the essence of Heidegger's "The Question Concerning Technology"). At best, it is the philosophy of "I remember when all this was fields". Now, returning to G.K. Chesterton, there is nevertheless a tension between these two, given that the natural is what we humans ever attempt to pacify and combat (Francis Bacon), to transcend (Ray Kurzweil). The status quo, on this cosmological perspective, is entropy. This is our opponent.

So, to leap back to where we were with this in mind, war/defence (choose your euphemism) is an example of this will-to-break-down that is the reality of life. This is a fairly awkward definition, but bear with me. It is inevitable, this destruction; there is but a difference of time-scales between the cosmological entropy of the second law of thermodynamics, and our own short-term analogy in social and political conflict. 
This is not an apologia for violence, but it is the recognition of a (deeply unpleasant) fact of our present social reality (and, contrary to arcadian myths of noble savages, it is our past too). What we continually do is find ways to reduce the prominence and frequency of this fact of strife in our lives. It is not geological fatalism to say that erosion happens (allow me this minor contradiction in terms), and so it should not be considered fatalism to say that, similarly, a type of socio-political erosion will happen. As I am being crushed under the weight of this belaboured analogy, I will just say we should erect defences to protect ourselves and our institutions from this decay. It is our duty to recognise the existence of conflict, strife, warfare – and to do something about it. Human activity, technology, ideas are a part of a concerted effort (better a musical than a martial metaphor) to diminish, in an on-going manner, destruction at our own hands, as well as obliteration by the forces of nature.

Now that I have lost just about everybody with this line of argument, I can return to philosophy and technology. War and destruction are not addressed in the philosophy and history of technology as the catalytic forces they are (only Schumpeter comes close with "creative destruction"). I find it fundamentally troubling that human ingenuity and the ideology of technology (Daft Punk's harder, better, faster, stronger) are so readily applied to killing and exploitation. This is the impulse behind this tortured argument. As such, I have attempted to see matters from a different perspective, considering ourselves on the Darwinian time-scale. Survival in the present moment in order to pass on genes to the future is what is of prime importance to the organism (there are limit situations where this does not hold, but they are exceptional). Thus, any new technology could follow a distribution whereby it will be applied to these immediate needs of survival, even if these needs are only apparent (this requires a whole subsidiary caveat whereby we note the corporate-media nexus, and how "need" is created as part of a market strategy). In my dangerously telescoped argument, rocketry begins as warfare, becomes a tool of government and science, and expands into the civilian and commercial realm. What was once closed-source becomes marginally less so. Vast sums of money are spent on defence in the United States, partly because it is effectively a standing tradition of budgets in that country to spend the rest of the world (i.e., enemies) into submission, so inevitably a lot of that will filter into research and development.
The top right-hand corner is the amount spent by the U.S. on the Iraq war.
A part of that then will go into military applications, and then a smaller amount of that will be considered viable for actual manufacture and deployment. Even if, as the argument goes, the space race was the Cold War by proxy, note the difference whereby the "by proxy" in effect negates "war". Yet another example is of course the Internet as we know it today, which was developed by the military as a means of decentralizing communication, to ensure that in case of an attack the system would continue to function.

Returning to the ethical argument, which I bracketed away at the beginning of all this, there is surely room for us to be involved in war/defence, and yet to be doing our best to reduce the possibility of it ever happening. We are aware of the genesis of the Gatling gun, and how its inventor hoped that it would end warfare by its very efficiency and the enemy's fear of its deployment; this utopian view did not account for how cheaply politicians and generals regarded their soldiers' lives. That said, the war-gamers at the RAND corporation, DARPA, the NSA, each in their own way undertook the effort to make nuclear war an ever less attractive option via their development of a doomsday logic, namely the doctrine of Mutually Assured Destruction. (Interestingly, a friend of mine informed me that this was communicated to the Soviet side too, in both abstract and concrete terms, for M.A.D. could not have functioned unless both sides were aware of its rules, according to the nature of Game Theory.) We can but hope that the development of M.A.D. leveled out the curve of inevitability which saw an ever increasing number of fatalities with each global conflagration, with WWI surpassed by WWII, which all feared would be surpassed by a WWIII. Large scale war became an ever less attractive possibility, such that now wars are economic, diplomatic, and both of the above, as well as the actual wars which the superpowers waged either by proxy or by feigned ignorance in the second half of the c20th (see Niall Ferguson's The War of the World on this).
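To make the game-theoretic point concrete, here is a toy sketch in Python (the payoff numbers are entirely my own illustration, not anything from RAND) of why assured retaliation makes a first strike unattractive - but only when both sides know the payoff table:

    # Deterrence as a two-player game. Keys are (A's move, B's move),
    # values are (A's payoff, B's payoff). Retaliation is automatic,
    # so any strike ends in mutual destruction.
    PAYOFFS = {
        ("refrain", "refrain"): (0, 0),
        ("strike", "refrain"): (-100, -100),
        ("refrain", "strike"): (-100, -100),
        ("strike", "strike"): (-100, -100),
    }

    def best_response(opponent_move):
        # A's best reply, given B's move and full knowledge of PAYOFFS.
        return max(("refrain", "strike"),
                   key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

    for b_move in ("refrain", "strike"):
        print(b_move, "->", best_response(b_move))  # "refrain" both times

Hide the table from one side, as a secret doctrine would, and the deterrent calculation collapses; hence the need to communicate the rules.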


We should allow ourselves to be reined in, then, by an ethics of technology. Let there be a stricture set down, following Mario Bunge's observation that technology's morals right now are dubious at best, whereby we will not develop any technology which is dedicated to our own oblivion. Let a philosophy of possible worlds (and not just Candide's "best of all possible worlds", which seems to govern the most rosy-VR-goggled futurists such as Kurzweil) consider the worst possible outcomes of each new technology.

Sunday 14 August 2011

Please give/read generously

The principle of charity is one of those difficult-to-maintain bits of basic conversational decency, namely the assumption that our interlocutor is neither a moron nor evil. All too frequently, however, it is the first piece of excess weight to be jettisoned in the high-speed game of debate. Calm and considered becomes fast and dirty. It is the triumph of a certain type of rhetoric over reason, and the ground for all ad hominem attacks. What I wonder is whether there is room for some middle ground, or some manner whereby we can attack... justifiably.

The first need not be the face-saving, third-way, risk-averse, Swiss-inspired take on argument. Perhaps the reductio ad absurdum is one way for this to take place, since in this way we accept our opponent's terms, and take them to a final conclusion that in its extremity whiplashes back in a shockwave of ludicrousness on the starting premises. This too, however, involves a certain suspension of the principle of charity, because our partner in this discussion (which is what we should probably consider our opponent to be) can protest that they did not intend for their position or argument to have the scope which we may have imputed to it. The second could be a kind of "just war doctrine" for argument.


I believe that the best route is to start with a technological, network approach, allowing this to inform our discussions and arguments. As such, if something (an idea, a critique) creates more information or even knowledge, then it is valid. If it is posturing, cynical, opportunistic, then it is not. Only that which is constructive (or not overtly destructive) is valid. Cynicism and the "cool" attitude of neo-conservative, laissez faire post-modernism is a performance, an act of being "knowing" rather than the effort to create knowledge.
Your argument is invalid.
Well, all I can hope is that my interlocutors would start with the premise that if I managed to somehow feel my way through the fug of idiocy, to mash my troglodytic hands against a keyboard for long enough to eventually produce something that borders on grammatical consistency and logical intelligibility, then maybe, just maybe, I am not some sort of satanic imbecile hell-bent on the destruction of civilization and indeed the universe itself via the twin idols of incompetence and depravity. I should be so lucky.

Friday 12 August 2011

The Fallacy of Invention

In the sixth chapter of his book, The Nature of Technology, "The Origin of Technologies", W. Brian Arthur considers how technology is actually brought into being. It follows on from his observation that a technology is the exploitation of a natural phenomenon or principle to a specific end. As such, he notes that:
'Typically several groups of inventors have envisaged the principle in action at more or less the same time and have made attempts at a working version of it. Such multiple efforts and filling in of key pieces make it difficult to speak of "invention" in the sense of being first.'
He goes on to give an example of this, but consider it in the abstract for a moment. If we cannot speak of invention in the "hard" sense of the lone genius tinkering away and creating something civilization-altering, then what type of reconsideration does this call for us to make?


Well, if we move from a "hard" notion of invention (which, as an aside, is part of the myth of creative genius that was adopted to a large extent by those who wrote about, rather than practiced, scientific and technological investigation. I'm looking at you, Mary Shelley...) to a "soft" one, we can then have either:
   (a) the idea of co-operation, with science as a collective enterprise, which is a standard refrain of the sociology of science, and finds support in the advent and successes (though subject to diminishing returns?) of Big Science.
   (b) the idea of there still being an individual innovator, but one whose particular work contributes to an external, even transcendent "atmosphere" of innovation. This is the view that notes that both Leibniz and Newton worked on the calculus in the c17th, and concludes there was something in the water that caused them to make these advances.

Well, I have problems with both these views (which is funny, since I was the one who proposed them). The first is a crushingly dull, pragmatic view of science. It is descriptive in a way which doesn't inspire further investigation. It seems content to accept this view as a fact of reality, without setting it up as a platform for further inquiry. "This is this is this, and that's that." The second will appear extreme to some, as it seems to set up technology as a disembodied force, an autocatalytic entity that exists unto itself. Radical though this interpretation may seem, I have read enough science fiction for this view not to go far enough for me (italics = I really mean it).



The question I want to pose is the following: if we allow that the standard idea of invention is insufficient, do we go on finessing it into newly watered-down versions (development, for example), purely to conform to some ideal whereby words precisely mirror something found in a natural state in reality? Clearly not, as this suffers the fate of infinite regress - "turtles all the way down". Still, words are imprecise, and we still use them, so we need to borrow a bit from column a, and a bit from column b. From the first we need to recognise the messy aspect of social reality in invention, and from the second we need to recognise that there is something weirdly sci-fi about technology when considered in the abstract. It does seem to have a logic of its own, and so perhaps our social reality ought to reflect this, rather than trying to force invention, scientific creativity, and technology (the sources of this information age) into an out-of-date model of high-industrial capitalism.

Invention needs to be brought into line with what we know about ideas and how they permeate and power our information civilization. We are a network society, in Manuel Castells's phrase, and we are moving away from the zero-sum logic that resulted in the destructive rapacity and exploitation of the c19th entrepreneurial model. Even the high priests of this (such as Rockefeller and Carnegie) knew their way of doing things could not be maintained. They sought to atone after the fact via their good works (giving us such insidious nonsense as that hateful coinage "philanthrocapitalism"), blighted in their thinking by a tit-for-tat, linear logic. From the very beginning, however, we must see the structural conditions of any situation.

Caption reads: "Forty-Millionaire Carnegie in his Great Double Role. As the tight-fisted employer he reduces wages that he may play philanthropist and give away libraries, etc."

Back to invention, and we see that invention is always part of a network of ideas, building upon previous advances and drawing upon the work of others (in terms of both actual physical, back-breaking labour, and the other, mentally creative sort). The short-cuts taken by a business to increase productivity are also part of a network, but one that impacts others, elsewhere, at some other time. Why should our social reality (in legal, political terms) not directly recognise this? The creative commons and open source are a direct and logical corollary of social reality's inherently networked nature. Invention (here a synecdoche, not a straw man...I hope) is evidence of this, and we need greater fidelity in our conceptual apparatus when thinking about technology.

Tuesday 9 August 2011

I retrodict the riots

New Economics Foundation Fellow writes about the riots

"Officer Krupke, you're really a slob.
This boy don't need a doctor, just a good honest job.
Society's played him a terrible trick,
And sociologic'ly he's sick"


I love the NEF, and the way they attempt to tackle problems in a way that is quite different to most think-tanks. That said, I don't see the point of invoking quasi-Jungian collective memory horseshit whereby he smells the whiff of "the faint folk memory of the Gordon Riots in 1780, when racist anti-Catholic mobs went on an orgy of burning and looting across London, culminating in the release of prisoners from Newgate and the destruction of the gaol." Ehh, right. That's lovely and everything, but let's get to the issue at hand right now. If we are drawing parallels with history, then how about attempting to see what is not historically conditioned here, since mobs have occurred in the past and will again. We do not need a hermeneutics of violence at this stage; we need a cessation of violence.

This is obliquely approached when he asks whether we have a political language which can adequately deal with this. If we had a more comprehensive or nuanced approach, would this allow us to wonder why we need to politicize a mob? He says that our "political debate is now so impoverished that we barely have the political language to stitch together an alternative." What I would suggest is that both hand-wringing and send-in-the-army-hanging's-too-good-for-them can wait, because you cannot have a dialogue with violence.


If it's not on Facebook, it didn't happen.

That's the popular wisdom of today at least. But hey! Let's not assume we are surrounded by idiots, and instead try to find the dust off the scrapings off the husk of a seed of truth in there somewhere.

In this there is first the point that our memories aren't the best, and so we outsource to various technologies. Previously these would have included journals, diaries, letters, then Polaroids, Super-8, up to the blogs and various other media of networked communication. That's the basic 'storage' problem.

There is also something else, however, which is a bit more numinous: that nebulous idea, invoked by fetishists of technology (be they committed, or opportunistic such as those in technology marketing/p.r. [I refuse to give public relations a seal of approval via capitalization]), of a supposed innately human need to connect. Remove the hyperbole, and there is simply the structural resilience of connecting things to other things, which brings the storage of my first point to a quite different level.

We are inhabitants of a social rather than a merely physical reality. So if you have a thought or an idea, this is wonderful, but it has no social reality, and it has no reality that can be accessed by anyone other than yourself via your own memories - and that's assuming you remember. So, by embedding this idea or thought in a formal structure (and this is a purposefully inclusive notion, so that it can include writing, painting, photography, programming), it becomes a part of social reality.

In network terms, a node is a node is a node if left to itself. What makes it become more than this is being connected. So, by communicating in this manner, an escape velocity is attained that carries us out of the circular logic of solipsism. The argument for open source and the commons holds for all thought and creativity. I may have had the occasional idea in the past that I scribbled down in a notebook...but if people can't read it, why bother writing it?
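To put that node metaphor in concrete terms, here is a minimal sketch in Python. The graph, its node names, and the breadth-first search are all my own toy illustration, not anything drawn from Castells or from network theory proper. An idea connected by even one edge becomes reachable from the rest of the network; the notebook scribble, left as an isolated node, never does.

    from collections import deque

    def reachable(graph, start):
        # Breadth-first search: collect every node reachable from `start`.
        seen = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in graph.get(node, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        return seen

    # A toy "social reality": things linked by acts of communication.
    graph = {
        "blog_post": ["reader_a", "reader_b"],
        "reader_a": ["reader_b"],
        "reader_b": [],
        "notebook_scribble": [],  # a node left to itself: no edges at all
    }

    print(reachable(graph, "blog_post"))
    # {'blog_post', 'reader_a', 'reader_b'} - the connected idea circulates
    print("notebook_scribble" in reachable(graph, "blog_post"))
    # False - the unread scribble has no social reality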

Wednesday 13 July 2011

Leave it to beaver.

Why is it somehow still valid to make statements along the lines of "X is what makes us human"? Whether that X is tool-use, or a sense of right and wrong, or opposable thumbs doesn't make much difference; it's pretty clear that you can change this slightly to say that X is "what makes me human", and that is not much better than "what I want to talk about".

Usually when I come across this in a book or an article, there is some caveat, so that though there is room to introduce some other point (for example, in the case of tool-use, a discussion of beavers building dams, or some other non-human example of technology), this is more or less dismissed via the old method of medieval philosophy whereby, if you meet a difficulty, you make a distinction. So now there is real tool-use, and whatever beavers do. Silly beavers.

How far are we from a proper estimation of what it means to be human that we have to introduce some sort of test? Testing inevitably implies ranking and exclusion, so why do this? Whence the sense of being threatened by beavers, dolphins, or whatever other example of non-human intelligence we meet?

That's just one type of objection to the above form of quasi-philosophical statement. If you transpose some of the above terms, you get a sense of just what's up. Suppose somebody in a conversation said, "Well, X is fundamentally what makes me a man"; you might well question their attitude towards their own self-worth, women, gender, sexuality, and all that. What masquerades as an attempt to get at the essence of a thing in actuality points away from it. This is why analytic philosophy cultists have issues with what Ayer saw fit to dismiss as 'metaphysical' statements (and many still do). They achieve the exact opposite of their purported intent. Rather than getting to the 'essence' (yeah, whatever that is), they instead blithely assert.