I’ve come across a couple of articles recently that got me to thinking about, well, truth. Not so much Truth in the grand secrets-of-the-universe sense of the word but rather in the basic simple sense of not telling falsehoods.
Because it’s starting to look like one of the most significant societal changes we’ll have to deal with in the near future is that it may soon become nearly impossible to tell lies. Or, more specifically, to tell lies without a really really good chance of getting caught.
There are a few things pointing towards this:
First of all, the accuracy of brain-scanning lie-detectors will become radically better within the next few years. Technologies based on MRI will almost certainly become accurate enough that we should start using them in court. Whether we actually will is a different question, although having done jury duty a couple of times I’m absolutely convinced that, overall, justice would be much better served than under our current system, where the skill (or ineptitude) of the lawyers ends up influencing the verdict far more than the evidence in the case does.*
But that sort of thing – requiring someone to undergo an obvious and overt brain-scan – isn’t particularly game-changing. Rather it’s all the work being done on passive lie-detection that’s really going to impact our day-to-day. The ability to truth-check even the simplest social interactions.
The Department of Defense certainly believes this is possible – they’ve called for the development of lie detectors that can be used without the subject knowing they are being assessed. (The Department of Homeland Security is also getting in on the act.) And if they’re successful, I’d personally rather have this technology available to the general public than to have it solely controlled by a group of military elites. Not that I’m skeptical of their trustworthiness or anything. <cough><eyeroll><cough>
There is also work being done on algorithms that can detect potential lies purely based on word-choice in written correspondences, and obviously voice-stress analysis is eventually going to get much more sophisticated as well.
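To make the word-choice idea concrete, here’s a deliberately toy sketch of what such an algorithm starts from. Real systems train statistical models over many linguistic features; this just counts a couple of cues that the deception-detection literature has examined (for instance, rates of first-person pronouns and negations), and everything about it – the cue list, the regexes, the idea that raw rates mean anything on their own – is an illustrative assumption, not a working detector.

```python
import re

# Toy illustration only: two linguistic cues sometimes studied in
# deception research, with no trained model and no thresholds.
CUES = {
    "first_person": re.compile(r"\b(i|me|my|mine)\b", re.IGNORECASE),
    "negations": re.compile(r"\b(no|not|never)\b", re.IGNORECASE),
}

def cue_rates(text):
    """Return each cue's frequency as a fraction of total word count."""
    words = re.findall(r"\b\w+\b", text)
    n = max(len(words), 1)  # avoid dividing by zero on empty input
    return {name: len(rx.findall(text)) / n for name, rx in CUES.items()}

print(cue_rates("I never said that, and I would not lie to you."))
```

An actual system would feed dozens of such features into a classifier trained on known-truthful and known-deceptive text; the point here is only that the raw inputs are cheap to compute from any written correspondence.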
Now, granted, some of these things may take a bit more time to develop – machines (see here) are still notoriously unreliable when it comes to accurate lie-detection. But I’d be willing to bet the timeframe is going to be measured in years, not decades.
Even if we ignore advances in technology specifically designed for lie-detection, there’s still good old-fashioned fact-checking. We don’t necessarily need to tell whether someone is lying directly if we have massive resources available to verify what they’re saying. Between ubiquitous security cameras and the army of cellphone-camera-toting individuals who can be anywhere (what Jamais Cascio, riffing on a Charlie Stross essay, has dubbed the participatory panopticon), nobody can be certain they weren’t recorded doing or saying something that contradicts the statements they’re currently making.
Finally, even if many of these technologies don’t work perfectly, they really don’t have to. As long as people believe there’s a significant chance of getting caught in a lie, they will adjust their truthfulness accordingly.
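That deterrence argument is really just an expected-value calculation. A minimal sketch, with entirely made-up numbers (the payoffs and probabilities below are hypothetical, chosen only to show the shape of the tradeoff): even an unreliable detector changes the math once the penalty for getting caught outweighs the payoff of the lie.

```python
def expected_cost_of_lying(p_caught, cost_if_caught, gain_if_unnoticed):
    """Naive expected-value model of deterrence: a lie only 'pays'
    when its expected gain beats its expected cost of exposure."""
    return p_caught * cost_if_caught - (1 - p_caught) * gain_if_unnoticed

# Hypothetical payoffs: a fib worth 1 unit if it slips by,
# costing 10 units of reputation if caught.
for p in (0.05, 0.30, 0.60):
    net = expected_cost_of_lying(p, cost_if_caught=10, gain_if_unnoticed=1)
    print(f"p(caught)={p:.2f} -> net expected cost {net:+.2f}")
```

With those numbers, lying stops paying somewhere below a 10% chance of detection – which is the point: the detector doesn’t have to be reliable, just plausible.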
Of course it’s going to be an arms-race, just like everything else. There will be technologies developed that can counteract various lie-detection systems, there will be legislative hurdles put in place to prevent the use of these systems, etc., etc. But ultimately I have to believe that the baseline for how easy it is to tell an untruth will be changed significantly. Which, in turn, could dramatically affect the way that we deal with each other. Can we survive without the little white lies we use as social lubrication? The small untruths (like the foma in Vonnegut’s Cat’s Cradle) that we need to keep us happy?
But enough about the future. Let’s talk about now. Because those last few video/audio analysis techniques I mentioned raise a particularly interesting scenario: Even though we may not have the technology yet to accurately and consistently detect when someone is lying, we will eventually be able to look back at the video/audio that is being captured today and determine, after the fact, whether or not the speaker was being truthful. In other words, even though we may not be able to accurately analyze the data immediately, we can definitely start collecting it. Infrared cameras are readily available, and microexpressions (which may occur over a span of less than 1/25th of a second) should be something that even standard video (at 30fps) would be able to catch. And today’s cameras should have plenty of resolution to grab the details needed, particularly if you zoom in on the subject – don’t frame like this:
Frame like this:
(Photo of GWB chosen completely at random – I would never suggest that our fearless leader might, perhaps, just occasionally, say something that is leaning towards the realm of non-truthiness)
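The frame-rate claim above is easy to sanity-check with arithmetic: a microexpression lasting about 1/25th of a second is 40 ms, while 30fps video captures a frame every ~33 ms, so even the briefest expression overlaps at least one full frame. A quick sketch:

```python
def frames_spanned(duration_s, fps):
    """How many frame intervals a fleeting event overlaps
    at a given frame rate (duration divided by frame period)."""
    return duration_s * fps

micro = 1 / 25  # ~40 ms, roughly the microexpression duration cited above
for fps in (24, 30, 60):
    print(f"{fps} fps: a 1/25 s microexpression spans ~{frames_spanned(micro, fps):.1f} frames")
```

At 30fps that works out to about 1.2 frames – enough to land on at least one frame, though a higher-speed camera would obviously give the after-the-fact analysts more to work with.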
Which brings us to the real point of this post. Is it possible that we’ve gotten to the point where certain people – I’m thinking specifically of politicians, both foreign and domestic – should be made aware that anything they say in public will eventually be subject to retroactive truth-checking? Because it seems to me that someone needs to start recording all the Presidential debates NOW with a nice array of infrared and high-definition cameras. And they need to do it in a public fashion, so that every one of these candidates is very aware of it, and of why it is being done.
Or am I just being naïve in thinking that the fear of eventually being identified as a liar would actually cause people (or politicians) to modify their current behavior? Maybe, but it seems like it’s at least worth talking about.
And in the meantime I need to hurry up and post this to my blog so that I can run off to my dinner with Angelina Jolie… get busy on the secret mission that the United Nations has asked me to undertake… answer some questions that the Nobel Prize committee has about my work… uhh, do some laundry.
*Incidentally, I’m not really advocating that we replace juries with lie-detection machines yet. But I’m wondering if the next sensible step might be to consider the lie-detector as, effectively, a member of the jury. Give the machine the equivalent of a vote in the verdict, instead of using it as a sole arbiter. Unfortunately this probably still stretches the interpretation of “a jury of one’s peers” a bit too far…