The inflatable trash-bag street art of Joshua Allen Harris (someone create a Wikipedia article for this guy!) has been floating around the youtubes for several months now but I just watched a recent video and was struck again by how organic the sculpture’s movements can feel.
As someone who has spent a not-inconsiderable amount of time dealing with what it takes to make animated (CG) characters feel alive, it’s rather humbling to see what some random air-currents can evoke. Watch the video that I’ve linked to above and pay attention to how much you naturally tend to assign emotional overtones to the action you’re seeing. There are parts where you’d swear there’s an intelligence/awareness behind the movements of this creature – I see shyness, curiosity, etc. I can easily build up a narrative in my head about how this lonely creature has come down out of the wilderness and is trying to understand how to survive on the streets of the big city.
It’s no secret that our brain tries to assign familiar patterns to the unfamiliar so it’s not all that surprising that we would tend to mentally map this four-legged inanimate object into a standard animal-model. But is there more to it? Does the actual construction of the sculpture contribute to this, where appropriate placement of leg or neck joints can cause the body to move in a more lifelike fashion? Does the artist ‘tune’ the construction to enhance this?
More importantly, does the fact that everything is based on an ‘engine’ that has a significant quantity of randomness in it teach us something about how we should approach our own animations? One of the most obvious problems with poorly-done computer animation is that it can feel far too deliberate and predictable – and consequently it has no soul. Maybe introducing a bit of randomness, particularly if it’s partially influenced by a number of environmental factors, could help significantly with the problem of creating a personality.
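To make that idea concrete, here’s a minimal sketch of what “environment-influenced randomness” might look like in code. Everything here is hypothetical – the function names, the sum-of-sines noise, and the numeric constants are all my own illustration, not anything from the video or any particular animation package. The trick is that the noise is smooth and bounded (so it reads as breeze, not jitter) and is simply layered on top of the deliberate, keyframed motion:

```python
import math
import random

def breeze_offset(t, seed=0, wind_strength=1.0):
    """Smooth pseudo-random offset for one animation channel.

    Sums a few sine waves with deliberately non-multiple frequencies
    and seeded random phases, so the motion never visibly repeats but
    stays smooth and bounded (roughly +/- wind_strength).
    """
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(3)]
    freqs = [0.7, 1.3, 2.9]   # illustrative frequencies, not multiples of each other
    amps = [0.5, 0.3, 0.2]    # amplitudes sum to 1.0, so output is bounded
    return wind_strength * sum(
        a * math.sin(2 * math.pi * f * t + p)
        for a, f, p in zip(amps, freqs, phases)
    )

def animate_head_yaw(keyframed_yaw, t, wind_strength):
    # Layer the environmental noise on top of the deliberate animation;
    # the keyframes carry the intent, the noise carries the "aliveness".
    return keyframed_yaw + breeze_offset(t, seed=42, wind_strength=wind_strength)
```

The `wind_strength` parameter is where the environment comes in: drive it from whatever the scene provides (a door opening, a character walking past) and the randomness stops being arbitrary and starts being motivated.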
One of the most amazing things in the video comes from the fact that the air-currents which inflate and surround the creature are modified as people come near. This, consequently, affects the movement of the sculpture itself, making it seem as if it is intentionally interacting with anybody who approaches. It will sometimes flinch away if someone moves too quickly but it can also decide to show spontaneous affection, leaning its body against someone who is reaching out to pet it, in a motion that I’ve seen cats do a thousand times. (Take a look at about the 55-second mark).
Has anybody ever analyzed the micro-movements a human makes whenever someone new enters the room, or walks nearby, or starts to speak? In my book I have a chapter that focuses on how all the lights in an environment interact with each other, and makes the point that ultimately everything is a light source – it’s either generating light or it’s reflecting it. And if you introduce a new object into an environment – set a red ball down next to something – you need to recognize that the light reflecting off of that ball will affect everything near it.
But we need to recognize that the movement of a living creature is much the same. Our head will turn slightly whenever a new noise occurs, be it the sound of a door opening or a car honking in the street. We constantly readjust our position and stance relative to others in the room as they move about and come closer to our personal comfort zone. This is a concern even beyond the boundaries of animation. With so many movies being constructed in postproduction – people shot on bluescreen and added to a scene later – we have exactly the same problem. Yes, an actor can do their best to react to a co-star that isn’t actually on stage with them at the same time, but can they also do a convincing job of reacting to all the other missing stimuli? The characters and objects and sounds and smells that will surround them? These reactive movements may be tiny tiny tiny in some cases but they’re definitely there, and I’m sure we notice them if they’re missing. Maybe not consciously, but in the back of our brains the primitive subconscious that has a deep-seated distrust of the unnatural and the unfamiliar will start to scream ‘FAKE’ as things slip towards unreality.
Which of course takes us straight to the concept of the Uncanny Valley. The term has generally been applied to the ‘creepiness’ factor we feel if a face (robot/animatronic or computer-generated) is nearly believable but not completely. If it’s cartoon-like then we’re fine with it – our mind can interpret it as purely symbolic and the reality filter doesn’t kick in. (See Scott McCloud’s brilliant Understanding Comics). But that borderline area where it’s trying to be real but doesn’t succeed – that’s the deadly valley.
And I would claim that this problem exists any time we’re trying to emulate reality, including for instance the bad physics of a character doing a stunt that relies on wires (which we later remove). When we see a superhero fly, it’s rarely the actual flying that feels strange, it’s almost always the take-off or landing. The flying itself is too broadly unnatural for us to worry about, but we’re all familiar with what it should look like when someone jumps or when they touch ground again after jumping. Then again I suppose you could probably just broadly categorize bad acting in general as being firmly planted at the bottom of the uncanny valley…
Getting back to animation, I’m wondering if it makes sense to develop tools that can be applied as a post-process after all the primary animation is done. Tools that analyze the rest of the environment/scene/characters and then algorithmically add in the final bits of nuance mentioned above to produce subtleties that are appropriate and motivated and organic.
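A toy version of such a post-process pass might look like the sketch below. To be clear, this is my own speculative illustration of the idea, not an existing tool: the `Stimulus` type, the attack/decay envelope, and every numeric default are invented for the example. It takes a finished per-frame animation channel and a list of off-screen events, and layers a small, time-enveloped head-turn toward each event:

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    time: float       # seconds into the shot when the event happens
    direction: float  # yaw (degrees) from the character toward the event
    intensity: float  # 0..1, how attention-grabbing the event is

def add_micro_reactions(head_yaw, frame_rate, stimuli,
                        gain=0.15, attack=0.2, decay=1.5):
    """Post-process: layer small head-turns toward scene events.

    head_yaw is a list of per-frame yaw values from the primary
    animation; it is never overwritten, only nudged. Each stimulus
    pulls the head a fraction of the way toward its direction, rising
    over `attack` seconds and easing back over `decay` seconds.
    All defaults are illustrative, not measured from real motion.
    """
    out = list(head_yaw)
    for s in stimuli:
        for i in range(len(out)):
            dt = i / frame_rate - s.time
            if dt < 0:
                continue  # event hasn't happened yet at this frame
            if dt < attack:
                envelope = dt / attack                        # quick orienting response
            else:
                envelope = max(0.0, 1.0 - (dt - attack) / decay)  # slow relax
            out[i] += gain * s.intensity * envelope * (s.direction - head_yaw[i])
    return out
```

Because the pass only reads the existing animation and adds bounded offsets, it leaves the animator’s deliberate choices intact – which is the whole point: the nuance is appropriate and motivated because it’s computed from what’s actually in the scene.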