Friday, August 30, 2013

Video Games as Storytelling


Video games are the new movies

Their storytelling has become increasingly sophisticated -- and Hollywood is taking note


This article originally appeared on The Weeklings.
I DON’T KNOW about you, but I had to inure myself to modern video games.
Joust, Ice Climber, Bubble Bobble, Ikari Warriors—the titles released during my formative years left far more to the imagination than today’s cinematic pixel-fests. Because a new technology was only capable of so much, the majority of early games were relegated to simple puzzle-challenges: 8-bit topiary mazes skittering with blob monsters, block-heroes, square bullets and ear-wincing sound effects. Complex storylines weren’t necessarily out of the question, but they were hard to convey with anything but splash pages of text and pixelated still shots. Until rather recently, design always came first in the world of video games. Even as consoles advanced and narrative arcs grew in quality, the technology of the medium compelled its tinkerers to dive further into its visual potential. This is simply the side effect of working in an industry that leans toward the illustrative end of the entertainment spectrum. For the craft’s sake, image always comes first.
When we think of storytelling these days, anyway, one of the first things we turn to is cinema. Film and television have certainly surpassed literature as the primary way Americans choose to imbibe their tales. Television is easy to access and endlessly enthralling, especially now that it is luring talent away from Hollywood to highbrow networks and Netflix. Although a good novel ensures a more meditative, holistic psychological experience than any good show or film, it’s hard to ignore time as one of the main deciding factors. Marathoning twelve consecutive one-hour episodes can be tempting when measured against squinting your way through 120,000 words. As a writer and reader myself, even I must admit the appeal of this bargain.
Modern video games, however, seem to be inserting themselves into our array of storytelling devices with a fervor unlike anything before. And they’re doing a damn good job of it: Heavy Rain, Bioshock Infinite, The Last of Us, Mass Effect—all titles that feature not only brilliant visual motifs, but powerful storytelling, compelling characters, psychological gambits and pithy themes. I get lost for hours, even days, in these narratives. And I see those around me doing the same. My wife—a Walking Dead fan who doesn’t even play video games—recently spent hours watching YouTube videos of the Walking Dead game being played by a third party. The reason, as she said:


“It’s just as good as the show.”
As video games become more cinematic, more capable of delivering emotional experiences as opposed to limiting their terrain to puzzle solving or besting a competitor, they move closer to the realm of film itself, threatening an eclipse. The real question, then, is whether the two mediums can retain their own separate identities and continue to play nicely as they near the same horizon, or whether they will borrow from each other until their boundaries disappear.
As a child, most of the differences I noticed between games and film were based on the degree to which they could successfully reflect reality. Playing Pitfall, while fun, just couldn’t make you feel like Indiana Jones, no matter the scale of your imagination. Eye of the Beholder and other first-person action role-playing games, while benchmarks of the genre, resembled less Skyrim and more Skyfox. Therefore, whenever I sat down to play a game in 1987, I felt as if I were doing just that: playing. Early gaming technology just didn’t have the means to measure up to what was happening elsewhere in the entertainment world. It certainly wasn’t capable of rendering the accuracy of a well-drawn cartoon. When I first played Who Framed Roger Rabbit? on the Nintendo, for instance, after seeing and liking the movie, I remember feeling deeply disappointed. Even the cartridge itself, illustrated with a scene from the film, served as cruel commentary on the broken 8-bit promise within.
The funny thing about gaming, however, is that if you dissect it to the bone, you’ll discover that the idea behind it is not all that different from the idea behind film: namely, to carry the viewer or player through a visual journey that culminates in meaningful entertainment.
For this reason I find that a lot of new movies are now exhibiting game-like qualities. Interactivity is becoming the norm, and evidence for this can be found in the resurgence of 3D glasses and the proliferation of computer-rendered, high-dynamic action sequences that often overshadow substance. One need only look to video games like Killzone, Resistance, Crysis, or Gears of War to see how gaming’s influence has filtered into other forms of visual media. And not always in a healthy manner. If a film is too much like a game, an audience will be dissatisfied for the simple fact that they can’t actually play it. The Silent Hill and Resident Evil adaptations are good examples of games that can’t be played. They rely far too much on the mechanics of the environment, and not nearly enough on the characters that inhabit it, their pathologies and worldviews. The same goes for films like Battleship, Transformers and Battle: Los Angeles, whose concentration on computer-generated ephemera makes them resemble carnival rides, digital roller coasters, rather than meaningful narratives. If there’s no button to push, then all you are is an observer who wants badly to participate. Even a bad video game gives you a turn.
Other films provide better examples of this newfound relationship. While watching Pacific Rim recently, for instance, I found myself listening to the voice of GLaDOS (short for Genetic Lifeform and Disk Operating System) in the role of the mainframe computer that controls the giant robots, or Jaegers. For those in dire need of explanation, GLaDOS is the artificial intelligence mechanism and main antagonist in the groundbreaking puzzle-platform game Portal. The cold, calculated wit of this construct generated a cultural phenomenon that took hold in mainstream gaming culture. Portal’s characters and design subsequently inspired Guillermo del Toro. So much, in fact, that he hired the voice actress for GLaDOS (Ellen McLain) to do the voiceover in his film. The character was prepackaged, capable of making a seamless translation from one medium to another. Similarly, games like Bioshock Infinite, along with both the Uncharted and Assassin’s Creed franchises, employ full-scale cinematic drama: scenes in which characters profess their inner wants and struggle to accomplish their dreams. Infinite even delivers a finale with a meta-science-fiction, time-travel bent that, in my eyes, rendered it not just a “game” but a fine piece of speculative fiction.
Regardless of these shortfalls, much of what has happened as a result of the gaming world’s borrowing from film has been positive. With science fiction films especially, animation tools developed for games have become a producer’s treasure trove.
For a long time I had a hard time considering video games as anything but outlets for stress-expulsion with colorful designs (imagine my surprise when I heard they’d be installed at MoMA!). I believe there’s a reason, anyway, that game development studios often adapt movies for consoles, as opposed to the other way around. This mimics the way in which film production studios adapt books for the screen: an artistic hierarchy is at work. Blade Runner was fashioned for the Commodore 64 in 1985, and The Karate Kid for the Nintendo in 1987. Friday the 13th, Back to the Future, Dick Tracy…. There are hundreds more. Most film-to-video-game adaptations have leaned towards failure, even by the industry’s own measures of success. Not only would a classic film’s structure have to be severely warped in order to accommodate the competition found in games; there’s also the problem of genre. Mainstream games still tend to deal more with science fiction, fantasy and hard-boiled crime than they do with family sagas. I certainly doubt Atonement, Angela’s Ashes or Pale Fire will be adapted for the PS3 anytime soon, anyway.
I think that’s why I was so enthralled when I discovered the same voice in Pacific Rim that I’d become used to in Portal. I was amazed at how far game creators had pushed the boundaries of their craft, attracting the attention of filmmakers, actors and an industry rife with history and talent. In the case of Pacific Rim, GLaDOS’ character could pretty much be lifted from the game world into the film world, much like an actor. Superstars are doing game voice-overs all the time. Patrick Stewart played Emperor Septim in Elder Scrolls IV: Oblivion. Brian Cox was in both Killzone and Killzone 2. Billy Bob Thornton and the late Dennis Hopper combined talents in Deadly Creatures on the Wii. Liam Neeson played the main character’s father in Fallout 3, and Samuel L. Jackson voiced the game adaptation of Afro Samurai. Gary Oldman, Seth Green, Ron Perlman, George Takei. Each delivers his or her lines with conviction, strengthening the gameplay with respectable aptitude. In this way, this joystick-derived industry is doing something that film never could. By attempting to marry narrative with play, it is adding a new dimension to storytelling.
When the PS4 was revealed recently to an auditorium of astonished gamers, the world was introduced to the future of the industry. Currently touted as the “Creative Machine,” the console was shown off by pioneers demonstrating some of Sony’s most impressive capabilities. This machine would appeal to the booming world of indie gaming, offbeat and often experimental titles released by boutique developers. It would offer improved social platforms, liberal digital licensing, motion detection, and hardware that would enable resolution, speed, and movement unlike anything the world has seen. I found Quantic Dream’s Old Man’s Head the best presentation of the evening. The display simply consisted of an old man’s head projected on a giant screen, a video performance demonstrating the polygon count, or the number of facial movement response points, that could be articulated. The spectacle was immense. The industry has evolved from pixelated blocks and blaring synthesizers to animation so real it can reach out and shake your hand. In a demonstration of how far things had come, the old man’s brow furrowed, brightened, crossed, lifted, in movements so minute it was like a cadaver brought to life. His face was blemished in an astonishingly human manner. I imagined playing a game with such a character, having him speak and interact with me. Though I must be honest, the feeling I got was tinged with horror. How will game design, and more importantly, game play, be affected by the ability to achieve such likeness to the natural world? If the medium gives in to its increasing tendency to share the same tropes, effects and story arcs as film, will the pastime I’ve loved since I was a child become impossible to recognize?
I’m sure that as games evolve, such changes will take getting used to. Eventually I imagine games becoming an amazing tool for real-world applications through which humans can alter aspects of their reality. I don’t think we’ve unlocked the medium’s potential yet, not by a long shot. Look at where film is, roughly a hundred years after it shocked carnival-goers by convincing them they were about to be hit by a train. A gamer today – especially an average one like myself – will likely be unconvinced of the future until it is here. But I do think it’s important to remember one’s roots. Multiple mediums can and will collaborate, but they should never forget their pasts. Remaining separate but equal is where true potential is unlocked, shaping the future of creativity. The novel has maintained its place and importance despite the popularity of film. There’s a reason people often find film adaptations of brilliant books lackluster and insufficient: one can simply go where the other cannot, and that’s a good thing for the future of entertainment.

Monday, August 26, 2013

"Does Media Violence Lead to the Real Thing?"



August 23, 2013

Does Media Violence Lead to the Real Thing?


EARLIER this summer the actor Jim Carrey, a star of the new superhero movie “Kick-Ass 2,” tweeted that he was distancing himself from the film because, in the wake of the Sandy Hook massacre, “in all good conscience I cannot support” the movie’s extensive and graphically violent scenes.
Mark Millar, a creator of the “Kick-Ass” comic book series and one of the movie’s executive producers, responded that he has “never quite bought the notion that violence in fiction leads to violence in real life any more than Harry Potter casting a spell creates more boy wizards in real life.”
While Mr. Carrey’s point of view has its adherents, most people reflexively agree with Mr. Millar. After all, the logic goes, millions of Americans see violent imagery in films and on TV every day, but vanishingly few become killers.
But a growing body of research indicates that this reasoning may be off base. Exposure to violent imagery does not preordain violence, but it is a risk factor. We would never say: “I’ve smoked cigarettes for a long time, and I don’t have lung cancer. Therefore there’s no link between smoking cigarettes and lung cancer.” So why use such flawed reasoning when it comes to media violence?
There is now consensus that exposure to media violence is linked to actual violent behavior — a link found by many scholars to be on par with the correlation of exposure to secondhand smoke and the risk of lung cancer. In a meta-analysis of 217 studies published between 1957 and 1990, the psychologists George Comstock and Haejung Paik found that the short-term effect of exposure to media violence on actual physical violence against a person was moderate to large in strength.
Mr. Comstock and Ms. Paik also conducted a meta-analysis of studies that looked at the correlation between habitual viewing of violent media and aggressive behavior at a point in time. They found 200 studies showing a moderate, positive relationship between watching television violence and physical aggression against another person.
Other studies have followed consumption of violent media and its behavioral effects throughout a person’s lifetime. In a meta-analysis of 42 studies involving nearly 5,000 participants, the psychologists Craig A. Anderson and Brad J. Bushman found a statistically significant small-to-moderate-strength relationship between watching violent media and acts of aggression or violence later in life.
In a study published in the journal Pediatrics this year, the researchers Lindsay A. Robertson, Helena M. McAnally and Robert J. Hancox showed that watching excessive amounts of TV as a child or adolescent — much of which contains violence — was associated with antisocial behavior in early adulthood. (An excessive amount here means more than two hours per weekday.)
The question of causation, however, remains contested. What’s missing are studies on whether watching violent media directly leads to committing extreme violence. Because of the relative rarity of acts like school shootings and because of the ethical prohibitions on developing studies that definitively prove causation of such events, this is no surprise.
Of course, the absence of evidence of a causative link is not evidence of its absence. Indeed, in 2005, The Lancet published a comprehensive review of the literature on media violence to date. The bottom line: The weight of the studies supports the position that exposure to media violence leads to aggression, desensitization toward violence and lack of sympathy for victims of violence, particularly in children.
In fact the surgeon general, the National Institute of Mental Health and multiple professional organizations — including the American Medical Association, the American Psychiatric Association and the American Psychological Association — all consider media violence exposure a risk factor for actual violence.
To be fair, some question whether the correlations are significant enough to justify considering media violence a substantial public health issue. And violent behavior is a complex issue with a host of other risk factors.
But although exposure to violent media isn’t the only or even the strongest risk factor for violence, it’s more easily modified than other risk factors (like being male or having a low socioeconomic status or low I.Q.).
Certainly, many questions remain and more research needs to be done to determine what specific factors drive a person to commit acts of violence and what role media violence might play.
But first we have to consider how best to address those questions. To prevent and treat public health issues like AIDS, cancer and heart disease, we focus on modifying factors correlated with an increased risk of a bad outcome. Similarly, we should strive to identify risk factors for violence and determine how they interact, who may be particularly affected by such factors and what can be done to reduce modifiable risk factors.
Naturally, debate over media violence stirs up strong emotions because it raises concerns about the balance between public safety and freedom of speech.
Even if violent media are conclusively found to cause real-life violence, we as a society may still decide that we are not willing to regulate violent content. That’s our right. But before we make that decision, we should rely on evidence, not instinct.
Vasilis K. Pozios, Praveen R. Kambam and H. Eric Bender are forensic psychiatrists and the founders of the consulting group Broadcast Thought.

Sunday, August 18, 2013

Games - power to do good or evil


For video games, a moral reckoning is coming

As games get closer to complete realism, developers have to decide whether to use that power for good or evil


Max Payne 3 (Credit: AP/Rockstar Games)
She was created with a computer program, but she looked real. The proportions were correct, the hair looked lifelike – the skin even had pockmarks and imperfections. For some reason, however, she seemed a bit off. Maybe it was the eyes, maybe it was the way she moved, but the overall effect was, in a word, creepy. This phenomenon is called the “uncanny valley,” and for some game developers, it’s the final barrier between fantasy and reality.
The “uncanny valley” refers to one’s psychological response to a visual representation – say, for example, a character in a video game. As the representation becomes more realistic and complex, a player’s psychological response becomes more positive; he or she begins to identify with the character’s human qualities. At a certain point, however, the dynamic shifts: the more lifelike the character becomes, the more unsettling it is, and the player feels disgust. Only a completely flawless rendition escapes the far side of the valley, at which point everything is fine again.
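That dip is easiest to picture as a curve. Here is a minimal, purely illustrative Python sketch (the affinity function and every constant in it are invented for this example, not drawn from any study) that mimics the relationship described above: affinity rises with realism, collapses in the almost-human range, and recovers only near perfect realism.

```python
import math

# Toy model of the "uncanny valley": viewer affinity rises with realism,
# dips sharply in the almost-but-not-quite-human range, and recovers only
# as realism approaches perfection. All numbers here are hypothetical.

def affinity(realism: float) -> float:
    """Hypothetical affinity for a character rendered at `realism` (0..1)."""
    baseline = realism                         # more realism, more identification
    valley_center, valley_width = 0.85, 0.07   # the "almost human" zone
    valley_depth = 1.2                         # how strong the revulsion is
    dip = valley_depth * math.exp(-((realism - valley_center) ** 2)
                                  / (2 * valley_width ** 2))
    return baseline - dip

for r in (0.0, 0.25, 0.5, 0.75, 0.85, 0.95, 1.0):
    print(f"realism={r:.2f}  affinity={affinity(r):+.3f}")
```

Run it and the printed affinity climbs steadily, turns negative around 0.85 (the valley floor), and goes positive again at 1.0 – the “flawless rendition” escape hatch described above.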
The human eye is very good at detecting fakeness, and for a long time developers confronted this dilemma; they were able to create characters that were lifelike, but not perfectly lifelike. Technology, however, is moving at an exponential rate, and developers may soon escape the uncanny valley, creating something virtual that looks like flesh and blood.
What issues does this raise, at the point where artificial and real become muddled? And what responsibilities do developers have towards players if they decide to simultaneously push the boundaries of technology and gratuitousness?
***
Violence in video games has always been a given – in Contra, released in arcades in 1987, a player killed hundreds of virtual people with a variety of machine guns. In early games, however, the violence, no matter how ubiquitous, was strangely quaint. It was difficult to be viscerally affected by such rudimentary graphics: a bunch of pixels shooting a pixel at another bunch of pixels.
Only five years later, Mortal Kombat was released in arcades. It was bloody, with dramatic ways of killing your opponent, but what set Mortal Kombat apart from other games was its photo-realism – real actors’ heads and bodies represented playable characters. Digitized voices screamed out when characters got hurt or killed. This wasn’t a matter of square pixels – blood was beginning to look like blood.


Fast-forward to the present day, and there’s no question that violence in gaming can be painful and tasteless. Take 2007’s Manhunt 2, where a main objective is to execute one’s enemies with a variety of instruments – a pair of scissors, a bat, a scythe and a hacksaw, to name just a few. Or take 2012’s Max Payne 3; sure, the player is shooting bad guys the entire time, but the game lingers on its violence in near-pornographic fashion. There are zoom-in shots, freeze frames, slow-mo splatters and low-angle shots of every major kill. Realism is one thing, but context also matters; no one could argue that this romanticizing of violence is necessary to create a believable experience.
And then there are sandbox games, which give players the free will to do wrong. In 2010’s Red Dead Redemption, the player ties a prostitute to the railroad tracks. When she’s ground up by the oncoming train, the player earns a trophy for this specific “achievement.”
No, a player doesn’t have to do this in order to win the game. But that doesn’t really matter. The act of putting it in the game to begin with is where the moral justification fails. It’s not a space marine killing aliens, or a plumber stomping on cartoonish turtles – it’s a person killing people, and it’s supposed to look real.
***
Studies on violent video games have not supported the notion of psychological damage. Human empathy develops during the first years of life, long before a person can even pick up a controller. It continues to develop during childhood and adolescence as a result of many factors, such as parental relationships, environmental variables and an assortment of life experiences. To target video games as the main culprit of violent behavior seems simplistic at best.
“On one hand, it seems common sense to think that exposure to violence can desensitize the observer,” says Dr. Jean Decety, professor of psychology and psychiatry at the University of Chicago. “On the other hand, humans are very resilient and adapt quickly to different social contexts. Most of the soldiers torturing detainees in Iraq were probably caring people when they interacted with their loved ones. Probably a minority of them were psychopaths and enjoyed doing so, but this is a minority. I think that most healthy individuals do not confuse games and simulations with reality.”
“There is no solid evidence from brain research that video games lead to antisocial behavior,” Decety continued. “It is one thing to say that video games impact brain circuits – they do – but such a response is transient.”
“I am not sure that [realism in games] will lead to any sort of long-term damage of any sort, once we control for individual differences and potential vulnerability,” Decety says. “Individual differences in empathy will likely outweigh any type of effect from a particular video game.”
So we’re left with the ethics of the issue, something critics of video games rarely talk about. Realistic depictions of violence, without moral justification, are distasteful – although video games may not directly create violence, they can contribute to a culture of violence, in which depictions of pain and suffering become commonplace and acceptable. Furthermore, why not aspire to something higher than a visceral thrill? Game developers should use violence, if they use it at all, to serve an ethical narrative, rather than making it an end in itself.
***
For David Cage, the founder and creative mind behind game development company Quantic Dream, narrative has always been the top priority. Cage’s last game, 2010’s Heavy Rain, was visually groundbreaking for its time, earning critical praise for its expressive characters. But more importantly to Cage, it meditated upon complex themes more often seen in film: the love between a father and son, moral quandaries about the worth of a single life, and the emotional cost of trusting others.
“[Early] films started with violent scenes, because the technology didn’t allow for anything subtle,” Cage says. “This is exactly where video games are today. We have not yet fully understood that technology is ready for more meaningful experiences. We need to have talent and something to say, which is unfortunately rarely the case.”
Cage was the buzz at this June’s Electronic Entertainment Expo when he presented a new tech demo for the PlayStation 4. Tech demos show off a developer’s graphics capability; last year, Cage’s award-winning demo was Kara, an emotional meta-commentary about a robot that gains the ability to feel. This year’s demo was The Dark Sorcerer, starring a computer-generated sorcerer and his goblin apprentice and playing out in real time. A hackneyed fantasy narrative eventually turns out to be a film set: the CGI characters are revealed to be actors, portraying the roles of a sorcerer and a goblin. It was weird and creative, but most importantly, it was funny, and the humor came not only from the words the characters were reciting, but also from their gestures and facial expressions – the actions and reactions that sprang from the back-and-forth rhythm of a conversation. Non-verbal communication – this was new for a medium that has relied on the bluntest of narratives – fists to faces, bullets to bodies – to tell the majority of its tales.
Cage’s upcoming game, Beyond: Two Souls, portrays 15 years in the life of its female protagonist, tracing her struggles from childhood to young adulthood. The game places emphasis on choice and emotional conflict, and its realism gives texture and subtlety to these concepts. Realism, when applied judiciously, can affirm morality, and the modern game developer faces the ethical choice of whether to elevate the discourse. For Cage, the decision has always been clear.
“Sometimes I am surprised at the little sense of responsibility that can be seen in some games,” Cage states. “Some of them give the feeling that they were made by a bunch of teenagers laughing out loud as they were making it. I am dreaming of a day when video games won’t need violence anymore to create interesting experiences. We will then have some credibility beyond our little world and be respected as adults in an interesting, creative medium.”
“Games have no choice if they want to continue to exist in the coming years,” he adds. “They will have to grow up.”