Teaching Video Game Theory, Part One

What Academic Study Can Do for Video Games

This past spring I presented an academic paper on issues of spatial representation in the video game Portal at the annual Society for Textual Studies Conference. My paper fit well with papers by my fellow panelists, including Marylhurst’s English Department chair Meg Roland, who offered important new insights on early modern maps, and recent Marylhurst alumna Jessica Zisa, who presented a smart paper on social and natural spaces in Sebold and Thoreau. As it turned out, the juxtaposition of our various analyses provoked a lively discussion with the audience. But as we jostled out of the room after our session, I couldn’t help overhearing one of the curmudgeonly older professors grumbling, “I can’t believe there was an academic paper about a video game!”

But why not? Did I do something wrong? Was I squandering my mental energies and straining my peers’ patience with a topic beneath scholarly attention?

As you can imagine, I’ve thought about this quite a bit since then, and the more I’ve considered the issue, the more important it has seemed to me that I continue studying video games.


In fact, I “doubled down,” as they say. I’ve already presented another conference paper on the video game L.A. Noire’s adaptation of the detective genre, and this fall I’m attending a semiotics conference to discuss the paradoxical fantasies of military first-person shooter games. Not only that, but I developed a reading list that turned into a syllabus, and this summer I’m proud to say that I’m teaching Marylhurst’s first-ever Video Game Theory class.

So, I suppose I have some explaining to do. Why is a 19th-century Americanist with expertise in textual studies and psychoanalytic criticism spending his time playing video games? Even worse, why is he talking about it in public?

Video games are no longer the exclusive province of nerdy teenaged boys who live in their parents’ basements. Recent demographic research from the Entertainment Software Association shows that over half of American households own a dedicated gaming console, the average gamer is 31 years old, and nearly 40% of gamers are over 36. While men still edge out women among the gaming population, 48% of gamers are currently women.

And, beyond these basic stats, we really need to recognize that it’s not just about online fantasy games or military shooter games. Just about everybody has a game or two on their phone these days. Angry Birds anyone? Farmville? Flow? These games are changing how and when we communicate with each other. Some people use Words with Friends as an excuse to chat more frequently with friends and relatives over distance. Others use a regular online gaming night to maintain group friendships across the miles that separate their homes.

Game mechanics have been adapted to create fitness programs like Fitbit and Nike+. There are community-oriented good Samaritan apps like The Extraordinaries, or the one that notifies CPR-trained responders when someone in their vicinity needs help. Apart from the studied benefits of video games in helping autistic kids adapt to social rules and learn how to communicate, there are also games specifically designed to help a variety of medical patients recover better and faster.

Beyond the stereotypes about video games that persist, what are some of the other reasons we need to think critically about this topic? For one thing, video games are big business, with the gaming industry generating over $21.5 billion last year. 2013’s top-selling game, Grand Theft Auto V, made over a billion dollars in its first three days. Compare that to other media. Top-grossing film Iron Man 3 also made over $1 billion in worldwide ticket sales, but it took nearly a month to hit that mark. Runaway bestseller Fifty Shades of Grey shattered every publishing record by selling 70 million copies in the U.S. across print and e-books. Counting those all at $15 (the print price), that’s also over $1 billion, but it took a couple of years to reach that mark, and that kind of success is exceedingly rare.
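A rough back-of-the-envelope version of that arithmetic, assuming every copy went for the $15 print price (which certainly overstates what the cheaper e-books actually brought in):

$$70{,}000{,}000 \ \text{copies} \times \$15 \approx \$1.05 \ \text{billion}$$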

Yes, I know it’s funny to hear an English professor measuring cultural significance by looking at sales figures. I know money isn’t the be all and end all of social values, but it’s a strong indicator. We all fundamentally “know” that books are obviously better and more serious works of art than movies, and even TV shows and comic books are infinitely more important than video games.

And yet… can we really just assume (or even argue) that either Fifty Shades of Grey or Iron Man 3 is an inherently superior cultural artifact to Grand Theft Auto V? In fact, do we even want to try to assert that position?

Granted, part of our job in academia is to serve as a standard bearer for important works from the past, to ensure they are not forgotten. As a 19th-century literary scholar, I’m acutely aware of this duty, and I’m proud to say that I routinely inflict canonical “high literature” on my students, many of whom I actually convince to enjoy the experience and continue it of their own free will. But part of our job should also be showing our students how to use the powerful analytical tools at our disposal to analyze cultural artifacts that the general public chooses to experience on their own. What good are the various apparatuses we develop if they only apply to the works of “high culture” that academia elevates to special, masterpiece status? Shouldn’t we also be able to apply our tools to “low-brow” works created primarily to entertain?

I think so. And I’m not alone. In fact, English professors have been expanding the canon from the very beginning. It’s a slow and painful battle, but notice how (despite the vestigial name) English Departments now routinely teach American literature. We take it for granted now of course, but that wasn’t always the case. We even teach post-colonial “world” literature and regularly include works of “popular” fiction in our academic purview. It’s much the same throughout the humanities. For years now Culture and Media scholars have been analyzing films and television and comic books, so isn’t it time we stretch ourselves to include video games in our conversation?

Whatever one thinks of them, video games are cultural artifacts. They are “texts” of a sort, and as such they communicate meaning. Furthermore, as we know, people are choosing more and more to experience these video game “texts” on their own in preference to reading or even to watching films. So, isn’t it better for us to teach our students how to apply critical thinking and analytical tools to these new texts?

It doesn’t mean that we will quit teaching Chaucer and Shakespeare. Not at all. But it means that we must also find a way to discuss Grand Theft Auto and Call of Duty.

*A slightly edited version of this multi-part essay is being cross-posted on the Marylhurst Blog.*

The Liberal Arts Make Us Free

Last winter, the New York Times ran an article (“Humanities Studies Under Strain Around the Globe”) about the current crisis facing humanities departments at universities around the world. While the humanities have long weathered criticism that they are impractical or irrelevant in the “real world,” and cuts to humanities funding are nothing new, the present situation seems worse than ever. Now more than ever, those of us in the liberal arts need to fight for our existence and demonstrate the value of our disciplines – not just to politicians, but also to the increasingly non-academic administrators who manage our schools under the ubiquitous common (non)sense idea that every public entity needs to be “run like a business.” (More about this in a future post.)

Even more importantly, we need to convince undergraduates that studying the humanities is meaningful, that a liberal arts degree provides a valuable education with essential elements not available in other, more “practical” disciplines. That “soft skills” like critical thinking and sophisticated communication can prove just as significant to one’s life and career as understanding accounting principles or knowing the fundamentals of biochemistry. Unfortunately, we in English departments and the other humanities haven’t been doing a very good job of demonstrating the centrality of our role in higher education, so we’re being perceived as peripheral. As dispensable.

Harvard reports that the number of students studying humanities has halved since 1966. According to a related piece in the NYT, although nearly half of faculty salaries at Stanford University are for professors in the humanities, only 15% of recent Stanford grads have majored in the humanities. Florida governor Rick Scott recently suggested that humanities students should pay higher tuition as a penalty for pursuing “nonstrategic disciplines.” Public response to the proposal has been relatively anemic. An online petition against the proposal gathered only 2,000 signatures and could only muster the weak argument that differential tuition would result in the “decimation of the liberal arts in Florida.”

Sure, that sounds terrible – if you happen to care about the “liberal arts.” But it seems that most people don’t have much idea what their loss would mean to our culture. The “liberal arts” (English, all the other languages, literature, culture and media, philosophy, classics, etc.) represent the highest ideals of a university education. Abstract and theoretical rather than practical, the liberal arts are the disciplines designed to empower students as individuals, to inculcate the wisdom and responsibility that allows them to be good citizens, to inspire them to work for ideals and to pursue social justice, to help them serve as productive, compassionate and innovative leaders.

By contrast, the hard sciences seem almost limited by that very practicality they tout. Never mind their absolute faith in the ideological oxymoron of “scientific progress.” (I plan to discuss this at some length in a later post.) Set against the philosophical depth of the liberal arts, professional degrees and certificate programs (MBA, MD, CPA, DDS, etc.) seem like glorified trade schools.

This goes straight to why the humanities are often called the “liberal arts.” Whatever Gov. Scott may believe, they are not “liberal” (as opposed to “conservative”) in the narrowly American political sense that tends to equate them with bleeding-heart socialist ideologies. In fact, the liberal arts do not have a fundamental political bias at all. They are “liberal” in the sense of liberating, of making one free, of freeing students to think for themselves, of teaching one how to imagine what freedom means, of exploring ways to experience human freedom.

What could be more important than liberal arts to education in a democratic society? What could be more central to human experience or more vital to a meaningful life?

The Revitalization of Heroic Fantasy

A week or so ago, I read a blog post about how epic fantasy is a big waste of time. The author of Pop-Verse argues, somewhat convincingly, that 1) “The motherfuckers are too long,” 2) “They are … stylistically similar/identical,” and 3) “They don’t really say anything.” He has a point. After having recently stayed up late to watch the incredibly disappointing, low-budget fantasy picture Deathstalker (1983), a Barbi Benton vehicle with only a few jiggling breasts and taut frolicking buttocks to (almost) redeem it from the cinematic dustbin, I’m tempted to agree. Maybe heroic fantasy truly is inherently lame.


The tropes of arrogant adventurers, wily wizards, and damsels in distress were already tired by the time they found themselves being recycled into Victorian retellings of dragons and heraldry and deeds of derring-do, and they grew little less tedious even when spilling forth from the pens of literary giants like Tolkien and Eddison and Branch Cabell. Their novels are readable, even admirable, but these days their magic is accomplished primarily through a deliberate leap of faith, a willing suspension of contemporary expectations for rousing storytelling. The novels ultimately seem quaint and charming rather than compelling and vital.


After a brief infusion of surging hot blood provided by Robert E. Howard’s Conan stories during the 1930s, these well-worn romances grew even more wearisome in the hands of rank imitators sucking their way through the insufferable ’70s. Writers like Brooks and Donaldson churned out trilogy after trilogy and could find nothing new to bring to the genre. Indeed, even in the hands of relative innovators like Michael Moorcock and Roger Zelazny, or under the marvelous ministrations of self-conscious prose stylists like Fritz Leiber and Avram Davidson, the fantasy tale continued to wane inexorably through the closing decades of the twentieth century. Personally, I enjoyed the familiar terrain of David Gemmell’s Deathwalker novels and the titillating arabesques of Cole and Bunch’s Far Kingdoms, but neither could finally breathe fresh life into the corpse of heroic fantasy.

The genre might have been gracefully buried there. We might have happily reread Tennyson’s Idylls of the King and Sir Walter Scott’s Ivanhoe and Alexandre Dumas’s musketeer novels and been spared the misery of reading any more lackluster modern recountings of ersatz Arthurs, leftover Lancelots, and make-believe Merlins. Worse things could have happened. But fantasy wasn’t dead. Dungeons & Dragons had inspired the imagination of a new generation. And the collectible card game Magic: The Gathering left a generation poised for the young adult fantasies of Harry Potter and Eragon. The message to other writers was clear… there may be a dragon slumbering on top of it, but there’s gold in them thar hills.

Then along came the new century, and with it there emerged some truly powerful new voices in heroic fantasy. George R.R. Martin, despite his overly allusive middle initials, proved himself a powerhouse of fresh ideas in fantasy. He dispensed with Victorian pieties, brought Machiavellianism to Middle Earth, and put sex and violence back where they belong as central to the genre. Has there ever been a more wonderfully complex character than Tyrion Lannister this side of the Bard himself?


The HBO adaptations of Martin’s Westeros leave me a bit over-heated. Not that I mind attractive flesh bouncing bountifully for my viewing pleasure, but it’s a bit distracting from the considerable storytelling virtues I found when reading Martin’s novels. I loved the shifting perspectives, the fully imagined world with serious interpersonal struggles and deadly political challenges. The series still has those elements, to be sure, but it also devotes a fair number of scenes to bawdy spectacle. Far be it from me to complain. The series is a thoroughly enjoyable hit, and it has TV junkies plowing their way through massive novels they might never have attempted. I’m just saying, the novels are better than the series. And we can leave it at that. The problem with porn, even of the soft-core variety, is that once a sex scene is over you have a hard time finding your way back to the main story. The intrusion of one’s own physical desires creates a barrier to the continued suspension of disbelief and the object of entertainment risks becoming ridiculous. Or worse, tedious.


The other contemporary fantasy writer who has me very excited about the resurgence of the genre is Joe Abercrombie. While he has not (yet) been graced with a television series (or even a film) to launch him into the mass-media stratosphere, his novels bring to high fantasy the same sort of gritty realism that Martin’s do. But there’s something extra to Abercrombie. Much as I love Martin’s epic scope and wonderful way with characterization, he never captures the perverse gallows humor of the genre in quite the way Abercrombie manages. After all, there’s a lot of blood and shit to be spilt on the way to slaying dragons and toppling fairy kingdoms. Abercrombie shows how it’s done.


Do yourself a favor and start with Abercrombie’s 2007 debut The Blade Itself, a title taken from the Homer quotation, “The blade itself incites deeds of violence.” You’ll never feel yourself in more sympathy with a cripple turned government torturer. These are fantasy novels that push the limits of the genre and reveal again the hidden depths of social significance that first made Malory and Tennyson meaningful.

Science Fiction Summer Course

My online science fiction class at Marylhurst University (LIT215E/CMS215E) has gotten off to a lively start with another batch of great students this summer. I teach this class pretty much every other year, and I’m always finding ways to tweak the syllabus. This time around our main text is Robert Silverberg’s excellent Science Fiction Hall of Fame: Volume One, 1929-1964, which we’re using in tandem with Volume One (the 2006 volume) of Jonathan Strahan’s annual Best Science Fiction and Fantasy of the Year series. These two books provide us with a nice variety of science fiction stories across the last century of the genre. While we can’t read every story for the class, they allow us to hit most of the high points, from Golden Age masters like Asimov, Bradbury, and Clarke to some of the standout newcomers to the field, like Ian McDonald and Paolo Bacigalupi. We also read just three novels: Mary Shelley’s Frankenstein (1818), which started it all; H.G. Wells’s War of the Worlds (1898), which isn’t my personal favorite of his works but which introduces the important SF theme of alien invasion; and Ursula K. Le Guin’s The Dispossessed (1974), which helps us tackle both utopian themes and feminist/gender themes.

To give the students time to read Frankenstein, our first week taps into a discussion of the two most culturally prominent SF franchises by way of David Brin’s somewhat dated but still relevant 1999 article “‘Star Wars’ despots vs. ‘Star Trek’ populists”. I like this piece especially since it allows even those students without much interest in or experience with SF to jump right into the fray. Also, I’ve found people tend to feel pretty passionate about both of these franchises. We also do a bit of work exploring the line between SF and contemporary technology by reading an interview with noted futurologist Ray Kurzweil and a slightly paranoid rant against the merging of humans with machines by Eric Utne.

This time around I’m also including a lot more films than I have in the past. This seems important since, at least in film and television, SF has become accepted as virtually mainstream, whereas SF novels are still somewhat consigned to the genre ghetto except when authors who are already considered “real writers” employ SF tropes in their “serious” work. This is the only way to account for the different cultural reception of Margaret Atwood and Ursula Le Guin, for example. Yes, Le Guin has achieved broad literary acceptance, but this is often presented as being “in spite of” her being an SF author. Okay, I know, I know, saying that genre writing isn’t “serious” literature amounts to fighting words in some circles, but the (perhaps) disappearing divide between “high” and “low” art is probably an issue for another blog post. Scratch that – it’s an issue for a series of blog posts. I’ll get on that.

So, anyway, we’re watching the following films:

  • Metropolis (1927), dir. Fritz Lang
  • The Day the Earth Stood Still (1951), dir. Robert Wise
  • 2001: A Space Odyssey (1968), dir. Stanley Kubrick
  • The Man Who Fell to Earth (1976), dir. Nicolas Roeg
  • The Matrix (1999), dir. Andy and Lana Wachowski
  • A Scanner Darkly (2006), dir. Richard Linklater
  • Children of Men (2006), dir. Alfonso Cuarón
  • Moon (2009), dir. Duncan Jones
  • The Hunger Games (2012), dir. Gary Ross

I know I’ve probably opened up a whole can of space worms by publicizing my selections here, but before you reply with your own suggestions (which I welcome), just remember that this list is not supposed to represent the “best” of SF film.  It’s merely a collection of some interesting films that span a lot of years (skewed toward the present, admittedly).  I also wanted to touch on a wide variety of themes and trends in SF.

As always, I’m reading and viewing alongside my students as the term progresses.  No matter how many times I read Frankenstein, I always find new things to ponder.  I’m also excited because as I wrap up my current project on Edgar Allan Poe, I’m starting to consider attempting a longer academic work about science fiction.  Specifically, I think it might be interesting to perform psychoanalytic readings of Golden Age stories and novels.  I plan to take copious notes this term and see where this idea leads me.

Digital Democracy & American Anti-Intellectualism, Part II

Last week I wrote a post about some of the challenges we face in a digital age where expertise and authority seem to be under constant attack, but I’d like to follow that up here by exploring this issue from a slightly different angle.

What I see as the crux of our current challenge is this: how can we ensure that the digital democratization of human knowledge does not become mired in the same anti-intellectualism that has for so long been a hallmark of our American democracy?

I know what some of you are thinking. How can I say that America is anti-intellectual?  Isn’t it true that we are home to many of the greatest universities in the world, schools that continue to draw the best and brightest from around the globe for graduate studies?

Yes, that may be true, but looking at our culture as a whole, the anti-intellectual attitude that pervades our country is undeniable. Consider how casually and caustically our politicians and pundits dismiss “experts” and “authorities” when such learned wisdom (or book-learnin’) disagrees with their own cherished personal opinions. Witness how, during last fall’s debates before the elections, senatorial candidate Elizabeth Warren’s opponent called her “Professor Warren” as a put-down. True, Professor-cum-Senator Warren still won in Massachusetts, but that state prides itself on the prestige surrounding its academic institutions.

By contrast, there are plenty of regions in our country where Warren’s academic credentials would more surely have done her irreparable political damage. Throughout most of the country, American anti-intellectualism is a hard fact. And it’s one I’d guess more than a few of us eggheads had thumped into us in grade school.

This isn’t a new observation. From the 1940s to the ’60s, historian Richard Hofstadter explored these ideas in his works of social theory and political culture. The most important of Hofstadter’s studies may be Anti-Intellectualism in American Life (1963), one of two separate books for which he won the Pulitzer Prize. While Hofstadter saw American anti-intellectualism as part and parcel of our national heritage of utilitarianism rather than a necessary by-product of democracy, he did see anti-intellectualism as stemming at least in part from the democratization of knowledge. Not that he opposed broad access to university education. Rather, Hofstadter saw universities as the necessary “intellectual and spiritual balance wheel” of civilized society, though he recognized an ongoing tension between the ideals of open access to university education and the highest levels of intellectual excellence.

Of course, important as his work remains, Hofstadter wrote before the dawning of our own digital age. He didn’t grapple with the new challenges presented by an online world where (for better or worse) communication is instantaneous, everything is available all the time, and everyone not only has a voice but has the ability to speak in a polyphony of voices masked in anonymity.

Still, we must be willing to admit that things may not be as dire as all that. As Adam Gopnik has pointed out in the New Yorker (“The Information: How the Internet Gets Inside Us,” Feb. 14, 2011), the World Wide Web is sort of like the palantir, the seeing stone used by wizards in J.R.R. Tolkien’s Lord of the Rings. It tends to serve as a magnifying glass for everything we view through it.

As such, it’s no surprise that many have viewed the dawn of the Digital Age as signaling the end of everything that made the modern civilized world great. Indeed, for years now, academics and public intellectuals have lamented the way our digital media has seemed to dumb down our discourse, but there are signs of hope.

In his 2009 article “Public Intellectuals 2.1” (Society 46:49-54), Daniel W. Drezner takes a brighter view of the prospects for a new intellectual renaissance in the Digital Age, predicting that blogging and the various other forms of online writing can in fact serve to reverse the cultural trend of seeing academics and intellectuals as remote and unimportant to our public life. Drezner argues that “the growth of the blogosphere breaks down—or at least lowers—the barriers erected by a professional academy” and can “provide a vetting mechanism through which public intellectuals can receive feedback and therefore fulfill their roles more effectively” (50).

Not every view is so optimistic, of course, but it’s true that there are great online resources for serious scholarly work, and there are even smart people thinking amazing thoughts and writing about them online.

But let’s see what a vigorous online discussion can look like. I eagerly look forward to hearing what others have to say about the issues I’ve raised here.

This piece is cross-posted at the Marylhurst University blog:  https://blog.marylhurst.edu/blog/2013/03/26/digital-democracy-american-anti-intellectualism-part-ii/ 

Drowning in Digital Democracy, Part I

It’s become commonplace, and maybe even a little passé, to describe our own ongoing digital revolution as analogous to the advent of Gutenberg’s printing press in the 15th century. Indeed, some points of comparison do continue to seem remarkably apt. For example, the role of printed documents in spreading new ideas during the Reformation looks a lot like activists using Facebook and Twitter to share news and schedule protests during the Arab Spring. Both show how technology can be a powerful force for democratization. (Apologies if I’m stepping on any toes by seeming to valorize the Reformation as a positively democratic movement on the blog of a Catholic university, but you know what I mean.)

However, critics like Adam Gopnik in his New Yorker piece “The Information: How the Internet Gets Inside Us” (Feb. 14, 2011) have been quick to point out that overly enthusiastic interpretations of such revolutionary possibilities tend to confuse correlation with causation – that is, did the printing press give rise to the Reformation and the Enlightenment, or did it just help spread the word? The truth probably rests somewhere in between cause and coincidence, but we should be careful not to ignore the distinction.

Similarly, technology’s vocal cheerleaders seem all too ready to ignore the potential negative aspects of such improved communication technologies – like the inconvenient historical fact that totalitarian regimes have typically printed far more works of propaganda than they’ve destroyed in book burnings.  Dictators figured out quickly that it’s far easier to drown out the voices of opposition than silence them.  Pervasive misinformation can do far more damage than tearing down handbills.

Now, I’m not suggesting that our globalized digital community is a totalitarian regime. At least on the surface it feels like just the opposite, though Jaron Lanier expresses some dire warnings about what he calls “cybernetic totalism” in his “One-Half of a Manifesto.” I plan to address Lanier’s thoughts more fully in a future post. For now, it’s enough to observe that in this brave new world of online culture we’ve adapted to communication being instantaneous, everything being available all the time, and everyone having a voice. In such an environment, as we succumb to the endless seas of unmediated information (and misinformation), the rule of the mob begins to feel like a real possibility.

We’re drowning in digital democracy.

Forgive me if I sound less than perfectly egalitarian here, but when everyone not only has a voice but has the ability to speak in a polyphony of voices masked in anonymity, we’re no longer looking at a lively exchange of ideas.  We’re looking at the well-known horrors of mob rule, and it doesn’t make the stakes any lower or the threats any less real that it’s happening online.

Even in the best of scenarios, when everyone has a voice the quality of the conversation can plummet very quickly.  I’m not talking about those annoying people who use their social media to tell everyone from Boise to Bangladesh that they’re making a batch of chocolate chip cookies.  Those folks are easy enough to avoid and to ignore.

No, my concern is that too many of our students and friends and journalists and politicians and, hell, all of us are relying on Wikipedia and Google. Not only are we trusting crowd-sourced encyclopedias written by people who may have little or no education or expertise (and some of whom are hoaxers or pranksters), but we’re relying on algorithmically driven and advertisement-enhanced search engines to provide most (or all) of our information, without pausing to question or to ascertain the authority of what we’re reading.

Not only that, but because of such ready access to information we’re hearing people who are smart enough to know better trumpeting the end of all cultural and social authority.  “The expert is dead,” such digerati claim.  And indeed throughout much of our irreverent, anything-goes society, many people do seem to be acting as if at last the king is dead.

But is it really true that we no longer have any need for cultural, political, and intellectual authority?

No, in fact just the opposite is true.  Greater freedom brings with it greater responsibility.  We now need experts in every field to exert their authority more powerfully than ever.  Reason must lead.  Functional democracies (even digital ones) still need organization and leaders.  Otherwise we’re left with the chaos of a shouting match.

Having a voice is not the same as knowing how to participate in a conversation. Access to information is not the same thing as knowing how to use it. We didn’t close up schools because every home had a set of encyclopedias. We didn’t tear down universities because people had access to public libraries.

All those online sources might be fine places to start looking for information, but we need to be constantly vigilant about verifying what we’re accepting as valid and credible.  We also need to get better about documenting and providing links to our sources (as you’ll notice I’m trying to do in these posts).

And finally, we need to make sure we remain very clear about the vital differences between having ready access to information and gaining an education.  Now more than ever, our students need us to teach them how to read, how to research, how to analyze information, and how to participate responsibly in this emerging digital democracy.

Of course, if the Digital Revolution truly lives up to its name, its effects will be further reaching and less predictable than any of us can imagine.  That’s the problem with revolutions – they change everything.

This piece is cross-posted on the Marylhurst University blog: https://blog.marylhurst.edu/blog/2013/03/19/drowning-in-digital-democracy-part-i/