Today, video games make up a $100 billion global industry, and nearly two-thirds of American homes have household members who play video games regularly. And it's really no wonder: Video games have been around for decades and run the gamut of platforms, from arcade systems to home consoles, handheld consoles and mobile devices. They're also often at the forefront of computer technology.

The Early Days

Though video games are found today in homes worldwide, they actually got their start in the research labs of scientists.

In 1952, for instance, British professor A.S. Douglas created OXO, also known as noughts and crosses or tic-tac-toe, as part of his doctoral dissertation at the University of Cambridge. And in 1958, William Higinbotham created Tennis for Two on a large analog computer with a connected oscilloscope screen for the annual visitor's day at the Brookhaven National Laboratory in Upton, New York.

In 1962, Steve Russell at the Massachusetts Institute of Technology invented Spacewar!, a computer-based space combat video game for the PDP-1 (Programmed Data Processor-1), then a cutting-edge computer mostly found at universities. It was the first video game that could be played on multiple computer installations.

Dawn of the Home Console

In 1967, developers at Sanders Associates, Inc., led by Ralph Baer, invented a prototype multiplayer, multi-program video game system that could be played on a television. It was known as “The Brown Box.”

Baer, who is sometimes referred to as the Father of Video Games, licensed his device to Magnavox, which sold the system to consumers as the Odyssey, the first video game home console, in 1972. Over the next few years, the primitive Odyssey console would commercially fizzle and die out.

Yet one of the Odyssey's 28 games was the inspiration for Atari's Pong, the first commercially successful arcade video game, which the company released in 1972. In 1975, Atari released a home version of Pong, which was as successful as its arcade counterpart.

Magnavox, along with Sanders Associates, would eventually sue Atari for patent infringement. Atari settled and became an Odyssey licensee; over the next 20 years, Magnavox went on to win more than $100 million in lawsuits related to the Odyssey and its video game patents.

In 1977, Atari released the Atari 2600 (also known as the Video Computer System), a home console that featured joysticks and interchangeable game cartridges that played multi-colored games, effectively kicking off the second generation of video game consoles.

The video game industry had a few notable milestones in the late 1970s and early 1980s, including:

  • The release of the Space Invaders arcade game in 1978
  • The launch of Activision, the first third-party game developer (which develops software without making consoles or arcade cabinets), in 1979
  • The introduction of Japan's hugely popular Pac-Man to the United States in 1980
  • Nintendo’s creation of Donkey Kong, which introduced the world to the character Mario
  • Microsoft’s release of its first Flight Simulator game

The Video Game Crash

In 1983, the North American video game industry experienced a major “crash” due to a number of factors, including an oversaturated game console market, competition from computer gaming, and a surplus of over-hyped, low-quality games, such as the infamous E.T., an Atari game based on the eponymous movie and often considered the worst game ever created.

Lasting a couple of years, the crash led to the bankruptcy of several home computer and video game console companies.

The home video game industry began to recover in 1985 when the Nintendo Entertainment System (NES), known as the Famicom in Japan, came to the United States. The NES offered improved 8-bit graphics, colors, sound and gameplay over previous consoles.

Nintendo, a Japanese company that began as a playing card manufacturer in 1889, released a number of important video game franchises still around today, such as Super Mario Bros., The Legend of Zelda, and Metroid.

Additionally, Nintendo imposed various regulations on third-party games developed for its system, helping to combat rushed, low-quality software. Third-party developers released many other long-lasting franchises, such as Capcom’s Mega Man, Konami’s Castlevania, Square’s Final Fantasy, and Enix’s Dragon Quest (Square and Enix would later merge to form Square Enix in 2003).

In 1989, Nintendo made waves again by popularizing handheld gaming with the release of its 8-bit Game Boy video game device and the often-bundled game Tetris. Over the next 25 years, Nintendo would release a number of successful successors to the Game Boy, including the Game Boy Color in 1998, the Nintendo DS in 2004, and the Nintendo 3DS in 2011.

The First Console War

Also in 1989, Sega released its 16-bit Genesis console in North America as a successor to its 1986 Sega Master System, which failed to adequately compete against the NES.

With its technological superiority to the NES, clever marketing, and the 1991 release of the Sonic the Hedgehog game, the Genesis made significant headway against its older rival. In 1991, Nintendo released its 16-bit Super NES console in North America, launching the first real “console war.”

The early- to mid-1990s saw the release of a wealth of popular games on both consoles, including new franchises such as Street Fighter II and Mortal Kombat, a fighting game whose Genesis version depicted blood and gore.

In response to the violent game (as well as congressional hearings about violent video games), Sega created the Videogame Rating Council in 1993 to provide descriptive labeling for every game sold on a Sega home console. The council later gave rise to the industry-wide Entertainment Software Rating Board, which is still used today to rate video games based on content.

In the mid-1990s, video games leaped to the Big Screen with the release of the Super Mario Bros. live-action movie in 1993, followed by Street Fighter and Mortal Kombat over the next two years. Numerous movies based on video games have been released since.

With a much larger library of games, lower price point, and successful marketing, the Genesis had leapfrogged ahead of the SNES in North America by this time. But Sega was unable to find similar success in Japan.

The Rise of 3D Gaming

With a leap in computer technology, the fifth generation of video games ushered in the three-dimensional era of gaming.

In 1995, Sega released its Saturn system in North America five months ahead of schedule. The first 32-bit console to play games on CDs rather than cartridges, the Saturn launched early in an attempt to beat Sony's first foray into video games, the PlayStation, which sold for $100 less than the Saturn when it arrived later that year. The following year, Nintendo released its cartridge-based 64-bit system, the Nintendo 64.

Though Sega and Nintendo each released their fair share of highly rated, on-brand 3D titles, such as Virtua Fighter on the Saturn and Super Mario 64 on the Nintendo 64, the established video game companies couldn't compete with Sony's strong third-party support, which helped the PlayStation secure numerous exclusive titles.

Simply put: Sony dominated the video game market and would continue to do so into the next generation.

Advertising in the 1990s

Advertising in the 1990s confronted new social and economic changes. As the baby boom generation aged, the birth rate in the U.S. declined and family units became smaller. At the same time, both immigrant and minority groups grew, the population shifted toward the Sunbelt states and new market segments emerged.

Advances in technology expanded the mass media audience, but new technologies, such as the Internet, fragmented that audience. As consumers were provided with more choices, more control and a greater capacity for interacting with sources of information, the mass media environment grew increasingly sophisticated and expensive.

New challenges

Specialized consultants began to insert themselves into the traditional relationship between agency and client. Experts appeared in database marketing, interactive media and all areas of market segments defined by race, ethnicity and values. They offered assistance to marketers unhappy with the services they were getting from large agency holding companies, which began to be perceived as aloof and unresponsive. As agencies awoke to this challenge, many moved to expand their capabilities. Advertising entered a period of transition: The size, structure and functions of agencies began to change, along with the nature of advertiser-agency relationships.

Some advertisers moved to consolidate accounts at fewer agencies, while others sought the services of several agencies. With a growing need to reach a global audience in an expensive mass media environment, advertisers began to cut back their ad spending. Both advertisers and agencies were under pressure to find the most effective media outlets to reach the largest audience at the lowest possible cost.

While some advertisers retained traditional organizations, others such as General Motors Corp., Ameritech Corp. and General Mills decentralized some of their decision-making, allowing brand, category and regional managers to make decisions on advertising and promotional activities faster, as market forces dictated. In addition, advertisers began moving their spending from advertising to promotion. Many changed agencies frequently, dissolving long-term relationships. As a result, methods of compensation changed as they became linked to the profitability of clients.

Small, creative boutiques developed as established agencies lost creative talent, often watching their most productive personnel leave to start their own new ventures. Small regional shops sprang up in cities such as Minneapolis; Portland, Ore.; Richmond, Va.; and Peoria, Ill., as technological innovations made the physical location of an agency less important. Advertisers and agencies relied on new forms of communication to receive and provide services from sometimes far-flung locations.

Established agencies reacted, creating niche units to serve particular clients. Specialized ethnic agencies focused on particular population groups, and with technology becoming an increasingly important tool in advertising, many agencies added separate technology departments. As the array of available media grew in complexity, big agencies spun off, or “unbundled,” their media departments to provide full media services and permit them to seek clients outside the parent agencies.

Among the leading independent media companies to be spun off from traditional agencies in the 1990s were: MindShare (from WPP Group), OMD Worldwide and PhD (Omnicom Group), Zenith Media (Saatchi & Saatchi and Cordiant), Initiative Media Worldwide and Universal McCann (Interpublic Group of Cos.), Media Edge (Young & Rubicam), MediaCom (Grey Advertising), TN Media (True North) and Starcom and MediaVest (Bcom3).

About a decade after its founding, Cordiant split into two independent agencies, the Saatchi & Saatchi and Bates networks, to deal more efficiently with client needs.

In another major trend of the 1990s, ad agencies began to provide integrated marketing activities ranging from sales promotion, direct response and public relations to high-tech alternatives such as online services and Internet pages and advertising. The process, known as integrated marketing communication, became a popular service with marketers.

New frontier: The Internet

The vision of commerce promised by the Internet beginning in the mid-1990s offered consumers the ability to purchase the precise goods and services they needed, quickly and at competitive prices, while shopping from the comfort of their homes. That promise also fueled the launch of many Internet-based, or “dot-com,” companies, such as ebay.com and Amazon.com. Many Internet companies initially offered only a single product or service, but as electronic commerce grew and bricks-and-mortar companies entered the fray, competition intensified as well.

Internet companies began offering consumers a variety of goods and services. Online auctioning opened the door for sellers to get the highest possible prices from the broadest possible group of bidders. From the mid-to-late 1990s, Internet commerce reached $8 billion and was expected to grow to $3 trillion by the end of the first decade of the 21st century, although its failure to show profits and its subsequent decline in the latter half of 2000 made such predictions seem unlikely.

Internet advertising arrived in 1994, with the launch of Hotwired. Hotwired charged sponsors a fee of about $30,000 to place ads on its Web site for 12 weeks. Initial sponsors included AT&T Corp., MCI Communications Corp., Club Med, Adolph Coors Co.’s Zima brand, IBM Corp., Harman International Industries’ JBL speakers and Volvo Cars of North America.

Modem Media, an early online agency, tracked an average 40% “click-through” rate for its clients, which included AT&T and Zima. Click-through occurs when a computer user places the cursor on a Web link and clicks to go to another page.

Based on the success of Hotwired, established online service providers that already had large clienteles started attracting advertisers to their sites.

Total Web ad spending reached $300 million in the mid-1990s and had more than tripled by 2000, according to Jupiter Research.

The number of Web sites in existence grew exponentially in the late 1990s. By the year 2000, various estimates put the number in the hundreds of millions. Less than a year later, however, hundreds of those e-commerce and media sites were shuttered, victims of a downturn in dot-com values on Wall Street, known as the “dot-bomb,” that dried up venture capital.

More than ever, expectations that the Internet would lead to the demise of traditional mass media such as TV, radio and newspapers appeared unfounded, as many Internet companies retreated into bankruptcy while those that survived continued to rely on traditional media to promote their brands.

Technology also literally changed the face of advertising. The use of computers to create and alter images meant that spots could be made quite simple or extremely complex, and it led to an emphasis on nature and outdoor imagery, which could now be produced without costly trips to picturesque locations. Examples included spots in which polar bears drink Coke, handled by Creative Artists Agency as part of its "Always" campaign for Coca-Cola Co. in 1993, and Jeeps burrowing beneath snowdrifts in "Snow Covered," an award-winning 1994 TV spot for the automaker from Bozell, Detroit.

From Clueless to Tarantino: why the 90s was Hollywood’s fairytale decade

By Rebecca Nicholson



Cinema is once again mining its recent past for hits. Joel Schumacher’s 1990 medic-goth thriller Flatliners is the latest cult classic to be remade for a contemporary audience. Almost three decades on, the original – in which a group of medical students deliberately kill and then revive themselves in order to report back on the afterlife – resembles in parts a crossover episode between Buffy and Holby City. But in the shiny 2017 sequel, the gritty, steamy streets have been replaced by brightly lit high-rises, its rudimentary computers have been given an upgrade, and the astonishing amount of angular yet floppy hair sported by Kiefer Sutherland, Kevin Bacon and Billy Baldwin has been closely cropped for more austere times. Sutherland pops up in the new film, too, although the starring roles go to Ellen Page, Rogue One’s Diego Luna and Happy Valley’s James Norton. It is, so far, untainted by reviews, as it did not screen in advance for critics, which is rarely a sign of confidence in its merits.

Yet it’s unlikely to matter too much if the film is not great. As Hollywood gets increasingly caught up in reboots and remakes, Flatliners is just one of a number of 90s hits to be lined up for a do-over, with a nostalgic audience ready to revisit it. Following in Jurassic Park reboot Jurassic World’s phenomenally successful jungle footsteps is a return for the 1995 Robin Williams hit Jumanji, which has been revived for Christmas as Jumanji: Welcome to the Jungle, starring Jack Black and Dwayne “the Rock” Johnson with a retro video game in place of the board game. Tim Curry played It in 1990; the Andy Muschietti remake has already become the highest-grossing horror film of all time, a rare success in an otherwise flat summer for the box office. Over the past few years, there have been reports of development deals and scripts for remakes of everything from Sister Act to I Know What You Did Last Summer (although both projects have since gone quiet), while new versions of Blade, The Crow, The Craft and White Men Can’t Jump are said to be making progress after a few false starts.

Back from the dead … Kevin Bacon, Julia Roberts, Oliver Platt, William Baldwin and Kiefer Sutherland in the original Flatliners.

One reason films from the 90s hold such appeal is that they seem to depict a simpler, more innocent time. “A lot of these films were written by baby boomers, so even if someone had a dead-end job, they were still able to afford accommodation, they lived in a nice neighbourhood. Even if they were struggling, their problems were quite superficial,” says Elizabeth Sankey, who worked on Beyond Clueless, a 2014 documentary about teen movies that focused heavily on the mid-to-late 90s. Sankey is currently directing a documentary about romantic comedies and has noticed a stark change in tone. “Later on, we had romcoms like Trainwreck, where the problem with the woman that needs to be fixed is that she’s an alcoholic. But in the 90s it was like, Oh, she’s really clumsy!” she says.

It's natural, of course, to be nostalgic for the era of film-making that accompanied your adolescence, as the 90s did mine, but there is a breadth and brilliance to much of the decade's output that is unrivalled. In this paper last year, Steve Rose described the early 90s as a "golden age" for black cinema, as film-makers such as John Singleton and Julie Dash broke through. The 90s saw the emergence of an LGBT cinema movement, with mainstream, award-winning films such as The Birdcage and Boys Don't Cry as well as cult slow-burners such as But I'm a Cheerleader. Spike Jonze, Quentin Tarantino, Paul Thomas Anderson, David O. Russell, Noah Baumbach, Wes Anderson, Lisa Cholodenko, Alexander Payne, Lynne Ramsay, Darren Aronofsky, Sam Mendes and Todd Haynes all made their directorial feature debuts. Odd, experimental films with wild structures and often a leaning towards verbosity, such as Magnolia or Pulp Fiction or Being John Malkovich, were relatively mainstream successes.

But it wasn't just the arrival of a particularly strong new guard that made the era special. The dominant formula of smart-but-accessible was evident across genres. There was an avalanche of glossy, skilful and lubricious trashy thrillers such as Wild Things and Single White Female, with director Paul Verhoeven in particular embodying a kind of knowing mischief that allowed him both to titillate and to tease the audience for desiring titillation. The witty smut of Basic Instinct and Showgirls has aged surprisingly well, and that is to say nothing of his sci-fi satire Starship Troopers, which he described with evident pride last year as "the most expensive art movie ever made". (He has been less than complimentary about a mooted remake that would be less satirical, saying it would "very much fit in with a Trump presidency".)

Like Starship Troopers, the action films released towards the end of the decade were filled with a millennial angst and political paranoia that appears both quaint, given the relatively sunny geopolitical climate of the time, and prescient. In the 2016 documentary Hypernormalisation, Adam Curtis compiles clips from Independence Day, Deep Impact, Armageddon and Godzilla of New York being terrorised or destroyed, just before he cuts to footage of 9/11.

Brittany Murphy, Alicia Silverstone & Stacey Dash in Clueless.

Outside of big-budget action, teen movies benefited from the era’s appetite for smartness and enjoyed a boom period of the kind not seen since Judd Nelson punched the air in The Breakfast Club. High-school films excavated literary classics, rather than just older high-school films, for their plots. Famously, Clueless is based on Jane Austen’s Emma, and anyone who saw Gwyneth Paltrow in the faithful 1996 adaptation will know that Amy Heckerling’s 1995 smash, starring Alicia Silverstone, did a far better job of telling the story. She’s All That was based on Pygmalion, while 10 Things I Hate About You made The Taming of the Shrew, one of Shakespeare’s most awkwardly sexist works, into a riot grrrl-referencing feminist-ish teen classic, complete with a Joan Jett soundtrack.

With the Scream trilogy, Kevin Williamson made clever horror, using a deep awareness of the genre's tropes and playing with the audience's expectations of them. He bumped off the big-name star Drew Barrymore in the first few minutes for not knowing the correct answer to a quiz question about a horror film, and didn't kill off the lead, Sidney, despite her having sex with her boyfriend, which would usually signpost imminent doom. The same meta approach is present, although to a lesser degree, in other teen horrors such as I Know What You Did Last Summer, Urban Legend and The Faculty. The Craft marked the convergence of horror and high school with Neve Campbell at the centre of the Venn diagram, and although its box office takings were disappointing, it had a long afterlife on VHS, and is responsible for more than one generation of girls dabbling in black lipstick and playing the "light as a feather, stiff as a board" sleepover game.

“I think the 90s, like any other era of cinema, is remembered for a very small pocket of its output, in the same way that the 1980s wasn’t just Back to the Future and a bunch of movies where people wore bright neon bomber jackets,” says film-maker and critic Charlie Lyne, who directed Beyond Clueless. “I’m also very wary of my own biases; that a movie that came out during the 90s is much more likely to have then been watched on VHS and cherished by a teenage-me than a film that came out before or after that, therefore you get a certain inevitable attachment that is not necessarily reflective of any objective truth.”

He does say, however, that two things happened in tandem during the 90s that he sees as significant and measurable shifts in movie history. The first is that more films were made for less money. “There is a massive lowering of the barrier to entry to make a fairly serviceable feature film. What constituted a low-budget film dropped massively during that period because of technological changes and the way the industry saw low-budget film-making, coming through festivals like Sundance. That was absolutely true in the teen genre, but also more widely, you did see an explosion in films that cost $5m and under making it to a wide audience.”

Second, there were vast improvements in home video equipment, although it was just prior to the advent of DVDs. “That had an effect on film culture and the scope of film culture – people were more able to look backwards and to a broader swath of what was coming out at the time. All of those structural changes are distinct to that decade,” he explains.

Sankey’s research for her romcom documentary has only cemented her view that the 90s were a fairytale decade. “Every single person I’ve interviewed for my film says they don’t want to get married, because they don’t have that stability in their lives. All of those things that make people feel uncomfortable about marriage are the reasons they like those films, because it felt like a much more simple, naive time.” She says that it isn’t just people who were young at the time clinging on to their halcyon days. There’s a sense of loss, too. “I think we are pining for a time we felt like we were going to get and never got. It’s a promise that was made to us by these films that we haven’t actually been able to achieve.”

Another article, whose author wrote in outline form:

Digital Age & Home Viewing

- VCRs were popular appliances in most homes, and by the end of the decade, DVDs were becoming more popular.

High-Cost Demanding Stars

- The 1990s were a big time for super-star actors like Julia Roberts, Tom Cruise, Arnold Schwarzenegger, Robin Williams, Demi Moore, Jim Carrey, Sandra Bullock and Tom Hanks.

- By the mid-1990s, many celebrities wanted more money and better perks. Many demanded script and director approval, approval of publicity images, restrictions on film schedules and even having their private jets paid for.

Breakthrough of Serious Themed Films and Minority Groups in the Film Industry

- Serious themes such as the Holocaust, AIDS, feminism and racism became more prominent, featured in films such as Schindler's List and The Silence of the Lambs.

- African American filmmakers such as John Singleton and Spike Lee gained greater prominence in the 1990s with films like Boyz N the Hood and Jungle Fever.

- Female directors were having a greater influence and showing their skills in the industry. There was also a new feminist consciousness presented through films like Thelma & Louise and Fried Green Tomatoes.

Actors Turned Directors

- There were three actors who received numerous awards for directing Best Picture winners.

- Kevin Costner directed Dances with Wolves (1990), which was honored with twelve Oscar nominations and won Best Picture and Best Director.

- Clint Eastwood starred in and directed Unforgiven (1992), another western to win Best Picture and Best Director.

- Producer, actor and director Mel Gibson's historical Scottish film Braveheart (1995) also won Best Picture and Best Director.

Decade of Money & Mega-Spending

- Film budgets drastically increased during this decade (the average budget was $53 million, with some over $100 million).

- There was more pressure on film studios to deliver big hit movies, and good stories were often sacrificed in the pursuit of profit.

- Ticket prices increased, and more multiplexes were built.

Indie Scene

The independent films of the 1990s were often made from the point of view of the decade's young adults, which made them believable and easy for viewers to relate to. The drug culture of the 90s also played a big role in many of these popular films.

Popular independent directors such as Richard Linklater, Kevin Smith, Todd Solondz, Gus Van Sant, Larry Clark and Jim Jarmusch took the real-life situations of people in the 90s and turned them into controversial, dark comedies and dramas that were entertaining to view.

The 90s were notable for the rise of both independent cinema and independent studios. Major studios set up specialty art-house production and distribution divisions, such as Sony Pictures Classics and Fox World Cinema, as well as Miramax, Lions Gate and New Line. Movies from these divisions were usually low-budget and non-mainstream, aimed at people who didn't enjoy the "popular" films being made at the time.

Disney Renaissance

Walt Disney Animation Studios (then called Walt Disney Feature Animation) experienced a creative resurgence, producing successful animated films based on well-known stories, which restored public and critical interest in The Walt Disney Company as a whole.

The Disney Renaissance began in 1989 with The Little Mermaid, a musical that refreshed Disney's old formulas and brought the studio back into the game. It reached its peak in popularity with The Lion King in 1994, a cultural landmark and the height of Disney's success, and ended in 1999 with Tarzan.

This was also the era that began the rise of computers in animation, riding the wave of the digital revolution that brought affordable PCs to the masses in the 1980s. Disney employed CG for major parts of its films starting with The Rescuers Down Under, and by Beauty and the Beast had refined it considerably. Disney's 1995 collaboration with Pixar produced the first fully computer-animated feature film, Toy Story, which launched Pixar into the spotlight and positioned it to drive the future of the animation industry.

Super-Stardom of Jim Carrey

- Jim Carrey began to make a name for himself with Ace Ventura: Pet Detective (1994), playing a low-rent Florida pet detective.

- Dumb and Dumber (1994) starred Carrey and Jeff Daniels as Lloyd Christmas and Harry Dunne, two not-so-bright buddies on a cross-country road trip to Aspen. In The Mask (1994), Carrey played shy bank teller Stanley Ipkiss, who dons an ancient, magical mask and is transformed into a green superhero.

- Carrey won the Golden Globe for Best Actor for his roles as Truman Burbank in The Truman Show (1998) and as Andy Kaufman in Man on the Moon (1999).

Major Blockbusters

Many movies in the 90s were either major flops or major hits at the box office, and many of the popular ones succeeded because of effective marketing and merchandising techniques. The 90s were also known for movies featuring large CGI portions, which was just becoming popular; fans loved being able to see things that would be impossible without the help of computers.

Some major hits included:

Jurassic Park (93): Featured completely computer-generated dinosaurs in live-action sequences, which shocked viewers.

Forrest Gump (94)

The Lion King (94)

Independence Day (96): Grossed $100 million in its first six days in theaters and won the Best Visual Effects Oscar, highlighted by the scene of the White House being destroyed.

Titanic (97): A major hit that grossed $600 million in the US and $1.8 billion worldwide. It had a $200 million budget and won 11 Oscars.

Men in Black (97)

Plot Line Derivatives

Realizing that fans and followers had made certain ideas big in the 90s, Hollywood tried to take these ideas and turn them into films. Some were loved and became classics, while others were met with hatred from the original fans.

Many ideas came from video games, comics and TV shows, turning into films like:

The Flintstones (94)

Street Fighter (94)

Mighty Morphin Power Rangers: The Movie (95)

Inspector Gadget (99)

Flops of the 90’s

Many highly anticipated movies of the 90s were made with high budgets, yet once they hit theaters they failed terribly with fans and critics. Later, with the rise of movie rentals, some of these films made a comeback, but at the time they were flops.

Major flops with high budgets were:

Godzilla (98): Featured a CGI lizard destroying New York; fans shunned it for its version of Godzilla.

Waterworld (95): Filmed completely on water with a huge budget, yet even Kevin Costner didn't help the movie's success.

Hook (91)

Highlander II: The Quickening (91)

The 1990s: The Loss Of Shared Experience

In the 60 years between 1929, when radio became the dominant conveyor of the prevailing mass culture in the United States, and 1989, when cable television became a truly mature industry, broadcasting provided something that was unique in human history. During that period, nearly the entire country—young and old, rich and poor, educated and uneducated—was feeding, at least occasionally, from the same cultural trough. Radio and television provided a kind of cultural glue; their programs penetrated nearly every segment of the national population to a degree that even the church in medieval Europe had not achieved. The control of the television industry by only three companies had produced, among other things, a unified mass culture, the products of which were experienced by nearly everyone. That era ended, in effect, in the 1990s.

The number of cable services aimed at specific audiences with specialized interests grew at its greatest pace ever during this period, dividing the audience into smaller and smaller segments. Inevitably, the share of that audience held by each of the major networks continued to decline, although each network was still attracting many more viewers than any of the cable channels. Besides the familiar cable services dedicated to news, sports, movies, shopping, and music, entire cable channels were devoted to cooking (Food Network), cartoons (Cartoon Network), old television (Nick at Nite, TV Land), old movies (American Movie Classics, Turner Classic Movies), home improvement and gardening (Home and Garden Television [HGTV]), comedy (Comedy Central), documentaries (Discovery Channel), animals (Animal Planet), and a host of other interests. The Golf Channel and the Game Show Network were perhaps the most emblematic of how far target programming could go during this era. By the end of the decade, almost 80 percent of American households had access to cable programming through cable hookups or direct delivery by satellite.

Actor Alec Baldwin (left) and Turner Classic Movies host Robert Osborne.


Many had predicted that cable would reduce the number of broadcast networks or put them out of business entirely. On the contrary, broadcast networks proliferated as well during this period, doubling in number from three to six. The Fox network began operation in 1985 with a limited evening schedule, and the repeal of the Financial Interest and Syndication Rules in 1993 set the stage for other production companies to enter the market. Since their inception in 1971, the fin-syn rules had substantially limited the amount of programming that networks could produce or own and therefore sell to local stations for syndicated reruns. As a result, networks would license or “rent” programs from studios and production companies, paying for the right to air the episode twice during the season, after which all rights would revert to the production company, which would in turn sell reruns of the series to individual stations. Once this regulation was eliminated, networks began participating in the production and ownership of programs (as they had before 1971), and, in turn, production companies began forming their own networks. In 1995 two networks were formed that would remain in operation for a decade (ending in 2006, when they would merge into a single network, the CW): the WB, premiered by Warner Bros., and UPN (the United Paramount Network), premiered by Paramount.

Demographic divergence

The programming of the 1990s is not easily categorized. Many complained about the increasing amount of violence, sex, and profane language on television during the decade. Few would argue the point, but there were also more documentaries, instructional shows, news, and religious programs on TV than ever before. In short, there was more of everything, including reruns of old shows from all eras of network TV history. The family sitcom provides a telling example. Traditional family comedies such as The Cosby Show, Family Ties, and Growing Pains (ABC, 1985–92) remained on the air into the 1990s, while at the same time more “realistic” shows featuring lower-middle-class families such as Roseanne (ABC, 1988–97), The Simpsons (Fox, begun 1989), Married…with Children (Fox, 1987–97), and Grace Under Fire (ABC, 1993–98) introduced a completely different vision of the American family. The cultural consensus that had united so much of television during the network era had been obliterated. Audiences were no longer watching the same things at the same time, and the choices they had were the greatest ever and continuing to multiply.

Urban humour

Of the programming on network TV that in the 1990s continued to attract the largest audiences, the most popular new entries were Seinfeld (1990–98), Friends (1994–2004), and ER (1994–2009), all part of NBC’s celebrated Thursday night lineup. Like so many of the situation comedies from the 1980s and ’90s (The Cosby Show, Roseanne, Home Improvement), Seinfeld was based upon the act of a standup comic, in this case the observational, “everyday life” humour of Jerry Seinfeld. Other shows had begun to explore this dramatic territory a few years earlier, including The Wonder Years (ABC, 1988–93), a comedy-drama that celebrated the minutiae of suburban life in the late 1960s and early ’70s, and thirtysomething, a drama that analyzed the psychic details of the lives of a group of young professionals. Seinfeld, however, was able to identify a new form for the traditional sitcom. It featured entire episodes about waiting in line at a restaurant, losing a car in a multilevel parking garage, and, in a notorious and surprisingly tasteful episode, the personal and social dimensions of masturbation. Self-declared to be “a show about nothing,” Seinfeld for five years was rated among the top three programs and spent two of those years as number one. The extent of the show’s cultural power became evident when Seinfeld announced that he would end the show after the close of the 1997–98 season. The countdown to the final episode and the airing of the episode itself became the biggest story of the season in American popular culture.

Scene from the television series Seinfeld, with actors (from left) Jason Alexander, Julia Louis-Dreyfus, Michael Richards, and Jerry Seinfeld.

Seinfeld, which focused on four unmarried friends living in New York City, inspired a virtual subgenre. The generically named Friends, also on NBC’s Thursday schedule, was the only one of the imitators to approach the success of Seinfeld. Another of the imitations, however, was historically significant. Ellen (ABC, 1994–98), originally titled These Friends of Mine, also featured a standup comic (Ellen DeGeneres) and an ensemble of unmarried friends in the big city (in this case Los Angeles). The show was only a modest hit with both critics and audiences until DeGeneres decided that her character would openly acknowledge her lesbianism at the end of the 1996–97 season. When she did, after half a season of thinly disguised foreshadowing double-entendres, Ellen became the first broadcast television series to feature an openly gay leading character. While some saw such series as Ellen as an important breakthrough, others saw it as another example of the collapse of standards on television.

The 1990s did see the fulfillment of many of the trends that had begun in the 1980s. NYPD Blue, for example, introduced stronger language and more explicit nudity than any network television series to date when it debuted in 1993. Several affiliate stations refused to air the show, but when it became a hit, most of them quietly reversed their decisions. Complaints by parent, teacher, and religious groups that network television was no longer appropriate for family viewing became a major ongoing refrain in the 1990s.

The newsmagazines

The 1990s also saw the steady growth of the newsmagazine. The prototype of the genre was Edward R. Murrow’s See It Now (CBS, 1951–58), and 60 Minutes, which had been on since 1968, set the standard. ABC’s newsmagazine 20/20 was introduced in 1978. With production costs for traditional prime-time programming rising to nearly prohibitive heights at the same time that ratings were plummeting because of cable competition, network executives in the 1990s sought an inexpensive way to fill prime-time hours with popular programming. The long-term success of 60 Minutes suggested that the newsmagazine might be the perfect solution. Newsmagazines were inexpensive compared with sitcoms and dramas, and they had the potential to draw very large audiences. All three networks introduced new newsmagazines during the 1990s, and fierce competition for both audiences and stories resulted, especially since the 24-hour news channels on cable were competing in a similar arena. Some of the series became very successful, including Dateline (NBC, begun 1992), which, by 1999, was being aired five nights per week. 20/20 was extended to two nights weekly in 1997 and again to four in 1998 when it absorbed another ailing newsmagazine, Primetime Live (ABC, 1989–98; it emerged again in 2000 as Primetime Thursday and returned to its original name in 2004). Even 60 Minutes added a second weekly edition, 60 Minutes II (1999–2005). Several newsmagazines presented stories of a scandalous, sexual, or otherwise spectacular nature, and media critics attacked such shows for their tabloidlike approach to presenting news stories and accused them of playing a major role in the degrading of American journalism.

Teen dramas and adult cartoons

Many of the decade’s most innovative programs came from cable and the three new networks. Early in its history, the Fox network had established a distinct identity by airing programs that would probably not have found a place on the schedules of ABC, CBS, or NBC. The Simpsons (begun 1989), the first animated prime-time series since The Flintstones (ABC, 1960–66) to succeed in prime time, was Fox’s biggest and longest-running hit and became the longest-running animated program in television history. With its densely packed social satire and self-reflexive references to American popular culture, The Simpsons set a new standard for television comedy, and, by the end of the 1990s, many critics were calling it the best TV comedy in history. The Fox network focused on young audiences, as ABC had done in the late 1970s, with such teen-oriented series as 21 Jump Street (1987–91), the story of youthful cops working undercover in Los Angeles high schools, which introduced Johnny Depp, and Beverly Hills 90210 (1990–2000), a prime-time soap opera set in the fictional West Beverly Hills High School. The latter inspired an entire new genre of “teensploitation” series, many of which became the anchors of the WB network a few years later. Among these WB teen series, Buffy the Vampire Slayer (1997–2003), Dawson’s Creek (1998–2003), and Felicity (1998–2002) met with surprising critical acclaim. Professional wrestling, which had been a staple genre in the earliest days of television, made a major comeback in the 1990s in syndication and was later picked up by UPN as the first hit for that new network. All three of the newly formed broadcast networks—Fox, the WB, and UPN—depended on these signature shows to differentiate themselves for younger viewers from the old, established networks.

The Simpsons (from left): Lisa, Maggie, Marge, Homer, and Bart fleeing Springfield in the dark of night; from The Simpsons Movie (2007).

Throughout the 1990s, television content continued to move into areas that made many viewers and special interest groups uncomfortable. Strong language and explicit sexual topics became common both on cable and on broadcast TV, even in the early evening hours. Two of the more controversial series of the decade were cable products: MTV’s Beavis and Butt-Head (1993–97, 2011) and Comedy Central’s South Park (begun 1997). Both animated series that challenged traditional notions of taste, and both part of a new wave of adult cartoons inspired by the success of The Simpsons, these programs demonstrated that the bulk of the experimentation on television was taking place off the major networks. This was especially true of premium channels such as HBO, to which viewers could subscribe for an additional fee. As a pay service, HBO had considerably more latitude with regard to content than commercially supported cable channels and broadcast television. HBO and other pay services do not use the public airwaves, nor do they come into the home unbidden, and they need not worry about advertisers skittish about offending viewers. Furthermore, pay channels are not concerned with ratings. As long as viewers like the service well enough not to cancel their subscriptions, pay cable channels will thrive.

New boundaries: the growth of cable

Premiering in 1972, HBO, as its full name, Home Box Office, implied, originally presented uncut and commercial-free movies as its exclusive offering. In the 1980s, however, HBO began to experiment with the original series format. Some of these series, such as the suspense anthology The Hitchhiker (1983–91) and the sports sitcom 1st & Ten (1984–90), were of little note save for their adult language and some nudity. Others, such as Tanner ’88 (1988), hinted at the high levels of quality that could be achieved on pay services. Created and produced by comic-strip artist Garry Trudeau and film director Robert Altman, Tanner ’88 satirically followed, documentary-style, a fictional candidate for president. Some of the show was shot on the campaign trail itself, and several real political figures made cameo appearances.

HBO moved even farther into its own TV productions in the 1990s. The Larry Sanders Show (1992–98), starring comedian Garry Shandling, did to late-night talk shows what Tanner ’88 had done to political campaigns, to great critical acclaim. Throughout the decade and into the next, HBO presented a range of such adult-oriented, conceptually groundbreaking, and critically well-received series as Oz (1997–2003); The Sopranos (1999–2007); Sex and the City (1998–2004), an adult romantic comedy focused on four women friends in New York City; Six Feet Under (2001–05), the saga of a dysfunctional-family-run mortuary business; Deadwood (2004–06), a hard-edged western; and Curb Your Enthusiasm (begun 2000), an improvisation-based comedy inspired by the real life of its star, Larry David, cocreator of Seinfeld. Ambitious miniseries and made-for-TV movies also became an important part of HBO’s programming mix.

Sarah Jessica Parker as Carrie Bradshaw in the television series Sex and the City.


It is worth noting that HBO was not the only cable service to begin with a very specific product only to later diversify its offerings. This practice, in fact, became the norm for specialized cable channels. For example, as mentioned earlier, MTV, which started out as a 24-hour-a-day music video provider, would eventually introduce specials, documentary series, comedies, game shows, and a wide variety of other program types. Court TV, which was designed as a venue for coverage of significant trials, very early in its history added reruns of crime-oriented movies and old TV series to its schedule. By the end of the 1990s, very few cable channels were still based on the original notion of providing a single type of programming around the clock.

Conglomerates and codes

Network ownership changed again in the 1990s. The Walt Disney Company announced its plans to acquire Capital Cities/ABC in 1995 just one day before CBS accepted an offer to be purchased by the Westinghouse Corporation. Both deals created enormous media conglomerates that included production facilities, broadcast stations, cable channels, and an assortment of other major media venues. In 2000 CBS and Viacom joined together, creating a company that owned, among other things, two broadcast networks, CBS and UPN.

In the arena of regulation, the Telecommunications Act of 1996 was passed as the most comprehensive communications policy since 1934. Described as “deregulatory and re-regulatory,” it continued to encourage free-market competition by eliminating or weakening the industry restraints that were still intact, but it also instituted new rules covering children’s programming and programming with violent and sexually explicit material. The deregulatory aspects of the act included yet another extension of the term of a broadcast license, this time to eight years. Single owners, who had been restricted to 7 TV stations until 1980, 12 in 1985, and 20 in 1994, were now allowed to own an unlimited number of stations as long as the total coverage of those stations did not exceed 35 percent of the total U.S. population. The “duopoly rule,” which forbade any company to own more than one station of its kind (TV, AM radio, FM radio) per market until 1992, was eliminated and replaced by a formula based on the population of the market. The act also allowed networks to own cable companies, and telephone companies could own cable systems in their local service regions, neither of which had been permitted before 1996. The Prime Time Access Rule, which had limited networks to three hours of programming between 7:00 PM and 11:00 PM Eastern Standard Time, was also dropped.

Increased sensitivity toward program content, however, resulted in some new regulations. One of these required that stations air at least three hours of children’s educational programming per week. A heightened emphasis on “family values” and a widely held belief that social violence was to some degree being generated by violent content on TV were addressed by the new policy with the introduction of a program ratings code and a requirement that all new television sets be equipped with a violent-program-blocking device known as a V-chip. Ratings codes were required to appear on the screen for 15 seconds at the beginning of each show: TV-Y designated appropriateness for all children; TV-Y7 meant that the show was designed for children age 7 and older; TV-G indicated appropriateness for all audiences; TV-PG suggested parental guidance—that the program contained material that could be considered unsuitable for younger children; TV-14 suggested that many parents might find the program inappropriate for anyone under age 14; and TV-MA warned that the program was designed for adults over age 17. Beyond the first two categories, the ratings measured violence, sexual content, and coarse language. The ratings system is flawed at best: the age designations—especially those at 14 and 17—seemed to many arbitrary and insensitive to the variation in development between teenagers. Moreover, the application of the system depended entirely upon the sensibilities of those doing the rating, as did the singling out of language, sex, and violence as the categories for judgment. Some complained that only entertainment programs were rated, when in fact many news shows were becoming increasingly violent and sexually explicit. Some producers, of course, claimed that the ratings system was a form of censorship.

A key factor in the operation of the ratings system was the V-chip, which enabled parents to block out individual programs or entire ratings categories, making them accessible only by a secret code. At the turn of the 21st century, the effectiveness of the V-chip remained in question. Many older children have in fact used the adult ratings as an indicator of programs they may be more interested in watching, and many children are more likely to have the technical skills to engage and disengage the V-chip than their parents. It might also be noted that the ratings system actually increased the number of programs with explicit sexual content, violence, or strong language. In the movie industry, content was originally voluntarily regulated by the Hays Production Code (see Will H. Hays), which limited the kind of language and subject matter (especially that of a sexual or violent nature) allowed in a film. The Hays code was superseded by a ratings system in 1966, from which time “adult” content in movies has been more and more common. One might expect that the television ratings system could also produce the opposite of the desired effect. Once a rating is available for adult programming, there is a sense in which that programming has institutionalized permission to exist. As long as a program carries a TV-MA rating, one might argue, then it is free to present content that may have been discouraged before a ratings system was in place. Indeed, many language and sexual barriers have been broken on both cable and broadcast TV since the introduction of the ratings system in 1996.

The 21st Century

Breaking news

The biggest spectacle in television history began on the morning of September 11, 2001. For days the networks and cable news channels suspended all regularly scheduled programming and showed nothing but round-the-clock images, interviews, and reporting about the terrorist attacks on New York and Washington. Saturation coverage of a single news story went back to the assassination of Pres. John F. Kennedy in November 1963, when networks presented nearly continuous coverage over four days. Since the introduction of 24-hour news channels, many other stories had received this intensive treatment as well. When the Persian Gulf War began in January 1991, for example, CNN essentially emerged as a 24-hour war channel. To a lesser but still significant extent, the car chase and subsequent murder trial involving former football star O.J. Simpson, the Columbine High School shootings, and the 2000 presidential election were among the succession of stories to receive what came to be known as "wall-to-wall coverage."

Television’s role on September 11, however, was like nothing that had been seen before. Hundreds of cameras were focused on one burning tower in Manhattan when a second tower was hit by a jet aircraft. That crash, along with the subsequent collapse of both buildings, was broadcast live to millions of stunned viewers, then replayed countless times throughout the following hours and days.

Regular programming began to return in the following weeks, but with noticeable tenuousness. Every one of the late-night comedians—Letterman, Leno, Kilborn, O’Brien, and the ensemble of Saturday Night Live—felt obliged to spend several minutes of their first episode back discussing the difficulty of performing comedy under the circumstances of such a profound national tragedy. On The Daily Show, Jon Stewart fought back tears while adding his thoughts to the discussion. After an awkward few weeks, however, the late-night comedies, and American popular culture in general, had returned to business as usual.

Cable news as entertainment

During important breaking news stories, ratings for cable news channels always go up. The problem is how to keep them up even when there are not big stories being reported. One way is to present personalities that audiences would want to watch every day, regardless of what is happening. This model, designed after the opinionated shows on talk radio, was employed with great success by the Fox News Channel, which was launched in 1996 and before long was outperforming both CNN and MSNBC in the ratings. Two conservative personalities, Bill O’Reilly and Sean Hannity, emerged as stars of Fox in the late 1990s. MSNBC tried to counter Fox’s prime-time strategy with a liberal personality, Phil Donahue, in 2002, with considerably less success: O’Reilly was regularly outperforming Donahue by a factor of six. In 2003 MSNBC introduced Countdown with Keith Olbermann and then, in 2008, The Rachel Maddow Show. Although these prime-time opinion shows did not earn audience numbers as high as their counterparts on Fox, MSNBC’s ratings did climb considerably. Opinion shows became the norm during prime time. Even CNN, on its Headline News Channel, abandoned its usual repetition of 30-minute headline reports during prime time in favour of personality-driven shows featuring the likes of Nancy Grace and Glenn Beck (who moved to Fox in 2009).

The return of the game show

The biggest prime-time story of the brand-new century was a surprising one. After a decades-long absence from the network prime-time schedules, an evening game show was introduced in August 1999 on ABC with astonishing results. Who Wants to Be a Millionaire, hosted by TV talk-show veteran Regis Philbin, began as a series of limited runs, functioning as a game show miniseries of sorts. In August, November, and January the show aired on consecutive nights—as many as 18 in a row. By January it was not uncommon to see the seven daily installments of the show holding all seven of the top slots in the Nielsen ratings for the week. The show’s ratings continued to climb, and by the time it was finally given a regular place in the schedule—three times per week starting in February 2000—it had become a cultural phenomenon, reaching an audience of more than 30 million per episode. Based on a British series of the same title, Who Wants to Be a Millionaire had a simple premise: contestants, selected by phone-in competitions open to the public, were asked 15 questions of increasing value if answered correctly, the last of which was worth a million dollars. During the process, a contestant who was stumped for an answer was allowed three assists: phoning a friend, polling the audience, or having the four multiple-choice answers reduced by half.

The idea to bring game shows back to prime-time television was a natural one. The game show had been a viable genre twice before: once on radio and again on television in the 1950s. In daytime programming and syndication the genre had never gone away, and shows such as Wheel of Fortune (NBC, 1975–89; syndication, 1983– ) and Jeopardy! (NBC, 1964–75; 1978–79; syndication, 1984– ) were among the best syndicated performers throughout the 1980s and ’90s. Any negative associations left over from the quiz show scandals had dissipated, and, more important, the shows were inexpensive—a crucial factor at the turn of the 21st century, when budgets for other prime-time shows were spinning out of control. Although audiences responded enthusiastically to Who Wants to Be a Millionaire, the other three game shows introduced by Fox, NBC, and CBS on the heels of Millionaire’s success did not even make it to the next season.

In the age of target marketing, demographically sensitive programming strategies, and proliferating programming options, Who Wants to Be a Millionaire seemed to be able to attract almost everyone. The first questions asked of each contestant were extraordinarily simple, aimed at the very young. From there, questions appealed to the cultural memories of every generation. Just as the network era was coming to a close—just as the memory of everyone watching the same thing at the same time was fading—Who Wants to Be a Millionaire reminded viewers what the experience of network TV used to be like all the time. The template of the show proved adaptable to local versions around the globe, one of which was featured in the Oscar-winning film Slumdog Millionaire (2008). The show evoked the 1950s, not only because it was a prime-time quiz show but because it attracted an audience that was as wide and diverse as the TV audience had been in the past. Cable, direct satellite, the VCR, and the Internet had shattered that audience into fragments during the 1980s and ’90s, but in 2000 this modest game show reminded viewers of what had been one of television’s greatest appeals.

Reality TV

“Reality TV” was one of the most significant new program developments of the new century, though the genre is in fact nearly as old as the medium itself. Live variety shows had taken cameras into the streets in the 1950s, and Candid Camera, which surreptitiously filmed people responding to elaborate practical jokes, debuted on ABC in 1948 (with stints on all three networks until 1967, its longest tenure coming on CBS [1960–67], before it was revived in 1989–90 and again in 1998). With the appearance of Real People (NBC, 1979–84), however, the genre began to thrive. Called “infotainment” by some critics and “schlockumentary” by others, Real People presented several short documentaries per episode featuring “real people” who did unusual things: one man ate dirt, for example, and another walked only backward. The program’s imitators included That’s Incredible! (ABC, 1980–84) and Those Amazing Animals (ABC, 1980–81). As home-video technology spread in the 1980s and ’90s, entire shows were designed around content produced by amateurs. ABC introduced America’s Funniest Home Videos (ABC, begun 1990), featuring tapes sent in by home viewers hoping to win prize money. When that show immediately reached the Nielsen top 10, it was followed by America’s Funniest People (ABC, 1990–94), a sort of updated version of Real People that mixed professional and amateur video productions.

Reality shows began taking on other forms as well. America’s Most Wanted (Fox/Lifetime, 1988–2012) and Unsolved Mysteries (NBC/CBS, 1988–99; Lifetime, 2001–02) used actors to dramatize stories about crimes for which the suspects were still at large. Traditional journalists decried the use of these reenactments, but hundreds of criminals were apprehended as a result of viewers’ calling the station in response to photographs of the suspects that were shown at the end of each episode. In Cops (Fox, 1989–2013; Spike, begun 2013), a camera crew rode along with the police as they patrolled various urban settings. Episodes of Cops had been taped in more than 100 cities by the end of the century. The reality genre owed much to An American Family, a 12-part documentary series that aired on PBS from January to March in 1973. In the making of this series, camera crews followed the Louds, a Santa Barbara, Calif., family, for seven months, revealing, among other things, the breakup of the parents’ marriage and the openly gay lifestyle of son Lance, a first for a television series.

At century’s end, however, the reality genre was tending more toward voyeurism and less toward reality. In spite of its title, MTV’s The Real World (begun 1992) was much more contrived than An American Family, and it set the style for future series of its kind. The Louds, after all, were a real family, as were the officers that were portrayed in Cops. For each new season of The Real World, however, seven young adults who had never met before were selected from thousands of applicants to live together for several months in a large MTV-supplied apartment or house in a major city. Cameras recorded them both inside and outside their home, and the footage was then edited into 13 half-hour episodes per year. It was, in effect, a documentary about a totally contrived and artificial situation. Eight years after the debut of The Real World, CBS picked up on the idea, introducing two series, both based on similar European shows, that brought the voyeuristic genre to a much larger audience than ever before. For Survivor (CBS, begun 2000), 16 applicants were selected to spend some 39 days on an uninhabited island in the South China Sea under the scrutiny of a hundred cameras. Taped footage was edited into 13 episodes. Although the “survivors” were forced to cooperate with each other for their daily needs and in competitive events that were set up by the producers, conflict was injected by forcing the group to vote one of their fellow castaways off the island at three-day intervals. The ultimate survivor at the end of the series won a million dollars. A month later, CBS debuted a variant of the genre, Big Brother, which featured 10 people locked in a house for the summer. Contestants on Big Brother were also voted out until one winner remained. It aired on consecutive nights during the week and included one episode per week that was broadcast live; there was also an Internet component, which allowed online viewers to access four cameras in the house 24 hours per day. In subsequent seasons the premium cable channel Showtime offered an “after-hours” version of the show.

When reflecting on the ’90s, it’s impossible to ignore how pivotal the decade was for animated films. So many films that are now considered classics — Beauty and the Beast, The Lion King, Princess Mononoke, Toy Story, and The Iron Giant, to name a few — were released within a few years of each other. The decade broke open the barriers placed on animation in regard to what it could achieve and whom it could appeal to. Not since Snow White and the Seven Dwarfs in 1937 had there been such groundbreaking advancement in the world of animated features.

Two of the most significant achievements of the period that did much to set the stage for where we are today are Beauty and the Beast’s Best Picture nomination at the 1992 Oscars — the first-ever for an animated feature — and Toy Story’s success three years later as the first computer-animated feature ever released.

Beauty and the Beast’s nomination represented a new era for animated filmmaking, opening up the opportunity for animated features to be taken more seriously, especially as Best Picture contenders. While a nomination is not the only mark of merit that matters, it is an achievement that goes a long way toward influencing cultural perception, especially at a time when animated films, perhaps even more so than today, were thought of primarily as children’s entertainment.

The year Beauty and the Beast was up for Best Picture (among other Academy Awards), the top category was still limited to five nominees, and The Silence of the Lambs won the Oscar. It’s probably safe to say that despite not winning Best Picture that year (it won only in the original score and original song categories), Beauty and the Beast made just as large an impact on film culture when viewed from a contemporary perspective.

The Disney Renaissance was in its very early stages when Beauty and the Beast was released in 1991. Disney had spent the previous decade making mostly non-musicals aimed at children (including The Fox and the Hound, Oliver & Company, and The Great Mouse Detective), but The Little Mermaid arrived in 1989 to wide success and began a new era for the studio. To many critics at the time, Disney seemed to be simultaneously harkening back to films such as Snow White and Cinderella while also feeling very forward-looking in style and technique. And when Beauty and the Beast came two years later, it had these elements and more.

Some of the most beautiful and memorable musical numbers in film to date revolve around the songs “Belle” (an Oscar nominee), “Be Our Guest” (also an Oscar nominee), and “Beauty and the Beast” (the Oscar winner). They work harmoniously with the film’s story and script in a way past Disney efforts did not, because Beauty and the Beast didn’t rely on animators and story artists alone to build a narrative. The film had a devoted writer, Linda Woolverton, who worked with songwriter Howard Ashman to craft a story and lyrics that brought dimensional characters and narrative to life.

“I hit it off right away with Howard, even though I didn’t come from musical theater,” Woolverton said of their collaboration in an oral history of the film’s making for Entertainment Weekly. “Howard and I wanted to make a sea change in the Disney heroine. Together we conjured up Belle, who loved to read.”

Then there is the groundbreaking use of computer animation in the film’s ballroom scene. The digitally animated location provides not only spatial depth to the scene but emotional depth as well. Belle and the Beast are encompassed by a gorgeous ballroom, with a three-dimensional chandelier and pillars, as they move around the dance floor, falling in love for the first time. The virtual camera movement allows us to absorb the scenery fully, swaying around the room along with them.

Bringing the ballroom to life in this way emphasizes the more fantastical elements of the fairy tale in a way that could not have been accomplished as effectively without the movement and dimensions provided by computer animation. While the story and music were lauded by many critics, it was this technical advancement that broke through as a sign that Disney was really investing itself in the future of animation.

Meanwhile, as production on Beauty and the Beast was under way, Disney was working with the then-independent Pixar Animation Studios on the development of the CAPS (Computer Animation Production System) program, an effort that began around 1986. After John Lasseter’s computer-animated shorts Luxo Jr. (1986) and Tin Toy (1988) received Academy Award nominations, with the latter winning the Oscar, Pixar wanted to begin working toward producing a feature-length film.

Although the Pixar team went into Toy Story focused on the idea of making the first computer-animated feature, the story and script were just as important in their minds. A tidbit from David Price’s book The Pixar Touch highlights the fact that Lasseter and Pete Docter, both relatively new to the art of screenwriting, attended a three-day seminar in LA given by script guru Robert McKee, and when they returned, “McKee’s teachings became the law of the land at Pixar.”

When Toy Story was finally released in 1995, with Disney handling distribution, it was a critical success. Roger Ebert wrote in his review, “Imagine the spectacular animation of the ballroom sequence in Beauty and the Beast at feature-length and you’ll get the idea.” The film also became the highest-grossing movie of the year. Audiences latched onto not only the computer animation style but also this type of original storytelling with mass appeal. Pixar immediately made its cultural mark and carved out its future brand.

Within the success narratives of Beauty and the Beast and Toy Story, there are themes that are prevalent throughout both: an evolving understanding of the art form; an emphasis on creating a fully realized story; and a desire to appeal to large audiences of all ages, making interesting and smart family entertainment.

Both Beauty and the Beast and Toy Story have become so ingrained in our popular culture at this point that it’s easy to forget the strides they made in these areas more than 20 years ago, especially because computer animation has become the primary format of animated films today. In addition to Pixar, the current line-up is filled with features from Illumination Entertainment, Sony Animation, DreamWorks Animation, and even Walt Disney Animation’s own computer-animated movies. Each studio has taken on its own style and approach to the format as well.

Thinking of Toy Story as the first computer-animated feature almost makes it feel ancient, yet it remains very relevant, especially now that a third sequel has come out 24 years later. Similarly, the 2017 live-action remake of Beauty and the Beast feels much like a retread of the original with nothing new to add to our current culture or the story itself. At the same time, it did breathe life back into the 1991 film by bringing it back to our attention and reminding us why it was so enjoyable the first time. The music especially is still spectacular in a loud theater.

More than the Beauty and the Beast redo or any of Disney’s other live-action reimaginings, this year’s remake of the 1994 animated feature The Lion King most resembles the ambitiousness Disney and Pixar showed with the original Beauty and the Beast and the first Toy Story in the 1990s. This time, the computer animation on display is meant to give the complete illusion of live action, so much so that it’s not too far off to think of it as setting the stage for future possibilities of the technology, as Pixar did in 1995.

Beauty and the Beast’s and Toy Story’s technical achievements may have been what caught everyone’s attention at the time, but the prominence placed on their stories is perhaps what makes them the more interesting feats now. Maintaining narrative quality while making technological advances is admirable, especially considering that Pixar has since become one of the most highly regarded studios in terms of how it approaches storytelling. Finding Nemo’s nomination for Best Original Screenplay at the 2004 Oscars is perhaps the best summation of how far Pixar had come in a little under a decade, not only in its storytelling craft but in its focus on original stories. Sure, the studio has become more of a sequel machine in recent years, but the original stories it continues to produce are unmatched in this day and age.

Beyond the sequels and remakes of today, though, Beauty and the Beast and Toy Story made possible the wide range of animated entertainments we enjoy today. They ultimately expanded the possibilities of the art form and placed focus on storytelling in ways that are still prominent and will continue to be for future generations.

Illustration (detail): Brad Holland

The 1990s encompassed both old and new ways of making illustration. Oil and acrylic painting, watercolor, gouache, pastel, and pen and ink techniques were still the primary approaches being used by most illustrators, but there were many artists eager to explore drawing and painting software and learn the use of digital tools. The contrast between the look of traditional vs. digital media can be easily seen in the work of Kinuko Craft (watercolor and oil) and Don Arday (digital). The painterly illustrations of Brad Holland and Peter Fiore showed that classic traditions continue, while inventive ways of using software were appealing to artists like Mick Wiggins.

Kinuko Craft, advertising illustration, 1998

Don Arday

Peter Fiore, institutional illustration

Mick Wiggins, digital illustration

Desktop computer technology forced every graphic designer, like it or not, to leap quickly into learning how to do their work digitally. They found that there were benefits of personal control and greater ease in accomplishing certain tasks, but they also discovered new responsibilities. They no longer outsourced typesetting services but had to become typesetters themselves. They had to learn to scan and retouch photographs and artwork rather than passing the job on to specialists in reproduction. A digital process replaced old, hand-done production techniques, and designers had to take full responsibility for whether the files would print properly. This meant becoming skilled at a half-dozen or so software applications while spending more and more time in production and less and less time doing design concept work and commissioning illustration. With an overburdened schedule, many designers found it was faster and easier not to commission illustration at all and turned to “stock” illustration, which was created in advance on a variety of generic subjects and could be applied to a variety of purposes. A mixed blessing, this meant a new source of income for illustrators (often from the sale of rights to pieces already completed), but it also meant that as more and more illustration was bought this way, there would be fewer and fewer commissions for original work. Some feared that this new structure for buying and selling generic illustration would devalue the art form, making it just a decorative commodity.

Nancy Stahl, vector illustration

All illustrators had to make a decision about incorporating the new digital tools into their careers. Those who loved the tactile experience of traditional art materials and hated computers avoided the issue altogether. Many learned just enough about computers to scan artwork and create digital files for archiving and emailing to clients, but the endless potential and challenge of the new digital tools was compelling to others. Groundbreaking illustrators began to create digital illustrations that showed great variety and accomplishment, but at the expense of logging hundreds of hours to master the new tools. Illustrator Nancy Stahl, for example, had used a traditional gouache technique to create flat color shapes, but found that, once mastered, vector-drawing programs could create the same effects with even more control.

Publications, particularly those that dealt with business or technology, were quick to adopt digitally created pictures, but other clients were skeptical and saw the new work as less artful, formulaic, and slavish to the particular software used to create it. The layering and controlled-transparency capabilities of Adobe Photoshop inspired many artists and designers to create montage illustrations, yet artists and designers also found that the digital toolbox placed its own limitations on what was possible. Artists seeking originality pushed deeply into software tool palettes to create new and personalized effects, and some also incorporated traditional media. Dave McKean and Eric Dinyer combined photography with digital and traditional painting techniques, or even with sculpture and found objects.

Dave McKean, illustration, The Rolling Stones’ Voodoo Lounge Tour booklet, 1994

Eric Dinyer, magazine illustration, 1999

Accompanying the technological challenges of the 1990s were also economic ones. A recession in the early 1990s forced design firms and ad agencies to deal with cautious clients less willing to pay for services that they might not absolutely require. Printers, digital service bureaus, and even ordinary companies were all forced to invest in digital equipment and software and a never-ending cycle of upgrades. Designers suffered layoffs as agencies merged, downsized, or folded, and enthusiastic young self-learners replaced those who weren’t skilled in “desktop publishing” and electronic production. Illustration fees stagnated, so that throughout the 1990s artists were being paid at 1980s rates. The best and most experienced illustrators continued to be seen in all the major publications—although on fewer illustrated pages—but it was getting harder for someone just entering the field to make an acceptable living. By the end of the 1990s even those who had previously been able to earn very good livings were finding that they had to seek other revenue sources to augment their reduced incomes as freelance illustrators.

And finally, the rapid growth of the internet was another “sea change” of the digital age. Businesses and institutions developed websites and soon realized the benefits of international exposure and e-commerce. They needed artwork and graphics for their sites, opening a new market for freelance illustrators and even providing full-time employment opportunities for those who could create digital art in the workplace. The internet also began to serve individual illustrators as an additional means of marketing and sales. Artists set up personal web pages that showed their portfolios electronically and even began to sell art and reproductions online. Reps and stock agencies created web sites too, as a convenient and fast way of getting art seen and sold to their clients. By the turn of the new century, cultural, creative, and business adjustments had been made, and an era had begun where digital tools and the internet were simply facts of life. Willing artists adapted to new technologies, personalized and creatively interpreted new mediums, and moved forward with the changing times.

10 significant books of the 1990s


Tim O’Brien, The Things They Carried (1990)

The Things They Carried was O’Brien’s third book about Vietnam, but it’s frequently heralded as one of the best books ever written about the war. It sold “well over two million copies worldwide” and was a finalist for the Pulitzer Prize as well as the National Book Critics Circle Award. “The Things They Carried has lived in the bellies of American readers for more than two decades,” A. O. Scott wrote in 2013. “It sits on the narrow shelf of indispensable works by witnesses to and participants in the fighting, alongside Michael Herr’s Dispatches, Tobias Wolff’s In Pharaoh’s Army, and James Webb’s Fields of Fire.” As far as its enduring legacy, Scott goes on:

In 1990, when Houghton Mifflin published the book, Vietnam was still recent history, its individual and collective wounds far from healed. Just as the years between combat and publication affected O’Brien’s perception of events, so has an almost exactly equal span changed the character of the writing. The Things They Carried is now, like the war it depicts, an object of classroom study, kept relevant more by its craft than by the urgency of its subject matter. The raw, restless, anguished reckoning inscribed in its pages—the “gut hate” and comradely love that motivated the soldiers—has come to reflect conventional historical wisdom. Over time, America’s wars are written in shorthand: World War II is noble sacrifice; the Civil War, tragic fratricide; Vietnam, black humor and moral ambiguity.

I’d argue that The Things They Carried is now itself a one-volume shorthand for the Vietnam War—or the closest thing to it.

Tony Kushner, Angels in America (1991)

Look, a play! Kushner’s Angels in America: A Gay Fantasia on National Themes was a huge hit and the subject of national discussion when it was first performed, and it won the Pulitzer Prize for Drama, two Tony Awards for Best Play, and the Drama Desk Award for Outstanding Play. In 2005, John Lahr called it “the first major play to put homosexual life at the center of its moral debate, [covering] territory that ranged from Heaven to earth, from the AIDS epidemic to conservative politics, encapsulating, in its visionary sweep, the sense of confusion and longing that defined late-twentieth-century American life.”

“It gave a language to that generation,” the director George C. Wolfe, who staged both Angels in America and Caroline, or Change on Broadway, says. “It gave playwrights permission to think about theatre in a whole new way. A play could be poetic, ridiculous, fragile, overtly political, sentimental, and brave all at the same time. . . . [Angels in America] was an epic discourse on American life that mixed social reality with theatrical fantasy, naturalism with Judaism and magical realism. It told its story in numerous dialects—camp, black, Jewish, Wasp, even Biblical tones. At the same time, it provided a detailed map of the nation’s sense of loss.”

. . .

Twenty-four characters, eight acts, fifty-nine scenes, and an epilogue: Angels in America turned the struggle of a minority into a metaphor for America’s search for self-definition. “I hate this country,” a gay black nurse called Belize says to Louis. “It’s just big ideas, and stories, and people dying, and people like you. The white cracker who wrote the national anthem knew what he was doing. He set the word ‘free’ to a note so high nobody can reach it.” Although “Angels” was not the first play to explore the AIDS pandemic—Larry Kramer’s polemical “The Normal Heart” (1985) preceded it—it was the first to explore the particular claim of the disenfranchised to a romantic vision of America. “We will be citizens,” Prior announces to the audience at the finale. “The time has come.”

It was a huge success, “hailed as a turning point for theatre, for gay life, and for American culture,” and its legacy was only further secured by the HBO miniseries, which, in 2004, won a then-record-breaking 11 Emmys.

Denis Johnson, Jesus’ Son (1992)

I don’t know about its national importance, necessarily, but this here is a literary website, and few books have had as direct and intense an impact on the literary world as Johnson’s cult story collection. I mean, it’s safe to say that almost every writer has read it, and about half of them have tried to emulate it (“So many people want to write this book over again,” Michael Cunningham once said). When I was accepted into an MFA program, another writer friend gave me a copy and inscribed it: “So you’ll know what everyone is talking about.” I read it, and found out, but personally prefer Train Dreams. However, I am rather in the minority. But the book is more than just a cult favorite, as William Giraldi has attested in Poets & Writers.

It’s beautiful to see, back pockets sprouting Jesus’ Son, but I’ve wondered: Do all those hip young men believe “I knew every raindrop by its name” can mean anything they want it to mean? Are these back pockets evidence of what is lazily referred to as the book’s “cult following”? Consider that in the novel More Die of Heartbreak (William Morrow, 1987), Saul Bellow has that wonderful line to the effect that cults are neither that hard to get nor that much to be proud of. If ever you hear that a writer has a cult following, pause to remind yourself what a cult actually is and how cults usually end. Jesus’ Son, the preeminent story collection of the American 1990s, is worthy of much more than mere cultism.

. . .

The collection is singular in its alloy of rarities. It wields a visionary language that mingles the Byronic with the demotic—a language of the dispossessed, half spare in bewilderment, half ecstatic in hope. There’s the bantam power of its brevity—you can read the book in one sitting—and the pitiless, poetic excavation of an underground existence bombed by narcotics, of psyches that prefer the time of their lives to the lives of their time. It boasts a deft circumvention of that tired trope polluting so many American stories of addiction: the trek from cursed to cured, from lost to loved, from breakdown to breakthrough. It also maintains an effortless appropriation of elements from the three most important story writers of the American twentieth century: Ernest Hemingway’s sanctifying of the natural world in The Nick Adams Stories; Flannery O’Connor’s spiritual grotesquerie and redemptive questing; and Raymond Carver’s noble ciphers manhandled by the falsity of the American Dream (Johnson was one of Carver’s drinking compeers at Iowa in the early 1970s).

“We go to Jesus’ Son precisely because in its most sublime moments it reveals to us a condition both lesser and greater than human,” Giraldi concludes. “We go to it for the flawlessness of its aesthetic form, its transformative spiritual seeing, and the beauty, the deathless beauty, of sentences that sing of possible bliss.”

Donna Tartt, The Secret History (1992)

“How best to describe Donna Tartt’s enthralling first novel?” wondered Michiko Kakutani in a 1992 review. “Imagine the plot of Dostoyevsky’s Crime and Punishment crossed with the story of Euripides’ Bacchae set against the backdrop of Bret Easton Ellis’s Rules of Attraction and told in the elegant, ruminative voice of Evelyn Waugh’s Brideshead Revisited. The product, surprisingly enough, isn’t a derivative jumble, but a remarkably powerful novel that seems sure to win a lengthy stay on the best-seller lists.” Well, she wasn’t wrong—everyone’s favorite novel from the 90s was a full blown phenomenon, hyped to infinity but with good reason, both critically and commercially successful—and though not every critic was on board, the Cult of Donna was born.

Plus, as Ted Gioia put it, “The Secret History is, by any measure, a significant fiction, and arguably the book that tilted the scales away from the minimalist 1980s fictions with their Raymond Carveresque starkness, and toward the more maximalist sensibility that has been in the ascendancy in recent years.”

Jeffrey Eugenides, The Virgin Suicides (1993)

The Literary Hub office was split over whether to include The Virgin Suicides here or wait for the next decade to champion Middlesex, but in the end, we couldn’t think of the 90s without Eugenides’ debut. It is a rare book that feels absolutely essential to and evocative of the decade—but also not at all bound by it. As Emma Cline wrote in a new introduction to the novel,

Even as Eugenides interrogates the postwar suburban dream, the “dying empire” of a Michigan town, there’s a sense of timelessness, the setting both immediate and otherworldly, toggling between the daily boredoms of teen-agers and a realm almost mythic: when Cecilia slits her wrists, the paramedics with the stretcher are described as “slaves offering the victim to the altar,” Cecilia as “the drugged virgin rising up on her elbows, with an otherworldly smile on her pale lips.”

Twenty-five years later, I’d argue it holds up. The fact that it spawned an equally-iconic film is only icing on the cake.

David Foster Wallace, Infinite Jest (1996)

Yes, well. Despite the fact that lots of annoying dudes love this book, and Wallace’s own unignorable abusive tendencies, it’s more or less the ur-text of the literary 90s. The 1,000+ page tome was a bestseller (if not on the level of some of the others on this list) and a landmark literary event. The book, wrote one reviewer upon its publication, “has been moving toward us like an ocean disturbance, pushing increasingly hyperbolic rumors before it: that the author could not stop writing; that the publisher was begging for cuts of hundreds of pages; that it was, qua novel, a very strange piece of business altogether. Now it’s here and, yes, it is strange, not just in its radically cantilevered plot conception but also in its size (more than a thousand pages, one tenth of that bulk taking the form of endnotes): this, mind you, in an era when publishers express very real doubts about whether the younger generation—presumably a good part of Wallace’s target audience—reads at all.” People read—or at least people bought—and the book became a touchstone of literary culture, one that we’ve been arguing over and dissecting and refusing to read and forcing others to read and needing help to read ever since.

“Read today, the book’s intellectually slapstick vision of corporatism run amok embeds it within the early to mid-1990s as firmly and emblematically as “The Simpsons” and grunge music,” wrote Tom Bissell on the book’s 20th anniversary.

It is very much a novel of its time. How is it, then, that Infinite Jest still feels so transcendentally, electrically alive? Theory 1: As a novel about an “entertainment” weaponized to enslave and destroy all who look upon it, Infinite Jest is the first great Internet novel. . . . That 20 years have gone by and we still do not agree what this novel means, or what exactly it was trying to say, despite saying (seemingly) everything about everything, is yet another perfect analogy for the Internet. Both are too big. Both contain too much. Both welcome you in. Both push you away.

Bissell has some other theories, but most important to our purposes is his final one: that “Infinite Jest is unquestionably the novel of its generation.”

Helen Fielding, Bridget Jones’s Diary (1996)

I don’t know what to tell you. This book is good, and truly funny—and it’s also widely credited with kicking off an enormous wave of “chick-lit” (and attendant money for authors and publishers) on both sides of the pond. It not only reflected the culture of the 90s but also invented some of it—or at least a lot of its terminology. Were Americans saying “fuckwit” before Bridget? I think not. In closing, here is a blurb from Salman Rushdie: “Even men will laugh.” Even men, indeed.

Chuck Palahniuk, Fight Club (1996)

Typically, when it comes to Fight Club (and Palahniuk in general) I like to abide by the first rule of fight club. But alas, it’s the 90s, and there’s no ignoring it. The book was well reviewed but sold modestly when it was first published—around 5,000 copies. When the film came out in 1999, however, it became a cult phenomenon, particularly among young white men, and sales of the book skyrocketed.

But in retrospect, its legacy hasn’t been great. These days, Fight Club is an inspirational text for incels, who look to it “as an example of why one shouldn’t underestimate ordinary frustrated men.” The term “snowflake” (in its current usage as an insult favored by right-wing boors, at least) comes from the novel, and Palahniuk is proud to have coined it. “There is a kind of new Victorianism,” he told the Evening Standard in 2017. “The modern Left is always reacting to things. Once they get their show on the road culturally they will stop being so offended.”

You can argue whether the novel glorifies or attempts to criticize toxic masculinity (I find the latter reading more palatable but extremely generous, considering the above), but there’s no doubt that toxic masculinity is its subject—a subject very relevant to the 90s, and also the present. As Ted Gioia wrote,

A real subculture exists that matches, to some degree, the rule-breaking ethos depicted in Fight Club. I’m not surprised that, in the years following the publication of this novel, the author was frequently approached by fans who either (1) believed that many elements in it were based on actual events, or (2) were determined to turn them into actual events. From this perspective, Palahniuk has at least surpassed Bret Easton Ellis, whose Harvard-MBA-turned-serial-killer is pure hokum aimed to gain notoriety through sheer shock value.

Which reminds me: American Psycho was published in the 90s too—but Fight Club has beaten it to the top 10 because of the 80s-ness of the former, and because of exactly what Gioia suggests above.

Frank McCourt, Angela’s Ashes (1996)

Listen, as far as 90s memoirs go, I personally wanted to champion a) Mary Karr’s The Liars’ Club (1995) or b) Elizabeth Wurtzel’s Prozac Nation (1994), but the Literary Hub office at large shouted me down. Apparently Angela’s Ashes is kind of a big deal. Sure, it sold some 4 million copies even before the movie came out, was on the bestseller list for 117 weeks, won a Pulitzer Prize and an NBCC award, was translated into over 20 languages, and is more or less the reigning champion of the Misery Memoir. America loved it, but folks from Limerick weren’t as pleased, accusing McCourt of exaggeration. One of them ripped up a copy in front of the writer at a book signing. “He named names. He insulted people,” said the book-defacer. “Most of the people are dead. But the families have to suffer and live with the consequences.” Apparently, even Angela herself (read: McCourt’s mother) once stood up at one of the author’s appearances to yell out, “It didn’t happen that way! It’s all a pack of lies!” Ouch.

Jhumpa Lahiri, Interpreter of Maladies (1999)

To be fair, this book was barely published in the 90s—I’m sure more people read it in the early 2000s—but it was a sensation, and I can’t possibly discount it. Everyone read this book. It was a huge bestseller, selling upwards of 15 million copies, which is astounding for a short story collection, and won both the Pulitzer (making Lahiri the first person of South Asian descent to win an individual Pulitzer Prize) and the PEN Award. In World Literature Today, Ronny Noor wrote that “The value of these stories—although some of them are loosely constructed—lies in the fact that they transcend confined borders of immigrant experience to embrace larger age-old issues that are, in the words of Ralph Waldo Emerson, ‘cast into the mould of these new times’ redefining America.”

The 1990s was a decade that changed the comic book industry forever in any number of ways, whether it was the speculator-fueled boom and bust that killed a number of smaller publishers; massive storylines like the Death of Superman; Spider-Man being replaced temporarily by a clone; the removal of a number of key characters from the Marvel Universe due to publisher decree; or a number of business decisions that have repercussions to this day. 

Image Comics and the frenzy that preceded it

The story of the ’90s really begins in the 1980s. Comics had always been about star creators, but by 1986, people like Frank Miller and Alan Moore had become iconic. Younger artists like Jim Lee and Todd McFarlane were watching as that happened. At the same time as Miller and Moore became stars, the Ninja Turtles became a phenomenon, showing there really was a chance to get rich quick by creating and selling comics. Then there was the 1989 Tim Burton Batman movie, which showed comics could get attention and popularity from the general public. During those few years, McFarlane was becoming a star on Amazing Spider-Man, and he was paying attention to what was happening to the older generation.

Those trends came to a head in 1990 with McFarlane’s “adjectiveless Spider-Man,” which set him up as a star creator and sold a cumulative 2.35 million copies. That comic showed a young star creator could sell a crazy number of comics and demonstrated to people both inside and outside comics that there was a new era dawning.

From there, the ’90s as we think of them really started. The next year Jim Lee’s X-Men No. 1 sold over 8 million copies, and that runaway sales success sparked a gold rush of people looking to pay for their kids’ college funds with profits from holographic issues of Fantastic Four, The Death of Superman, Turok: Dinosaur Hunter, and Youngblood. When they found that their copies of Bloodshot No. 1 were worthless because everyone owned that comic, the realization helped spark the bust in sales that crushed the industry for the rest of the decade.

At the dawn of 1992, comic books were booming. Tim Burton’s Batman had kicked off a new wave of big-budget film adaptations. Superhero products could be found in nearly every aisle of every department store and supermarket. New comic shops were springing up in shopping centers and malls, publishers were seeing their highest sales figures in years, and new companies were making names for themselves as serious players. And Marvel Comics was the unquestioned big fish in the pool, with their stock booming in the six short months since they’d gone public, and an unparalleled creative stable.

But big changes were afoot. In December of 1991, Todd McFarlane, Rob Liefeld, and Jim Lee, Marvel’s three biggest artists, informed publisher Terry Stewart that the company’s policies toward talent were unfair, that creators were not being appropriately rewarded for their work, and that they were leaving, effective immediately. In the month thereafter, they joined forces with a few more like-minded artists from Marvel’s top-selling titles, worked out a deal with small publisher Malibu Comics for production and distribution, and decided on the title for their new company — recycling a name that Liefeld had originally intended for an aborted self-publishing venture. On February 1st, 1992, a press release was sent out announcing the formation of Image Comics.

The details of who exactly was involved were a bit vague, and conflicting reports appeared almost immediately: George Perez and a number of other name creators were rumored to be participants. But once the smoke cleared, the founders of Image would forevermore be established as McFarlane, Liefeld, Lee, Erik Larsen, Marc Silvestri, Jim Valentino, and Whilce Portacio. (Chris Claremont was also listed in initial press reports, as he had been planning to team with Portacio on a new title called The Huntsman, but once Portacio decided to create his own series from scratch, Claremont turned his focus to projects for DC Comics.)

Just to put in perspective what a big deal this was: in June of 1990, Todd McFarlane’s Spider-Man #1 had become the best-selling comic issue of all time, touching down with 2.5 million copies. In June of 1991, Rob Liefeld’s X-Force #1 broke McFarlane’s record, with five million copies sold. And just two months after that, Jim Lee’s X-Men #1 trounced all previous numbers, selling 8.1 million. These weren’t just some hot-headed hotshot creators — these were the people behind the three biggest comics ever, at the top of their game. In the world of comics, this was like the Olympic Dream Team, The Highwaymen, and United Artists, all rolled into one.

And it wasn’t just a big deal to comic readers, but also to the market in general. Marvel Entertainment Group was a publicly traded company, and comics were becoming big business. Barron’s was the first mainstream outlet to note that many of Marvel’s top talents were leaving, in an article focused on the company’s unsustainable business practices, and once CNN’s Moneyline ran a story on the formation of Image, it was official — the rules of the game had changed.

Of course, as happens with any ambitious new undertaking, there were wrinkles that had to be ironed out. The Image founders had plenty of ideas, but they were suddenly faced with the realities of producing titles from scratch, handling their own scheduling, and operating without editors keeping things on track.

The company was founded on the tenets that each creator would own their own work, each partner would operate with total creative autonomy, and the only IP that the umbrella company itself would control was the Image name and logo — and while this was noble and idealistic, it meant that the company’s launch was a bit disorganized, and everything ran late right from the get-go.

Liefeld’s Youngblood was the first title out of the gate, but issue #1 didn’t go on sale ’til April, a month or so after it was expected. McFarlane’s Spawn #1 was the second Image issue to hit stores, and while it was cover-dated May 1992, it wasn’t actually released until the first week of June. Erik Larsen’s Savage Dragon #1 arrived the last week of June (actually beating its cover date of July), but #2 was delayed until October. Jim Valentino’s Shadowhawk #1 and Jim Lee’s WildCATS #1 didn’t arrive until August. Marc Silvestri’s Cyberforce #1 finally showed up in October.

But late or not, each of these books became a smash, topping sales charts and establishing new benchmarks for the success of independent comics. And while the “collective democracy” approach of the founders meant that some questionable titles ended up getting released over the first few years of the company — case in point: Image Plus #1, a book that consisted entirely of creator bios and Q&As, and contained no comic material whatsoever — the founders also quickly opened their doors to other writers and artists who wanted to create their own properties.

Within a year of Youngblood #1 appearing, Image had launched new titles helmed by creators such as Sam Kieth, Jerry Ordway, Dale Keown, Larry Stroman, Rick Veitch, and Alan Moore.

Some company founders also actively expanded their own corners of the Imageverse — Lee and Liefeld’s respective studios recruited young talent and produced a number of titles that spun off from their core franchises. And the company grew and matured quickly enough that by early 1993, they were able to establish their own central publishing office and cut ties with Malibu.

 

The effects of Image’s success were immediate and far-reaching. Creator-owned comics weren’t just relegated to niche markets any longer, and publishers began to realize that they needed to offer better deals.

Within a couple years, other groups of creators banded together to launch Image-esque imprints of their own, most notably Bravura (published through Malibu), and Legend (published through Dark Horse). For a new generation of comic readers, creators’ rights became a natural part of conversations about the industry, rather than a distant afterthought.

In the years since the company’s formation, Image has shifted, changed, and evolved. Rob Liefeld resigned (or was asked to leave) in 1996 and returned in 2007. Jim Lee left in 1998, bringing his Wildstorm imprint and all associated trademarks and characters to DC Comics, where he continued to oversee his family of titles until he was named co-publisher of DC in 2010. Robert Kirkman, who got his start at Image writing Superpatriot for Erik Larsen in 2002 and went on to create Invincible and The Walking Dead, was made a partner in the company in 2008.

Image’s slate of titles has been in a constant state of flux, as befits a publisher where all rights reside with the creators — at various times, Mike Allred’s Madman, Jeff Smith’s Bone, Colleen Doran’s A Distant Soil, Kurt Busiek and Brent Anderson’s Astro City, and Brian Bendis and Mike Oeming’s Powers have all appeared under the Image banner.

The publisher has also given a home to some of the medium’s most distinctive voices; put out many of the craziest, coolest, strangest, silliest, and smartest titles one could hope to read; and established the most brilliantly diverse slate of any publisher. And under the guidance of publisher (and former Liefeld employee) Eric Stephenson, the company has gone from strength to strength.

 

Comic boom-bust

There were three core causes of the comic book market crash of the ’90s, all of which took root around this time: 1) collectors and speculators; 2) retailers and distributors; and 3) executive boardroom battles within the publishers, specifically Marvel. Let’s look at each of these individually.

Collectors and speculators misjudged the market and misunderstood just what made newsworthy collectibles like the Honus Wagner card and the first appearance of Batman so valuable. In short: they are extremely rare. Those items were made at a time when they weren’t considered collectible, when paper was recycled for the war effort, when kids rolled comics up and stuck them in their back pockets. They were read, traded, and most likely discarded. Not many survived. But companies like Marvel and Upper Deck started to manufacture collectibles and rarities in the form of new first issues and rookie cards with gimmick covers and slick card stock. They featured deaths and major events within their pages. People ate them up en masse and stuck the collectibles, bagged and boarded, in white boxes in the basement, with the promise that in 20 years they could put their kid through college or pay cash for that new Mercedes they kept seeing in the neighbor’s driveway. Superman is dead! That issue will be as important as Action Comics #1 (so they were promised). Batman’s back is broken and there’s a new X-Men #1! Buy, buy, buy. The problem was that everyone got the message.

Another contributor to the crash was the publishers and corporate greed. This one might be complicated, but I’ll do my best to break it down. A year after New World Entertainment began its three-year ownership stint at Marvel (it bought the company in ’86 for $46 million during the Cadence Industries liquidation), Jim Shooter, the man who helped drive the boom of the mid-80s, was fired. The roots of the crash start here. Not long after this, in 1989, Ron Perelman (via the Andrews Group) bought Marvel for $82.5 million. Perelman’s goal was to create something much larger than a comic book company. He wanted an entertainment company and, in the short term, was successful. To appease shareholders, his mandate was to increase prices: cover prices went from 65 cents in 1986 to $1.25 by 1993 (roughly $2.17 in today’s dollars, adjusted for inflation). The title line itself ballooned massively during this time, but so did profits. This massive push was a strain on the bullpen, and ideas started to form in the minds of a few creators to leave and start their own venture. By 1991 Tom DeFalco (Shooter’s replacement as editor-in-chief) and Ron Perelman had taken Marvel public, which caused the stock to shoot up even higher. Perelman’s strategy at this time was to issue “junk bonds” backed by Marvel Entertainment Group’s rapidly rising stock value. Perelman purchased other companies, buying Fleer in 1992 for $340 million and 46% of ToyBiz in ’93. This is where two important players entered the arena: Avi Arad and Ike Perlmutter, both of whom joined the Marvel board of directors. Perelman was also opening holding companies like Marvel Holdings and Marvel Parent Holdings to spread out operating losses and reduce tax payment requirements. These new holding companies also became collateral against which he issued additional junk bonds. Money was good at the top.
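
For readers who want to check that inflation-adjusted figure, the usual conversion simply scales the old price by the ratio of consumer price indexes. A minimal worked example is below; the CPI-U annual averages used (roughly 144.5 for 1993 and 251.1 for 2018) are approximations added here for illustration and are not from the original article:

\[
\text{price}_{\text{today}} \;\approx\; \text{price}_{1993} \times \frac{\text{CPI}_{\text{today}}}{\text{CPI}_{1993}} \;\approx\; \$1.25 \times \frac{251.1}{144.5} \;\approx\; \$2.17
\]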

In the middle of this period, Neil Gaiman delivered a controversial speech comparing the speculator-driven comics market to the Dutch tulip mania.

Distribution changes

During the very late ’80s and early 1990s, two distributors, Diamond and Capital City, shared the burden of delivering comic books to all of the comic book shops across America. During the height of the boom, the number of shops reached somewhere around 10,000. With the promise of big returns and a very healthy market of consumers to sell to, people unfamiliar with the particulars of how to run a shop and manage inventory were getting in on the boom. Collectors and even Wall Street investors were opening shops, drawn by the promise of better ROIs than some bonds after reading about those big auction sales in the paper. To feed the proverbial hunger of all those collectors and speculators at the peak of the boom in ’92/’93, the distributors lowered the “entrance fee” to become a retailer. Just $300 could set you up with an account in good standing and, boom, just like that you’d gone from collector to retailer. But that didn’t mean you had the experience, or the additional cash flow, to continue beyond that. And, just like the collectors, shops misjudged their ordering. Chuck Rozanski of Mile High Comics estimates that 30% of new stock ordered from 1990 to 1994 ended up as overstock. These inexperienced store owners cannibalized sales from more established stores. Publishers only saw sales figures at the retailer level (not what shops actually sold in store). Distributors knew shops were over-ordering, but they were making money, so they let it happen. What comic stores were left with was entire boxes and rooms and basements of unsold comic books and cards. And those unsold books led to the closing of nearly 6,000 comic shops. We’ll talk about the bottom falling out and the bubble bursting in a moment.

So what happened? While most say that the 1993 Valiant/Image inter-company crossover event Deathmate was the nail in the coffin, one could argue that the exodus of creative talent from Marvel to the newly minted Image Comics in 1992 was the start. Why? The Image founders were creators, and when it came to running their own publishing company, they didn’t fully understand the business side. So when it came to their own titles and, eventually, the Deathmate crossover, they were plagued with delays (so-called schedule slip) and with not understanding the other company’s characters and what people liked about them. The delays were so bad that people’s demand for the books, and interest in them, waned. Part of the problem was that stores had already pre-ordered the $4.95 book, based on initial demand, two months before it hit shelves. All those promises of permanence, most notably DC Comics’ “Death of Superman,” an event that made the evening news, proved to be empty. Less than a year after Superman died, DC decided to bring him back. When they did, collectors went to sell their copies of Superman #75 and found that everyone else already had multiple mint copies stored away. The thing they hadn’t understood about Detective Comics #27 or the 1909 Honus Wagner card was that those items were truly rare because they simply weren’t around anymore. Printing “Collectible Issue” on the cover does not make a comic collectible or rare. Some estimates put the print run of Superman #75 in the 3-4 million copy range.

And the bottom dropped out.

By 1994, Marvel was reporting losses in the $48.5 million range. Alan Greenspan, fearing an inflation spike, had instituted a rate hike that knocked down the Dow, and Marvel’s stock fell sharply. Capital City Distributors, in part in response to the massive delays of Deathmate, instituted penalties for schedule slip. This prompted Marvel to buy its own distributor and distribute in-house via Heroes World. Capital City’s competitor Diamond Distributors did not want to lose a third of its business to Marvel’s in-house distributor, so it bid aggressively for exclusivity on Dark Horse, Image, and the DC titles. Comic book shops didn’t like having to put in two orders and pay for shipping twice, and in a large number of cases frankly couldn’t afford it. Smaller publishers such as Defiant, Eclipse, and Malibu were going out of business. Eventually even Capital City, after clients like TSR shut down, went out of business itself, leaving Diamond as the sole distributor for the entire country, with an infrastructure that was far from ready to support shops on that scale.

The Major League Baseball strike of 1994 and the NBA lockout of ’94/’95 killed demand for collectible cards, and sales fell drastically for Fleer and SkyBox, the latter of which Marvel had scooped up in ’95 for $150 million. As money slowed and Perelman kept buying, Marvel’s debt soared to $700 million even after taking ToyBiz public. Perelman had convinced investors to buy $900 million in junk bonds, but these were backed only by Perelman’s own shares.

Both retailers and publishers misjudged the market. “We couldn’t get a handle on how much of the market was driven by speculators,” Perelman said, “the people buying 20 copies and reading one and keeping 19 for their nest egg…” At the worst of the bust, in 1996, there were just 4,000 shops nationwide. Buyers had left, disillusioned, which meant fewer orders, cut titles, and less product for store shelves.

Marvel Bankruptcy

Around Christmas 1996, Marvel filed for Chapter 11 bankruptcy protection, and thus began a corporate boardroom and courtroom war between Ron Perelman (the owner of 80% of Marvel, though that stake’s value was all but pledged to the bonds he had issued), corporate raider Carl Icahn (an investor who was buying up Marvel bonds at 20% of their value), and the ToyBiz team of Avi Arad and Ike Perlmutter. The Chapter 11 filing was partly to save Marvel, since a collapse of Marvel might have meant the collapse of the entire industry, and partly to block Carl Icahn from gaining majority ownership. In the end, Chase Manhattan made its choice, and Ike Perlmutter and ToyBiz won out. In 1997 they bought the rest of Marvel and formed a new company, bringing it out of bankruptcy. The industry, however, truly hit its nadir around the year 2000 and has been slowly evolving since then.

Sandman, Starman, and the Independent Scene

A number of notable, literary comics arose quickly in this period of the early 1990s. Some were driven by the corporate publishers; others were independent.

Here are some important titles, each with a sentence or two about why it mattered:

  1. Hellboy – an independent series created by Mike Mignola. It was wildly successful because of smart writing and even smarter artwork that leans toward graphic design rather than traditional comic art.
  2. Bone – an independent comic, often referred to as the "Lord of the Rings" for adolescents of the '90s. Jeff Smith paired cartoonish characters with naturalistic ones in a story that featured cow races and mystical forces, one of the most truly unique mash-ups of all time.
  3. Sandman – a metatextual story about storytelling by Neil Gaiman and a host of inventive artists, including cover artist Dave McKean and the most celebrated of his interior artists, P. Craig Russell.
  4. Transmetropolitan, Planetary, and The Authority – all written by Warren Ellis, who became one of the most interestingly prolific writers of the 21st century (so far). You probably know him as the main writer for the Netflix series Castlevania.
  5. Starman – by James Robinson and Tony Harris, a generational tale of a black sheep learning to become a hero. That sounds ordinary, but it was one of the most centered, personal stories to come out of super-hero comics in an age dominated by Image.

Marvel Knights

Here is a link to Marvel’s explanation of how they rebounded.

https://www.marvel.com/oral-history-marvel-knights

Essentially, a couple of dedicated writer/artists named Joe Quesada and Jimmy Palmiotti were given the reins to produce four Marvel titles as Marvel was trying to recover from Chapter 11. The main thing they did to change the industry, and Marvel's situation in particular, was to take chances: they valued artistry and story over "flashy" images, refused to use "tricks" to get people to buy the books, and concentrated on making stories that were re-readable and didn't require the audience to know the backstory or continuity of what preceded them.

The level of success they achieved is a story for when we discuss the 00s in media history.

By Ellen Lupton

How quickly “now” becomes “then.” A few weeks ago, I was looking for examples of experimental typography to show to my MFA students at the Maryland Institute College of Art (MICA). I pulled a book off the shelf called Typography Now Two: Implosion, edited by Rick Poynor in 1996. Twelve years later, Typography Now has become a fascinating piece of history, showing us what ambitious, forward-reaching design looked like at a time when the web was just finding its legs, print was digging in its heels, and digital tools had revolutionized our work flow.

Although some of the material in Typography Now Two reeks of grunge mannerisms and digital-effects mania, much of it still looks totally alive. This work was striving to define what was new for its time, and for many pieces, the freshness stamp has yet to expire. The early ’90s was an extraordinarily fertile period. In the U.S., a far-flung vanguard had spread out from Cranbrook and CalArts, where several generations of designers—from Ed Fella to Elliott Earls—had embraced formal experimentation as a mode of critical inquiry. Emigre magazine, edited and art directed by Rudy VanderLans, provided an over-scaled paper canvas for experimental layout, writing, and typeface design. In Europe, designers including Switzerland’s Cornel Windlin, Berlin’s CYAN, and Britain’s Tomato made mayhem with modernist vocabularies to create dense, dynamically layered posters and publications. What began as a vanguard movement never ossified into “old guard.”  The work shown here still feels marginal—entranced with the edges, challenging visual norms, and contesting values of legibility and order.

Most of the designers featured in Poynor’s book are still active today, and some remain among the field’s most prominent figures. Yet the fervent search for new forms no longer seems to energize the larger profession. Typeface designers are focused on creating useful, solidly researched fonts for general communications rather than high-concept faces addressing questions of chance, decay, and technological breakdown. New modes of experimentation have emerged in areas such as system design, code-driven graphics, and data visualization. Although these areas can yield astonishing visual results, a sense of order and sobriety prevails.

As a critic, Poynor embraced the examples he had collected while seeing them as historical artifacts gathered from a point in time. The book was designed by Jonathan Barnbrook, who lavished Poynor's opening essay with interpretive devices that slow down the easy flow of reading. The main text block jogs left and right around large-scale callouts and visual footnotes. The essay concludes with a collage of quotations that point to designers' anxiety about the relevance and longevity of the typographic avant-garde and its relationship to an advertising culture that had grown temporarily enamored with the new style. (That sure didn't last long.)

Tucked into a corner is this comment by Tobias Frere-Jones: “The contortions of the 1990s will fall out of favor, but not before showing us what the tools can do.”