2000

Will Wright’s The Sims models real life. It is not the first simulation game—Utopia on Intellivision (1982), Peter Molyneux’s Populous (1989), Sid Meier’s Civilization (1991), and Wright’s own SimCity (1989) preceded it—but it becomes the best-selling computer game ever and the most popular game with female players.

2001

Microsoft enters the video game market with Xbox and hit games like Halo: Combat Evolved. Four years later, Xbox 360 gains millions of fans with its advanced graphics and seamless online play.

2002

The U.S. Army releases America’s Army video game to help recruit and communicate with a new generation of electronic gamers, and the Woodrow Wilson International Center for Scholars launches the Serious Games Initiative to encourage the development of games that address policy and management issues.

2003

Valve energizes PC gaming with its release of Steam. The digital distribution platform allows players to download, play, and update games.

2004

Nintendo maintains its dominance of the handheld market with the Nintendo DS, an easy-to-use, portable gaming system packed with two processors, two screens, multiplayer capabilities, and a stylus for the touchscreen. Great games like Mario Kart DS help, too.

2005

Microsoft’s Xbox 360 brings high-definition realism to the game market, as well as even better multiplayer competitions on Xbox Live and popular titles such as Alan Wake.

2006

Nintendo Wii gets gamers off the couch and moving with innovative, motion-sensitive remotes. Not only does Nintendo make gaming more active, it also appeals to millions of people who never before liked video games.

2007

Grab your guitar, microphone, bass, or drums, and start playing Rock Band. That’s what millions of would-be musicians did with Harmonix’s hit title.

2008

Four years after its release, World of Warcraft surpasses 10 million subscribers, making it the most popular Massively Multiplayer Online (MMO) game ever. MMOs create entire virtual universes for players and redefine how we play, learn, and relate to one another.

2009

Social games like FarmVille and mobile games like Angry Birds shake up the games industry. Millions of people who never would have considered themselves gamers now while away hours playing games on new platforms like Facebook and the iPhone.

Advertising in the 00s

A transformational change occurred in the 2000s. Consumer obsession with the latest product from Google or Apple often clouds recognition of the decade’s long-term effects, but things changed profoundly in ten short years.

In 2001 Bill Gates called this new decade “The Digital Decade.”

  • When the decade began, there were 2.6 million broadband households in the US, one out of every 40 homes. Now there are 80 million, or two-thirds of US households. Broadband has gone from rare to ubiquitous.
  • Starting from zero, digital video recorders reached 31 million homes and HDTV reached 51 million in this decade. Together with online video and video on-demand, these gadgets have completely transformed the television experience.
  • Mobile phone subscriptions were up to 270 million by 2009, in a US population of 307 million. (For comparison, mobile phones were in 51 million households in 2000, but back then having more than one phone per household was unusual.) Back in 1999 phones were phones. Now they’re iPhones, BlackBerries and Androids — computers and internet access devices.
  • Portable digital music players have reached 76% of all US households. At the start of the decade, they were in practically none, because the iPod had yet to be introduced. Mark Mulligan calls it “The Decade That Music Forgot.”

And finally, it’s worth noting that Google just celebrated its 10th anniversary. In 1999, most of us hadn’t heard of it yet. And forget social technologies — in 1999, most of the social activity online was in chat and discussion forums.

Looking back on all this from the perspective of media and marketing, it’s clear that consumers lead, media stumble along behind, and marketers follow behind them both.

In 2009, consumers spend 34% of their media time online. As a result, digital marketing spending has gone from $6.2 billion in 1999 to $25.6 billion, or 12% of all marketing spending, in 2009. But marketers still spend most of their energy and dollars on TV, newspapers and radio.

Within those industries, spending shifts more slowly than behavior. Newspaper websites still bring in far less than ads in the printed paper. Video on-demand and online video ad models are still under construction.

But what you can learn from this decade is that consumers move quickly, models move slowly, and marketing moves conservatively. When you see a technology shifting, that’s the time to begin close observation of the models behind it. It will take years for those models to take hold, and in those years, you get the chance to learn. That’s when you need to experiment and figure out how things work, because that’s when it’s cheap and the competition is hanging back. The objective is not to make money right off, but to learn the ropes. Because when the transformation happens — and it will — then you will have the advantage of knowledge.

The History of Online Advertising

1994: The first banner ads appear

[Image: the first banner ad, as it ran on HotWired in 1994]

Image credit: Wired

On October 27, 1994, the world of advertising was forever transformed by a small graphic bearing the presumptuous words, “Have you ever clicked your mouse right here? You will,” in a kitschy rainbow font. The age of banner ads had officially begun.

You can thank (or blame?) Wired magazine’s former online offshoot HotWired for introducing the world to the enduringly ubiquitous banner ad. HotWired was a digital publication, and it needed a way to generate revenue to pay its writers.

The publication devised a plan to set aside portions of its website to sell space to advertisers, similar to how ad space is sold in a print magazine. They called the ad spaces “banner ads,” and charged advertisers an upfront cost to occupy the real estate for a set time period — very different from today’s pay-per-click model. 

AT&T paid HotWired $30,000 to place the banner ad above on their site for three months. The ad enjoyed a clickthrough rate of 44% — a number that would make most marketers gape in disbelief today. To put that in perspective, the average clickthrough rate on display ads today — 22 years later — is closer to 0.06%.

Users enticed to click the mysterious banner were transported to a very early landing page for AT&T. Visitors could click links to view information about landmarks and museums around the world, highlighting the internet’s ability to transport you to different locations virtually.

Craig Kanarick, one of the digital consultants hired to work on the campaign, remembers the team’s goal was to make an ad that didn’t feel like an ad, and actually offered valuable content to users. “Let’s not sell somebody something,” he recalled thinking, “Let’s reward them for clicking on this thing brought to you by AT&T.” 

The banner ad concept blew up as a way for websites to keep their content ungated and free for users, and it wasn’t long before other companies — such as Time Inc. and CMP’s TechWeb — were seeking out advertisers to lease banner space as a sustainable way to scale their sites.

1995: Display ads become increasingly targeted 

As banner ads continued to gain popularity, advertisers became increasingly interested in targeting specific consumer demographics, rather than just placing their ads wherever space was offered and hoping the right people would see them. This led to the beginning of targeted ad placement.

WebConnect, an ad agency that specialized in online ads, began helping their clients identify websites their ideal consumers visited. Now, companies could place ads where their target demographics were more likely to see them.

This was nothing short of revolutionary in the digital advertising space. Not only were companies reaching more relevant audiences, but websites hosting the ads were also able to display banners that were more applicable to their visitors.

WebConnect also introduced the CustomView tool, which capped the number of times a particular user was shown a single banner ad. If a user had already seen an ad a certain number of times, they would be shown another ad instead.

Users tend to stop noticing a banner ad once they’ve seen it a few times, so capping the number of times a user sees an ad helped early online advertisers prevent “banner fatigue.” Ad frequency capping is still a common display ad tactic today.
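WebConnect’s actual implementation isn’t documented here, but a minimal Python sketch of the idea behind a frequency cap, with an illustrative limit of three views, might look like this:

```python
from collections import defaultdict

FREQUENCY_CAP = 3  # illustrative limit; the text does not state CustomView's actual cap

# impressions[(user_id, ad_id)] -> how many times this user has seen this ad
impressions = defaultdict(int)

def choose_ad(user_id, candidate_ads):
    """Serve the first candidate ad the user has not yet seen too many times."""
    for ad_id in candidate_ads:
        if impressions[(user_id, ad_id)] < FREQUENCY_CAP:
            impressions[(user_id, ad_id)] += 1
            return ad_id
    return candidate_ads[0]  # every candidate is capped; fall back to the first

# After three views of the first banner, the user is rotated to the second one.
for _ in range(4):
    print(choose_ad("user_42", ["banner_a", "banner_b"]))
# banner_a, banner_a, banner_a, banner_b
```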

1996: ROI tracking tools begin to improve 

In 1996, banner ads plastered the internet, but advertisers still didn’t have a good process to determine if these ads were actually driving tangible results for their businesses. Marketers needed a way to more efficiently manage their display ad campaigns across multiple websites and report on how users were interacting with their ads.

DoubleClick emerged on the scene as one of the first ROI tools for banner ad campaigns. They offered advertisers a new service called D.A.R.T. (Dynamic Advertising Reporting & Targeting), which enabled companies to track how many times an ad was viewed and clicked across multiple websites.

The most impressive feature of D.A.R.T. was the fact that advertisers now had the ability to track how their ads were performing and make changes to a live campaign. Previously, advertisers needed to wait until a campaign was completed before they could analyze the results and optimize their next banner for better performance. If an ad was performing poorly, they were forced to wait it out.

With DoubleClick, advertisers could see if an ad’s performance was suffering midway through a campaign, and they had the option to make changes. For example, if a marketer noticed their ad was underperforming on one website, they could remove the ad and devote those resources to another website where the ad was performing better.

DoubleClick’s success also gave rise to a new pricing model for online advertising: cost per mille (CPM), or cost per thousand impressions. Previously, websites were paid a flat fee to host banner ads for a predetermined time period. With improved ad tracking, banner pricing transitioned toward an ROI-based model.
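As a rough illustration of the shift (the figures below are invented, not historical rates), the two pricing models differ in what the advertiser actually pays for:

```python
def flat_fee_cost(fee_per_month, months):
    """Old model: a fixed price to occupy the banner slot for a set period."""
    return fee_per_month * months

def cpm_cost(impressions, rate_per_thousand):
    """CPM model: the advertiser pays per thousand impressions actually served."""
    return impressions / 1000 * rate_per_thousand

# Hypothetical campaign figures, purely for illustration:
print(flat_fee_cost(10_000, 3))     # 30000 -- same cost no matter how many views
print(cpm_cost(2_000_000, 5.00))    # 10000.0 -- cost scales with delivery
```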

1997: Pop-up ads quickly rise and fall 

It would be an understatement to say that pop-up ads suffer from a poor image problem. They’ve been called the internet’s original sin and the most hated advertising technique, and one of the original developers has even apologized for creating the underlying code that unleashed them upon unsuspecting web surfers. Even so, these much-maligned ads hold an undeniable place in the history of online advertising.

So who created the very first pop-up? Before you get your pitchforks and torches out, you should know their intentions were good. Ethan Zuckerman, then a developer for Tripod.com, is widely credited with creating the code that enables pop-up ads to open up a new browser window.

“It was a way to associate an ad with a user’s page without putting it directly on the page, which advertisers worried would imply an association between their brand and the page’s content,” Zuckerman wrote in the Atlantic.

Amidst dwindling banner ad clickthrough rates in the late 1990s, pop-up ads first seemed like a way to save online advertising and capture the attention of increasingly ad-blind users. And while pop-ups did force users to pay attention, they didn’t actually translate to real ROI. By the early 2000s, it was standard for web browsers to come with pop-up blocking features.

1999 – 2002: Advertisers turn to paid search and pay-per-click

By this time, the web was expanding rapidly and users needed a better way to navigate the terrain. With search engines steadily gaining popularity, advertisers looking to create ads that were more targeted and less loathsome turned to sponsored search as the next digital advertising frontier.

In 1999, GoTo.com — an emerging search engine company that would later be acquired by Yahoo — introduced the first pay-for-placement search engine service. Advertisers were given the opportunity to bid for top search engine results on particular keywords. Despite some initial outcries that paid search would lead to corrupt results, GoTo.com was able to monetize their search engine through the model.

Pay-for-placement eventually evolved into pay-per-click. Companies bid on search result placement on a per-click basis: e.g., I’ll pay GoTo.com $1 per click if you put my company as the top search result. This led to search results that were largely determined by how much a company was willing to pay. The highest bidders were usually listed first, even above more relevant content, and it was unclear to users which results were paid and which were organic content.

The user experience of paid search was suffering, and one up-and-coming search engine thought they could fix it. Google introduced AdWords in 2000, originally under a pay-for-placement ad model. Google wanted to create a sponsored search experience that generated revenue without compromising the quality and relevancy of search results.

While previous paid search models like GoTo.com relied solely on advertisers’ bids to determine search rankings, AdWords introduced a Quality Score model, which took an ad’s clickthrough rate into account when determining its placement on the search results page. An ad with a lower bid but a high clickthrough rate could still appear above other, less relevant paid ads in the results. The Quality Score model is still used today.
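Google’s exact formula is not public and has evolved over the years, but a simplified Python sketch (with made-up advertisers, bids, and clickthrough rates) captures the difference between the two approaches, approximating ad rank as bid × clickthrough rate:

```python
ads = [
    # (advertiser, bid in $ per click, historical clickthrough rate in %)
    ("BigSpender", 2.00, 0.5),   # pays a lot per click, but users rarely click it
    ("NicheShop",  0.80, 4.0),   # smaller bid, but users click it often
]

# GoTo.com-style pay-for-placement: highest bid wins, relevance ignored
by_bid = sorted(ads, key=lambda ad: ad[1], reverse=True)

# Quality-Score-style ranking: bid weighted by clickthrough rate
by_quality = sorted(ads, key=lambda ad: ad[1] * ad[2], reverse=True)

print([name for name, _, _ in by_bid])      # ['BigSpender', 'NicheShop']
print([name for name, _, _ in by_quality])  # ['NicheShop', 'BigSpender']
```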

2006: Digital ads become hyper-targeted

As social media platforms picked up steam in the mid 2000s, advertisers sought a way to integrate ad content in a way that was both effective and non-intrusive. Marketers wanted a plan of action to reach younger internet users who were increasingly unswayed by banner ads and spending most of their internet time on social networks.

After previously resisting ads on its site, Facebook started working with advertisers in 2006 as a way to increase the young company’s profitability. They started with small display ads and sponsored links, and eventually moved onto ads targeted to a user’s demographics and interests. Despite some controversies along the way, Facebook has proven itself to be a targeted ad pioneer, changing the way that companies reach their desired audiences online.

“Our strategy is much less [about] increasing the volume of ads and much more about increasing the quality of the content and the quality of the targeting to get the right content to the right people,” Facebook founder Mark Zuckerberg said in 2014.

Targeting consumers with relevant ads — rather than bombarding them with a large volume of ad content — has become a standard practice for online advertisers, particularly on social media. Beyond Facebook’s targeting efforts, other social networks such as Twitter, YouTube, and Google+ focus on providing an advertising experience for users that doesn’t feel aggressive or impersonal.

2010 – present: Marketers find value in native ads

Around this time, a new group of media companies began to emerge. Websites like BuzzFeed and Mashable presented advertisers with new opportunities to connect with their audiences through sponsored content and native advertising.

Advertisers pay to produce articles, videos, and other types of content for news and media sites. The nature of the content itself is promotional, but the format looks less like an ad and more like a regular piece of content on the host’s website.  

Instead of relying on ads that disrupt their target audience’s online experience, native advertising allows marketers to create promotional content that supplements a user’s online experience. “Marketers interested in targeting ads to specific consumers in an unobtrusive fashion should seriously consider spending some time on native,” Mimi An concluded in a HubSpot Research study on native advertising.

Websites that traditionally generated revenue from display ads began to realize that they could create a better user experience by relying primarily on native ads — rather than traditional display ads — without compromising on ad revenue. 

The Future of Advertising

That’s a look back at the history of online advertising — but what about the future?

According to recent data from HubSpot Research, 91% of respondents say ads are more intrusive today compared to just two to three years ago. It’s clear that the future of digital advertising pivots on developing a targeted ad experience that offers consumers relevant content without feeling nosy or invasive.

2000s in Television

The Wire

Freaks and Geeks

Mad Men

Breaking Bad

The Office

Lost

Battlestar Galactica

30 Rock

Futurama

Friday Night Lights

Firefly

How I Met Your Mother

Big Love

Dexter

The Venture Bros.

The West Wing

2000s in Film

Bourne series

28 Days Later

District 9

The Wind that Shakes the Barley

Brick

The Departed

The Lord of the Rings trilogy

O Brother, Where Art Thou?

Big Fish

Oldboy

Almost Famous

The Dark Knight

Eternal Sunshine of the Spotless Mind

Memento

Comic Books in the 21st Century

The period of 2000-2009 was a rebuilding of the comic book industry after the collapse of the 1990s.

Comics took on more of a storytelling role and less of a “collectible” role.

Many of the stories developed in the 2000s went on to become the basis for successful TV shows and motion pictures. In fact, the very existence of the print “books” from the two major comic book companies came under major scrutiny. DC Comics had been purchased by Warner Brothers years earlier; Marvel was purchased by Disney in 2009. In both instances the corporations that own the properties see more income from merchandising and from film and TV rights than from publishing, so the publishing arms of those companies have to continuously defend themselves. The major argument for their existence is that they serve as laboratories for creating stories to be mined by film and TV productions.

Marvel in the 00s

Brian Michael Bendis – Alex Maleev – Daredevil – basis for the Netflix TV series.

Ed Brubaker – Steve Epting – Captain America – basis for the Captain America Films

Mark Millar – Bryan Hitch – Ultimates – basis for the Avengers film

While Marvel was able to make this argument convincingly for a time, DC has had less success with it. However, a few stories from the 2000-2009 period were highly critically acclaimed and sold very well. It is surprising that they have not had significant film/TV adaptations yet.

DC/Vertigo

Brian K. Vaughan – Pia Guerra – Y: The Last Man – A masterpiece of the comic book form.

Bill Willingham – Mark Buckingham – Fables: Rumors abound that the ABC series “Once Upon a Time” is actually “Fables” in disguise, produced so ABC/Disney doesn’t have to pay for the rights; the storylines and premises are remarkably similar.

Darwyn Cooke – DC: The New Frontier – Tells the origin of the Justice League as a period piece set in the mid-to-late 1950s. It is the strongest version of that story created to date.

Meanwhile, independent comics of the 21st century have been rising in sales. By offering longer-form narratives with complete stories contained in single volumes, these “graphic novels” (a term arguably coined by Will Eisner in the 1970s) have become increasingly popular relative to the comic book periodical format.

This is a selection of the most important volumes from the 2000-2009 period.

Independent Comics

Stan Sakai – Usagi Yojimbo – Sakai tells stories drawn from Japanese history, strongly influenced by Kurosawa’s films.

Chris Ware – Jimmy Corrigan: The Smartest Kid on Earth – Ware created a new visual language, strongly influenced by graphic design, to deliver a unique comics reading experience.

Craig Thompson – Blankets – An independent comic that marked the beginning of a wave of popular autobiographical comics.

David Mazzucchelli – Asterios Polyp – Mazzucchelli came to prominence drawing Daredevil and Batman stories written by Frank Miller, after which he famously said he had nothing more to say about superhero comics. Asterios Polyp is widely considered one of the greatest graphic novels ever created. It uses the design of the characters, the design of the pages, and the design of the physical book itself to communicate the ideas contained within.

Marjane Satrapi – Persepolis – Award-winning autobiographical comic, in the tradition of Blankets, about a Muslim girl growing up during and after the Islamic Revolution.

Jason – Why Are You Doing This? – Jason is a Norwegian cartoonist who uses ligne claire, the drawing style developed by Belgian cartoonist Hergé years earlier, to tell short, clear narratives involving anthropomorphic animal characters in situations punctuated by visual reversals.

Illustration 2000s

It’s been one heck of a decade for the discipline of illustration, and that’s certainly not an overstatement. To those creatives whose careers straddle the so-last-century and noughties divide, it’s clear how much ground the discipline has covered – and how much it’s conquered – since the year 2000.

Over the last 10 years, this field has undergone a huge reinvention from cottage industry to creative industry. Thanks to a fresh breed of practitioners at the turn of the century, with new working methodologies and ideologies for contemporary illustration practice, an outmoded and outdated analogue craft has been dragged into the brave new digital world of the 21st century.

In charting the fall and rise, the near-death experience and radical rebirth of illustration from the mid-1990s to 2000, it’s evident that renegade innovators were at the heart of instigating genuine and challenging change. At the beginning of the 21st century, only moments away from the final nail being hammered into the coffin, a new kind of creative started a wave of illustration that would determine and define the discipline’s future, and defy those who were ready to perform its last rites.

As for the tipping point – was there a moment in time when technology and ideology were in perfect harmony? Probably not, but both certainly played a role in the rejuvenation of illustration. Its changing fortune came about through a series of seemingly unrelated moments, when a new generation of young digital practitioners ran with – rather than away from – technology. Having grown up with the computer in the playroom, classroom and bedroom, they embraced the possibilities it offered for pushing new parameters, instead of remaining tied down by traditions.

While this was going on, the democratisation of the digital was also in full swing: kit, both hardware and software, was available to buy from out-of-town warehouses at rock-bottom prices. And around the same time, the internet arrived into studios up and down the land – albeit via 56k dial-up – for the first time enabling information and communication to be shared across the planet. Marshall McLuhan’s ‘global village’, a term that he first coined in 1962, had finally become a reality.

Ian Wright, a rare example of an ever-evolving illustrator to have continually practised across three decades, cites the internet as the major development for the discipline. “It has allowed self-publishing and digital viewing in a way unheard of before,” he says. And Anthony Burrill, no amateur himself, agrees: “The biggest advance this decade for me has been the internet and instant access to everything, all the time. It means I use it constantly for image research and when I’m looking for inspiration,” he says.

Yet aside from the small minority who jumped the analogue/digital divide, as Wright and Burrill so visibly did, the future of the discipline was to rest with, and be wrestled by, a new generation. And the outlet for their first forays into reshaping the future? The Face: a fashion, music and style magazine originally launched back in 1980 with illustration content provided by a young Ian Wright. Constantly reinventing itself, some 20 years later the publication was to offer illustration a much-needed blank canvas. By the year 2000, and under the creative guidance of Graham Rounthwaite (himself a successful illustrator-turned-art director), the likes of Jasper Goodall, Miles Donovan at Peepshow and Austin Cowdall at NEW were given carte blanche to explore new illustrative opportunities within the pages of the publication.

“I don’t think I’d be the artist I am today if I hadn’t worked with Graham,” says Goodall. “He was a great art director who knew the value of letting artists do their own thing. We kind of came together at a time when what I was doing creatively totally fitted with what he wanted for the magazine, so he pretty much left me to it with almost zero amendments for three or four years. The Face was the bible of cool,” he continues, “and that was the best place for me to be in terms of media industry perception – it got me a lot of work.”

Adrian Johnson, much applauded for his work for clients that include The Independent, The Guardian, Robinsons and Stussy, agrees with Goodall: “The Face helped illustration break free of the shackles of the Radio Times and ES Magazine covers that it was synonymous with – illustration became cool. It’s now everywhere, from the printed page to the white walls of Berlin galleries, and from the Milan catwalk to the world wide web,” he says. “Illustration fought long and hard to be viewed as an equal to graphic design, but the blurring of the boundaries has given the discipline the shot it needed – credibility.”

If it was The Face that began to give illustration the confidence boost it so desperately required at the start of the decade, then it was the strut and last-gang-in-town mentality of a small number of collectives emerging from the shadows – Big Orange, Neasden Control Centre, Peepshow and Black Convoy – that drove the discipline further forward, and greatly helped to expand the overall remit of the illustrator.

Peepshow’s influence, since its formation following Miles Donovan’s graduation from the University of Brighton in 2000, should not be underestimated. Having constantly stood at the forefront of evolving illustration practice, and, just as crucially, reshaped the media’s comprehension of the role of the illustrator, Peepshow has continued to break down barriers and preconceptions about the discipline. “We work extensively within art direction, advertorial and editorial illustration, moving image, fashion and textile design, and set design,” explains Donovan – and it’s at the crossroads of these areas that new practices continue to emerge.

Naked ambition and raw desire to succeed saw John McFaul – a founding member of collective Black Convoy – split to set up McFaul, his own fully-fledged design/illustration agency, which has built up an international client base. “Illustrator became illustrators, then an art and design agency, and a bumbling creative mess became a tight business with a sense of purpose and more than a little swagger,” he admits.

With a portfolio that includes large-scale projects for clients such as Carhartt, Nokia, Havaianas and John Lennon Airport, McFaul can be forgiven for a touch of arrogance. “I’m smiling from breakfast to beers. Can there be a better job on the planet?” he asks.

Michael Gillette, illustrator to the Beastie Boys, Levi’s and James Bond (to pick just a few names from his ever-expanding client list) thinks not. From Gillette’s studio in San Francisco – he relocated from London at the start of the decade – he explains his own take on the developments that have been seen since the end of the last century. “It’s clear that we’ve had an explosion of illustrative creativity in the past 10 years, in all different directions and across many different media. I believe that the breadth is enough to ensure that illustration will remain a very viable creative solution and not a fad,” he says. “I believe that the digital realm has put designers and illustrators back on the same page – or screen – so developments will continue in a positive and dynamic fashion.”

Throughout the decade, illustration has become noticed on an increasingly global scale, as a result of enhanced communication through the web. Commercial markets have opened for practitioners in ways that were unthinkable a decade ago. Gradually the eyes of the media industries began to look further afield than the UK and US for new illustration talent, and groups such as eBoy in Germany and :phunk studio in Singapore came under the spotlight.

With corporate companies like Coca-Cola, Nike and Levi’s waiting in line to work with the pick of the bunch, some illustration outfits started to attain cult status. eBoy, comprising an ex-bricklayer, an electrician and a musician, created entirely digital pixelated worlds that couldn’t fail to impress – millions of pixels arranged to create hyper-worlds that pointed towards a Utopian urban dream.

Meanwhile, in Singapore :phunk studio (another truly global collective) were merging art and design, turning their hands to a never-ending range of projects. From skateboards, bikes, vinyl figures, bags, club interiors and exhibitions to retail graphics and design for print and screen, all were subjected to the :phunk studio visual approach.

“We get bored easily,” they explain. “We like to explore, express and communicate through different media.” Today, illustration is a wholly international discipline, manifesting itself through a seemingly never-ending array of outlets compared to the canvas it had in 2000.

So what and where next for illustration? Ambitious for the discipline, Holly Wales champions the new role of the illustrator, but certainly isn’t letting recent progress stand in the way of further transformations in the industry. “I’d like to see much more collaboration – a huge embrace of technology and illustrators behaving more like art directors on bigger projects,” Wales says.

Others are beginning to see the future for illustrators as animators: “As magazines increasingly go online,” says Jasper Goodall, “so the opportunities for non-static illustration increases.” Adrian Johnson is in agreement: “It appears, and I say this with some regret, that the printed page has had its day, and yet the possibilities for illustration online are astonishing. Illustrators will have to get their heads round animation, as illustration comes alive!”

Changes in technology and ideology have transformed and continue to transform contemporary illustration, with emerging practitioners keen to continue to push at the blurred edges of existing boundaries. Rose Blake is a recent graduate of Kingston University’s BA Illustration and Animation course, and is now in her first year of study on a Communication Art and Design MA course at the Royal College of Art. Blake was only 13 years old in 2000, and yet she captures the spirit of today’s young mavericks: “I’ve learnt that you have to be human and put that into your work,” she says. “I make work about stuff that really means something to me – I like honest work.”

Anthony Burrill, despite having been around the block a few times, agrees wholeheartedly with Blake. “Tread your own path,” he advises. “Be aware of everything that’s going on, but find your own voice.” And Ian Wright, from his studio in NYC and with a career spanning three decades, offers sage advice for the newcomer: “The future is always uncertain, yet look at it as a positive challenge and, above all, stick to it and keep the faith!”

We look forward to seeing what the next 10 years have in store: here’s to another heck of a decade for illustration.

__________

While the word “illustrator” might bring to mind a children’s book artist, these five illustrators graduated from storybooks decades ago. Thanks to the ever-growing arsenal of available digital media creation tools, the world of illustration has undergone a major transformation over the years.

Today’s illustrators blend traditional and digital media to create artwork for magazines, books, advertisements, movies and more. While they use many of the same tools, each illustrator has an entirely different style. Keep reading to see each artist’s unique take on 21st century illustration.

Michael Kutsche

If you’ve watched Thor, John Carter or Oz the Great and Powerful, then you’re already familiar with Michael Kutsche’s work. Kutsche was born in Germany in 1970, and like many other illustrators, began drawing and illustrating at an early age. As his work demonstrates, Kutsche has never lacked imagination or the skill to turn his ideas into awesome otherworldly creatures.

Kutsche got his first big break as part of the design team for Tim Burton’s “Alice in Wonderland”, which was released in 2010. The self-taught German illustrator works in both traditional and digital media, usually drawing out his concepts before incorporating digital elements. His work with high-profile blockbuster hits has earned him worldwide acclaim.

 

James Jean

James Jean is a Taiwanese-American artist known for both his popular commercial illustrations and his fine art gallery work. Jean was born in Taiwan in 1979 and raised in New Jersey. In 2001, he graduated from The School of Visual Arts and began his uber-successful career working with companies like Prada, Atlantic Records and DC Comics.

 

Many critics and fans now recognize Jean as one of the industry’s best illustrators. His style is characterized by unique, ethereal energy and sophisticated compositions. Jean often incorporates curvilinear lines and the wet media effect to complement the unusual perspectives in his work.

Brian Despain

Once you become familiar with Brian Despain’s smooth, antique illustration style, you’ll be able to pick out his artwork from a mile away. Despain loved art from an early age and spent much of his younger years daydreaming and drawing. Truly a sign of the technological times in which we live, Despain first learned to paint digitally, and only recently began painting with oils.

Despain has been particularly successful in the video game industry, though as a professional concept illustrator he works on a variety of projects. His depiction of robots and his use of a muted palette—browns, tans, and navy blues are his go-to color scheme—have carved out a unique illustration style that’s garnered him well-deserved critical and commercial acclaim.

Zutto

Zutto, whose real name is Alexandra Zutto, is a self-taught freelance illustrator based out of Russia. Using Adobe Illustrator, she creates colorful, playful scenes straight from her imagination, hoping to use her art to communicate with the outside world. Zutto also publishes works-in-progress as a sort of tutorial, allowing fans to see an in-depth look at her artistic process.

Andrey Gordeev

Andrey Gordeev is a Russian artist whose distinct style is easily recognizable. As evident from his work, Gordeev rarely takes himself seriously—his digital illustrations are colorful, fun and often humorous.

In one of his most popular series, Around the World in 12 Months, Gordeev drew a series of truck drivers from all over the world for a corporate calendar.

Since the invention of the Internet, text has played a major role on the web. For thirty years, the web has revolutionized our daily communications, interactions, and business transactions, but the true transformation of typography on the web only took off in the last few years. For the first twenty years, the web went through many changes, such as adopting web standards, using CSS for layouts, and processing dynamic data. Even though the web embraced text from the beginning, text and typography were not well integrated until recent years.

The First Website

On December 12, 1990, Tim Berners-Lee published the first website on the Internet after he worked out the concepts of the Uniform Resource Locator (URL), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML). The project Berners-Lee launched was the World Wide Web, which he defined as “a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents” (Berners-Lee 1990). His site had only text and links to other documents. As a result, it still works today as it did thirty years ago despite the changes and advancements in web technologies.

Type on Screen

The lowercase g in pixels with vector outlines. Design: Young Sun Compton, 2013.

In the mid-1990s, Matthew Carter designed Georgia and Verdana—two widely used typefaces for screen-based media. Commissioned by Microsoft specifically for text on webpages, Georgia and Verdana were designed first in bitmaps to match the pixels of the screen resolutions at the time and then translated into outline fonts. For legibility and readability on screens, Carter gave these fonts a large x-height, open apertures, and generous spacing.

Image as Text

Newyorker.com’s old page using images as texts.

As graphic design made the transition to the web from the mid-1990s to the mid-2000s, designers wanted to use more typefaces than just the handful that came with the operating system: Georgia, Verdana, Arial, Helvetica, and Times New Roman. The simplest alternative was to use images as text. Designers could use any typeface available on their computer, but the downside was that each piece of text had to be sliced up individually using tools such as Photoshop and Illustrator. One popular site that used images as text was The New Yorker. To stay consistent with its printed publication, newyorker.com served each headline as an image in order to use NY Vogue Goat as its branded typeface. Until November 2010, when the publication started using Typekit to serve its custom fonts, it was someone’s job at the publication to slice up those images every day.

Image Replacement Techniques

A major issue with using images as type was that the text was not searchable, selectable, or translatable. To get around this problem, web practitioners came up with various image replacement techniques to fill the void. In April 2004, Shaun Inman developed a technique called Scalable Inman Flash Replacement (sIFR) to embed custom fonts in a small Flash movie. He also used JavaScript and CSS to make the text selectable.

While sIFR solved the issue of image slicing, its main drawback was its reliance on Flash, Adobe’s proprietary software for delivering rich content on the web. Furthermore, setting up sIFR required some web development knowledge. In April 2009, Simo Kinnunen created a new and improved technique called Cufón, which used JavaScript to render fonts converted to an SVG-based format directly in the browser. This technique was easier to set up and did not rely on Flash. Although many image replacement techniques continued to be developed and refined over the years, they are not genuine web typography.

Webfonts

Web typography is not a new concept. In 1998, the Cascading Style Sheet (CSS) Working Group proposed the support of the @font-face rule to allow any typeface to be displayed on webpages. Internet Explorer 4 was the first browser to implement it, but with no success. The proposal had no piracy protection or licensing agreement in place. As a result, @font-face was stalled for almost a decade.

In 2008, @font-face made a comeback when Apple Safari and Mozilla Firefox implemented it. In May 2009, Jeffrey Veen introduced Typekit, a type hosting service that let designers use high-quality fonts on websites with easy implementation and without worrying about licensing or cross-browser compatibility. Just two years later, Adobe acquired Typekit, bringing more classic typefaces such as Garamond Pro, Minion Pro, and Myriad Pro to the web.

In 2010, Google launched its own library of open-source fonts. With its simple API (application programming interface), Google has succeeded in making webfonts more approachable.

The @font-face rule is now supported in all modern browsers (Internet Explorer, Firefox, Chrome, Safari, and Opera) and mobile browsers (iOS Safari, Android, and Chrome). In addition to Google Fonts, many type foundries began to offer webfont services. In just a few years, webfonts have swept the world of web design.

Variable Fonts

On September 14, 2016, Adobe, Apple, Google, and Microsoft joined forces to introduce variable fonts at the ATypI conference in Warsaw, Poland. The power of this new OpenType font-variation technology is its ability to pack multiple fonts into one file without a corresponding increase in file size, which is a huge advantage for delivering webfonts. Variable fonts solve the issue of loading individual font files to get various weights, widths, heights, styles, and other attributes. With this flexibility, variable fonts provide an exciting opportunity for responsive typography on the web.

Illustration: 3-axis variable font by John Hudson. Typeface: Kepler designed by Robert Slimbach.

With the rise of responsive web design and the emergence of variable fonts, typography is going through a transformation like never before. Unlike printed publications, the flexibility of the web gives designers little control over how their work is presented. Whether on smartphones, tablets, laptops or game consoles, they cannot know exactly how their work will be viewed on a user’s device. To accommodate the growing number of devices continuously coming to market, they have to embrace the fluidity of the web and let go of the notion of pixel-perfect control. Designing for the unknown can be intimidating, but also challenging, rewarding, and exciting.

Graphic Design 2000s

Although there were leaps and bounds of progress in the ’70s, ’80s, and ’90s, those improvements do not compare to the expansion and revolution of graphic design in the 2000s. Print methods were relatively similar, and the technology, although improving, was never truly breaking the barrier of what was possible. That progression became exponential very, very quickly in the 2000s with the rapid improvement of technology. All of a sudden, people weren’t designing simply for print or digital media. Designers had to make sure their work looked good on something as large as a billboard and as small as a screen that fits in a pocket. New software allowed designers to easily create three-dimensional objects and manipulate typography like never before. On top of all this development, some of the most iconic designs in human history were created during this decade, including Shepard Fairey’s Obama Hope poster (which opened a whole other door into the legality of certain design practices, a rabbit hole that we won’t go down today) and Apple’s single-colored iPod ads. Whether it was the artists who were learning to use new tools or the new tools themselves that allowed the creation of new and exciting artwork, the possibilities of graphic design were expanding more and more by the day.

Here you can see the original 3Points Communications logo — 3Points was founded in early 2010, so this logo is more a reflection of the 2000s than the 2010s. [Also worth noting: unlike the preceding designs, which were created for this exercise, this was our actual logo, which means it is less exaggerated to reflect its decade.] Much like some of the most popular new logos of the 2000s (AT&T, Wikipedia, and Microsoft, to name a few), it features three-dimensional shapes, which were then a must-have for a brand hoping to keep up with the times. Although most brands chose to use either the effect of a gradient or a drop shadow to achieve a 3D feel, both are used here. On top of this, we can now see 3Points sporting a sleek and thin font. Thick fonts were very popular in the ’90s; however, the look considered modern in the 2000s was sleek and minimalistic.

Apple debuted the iPod in 2001 – this print ad is from 2003.

The first movie in the Harry Potter series – with its lightning-bolt font – was released in 2001.

“Never forget” was everywhere following the Sept. 11, 2001, attacks.

Milton Glaser modified his iconic design after 9/11.

Facebook launched in 2004, and workplace productivity was never the same again.

“Lost” crashed into TV households in 2004 along with its simple yet mystifying title sequence.

Co-founded in 2006 by U2’s Bono, Product Red raises funds to eliminate AIDS in Africa.

Street artist Shepard Fairey designed and printed this poster of Barack Obama during the 2008 presidential campaign. Fairey and the Associated Press just settled a lawsuit regarding the original photograph’s copyright.

The 2006 Super Bowl’s XL Roman numerals made for an extra-large logo.

Blizzard Entertainment’s “World of Warcraft” was released in 2004, and even nongamers became familiar with terms like MMORPG.

“Avatar,” released in 2009, got so much praise for its special effects yet so much flak for the use of Papyrus in its title font and subtitles.

The bid logo for the 2000 Summer Olympics featured a stylized image of the Sydney Opera House.

 

The television show “24” first aired in 2001, and the logo resembling a digital clock went alongside the split screen with the time running down during each episode.

Wikipedia, launched in 2001, has gone through a handful of logo changes. The puzzle ball version came about in 2003.

The televised talent show “American Idol” debuted in 2002, changing the way viewers interact with their TV shows.

The 2000s began an entirely new frontier for graphic designers. In addition to tools becoming even more powerful, people were suddenly designing on portable devices, such as smartphones. On top of that, designers began to realize the importance of designing in a way that looked good across all device types.

Movement became more of a focus, with designers looking for ways to make even static logos look like they’re in motion.

The AT&T logo was redesigned in 2005 but still features a version of Saul Bass’ globe introduced in ’84.

One example of a logo that appears almost to be in motion is the AT&T globe. Because of the mix of blue and white along with the angles of the design, one can imagine the image spinning slowly just as the earth does. AT&T debuted the logo in 2005.

Published 1997

Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff

The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like “bleiner@computer.org” and “http://www.acm.org” trip lightly off the tongue of the random person on the street. 1

This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet. 2

In this paper,3 several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects – technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

Origins of the Internet

The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his “Galactic Network” concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA,4 starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock’s conviction of the need for packet switching was confirmed.

In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the “ARPANET”, publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word “packet” was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps. 5

In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP’s). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP’s with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock’s team at UCLA. 6

Due to Kleinrock’s early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart’s project on “Augmentation of Human Intellect” (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC’s.

One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock’s laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology to the public. It was also in 1972 that the initial “hot” application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of “people-to-people” traffic.

The Initial Internetting Concepts

The original ARPANET grew into the Internet. Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level “Internetworking Architecture”. Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called “Internetting”. Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as that caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.

However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard). NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

Four ground rules were critical to Kahn’s early thinking:

  • Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
  • Communications would be on a best effort basis. If a packet didn’t make it to the final destination, it would shortly be retransmitted from the source.
  • Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
  • There would be no global control at the operations level.

Other key issues that needed to be addressed were:

  • Algorithms to prevent lost packets from permanently disabling communications and to enable them to be successfully retransmitted from the source.
  • Providing for host-to-host “pipelining” so that multiple packets could be enroute from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
  • Gateway functions to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
  • The need for end-end checksums, reassembly of packets from fragments, and detection of duplicates, if any (a small checksum sketch follows this list).
  • The need for global addressing.
  • Techniques for host-to-host flow control.
  • Interfacing with the various operating systems.
  • Other concerns, such as implementation efficiency and internetwork performance, were also recognized, but these were secondary considerations at first.
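
The checksum item above points toward what later became the Internet’s standard end-to-end error check: a 16-bit ones’-complement sum carried in the IP, TCP and UDP headers. The following is a minimal illustrative sketch of that computation only, not production code; real stacks also cover pseudo-headers and use heavily optimized implementations.

```python
# Minimal sketch of the 16-bit ones'-complement checksum later adopted by the
# Internet protocols (IP, TCP, UDP) for end-to-end error detection.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return (~total) & 0xFFFF                       # ones' complement of the sum

print(hex(internet_checksum(b"example payload")))
```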

Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled “Communications Principles for Operating Systems”. At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn’s architectural approach to the communications side and with Cerf’s NCP experience, they teamed up to spell out the details of what became TCP/IP.

The give and take was highly productive and the first written version7 of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG) which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference.

Some basic approaches emerged from this collaboration between Kahn and Cerf:

  • Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
  • Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point (a small simulation is sketched after this list).
  • It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
  • Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not yet envisioned, much less PCs and workstations. The original model was national level networks like the ARPANET, of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
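
The sliding-window idea in the second bullet can be illustrated with a toy receiver that buffers out-of-order segments and returns cumulative acknowledgments naming the next octet it expects. This is a small simulation for intuition only, not the TCP state machine; the segment values below are made up.

```python
# Toy illustration of cumulative acknowledgment: the receiver acks the next
# octet it expects, so one ack covers everything received in order so far.
def receive(segments, window=16):
    expected = 0                      # next in-order octet the receiver wants
    buffered = {}                     # out-of-order segments held for later
    for seq, data in segments:        # (sequence number, payload) pairs
        if expected <= seq < expected + window:
            buffered[seq] = data
            while expected in buffered:               # deliver contiguous data
                expected += len(buffered.pop(expected))
        yield expected                # cumulative ack: "send me from here on"

segments = [(0, b"abcd"), (8, b"ijkl"), (4, b"efgh")]  # middle segment is late
print(list(receive(segments)))        # -> [4, 4, 12]
```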

The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
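
That division of labor survives directly in today’s socket interface: an application asks for either TCP’s reliable byte stream or UDP’s bare datagram service. A minimal sketch follows; the host name and ports are placeholders chosen purely for illustration.

```python
# Choosing between the two services that grew out of the original TCP design.
import socket

# TCP: a reliable, ordered byte stream (the "virtual circuit" style service).
with socket.create_connection(("example.com", 80)) as tcp:
    tcp.sendall(b"hello over a reliable byte stream\r\n")

# UDP: best-effort datagrams, exposing IP's basic service to the application,
# which must itself tolerate (or repair) lost, duplicated or reordered packets.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello as a single datagram", ("example.com", 9999))
udp.close()
```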

A major initial motivation for both the ARPANET and the Internet was resource sharing – for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.

There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early “worm” programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.

Proving the Ideas

DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate.

This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges.

The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.

Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet, and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).
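
The classful scheme can be read straight off the leading bits of an address. The short sketch below decodes the class and the size of the network field for a few sample addresses (the addresses themselves are arbitrary examples).

```python
# Decoder for the original classful addressing: the leading bits of the
# 32-bit address determine how it splits into network and host parts.
import ipaddress

def classify(addr: str):
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24
    if first_octet < 128:      # leading bit 0    -> Class A, 8-bit network
        return "A", 8
    if first_octet < 192:      # leading bits 10  -> Class B, 16-bit network
        return "B", 16
    if first_octet < 224:      # leading bits 110 -> Class C, 24-bit network
        return "C", 24
    return "D/E", None         # multicast and reserved space came later

for a in ("18.0.0.1", "128.89.0.1", "192.0.2.7"):
    print(a, classify(a))      # -> A/8, B/16, C/24 respectively
```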

A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.
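
Today the same lookup is a one-liner: the local resolver walks the distributed DNS hierarchy and hands back an address. The example below simply uses the host name mentioned in the text.

```python
# Resolve a hierarchical name to an IP address via the system's DNS resolver.
import socket

print(socket.gethostbyname("www.acm.org"))   # prints an IPv4 address chosen by the resolver
```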

The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
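
The aggregation that CIDR makes possible is simple to demonstrate: two adjacent prefixes can be advertised as a single, larger one, halving the number of routing-table entries. The prefixes below are documentation examples used only for illustration.

```python
# CIDR aggregation: adjacent prefixes collapse into one routing-table entry.
import ipaddress

prefixes = [ipaddress.ip_network("198.51.100.0/25"),
            ipaddress.ip_network("198.51.100.128/25")]
print(list(ipaddress.collapse_addresses(prefixes)))   # -> [IPv4Network('198.51.100.0/24')]
```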

As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to more efficiently fit into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in dispersion of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.

One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a “flag-day” style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying “I survived the TCP/IP transition”).

TCP/IP had been adopted as a defense standard three years earlier, in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.

Thus, by 1985, the Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities, often with different systems, but interconnection between different mail systems was demonstrating the utility of broad based electronic communications between people.

Transition to Widespread Infrastructure

At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking – especially electronic mail – demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The U.S. Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE’s High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) Computer Science community with an initial grant from the U.S. National Science Foundation (NSF). AT&T’s free-wheeling dissemination of the UNIX computer operating system spawned USENET, based on UNIX’ built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an “email as card images” paradigm.

With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built – i.e., they were intended for, and largely restricted to, closed communities of scholars; there was hence little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet, and IBM’s SNA.8 It remained for the British JANET (1984) and U.S. NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that “… the connection must be made available to ALL qualified users on campus.”

In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision – that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end.

NSF also elected to support DARPA’s existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship by the IAB’s Internet Engineering and Architecture Task Forces and by NSF’s Network Technical Advisory Group of RFC 985 (Requirements for Internet Gateways), which formally ensured interoperability of DARPA’s and NSF’s pieces of the Internet.

In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.

  • Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported “managed interconnection points” for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and “*IX” facilities that are prominent features of today’s Internet architecture.
  • To coordinate this sharing, the Federal Networking Council9 was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.
  • This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA’s Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis.
  • Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all.
  • On the NSFNET Backbone – the national-scale segment of the NSFNET – NSF enforced an “Acceptable Use Policy” (AUP) which prohibited Backbone usage for purposes “not in support of Research and Education.” The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of “private”, competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard’s Kennedy School of Government on “The Commercialization and Privatization of the Internet” – and on the “com-priv” list on the net itself.
  • In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF titled “Towards a National Research Network”. This report was influential on then Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway.
  • In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled “Realizing The Information Future: The Internet and Beyond”, was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated and which has had a lasting effect on the way to think about its evolution. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture and regulation for the Internet.
  • NSF’s privatization policy culminated in April, 1995, with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.

The backbone had made the transition from a network built from routers out of the research community (the “Fuzzball” routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.

Such was the weight of the NSFNET program’s ecumenism and funding ($200 million from 1986 to 1995) – and the quality of the protocols themselves – that by 1990 when the ARPANET itself was finally decommissioned10, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.

The Role of Documentation

A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols.

The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.

In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal, fast way to distribute ideas among network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death, October 16, 1998.

The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together, a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams.

Over time, the RFCs have become more focused on protocol standards (the “official” specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the “documents of record” in the Internet engineering and standards community.

The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.

Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed – RFCs were presented by joint authors with a common view, independent of their locations.

Specialized email mailing lists have long been used in the development of protocol specifications, and they continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.

As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network’s first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.

Formation of the Broad Community

The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable both to satisfying basic community needs and to utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into the Internet Working Group.

In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies – an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research, an Internet Research Group which was an inclusive group providing an environment for general exchange of information, and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity.

In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g. routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces.

It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair. After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups.

This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.

The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups.

The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980s and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This, coupled with a recognized need for community support of the Internet, eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn’s Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI.

In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the Internet Society. A more “peer” relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF.

The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT’s Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web.

Thus, through the over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.

Commercialization of the Technology

Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked real information both about how the technology was supposed to work and about how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.

In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day-to-day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two-way discussion was formed that has lasted for over a decade.

After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone’s products interoperated with all of the other products – even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.

In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is composed of all stakeholders: researchers, end users and vendors.

Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation.

As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including the Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.
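
At its core, the model SNMP standardized is simple: a manager reads (and writes) named variables from an agent’s Management Information Base, each identified by an OID. The sketch below is a toy illustration of that GET model only, not the wire protocol; the sysDescr/sysUpTime OIDs are real, but the agent and its values are invented for the example.

```python
# Toy illustration of SNMP's manager/agent GET model (not the wire protocol).
AGENT_MIB = {
    "1.3.6.1.2.1.1.1.0": "hypothetical-router-os 1.0",  # sysDescr.0 (value invented)
    "1.3.6.1.2.1.1.3.0": 123456,                        # sysUpTime.0 (value invented)
}

def snmp_get(mib: dict, oid: str):
    """Return the value bound to an OID, as an SNMP GET request would."""
    return mib.get(oid, "noSuchObject")

print(snmp_get(AGENT_MIB, "1.3.6.1.2.1.1.1.0"))
```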

In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a “commodity” service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.

History of the Future

 

On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities.

RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term “Internet”. “Internet” refers to the global information system that — (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.

The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.

One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide new services such as real time transport, in order to support, for example, audio and video streams.

The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones), is making possible a new paradigm of nomadic computing and communications. This evolution will bring us new applications – Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, e.g. broadband residential access and satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.

The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders – stakeholders now with an economic as well as an intellectual investment in the network.

We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.

Timeline

[Timeline graphic: A Brief History of the Internet]

Footnotes

1 Perhaps this is an exaggeration based on the lead author’s residence in Silicon Valley.
2 On a recent trip to a Tokyo bookstore, one of the authors counted 14 English language magazines devoted to the Internet.
3 An abbreviated version of this article appears in the 50th anniversary issue of the CACM, Feb. 97. The authors would like to express their appreciation to Andy Rosenbloom, CACM Senior Editor, for both instigating the writing of this article and his invaluable assistance in editing both this and the abbreviated version.
4 The Advanced Research Projects Agency (ARPA) changed its name to Defense Advanced Research Projects Agency (DARPA) in 1971, then back to ARPA in 1993, and back to DARPA in 1996. We refer throughout to DARPA, the current name.
5 It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
6 Including amongst others Vint Cerf, Steve Crocker, and Jon Postel. Joining them later were David Crocker who was to play an important role in documentation of electronic mail protocols, and Robert Braden, who developed the first NCP and then TCP for IBM mainframes and also was to play a long term role in the ICCB and IAB.
7 This was subsequently published as V. G. Cerf and R. E. Kahn, “A Protocol for Packet Network Intercommunication,” IEEE Trans. Comm. Tech., vol. COM-22, no. 5, pp. 627-641, May 1974.
8 The desirability of email interchange, however, led to one of the first “Internet books”: !%@:: A Directory of Electronic Mail Addressing and Networks, by Frey and Adams, on email address translation and forwarding.
9 Originally named Federal Research Internet Coordinating Committee, FRICC. The FRICC was originally formed to coordinate U.S. research network activities in support of the international coordination provided by the CCIRN.
10 The decommissioning of the ARPANET was commemorated on its 20th anniversary by a UCLA symposium in 1989.

 

History of the World Wide Web

 

Sir Tim Berners-Lee is a British computer scientist. He was born in London, and his parents were early computer scientists, working on one of the earliest computers.

Growing up, Sir Tim was interested in trains and had a model railway in his bedroom. He recalls:

“I made some electronic gadgets to control the trains. Then I ended up getting more interested in electronics than trains. Later on, when I was in college I made a computer out of an old television set.”

After graduating from Oxford University, Berners-Lee became a software engineer at CERN, the large particle physics laboratory near Geneva, Switzerland. Scientists come from all over the world to use its accelerators, but Sir Tim noticed that they were having difficulty sharing information.

“In those days, there was different information on different computers, but you had to log on to different computers to get at it. Also, sometimes you had to learn a different program on each computer. Often it was just easier to go and ask people when they were having coffee…”, Tim says.

Tim thought he saw a way to solve this problem – one he could see would also have much broader applications. Already, millions of computers were being connected together through the fast-developing internet and Berners-Lee realised they could share information by exploiting an emerging technology called hypertext.

In March 1989, Tim laid out his vision for what would become the web in a document called “Information Management: A Proposal”. Believe it or not, Tim’s initial proposal was not immediately accepted. In fact, his boss at the time, Mike Sendall, noted the words “Vague but exciting” on the cover. The web was never an official CERN project, but Mike managed to give Tim time to work on it in September 1990. He began work using a NeXT computer, one of Steve Jobs’ early products.

Tim’s original proposal. Image: CERN

By October of 1990, Tim had written the three fundamental technologies that remain the foundation of today’s web (and which you may have seen appear on parts of your web browser):

  • HTML: HyperText Markup Language. The markup (formatting) language for the web.
  • URI: Uniform Resource Identifier. A kind of “address” that is unique and used to identify each resource on the web. It is also commonly called a URL.
  • HTTP: Hypertext Transfer Protocol. Allows for the retrieval of linked resources from across the web.

Tim also wrote the first web page editor/browser (“WorldWideWeb.app”) and the first web server (“httpd”). By the end of 1990, the first web page was served on the open internet, and in 1991, people outside of CERN were invited to join this new web community.
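
All three pieces appear in even the simplest exchange: the client names a resource by its URI, asks for it over HTTP, and gets HTML back. Below is a minimal sketch of an HTTP/1.0 request over a raw socket; the host is a placeholder used purely for illustration.

```python
# Minimal HTTP/1.0 request: a URI names the resource, HTTP transfers it,
# and the body that comes back is HTML markup.
import socket

host, path = "example.com", "/"                   # placeholder host for illustration
request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

with socket.create_connection((host, 80)) as s:
    s.sendall(request.encode("ascii"))
    response = b""
    while chunk := s.recv(4096):                  # read until the server closes
        response += chunk

print(response.decode("latin-1")[:300])           # status line, headers, then HTML
```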

As the web began to grow, Tim realised that its true potential would only be unleashed if anyone, anywhere could use it without paying a fee or having to ask for permission.

He explains: “Had the technology been proprietary, and in my total control, it would probably not have taken off. You can’t propose that something be a universal space and at the same time keep control of it.”

So, Tim and others advocated to ensure that CERN would agree to make the underlying code available on a royalty-free basis, forever. This decision was announced in April 1993, and sparked a global wave of creativity, collaboration and innovation never seen before. In 2003, the companies developing new web standards committed to a Royalty Free Policy for their work. In 2014, the year we celebrated the web’s 25th birthday, almost two in five people around the world were using it.

Tim moved from CERN to the Massachusetts Institute of Technology in 1994 to found the World Wide Web Consortium (W3C), an international community devoted to developing open web standards. He remains the Director of W3C to this day.

The early web community produced some revolutionary ideas that are now spreading far beyond the technology sector:

  • Decentralisation: No permission is needed from a central authority to post anything on the web, there is no central controlling node, and so no single point of failure … and no “kill switch”! This also implies freedom from indiscriminate censorship and surveillance.
  • Non-discrimination: If I pay to connect to the internet with a certain quality of service, and you pay to connect with that or a greater quality of service, then we can both communicate at the same level. This principle of equity is also known as Net Neutrality.
  • Bottom-up design: Instead of code being written and controlled by a small group of experts, it was developed in full view of everyone, encouraging maximum participation and experimentation.
  • Universality: For anyone to be able to publish anything on the web, all the computers involved have to speak the same languages to each other, no matter what different hardware people are using; where they live; or what cultural and political beliefs they have. In this way, the web breaks down silos while still allowing diversity to flourish.
  • Consensus: For universal standards to work, everyone had to agree to use them. Tim and others achieved this consensus by giving everyone a say in creating the standards, through a transparent, participatory process at W3C.

New permutations of these ideas are giving rise to exciting new approaches in fields as diverse as information (Open Data), politics (Open Government), scientific research (Open Access), education, and culture (Free Culture). But to date we have only scratched the surface of how these principles could change society and politics for the better.

In 2009, Sir Tim established the World Wide Web Foundation. The Web Foundation is advancing the Open Web as a means to build a just and thriving society by connecting everyone, raising voices and enhancing participation.

Please do explore our site and our work. We hope you’ll be inspired by our vision and decide to take action. Remember, as Tim tweeted during the Olympics Opening Ceremony in 2012, “This is for Everyone”.