Intro to Film

The History of Film

In 1872, Leland Stanford was a wealthy robber baron, former Governor of California, and horse racing enthusiast with way too much time on his hands. Spending much of that time at the track, he became convinced that a horse at full gallop lifted all four hooves off the ground. His friends scoffed at the idea. Unfortunately, a horse’s legs moved too fast for the human eye to tell either way. So he did what really wealthy people do when they want to settle a bet: he turned to a nature photographer, Eadweard Muybridge, and offered him $25,000 to photograph a horse mid-gallop.

Six years later, Muybridge perfected a technique of photographing a horse in motion with a series of 12 cameras triggered in sequence. One of the photos clearly showed that all four of the horse’s hooves left the ground at full gallop. Stanford won the bet and went on to found Stanford University. Muybridge pocketed the $25,000 and became famous for the invention of series photography, a critical first step toward motion pictures.

 

The Horse in Motion. Eadweard Muybridge, 1878. Public Domain Image.

Of course, the mechanical reproduction of an image had already been around for some time. The Camera Obscura, a technique for reproducing an image by projecting a scene through a tiny hole so that it appears inverted and reversed on the opposite wall or surface (think pinhole camera), had been around since at least the 5th century BCE, if not thousands of years earlier. But it wasn’t until a couple of French inventors, Nicephore Niepce and Louis Daguerre, managed to capture an image through a chemical process known as photoetching in the 1820s that photography was born. By 1837, Niepce was dead, and Daguerre had perfected the technique of fixing an image on a photographic plate through a chemical reaction of silver, iodine and mercury. He called it a daguerreotype.

But to create the illusion of movement from these still images would require further innovation. The basic concept of animation was already in the air through earlier inventions like the magic lantern and eventually the zoetrope. But a photo-realistic recreation of movement was unheard of. That’s where Muybridge comes in. His technique of capturing a series of still images in quick succession laid the groundwork for other inventors like Thomas Edison, Woodville Latham, and Auguste and Louis Lumiere to develop new ways of photographing and projecting movement. Crucial to this process was the development of strips of light-sensitive celluloid film to replace the bulky glass plates used by Muybridge. This enabled a single camera to record a series of high-speed exposures (rather than multiple cameras taking a single photo in sequence). It also enabled that same strip of film to be projected at an equally high speed, creating the illusion of movement through a combination of optical and neurological phenomena. But more on that in the next chapter.

By 1893, 15 years after Muybridge won Stanford’s bet, Edison had built the first “movie studio,” a small, cramped, wood-frame hut covered in black tar paper with a hole in the roof to let in sunlight. His employees nicknamed it the Black Maria because it reminded them of the police prisoner transport wagons in use at the time (also known as “paddy wagons,” with apologies to the Irish). One of the first films they produced was a five-second “scene” of a man sneezing.

Riveting stuff. But still, movies were born.

Sort of.

There was just one problem: the only way to view Edison’s films was through a kinetoscope, a machine that allowed a single viewer to peer into a viewfinder and crank through the images. The ability to project the images to a paying audience would take another couple of years.

In 1895, Woodville Latham, a chemist and Confederate veteran of the Civil War, lured away a couple of Edison’s employees and perfected the technique of motion picture projection. In that same year, over in France, Auguste and Louis Lumiere invented the cinematographe which could perform the same modern miracle. The Lumiere brothers would receive the lion’s share of the credit, but Latham and the Lumieres essentially tied for first place in the invention of cinema as we know it.

Sort of.

It turns out there was another French inventor, Louis Le Prince (apparently we owe a lot to the French), who was experimenting with motion pictures and had perfected the technique by 1890. But that same year, on his way to a planned public demonstration in the US – one that would have potentially eclipsed Edison’s claim on the technology – he mysteriously vanished from a train. His body and luggage, including his invention, were never found. Conspiracy theories about his untimely disappearance have circulated ever since (we’re looking at you, Thomas Edison).

Those early years of cinema were marked by great leaps forward in technology, but not so much forward movement in terms of art. Whether it was Edison’s five-second film of a sneeze, or the Lumieres’ 46-second film Workers Leaving the Lumiere Factory (which is exactly what it sounds like), the films were wildly popular because no one had seen anything like them, not because they were breaking new ground narratively.

https://youtube.com/watch?v=OjG5bujrzGo

There were, of course, notable exceptions. Alice Guy-Blaché was working as a secretary at a photography company when she saw the Lumieres’ invention in 1895. The following year she wrote, directed and edited what many consider the first fully fictional film in cinema history, The Cabbage Fairy (1896):

https://player.vimeo.com/video/278722967?h=71ed63aa9b&dnt=1&app_id=122963

But it was Georges Melies who became the most well-known filmmaker-as-entertainer in those first few years. Melies was a showman in Paris with a flair for the dramatic. He was one of the first to see the Lumieres’ cinematographe in action in 1895 and immediately recognized its potential as a form of mass entertainment. Over the next couple of decades he produced hundreds of films that combined fanciful stagecraft, optical illusions, and wild storylines that anticipated much of what was to come in the next century of cinema. His most famous film, A Trip to the Moon, produced in 1902, transported audiences to the surface of the moon on a rocket ship and sometimes even included hand-tinted images to approximate color cinematography.

He was very much ahead of his time and would eventually be immortalized in Martin Scorsese’s 2011 film Hugo.

 

By the start of the 20th century, cinema had become a global phenomenon. Fortunately, many of those early filmmakers had caught up with Melies in terms of the art of cinema and its potential as an entertainment medium. In Germany, filmmakers like Fritz Lang and Robert Wiene helped form one of the earliest examples of a unique and unified cinematic style, consisting of highly stylized, surreal production designs and modernist, even futuristic narrative conventions that came to be known as German Expressionism.

Meanwhile in Soviet Russia, Lev Kuleshov and Sergei Eisenstein were experimenting with how the creative juxtaposition of images could influence how an audience thinks and feels about what they see on screen (also known as editing, a relatively new concept at the time). Through a series of experiments, Kuleshov demonstrated that it was this juxtaposition of images, not the discrete images themselves, that generated meaning, a phenomenon that came to be known as The Kuleshov Effect.

But it was the United States that was destined to become the center of the cinematic universe, especially as cinema grew into a global mass entertainment medium. Lois Weber was an early innovator and the first American director, male or female, to make a narrative feature film, The Merchant of Venice (1914). In her short film Suspense (1913), she pioneered the use of intercutting and basically invented split-screen editing.

Others, like D. W. Griffith, helped pioneer the full-length feature film and invented many of the narrative conventions, camera moves and editing techniques still in use today. Weber, Griffith and many other filmmakers and entrepreneurs would go on to establish film studios able to churn out hundreds of short and feature-length films for the movie theaters popping up on almost every street corner.

CINEMA GOES HOLLYWOOD

This burgeoning new entertainment industry was not, however, located in southern California. Not yet, anyway. Almost all of the production facilities in business at the time were in New York, New Jersey or somewhere on the Eastern seaboard. That was partly because the one man who still controlled the technology that made cinema possible was based there: Thomas Edison. Edison owned the patents for capturing and projecting motion pictures, essentially cornering the market on the new technology (R.I.P. Louis Le Prince). If you wanted to make a movie in the 1900s or 1910s, you had to pay Edison for the privilege.

Not surprisingly, a lot of would-be filmmakers bristled at Edison’s control over the industry. And since patent law was difficult to enforce across state lines at the time, many of them saw California as an ideal place to start a career in filmmaking. Sure, the weather was nice. But it was also as far away from the northeast as you could possibly get within the continental United States, and a lot harder for Edison to sue for patent violations.

By 1912, Los Angeles had replaced New York as the center of the film business, attracting filmmakers and entertainment entrepreneurs from around the world. World-renowned filmmakers like Ernst Lubitsch from Germany, Erich von Stroheim from Austria, and an impish comedian from England named Charlie Chaplin, all flocked to the massive new production facilities that sprang up around the city. Universal Pictures, Metro-Goldwyn-Mayer (MGM), Warner Bros., all of them motion picture factories able to mass-produce dozens, sometimes hundreds of films per year. And they were surrounded by hundreds of other, smaller companies, all of them competing for screen space in thousands of new movie houses around the country. One small neighborhood in the heart of Los Angeles became most closely associated with the burgeoning new industry: Hollywood. By 1915, after a few years of failed lawsuits, Thomas Edison admitted defeat and dissolved his Motion Picture Patents Company.

In the heyday of those early years, some of those larger studios decided the best way to ensure an audience for their films was to own the theaters as well. They built extravagant movie palaces in large market cities, and hundreds more humble theaters in small towns, effectively controlling all aspects of the business: production, distribution and exhibition. In business terms that’s called vertical integration. It’s a practice that would get them in a lot of trouble with the U.S. government a couple of decades later, but in the meantime, it meant big profits with no end in sight.

Then, in 1927, everything changed.

Warner Bros. was a family-owned studio run by four brothers, smaller than some of the larger companies like Universal and MGM. But one of those brothers, Sam, had a vision. Or rather, an ear. Up to that point, cinema was still a silent medium. But Sam was convinced that sound, and more specifically, sound that was synchronized to the image, was the future.

And almost everyone thought he was crazy.

It seems absurd now, but no one saw any reason to add sound to an already perfect, and very profitable, visual medium. What next? Color? Don’t be ridiculous…

Fortunately, Sam Warner persisted, investing the company’s profits into the technology required to not only record synchronized sound, but to reproduce it in their movie theaters around the country. Finally, on October 6th, 1927, Warner Bros. released The Jazz Singer, the first film to include synchronized dialog.

Suddenly, every studio was scrambling to catch up to Warner Bros. That meant a massive capital investment in sound technology, retrofitting production facilities and thousands of movie theaters. Not every production company could afford the upgrade, and many struggled to compete in the new market for films with synchronized sound. And just when it seemed like it couldn’t get worse for those smaller companies, it did. In October of 1929, the stock market crashed, plunging the nation into the Great Depression. Hundreds of production companies closed their doors for good.

At the start of the 1930s, after this tremendous consolidation in the industry, eight major studios were left standing: RKO Pictures, Paramount, MGM, Fox, Warner Bros., Universal Pictures, Columbia Pictures and United Artists. Five of those – RKO, Paramount, MGM, Fox and Warner Bros. – also still owned extensive theater chains (aka vertical integration), an important source of their enormous profits, even during the Depression (apparently movies have always been a way to escape our troubles, at least for a couple of hours). But that didn’t mean they could carry on with business as usual. They were forced to be as efficient as possible to maximize profits. Perhaps ironically, this led to a 20-year stretch, from 1927 to 1948, that would become known as The Golden Age, one of the most prolific and critically acclaimed periods in the history of Hollywood.

THE GOLDEN AGE

The so-called Golden Age of Hollywood was dominated by those eight powerful studios and defined by four crucial business decisions. First and foremost, at least for five of the eight, was the emphasis on vertical integration. By owning and controlling every aspect of the business – production, distribution and exhibition – those companies could minimize risk and maximize profit by monopolizing the screens in local theaters. The second crucial business decision was to centralize the production process. Rather than allow actual filmmakers – writers, directors, actors – to control the creative process, deciding what scripts to develop and which films to put into production, the major studios relied on one or two central producers. At Warner Bros. it was Jack Warner and Darryl Zanuck. At RKO it was David O. Selznick. And at MGM it was Louis B. Mayer and 28-year-old Irving Thalberg.

Irving Thalberg. Central Producer at MGM. Public Domain Image.

Thalberg would become the greatest example of the central producer role, running the most profitable studio throughout the Golden Age. He personally oversaw every production on the MGM lot, hiring and firing every writer, director and actor, and often taking over as editor before the films were shipped off to theaters. And yet, he shunned fame and never put his name on any of MGM’s productions. Always in ill health, perhaps in part because of his inhuman workload, he died young, in 1936, at age 37.

The third business decision that ensured studios could control costs and maximize profits was to keep the “talent” – writers, directors and actors – on low-cost, iron-clad, multi-year contracts. As Hollywood moved into the Golden Age, filmmakers – especially actors – became internationally famous. Stardom was a new and exciting concept, and studios depended on it to sell tickets. But if any one of these new global celebrities had the power to demand a fee commensurate with their name recognition, it could bankrupt even the most successful studio. To protect against stars leveraging their fame for higher pay, and thus cutting in on their profits, the studios maintained a stable of actors on contracts that limited their salaries to low weekly rates for years on end, no matter how successful their films might become. There were no per-film negotiations and certainly no profit sharing. And if an actor decided to sit out a film or two in protest, their contract would be extended by however long they held out. Bette Davis, one of the biggest stars of the era, once fled to England to escape her draconian contract with Warner Bros. The studio sued the British production companies that might employ her, and the English courts sent her back. These same contracts applied to writers and directors, employed by the studio as staff, not the freelance creatives they are today. It was an ingenious (and diabolical) system that meant studios could keep their production costs incredibly low.

The fourth and final crucial business decision that made the Golden Age possible was the creative specialization, or house style, of each major studio. Rather than try to make every kind of movie for every kind of taste, the studios knew they needed to specialize, to lean into what they did best. This decision, perhaps more than any of the others, is what made this period so creatively fertile. Despite all of the restrictions imposed by vertical integration, central producers, and talent contracts, the house style of a given studio meant that all of its resources went into making the very best version of a certain kind of film. For MGM, it was the “prestige” picture. An MGM movie almost always centered on the elite class, lavish set designs and rags-to-riches stories: the perfect escapist, aspirational content for the 1930s. For Warner Bros., it was the gritty urban crime thriller: Little Caesar (1931), The Public Enemy (1931), The Maltese Falcon (1941). They were cheap to make and audiences ate them up. Gangsters, hardboiled detectives, femmes fatales: these were all consistent elements of Warner Bros. films of the period. And for Universal, it was the horror movie: Frankenstein (1931), Dracula (1931), The Mummy (1932), many of them inspired by the surreal production design of German Expressionist films like The Cabinet of Dr. Caligari.

But the fun and profits couldn’t last forever.

Three important events conspired to bring an end to the reign of the major studios and the Golden Age of Hollywood.

First, in 1943, Olivia de Havilland, a young actress known for her role as Melanie in Gone with the Wind (1939), sued Warner Bros. for adding six months to her contract, the amount of time she had been suspended by the studio for refusing to take roles she didn’t want. She wasn’t the first Hollywood actor to sue a studio over their stifling contracts. But she was the first to win her case. The court’s decision in her favor set a precedent that quickly eroded the studios’ power over talent. Soon actors became freelance performers, demanding fees that matched their box office draw and even profit participation in the success of their films. All of which took a sizeable chunk out of the studios’ revenue.

Then came the U.S. government’s anti-trust case against the major studios, which argued that vertical integration constituted an unfair monopoly over the entertainment industry. The case went to the Supreme Court, and in 1948, in a landmark ruling known as The Paramount Decision (only because Paramount was listed first in the suit), the court ordered all of the major studios to sell off their theater chains and outlawed the practices of block booking and blind bidding, which had forced independent theaters to rent studio films in bundles, often sight unseen. It was a financial disaster for the big studios. No longer able to shovel content to their own theater chains, studios had to actually consider what independent theaters wanted to screen and what paying audiences wanted to see. The result was a dramatic contraction in output as studios made fewer and fewer movies with increasingly expensive, freelance talent, hoping to hit the moving target of audience interest.

And then it got worse.

In the wake of World War II, just as the Supreme Court was handing down The Paramount Decision, the television set was quickly becoming a common household item. By the end of the 1940s and into the 1950s, the rise of television entertainment meant fewer reasons to leave the house and more reasons for the movie studios to panic. Some of them, like MGM, realized there was money to be made in licensing their film libraries to broadcasters. And some of them, like Universal, realized there was money to be made in leasing their vast production facilities to television producers. But all of them knew it was the end of an era.

THE NEW HOLLYWOOD

The end of the Golden Age thrust Hollywood into two decades of uncertainty as the major studios struggled to compete with the new Golden Age of Television and to find the pulse of the American theater-going public. There were plenty of successes. MGM’s focus on musicals like Singin’ in the Rain (1952) and historical extravaganzas like Ben-Hur (1959), for example, helped keep the studio afloat. But throughout the 50s and 60s, studios found themselves spending more and more money on fewer and fewer films and making smaller and smaller profits. To make matters worse, many of these once family-owned companies were being bought up by larger, multi-national corporations. Universal was bought out by MCA (a talent agency) in 1958. Paramount by Gulf+Western in 1966. And Warner Bros. by Seven Arts the following year. These new parent companies were often publicly traded, with a board of directors beholden to shareholders. They expected results.

And that’s when Warren Beatty, an ambitious young actor, walked into Jack Warner’s office with a scandalous script about two mass murderers named Bonnie and Clyde in his hand. Inspired by the upstart, avant-garde filmmakers making waves in France with their edgy, experimental films like Agnes Varda’s La Pointe Courte (1955), Jean-Luc Godard’s Breathless (1960) and Francois Truffaut’s The 400 Blows (1959), Beatty wanted to break the mold of the Warner Bros. gritty crime thriller. He wanted to make something bold, unpredictable, and transgressive. He begged the aging Warner brother to finance the film.

Maybe Jack Warner was at the end of his creative rope. Maybe he knew the movie business needed to start taking risks again. Maybe he was inspired by Beatty’s artistic vision. Or maybe he had just sold the studio to Seven Arts and figured Beatty’s crazy idea for a movie would be their problem, a parting shot before the last Warner left the building.

Whatever the reason, Warner Bros. bankrolled Bonnie and Clyde (1967), tried to bury it on release, but ultimately had to admit they had a huge hit on their hands. It was as bold, unpredictable, and transgressive (for its time) as Beatty had hoped. And audiences, especially younger audiences, loved it.

Six months later, an off-beat comedy no studio would touch called The Graduate (1967) opened to equally enthusiastic audiences and extraordinary profits. And two years after that, BBS, a fledgling production company bankrolled by its success in television, produced Easy Rider (1969), a drug-fueled fever dream of a movie that captured a changing America, a seismic shift in the culture at the end of the 1960s. It cost less than $500,000 to make and earned nearly $60 million at the box office. Something had indeed changed. The major studios weren’t sure exactly what it was, but they knew they wanted a piece of it.

Like the Golden Age which rose from the ashes of the Great Depression and the rise of synchronized sound, The New Hollywood rose from the ashes of The Paramount Decision and the rise of television. Unlike the Golden Age, however, The New Hollywood emphasized the authority of the director and star over the material, not the central producer. And rather than control costs to maximize profits, studios allowed the freelance artists they employed to experiment with the form and take creative risks. In fact, more and more filmmakers were smart enough to shoot on location rather than on the studio backlot where executives might micromanage their productions.

Those risks didn’t always pay off, but when they did, they more than made up for the disappointments. Films like The Godfather (1972) and The Exorcist (1973) broke every accepted norm of cinematography, sound design, narrative structure, editing, performance and even distribution models. And in the process broke every box office record.

But such creative fertility and unpredictability couldn’t last forever. Not when there were billions of dollars at stake. The New Hollywood was done in by a one-two punch of films so successful, so astronomically profitable, that a new term had to be coined for them: Blockbusters.

The first was meant to be a run-of-the-mill Universal monster movie, a direct descendant of the studio’s Golden Age classics like Frankenstein and Dracula. This time around, it would be a shark. A really big shark. And in a (futile) effort to save some money, the studio assigned a 28-year-old television director named Steven Spielberg to helm the project. JAWS (1975) cost $9 million to make (three times more than Universal budgeted) and took 159 days to shoot (three times longer than Universal had hoped), but it grossed more than $120 million in its first theatrical run. It hit Hollywood like a tidal wave. A simple genre movie with clear heroes and just enough eye-popping special effects to wow the audience. Best of all, there was no need for an expensive, star-studded cast or a well-known, temperamental director. The concept was the star. It was a formula the studios understood and knew they could replicate.

Two years later, 20th Century Fox released Star Wars (1977). Its success dwarfed that of JAWS.

Hollywood would never be the same.

BIG MEDIA AND GLOBAL ENTERTAINMENT

The rise of the blockbuster breathed new life into the Hollywood studio system, and by the 1980s, the studios had successfully wrested control of the filmmaking process from the young upstart artists of The New Hollywood era. But with increasing profits came increasing interest from investors and larger multi-national corporations looking to diversify their portfolios. The acquisition of major studios in the late 50s and 60s by mega-companies such as Gulf+Western continued into the 80s and 90s. Consider the numbers: in 1983, 90% of all American media was controlled by more than 50 distinct companies. By 2012, that same share was controlled by just five. By 2019, it was down to four: Comcast, Disney, AT&T, and National Amusements.

This massive consolidation of American media companies has equally massive implications for cinema. Beholden to shareholders and the corporate bottom line, Hollywood studios must be more efficient than ever, producing fewer and fewer movies at higher and higher budgets to attract more and more eyeballs. And if that sounds familiar, you’ve been paying attention. A similar consolidation occurred after the advent of sound and the financial havoc of the stock market crash of 1929. Only this time, the major studios don’t have the luxury of monopoly control through vertical integration (though they are dancing close to the edge, with Comcast and AT&T, both internet and cable providers, controlling nearly half of all media in the United States). Instead, they’ve looked abroad to a new and growing global audience to ensure profitability.

There was a time when international sales made up less than 20% of box office dollars. By 2008, it was 50%. By 2013, it had grown to more than 70% of Hollywood’s bottom line. That’s due in part to a massive investment in theaters around the world. In 2019, there were more than 200,000 cinema screens globally. Just over 44,000 were in the United States and Canada. Nearly 100,000 were in Asia alone. And the theaters themselves are not immune to consolidation. In 2013, Dalian Wanda, a Chinese company, bought the American theater chain AMC for $2.6 billion.

What does all of this mean for contemporary cinema? At the corporate Hollywood level, it means tailoring content for a global audience. That means building film franchises around globally recognizable characters and brands. If you’re thinking Marvel and DC comics, you’re on the right track. That means fewer original movies and more entertainment spectacles that in turn cost more money to make. The lessons Hollywood learned from the blockbusters JAWS and Star Wars in the 1970s seem to have been carried to their logical conclusion.

A NEW HOPE

While much of this (very) brief history of cinema has focused on the media machine that is the Hollywood studio system, cinema – that is, the art of motion pictures – lives and breathes outside of that capital-intensive entertainment ecosystem. And it always has.

Alice Guy-Blaché, Georges Melies, Lois Weber, D.W. Griffith, and most of the very first cinema artists operated independently of any corporate studio. And during that great Golden Age of cinema, which was so dominated by Hollywood studios, independent producers like David O. Selznick were putting out massively popular films like Alfred Hitchcock’s Rebecca (1940) and the perennially remade A Star is Born (1937). One of the most successful films of the era, Gone with the Wind (1939), was arguably an “indie” picture (Selznick produced it with MGM as distributor). In fact, the New Hollywood of the 60s and 70s could not have taken hold at the corporate level without visionary filmmakers like Mike Nichols, Dennis Hopper and Hal Ashby working outside of the studio system.

As the technology required to make motion pictures became easier and cheaper to acquire, more and more cinema artists chose to work outside of the studio system. Towering figures like Shirley Clarke in the 1960s, John Cassavetes in the 1970s and Jim Jarmusch in the 1980s put out provocative and engaging cinema with limited distribution to match their limited budgets, but often with enormous cultural impact. That trend continued into the 1990s and 2000s, supported by new production and distribution companies like Miramax (founded by the now-disgraced Harvey Weinstein) that insisted on working outside of the studio system and often outside of Los Angeles itself.

That independent spirit in American cinema also created space for women and people of color to have a voice in the art form. A quick scan of the history above and you’ll notice there are not a lot of women’s names. And almost all of the men are white. But filmmakers like Shirley Clarke, Julie Dash, and Allison Anders didn’t wait around for Hollywood to give them permission to make great cinema.

And as the massive corporate consolidation of the American media landscape has narrowed the cinematic content coming out of the big studios, that indie spirit – along with a healthy dose of investor interest – has led to new innovations in production and distribution models. Whether it’s pre-selling foreign rights to a script to fund its production, or turning to streaming services for funding in return for exclusive rights to content, filmmakers continue to find new ways to push the boundaries of what is possible in cinema. Just take a look at the nominees for best picture at any of the recent Academy Awards ceremonies. Once dominated by studio-financed pictures, the list is now filled almost entirely with independent productions.

But perhaps the most exciting new direction in cinema is not found in theaters at all. For more than a century, cinema has been most closely associated with that roughly 90-minute, closed-ended feature film playing at a theater near you. And while that continues to be an important cinematic space, the rise of cable and streaming services in desperate need of content has created exciting new frontiers for the medium to explore. No longer restricted to those 90 or so minutes, cinema can sprawl across hundreds of hours, or just a few hours cut into 30-minute chunks. And while it’s tempting to call this a new Golden Age of Television, even the term “television” no longer seems appropriate. We consume this content on all manner of devices: our phones, our laptops, even our wristwatches. Even theatrical content has picked up on the trend. What are the Fast and Furious, Transformers or Avengers franchises but multi-billion-dollar episodic series distributed to theaters (and, after a few months, to our phones, laptops and wristwatches)?

Ultimately, regardless of how it’s made or how we engage with it, all of the above still fits into one artistic medium: cinema, the art of the motion picture. The tools and techniques, the principles of form and content, are all exactly the same. And that will be true whatever comes next, whether it’s VR, AR or a cinema-chip implanted in our visual cortex (heaven forbid…). Mise-en-scene, narrative, cinematography, editing, sound and acting will all still matter. And our understanding of how those tools and techniques shape not only the medium but also our culture will still matter. Maybe more than ever.

How to Watch a Movie

The way cinema communicates is the product of many different tools and techniques, from production design to narrative structure to lighting, camera movement, sound design, performance and editing. But all of these are employed to manipulate the viewer without us ever noticing. In fact, that’s kind of the point. The tools and techniques – the mechanics of the form – are invisible. There may be a thousand different elements flashing before our eyes – a subtle dolly-in here, a rack focus there, a bit of color in the set design that echoes in the wardrobe of the protagonist, a music cue that signals the emotional state of a character, a cut on an action that matches an identical action in the next scene, and on and on and on – but all we see is one continuous moving picture. A trick. An illusion.

In this chapter, we’ll explore how cinematic language works, a bit like breaking down the grammar and rules of spoken language, then we’ll take a look at how to watch cinema with these “rules” in mind. We may not be able to speed up the refresh rate of our optic nerve to catch each of those still images, but we can train our interpretive skills to see how filmmakers use the various tools and techniques at their disposal.

CINEMATIC LANGUAGE

Like any language, we can break cinematic language down to its most fundamental elements. Before grammar and syntax can shape meaning by arranging words or phrases in a particular order, the words themselves must be built up from letters, characters or symbols. The basic building blocks. In cinema, those basic building blocks are shots. A shot is one continuous capture of a span of action by a motion picture camera. It could last minutes (or even hours), or less than a second. Basically, a shot is everything that happens within the frame of the camera – that is, the visible border of the captured image – from the moment the director calls “Action!” to the moment she calls “Cut!”

These discrete shots rarely mean much in isolation. They are full of potential and may be quite interesting to look at on their own, but cinema is built up from the juxtaposition of these shots, dozens or hundreds of them, arranged in a particular order – a cinematic syntax – that renders a story with a collectively discernible meaning. We have a word for that too: Editing. Editing arranges shots into patterns that make up scenes, sequences and acts to tell a story, just like other forms of language communicate through words, sentences and paragraphs.

From these basic building blocks, we have developed a cinematic language, a set of rules and conventions by which cinema communicates meaning to the viewer. And, by “we,” I mean all of us, filmmakers and audiences alike, from the earliest motion picture to the latest VR experience. Cinematic language – just like any other language – is an organic, constantly evolving shared form of communication. It is an iterative process, one that is refined each time a filmmaker builds a story through a discrete number of shots, and each time an audience responds to that iteration, accepting or rejecting, but always engaging in the process. Together, we have developed a visual lexicon. A lexicon describes the shared set of meaningful units in any language. Think of it as the list of all available words and parts of words in a language we carry around in our heads.  A visual lexicon is likewise the shared set of meaningful units in our collective cinematic language: images, angles, transitions and camera moves that we all understand mean something when employed in a motion picture.

But here’s the trick: We’re not supposed to notice any of it. The visual lexicon that underpins our cinematic language is invisible, or at least, it is meant to recede into the background of our comprehension. Cinema can’t communicate without it, but if we pay too much attention to it, we’ll miss what it all means. A nifty little paradox. But not so strange or unfamiliar when you think about it. It’s precisely the same with any other language. As you read these characters, words, sentences and paragraphs, you are not stopping to parse each unit of meaning, analyze the syntax or double check the sentence structure. All those rules fade to the background of your own fluency and the meaning communicated becomes clear (or at least, I sure hope it does). And that goes double for spoken language. We speak and comprehend in a fluent flow of grammar and syntax, never pausing over the rules that have become second nature, invisible and unnoticed.

So, what are some of those meaningful units of our cinematic language? Perhaps not surprisingly, a lot of them are based on how we experience the world in our everyday lives. Camera placement, for example, can subtly orient our perspective on a character or situation. Place the camera mere inches from a character’s face – known as a close-up – and we’ll feel more intimately connected to their experience than if the camera were further away, as in a medium shot or long shot. Place the camera below the eyeline of a character, pointing up – known as a low-angle shot – and that character will feel dominant, powerful, worthy of respect. We are literally looking up to them. Place the camera at eye level, and we feel like equals. Let the camera hover above a character or situation – known as a high-angle shot – and we feel like gods, looking down on everyone and everything. Each choice affects how we see and interpret the shot, scene and story.

We can say the same about transitions from shot to shot. Think of them as conjunctions in grammar, words meant to connect ideas seamlessly. The more obvious examples, like fade-ins and fade-outs or long dissolves, are still drawn from our experience. Think of a slow fade-out, where the screen drifts into blackness, as an echo of our experience of falling asleep, drifting out of consciousness. In fact, fade-outs are most often used in cinema to indicate the close of an act or segment of story, much like the end of a long day. And dissolves are not unlike the way we remember events from our own experience, one moment bleeding into and overlapping with another in our memory.

But perhaps the most common and least noticed transition, by design, is a hard cut that bridges some physical action on screen. It’s called cutting on action and it’s a critical part of our visual lexicon, enabling filmmakers to join shots, often from radically different angles and positions, while remaining largely invisible to the viewer. The concept is simple: whenever a filmmaker wants to cut from one shot to the next for a new angle on a scene, she ends the first shot in the middle of some on-screen action, opening a door or setting down a glass, then begins the next shot in the middle of that same action. The viewer’s eye is drawn to the action on screen and not the cut itself, rendering the transition relatively seamless, if not invisible to the viewer.

Camera placement and transitions, along with camera movement, lighting style, color palette and a host of other elements make up the visual lexicon of cinematic language, all of which we will explore in the chapters to follow. In the hands of a gifted filmmaker, these subtle adjustments work together to create a coherent whole that communicates effectively (and invisibly). In the hands of not so gifted filmmakers, these choices can feel haphazard, unmotivated, or perhaps worse, “showy” – all style and no substance – creating a dissonant, ineffective cinematic experience. But even then, the techniques themselves remain largely invisible. We are simply left with the feeling that it was a “bad” movie, even if we can’t quite explain why.

EXPLICIT AND IMPLICIT MEANING

Once we have a grasp on these small, meaningful units of our collective cinematic language we can begin to analyze how they work together to communicate bigger, more complex ideas.

Take the work of Lynne Ramsay, for example. As a director, Ramsay builds a cinematic experience by paying attention to the details, the little things we might otherwise never notice.

Cinema, like literature, builds up meaning through the creative combination of these smaller units, but, also like literature, the whole is – or should be – much more than the sum of its parts. For example, Moby Dick is a novel that explores the nature of obsession, the futility of revenge and humanity’s essential conflict with nature. But of the more than 200,000 words that make up that book, few if any communicate those ideas directly. In fact, we can distinguish between explicit meaning, that is, the obvious, directly expressed meaning of a work of art, be it a novel, painting or film, and implicit meaning, the deeper, essential meaning suggested but not necessarily directly expressed by any one element. Moby Dick is explicitly about a man trying to catch a whale, but as any literature professor will tell you, it was never really about the whale.

That comparison between cinema and literature is not accidental. Both start with the same fundamental element, that is, a story. As we will explore in a later chapter, before a single frame is photographed, cinema begins with the written word in the form of a screenplay. And like any literary form, screenplays are built around a narrative structure. Yes, that’s a fancy way of saying story, but it’s more than simply a plot or an explicit sequence of events. A well-conceived narrative structure provides a foundation for that deeper, implicit meaning a filmmaker, or really any storyteller, will explore through their work.

Another way to think about that deeper, implicit meaning is as a theme, an idea that unifies every element of the work, gives it coherence and communicates what the work is really about. And really great cinema manages to suggest and express that theme through every shot, scene, and sequence. Every camera angle and camera move, every line of dialogue and sound effect, every music cue and editing transition will underscore, emphasize, and point to that theme without ever needing to spell it out or make it explicit. An essential part of analyzing cinema is the ability to identify that thematic intent and then trace its presence throughout.

Unless there is no thematic intent, or the filmmaker did not take the time to make it a unifying idea. Then you may have a “bad” movie on your hands. But at least you’re well on your way to understanding why!

So far, this discussion of explicit and implicit meaning, theme, and narrative structure points to a deep kinship between cinema and literature. But cinema has far more tools and techniques at its disposal to communicate meaning, implicit or otherwise. Sound, performance, and visual composition all point to deep ties with music, theater, and painting or photography as well. And while each of those art forms employs its own strategies for communicating explicit and implicit meaning, cinema draws on all of them at once in a complex, multi-layered system.

Let’s take sound, for example. As you know from the brief history of cinema in the last chapter, cinema existed long before the introduction of synchronized sound in 1927, but since then, sound has become an equal partner with the moving image in the communication of meaning. Sound can shape the way we perceive an image, just as an image can change the way we perceive a sound. It’s a relationship we call co-expressive.

This is perhaps most obvious in the use of music. A non-diegetic musical score, that is, music that only the audience can hear because it exists outside the world of the characters, can drive us toward an action-packed climax, or sweep us up in a romantic moment. Or it can contradict what we see on the screen, creating a sense of unease at an otherwise happy family gathering or making us laugh during a moment of excruciating violence. In fact, this powerful combination of moving image and music pre-dates synchronized sound. Even some of the earliest silent films were shipped to theaters with a musical score meant to be played during projection.

But as powerful as music can be, sound in cinema is much more than just music. Sound design includes music, but also dialog, sound effects and ambient sound to create a rich sonic context for what we see on the screen. From the crunch of leaves underfoot, to the steady hum of city traffic, to the subtle crackle of a cigarette burning, what we hear – and what we don’t hear – can put us in the scene with the characters in a way that images alone could never do, and as a result, add immeasurably to the effective communication of both explicit and implicit meaning.

We can say the same about the relationship between cinema and theater. Both use a carefully planned mise-en-scene – the overall look of the production, including set design, costume and make-up – to evoke a sense of place and visual continuity. And both employ the talents of well-trained actors to embody characters and enact the narrative structure laid out in the script.

Let’s focus on acting for a moment. Theater, like cinema, relies on actors’ performances to communicate not only the subtleties of human behavior, but also the interplay of explicit and implicit meaning. How an actor interprets a line of dialog can make all the difference in how a performance shifts our perspective, draws us in or pushes us away. And nothing ruins a cinematic or theatrical experience like “bad” acting. But what do we really mean by that? Often it means the performance wasn’t connected to the thematic intent of the story, the unifying idea that holds it all together. We’ll even say things like, “The actor seemed like they were in a different movie from everyone else.” That could be because the director didn’t clarify a theme in the first place, or perhaps they didn’t shape, or direct, an actor’s performance toward one. It could also simply be poor casting.

All of the above applies to both cinema and theater, but cinema has one distinct advantage: the intimacy and flexibility of the camera. Unlike theater, where your experience of a performance is dictated by how far you are from the stage, the filmmaker has complete control over your point of view. She can pull you in close, allowing you to observe every tiny detail of a character’s expression, or she can push you out further than the cheapest seats in a theater, showing you a vast and potentially limitless context. And perhaps most importantly, cinema can move between these points of view in the blink of an eye, manipulating space and time in a way live theater never can. All of those choices affect how we engage the thematic intent of the story, how we connect to what that particular cinematic experience really means. And because of that, in cinema, whether we realize it or not, we identify most closely with the camera. No matter how much we feel for our hero up on the screen, we view it all through the lens of the camera.

And that central importance of the camera is why the most obvious tool cinema has at its disposal in communicating meaning is visual composition. Despite the above emphasis on the importance of sound, cinema is still described as a visual medium. Even the title of this section is “How to Watch a Movie.” Not so surprising when you think about the lineage of cinema and its origin in the fixed images of the camera obscura, daguerreotypes, and series photography. All of which owe a debt to painting, both as an art form and a form of communication. In fact, the cinematic concept of framing has a clear connection to the literal frame, or physical border, of paintings. And one of the most powerful tools filmmakers – and photographers and painters – have at their disposal for communicating both explicit and implicit meaning is simply what they place inside the frame and what they leave out.

Another word for this is composition, the arrangement of people, objects and setting within the frame of an image. And if you’ve ever pulled out your phone to snap a selfie, or maybe a photo of your meal to post on social media, you are intimately aware of the power of composition. Adjusting your phone this way and that to get just the right angle, to include just the right bits of your outfit, maybe edge Greg out of the frame just in case things don’t work out (sorry, Greg). Point is, composing a shot is a powerful way we tell stories about ourselves every day. Filmmakers, the really good ones, are masters of this technique. And once you understand this principle, you can start to analyze how a filmmaker uses composition to serve their underlying thematic intent, to help tell their story.

One of the most important ways a filmmaker uses composition to tell their story is through repetition, a pattern of recurring images that echoes a similar framing and connects to a central idea. And like the relationship between shots and editing – where individual shots only really make sense once they are juxtaposed with others – a well-composed image may be interesting or even beautiful on its own, but it only starts to make sense in relation to the implicit meaning or theme of the overall work when we see it as part of a pattern.

Take, for example, Stanley Kubrick’s signature use of one-point perspective, or the way Barry Jenkins uses color in Moonlight (2016).

These recurring images are part of that largely invisible cinematic language. We aren’t necessarily supposed to notice them, but we are meant to feel their effects. And it’s not just visual patterns that can serve the filmmaker’s purposes. Recurring patterns, or motifs, can emerge in the sound design, narrative structure, mise-en-scene, dialog and music.

But there is one distinction that should be made between how we think about composition and patterns in cinema and how we think about those concepts in photography or painting. While all of the above employ framing to achieve their effects, photography and painting are limited to what is fixed in that frame by the artist at the moment of creation. Only cinema adds an entirely new and distinct dimension to the composition: movement. That includes movement within the frame – as actors and objects move freely, recomposing themselves within the fixed frame of a shot – as well as movement of the frame itself, as the filmmaker moves the camera in the setting and around those same actors and objects. This increases the compositional possibilities exponentially for cinema, allowing filmmakers to layer in even more patterns that serve the story and help us connect to their thematic intent.

FORM, CONTENT AND THE POWER OF CINEMA

As we become more attuned to the various tools and techniques that filmmakers use to communicate their ideas, we will be able to better analyze their effectiveness. We’ll be able to see what was once invisible. A kind of magic trick in itself. But as I tried to make clear from the beginning, my goal is not to focus solely on form, to dissect cinema into its constituent parts and lose sight of its overall power. Cinema, like any art form, is more than the sum of its parts. And it should be clear already that form and content go hand in hand. Pure form, all technique and no substance, is meaningless. And pure content, all story and no style, is didactic and, frankly, boring. How the story is told is as important as what the story is about.

However, just as we can analyze technique, the formal properties of cinema, to better understand how a story is communicated, we can also analyze content, that is, what stories communicate, to better understand how they fit into the wider cultural context. Cinema, again like literature, can serve as a valuable cultural document, reflecting our own ideas, values and morals back to us as filmmakers and audiences.

I’ve discussed at length the idea of a cinematic language, and the fact that as a form of communication it is largely invisible or subconscious. Interestingly, the same can be said for cinematic content. Or, more specifically, the cultural norms that shape cinematic content. Cinema is an art form like any other, shaped by humans bound up in a given historical and cultural context. And no matter how enlightened and advanced those humans may be, that historical and cultural context is so vast and complex they cannot possibly grasp every aspect of how it shapes their view of the world. Inevitably, those cultural blind spots, the unexamined norms and values that make us who we are, filter into the cinematic stories we tell and how we tell them.

The result is a kind of cultural feedback loop where cinema both influences and is influenced by the context in which it is created.

Because of this, on the whole, cinema is inherently conservative. That is to say, as a form of communication it is more effective at conserving or re-affirming a particular view of the world than challenging or changing it. This is due in part to the economic reality that cinema, historically a very expensive medium, must appeal to the masses to survive. As such, it tends to avoid offending our collective sensibilities, to make us feel better about who we already think we are. And it is also due in part to the social reality that the people who have historically had access to the capital required to produce that very expensive medium tend to all look alike. That is, mostly white, and mostly men. And when the same kind of people with the same kind of experiences tend to have the most consistent access to the medium, we tend to get the same kinds of stories, reproducing the same, often unexamined, norms, values and ideas.

But that doesn’t mean cinema can’t challenge the status quo, or at least reflect real, systemic change in the wider culture already underway. That’s what makes the study of cinema, particularly in regard to content, so endlessly fascinating. Whether it’s tracking the way cinema reflects the dominant cultural norms of a given period, or the way it sometimes rides the leading edge of change in those same norms, cinema is a window – or frame (see what I did there) – through which we can observe the mechanics of cultural production, the inner workings of how meaning is produced, shared, and sometimes broken down over time.

EVERYONE’S A CRITIC

In a short interview, film critic and author Ann Hornaday once shared some tips for watching like a critic: in short, taste and analysis are different skills.

Inasmuch as cinema is a cultural phenomenon, a mass medium with a crucial role in the production of meaning, it’s also an art form meant to entertain. And while I think one can assess the difference between a “good” movie and a “bad” movie in terms of its effectiveness, that has little to do with whether one likes it or not.

In other words, you don’t have to necessarily like a movie to analyze its use of a unifying theme or the way the filmmaker employs mise-en-scene, narrative structure, cinematography, sound and editing to effectively communicate that theme. Citizen Kane (Orson Welles, 1941), arguably one of the greatest films ever made, is an incredibly effective motion picture. But it might not be your favorite movie. That doesn’t mean that you can’t appreciate the artistry in the creation of the film.

Fortunately, the opposite is also true: You can really, really like a movie that isn’t necessarily all that good. Maybe there’s no unifying theme, maybe the cinematography is all style and no substance (or no style and no substance), maybe the narrative structure is made out of toothpicks and the acting is equally thin and wooden. (That’s right, Twilight, I’m looking at you.) Who cares? You like it.

That’s great. Embrace it. Because taste in cinema is subjective. But analysis of cinema doesn’t have to be. You can analyze anything. Even things you don’t like.


Attributions:

“The History of Film” adapted from “A Brief History of Cinema” by Russell Sharman, licensed CC BY NC SA.

“How to Watch a Movie” adapted from “How to Watch a Movie” by Russell Sharman, licensed CC BY NC SA.

License


The Worry Free Writer Copyright © 2020 by Dr. Karen Palmer is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
