Multimodality


Multimodality is the application of multiple literacies within one medium. Multiple literacies, or "modes", contribute to an audience's understanding of a composition. Everything from the placement of images to the organization of the content to the method of delivery creates meaning. This reflects a shift away from reliance on isolated text as the primary means of communication and toward the more frequent use of images in the digital age. Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources used to compose messages.
While all communication, literacy, and composing practices are and always have been multimodal, academic and scientific attention to the phenomenon only began gaining momentum in the 1960s. Work by Roland Barthes and others has led to a broad range of disciplinarily distinct approaches. More recently, rhetoric and composition instructors have included multimodality in their coursework. In its position statement Understanding and Teaching Writing: Guiding Principles, the National Council of Teachers of English states that "'writing' ranges broadly from written language, to graphics, to mathematical notation."

Definition

Although discussions of multimodality involve both medium and mode, the two terms are not synonymous. They may overlap, however, depending on how precisely individual authors and traditions use them.
Gunther Kress's scholarship on multimodality is canonical within social semiotic approaches and has considerable influence in many others, such as writing studies. Kress defines 'mode' in two ways. In the first, a mode is a type of material resource that is socially and culturally shaped to make meaning; images, writing, speech, and gesture are all examples of modes. In the second, modes are semiotic resources, shaped both by their intrinsic characteristics and potential within a medium and by what their culture or society requires of them.
Thus, every mode carries distinct historical and cultural potentials and limitations for its meaning. For example, if writing were broken down into its modal resources, grammar, vocabulary, and graphic "resources" would be the acting modes. Graphic resources can be further broken down into font size, typeface, color, and spacing within paragraphs, among others. However, these resources are not deterministic. Instead, modes shape and are shaped by the systems in which they participate. Modes may aggregate into multimodal ensembles and be shaped over time into familiar cultural forms. A good example is film, which combines visual modes, modes of dramatic action and speech, and modes of music or other sound. Studies of multimodal work in this field include van Leeuwen; Bateman and Schmidt; and Burn and Parker's theory of the kineikonic mode.
In social semiotic accounts, a medium is the substance in which meaning is realized and through which it becomes available to others; mediums include video, image, text, and audio. Socially, a medium encompasses semiotic, sociocultural, and technological practices, as in film, newspapers, billboards, radio, television, or the classroom. Multimodality also makes use of the electronic medium, creating digital modes through the interlacing of image, writing, layout, speech, and video. Mediums thus become modes of delivery that take into account both current and future contexts.

History

Multimodality has received increasingly theoretical characterization throughout the history of communication. Indeed, the phenomenon has been studied at least since the 4th century BC, when classical rhetoricians alluded to it in their emphasis on voice, gesture, and expression in public speaking. The term itself, however, was not given a significant definition until the 20th century, when a rapid rise in technology created many new modes of presentation. Multimodality has since become standard in the 21st century, applying to various network-based forms such as art, literature, social media, and advertising. Monomodality, or the singular mode, which once defined the presentation of text on a page, has been replaced with more complex and integrated layouts. As John A. Bateman writes in his book Multimodality and Genre, "Nowadays… text is just one strand in a complex presentational form that seamlessly incorporates visual aspects 'around,' and sometimes even instead of, the text itself." Multimodality has quickly become "the normal state of human communication."

Expressionism

During the 1960s and 1970s, many writers looked to photography, film, and audiotape recordings to discover new ideas about composing. This led to a resurgence of focus on sensory experience and self-expression known as expressionism. Expressionist ways of thinking encouraged writers to find their voice outside of language by placing it in a visual, oral, spatial, or temporal medium. Donald Murray, who is often linked to expressionist methods of teaching writing, once said, "As writers it is important that we move out from that which is within us to what we see, feel, hear, smell, and taste of the world around us. A writer is always making use of experience." Murray instructed his writing students to "see themselves as cameras" by writing down every visual observation they made for one hour. Expressionist thought emphasized personal growth and linked the art of writing with all visual art by calling both a type of composition. By making writing the result of a sensory experience, expressionists also defined writing as a multisensory activity and asked that it be free to be composed across all modes, tailored for all five senses.

Cognitive developments

During the 1970s and 1980s, multimodality was further developed through cognitive research on learning. Jason Palmeri cites researchers such as James Berlin and Joseph Harris as important to this development; Berlin and Harris studied alphabetic writing and how its composition compared to art, music, and other forms of creativity. Their research took a cognitive approach, studying how writers thought about and planned their writing process. James Berlin declared that the process of composing writing could be directly compared to that of designing images and sound. Joseph Harris, furthermore, pointed out that alphabetic writing is the result of multimodal cognition: writers often conceptualize their work by non-alphabetic means, through visual imagery, music, and kinesthetic feelings. This idea was reflected in the popular research of Neil Fleming on what are commonly known as neuro-linguistic learning styles. Fleming's three styles of auditory, kinesthetic, and visual learning helped to explain the modes in which people were best able to learn, create, and interpret meaning. Other researchers, such as Linda Flower and John R. Hayes, theorized that alphabetic writing, though a principal modality, sometimes could not convey the non-alphabetic ideas a writer wished to express.

Audience

Every text has its own defined audience, and its composer makes rhetorical decisions to improve the audience's reception of that text. In this way, multimodality has evolved into a sophisticated means of appealing to a text's audience. In 1984, Lisa Ede and Andrea Lunsford provided a framework for discussing audience that distinguished addressed audiences from invoked, or imagined, audiences. While conversations about audience have continued since then, it remains important to consider the role of audience when engaging in multimodal composing.
By relying on the canons of rhetoric in new ways, multimodal texts have the ability to address a larger, yet more focused, intended audience. Multimodality does more than solicit an audience; its effects are embedded in an audience's semiotic, generic, and technological understanding.

Psychological effects

The appearance of multimodality, at its most basic level, can change the way an audience perceives information. The most basic understanding of language comes via semiotics: the association between words and symbols. A multimodal text changes its semiotic effect by placing words with preconceived meanings in a new context, whether that context is audio, visual, or digital, which in turn creates a new, foundationally different meaning for an audience. Bezemer and Kress, two scholars of multimodality and semiotics, argue that students understand information differently when text is delivered in conjunction with a secondary medium, such as image or sound, than when it is presented in alphanumeric format alone, because recontextualization draws a viewer's attention to "both the originating site and the site of recontextualization". As meaning moves from one medium to the next, the audience must redefine its semiotic connections. Recontextualizing an original text within other mediums creates a different sense of understanding for the audience, and this new type of learning can be controlled by the types of media used.
Multimodality can also be used to associate a text with a specific argumentative purpose, e.g., to state facts, make a definition, render a value judgment, or propose a policy decision. Jeanne Fahnestock and Marie Secor, professors at the University of Maryland and the Pennsylvania State University, labeled the fulfillment of these purposes stases. A text's stasis can be altered by multimodality, especially when several mediums are juxtaposed to create an individualized experience or meaning. For example, an argument that mainly defines a concept is understood as arguing in the stasis of definition; it can also be assigned a stasis of value, however, if the way the definition is delivered equips the audience to evaluate the concept, or judge whether something is good or bad. If the text is interactive, the audience can create its own meaning from the perspective the multimodal text provides. By emphasizing different stases through different modes, writers further engage their audience in creating comprehension.