Developed by Keio Media Design’s Okude Laboratory, memorylane is a tablet/screen interface that allows users to draw on digital photos and exchange them with friends.
Developed by the Spatial Information Architecture Lab (SIAL) at RMIT University in Melbourne, Aegis explores interactive, indeterminate space.
“The Aegis Hyposurface is an art/architecture device that effectively links information systems with physical form to produce dynamically variable, tactile ‘informatic’ surfaces. Aegis is perhaps the world’s first such dynamic screen…. We therefore think of the Aegis Hyposurface as a giant sketchpad for a new age, a now 3-dimensional absorptive medium that allows all manner of graphic and glyphic sketching.”
Started in 2002, Brown University’s Cave Writing Workshops utilize an immersive environment to explore the intersections of text, sound, visuality, narrative, and space.
“Powered by a high-performance parallel computer, the Cave is an eight-foot cube, wherein the floor and three walls are projected with high-resolution stereo graphics to create a virtual environment, viewed through special “shutter-lens” glasses. The Cave Writing Workshop has introduced a Macintosh sound server to provide positional sound and augment the Cave’s performance potential, surrounding the “reader” with dynamic three-dimensional sound as well as visuals. It has brought text into this highly visual environment in the composing of narrative and poetic works of art, and has experimented with navigational structures more akin to narrative, and in particular hypertext narrative, than to the predominant forms of spatial exploration.” (Cave Writing Workshop website)
* New Reading Interfaces Objects
Immersive Text Environments
“Slated to debut in the spring of 2006, the Sony Reader marks a key example of the next generation of commercial eBooks. While previous eBooks suffered criticism for their bulky appearances, hard-to-read screens, and limited availability of downloadable works, Sony claims to have resolved these problems through its use of new technologies that include e-ink, “electronic paper,” and a “CONNECT store” from which customers can purchase various downloadable texts. At the time of this writing, the product has not yet been released, but the pre-release reviews of the Reader have been extremely positive across a variety of technology-centered forums.” (from Lisa Swanstrom’s Research Report)
Innovative method for creating organizational structures and ontologies online.
“The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. It is a collaborative effort led by W3C with participation from a large number of researchers and industrial partners. It is based on the Resource Description Framework (RDF)” (W3C).
Starter Links: Glossary definition | W3C
The World Wide Web Consortium (W3C) uses the term “the semantic web” as an umbrella identifier for a number of initiatives that enable developers and archivists to add rich, meaningful metadata to digital resources. According to the W3C, the major reason for these initiatives to tag information with explicit meaning is to make “it easier for machines to automatically process and integrate information.” The semantic web adds depth to the existing web protocols running over the application layer of the internet without involving any changes to its more basic architecture. Currently, the main feature that organizes the web is the “link”: any document (or resource) can link to any other, and each link is coupled with a method (or protocol) for presenting the resource to the user or application that followed that link (e.g., by clicking on it). In one sense, then, the web is completely non-hierarchical and unstructured. The only structural meaning of a link between two web pages (or other resources) is simply that one of them refers to the other (and possibly vice versa); all other meanings are entirely contextual and must be interpreted by humans. The goal of the semantic web is to provide a richer structure of relationships that formally defines some of the meanings linking resources, and in particular to provide an extensible, uniform structure that can be easily interpreted by search engines and other software tools. The W3C describes a number of potential practical applications of semantic web technologies, including enhanced search engines for multimedia collections, automated categorization, intelligent agents, web service discovery, and content mapping between disparate electronic resources. (more…)
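The machine integration the W3C describes can be illustrated with a small sketch in plain Python (the catalogs and URIs here are invented for illustration; no actual semantic-web library is used): once independent sources identify a resource by the same URI, software can merge their descriptions without any human interpretation of context.

```python
# Illustrative sketch of the semantic-web idea: shared, unambiguous
# identifiers (URIs) let software integrate data from independent sources.

# Two hypothetical data sources describing the same resource by its URI.
catalog_a = {
    "http://example.org/book/frankenstein": {"title": "Frankenstein", "year": 1818},
}
catalog_b = {
    "http://example.org/book/frankenstein": {"creator": "Mary Shelley"},
}

def integrate(*sources):
    """Merge property sets for resources that share a URI."""
    merged = {}
    for source in sources:
        for uri, properties in source.items():
            merged.setdefault(uri, {}).update(properties)
    return merged

combined = integrate(catalog_a, catalog_b)
print(combined["http://example.org/book/frankenstein"])
# {'title': 'Frankenstein', 'year': 1818, 'creator': 'Mary Shelley'}
```

Because the two catalogs agree on the identifier rather than on page context, the merge needs no interpretation step, which is precisely what machine-readable metadata is meant to enable.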
“A Uniform Resource Identifier (URI), is a compact string of characters used to identify or name a resource. The main purpose of this identification is to enable interaction with representations of the resource over a network, typically the World Wide Web, using specific protocols. URIs are defined in schemes defining a specific syntax and associated protocols.” (Wikipedia)
The most common URI is a URL, or Uniform Resource Locator, which both identifies a resource and describes how to find it. The URN, or Uniform Resource Name, names a resource without giving location information.
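The URL/URN distinction can be seen directly with Python's standard-library URI parser (the example URL and ISBN are illustrative):

```python
from urllib.parse import urlparse

# A URL is a URI that also tells you *how* and *where* to retrieve the
# resource: the scheme names the protocol, the netloc names the host.
url = "https://www.w3.org/standards/semanticweb/"
parts = urlparse(url)
print(parts.scheme)   # "https" (the retrieval protocol)
print(parts.netloc)   # "www.w3.org" (the location)
print(parts.path)     # "/standards/semanticweb/"

# A URN, by contrast, only names the resource; nothing in it says where
# to find a copy. This one names a book by its ISBN.
urn = "urn:isbn:0451450523"
print(urlparse(urn).scheme)  # "urn"
```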
“American Standard Code for Information Interchange. The world-wide standard for the code numbers used by computers to represent all the upper- and lower-case Latin letters, numbers, punctuation, and related data. Each alphanumeric character is represented as a number from 0 to 127, translated into a 7-bit binary code for the computer. ASCII is used by most computers and printers, and because of this, text-only files can be transferred easily between different kinds of computers. ASCII code also includes characters to indicate backspace, carriage return, etc., but does not include accents and special letters not used in English. Extended ASCII has additional characters (128-255).” (TechDictionary.com).
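The character-to-number mapping described above is directly visible in any modern language; in Python, `ord()` and `chr()` convert between characters and their code numbers:

```python
# ord() and chr() map between characters and the numeric codes
# described above.
print(ord("A"))   # 65
print(chr(97))    # "a"

# Every standard ASCII code fits in 7 bits (0-127)...
assert all(ord(c) < 128 for c in "Hello, World!")

# ...which is why text-only ASCII files survive transfer between systems,
# while accented letters fall outside the 7-bit range (extended ASCII or
# Unicode is needed for them).
print(ord("é"))                  # 233, outside 0-127
print(format(ord("A"), "07b"))   # "1000001", the 7-bit pattern for "A"
```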
“The OWL Web Ontology Language is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly-expressive sublanguages: OWL Lite, OWL DL, and OWL Full” (W3C).
RDF, or Resource Description Framework, is a means of structuring metadata and describing relationships between resources, generally via XML namespaces. A resource can be any discrete item – a web page, a .pdf file, a media file, etc. A resource such as a web page might have particular properties defined, such as “title,” “content,” and “creator.” Properties are non-hierarchical, and the more properties that are defined, the better search interfaces are able to sort through large numbers of resources. Relationships between resources can be made explicit by defining properties. For example, the resource “The Last Man” might be linked to the resource “Mary Shelley” via the property “author of.” (more…)
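The core RDF idea (every statement is a subject-property-object triple, and triples stay flat and queryable) can be sketched with plain Python tuples rather than a real RDF library; the sample data mirrors the Mary Shelley example above:

```python
# A minimal sketch of the RDF data model: flat, non-hierarchical
# (subject, property, object) triples, matched by pattern.
triples = [
    ("The Last Man", "creator", "Mary Shelley"),
    ("The Last Man", "title", "The Last Man"),
    ("Frankenstein", "creator", "Mary Shelley"),
]

def query(triples, subject=None, prop=None, obj=None):
    """Return triples matching the given pattern; None matches anything."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (prop is None or t[1] == prop)
        and (obj is None or t[2] == obj)
    ]

# Which resources have "Mary Shelley" as their creator?
works = [s for s, p, o in query(triples, prop="creator", obj="Mary Shelley")]
print(works)  # ['The Last Man', 'Frankenstein']
```

Real RDF additionally names subjects and properties with URIs so that independent datasets can be merged unambiguously, but the pattern-matching shape of queries is the same.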
Collex is a tool developed at the University of Virginia’s Applied Research in Patacriticism lab (ARP) and currently operated in conjunction with NINES (Networked Interface for Nineteenth-century Electronic Scholarship). Described as an “interpretive hub” (Nowviskie), Collex acts as an interface to nine different peer-reviewed, scholarly databases. The interface allows users to access all nine databases in one search, while results retain the unique characteristics of each individual source. Additionally, users can create exhibits for their own personal use, or they may submit exhibits to be shared with all users. As such, Collex and its relationship to data evolve as users interact with it, relying on folksonomy and user-generated relationships to construct new ways of viewing the information it contains. (more…)
About the Author: Monica Bulger is a doctoral student at the Gevirtz Graduate School of Education, former UCSB Writing Program Lecturer, and current Co-Director of the Bren Graduate Writing Center. Her research interests include educational technologies, cognitive writing processes, and student engagement. She currently works with the Technology in Education research initiative, an interdisciplinary team that studies the impacts of technology on student learning. More information about the author.
Related Categories: Literacy Studies
Today’s online reading experience is a convergence of search engines, blogs, wikis, forums, social networks, RSS feeds, and traditional web pages (Leu & Kinzer, 2000). Efforts such as Google Books, Yahoo’s Open Content Alliance, and digital libraries are increasing the rate at which resources such as journal articles, books, periodicals, and informational websites are published online (Carlson & Young, 2004; Gorman & Wilkin, 2005; Hafner, 2005). Correspondingly, an increasing percentage of the U.S. population (73% in 2006) is turning to online resources for work-related research, education, and general information about hobbies, health, and shopping (Madden, 2006). Online users now have access to vast amounts of information but may not know how to use it (Azevedo & Cromley, 2004; Rouet, 2006). The risk of information overload, combined with the seductive distractions of online media, challenges users to develop savvy navigation and filtering skills. Faced with over eight billion pages of information (Lyman & Varian, 2003; Markoff, 2005) and unlimited opportunities for interaction, how do online users select what they need and know when to stop? (more…)
First launched in February of 2004, Facebook.com (initially known as Thefacebook.com) is an online networking website that allows users to create their own profiles and link to and view the profiles of others. Facebook is unique in that its online communities are based on offline university communities and membership is restricted to users with a .edu email address.
Facebook is the second fastest growing website and is particularly popular with young adults currently enrolled in or recently graduated from college. Because Facebook users are organized by college affiliation, users have a clear offline presence. The site thus offers the opportunity to investigate the relationship between offline communities and their online counterparts. (more…)
Related Categories: Related Blogs
InfoDesign: Understanding by Design is a blog devoted to the relatively new discipline of Information Design. The blog, maintained by a small team, is widely touted as one of the most comprehensive views of the growing field. New posts appear approximately weekly, compiling links to pertinent articles, people, companies, organizations, degree programs, publications, events, and job postings. Like any blog, InfoDesign has little native content to review, as it consists primarily of links to articles and websites. For the Transliteracies project, InfoDesign is relevant in two capacities: as a guide to the field of Information Design – a young academic and professional field devoted largely to improving online reading – and as an index to the most important topics within the field. Every link to a news article is categorized, and each of these categories – there are currently 35 – is browsable. These 35 categories read as a list of what is important in information design right now, and each will be reviewed for relevance to the Transliteracies project. (more…)
Table of Contents
» 1. Educational psychology
Document use skills
Learning to read
Learning to write
- Gibson, J.J. (1979). The ecological approach to visual perception. New York: Houghton Mifflin.
- Seely Brown, J. & Duguid, P. (2002). The social life of information. Boston, MA: Harvard Business School Press.
- Sellen, A.J. & Harper, R. H. R. (2003). The myth of the paperless office. Massachusetts: Massachusetts Institute of Technology.
Document use skills
- Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source information. Cognition and Instruction, 20, 485-522.
- Macedo-Rouet, M., Rouet, J.F., Epstein, I., & Fayard, P. (2003). Effects of online reading on popular science comprehension. Science Communication, 25 (2), 99-128.
- Rouet, J-F. (2006). The skills of document use. Mahwah, NJ: Erlbaum.
- Wineburg, S.S. (1991). Historical problem solving: A study of the cognitive process used in the evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83 (1), 73-87.
- Azevedo, R. & Cromley, J.G. (2004). Does training on self-regulated learning facilitate students’ learning with hypermedia? Journal of Educational Psychology, 96 (3), 523-535.
- Dembo, M.H. & Lynch, R. (2006). Becoming a self-regulated learner: Implications for web-based education. In O’Neil, H.F. & Perez, R.S. (Eds.) Web-based learning: Theory, research, and practice. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
- Dillon, A. & Jobst, J. (2005). Multimedia learning with hypermedia. In Mayer, R.E. (Ed.) Cambridge handbook of multimedia learning. New York: Cambridge University Press.
- Mayer, R.E. (2001). Multimedia learning. New York: Cambridge University Press.
- Mosenthal, P.B. (2000). Assessing knowledge restructuring in visually rich, procedural domains: The case of garbage-disposal repair writ/sketched large. In Pailliotet, A.W. & Mosenthal, P.B. (Eds.). Reconceptualizing literacy in the media age. Stamford, Connecticut: JAI Press.
- Pressley, M. (1986). The relevance of the good strategy user model to the teaching of mathematics. Educational Psychologist, 21 (1 & 2), 139-161.
- Rouet, J-F., Britt, M. A., Mason, R. A., & Perfetti, C. A. (1996). Using multiple sources of evidence to reason about history. Journal of Educational Psychology, 88, 478-493.
- Winne, P. H. (2001). Self-regulated learning viewed from models of information processing. In B. Zimmerman & D. Schunk (Eds.), Self-regulated learning and academic achievement (2nd ed; pp. 153-189). Mahwah, NJ: Erlbaum.
Learning to read
- Alvermann, D., Simpson, M., & Fitzgerald, J. (2006). Teaching and learning in reading. In Alexander, P.A. & Winne, P.H. (Eds) Handbook of educational psychology (2nd ed; pp. 427-456). Mahwah, New Jersey: Lawrence Erlbaum Associates.
Learning to write
- Graham, S. (2006). Writing. In Alexander, P.A. & Winne, P.H. (Eds) Handbook of educational psychology (2nd ed; pp. 457-478). Mahwah, New Jersey: Lawrence Erlbaum Associates.
- Wray, D., Medwell, J., Fox, R., & Poulson, L. (2000). The teaching practices of effective teachers of literacy. Educational Review, 52(1), 75-84.
The role of the text
- Barthes, R. (1975). The Pleasure of the Text. New York: Doubleday.
- Barthes, R. (2002). Rhetoric of the image. In Mirzoeff, N. (Ed.) The visual culture reader (2nd ed; pp. 135-138). London: Routledge.
- Stefans, B.K. (2005, November 5). Privileging Language: The Text in Electronic Writing. electronic book review. Retrieved September 23, 2006. Object for Study
Literature and composition instruction
- Blau, S. (2003). The Literature Workshop. New Hampshire: Heinemann. (See Chapter 2: From Teaching to Telling)
- Lunsford, A.A. (2006). Writing, technologies, and the fifth canon. Computers and Composition, 23 (2), 169-177.
- Rosenblatt, L. (1965). Literature as Exploration. New York: The Modern Language Association of America.
Media studies (visual rhetoric, meaning making, digital literacy)
- Aarseth, E. J. (1997). Cybertext: Perspectives on Ergodic Literature. Baltimore, MD: Johns Hopkins University Press.
- Branch, R.M. (2000). A taxonomy of visual literacy. In Pailliotet, A.W. & Mosenthal, P.B. (Eds.). Reconceptualizing literacy in the media age. Stamford, Connecticut: JAI Press.
- Chorney, T. (2005, December 12). Interactive Reading, Early Modern Texts and Hypertext: A Lesson from the Past. Academic Commons. Retrieved September 23, 2006. Object for Study
- de Certeau, M. (1984). The Practice of Everyday Life. Berkeley: University of California Press.
- Gee, J.P. (2003). What video games have to teach us about learning and literacy. New York: Macmillan.
- Gilster, P. (2000). Digital literacy. In Pea, R. (Ed.). The Jossey-Bass reader on technology and learning. San Francisco, CA: Jossey-Bass, Inc.
- Jenkins, H. (1992). Textual Poachers: Television Fans and Participatory Cultures, (pp. 50-85). New York: Routledge.
- Lemke, J.L. (2004). Metamedia literacy: Transforming meanings and media. In Handa, C. (Ed.) Visual rhetoric in a digital world. Boston: Bedford / St. Martin’s.
- Manovich, L. (2001). The Language of New Media. Cambridge, Mass: MIT Press.
- McPherson, T. (2002). Reload: Liveness, mobility and the web. In Mirzoeff, N. (Ed.) The visual culture reader (2nd ed; pp. 458-470). London: Routledge.
- Pailliotet, A.W. (2000). Introduction: Reconceptualizing literacy in the media age. In Pailliotet, A.W. & Mosenthal, P.B. (Eds.). Reconceptualizing literacy in the media age. Stamford, Connecticut: JAI Press.
- Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9 (5), 1-6.
- Davis, P.M. (2003) Effect of the Web on undergraduate citation behavior: Guiding student scholarship in a networked age. portal: Libraries & the Academy, 3 (1), 41-51.
- Grimes, D.J. & Boening, C. H. (2001). Worries with the web: A look at student use of web resources. College & Research Libraries, 62 (1), 11-23.
- Rice, R.A., McCreadie, M., & Chang, S.L. (2001). Accessing and Browsing Information and Communication. Cambridge: MIT Press. Transliteracies Research Report.
- Shenton, A.K. & Dixon, P. (2003). Models of young people’s information seeking. Journal of Librarianship and Information Science, 35 (1), 5-22.
Learning with online texts
- Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 21, 487–508.
- Dumais, S., Cutrell, E., Cadiz, J., Jancke, G., Sarin, R., & Robbins, D. C. (2003). Stuff I’ve seen: A system for personal information retrieval and re-use. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Toronto: Canada.
- Ellis, R.A. (2006) Investigating the quality of student approaches to using technology in experiences of learning through writing. Computers in Education, 46, 371-390.
- Lenhart, A. & Madden, M. (2005). Teen content creators and consumers. Pew Internet & American Life Project Report. Washington D.C.: Pew Internet & American Life.
- Leu, D.J. & Kinzer, C.K. (2000). The convergence of literacy instruction with networked technologies for information and communication. Reading Research Quarterly 35 (1), 108-127.
- Mayer, R.E. (2000). The challenge of multimedia literacy. In Pailliotet, A.W. & Mosenthal, P.B. (Eds.). Reconceptualizing literacy in the media age. Stamford, Connecticut: JAI Press.
- Winne, P. H., Nesbit, J.C., Kumar, V., Hadwin, A.F., Lajoie, S.P., Azevedo, R., & Perry, N.E. (2006). Supporting self-regulated learning with gstudy software: A learning kit project. Technology, Instruction, Cognition, and Learning, 3, 105-115.
Technology and instruction
- Fletcher, J.D. (2004). Technology, the Columbus effect, and the third revolution in learning. In Rabinowitz, M., Blumberg, F.C., & Everson, H.T. The design of instruction and evaluation. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
- Fletcher, J.D. (2003). Evidence for learning from technology assisted instruction. In O’Neil, H.F. & Perez, R.S. (Eds.). Technology applications in education: A learning view. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
- Jonassen, D.H. Peck, K.L., & Wilson, B.G. (1999). Learning with technology: A constructivist perspective. Upper Saddle River, NJ: Prentice Hall.
- Lave, J. & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.
- Clark, R. C., & Mayer, R. E. (2003). e-learning and the science of instruction. San Francisco: Pfeiffer.
- Mayer, R.E. (2003). Theories of learning and their application to technology. In O’Neil, H.F. & Perez, R.S. (Eds.). Technology applications in education: A learning view. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
- Mayer, R.E. (2006). Ten research-based principles of multimedia learning. In O’Neil, H.F. & Perez, R.S. (Eds.) Web-based learning: Theory, research, and practice. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Cognitive approaches to literature
- Barthes, R. (1988). From Work to Text. In R. Barthes, Image-Music-Text (pp. 155-164; S. Heath, Trans.). New York: Hill.
- Blanchot, M. (1982). Reading. In M. Blanchot, The Space of Literature (pp. 191-197; A. Smock, Trans). Lincoln: U of Nebraska Press.
- Bortolussi, M. and Dixon, P. (2003). Psychonarratology: Foundations for the Empirical Study of Literary Response. New York: Cambridge UP.
- Crane, M. and Richardson, A. (1999, June). Literary Studies and Cognitive Science: Toward a New Interdisciplinarity. Mosaic: A Journal for the Comparative Study of Literature, 32, 123-140.
- Dames, N. (2004). Wave-Theories and Affective Physiologies: The Cognitive Strain in Victorian Novel Theories. Victorian Studies, 46(2), 206-216. Object for Study
- Elfenbein, A. (2006). Cognitive Science and the History of Reading. PMLA 121(2), 484-500. Transliteracies Research Report
- Harker, W. John. (1996). Toward a Defensible Psychology of Literary Interpretation. In R. J. Kreuz and M.S. Macnealy (Eds), Empirical Approaches to Literature and Aesthetics. Norwood, NJ: Ablex.
- Gerrig, R. J. (2003). Experiencing Narrative Worlds: On the Psychological Activities of Reading. New Haven: Yale UP.
- Zwaan, R. A. (1993). Aspects of Literary Comprehension: A Cognitive Approach. Amsterdam: Benjamins.
- Chomsky, Noam. (2000) New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press.
- Crocker, M.W., Pickering, M., & Clifton, C. (Eds.). (2000). Architectures and Mechanisms for Language Processing. New York: Cambridge University Press.
- Downing, P., Lima, S. & Noonan, M. (Eds). (1992). The Linguistics of Literacy. Philadelphia: J. Benjamins Publishing Co.
- Ram, A. & Moorman, K. (1999). Understanding Language Understanding: Computational Models of Reading. Cambridge: MIT Press.
- Rodriguez, R. & Alexander, J. (2004). A Proposal for a Hypertext- or Cyber-Linguistics. Forma y Funcion (17), 207-217.
- Rosenblatt, L.M. (2005). Making Meaning with Texts: Selected Essays. Portsmouth: Heinemann.
Other cognitive theories
- Csikszentmihalyi, M. (1991). Flow: The Psychology of Optimal Experience. New York: HarperPerennial. Object for Study
- Herdman, C. M. (1999). Research on Visual Word Recognition: From Verbal Learning to Parallel Distributed Processing. Canadian Journal of Experimental Psychology, 53(4), 269-272. Object for Study.
- Marks, L. (2002). Touch: Sensuous Theory and Multisensory Media. Minneapolis: University of Minnesota Press. Transliteracies Research Report.
- Cognition and Instruction.
- College and Research Libraries.
- Computers and Composition.
- Computers in Education.
- Computers in Human Behavior.
- Educational Psychologist.
- Educational Review.
- Journal of Educational Psychology.
- Journal of Librarianship and Information Science.
- On the Horizon.
- portal:Libraries and the Academy.
- Reading Research Quarterly.
- Science Communication.
- Technology, Instruction, Cognition and Learning.
Related Categories: Cognitive Approaches to Reading
The Coh-Metrix Project is a research project concerned with predicting the readability of texts in order to facilitate textual comprehension. The underlying assumption of the project is that current “readability” tests, based upon word and sentence length, are inadequate to truly predict textual coherence. Coherence in this context is defined as a mental representation that results from an interaction between the reader’s skills and goals, and textual cohesion. The Coh-Metrix project proposes the creation of two tools that will provide a more nuanced prediction of textual cohesion than current indices allow: (1) Coh-Metrix computes the cohesion of a text based on complex cohesion metrics, and (2) CohGIT locates where gaps in textual cohesion occur, facilitating textual improvement. The project relies upon an interdisciplinary approach to reading practices, drawing upon “psychology, linguistics, education, literary theory, cognitive science, mathematics, and artificial intelligence” (McNamara, Louwerse, & Graesser 5). (more…)
Developed by ARP (Applied Research in Patacriticism) in collaboration with NINES (Networked Interface for Nineteenth-century Electronic Scholarship), Collex allows a user to access resources from nine different online scholarly resources. Using semantic web technologies, Collex facilitates collaborative research and access to a variety of sources, while retaining the unique characteristics of each source. Resources are added on an on-going basis as they are evaluated by the NINES editorial team.
“Users of the web-based NINES aggregation can now, through Collex 1.0:
- perform text searches on finding aids for all 45,000 digital objects in the system;
- search full-text content across participating sites (currently Rossetti, Swinburne, and Poetess);
- browse common metadata fields (dates, genres, names, etc.) across all objects in a non-hierarchical, faceted manner;
- constrain their search and browse operations to generate highly-individualized results;
- create personal accounts on the system to save and share their research work;
- publicly tag, privately annotate, and ultimately “collect” digital objects located through Collex or in browsing NINES-affiliated sites;
- browse their own and others’ collections in an integrated sidebar interface;
- and discover new, related objects of interest through the Collex “more like this” feature.”
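The faceted, non-hierarchical browsing listed above can be sketched in a few lines of Python (a toy model, not Collex's actual implementation; the sample records are invented for illustration): each object carries flat metadata facets, and any combination of facet constraints narrows the collection.

```python
# Toy faceted browsing: objects carry flat metadata fields ("facets"),
# and any combination of facet values filters the collection.
objects = [
    {"name": "The Blessed Damozel", "genre": "poetry", "date": "1850", "archive": "Rossetti"},
    {"name": "Atalanta in Calydon", "genre": "poetry", "date": "1865", "archive": "Swinburne"},
    {"name": "Rossetti letter", "genre": "correspondence", "date": "1850", "archive": "Rossetti"},
]

def browse(collection, **facets):
    """Constrain the collection by any combination of facet values."""
    return [o for o in collection if all(o.get(k) == v for k, v in facets.items())]

print([o["name"] for o in browse(objects, genre="poetry")])
print([o["name"] for o in browse(objects, genre="poetry", archive="Rossetti")])
# Facets combine freely; there is no fixed hierarchy of categories.
```

Because no facet is privileged over another, a user can start from genre, date, or archive and reach the same object by many paths, which is what distinguishes faceted browsing from a fixed directory tree.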
* New Reading Interfaces Objects
Online Knowledge Bases
Search and Data Mining Innovations
The Coh-Metrix Project is run by the Institute for Intelligent Systems at the University of Memphis. The project utilizes two computer programs, Coh-Metrix and CohGIT, to assess the difficulty of a given text. Coh-Metrix analyzes a text for its overall “cohesion,” a major factor in textual coherence. CohGIT pinpoints the areas of a text where gaps in cohesion occur. The goal of the project is to provide writers and educators with the ability to match texts with proper target audiences.
“How do you know if something you’ve written is too difficult for your intended audience? How can you tell if your writing makes sense – for the reader you have in mind? Recent advances in the areas of cognitive science, computational linguistics, educational research, and computer science are guiding us toward answers to these questions. These answers are coming to life within a web-based text analysis tool called Coh-Metrix. Using advanced technologies, Coh-Metrix will allow readers, writers, educators, and researchers to instantly gauge the difficulty of written material, based on the target audience. Moreover, CohGIT, our cohesion gap identification tool, will pinpoint where potential problems are hiding within a text.
The potential contributions of Coh-Metrix and Coh-GIT are innumerable. This project will benefit writers, editors, researchers, and policy makers. Our overarching goal is to develop methods and standards for improving academic textbooks, thus improving students’ ability to understand and learn difficult course material” (from Coh-Metrix Project website).
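A toy illustration of one simple kind of cohesion measure (word overlap between adjacent sentences) suggests how a gap-identification tool might work; Coh-Metrix itself uses far richer metrics, and the threshold and sample sentences here are invented:

```python
# Toy cohesion measure: word overlap between adjacent sentences.
def sentence_overlap(a, b):
    """Fraction of words in sentence b that also appear in sentence a."""
    wa = set(a.lower().split())
    wb = set(b.lower().split())
    return len(wa & wb) / len(wb) if wb else 0.0

def cohesion_gaps(sentences, threshold=0.2):
    """Flag adjacent sentence pairs whose overlap falls below a threshold,
    roughly the kind of location a gap-identification tool reports."""
    return [
        i for i in range(len(sentences) - 1)
        if sentence_overlap(sentences[i], sentences[i + 1]) < threshold
    ]

text = [
    "fuzzy logic maps statements onto a continuum of values",
    "these values connect imprecise statements into usable rules",
    "the cat sat quietly on the mat",
]
print(cohesion_gaps(text))  # the unrelated third sentence creates a gap: [1]
```

The interesting design question, which Coh-Metrix addresses with many interacting metrics rather than one, is what counts as a "gap" for a given target audience, since skilled readers bridge low-cohesion transitions that novices cannot.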
Elfenbein, Andrew. “Cognitive Science and the History of Reading.” PMLA 121.2 (2006): 484-500.
Elfenbein uses the strategies and terms of cognitive approaches to the study of reading to analyze the varied critical response to Robert Browning’s Men and Women, published in 1855. He argues for a critical practice that joins the complexity of literary criticism with the scientific attention to microprocesses of reading. His aim is to reveal that microprocesses, although always individually inflected, are locatable in various cultures and time periods. (more…)
Elfenbein’s article, appearing in the March 2006 PMLA, attempts to bring together the disciplines of cognitive psychology and literary criticism in order to understand historical reading processes.
“Cognitive psychologists, like literary critics, have spent many years wrestling with the complexities of the reading process. Yet psychologists and critics ask fundamentally different questions about reading because their fields have contrasting methods of defining, analyzing, investigating, and evaluating it. As a result, the terms of one discipline do not apply directly to the other. Creating an interaction between the two requires constant, often skeptical translation across disciplinary boundaries. This essay will concern itself with developing such a translation, using it to investigate the history of reading audiences, and drawing conclusions about the significance of the scientific study of reading for literary critics.”
Starter Links and Resources:
Elfenbein, Andrew. “Cognitive Science and the History of Reading.” PMLA 121.2 (2006): 484-500. | Cognitive Science, Humanities, and the Arts.
The wiki is an increasingly popular content management system for organizing widely distributed collaborations over the internet. This report will describe the relevant history and evolution of the wiki, and then consider the technology, interface, and design of MediaWiki as an example of what a wiki is today. While there are literally dozens of implementations of the wiki format, MediaWiki is unique as the engine responsible for the operation of Wikipedia – currently the largest wiki—and as the software supported by the non-profit Wikimedia Foundation Inc. (more…)
Computing with Words (Lotfi Zadeh’s Fuzzy Logic and Natural Language/Perception Processing)
Related Categories: Software / Coding Innovations
Fuzzy logic is a system of logic which applies meaning to imprecise concepts. Rather than simply labeling a statement as either “true” or “false,” as traditional binary logic does, a statement is instead mapped along a continuum of values. These mappings are interconnected with other mapped statements, ultimately yielding applicable functions and rules despite the imprecision of the concepts on which the rules were based.
Fuzzy logic was developed initially by the engineer Lotfi Zadeh in the late Sixties as a method to create control systems whose inputs were made up from imprecise data. More recently, Zadeh has conceived of a merger of natural language processing and fuzzy logic called Computing with Words, and also of an associated Computational Theory of Perception as a preliminary way of thinking about how to compute and reason with perceptual information. (more…)
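Fuzzy logic's core move, replacing true/false with a degree of truth in [0, 1], can be sketched with a simple linear membership function (the temperatures and thresholds below are invented for illustration):

```python
# A statement like "the room is warm" gets a degree of truth in [0, 1]
# rather than a binary true/false value.
def warm(temp_c, cold=10.0, hot=30.0):
    """Simple linear membership function: 0 below `cold`, 1 above `hot`,
    interpolated in between."""
    if temp_c <= cold:
        return 0.0
    if temp_c >= hot:
        return 1.0
    return (temp_c - cold) / (hot - cold)

# Fuzzy AND/OR are commonly taken as the min/max of memberships, so
# imprecise statements still combine into applicable rules.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(warm(5))    # 0.0 (definitely not warm)
print(warm(20))   # 0.5 (partially warm)
print(warm(35))   # 1.0 (fully warm)
print(fuzzy_and(warm(20), warm(25)))  # 0.5, combining two readings
```

This is the continuum-of-values mapping the paragraph above describes: a fuzzy control rule fires to the *degree* that its conditions hold, rather than all or nothing.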
Haptic Visuality (Laura U. Marks’s Touch: Sensuous Theory and Multisensory Media)
Related Categories: Cognitive Approaches to Reading
In the last decade, the critical discourse of new media studies has shifted its focus from the virtual to the physical; from an abstract, decontextualized space to the embodied experience of augmented reality. Digital media have come to pervade everyday life and new media criticism has increasingly encouraged culturally specific, materialist and multisensory approaches. Laura Marks’s formulation of haptic visuality offers one such approach. As a way of seeing and knowing which calls upon multiple senses, haptic visuality offers a method of sensory analysis which does not depend on the presence of literal touch, smell, taste or hearing. While many sensory analyses focus on the evocation of and interaction between these literal senses (for example, the study of tactile interfaces, kinesthetics and textures), Marks’s concept of haptic visuality provides an alternative framework for discussing online new media works (too often understood as “simply” visual) in relation to multiple senses, affect and embodiment. (more…)
A concept developed by Laura U. Marks in the books The Skin of the Film and Touch, haptic visuality refers to embodied spectatorship.
“Haptic criticism is a kind of criticism that assumes a tactile relation to one’s object – touching, more than looking. The notion of the haptic is sometimes used in art to refer to a lack of visual depth, so that the eye travels on the surface of an object rather than move into illusionistic depth. I prefer to describe haptic visuality as a kind of seeing that uses the eye like an organ of touch. Pre-Socratic philosophers thought of perception in terms of a contact between the perceived object and the person perceiving. Hence the haptic: looking, we touch the object with our eyes. This image might be a rather painful one, calling up raw, bruised eyeballs scraping against the brute stuff of the world. But I mean it to call up a way of seeing that does not posit a violent distance between the seer and the object, and hence cause pain when the two are brought together. In haptic visuality the contact can be as gentle as a caress.”
Desktop Theater utilizes 2-d visual chat to stage performances of theatrical texts. In addition to the participants who act out the parts of scripted characters, other users may take part in the performance in unpredictable ways.
“Making a compelling theatrical intervention or engaging group activity in a virtual public space is an adventure. Here, live theater has new parameters: gestures, emotions and speech are compressed into 2 dimensions and computer speech. How can we work within these boundaries to hold people’s attention long enough to ask some questions? And why should we even want to?”
“There are many hundreds, perhaps even thousands of palaces (networked graphical spaces) that are used for a variety of purposes: social, promotional, conferencing, fan-based, etc. During the course of our Desktop Theater performances we inhabit several high-trafficked social palaces, as well as performing other activities in our own publicly accessible customized palace: the Genetically Enhanced Palace (GEP).”
Starter Links: Desktop Theater | “Clicking for Godot,” a Salon article by Scott Rosenberg
Desktop Theater was initiated as an alternate form of Internet chat that took place in the popular 2-D avatar-based chat room called The Palace. Started in 1997 by Adriene Jenik and Lisa Brenneis, Desktop Theater sought to extend the metaphor of the chat room as public space by creating a type of “street theater” through the avatars in a public chat room on The Palace’s servers. Several “actors” meet at a preset time in an agreed-upon locale in The Palace, each donning a specific avatar for the performance. They perform a specific dramatic text through a cut-and-paste method that displays the text in a bubble above the avatar’s head. Other chatters enter the scene and often engage the actors, becoming part of the performance, or create their own conversations in the mise-en-scène of The Palace chat room, contextualizing the performance as an online form of street theater. (more…)
Designed by the RED (Research in Experimental Documents) group at Xerox PARC, Tilty Tables was an experiment for the museum installation eXperiments in the Future of Reading (XFR; Maribeth Back, Rich Gold, Anne Balsamo, Mark Chow, Matt Gorbet, Steve Harrison, Dale MacDonald, Scott Minneman, 2001), which was also exhibited at SIGGRAPH 2001 Emerging Technologies. Of the three tables developed, The Reading Table and The Tall Tale Table are addressed here. Each table is a three-by-three-foot white square resting on a metal podium, attached in such a way as to allow it to be tilted in all directions. A high-resolution image is projected onto the table’s surface, which gives the appearance that the table is a glowing screen. When visitors tilt the table, the images on its surface change in response. With The Reading Table, visitors glide across a large “map” of “napkin drawings” on the subject of future reading practices. The Tall Tale Table resembles the conception of an unlimited and cyclical universe of books from Jorge Luis Borges’s story “The Library of Babel”; it presents an infinite plane of nonsense tall tales, constructed using a simple computer program with the input of two real fairy tales from various cultures. (more…)
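The tilt-to-glide interaction described above can be sketched in a few lines. This is a hypothetical reconstruction, not the XFR implementation: it assumes the table’s tilt angles (from some sensor) are mapped to a scroll velocity over an unbounded plane of content, with a small dead zone so the table can rest level. The gain and dead-zone constants are illustrative assumptions.

```python
GAIN = 120.0      # pixels per second per radian of tilt (assumed)
DEAD_ZONE = 0.02  # radians of tilt ignored as sensor noise (assumed)

def pan_step(viewport, tilt_x, tilt_y, dt):
    """Advance the viewport origin over the infinite plane, given the
    table's tilt angles (radians) and elapsed time dt (seconds)."""
    x, y = viewport
    if abs(tilt_x) > DEAD_ZONE:
        x += GAIN * tilt_x * dt
    if abs(tilt_y) > DEAD_ZONE:
        y += GAIN * tilt_y * dt
    return (x, y)

# Tilting 0.1 rad to the right for half a second drifts the view right;
# a tilt inside the dead zone leaves it still.
view = pan_step((0.0, 0.0), 0.1, 0.0, 0.5)
```

Because the plane has no edges, the viewport origin is never clamped, which matches the “infinite plane” conceit of The Tall Tale Table.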
Collaborative tool for creating and exchanging multimedia compositions
“MediaBASE is a software application for creating, sharing and exchanging media objects and compositions within a delimited social context. It places rich media authorship—ordinarily confined to discrete, resource-intensive media projects—in the hands of casual users, who are able to manipulate and exchange media compositions with the speed and informality of text-centric technologies such as weblogs, chat rooms, instant messaging, discussion forums and e-mail. Because it is built around an associatively-indexed database, MediaBASE allows these media “conversations” or “dialogues” to transcend their original contexts and take on relevance for subsequent users of the system. MediaBASE can be used: to augment existing discourse communities, such as a school, course, museum, local forum or design collective; to provide a common forum for linked classes and remote user groups; to create networks around a given topic or body of material, such as an online art collection or digital archive” (from MediaBASE website).
Starter Links: MediaBASE at Institute for Multimedia Literacy
A 1999 art installation by Camille Utterback and Romy Achituv. To interact with the installation, participants stand or move in front of a large projection screen. On the screen they see a mirrored video projection of themselves in black and white, combined with a color animation of falling text. Like rain or snow, the text appears to land on participants’ heads and arms. The text responds to the participants’ motions and can be caught, lifted, and then let fall again. The falling text will “land” on anything darker than a certain threshold, and “fall” whenever that obstacle is removed. If a participant accumulates enough letters along their outstretched arms, or along the silhouette of any dark object, they can sometimes catch an entire word, or even a phrase. The falling letters are not random, but lines of Evan Zimroth’s poem about bodies and language, “Talk, You.” As letters from one line of the poem fall towards the ground they begin to fade, and differently colored letters from the next line replace them from above. “Reading” the poem in the Text Rain installation becomes a physical as well as a cerebral, perhaps even an impossible, endeavor. (more…)
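The landing rule described above — a letter falls until the video pixel beneath it is darker than a threshold, and resumes falling when that obstacle is removed — can be sketched per letter. This is a hedged illustration, not Utterback and Achituv’s code; the threshold, fall speed, and grid representation of the video frame are all assumptions.

```python
THRESHOLD = 0.5   # luminance below this counts as an obstacle (assumed)
FALL_SPEED = 2    # rows the letter drops per update (assumed)

def update_letter(row, col, luminance, height):
    """Return the letter's next row given a grayscale video frame,
    stored as luminance[row][col] values in [0, 1]."""
    next_row = min(row + FALL_SPEED, height - 1)
    # Scan the rows the letter would pass through; stop on a dark pixel.
    for r in range(row + 1, next_row + 1):
        if luminance[r][col] < THRESHOLD:
            return r - 1   # rest just above the dark (occupied) pixel
    return next_row

# A bright column with one dark pixel at row 3 (say, an outstretched
# arm): the letter falls to row 2 and rests there on later updates.
frame = [[1.0], [1.0], [1.0], [0.2], [1.0]]
pos = update_letter(0, 0, frame, len(frame))   # -> 2
```

When the dark pixel brightens again on a later frame (the arm moves away), the same rule lets the letter continue falling, which is exactly the “fall whenever that obstacle is removed” behavior.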
BioMorphic Typography is Diane Gromala’s term for a family of fonts that continually morph in real-time response to a user’s changing physical states, as measured by a biofeedback device. Part of a larger initiative, Design for the Senses, the project aims to develop approaches to experiential design that focus on the senses and “the history of the body.” The first in the type-style family of this dynamic text is “Excretia,” meant for display on computer screens and wearable liquid crystal displays, upon which the user’s/writer’s autonomic states are graphically indicated—for example, the characters “throb” as one’s heart beats. (more…)
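The “throb” behavior can be sketched as a mapping from a biofeedback signal to a typographic parameter. This is purely illustrative and not Gromala’s implementation: here a heart rate drives a sinusoidal oscillation of font weight, with the base weight and amplitude chosen as assumptions.

```python
import math

def throb_weight(t, bpm, base=400, amplitude=200):
    """Font weight at time t (seconds), oscillating once per heartbeat
    at `bpm` beats per minute. Ranges are illustrative assumptions."""
    phase = 2 * math.pi * (bpm / 60.0) * t   # beats elapsed by time t
    return base + amplitude * (0.5 + 0.5 * math.sin(phase))

# At 72 bpm the rendered weight swells and relaxes once per beat,
# staying within [400, 600]; a renderer would re-set the glyphs
# with this weight every frame.
w = throb_weight(0.25, 72)
```

The same structure generalizes to other autonomic signals — galvanic skin response driving letter spacing, breathing rate driving slant — which is the kind of sensory mapping the project describes.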
History of the GUI
“Today, almost everybody in the developed world interacts with personal computers in some form or another. We use them at home and at work, for entertainment, information, and as tools to leverage our knowledge and intelligence. It is pretty much assumed whenever anyone sits down to use a personal computer that it will operate with a graphical user interface. We expect to interact with it primarily using a mouse, launch programs by clicking on icons, and manipulate various windows on the screen using graphical controls. But this was not always the case. Why did computers come to adopt the GUI as their primary mode of interaction, and how did the GUI evolve to be the way it is today?” (by Jeremy Reimer from ArsTechnica)