Sunday, January 15, 2006

The Field of Literacy

A concern for literacy has been widely heralded. Statistics proclaim dire warnings: “Some 23 million American adults are functionally illiterate by the simplest tests of everyday reading, writing, and comprehension. About 13% of all 17-year-olds in the US can be considered functionally illiterate. Functional illiteracy among minority youth may run as high as 40%” (National Commission on Excellence in Education, 1983, p. 2). Seeing such statistics and headlines, one assumes that our methods of instruction are failing, and that teachers today are doing a worse job educating students than teachers in the past. In truth, there was no “golden age” of education. “In 1951, the Los Angeles Public Schools’ associate superintendent tested the basic skills and fundamental knowledge of the district’s children. According to Time Magazine (“Failure in Los Angeles,” 1951), when the teachers and parents got to look at the results, they ‘yelped with pained surprise.’ Among the 11th graders, 3 percent could not tell time, 4 percent did not know what letter preceded ‘M’ in the alphabet, and 14 percent could not calculate 50 percent of 36” (Sedlak, Wheeler, Pullin, & Cusick, 1986, p. 18).
Yet such concern for literacy is of relatively recent origin. In ancient times, castes of scribes in Egypt, Israel, and India were highly respected, since only a few people knew how to read and write. St. Augustine in his Confessions depicts people reading aloud; this was a common practice, so that those who were illiterate could receive the information by “listening in” or “eavesdropping” if they wanted to. It was not until the late 19th century that the ability to read silently and understand an unfamiliar text became the goal of mass education. Prior to this time, literate individuals were those who could declaim familiar texts aloud. Getting new meaning from texts (in fact, getting any meaning from texts) was not the goal of reading instruction even throughout much of the 19th century (Cohen & Neufeld, 1981, p. 81). While universities began offering reading improvement courses as early as the mid-19th century (the most notable being the University of Michigan in 1852, Iowa State in 1870, and Harvard, Yale, and Princeton in 1902), enrollment in such courses did not expand until the 1970s, when colleges began granting academic credit for them. Nevertheless, despite the granting of credit, as late as 1982 the effectiveness of college-level reading programs had not been systematically evaluated (Anderson, 1982).
Even though the field of literacy is a relatively recent development, the battle lines have already been drawn. Polemics abound: phonics versus whole language, decoding versus comprehension, reading to comprehend versus reading to do, schema theory versus Vygotsky’s sociocultural theory, approaches to reading based on behaviorism versus constructivism, pedagogical approaches based on “core knowledge” (affiliated with schema theory, as described by Hirsch (1985)) versus progressivism (which is affiliated with constructivism). Yet “as psycholinguist Frank Smith wisely notes, ‘In the two-thousand-year recorded history of reading instruction, as far as I have been able to discover, no one has devised a method of teaching reading that has not proved a success with some children’” (Meier, 2002, p. 27).
The battle between schema theory and socioculturalist theory is an intriguing development. Schema theory assumes that the human mind mediates information taken in through the senses, and integrates it into a structure made up of other information bits in order to contextualize and comprehend it. Socioculturalism says that meaning-making is essentially dialogic in nature, and highlights the role of mediational tools such as language, texts, and conversations.
In the past, schema theory was appropriated both by Progressivists such as Dewey and by conservative educationists such as Hirsch. That the left and right once agreed on a topic might seem startling, but the polarization between right and left in literacy did not always exist. Instead, the polarization within the field of literacy seems to mirror the polarization of the culture wars and retribalization within our society. But while the polarization continues in society at large, within the field of literacy a truce has been declared, as dialogue has recently emerged between socioculturalists (who oppose schema theory) and New Literacies scholars (who seek to expand it).

History of Schemas

While the concept of schema theory can be traced to Plato and Aristotle, Kant is generally considered the first to speak of schemas as organizing structures that mediate how we see and interpret the world. For Bartlett, schemas highlighted the reciprocity between culture and memory. Dewey, a contemporary of Bartlett, later developed theories of transactionalism, the relationship between individual knowledge and cultural practice. For Piaget (1952), development was an ongoing dialectic in which the individual assimilates new experience consistent with existing schemas or changes the schemas to fit his or her experience (McVee, Dunsmore, & Gavelek, 2005). Schemas were originally meant to help researchers understand what processes were happening within the individual, but they were transactionally linked to culturally organized experience. That transactional connection between the individual and cultural experience was lost as the schema concept was taken up by cognitive science and psychological theory and applied to the reading process. As the link between individual and culture embedded in the original concept of transactional schemas broke, I suspect, the field of literacy became polarized.
During the 1970s, schema became defined through cognitive psychology, which began to explore knowledge construction through computer metaphors, implying a dualism whereby the individual-as-knower stands apart from the world-as-known. This is far different from the transactionalist perspective, which mediates between the two. Between 1978 and 1988, research in schema theory was prevalent in the journals (McVee, Dunsmore, & Gavelek, 2005). As a remedy to the cognitivist approach, the socioculturalist perspective (as propounded by Florio-Ruane & McVee, 2000) emerged: the view that thought has its genesis in social interaction, that meaning-making is essentially dialogic in nature, and that language and texts play a central mediating role.
Debating the socioculturalists, New Literacies scholars point to the need to count diverse digital contexts, such as hypertexts, as forms of literacy, thereby extending the schema construct. However, discussions of schema theory necessarily raise debates over cultural knowledge, since even relatively subtle differences in people’s schemas can produce dramatic differences in their interpretations. Unfortunately, research has not focused on how schemas are constructed, but has only tested pre-made schemas.
While I think that it is a mistake to constantly break with continuity and drastically shift paradigms (there has yet to be a heliocentric shift in the universe of literacy), I do not believe in creating syntheses that artificially unify the field. Rather, I believe that the field of literacy might be enriched through dialogue between the two parties, rather than through polemical discrediting of the opposing faction or the hegemonic dominance of one party to the point of excluding the opposing viewpoint. As Pressley suggests in reference to teaching reading by phonics or whole language, it should not be either/or but both/and; I believe this statement should be extended to the field of literacy as a whole.
Balanced Literacy Polemics

In “Balanced Literacy in an Urban School District,” Frey, Lee, Tollefson, Pass, and Massengill (2005) entered their study with the philosophy of “balanced literacy,” an approach that balances phonics and whole language, reading and writing, and teacher-directed and student-directed activities: a sort of via media of education. This theory was supposed to bridge the educational religio-political divide between supporters of phonics and supporters of whole language, an issue as divisive within literacy as the wars between Catholics and Protestants prior to the Elizabethan era. These two views of literacy were set out in the mid-1960s by the US Office of Education in publications regarding comparative research on reading instruction models for first graders, and later solidified into a protracted battle that has continued to the present day.
To see if “balanced literacy” worked, the researchers collected data from 156 students in grades K-5 in 32 elementary schools in a high-poverty urban metropolitan area. These students had 72 teachers, whose classrooms were chosen for observation. The “balanced literacy” program had been implemented in all schools by district mandate; students were therefore subjected to a 90- to 120-minute “literacy” block every morning. No one asked whether a “literacy” block was the most effective use of time, whether the time might be better spent in smaller literacy intervals throughout the day, or whether students as young as five years old can even sit still for that long. The researchers had neither a control group against which to compare the “balanced literacy” approach nor a before-and-after study of the same schools, making their “research” more an exercise in polemics.
To indicate “literacy,” the researchers used classroom observations of students reading or being read to, classroom physical environment checklists of literacy components, physical building environment checklists of literacy components, teacher surveys, and student group interviews. The fact that the study was mixed-method, to help eliminate bias, does not help much, considering that the researchers went into the study with a model they were trying to “prove.”
The researchers observed each classroom twice and filled out a survey using the partial-interval recording format, summing the data for each one-minute observational interval. In addition, they filled out a classroom physical environment checklist, indicating the presence of literacy centers, classroom libraries, reading nooks, examples of student work, and literacy posters or displays.
A third indicant was a school building environment checklist, searching for “physical features in the office, the hallways, and the library that reflected the school’s literacy activities” (Frey et al., 2005, p. 275).
The fourth indicant was self-reporting. Teachers filled out surveys on how often, or for how long, they involved students in literacy activities each day. Self-reporting is notoriously unreliable; factor in that these teachers were under some duress, given that “balanced literacy” had just been mandated, and it becomes unlikely that they would admit to not engaging students in a literacy activity for the requisite number of minutes, even if they were not.
The last indicant, student group interviews, might have been the strongest in the study, considering that students had the least vested interest in the program and were interviewed in a group, which is a more reliable method than self-reporting. Yet questions like “Do you have time during the day when you can learn about reading and writing?” and “So, what do you guys like to read?” (Frey et al., 2005, p. 276) were too vague to be of much help. The fact that students indicated that they enjoyed going to the public library and that their parents read to them at home suggests that the interviews went off track. Triangulation with five weak indicants does not help much when the indicants themselves make little sense as stand-ins for “literacy.”
In the end, while 98% of classrooms had classroom libraries, 97% had “literacy displays,” and 97% had a large group area for read-alouds and other activities, just over half the students said that they could find books at school, while almost all of the students agreed that there were adults at home who helped them read and write and find books at home or in the community (Frey et al., 2005, p. 278). This suggests that the posters were not an effective indicant of literacy. Although the researchers did not emphasize this gap, it seems striking that, based on their own reporting, the students seemed to learn more about literacy at home than at school.
I suspect that because the “balanced literacy” method is really a via media, phonics proponents will continue to emphasize phonics over whole language, and whole-language proponents will continue to emphasize whole language over phonics, under the guise of promoting “balanced literacy.” Based on this study, however, one cannot know whether “balanced literacy” is a “better” approach than any other method, or whether, in implementation, it even exists.
What is known is that literacy is important not just for enjoyment but for everyday life. Reading signs, maps, bus schedules, directions for assembling a product, or directions for administering a medication: these are things literate individuals do every day without thinking, and take for granted. Yet millions of illiterate individuals are locked out of this world because they are unable to decode or comprehend texts.
Document Reading

Document reading plays an important part in literacy, especially for adults, since documents are prevalent in modern society and document reading is the most common reading task on the job. Fifty to eighty percent of occupational reading tasks involve reading documents in order to complete a specific task (Mosenthal & Kirsch, 1998, p. 639). Documents are complex in that they have many different organizational patterns, such as tables, charts, schedules, maps, graphs, and forms, and have no universal grammar. Many literacy assessments include a document scale as well as a prose scale; these include the NAEP Young Adult Literacy Survey, the National Adult Literacy Survey (NALS), the International Education Assessment (Reading), the International Adult Literacy Survey, ETS’s Tests of Applied Literacy Skills, and the Iowa Tests of Basic Skills.
Since document readability is difficult to assess, Mosenthal and Kirsch suggested a readability formula to help evaluate such texts. The formula is based on two main components: organizational pattern and density. The researchers posited that all types of organization (tables, charts, schedules, maps, graphs, and forms) stem from four types of lists: the simple list (of which there are two types, simple object lists and simple modifying lists), the combined list, the intersected list, and the nested list. They then assigned each of these list types a score: a simple list has a document structure score of 1, a combined list a score of 2, an intersected list a score of 3, and a nested list a score of 4. By working backwards with tables, charts, and schedules and figuring out which list type the form is based on, a document score can be assigned.
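The scoring step above lends itself to a small sketch. Everything below is my own illustration (the function name, dictionary, and example are hypothetical, not Mosenthal and Kirsch's implementation), and it covers only the structure component; their full formula also incorporates density, which this sketch omits.

```python
# Hypothetical sketch of the document-structure component of the
# readability score: each document reduces to one of four list types,
# and each list type carries a fixed score.

LIST_TYPE_SCORES = {
    "simple": 1,       # simple object lists or simple modifying lists
    "combined": 2,
    "intersected": 3,
    "nested": 4,
}

def document_structure_score(list_type: str) -> int:
    """Return the structure score for a document's underlying list type."""
    if list_type not in LIST_TYPE_SCORES:
        raise ValueError(f"unknown list type: {list_type!r}")
    return LIST_TYPE_SCORES[list_type]

# A bus schedule that reduces to an intersected list would score 3:
print(document_structure_score("intersected"))  # 3
```

The dictionary makes the "work backwards from form to list type" step explicit: once a document is classified, the score follows mechanically.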
Individuals who can read and comprehend the most complex documents tend to receive the highest scores on assessments of document literacy. While high scores on such assessments may be seen as indicators of workplace success, literacy scores unfortunately differ between ethnic groups. A major issue in literacy research is how to bridge the gap between ethnic groups, or whether it can even be done.

Literacy Practices Among Different Ethnic Groups

Holt and Smith (2005) investigated the differences between the literacy scores of different socioeconomic and racial groups and the reading habits of those groups. Entering their study, the researchers presumed the existence of two types of minorities: “voluntary” minorities (predominantly Asian-Americans) who immigrated to the US, and “involuntary” minorities (such as African-, Hispanic-, and Native-Americans). The researchers posited that “involuntary” minorities are less successful in educational endeavors and literacy scores because they do not want to identify with the “oppressor.” This explanation seemed a little too convenient to me, a bit like a “just-so” story explaining why Asian-Americans get higher scores than European-Americans even though they are minorities. By grouping all “Asian-Americans” together, the study does not take into consideration the diversity of circumstances under which Asian-Americans may have immigrated. For instance, Vietnamese, Cambodians, and Hmong may have immigrated under considerable duress, compared to Indians, Pakistanis, Bangladeshis, or Chinese who may have come for greater educational or career opportunities. The situations of Vietnamese, Cambodians, and Hmong may be more “involuntary” than voluntary, depending on personal circumstances and whether their migration was a result of the Vietnam War. In addition, the researchers assumed that voters were more knowledgeable than non-voters, although the study did not consider whether lack of transportation might have been an issue, or whether non-voters were as informed as voters about the issues at hand but knowingly made a political statement by their abstention (i.e., “I don’t like either of the two candidates”). Gender was not accounted for, although it may have made a significant difference in the findings.
The question of the study was whether there were differences in literacy levels between different socioeconomic and racial groups, and/or differences in those groups’ reading habits. While the hypothesis was unstated, it was implied that African-Americans as a group would score lower on reading tests than any other ethnic group because they did not read as widely or as frequently as European-Americans (the majority social group), presumably because African-Americans did not have equal access to good schools and libraries.
The group studied was large, insofar as the researchers used data from the 1992 National Adult Literacy Survey (NALS), the largest adult literacy survey conducted in the US, numbering 24,944 adult (16 years and older) respondents with complete data, and was representative of the US population. The NALS study asked respondents about their reading of personal documents (letters, memos, magazines, and journals), quantitative documents (bills, invoices, spreadsheets, budget tables), documents (manuals, reference books, directions, and instructions), books, TV viewing for information (hours per day), and reliance on friends or relatives for information. Additional indicants were average annual family income, average annual family income of all the families surveyed in the neighborhood, parents’ educational attainment, home language, and whether the individual had a disability or illness. Essentially, the researchers took the data collected from the respondents, balanced them according to socioeconomic factors, and then divided the data into two groups, one with one-third of respondents and the other with two-thirds. They then used the second group as a check on the first, to see whether the findings held, using hierarchical linear modeling, eigenvalues, promax rotation, and varimax rotation.
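The one-third/two-thirds partition the researchers describe resembles a split-half validation. A minimal sketch under simplifying assumptions: here the split is purely random, whereas the study also balanced the groups by socioeconomic factors, which this toy version omits.

```python
import random

def split_sample(respondents, seed=0):
    """Randomly partition respondents into a one-third exploratory
    group and a two-thirds confirmatory group (check group)."""
    rng = random.Random(seed)
    shuffled = list(respondents)
    rng.shuffle(shuffled)
    cut = len(shuffled) // 3
    return shuffled[:cut], shuffled[cut:]

# With the NALS sample size of 24,944 complete-data respondents:
exploratory, confirmatory = split_sample(range(24944))
print(len(exploratory), len(confirmatory))  # 8314 16630
```

Findings derived from the exploratory third can then be re-tested on the confirmatory two-thirds, guarding against results that are artifacts of one subsample.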
The study concluded that when family income (FI) and income of neighborhood (MIN) were controlled, African-Americans reported reading more sections of the newspaper than European-Americans. For both those above and below the poverty level, African-American adults received more information from TV, books, and magazines than did European-Americans. In addition, African-Americans read more types of books than did European-Americans.
However, African-Americans engaged in lower levels of work document reading than European-Americans, possibly attributable to their holding lower-level jobs.
On standardized tests, African-Americans’ literacy proficiency ended up being lower than European-Americans’, which the researchers attributed to a linguistic bias in the language used on standardized tests. I did not find this argument especially compelling, considering that the African-Americans studied were apparently widely read and were likely to have large working vocabularies acquired from reading. I, however, began to think that perhaps African-Americans scored lower on standardized literacy tests than European-Americans due to their lower levels of document reading. This question would be very interesting to explore: line up groups of people matched for socioeconomic status, race, gender, and time spent reading various personal documents, quantitative documents, and books; compare only the number of work documents the people read; and see whether the scores differ. Overall, this study was compelling insofar as the group studied was so large, but I disagreed with the researchers’ interpretation. Following Bourdieu, I think that perhaps document literacy scores reflect the skills of the hegemonic group, which writes the test in its own interest.
Since document reading tests are distributed not only within the U.S. but globally, there is likely to be a cultural bias embedded within such tests. Nevertheless, since most of the world’s population does not live within the US, it is important to study other countries as well. While no one has yet declared the fall of the nation-state, the world is “flattening out” with the rise of technology, the internet, and outsourcing, as Thomas Friedman wrote in The World Is Flat (2005). With new forms of cooperation and collaboration increasing daily, we cannot afford to have large populations of illiterate people (in whatever way we decide to define the term “literacy”). As the world becomes increasingly interconnected, perhaps we are seeing the guiding force of the “invisible hand” leading us to do what is right, as the economist-philosopher Adam Smith described so long ago. Who knows? In any case, another nation’s concern for literacy must be our own as well.

Moroccan Literacy

In Lavy and Spratt’s (1997) study of patterns of incidence and change in Moroccan literacy, the topics under investigation were adult literacy skills and levels and how they have changed over the years. It is necessary to note, however, that the term “adult” in Moroccan statistical reports, as in Somalia’s, includes people aged 10-15 years. The authors’ goal was to “take the temperature” of Morocco’s literacy rate, and to see whether the number was going up (since a growing number of people worldwide are believed to be illiterate) or down. In addition, the authors examined the geographic distribution of literacy in Morocco to see if there were differences between literacy rates in urban versus rural areas.
The literacy survey was designed with the help of the World Bank and was conducted in connection with the 1990-91 Living Standards Measurement Survey (LSMS). Two-thirds of those participating in the LSMS were given the literacy module. When a target household could not or did not participate, a similar replacement household was selected. The final sample of those completing the literacy module comprised 2,240 households and 8,050 people, of whom 47% were male and 49% lived in rural areas.
Since this survey ran concurrently with a government adult literacy campaign that reached 600,000 people during 1990-91, the results might be questionable: literacy instruction was widely distributed during this period, and some skill levels might have increased superficially and temporarily (if the skills were not truly absorbed, say in the case of an individual memorizing answers for a test). Scores may therefore have been inflated above what they otherwise would have been.
The Morocco Literacy Survey directly tested Arabic reading, including single letter and word decoding, comprehension of connected text, writing letters, words, and complete sentences, document reading of an identity card, letter envelope, electricity bill, medicine box, and newspaper; French reading and writing; and mental and written arithmetic. Vocabulary words and text tasks were selected from primary school books in Morocco.
In addition to direct testing, information was also collected through a traditional self-report survey, completed by all people in a household between the ages of 9 and 69. The self-report section was given to participants first and was used to screen whether the higher or lower form of the direct literacy test should be given. The average time for a single administration [presumably of the self-report, although it was unclear] was 20 minutes.
Psychometric analyses of instrument reliability were conducted on pilot tests and on results from the first month of survey administration. Both direct tests and self-reports were graded on competence scales from 0 to 3. Level 0 indicated that no competence was demonstrated: the individual was unable to decode or comprehend simple written words or to correctly write a simple dictated word. Level 1 indicated rudimentary ability: the individual could decode and comprehend single words and/or correctly write single dictated words, but could not comprehend or write sentences or connected text. Level 2 indicated minimal competence: the ability to comprehend simple texts (although with some errors) and to write single dictated words without difficulty, completing sentences with some errors. Level 3 indicated complete fundamental competence: the ability to comprehend a variety of texts, including a newspaper article, without error, and to write complete sentences without error.
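The four-level scale reads like a decision rule, and can be sketched as follows. The boolean parameter names are my own shorthand for the abilities the survey tested, not the instrument's actual coding.

```python
def competence_level(decodes_single_words, writes_single_words,
                     comprehends_simple_texts, writes_sentences_some_errors,
                     comprehends_varied_texts_no_errors,
                     writes_sentences_no_errors):
    """Map demonstrated abilities onto the survey's 0-3 competence scale,
    checking the highest level first and falling through to lower ones."""
    if comprehends_varied_texts_no_errors and writes_sentences_no_errors:
        return 3  # complete fundamental competence
    if comprehends_simple_texts and writes_sentences_some_errors:
        return 2  # minimal competence
    if decodes_single_words or writes_single_words:
        return 1  # rudimentary ability
    return 0      # no demonstrated competence

# Someone who decodes single words but cannot handle sentences scores 1:
print(competence_level(True, True, False, False, False, False))  # 1
```

Ordering the checks from highest to lowest level mirrors how such rubrics are typically scored: a respondent receives the highest level whose criteria they fully meet.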
Direct assessment of participants was cross-checked against participants’ self-evaluations. The researchers found that while underestimation of literacy skills occurred in just over 1% of cases, overestimation was a more serious problem: roughly 5% of the population claimed to be literate despite demonstrating no literacy skills. Overall, 13.5% of the sample either overestimated or underestimated their literacy skills. Self-assessment differed by gender: while just 39% of males reported being illiterate, 44% actually were according to the direct test. On the other hand, 68.2% of females reported being illiterate, while 70% tested as such; in other words, females were more accurate in their self-reporting than males. People who had six years of schooling in a Qur’anic (traditional Islamic) school also tended to overestimate their literacy level based on their years of schooling, and those who studied in secular schools had a lower incidence of illiteracy than those who studied for the same length of time at a Qur’anic school. Self-report data from children aged 9-14 likewise seriously underestimated illiteracy: self-reports yielded an illiteracy rate more than 12 percentage points lower than the rate estimated by direct assessment. This discrepancy was even wider for subgroups such as boys of that age (16%) or those living in rural areas (15%). The researchers stated that the rural-urban divide seemed to have an even greater impact on literacy than gender; indeed, the gender gap itself was more pronounced in rural areas. The difference between direct-assessment illiteracy rates in urban (37.5%) and rural areas (76.9%) was dramatic.
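The gender comparison above is simple arithmetic on the reported percentages; a toy check (the values are those reported in the study, the variable names are mine):

```python
# Self-reported vs. directly tested illiteracy rates (percent)
# for Moroccan males and females, per Lavy and Spratt (1997).
male_self_report, male_direct = 39.0, 44.0
female_self_report, female_direct = 68.2, 70.0

# The gap between what people reported and what the direct test found:
male_gap = male_direct - male_self_report
female_gap = female_direct - female_self_report
print(round(male_gap, 1), round(female_gap, 1))  # 5.0 1.8
```

The smaller female gap (1.8 points vs. 5.0) is what supports the claim that females self-reported more accurately than males.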
Since in 71% of households there was a perfect match between the skills of the head of household and those of the spouse, the researchers proposed that skills be measured at the level of the household rather than the individual where the group is the principal functioning unit (e.g., in tribal societies). In other words, based on this data, one could estimate adult illiteracy accurately simply by identifying illiterate households.
The authors also conducted a multivariate regression analysis to identify the relative importance of the correlates of literacy. The indicants (which they referred to as “potential explanatory variables”) included age, gender, residence milieu, parents’ literacy level, quintile of household per capita expenditures, years and type of schooling, and selected interaction terms. Age was strongly and negatively associated with literacy, as was female gender. The urban milieu had a strong positive influence on literacy level, as did literacy levels of mother and father. However, a mother’s literacy level was more highly correlated with a child’s literacy level than the father’s literacy level was.
Literacy levels are improving, having doubled over the past three decades, creating a dramatic difference between the skill levels of older (55 years of age and up) and younger (15-24 years old) generations, and even between adjacent age cohorts. Yet the gender gap in skill levels also increased over time: although both groups improved, the rate of improvement among men was greater than that among women. The urban population improved at roughly three times the rate of the rural population. Even taking rural-to-urban migration into account, the lack of development in rural areas is startling. Literacy levels obtained by direct assessment correlated highly with per capita household income, yet the richest rural quintile had a higher illiteracy rate (66.7%) than the poorest urban quintile (54.3%).
Surprisingly, within the labor force, the unemployed were endowed with more skills, with an illiteracy rate under 14%, while the rate among the employed was almost five times higher (62%). Similarly, mean years of schooling among the unemployed reached 11.3, much higher than among the employed (3.7). In other words, those who were literate were much more likely to be unemployed, making literacy and education a ticket to nowhere for most people. The researchers suggested that jobs need to be more strongly connected to educational level before people will want to pursue an education.
While quantitative analysis can help delineate long-term trends, and its use implies that one can find general truths in numbers, the method has its limitations: it cannot detail the impact that the unemployment of a literate person has on a single family. This would make an interesting follow-up study.
Echoing the Moroccan researchers’ suggestion, the researchers in the Women and Literacy in India study used a survey method that includes one representative per household. In this case, because the researchers were interested in studying a literacy campaign’s effect on women, women were selected as the representative heads of household, even though the culture might be more patriarchal.


Women and Literacy in India

Dighe (1997) investigated women’s literacy in the urban setting of a resettlement colony in Delhi, more specifically whether women were empowered by the Total Literacy Campaign (TLC) program. It was unclear whether the author was involved in the campaign, or how she found out about the program in order to study it.
Dighe wanted to explore the effects of literacy groups on women’s reading ability, hypothesizing that female literacy learners probably took longer to become literate, as had been found in an African study. She also hypothesized that women were more prone to relapse into illiteracy than men, due to lack of opportunities to practice their newly learned skills.
Although there are several literacy programs in India (the National Adult Education Program, Mahila Samkhya, and the Total Literacy Campaign, among others), the researcher selected the TLC, which operates in 250 districts, because most of its attendees are women. To test women’s reading ability or “achievement,” she selected the standard National Literacy Mission literacy test, designed by the Directorate of Adult Education in 1992. The researchers chose this test as a standard measure in order to compare their study with others’ findings more readily, even though they criticized some of the test questions that did not directly apply to the respondents’ daily lives, such as “Do you know how to wire money?” when poor women infrequently do this. Following the test’s convention, respondents scoring 70% or above were deemed “literate.”
The researchers selected a sample of women by dividing Ambedkernagar, a neighborhood of South Delhi, India, which has about 200,000 residents living in a 159-block area, into four geographically representative zones. They then randomly selected five blocks from each zone. For each selected block, a list of women who had completed the three TLC primers was obtained, and five respondents were randomly selected from it, making the sample total 100 respondents. Although the sample was relatively small, it appears reasonable, insofar as its selection appears to have been unbiased, although a control group against which to compare the TLC participants might have strengthened the design.
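The two-stage design described above (randomly chosen blocks within zones, then randomly chosen respondents within blocks) can be sketched as follows. The zone-to-block assignment and respondent lists here are invented placeholders, since the study does not publish them; only the counts (four zones, five blocks per zone, five respondents per block) come from the source.

```python
import random

random.seed(1)  # fixed seed for a reproducible illustration

# Hypothetical frame: 4 geographic zones, each holding a share of the
# 159 blocks (the real zone-to-block assignment is not given in the study).
zones = {f"zone_{z}": [f"block_{z}_{b}" for b in range(40)] for z in range(1, 5)}

sample = []
for zone, blocks in zones.items():
    # Stage 1: randomly select five blocks from each zone.
    for block in random.sample(blocks, 5):
        # Stage 2: from the list of women in this block who completed all
        # three TLC primers (invented names here), select five at random.
        completers = [f"{block}_respondent_{i}" for i in range(20)]
        sample.extend(random.sample(completers, 5))

# 4 zones x 5 blocks x 5 respondents = 100 respondents in total.
print(len(sample))  # 100
```

Sampling without replacement at both stages is what keeps the selection unbiased within each zone, which is the property the study's design relies on.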
The researchers then interviewed the 100 respondents about each member of their household, obtaining information about sex, age, and educational and occupational background. Among the households interviewed, they found an average of five members in nuclear-family households and six in joint households. By this method, 28.8% of the family members overall were illiterate; nearly 46% of females (aged 7 and up) were illiterate, as compared to 14% of males.
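Rates like the 46% and 14% figures above are simple tallies over a household roster restricted to members aged 7 and up. A minimal sketch, using an invented roster rather than the study's data:

```python
# Each household member is recorded as (sex, age, is_literate).
# These six entries are invented for illustration only.
roster = [
    ("F", 34, False), ("M", 38, True), ("F", 12, True),
    ("M", 9, True), ("F", 62, False), ("M", 7, False),
]

def illiteracy_rate(roster, sex, min_age=7):
    """Share of members of the given sex, aged min_age or older, who are illiterate."""
    group = [m for m in roster if m[0] == sex and m[1] >= min_age]
    return sum(1 for m in group if not m[2]) / len(group)

print(round(illiteracy_rate(roster, "F") * 100))  # 67 (2 of 3 females illiterate)
print(round(illiteracy_rate(roster, "M") * 100))  # 33 (1 of 3 males illiterate)
```

The age cutoff matters: as the Conclusion notes, studies that draw the "adult" line at different ages produce rates that cannot be compared directly.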
Researchers looked around the respondents' flats and evaluated their socioeconomic background by examining the household items they had acquired, ranking each respondent as belonging to a high, medium, or low socioeconomic background. Yet the authors did not name which items were used for this evaluation, or how the items were evaluated in a standardized way, considering that there were apparently multiple researchers in the field.
As a next step, the researchers administered an open-ended questionnaire in order to gain more qualitative insight. Apparently, the questionnaire was read aloud to respondents (although the researchers did not make this explicit). The interview covered topics such as "place of origin and reason for migration," "age and marital status," "socioeconomic status," "previous level of education," "awareness of the literacy campaign," "family support for literacy," "interest in literacy," "opinion about literacy volunteers and teaching/learning methods used," "satisfaction of attending literacy classes," and "frequency of watching/listening to TV and radio."
Following the completion of the questionnaire, respondents sat down to take the National Literacy Mission exam. By this point, some respondents were apparently eager simply to finish the test, which might have skewed some of the results. Under these circumstances, only 16% of all respondents (each of whom had completed all three TLC primers in the program) were able to reach the NLM norm for "literate." The authors attributed this to an atrophy of literacy skills resulting from disuse (and a lack of post-literacy reading materials) rather than to the intrusive nature and length of the study.
In the study, several methodological errors were made. For instance, the questionnaire seems to have been far too long considering that it was only one part of a long process. As a general rule, the longer or more complicated the survey, the higher the non-completion rate. While it is probably easier for respondents to break off a phone interview than to push a researcher out the door, I imagine that rushing through the exam may have been one way for respondents to get the researchers out of their homes sooner. I am also curious how conspicuous the researchers' "search" of the respondents' apartments was, because it may have added to participants' discomfort and contributed to their rushing through the exam. This step might better have been eliminated, especially considering that the researchers do not appear to have had a consistent scale for evaluating the items.
Because the author was concerned with empowering women, writing that "Literacy has to be perceived as a tool for empowering women in the wider struggle against inequality and injustice in society" (Dighe, 1995, p. 7), she confused the role of the researcher with that of the policy-maker. She seemed to imply a Freirean paradigm in which illiterates are empowered through discussions of politics and the world, conducted in literacy groups where volunteers (neighbors of the illiterate) and learners converse on a level playing field about things that matter to them. Freire suggested creating literacy groups that promote both dialogue and critical literacy, empowering the oppressed to read the word in order to read the world (namely, what is going on behind the scenes). When the discussions between literacy learners and volunteers proved less interesting, and less connected to the world (rather than just the text), than Dighe thought they "should have been," thereby failing Freire's standard of leading illiterate adults toward a higher consciousness, the volunteers were faulted. This suggests that Dighe based her study on a sociocultural theory that avoids blaming the illiterate for their illiteracy, yet did not extend the same benefit of the doubt to volunteers who were equally impoverished or "oppressed" and had little training in literacy instruction.
Freire's model (namely, creating literacy groups that promote dialogue and critical literacy) is apparently the dominant theory practiced in literacy campaigns. Since criticality, as espoused by Freire's critical pedagogy, presumes reflection on one's assumptions, it requires dialogue with opposing viewpoints. It is ironic that "critical pedagogy" has become un-critical by rising to the hegemonic heights to which the field of literacy has elevated it. Since its proponents have become concerned with its preservation, they have become unreflexive and have failed to recognize its limitations or to think differently. If Freire's model is to remain "critical," and to develop, it needs to dialogue with a counter-posed model.
In addition, I find Freire's model internally inconsistent: it presumes the world can be read (a dualistic worldview often associated with cognitivism, which implies that the individual stands apart from the world and must uncover a reality of which he or she is not a part), even though Freire claims truth is uncovered through dialogue with others (a position in line with socioculturalism, which holds that meaning-making is essentially dialogic in nature and highlights the role of mediational tools such as language, texts, and conversations). I find it doubtful that Freire is presenting a new form of transactionalism. Instead, the internal inconsistency of Freire's model highlights the need for a counter-posed model to dialogue with it, in order to work out the kinks and create a new synthesis, or a better theory.

Conclusion

Since the field of literacy is relatively young, there is definitely room for improvement. To begin with, we need to define more clearly the term "literacy" and what is implied by it: whether we mean decoding, comprehension, numeracy, or reading-to-do.
In addition, I believe that the field of literacy might be enriched through dialogue between those holding opposing viewpoints (rather than polemical discrediting of the opposing faction, or hegemonic dominance of one party to the point of excluding the opposing viewpoint). Resurrecting the concept of transactionalism might help. While it is apparently standard for literacy tests to include both a direct-measure and a survey portion, ideally, quantitative studies might also be augmented by qualitative studies created to accompany them.
Because research has not focused on schema construction, and has only tested pre-made schemas, experiments on schema construction should be undertaken. We know little about the role of schema with regard to literacy and cultural knowledge, even though so much of our research is based on this sketchy concept.
Since few studies exploring race and literacy have been done, it would be interesting to explore whether African-Americans score lower than European-Americans on standardized literacy tests because of a lack of document reading. One could line up groups of African-Americans and European-Americans matched for socioeconomic status, gender, and time spent reading various personal documents, quantitative documents, and books, compare only the number of work documents the groups read, and see whether the scores differ. It would be even more interesting to see whether this pattern holds in other countries by examining minority scores against those of the hegemonic group.
It would also be interesting to extend Stoodley, Talcott, and Carter's (2000) study of dyslexics' weaker vibrotactile sensitivity and see whether dyslexia affects phonetic learning. I suspect that the phonics method of teaching reading does not work for dyslexic students, considering that they perceive tones and sounds differently. I do not believe that such a test has been undertaken.
Since 1990, interest in international studies of literacy has grown, following the World Conference on Education for All in Jomtien, Thailand (Lavy & Spratt, 1997, p. 120). Despite this growing interest, the most commonly referenced source of information on national literacy levels remains the census-based statistics of individual countries. Censuses contain problems of definition, measurement, and interpretation; based on educational attainment, self-report, or third-person report, they depend on some questionable assumptions. International studies have been difficult to draw conclusions from because literacy assessments lack comparable standards. Even after 1990, we have continued to compare apples to oranges. I believe that there is a strong need for a set of agreed-upon indicants; for instance, it would be helpful if everyone agreed to define "adults" as those aged 18 years and up, versus 7 or 9 or 16 years of age, for ease of comparison. In addition, it would help if researchers set test tasks at an agreed-upon level (i.e., primary or secondary) and style (academic tasks versus tasks based on daily life skills), or agreed on the set of skills tested (i.e., wiring money, comprehending a college-level text, decoding letters or sentences), and on a scale with which to compare the resultant scores. There are currently a multitude of measures, scores, and scales representing the undefined term "literacy" (i.e., 70% or above on the NLM exam in India, versus scores of 0-3 on a scale in Morocco), rendering comparison meaningless. A standard scale of comparison for use in international studies would prove invaluable to the field.



References

Anderson, C. (1982). History of Adult Reading Programs. College and Adult Reading XI NCRA Yearbook, 1-14.

Augustine, Saint, Bishop of Hippo. (2001). Confessions. (R. Warner, Trans.). NY: Signet Classic.

Cohen, D.K., & Neufeld, B. (1981). The Failure of High Schools and the Progress of Education. Daedalus, 110, 81.

Dighe, A. (1995). Women and Literacy in India: a study in a re-settlement colony in Delhi. Reading, England: Education for Development.

Freire, P. (2005). Pedagogy of the Oppressed. NY: Continuum.

Frey, B., Lee, S., Tollefson, N., Pass, L., & Massengill, D. (2005). Balanced Literacy in an Urban School District. Journal of Educational Research, 98, 5, 272-280.

Friedman, T.L. (2005). The World is Flat: A Brief History of the Twenty-First Century. NY: Farrar, Straus and Giroux.

Hirsch, E.D. (1988). Cultural Literacy. NY: Vintage Books.

Holt, J.K., & Smith, M.C. (2005). Literacy Practices Among Different Ethnic Groups: The Role of Socioeconomic and Cultural Factors. Reading Research and Instruction, 44, 3, 1-21.

Lavy, V., & Spratt, J. (1997). Patterns of Incidence and Change in Moroccan Literacy. Comparative Education Review, 41, 2, 115-141.

McVee, M.B., Dunsmore, K., & Gavelek, J.R. (2005). Schema Theory Revisited. Review of Educational Research, 75, 4, 531-566.

Meier, D. (2002). The Power of Their Ideas. Boston: Beacon Press, 27.

Mosenthal, P.B., & Kirsch, I.S. (1998). A New Measure for Assessing Document Complexity: The PMOSE/IKIRSCH Document Readability Formula. Journal of Adolescent and Adult Literacy, 41, 8, 638-657.

National Commission on Excellence in Education (1983). A Nation at Risk: The Imperative for Educational Reform. 2.

Sedlak, M., Wheeler, C., Pullin, D., & Cusick, P. (1986). Selling Students Short: Classroom Bargains and Academic Reform in the American High School, NY: Teacher’s College Press, 18.

Stoodley, C., Talcott, J.B., & Carter, E.L. (2000). Selective deficits of vibrotactile sensitivity in dyslexic readers. Neuroscience Letters, 295, 1-2, 13-16.
