
We consider now a set of interrelated cognitive processes
that includes
memory, attention, and cognitive strategies. These
"executive functions"
have not received much direct research attention in the
visual impairment
literature, particularly that with infants and children.
However,
many studies, some of which we have already mentioned in
previous
sections, bear at least indirectly on these issues.
Infancy
Several lines of innovative research with sighted infants
have revealed
that the neonate's attention is "captured" by certain
perceptual events,
and that the neonate has little if any volitional choice
about which stimuli
are actually attended. Initially involuntary, attention
gradually comes
under a degree of voluntary control. For example, early eye
fixations are
entirely determined by stimulus features but become largely
volitional
during the first year.
Among the visual events that are particularly
attention-commanding
are moving stimuli, facelike stimuli, and areas within the
visual field that
contain a moderate degree of complexity. Moving stimuli are
especially
effective in eliciting visual attention. Complexity is an
intriguing dimension
of visual attention: the evidence supports the notion that
as the
infant's visual information-processing capacities develop,
the infant prefers
to look at progressively more complex stimulus arrays. The
other sensory modalities have been studied far less completely,
and whether analogs of these visual-developmental principles
exist in other modalities remains largely an open question.
One of the most interesting lines of recent research in
early perceptual
and cognitive development has been in the area of memory.
The use of
ingenious research methods has revealed that by at least
four months,
infants begin to recognize and remember perceptual stimuli
that they
have encountered before. The implications of this
development are clear:
it is only when recognition and memory begin that conceptual
representation
and an understanding of the physical world can begin.
Caution is
in order, however: to say that the first evidence of
recognition and memory
occurs around four months of age is not to say that memory
is mature
at that point or that the capacity to represent aspects of
the physical world
conceptually springs forth fully formed.
Without research on perceptual development and executive
functions
such as attention and memory, our knowledge about the human
infant's
cognitive understanding of the physical world would be
incomplete.
Impressive strides have been made in research on these
issues with
sighted infants. There is as yet little such work with
visually impaired
infants. Some inferences about memory can be made, and we
reviewed
the evidence for these in connection with our discussion of
the development
of object permanence in Chapter 3.
We turn now to consideration of these issues in preschool
and school-age children.
Memory span
The digit span subscale of the WISC can be taken as a
measure of simple
memory. Tillman and Osborne (1969) evaluated WISC verbal
scale
scores for groups of blind and sighted children, ages 7 to
11, for whom
overall WISC scores were equated. Analysis revealed a
significant interaction
of scale (the six verbal scales) and group (blind and
sighted). This
was produced by superior performance of sighted children on
the similarities
scale, offset by superior performance of blind children on
the
digit span scale. This pattern of superiority of the blind
children's memory,
relative to their performance on the other scales, did not
change with
age. Print readers were excluded from the sample of blind
children, and
thus the results can be taken as applying to children with
severe visual
loss.
Smits and Mommers (1976), studying children in the
Netherlands,
reported a similar finding with children ranging from 7 to
13 years of age.
The pattern was exactly the same, with digit span
performance relatively
stronger for the children with visual impairments than
performance on
other scales. When the group was divided into blind and
partially sighted
subgroups, the overall WISC verbal IQ was higher for the
blind subgroup.
This difference appeared in each of the six scales and was
apparently
no stronger for the digit span than for the other scales.
From this evidence, then, there is clearly nothing wrong
with the
simple memory capabilities of children with visual
impairments, and
indeed this may be an area of relatively high function.
Encoding of tactual information
How is tactual information encoded in memory by children
with visual
impairments? Although it seems evident that the nature of
coding must
be tactual, the issue is not that simple, since it is
possible that aspects of
verbal or visual encoding may also be involved.
Davidson, Barnes, and Mullen (1974) varied the memory demand
in a
task involving matching of three-dimensional shape stimuli.
The child
explored the standard stimulus, then felt each of the
comparison stimuli
in succession and chose the one that matched the standard.
Memory
demand was varied by including either three or five items in
the comparison
set: since the incorrect members of the set were similar to
the
standard, exploration of them constituted tactual
interference. Increasing
the size of the comparison set increased the error frequency
significantly.
The results thus support the hypothesis that features of
tactually experienced
stimuli are encoded in a specifically tactual form.
A useful paradigm for studying the nature of encoding
involves inserting
a delay between the experience of the standard and the
choice of the
comparison, during which various activities are interposed.
The logic is
that different kinds of intervening activity should
interfere selectively
with memory, depending on the nature of the encoding. For
example,
tactual intervening activity should interfere if the
standard stimulus is
tactually encoded, but not if it is encoded in some other
form.
Following this logic, Millar (1974) used three-dimensional
nonsense
shapes designed to be easily discriminated but not easily
labeled. The
subjects were 9- and 10-year-olds who were blind from very
early in life.
The procedure was to present a standard stimulus for a 2-sec
inspection,
then after a delay to present a comparison stimulus for the
child's same-different judgment. The delay was either
unfilled (no activity) or filled with rehearsal (finger
tracing of the shape of the standard on the flat floor of
the apparatus), a verbal distractor (counting backward by
threes), or a tactual distractor (a tactual manipulation
activity). If
memory of the
standard is tactual, then rehearsal should facilitate
performance, whereas
the tactual distractor should interfere with it. The verbal
distractor
should presumably be neutral in its effect if the standard
stimulus is
encoded tactually.
Errors were infrequent in all conditions, but response
latencies
varied with the nature of the delay activity. Responses were
slow in the
tactual distractor condition, a result that supports the
tactual encoding
hypothesis. However, rehearsal did not facilitate
performance; furthermore,
the verbal distractor did interfere with performance. These
results
do not conform to the hypothesis of tactual encoding.
Since the tasks did not produce differential error rates, a
second experiment
was conducted with five- to seven-year-old children, using
basically
the same procedures. More errors occurred in the verbal
distractor
and movement distractor conditions than in the unfilled
delay or rehearsal
conditions.
The hypothesis that encoding is specifically tactual
predicts that the
tactual distractor should interfere most with performance,
and specifically
should interfere more than the verbal distractor. That the
verbal
and movement distractors both had interfering effects
suggests that the
interference was not a modality-specific effect that
interferes with encoding,
but rather was a matter of distraction of the child's
attention.
Pursuing the attention versus modality issue, Kool and Rana
(1980)
hypothesized that a verbal distractor would interfere by
distracting attention,
whereas a tactual distractor would interfere specifically
with the
retention of tactual information. They used conditions like
those of
Millar (1974) with congenitally blind children ages 9 to 11,
and ages 13 to
16 years in a second experiment. There was a decay of
tactual memory
with increasing delay in the unfilled delay condition. The
verbal distractor
interfered with performance, thus corroborating Millar's
(1974) results.
It was assumed that the older children would have more
tactual
experience and would therefore be more inclined to encode
tactually.
With these children, both verbal and tactual distractor
conditions were
effective. The effect of the tactual distractor was greater
than that of the
verbal distractor at every delay interval. This result
supports the notion
that the stimuli were tactually encoded. The significant
effect of the
verbal distractor suggests that there is also a general
effect on attention,
thus supporting Millar's (1974) conclusion.
These three studies support the hypothesis that tactual
stimuli are
indeed encoded in a tactual manner, but that the retention
of tactually
encoded information is affected by attentional factors that
are not specific
to sensory modality.
It is known that phonologically similar items can interfere
with one
another in memory and cause lower recall. Millar (1975a)
asked whether a
similar process might occur with tactual features encoded in
memory.
Such interference would be evidence for the encoding of
tactual features.
The blind children ranged in age from 4 to 10 years, and all
had lost
vision by the age of 18 months. The procedure required the
recall of the
position of an item in a series, with series length ranging
from two to six
items. The objects were presented sequentially, then one
test object was
given to the child with the request to replace it in its
correct position in
the series. Three types of series were used. One contained
phonologically
similar (but tactually distinct, e.g., rat, bag, man) items,
another contained
items that were tactually similar (but phonologically
distinct, e.g.,
ruler, knife, comb), and the third was heterogeneous,
containing items
that were tactually and phonologically distinct (e.g., ball,
watch, chair).
As expected, recall performance was worse for the
phonologically
similar series than for the heterogeneous series. This
finding shows the
well-known phonological interference effect. The question of
interest
was whether a similar effect would be found for tactually
similar series.
Overall recall of the tactually similar items was indeed
worse than for
items in the heterogeneous series, thus indicating that
tactual features
must have been stored in memory. Interestingly, the tactual
interference
effect was strongest for the smaller series and decreased as
series length
increased. No variation with age or other individual
differences variables
was reported.
Summary. This literature on the nature of encoding of
tactual experience
is small but interesting. Studies that show adverse effects
of tactual
distractors during a delay before recall suggest that
tactually experienced
information is encoded in a specifically tactual form.
However, the issue
is not that simple, since performance also deteriorates,
though typically
to a lesser degree, as a result of verbal distractors. This
may be largely an effect of attention, but the relationship
between initial encoding and
retention variables is not completely clear. Finally, it is
surprising that more has not been done to explore the
possibility that children with early visual experience
encode tactual information differently as a result of that
experience.
Strategies of tactual information processing
Performance in many perceptual and cognitive tasks is found
to vary
significantly as a function of the information-processing
strategy that the
subject adopts. There is a small but exemplary literature on
this issue for
children with visual impairments.
Using a very basic paradigm, Simpkins (1979) examined
children's
ability to recognize geometric shapes tactually. The child
first felt a
standard stimulus, then subsequently chose a match from a
set of four
sequentially presented alternatives. The children were four
to seven years
of age and varied in the amount of visual function.
There was little variation due to gender or visual status,
but the older
children performed better than the younger ones. Simpkins
reported that
in touching the stimuli, the younger children tended to
attend to a
peculiar topological property of a form (e.g., a hole in it)
whereas the
older children tended to hold the shape in one hand and
trace its contour
with the other hand. This shift in exploration strategy
parallels that
found with sighted children (Gliner, 1966), and it is not
surprising that
strategy shifts are related to performance in a similar way.
Berla (1974)
used irregular geometric stimuli varying in complexity from
three to five
sides. The child felt a shape in one orientation, then the
shape was
quickly rotated by 90, 180, or 270 degrees without the child
touching it.
The child's task was to return the shape to its original
orientation. The
accuracy of performance improved with grade level from
grades two
through eight. Increasing complexity did not decrease
accuracy but did
increase the time required for performance. Berla noted that
the grade-related
performance differences seemed to be connected to the
strategy
of choosing a distinctive feature of the shape to
concentrate on: the older
children appeared to attend more to the distinctive features
(e.g., sharp
angles) of the shapes. Berla suggested that a consistent
information-processing
strategy was the basis for their better performance.
Berla's analysis of shape discrimination in terms of
distinctive features
is reminiscent of Gibson's (1969) formulation: attention to
peculiar
distinguishing features, or areas of high information
content, improves
the efficiency and effectiveness of shape perception. In a
similar vein,
Solntseva (1966) suggested that the difficulties that the
blind child experiences
in the formation of tactual images of the external
environment are
caused by problems in the ability to differentiate
distinctive features of
tactual experience. Tactual qualities such as texture and
hardness are
relatively attention commanding (Klatzky, Lederman, & Reed,
1987) and
easy to discriminate, but the discrimination of shape
requires a more
systematic approach for the detection of critical features.
Davidson (1972), studying adolescents, used haptic judgments
of curvature
as a vehicle for studying the relationship of tactual
scanning
strategies and task success. The task was to judge whether
an edge was
convex, concave, or straight. Hand movement patterns were
videotaped.
The most frequently used strategy was the "grip," in which
all four
fingers are spread out along the curve, followed by the "top
sweep,"
which involves running the forefinger along the length of
the curve. It is
interesting that the blind subjects used the grip strategy
much more
frequently than a comparable group of blindfolded sighted
subjects, and
that the judgments of the blind group were more accurate.
(When
sighted subjects were instructed to use the grip strategy,
their performance
improved.)
The relationship of strategies and performance under
variations in
task difficulty is also of interest. Davidson and Whitson
(1974) varied
task difficulty by changing the number of items in the
comparison set.
That is, a standard curve was presented and felt, and then
the subject had
to find the standard when it was part of a comparison set of
one, three, or
five curves. (In the case of the single curve comparison, a
simple same-different
judgment was required.) The congenitally blind subjects
averaged
19 years of age.
When search strategy was unrestricted, errors increased
regularly with
the number of comparison alternatives, showing a basic
effect of task
difficulty. The question of interest, though, is whether
strategies are
differentially effective for various difficulty levels. The
"grip" strategy
was most frequently used regardless of difficulty level, but
there was a
tendency for the "top sweep" strategy to increase and the
"grip" to
decrease at the highest difficulty level.
Instructed strategies
In a second part of the same experiment, subjects were
instructed to use a
single strategy. There was a tendency for better performance
when strategies
were used in which more of the curved stimulus was
simultaneously
apprehended (e.g., the "grip," in which the subject's four
fingers are
spread along a substantial portion of the edge).
Berla and Butterfield (1977) examined the effectiveness of
training
procedures in improving tactual search performance. The
subjects were
braille readers in kindergarten through fifth grade, age
range 6 to 17
years. The test stimuli were outline tracings of various
states and countries.
The child felt the stimulus for 30 sec, then attempted to
find the
same stimulus in a set of four shapes. Based on a pre-test,
children were
divided into a training and a control group. The training
group received
three training sessions in which the child's attention was
drawn to distinctive
features of the shapes (e.g., "parts that stick out").
Following
training, a post-test was given, and the trained group,
which was matched
to the control group on the basis of pre-test scores,
performed very much
better than the control group, with 84% of the training
group showing an
improvement.
In a second experiment, the test materials were changed to
involve
searching for a shape in a complex array of shapes. Training
was similar
to that in the first experiment but involved shapes embedded
in more
complex arrays. On the post-test, the trained group again
performed
significantly better and faster than the untrained control
group. Thus training improved performance, apparently not
only by drawing the child's attention to distinctive
features but also by encouraging a more systematic search
process.
Berla and Murr (1974) also instructed subjects in the use of
specific
search strategies while searching for features on a tactual
map. The
subjects were drawn from grades 4 through 12 and ranged in
age from 11
to 19 years. All of the children were braille readers.
Following a pre-test
requiring the location of tactual symbols without strategy
instructions,
three groups were instructed to use either a vertical, a
one-handed horizontal,
or a two-handed horizontal scanning strategy. The children
practiced
the strategy for 4 min. Children in a fourth condition were
free to
scan as they wished. The task then involved finding as many
target
symbols on the map as possible. There was a modest general
increase
with grade level in the number of symbols located. Of more
importance, the scanning strategies produced different rates
of success: each of the
three instructed strategies produced significant improvement
compared
to the pre-test, with the vertical strategy producing the
greatest improvement.
The uninstructed control group did not improve over the
pre-test
performance. The benefit of the instructed strategies in
general seemed
to stem from the more systematic coverage of the map that
resulted.
Berla (1981) further explored the effectiveness of training
scanning
strategies as a function of age. The children, all braille
readers, were
divided into groups that averaged 11, 15, and 19 years. Each
age group
was divided into a control and a training group. Early in
the test procedure,
children in the training group were briefly instructed in
the use
of a systematic vertical scanning strategy. The task
required the child to
feel the parts of a nine-item "puzzle" and remember their
locations, and
then to recreate the puzzle on a blank board using the nine
individual
elements.
There were no obvious effects of the training on vertical
location
errors. For horizontal errors, however, the effect of
training varied with
age group. Specifically, the youngest group benefited from
training, while
the performance of the oldest group suffered from training.
The performance
of the middle group was not affected. Berla reasoned that
the
youngest children benefited from training because they had
not yet established
habitual search patterns, whereas the instructed search
strategy
may have interfered with search patterns that the older
children had
already established.
Summary. Davidson (1976) argued that the better search
strategies facilitate
the representation of the stimulus in memory, with attention
as the
mediating process: as attention is more organized, so is
search, the result
being more effective encoding of tactually perceived
information. Whatever
the exact mechanism, it is clear from the work of Berla and
Davidson
that more systematic strategies lead to better performance,
and that
furthermore, strategies can benefit from training.
Integration of information from different sensory
modalities
There is an extensive literature on sighted children regarding
issues of
intermodality relations, and particularly on what happens
when information
about events is received simultaneously from two or more
sensory modalities. The literature on the visually impaired
population is more
limited, but several interesting studies illustrate
important points, particularly
about the perception of spatial structure.
One issue is the relative effectiveness of the perception of
spatial and
temporal structure. O'Connor and Hermelin (1972a) addressed
this
question using auditory stimuli that were distributed both
temporally
and spatially. A sequence of three spoken digits was heard
from an array
of three spatially separated speakers. The sequences were
designed so
that when asked for the "middle" digit, the child would have
to choose
between the digit that had occurred at the spatially middle
speaker and the
digit that had occurred in the temporal middle of the
sequence. The
children were 13-year-olds who had been blind since birth.
The results were very clear: the overwhelming choice was of
the temporally
middle digit, rather than of the digit that sounded from the
spatially middle speaker. This pattern was strikingly
different from that
found with sighted children who saw the sequence of three
digits at
different spatial locations rather than hearing them. In
this condition, the
sighted subjects overwhelmingly reported the digit from the
middle spatial
location, rather than the digit in the middle of the
temporal sequence.
Children with hearing impairments responded in much the same
manner
as did the sighted children in this condition. O'Connor and
Hermelin
argued that blind children do not naturally encode spatially
distributed
auditory information in terms of its spatial distribution
(and that
hearing-impaired children, correspondingly, do not naturally
encode
spatially distributed visual information in terms of its
temporal
distribution).
Battacchi, Franza, and Pani (1981) similarly evaluated
children's ability
to process the spatial structure of auditory events. They
used a semicircular
array of six loudspeakers that were separated by at least 25
degrees and therefore highly spatially discriminable from
one another. A
sequence of six names was heard, one from each speaker, at a
rate of one
per second. In the congruent condition, the sequence started
at one end
and proceeded regularly to the other end of the set of
speakers, while in
the incongruent condition the order of the names did not
correspond to
the spatial sequence of the speakers. After the
presentation, the child was
asked to say the names that had been heard at two of the
speakers, chosen
at random.
Sighted children perform this task better in the congruent
condition than in the incongruent condition:
apparently the spatial structure
of the speaker array facilitates their processing of the
auditory information.
In contrast, neither partially sighted nor blind children
(ages 8 to
10) performed better in the congruent than in the
incongruent condition:
the regularity of the spatial sequence in the congruent
condition did
not facilitate their processing of the auditory information.
In fact, the
performance of the blind children was not above chance. The
performance
of the partially sighted children, however, was better than
that of
the blind. (A group of blind young adults did show a
performance advantage
in the congruent condition, suggesting that this ability
develops,
albeit slowly, with age.)
We should stress that in these experiments, the task is to
process
auditory spatial information. On the face of it, there is no
reason to
expect that impairment of vision should interfere with this
ability. However,
the empirical evidence is clear.
In another approach to the concept of "middleness," O'Connor
and
Hermelin (1972b) assessed the encoding strategies of seven-
to nine-year-old
blind children in a three-term series problem. Two types of
sentences
were constructed, each expressing a relationship among three
items. In
one type, the sequential order of presentation corresponded
to their
logical relationship, and in the other the sequential and
logical orders did
not correspond. The child was asked two types of question,
one dealing
with the logically middle member of the triad and the other
dealing with
one of the two logically extreme members. There was a
tendency to
report the sequentially middle item when it was incorrect.
O'Connor and
Hermelin suggested that the blind children did not have a
readily available
spatial code for use when it was appropriate, and instead
tended to
rely on a temporal code even when it was inappropriate.
Axelrod (1968) used still another method to approach the
same issue:
an oddity problem, in which the child is required to
identify a characteristic
that distinguishes one member of a triad from the others.
Children
who had lost vision earlier than 18 months had greater
difficulty
than children who had lost vision later than two years in
learning such
problems when the key characteristic was that the item
occupied the
temporal or spatial "middle" of a triad. Axelrod also
evaluated the formation
of intermodality learning sets, which have to do with the
ability to
transfer a solution from a problem learned in one sensory
modality to a
similar problem presented in another modality. When the
problem was initially learned tactually or auditorially and
then presented in the other modality, children with early
visual loss were again worse than those with
later visual loss.
With respect to the question of information-processing
strategies applied
to spatial and other tasks, Millar (1981a, 1982) argued that
there is
nothing inherently different in the information-processing
capabilities of
blind children, but rather that preferred strategies develop
as a result of
the typical ways that children gain their primary
information. Thus, with
respect to spatial perception, "If blindness leads subjects
to neglect external
cues, they will learn less, and know less about directional
connections
between external cues. This, in turn, strengthens the
preference for
strategies derived from the remaining modalities" (1982, p.
119).
This is not to say that the strategies actually chosen for
spatial (and
presumably other) tasks are necessarily the optimal ones: a
visualization
strategy may not be optimal for a given task, but the child
with residual
vision may use it nonetheless because of the effectiveness
of visualization
in many prior experiences. Similarly, the blind child may
have external
spatial-referential strategies available but tend not to use
them because
the primary source of spatial information (touch) tends to
elicit internally
referenced strategies. Similar conclusions were reached by
Robin and
Pecheux (1976), working with tasks requiring reproduction of
two- and
three-dimensional spatial models.
Summary. It is clear that the processing of even nonvisual
information
about spatial structure is hampered by impaired vision.
These results
again underscore the important role of vision as a vehicle
for the organization
of spatial structure, regardless of modality. However, the
picture is complicated by the question of
information-processing strategies: strategies
tend to be selected based on the particular sensory modality
through
which information is received. Although this association of
strategy and
sensory modality may be natural, Millar suggests that it is
not inviolable.
The implication is that training studies designed to help
children select
appropriate strategies of information processing may prove
useful.
Verbal and phonological issues in encoding and
memory
We turn now from spatial issues to those related to the
encoding of verbal
and phonological information. Much of this research uses
braille characters as stimuli: these are especially
interesting as research stimuli because
they have both tactual and verbal-phonological properties.
Our intent
here is not to review how braille characters are learned or
how braille
reading is acquired, but rather to examine the nature of
encoding and
memory of verbal and phonological information, particularly
as it is
obtained via touch. The issue, in short, is the nature of
encoding of
information in memory.
Tactual versus phonological encoding
As we noted earlier, Millar (1975a) demonstrated that
tactual information
is stored in memory in a specifically tactual form, since
interpolated
activity of a tactual nature during a delay interfered
specifically with the
recall of tactually experienced information. Millar (1975b)
examined the
corresponding question with braille stimuli. That is, would
braille stimuli,
with both tactual and phonological properties, be stored
phonologically,
tactually, or perhaps in both forms?
Three sets of stimuli were used, one consisting of items
that were
tactually dissimilar but phonologically similar, another of
items that were
tactually similar but phonologically dissimilar, and the
third of items that
were dissimilar both tactually and phonologically. Set size
ranged from
two to six items. The blind children ranged in age from 4 to
12 years and
had lost vision within the first 18 months of life. They
were screened by
pre-test to ensure their ability to discriminate the letters
tactually, and in
the case of the older children, to identify the letters. The
child felt each
letter of a sequence in succession, then was given one of
the letters and
asked to indicate where it had occurred in the series.
Evidence for both phonological and tactual interference was
found for
all ages, indicating that both the phonological and the
tactual properties
of the stimuli were encoded. However, the younger children
tended to
show stronger evidence of tactual than phonological
encoding. It was also
clear that different processes were involved for tactual and
phonological
information, since there were different relationships of
tactual and phonological
interference effects in relation to overall memory demand.
Additionally, there was a tendency for children with higher
IQ generally
to perform better than those with lower IQ.
Overall, Millar's (1975b) results for braille stimuli
corroborated her
(1975a) results for purely tactual stimuli in confirming
that tactual encoding does occur. However, the addition of
phonological properties
added a specifically phonological form of encoding as well.
It is well known that the grouping of items within a serial
string of
verbal material facilitates memory of that material.
Presumably such a
grouping effect should also occur with tactual material that
has phonological
correlates. Indeed, Millar (1978) used strings of braille
letters
and found that grouping facilitates recall. This result
further corroborates
the evidence of phonological influence on the tactual
encoding of
verbal material. However, would similar facilitatory effects
of grouping
occur with tactual material without verbal association? The
answer was a
clear no: when the stimulus strings were nonsense shapes
without phonological
correlates, grouping actually interfered with recall. The
results
supported the hypothesis that tactual encoding is
significantly different
when the stimuli have verbal associations than when they do
not. Overall
performance improved with increasing age over the 7- to
11-year range,
but the difference between associative and nonassociative
stimuli did not
change with age. Mental age (as well as digit span) was
similarly related to
overall performance but also did not interact with stimulus
type effects.
These findings constitute further evidence for the existence
of
different memory processes for verbal and tactual
information, and particularly
for the interaction of these processes when
verbal-phonological
information is involved.
At another level of phonological-tactual interaction,
pronounceability
may facilitate braille letter recognition. A study of this
question was reported by
Pick, Thomas, and Pick (1966). The subjects were braille
readers ranging
in age from 9 to 21. They varied in age at visual loss,
amount of residual
vision, and braille reading experience. The stimuli were
letter groups
containing from three to six characters. In one condition
the stimuli were
pronounceable, whereas in another condition, the letters of
each group
were rearranged to render it unpronounceable. The child's
task was to
scan the letters tactually and name each letter as quickly
as possible. It
was hypothesized that for the pronounceable stimuli, the
sound sequence
would facilitate discrimination of the letters. Indeed,
there was a dramatic
speed difference in favor of the letters occurring in
pronounceable
groups, and fewer errors occurred for letters in these
groups, again
showing the facilitative role of phonological context. The
results did not
differ as a function of either age or braille reading
experience; variation
with age at visual loss or amount of residual vision was not
reported.
Summary. The evidence about tactual and phonological encoding
supports the view that when stimuli have both tactual and
phonological
properties, these are both encoded. The two kinds of
encoding follow
somewhat different processes. However, these processes
operate interdependently,
particularly in the encoding of braille.
The role of touch in semantic coding
In the case of print reading, if letter sounds are encoded
one-by-one, as
novice readers may be inclined to do, reading suffers. This
is particularly
so if the sound of the word is not congruent with the
sequential sounds of
the letters. (For example, it is difficult to arrive at the
sound of the word
eat by combining the sequence of sounds of the letters
e-a-t.)
Does such an effect also occur for braille? In fact, the
effect might be
stronger because of the sequential nature of encountering
braille characters.
Pring (1982) asked whether a phonological code is generated
as the
braille letters are contacted tactually, or whether touch
simply serves as a
channel, with phonological encoding occurring at some higher
level. If
individual letters are phonologically encoded, a
phonological-lexical
conflict would be generated in an incongruent condition
where the
sounds of individual letters do not match their role in the
sound of the
word (e.g., in the pair steak-leak) and reading should be
slower. In
contrast, in the congruent case (e.g., stake-leak) no
conflict is present
and reading should be faster.
Pring studied children who were congenitally blind, were
rated as
good braille readers, and ranged in age from 11 to 13.
Relatively few
errors were made in reading the word-pair lists, and the
primary analysis
was of response latency. The time required to read the
target word was
longer in the incongruent than in the congruent condition.
This result
supports the notion that phonological encoding occurs at the
tactual
level, and thus that pronunciation is constructed by
assigning phonological
properties to individual letters and generating the sound of
the word
(and thus eventually its meaning) by combining the
individual sounds.
However, if the children used only this form of
tactual-phonological
encoding, then errors of pronunciation would be expected for
words with
irregular spelling. That such errors were not generally
found supports
the notion of direct access via touch to the known meaning
of the word, rather than a constructive process from
individual letters. In short, the
results yield evidence for both processes.
In a second experiment with the same children, Pring tested
the hypothesis
that this process is mediated specifically by
grapheme-phoneme
(letter-sound) correspondence. This was done by having the
children
read lists of words that were either regular (words whose
pronunciation
corresponds to their spelling, such as wood and dance) or
irregular (those
whose pronunciation does not correspond to their spelling,
such as pint
and talk). Errors were again few, and latency data showed
that regular
words were pronounced more quickly than irregular words.
This suggests
that extra processing is required for irregular words, or
that an
alternative processing route is used for them.
Overall, these results show interesting parallels to the
processes involved
in reading print visually, but they do not yet yield a clear
picture of
how the processes work.
Attention to features of braille stimuli
In earlier sections we addressed the question of the
relative effectiveness
of different information-processing strategies in mediating
performance
in tactual perception, and the issue arises here as well for
the perception
of braille stimuli. Millar (1984) investigated children's
attention to
various features of braille characters, exploring in
particular the relative
attention to phonological and tactual properties of the
characters in
relation to the children's level of skill in braille
reading. The children
were congenitally blind and ranged in age from 7 to 12
years. They were
divided into three reading groups based on reading rate:
these groups
overlapped substantially in age and were moderately
differentiated by IQ score.
The test was a multidimensional oddity problem. Three
stimuli were
presented on each trial, and the child had to choose which
of the three
stimuli was the "odd one." The child was instructed that a
stimulus could
be "odd" by differing in meaning, sound, shape, or number of
dots from
the others. From the child's choice it is possible to
discern which of the
dimensions governed the child's choices.
Differences were found between the reading groups. Faster
readers
based their choices more on semantic features and less on
shape features.
This choice pattern was even more highly related to mental
age than to reading level. Each child was additionally
classified as a normal or a
retarded reader, based on whether or not his or her reading
proficiency
score was within a year of chronological age. The normal
readers did not
show predominance of any of the dimensions, whereas the
retarded
readers tended to focus on phonological features. This
indicates that
their reading strategy was to construct the sound of the
word by combining
the sounds of the individual letters, which also
characterizes poorer
sighted readers.
In a second experiment, children were instructed to use
specific features
for judgment. Faster readers were better able to respond to
different features in accordance with the instructions,
whereas slower
readers were less able to escape their own spontaneous
strategies.
Thus, Millar's work shows that there are relationships
between attentional
propensities and reading capability. The direction of
causality, of
course, is elusive.
Pring (1984) used a word-recognition task to explore a
similar issue at
the semantic level. The question at issue was the degree to
which semantic
or tactual information would govern children's ability to
recognize
words. Word pairs were constructed to contain semantically
related (e.g.,
bread-butter) or unrelated (e.g., nurse-butter) members. Nonword
combinations
were also included. The child's task was to determine, as
quickly as
possible, whether or not the stimulus was an English word.
Words in
semantically related pairs were correctly recognized faster
than those in
unrelated pairs. This semantic facilitation effect is
evidence that the
children attend to the semantic context while processing
information
about an individual word.
However, when the braille stimuli were tactually degraded by
physically
reducing the height of the braille dots, the semantic
facilitation
effect did not occur. Apparently reducing the legibility of
the braille dots
redirected the child's attention from the semantic to the
perceptual
characteristics of the stimuli. The children were
congenitally blind, were
of normal to high intelligence, were relatively experienced
braille readers,
and averaged 10 years 6 months of age.
Summary. The implication of the Millar and Pring studies is
that there
are indeed individual differences in children's attentional
propensities,
and furthermore that these are related to reading level.
Pring's results in
particular support the view that the child's attention is
limited and that its allocation is flexible, depending on
the balance of cognitive and perceptual
task demands.
The relationship between verbal and pictorial information
When sighted children look at pictures, their recognition
and memory
can be facilitated by accompanying verbal information: this
is an example
of verbal mediation. The question arises whether a similar
phenomenon
occurs when the verbal information is experienced via
braille. Pring
(1987) examined the role of verbal mediation in the
recognition of pictures
by congenitally blind children ranging in age from 7 to 16,
all of
whom were braille readers. The picture stimuli were
raised-line drawings
of objects with which familiarity could be expected (e.g.,
shoe, hand, or
sofa). In a matching task, each picture was paired with a
word printed
in braille; the child's task was to report whether the word
went with the
picture ("same") or not ("different"). Following this, a
recognition test
was performed in which a series of pictures and words was
presented.
Some of the words and the pictures had been experienced in
the matching
task and some had not. The child's task was to judge whether
each
item had or had not been encountered in the matching task.
Performance was in general very good. For recognition of
pictures,
performance was best for pictures that had been encountered
together
with the matching word, and specifically better than for
those with a
mismatched word. Pring suggested that the results may
reflect not verbal
mediation in the true sense, but rather a dual encoding of
the stimuli such
that the picture and the verbal information are encoded in
parallel.
Whatever the exact mechanism, association of verbal material
and pictures
clearly occurred. In fact, this association also operated in
a negative
way in the results. Having encountered a word in the
matching phase
increased the likelihood that an erroneous "yes" recognition
response
would be given to the picture corresponding to that word.
That is, false
positive picture recognition responses were made as a result
of previous
exposure to the word.
Although it is not strictly an example of verbal mediation,
the question
also arises whether there might be a reciprocal effect, such
that picture
information aids in the recall of verbal information. Pring
and Rusted
(1985) found positive evidence. A short prose passage
containing specific
facts corresponded to each of six raised-line pictures of
animals or plants that were explored tactually. Each of the
passages contained information
represented in the picture, as well as other information not
depicted. The
child heard the prose text twice, once with and once without
the picture
available. When the picture was available, the child was
encouraged to
explore the picture and identify features as they were
mentioned in the
text. Immediately after each text, the child was asked to
describe the
subject of the text (and picture). After completing the
trials, an interpolated
task of braille letter naming was used for 15 min, after
which a
delayed recall task was given.
Immediate recall for pictured information was, not
surprisingly, better
than delayed recall. When the text was accompanied by the
picture,
immediate recall of the depicted information was better than
when there
was no picture. Thus, there was a positive effect of
depicted information,
indicating that an effective association was being made
between the verbal
and the pictorial information.
When recall was delayed, the results were more complex. Two
groups
were tested, one of congenitally blind 13- to 15-year-olds,
and another
that had lost visual function after age two. Two-thirds of
the latter group
had some residual pattern vision, although it was not
sufficient to
discriminate the pictures. On delayed recall, the group that
lost vision
later showed the same pattern of facilitation by pictured
information as in
the immediate recall task. The pattern of the congenitally
blind children
showed that pictured information was recalled better in the
illustrated
condition, but that nonpictured information was better
recalled in the
unillustrated condition. Pring and Rusted suggested that
this was a result
of attention: the availability of a picture draws the
child's attention to the
textual information pictured, at the expense of information
not pictured.
Further, when a picture is not available, the child devotes
less attention to
information that is picturable. This is a provocative
pattern of results,
since it suggests that strategies of attention and
information encoding
differ as a function of early vision.
Summary. Several major points emerge from this body of
research. First,
it is clear that when stimuli have both tactual and
phonological properties,
as in the case of braille characters, separate processes of
tactual and
phonological encoding occur. However, these forms of
encoding can have reciprocal effects on one another, and
thus they are not completely independent.
Second, the evidence shows that the child does not operate
with
a limitless reservoir of attention, but instead allocates
attention variously
to tactual, phonological, and semantic features of letters
and words as the
demands of the task vary. Third, in the case of tactually
perceived pictorial
stimuli, there are clear effects of related verbal
information, and the
reciprocal influence also occurs.
This body of evidence has been primarily directed to
demonstrating
the operation of basic processes of information encoding and
to elucidating
the variables that affect their operation. This is certainly
a valid and
valuable pursuit. However, the literature has generally not
addressed
issues of individual differences, aside from some
interesting evidence of
strategy variations in relation to reading.
Imagery
For decades there has been interest in the nature of the
mental images
that blind adults and children have, and in how their
imagery may vary as
a function of such variables as partial vision or an early
period of visual
experience.
For example, Fernald (1913) reported a study of imagery in
two university
students, one blind from birth and the other partially
sighted.
Reportedly, the latter used visual imagery abundantly
whereas the totally
blind student never used visual imagery. Schlaegel (1953)
reported interesting
variations between children with differing amounts of vision
in the
imagery characteristics that words evoke. Test words and
phrases were
presented orally to the child, who was asked to report the
sensory experience
evoked by the "first mental image." The predominant image
reported
by visually impaired children, as by sighted children, was
visual.
The visually impaired group was divided into three
subgroups. The
predominance of visual imagery varied regularly with the
amount of
residual vision: those with the least vision reported the
fewest visual
images, and those with the most visual capability reported
the most. It
may be that children with partial vision did indeed
experience a greater
frequency of visual images, but an equally plausible
explanation is that
there was a response artifact, such that children in this
subgroup were
more inclined to report visual images.
As a procedural matter, the difficulty of studying imagery
should be
noted. Two approaches are possible, and each involves its
own assumptions.
On the one hand, the child may be asked, as in the work by
Fernald
and Schlaegel, to describe the nature of his or her images.
This procedure
is open to the question of whether habits of language use
are artifactually
biasing the outcome: are reported variations in imagery
really that, or just
differences in the use of particular words to report images?
Generally,
this approach cannot generate unequivocal results.
The second approach involves the functional aspects of
imagery: tasks
can be designed on which performance should differ in
predictable ways
depending on the images hypothesized to be involved. The
work on
mental rotation, discussed in an earlier section, serves to
illustrate this
point. For example, when Carpenter and Eisenberg (1978)
found longer
reaction times to make judgments about letters that were
rotated from
the upright, they reasonably concluded that imagery must
have been the
mediating mechanism, and specifically that cognitive
rotation of a mental
image had occurred. As sound as this reasoning may be, it is
good to
remember that images are being examined not directly, but
indirectly by
inference based on the nature of their mediation of
behavior.
Imagery in spatial tasks
The imagery work may be broadly divided into that which
involves the
use of imagery in performing spatial tasks and that which
involves imagery
in other learning tasks. We have considered much of the work
on
spatial behavior in earlier sections, and a brief mention of
the imagery
aspects of that work should suffice here. Both Knotts and
Miles (1929)
and Duncan (1934) asked their subjects to report the nature
of their
imagery in solving maze problems: in both cases, subjects
who reported
using a verbal approach performed better than those who
reported using
visual images or kinesthetic-motor images. Worchel (1951)
used tasks
involving various tactually perceived shapes and solicited
subjects' reports of the nature of their imagery. The
responses of
the congenitally
blind subjects tended to refer to the "feel" of the shapes,
whereas the
adventitiously blind subjects tended to refer to "mental
pictures." Worchel
interpreted this result as indicating that visual imagery
results from
early vision. Interestingly, the performance of the later
blind subjects was better than that of the congenitally
blind, thus suggesting that visual
imagery can effectively mediate performance.
Imagery in verbal tasks
We turn now to the issue of imagery in learning tasks, which
have involved
primarily verbal material. It is known that words that evoke
images
are easier to learn, for example in a paired-associate task,
than those
which do not. (The typical paired-associate task involves
the presentation of a list of word pairs, then testing for
the recall of one member of each pair with the other member
as a cue.) This paradigm has been used
to assess the
imagery of children with visual impairments. Kenmore (1965)
studied
third and sixth graders from schools for the blind. About
half of the
children were blind, while visual function in the remainder
ranged from
object perception to 2/200; age of visual loss was not
reported. The
speed of paired-associate learning was assessed in
conditions involving
verbally and tactually presented material of varying
familiarity.
Overall, the sixth graders learned more quickly than the
third graders,
and children with higher IQ scores performed better than
those with
lower scores. No variation in results as a function of
residual vision was
reported. More substantively, Kenmore hypothesized that
since the
school experience of visually impaired children is highly
verbally structured,
it should lead to stronger verbal imagery in older children
because
of their longer experience in the environment. The older
children were
indeed better than the younger in learning verbally
presented pairs.
Conversely, the older children were worse at learning
tactually presented
material. Kenmore suggested that this may be a result of the
relative
neglect of tactual learning strategies in schools for the
blind, which
should in turn lead to less tactual imagery. (The
inferential dangers of
imagery work are evident here: since imagery was not
measured directly,
its role as a mediating mechanism is uncertain even though
the results are
consonant with that formulation.)
Paivio and Okovita (1971) studied visual and auditory
imagery using a
paired-associate learning paradigm. The congenitally blind
children,
whose ages were 14 to 18 years, were all "above average" in
IQ. Lists of
word pairs were constructed to be high in both visual and
auditory
imagery (e.g., ocean-clock), or high in visual but low in
auditory imagery
(e.g., green-palace). Performance was significantly better
throughout for words with high auditory imagery, although
the children did learn both
the high- and low-auditory imagery lists.
In a second experiment, pair lists were created to contain
words with
high visual and low auditory (and tactual) imagery, or with
high auditory
and low visual (and tactual) imagery. Again, performance was
better with
the lists that contained words high in auditory imagery,
although the
differences decreased over the course of the experimental
session. Both
experiments clearly showed the ability of the children to
benefit from
auditory imagery, and a relative lack of ability to benefit
from visual
imagery. No variation in results was reported as a function
of age or IQ,
but this is not surprising given the limited range of these
variables in the
sample.
On the other hand, Zimler and Keenan (1983) studied younger
children,
7 to 12 years of age, who had lost sight within the first
six months of
life. Word-pair lists of four types were created: high
visual and low
auditory imagery in both (V-V), high auditory and low visual
imagery in
both (A-A), and mixed imagery (V-A, A-V). The children's
performance
did not differ as a function of list type. The lack of an
advantage
for the lists with high auditory imagery stands in contrast
to the results of
Paivio and Okovita (1971). Although the children were
younger than
those of Paivio and Okovita, it is not clear how this
variable may have
affected the results.
Again we can turn to the issue of cognitive strategies in
paired-associate
learning, with a study by Martin and Herndon (1971). The
words were not chosen for their modality-specific imagery;
rather, the
purpose of the study was to investigate the nature of verbal
strategies in
remembering word pairs. One member of each pair was a real
word (and
therefore presumably "imageable"), while the other was
either a pronounceable
nonword or a very low-frequency word (presumably less
imageable). In a control condition, children were not
instructed as to
strategy, whereas in an "aided" condition they were
instructed in the use
of associative strategies such as recognizing superordinate
relationships
between the two members of a pair. Learning performance was
significantly
better in the aided condition.
The children's reports of their strategies were classified
according to
the type and complexity of cues used (after Martin, Boersma,
& Cox,
1965). There was a significant correlation between
performance and the
level of associative strategy. This result, coupled with the
overall superiority of the aided group, suggests that
learning is better when associative
strategies are used, and that such strategies can be
effectively instructed.
The results of studies using the paired-associate learning
task suggest
a facilitatory role of auditory imagery in paired-associate
learning, although the relationship to chronological age is
uncertain. Furthermore, there are
variations in
performance with associative strategy.
Serial learning tasks have also been used to investigate the
role of
imagery in learning. In this paradigm, the subject's task is
to learn items
presented in a serial list. After each run-through of a
list, the subject is
asked to remember as many of the items as possible, either
in order or not
(free recall). Craig (1973) used this method with
adolescents whose IQ
scores were in the normal range. Age at visual loss ranged
from birth
(70%) to six years, and all were braille users. Lists of
high- and low-imagery
words were created (the imagery characteristics of the words
were not further specified). More items were recalled from
the high- than
from the low-imagery lists. In serial learning tasks there
is a general
tendency to find higher recall of items both early and late
in the list than
in the middle; this effect was found for both high- and
low-imagery lists.
Groups of hearing-impaired, visually impaired, and sighted
subjects
were tested. The subjects with both sight and hearing
performed better
than either the visually impaired or the hearing impaired
(who experienced
the lists visually rather than auditorially). Following the
reasoning
of Paivio and Okovita (1971), Craig concluded that sighted
subjects have
two codes (visual and auditory) potentially available for
mediating the
task, whereas the visually impaired and the hearing-impaired
subjects do
not perform as well because in each case one of the codes is
unavailable.
Although there was apparently some range of visual function
in the
visually impaired group as well as variation in the age of
visual loss, the
possible relationship of these variables to performance was
unfortunately
not reported.
In the research noted above, Zimler and Keenan (1983)
studied the
free recall of serial word lists that differed in their
common attributes.
Three attributes were used, "redness," "loudness," and
"roundness."
The rationale for this choice was that the visual attribute
"redness"
should facilitate recall by sighted children, the auditory
attribute "loudness"
should facilitate recall by blind children, and the
attribute "roundness,"
which is accessible both visually and tactually, should
facilitate the
two groups equally.
Four words of one category were presented seriatim, then
four of the
next, and four of the third. The blind children were indeed
better at
recalling the "loud" words, but they were also better at
recalling the
"round" words than the sighted children, and contrary to
expectations,
they were equal to the sighted in recalling the "red" words.
These results
do not correspond to expectations based on a
modality-specific coding
hypothesis.
Summary. There is no doubt, based on this work, that imagery
facilitates
verbal learning. More specific questions arise about the
specific form of
imagery and how it exerts its effect. On the one hand, some
results (e.g.,
Craig, as well as Paivio & Okovita) support the notion of
modality-specific
imagery, and specifically that visual imagery is not
facilitative of
learning by children with visual impairments, while auditory
imagery is.
However, other results (e.g., Zimler & Keenan) cast doubt on
the
modality-specific formulation. It is possible that this
varies with individual
differences characteristics such as visual experience, but
for the most
part the research has unfortunately not explored this issue.
Kenmore's
work raised an important issue in finding age-related shifts
in imagery
and in questioning whether these are experience related.
There are obviously
many unanswered questions in this area.
Developmental shifts in imagery
According to Bruner (1966), experience is encoded in a
series of stages
that proceeds developmentally from actions to images to
symbols. Enactive
representation refers to an action; ikonic representation
refers to an
image that is pictorial (and free of action); and symbolic
refers to an
arbitrary or more conceptual form of representation, as in
the case of
language labels. Hall (1981a,b) used Bruner's framework for
representation
as a starting point. Based on a review of the literature on
imagery in
relation to the performance of various tasks by children
with visual
impairments, Hall suggested that these forms of
representation may not
be tapped in the same ways as in sighted children, and
specifically that
because of their experiential structure, children with
visual impairments
may not use ikonic representation as much but may rely more
on symbolic
and enactive modes of representation.
Hall designed a series of tasks to explore the use of
representational
modes in blind children who had lost vision within the first
year. Three tasks were used: a concrete task, a verbal task
with high-imagery words,
and a verbal task with low-imagery words. Questions were
designed to
tap classification strategies, and specifically to show
whether grouping
would be done on the basis of perceptible (sensory),
functional (referring
to the function of an object), or nominal (name) attributes.
It was expected
that the children's classifications would be based primarily
on
perceptible attributes in the early years, with functional
and nominal
groupings more frequent with increasing age. In addition,
the formation
of equivalence groupings was expected to vary with the
degree of concreteness
and imagery level of the task.
In the concrete task, children tended to classify based
primarily on
perceptible attributes over the entire age range from 7 to
17. As expected,
nominal and functional strategies increased slightly over
age for both of
the imagery tasks, although the use of perceptible
strategies in these tasks
remained high. Surprisingly, perceptible attributes did not
diminish in
use with age. Based on this result, Hall (1983) suggested
that the use of
concrete tasks in the educational setting may not promote
cognitive
growth and higher-level thinking skills. The relationship is
evident between
this possibility and Kenmore's (1965) suggestion of shifts
in imagery
tendencies as a result of school experience.
Summary. We should reiterate the difficulty of studying imagery, and
particularly
the danger of relying on subjects' reports of the nature of
their imagery.
Nonetheless, some studies that obtain performance indicators
along with
self-reports (e.g., Worchel, 1951) tend to support the
validity of self-reports.
Other studies use performance indicators such as the
paired-associate
or serial learning task as a basis on which to infer the
nature of
imagery. Much of this work has been done with children who
lost vision
at birth or early in life and who have at most light perception (LP), and
consequently
information about visual experience variables is
unfortunately lacking.
However, there is a body of evidence that suggests, though
not unequivocally,
that performance varies as a function of the imagery
characteristics
of the stimulus words: visual imagery characteristics do not
facilitate blind children's performance, whereas auditory
imagery characteristics do. All in all, though, the
literature on imagery
is not very satisfying.
Excerpt: Part II, Chapter 6 of Blindness and Children: An Individual Differences Approach, by David H. Warren, Department of Psychology, University of California, Riverside. Cambridge University Press, 2009. https://doi.org/10.1017/CBO9780511582288