Which term refers to the ability to mentally rotate objects?

Mental rotation is the term for this ability: the capacity to rotate mental representations of two- and three-dimensional objects, commonly treated as a component of spatial ability.

In the actual experiment, subjects were presented, on each trial, with one of 12 (asymmetric) alphanumeric characters (F, G, J, R, e, j, k, m, 2, 5, 4, 7). The characters appeared in various tilted orientations from 0° through 180° and the subjects were required to determine whether the character in question was the normal (standard) or the backward (mirror image) version of that character as generally seen in printed form–regardless of its orientation in the picture plane. As before, reaction time constituted the dependent measure.

The principal outcomes of this experiment were (a) that reaction time was a monotonically (though nonlinearly) increasing function of angular departure of a character from the upright position; and (b) that the response “normal” was consistently faster than the response “backward.” Introspective reports of the subjects in this experiment indicate that the task was usually performed by imagining the visually presented character rotated into the familiar, upright orientation–especially on those trials in which the stimulus was tilted quite a way from upright. Some possible reasons for the nonlinear nature of the increasing function will be considered in detail later.
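The analysis implied here groups trials by angular departure from the upright position and by response type ("normal" vs. "backward") and compares mean reaction times across those cells. A minimal sketch of that bookkeeping follows; the trial records, field names, and numbers are hypothetical illustrations, not data from the experiment.

```python
# Illustrative sketch (not from the chapter): grouping reaction times by
# angular departure from upright and by response type. Trial records are made up.
from statistics import mean

trials = [
    {"orientation": 300, "response": "normal", "rt_ms": 742},
    {"orientation": 120, "response": "backward", "rt_ms": 953},
    {"orientation": 60,  "response": "normal", "rt_ms": 611},
]

def departure_from_upright(orientation_deg: float) -> float:
    """Smallest rotation (0-180 deg) that would bring the character upright."""
    o = orientation_deg % 360
    return min(o, 360 - o)

# Collect reaction times for each (departure, response) cell and report the means.
cells = {}
for t in trials:
    key = (departure_from_upright(t["orientation"]), t["response"])
    cells.setdefault(key, []).append(t["rt_ms"])

for (dep, resp), rts in sorted(cells.items()):
    print(f"{dep:5.0f} deg  {resp:8s}  mean RT = {mean(rts):.0f} ms")
```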


URL: https://www.sciencedirect.com/science/article/pii/B9780121701505500093

EEG Based BCI—Control Applications

Dipali Bansal, Rashima Mahajan, in EEG-Based Brain-Computer Interfaces, 2019

A Neuropsychology Pilot Study to Examine Mental Fatigue

Mark Ridgley of Radius Teknologies, LLC (an NI Alliance Partner) used LabVIEW to deploy a mental rotation task to study the effects of mental fatigue. A mental rotation task probes the human brain's ability to manipulate and compare two-dimensional (2D) or three-dimensional (3D) visual stimuli. The visual stimulus is modified in a controlled manner and the subject is required to identify the alteration, which requires mental rotation. The primary mental rotation task follows this procedure (a schematic sketch of the trial flow appears after the list):

A cue image is presented on the screen.

Subjects are instructed to memorize the image and form a mental image of it.

The "Next" button is pressed to reach the orientation screen. (The orientation screen indicates the direction and number of degrees needed to rotate the image to the zero-degree position.)

The image in this position is memorized and the "Next" button is pressed again. (A test image then appears on the screen.)

The subject is asked to rotate the image mentally until its axial orientation permits comparison with the standard image.

The comparison is made and the decision is reported by pressing "Yes" if the image matches, or "No" otherwise.

Brain-wave variations are recorded throughout the task using EEG, with electrodes placed according to the international 10–20 system.
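To make the step sequence concrete, the following is a minimal sketch of the trial flow in plain Python; the display and response helpers are hypothetical placeholders and are not part of the LabVIEW application used in the study.

```python
# Schematic sketch of one mental rotation trial (hypothetical helper functions;
# the actual study used a LabVIEW application, not this code).
import time

def show(screen_text: str) -> None:
    print(screen_text)                      # stand-in for drawing to a display

def wait_for_button(label: str) -> None:
    input(f"Press Enter for '{label}': ")   # stand-in for a GUI button press

def run_trial(cue_image: str, rotation_deg: int, test_image: str) -> dict:
    show(f"Cue image: {cue_image}")                      # 1. present the cue image
    wait_for_button("Next")                              # 2. subject memorizes it
    show(f"Rotate {rotation_deg} deg to reach upright")  # 3. orientation screen
    wait_for_button("Next")                              # 4. memorize the rotated position
    show(f"Test image: {test_image}")                    # 5. test image appears
    t0 = time.monotonic()
    answer = input("Match after mental rotation? (Yes/No): ")  # 6. compare and respond
    return {"response": answer.strip(), "rt_s": time.monotonic() - t0}
    # 7. EEG is recorded continuously by separate acquisition hardware (10-20 montage)
```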


URL: https://www.sciencedirect.com/science/article/pii/B9780128146873000065

Developmental Neuropsychology

A. Uecker, in Reference Module in Biomedical Sciences, 2014

Measurement of Cerebral Lateralization

Lateralization has been an important issue in developmental neuropsychological research. For example, it has been suggested that deficiencies in cognitive tasks such as reading and language are due to an abnormal or weak pattern of lateralization (see Section Learning Disabilities). The study of lateralization, though, presents many problems. For instance, the validity of methods typically used to assess, or infer, lateralization is often suspect, primarily because they involve inferring underlying brain function rather than measuring it directly.

Lateralized brain damage or dysfunction is determined by comparing performance of the right and left hemispheres. Because the two hemispheres are contralaterally organized, the right side of the body is primarily controlled by the left hemisphere and the left side of the body is primarily regulated by the right hemisphere. While the somatosensory and motor systems are almost completely crossed, other pathways, such as the auditory system, also send impulses from the same side of the body to the same hemisphere (e.g., right ear to right hemisphere). The visual system is more complex than the other systems because the visual fields (not the eyes) are mapped contralaterally onto the hemispheres. Thus, the left visual field projects to the right visual cortex, and the right visual field projects to the left visual cortex.

Research measures typically used to assess for lateralization include dichotic listening tasks, manual preference, dichhaptic stimulation, tachistoscopic procedures, and mental rotation. A problem with these procedures is that they are highly inferential. During dichotic listening, two sets of information are presented simultaneously, one to each ear, arranged in such a way that one stimulus set arrives at the left ear at the same time a different stimulus set arrives at the right ear. Individuals are typically more accurate for the right ear than for the left, and clinical patients with known language lateralization are better on the ear contralateral (or opposite) to their language-dominant hemisphere. This right-ear advantage is thought to reflect the left-hemisphere representation for language. Manual preference, or handedness, is not an adequate basis for identifying speech lateralization because it leads too frequently to misclassifications. In fact, although the majority of the population is left-hemispheric for both language production and language perception, there are many exceptions, especially among those who are left-handed. During dichhaptic presentation, individuals are given two different shapes to palpate simultaneously, one with each hand. Because the ascending somatosensory systems are crossed, information from the right hand is transmitted first to the left hemisphere, while the reverse is true for left-hand information. Dichhaptic presentation of objects therefore gives a better sense of balance between the two hemispheres than dichotic listening does, since the auditory system is incompletely crossed. In tachistoscopic procedures, verbal or spatial stimuli are presented briefly, usually for less than 180 ms, to the right and/or left visual field. Stimuli perceived in the left visual half-field are processed in the right cerebral hemisphere, whereas stimuli perceived in the right visual half-field are processed in the left cerebral hemisphere. Mental rotation is a nonverbal spatial cognitive paradigm that requires the subject to mentally rotate the visual image of an object from one position to another. Mental rotation tasks are thought to be mediated by the right hemisphere. Additionally, because mental rotation, like physical rotation, takes time (the greater the angular distance, the longer the time required), mental rotation tasks are thought to reflect an analog process (Hynd and Obrzut, 1981; Obrzut and Hynd, 1986).
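Performance on dichotic or dichhaptic tasks is conventionally summarized with a laterality index contrasting right- and left-side accuracy. The sketch below shows the common (R - L)/(R + L) form of such an index; it is offered as an illustration of the convention, not as a procedure taken from this chapter.

```python
# Conventional laterality index for a dichotic listening score (illustrative).
# Positive values indicate a right-ear (left-hemisphere) advantage.
def laterality_index(right_correct: int, left_correct: int) -> float:
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct responses on either ear")
    return (right_correct - left_correct) / total

print(laterality_index(right_correct=34, left_correct=26))  # 0.13... -> right-ear advantage
```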

Whereas it has been more traditional to focus upon ‘left’ versus ‘right,’ or horizontal corticocentric differences in cognition, a more recent emphasis has been upon recognition and analysis of cortical–subcortical, or the vertical dimension within the brain (Koziol and Budding, 2009).


URL: https://www.sciencedirect.com/science/article/pii/B9780128012383002609

Manipulation of Visual Information

LYNN A. COOPER, in Human Performance Models for Computer-Aided Engineering, 1990

TRANSFORMATIONS ON INFORMATION PRESENTED IN A STATIC VISUAL DISPLAY

One of the more robust findings in the literature in cognitive psychology concerns the relationship between performance (measured in time and accuracy) in judging some aspect of a disoriented visual display of an object and the extent of displacement of the object from a canonical or a previously learned position. The amount of time required to determine, for example, whether an object is “standard” or “reflected” in parity increases linearly with the magnitude of the angular difference between the object's displayed orientation and a familiar position. This basic linear relationship between processing time and angular difference holds whether visual stimuli are presented simultaneously (Shepard and Metzler, 1971) or successively (Cooper, 1975), the latter requiring comparison of an object with a stored memorial representation; whether the objects transformed are portrayed as two- or three-dimensional; whether the rotational transformation itself is in the picture plane or in depth; and, to some extent, regardless of the visual complexity of the objects (Cooper and Podgorny, 1976). Shepard and Cooper (1982) provide a relatively comprehensive, though slightly dated, review of this literature.

This basic finding suggests that the computational cost of mentally transforming a disoriented object can be expressed simply by the linear reaction time function. Although the stimulus parameters discussed above do not, in general, affect the shape of the performance function, they do have discernible effects on both the slope of the function (inferred to measure the rate at which correctional transformations can be carried out) and the intercept (a measure of the time to encode the visual display). Mode of presentation can affect both the slope and the intercept; stimulus complexity and the presence of landmark features can affect the rate of transformation (Hochberg and Gellman, 1977); and stimulus and transformational dimensionality have questionable effects on both slope and intercept. Estimated rates of mental rotation reported by various investigators for a host of stimulus and presentation conditions range from approximately 60 degrees per second (for perspective drawings of three-dimensional objects and three-dimensional transformations) to over 500 degrees per second (for highly practiced subjects transforming simple two-dimensional stimuli).
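The slope and intercept described here can be recovered from a simple least-squares fit of reaction time against angular difference, and the reciprocal of the slope gives the inferred rotation rate. The sketch below uses made-up numbers purely to illustrate the arithmetic.

```python
# Least-squares fit of RT = intercept + slope * angle, and conversion of the
# slope into an inferred mental rotation rate. All numbers are made up.
angles_deg = [0, 60, 120, 180]   # angular departure from the familiar position
rt_s = [0.55, 0.95, 1.35, 1.76]  # reaction times in seconds (illustrative)

n = len(angles_deg)
mean_x = sum(angles_deg) / n
mean_y = sum(rt_s) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(angles_deg, rt_s))
sxx = sum((x - mean_x) ** 2 for x in angles_deg)
slope = sxy / sxx                      # seconds per degree of misorientation
intercept = mean_y - slope * mean_x    # estimate of encoding + response time
rate_deg_per_s = 1.0 / slope           # inferred rotation rate

print(f"intercept = {intercept:.2f} s, slope = {slope * 1000:.1f} ms/deg, "
      f"rate = {rate_deg_per_s:.0f} deg/s")
```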

The theoretical framework that has been proposed to account for these data generally takes the linearity of the relation between time and angular displacement as evidence for an internal analog, or simulation, of the process of physical rotation. That the mental transformation is analog in the specific sense of passing through intermediate positions in a transformational trajectory, corresponding to intermediate stages in the physical rotation of an object, has been demonstrated experimentally (Cooper, 1976). The basic finding of Cooper's experiment was that the time to respond to a disoriented object is essentially constant if the object is presented in an expected position, in the sense of being congruent with the currently assumed position of an internal representation of the object that the subject imagined rotating at a particular rate in a particular direction.

Simple linear relations between time for correctional processing and spatial extent have also been reported for transformations other than rotation. Bundesen and Larsen (1975), Bundesen, Larsen, and Farrell (1981), and Sekuler and Nash (1972) have all demonstrated linear relations between the time required to compare two objects of different size and the ratio of their sizes (but see Kubovy and Podgorny, 1981), and combinations of size and rotational transformations contribute additively to comparison times under some circumstances (Bundesen et al., 1981). Kosslyn (1973; Kosslyn, Ball, and Reiser, 1978) has shown a linear relation between the time required to “mentally scan” from one location to another in an array of objects and the metric distance between the objects in the scan path. Further evidence for the analog nature of translational mental operations is provided by Shulman, Remington, and McLean (1979) in a task requiring the shifting of attention from one location to another.

A host of additional questions that could bear on pilot performance issues can be asked about the nature and time course of correctional mental operations on disoriented or misaligned visual displays. Two that are presently unresolved in the literature but that have produced some empirical evidence concern (1) whether transformations take time in proportion to proximal or to distal variables and (2) whether transformations of abstract frames of reference can be carried out. With respect to the relative importance of proximal and distal distance, the original mental rotation experiments (Shepard and Cooper, 1982; Shepard and Metzler, 1971) suggest strongly that the relevant distance between two positions over which reaction time increases linearly is the distance between the positions of the two objects in three-dimensional space, rather than the distance between the two objects as projected on the retina (when the two sorts of measured distances are different). Corballis and his associates (Corballis and Roldan, 1975; Corballis, Zbrodoff, and Roldan, 1976) have asked whether mental rotation of a disoriented object occurs to the retinal or the gravitational upright, when the two are different by virtue of head tilt. For visual patterns of familiar objects with an overlearned canonical position in the world, rotation appears to be to gravitational upright, but with unfamiliar complex dot patterns, rotation is carried out to achieve congruence with the retinally defined vertical. Other investigations of the operation of proximally defined versus distally defined distance (in the context of a mental scanning task) indicate that instructions can effectively alter the character of the scan path: when a subject is instructed to imagine scanning between two objects located in three-dimensional space, time increases with distal distance; however, when a subject is instructed to scan from the visual direction of one object to the visual direction of another, time increases linearly with distance in the two-dimensional projection (Pinker, 1980; Pinker and Finke, 1980; Pinker and Kosslyn, 1978).

With respect to the question of whether transformations can be carried out on an abstract frame of reference as opposed to a representation of a particular visual object, experiments by Cooper and Shepard (1973) suggest that such an overall transformation of a coordinate system cannot be done effectively to prepare for the presentation of a disoriented test object. Providing time and the proper information to enable the transformation to be done in advance lowers subsequent reaction time, but the decrease does not change with the magnitude of the angular displacement of the prepared-for position. Subsequent experiments by Jolicoeur (1983) indicate that frames of reference can be transformed in advance when the type of stimulus and type of orientation are known, and the transformation involves assuming the next in a series of well-defined spatial positions. Note that manipulation of a frame of reference could be an important component of performance in reorienting after “pop-up”; thus, it is important to have a more definitive evaluation of this issue at the basic research level.

In addition to the basic analog model of rotation and related spatial transformations proposed by Shepard, Cooper, and their collaborators, other sorts of models have been offered to account for the data from transformation experiments; these assume a discrete representation of a visual object and incremental transformations applied to subparts of the representation (e.g., Anderson, 1978; Just and Carpenter, 1976). The most detailed of these alternative models has been presented by Just and Carpenter (1976) and Carpenter and Just (1978) and is based on an analysis of patterns of eye fixations made during performance of a mental rotation task, similar to that studied by Shepard and Metzler (1971), in which two visual displays differing in orientation are compared with respect to shape. The process model that these investigators propose postulates three successive stages in carrying out transformations on objects presented spatially. In the first “search” stage, sections of the figures that are in potential correspondence are located. In the second “transformation and comparison” stage, segments that are taken to correspond in the two figures are mentally rotated, and a sequence of comparisons is made to determine when the orientations of the segments correspond. The transformations and comparisons are incremental, occurring about every 50 degrees of rotation. In the final “confirmation” stage, a determination is made of whether the other segments of the figure correspond as a result of the transformation. Thus, although this model departs substantially from the analog account, it does fulfill the criterion of an analog process outlined by Cooper (1976) and Cooper and Shepard (1973); the succession of intermediate positions, however, is assumed by a representation of portions of a visual figure rather than by an integrated representation of the whole. More recently, Just and Carpenter (1985) presented a detailed account of performance on a cube comparison task that requires transformations on visual objects. The model is designed to describe differences in performance between individuals of high and low measured spatial aptitude, and it is embodied in a running simulation. The central difference between the two aptitude groups resides in the coordinate system adopted for representing and transforming spatial objects. Note that because this model is designed specifically to account for group differences in terms of strategy differences, its usefulness in predicting performance, given a particular stimulus as input, is minimal.
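The “transformation and comparison” stage can be pictured as a loop that advances a candidate segment in roughly 50-degree increments and tests for an orientation match after each step. The sketch below is only a schematic rendering of that idea, with invented parameter values; it is not the authors' simulation.

```python
# Schematic of an incremental transform-and-compare stage (roughly 50-degree steps),
# loosely inspired by Just and Carpenter's account; not their actual model code.
STEP_DEG = 50

def incremental_match(segment_orientation: float, target_orientation: float,
                      tolerance: float = 25.0) -> int:
    """Rotate in fixed increments, comparing after each step; return steps taken."""
    steps = 0
    current = segment_orientation
    # Wrapped angular difference in (-180, 180]; stop once within tolerance.
    while abs((target_orientation - current + 180) % 360 - 180) > tolerance:
        current = (current + STEP_DEG) % 360
        steps += 1
        if steps > 360 // STEP_DEG + 1:   # safety stop after a full sweep
            break
    return steps

print(incremental_match(segment_orientation=0, target_orientation=140))  # 3 increments
```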

A final example of a model that might be applied to transformations on visual information has recently been proposed by Kosslyn (1987). This qualitative model is a very general account of perceiving and imagining which assumes that different (neural) subsystems encode relations among parts of an object in a categorical fashion (i.e., top-bottom, right-left relations) and in terms of their actual coordinates (metric relations). Presumably, both subsystems are involved in the realignment of disoriented objects, with the categorical relations subsystem enabling comparisons of current relations with stored ones and the coordinate encoding subsystem enabling a precise computation of the position of all parts of an object in space.
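The contrast between categorical and coordinate (metric) relations can be made concrete with a small data structure that stores both encodings for a single object part. The sketch below is purely illustrative of the distinction Kosslyn draws; the class and field names are inventions, not part of his model.

```python
# Illustrative encoding of one object part under Kosslyn's (1987) distinction:
# a categorical relation ("above", "left-of") alongside metric coordinates.
from dataclasses import dataclass

@dataclass
class PartRelation:
    part: str
    reference_part: str
    categorical: str                 # e.g., "above", "below", "right-of"
    offset_xy: tuple                 # metric (coordinate) relation in object units

handle = PartRelation(part="handle", reference_part="mug body",
                      categorical="right-of", offset_xy=(4.2, 0.5))

# A categorical comparison ignores the exact coordinates ...
print(handle.categorical == "right-of")   # True
# ... whereas realigning a disoriented object would use the metric offsets.
print(handle.offset_xy)
```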


URL: https://www.sciencedirect.com/science/article/pii/B9780122365300500157

THE MIND'S EYE IN CHESS

William G. Chase, Herbert A. Simon, in Visual Information Processing, 1973

Forward Search in Chess

When the Master is staring at a chess board trying to choose his next move, he is engaged in a forward search through some kind of problem space. The problem space has generally been characterized as a branching tree where the initial node is the current board position, the branches represent moves, and the next nodes off these branches represent the new board positions reached by those moves (Newell & Simon, 1972, p. 665). But the Master's problem space is certainly more complicated than this, because he doesn't have the board position organized in short-term memory as a single unitary structure. As we have shown, the board is organized into smaller units representing more local clusters of pieces. Since some of these patterns have plausible moves associated with them in long-term memory, the Master will start his search by taking one of these moves and analyzing its consequences.

Since some of the recognizable patterns will be relevant, and some irrelevant, to his analysis, we hypothesize that he constructs a more concrete internal representation of the relevant patterns in the mind's eye, and then modifies these patterns to reflect the consequences of making the evoked move. The information processing operations needed to perform this perturbation, whatever they are, are akin to the mental rotation processes studied by Shepard (cf. Cooper & Shepard's chapter in this volume) and the mental processes for solving cube-painting and cube-cutting puzzles studied by Baylor (1971). When the move is made in the mind's eye–that is, when the internal representation of the position is updated–the result is then passed back through the pattern perception system and new patterns are perceived. These patterns in turn will suggest new moves, and the search continues.

External memory (Sperling, 1960), eye movements, and peripheral vision are also important for the search. When the player executes a move in the mind's eye, he generally looks at the location on the actual, external board where the piece would be, imagines the piece at that location, and somehow forms a composite image of the generated piece together with pieces on the board. Peripheral vision is important because the fovea can resolve only a very few squares (perhaps 4), so that verification of the location of the pieces within the image requires detection of cues in the periphery. Thus, forward search involves coordinating information available externally on the visible chess board with updating information held in the mind's eye. (For eye-movement studies of the coordination of external with internally stored information in a different problem-solving task, see Winikoff, 1967.)

If the Master wants to reconstruct his path of moves through the problem space, all he needs to store in short-term memory are the internal names of the relevant quiet patterns along the path, since the rest of the information can be retrieved, as we have seen, from long-term memory. This provides a tremendous saving of space in short-term memory for other operations, and time for the subsequent progressive deepening that is so often seen in the protocols.

We thus conceive of search through the problem space as involving an iteration of the pattern system's processes, and repeated updating of information in the mind's eye. Only the barest outline of this complex process is explicit in the verbal protocols. Given the known time constants for the mind's eye and for long-term memory retrieval (cf. Cavanagh, 1972; Cooper & Shepard's chapter; Sternberg, 1969), each iteration takes perhaps half a second.


URL: https://www.sciencedirect.com/science/article/pii/B9780121701505500111

Psychological resilience

Gro M. Sandal, ... Jamie D. Barrett, in Space Safety and Human Performance, 2018

6.2.3 Cognitive and Psychomotor Performance

The two individual state factors considered thus far have involved aspects of human resilience toward the impact of living under conditions of microgravity, an altered dark-light cycle, and confinement and isolation. However, space has become not only a living environment for human astronauts but also a place to work. While in space, astronauts have to perform many different tasks. These tasks usually include technical maintenance and housekeeping tasks needed to keep the space station running, as well as many different experiments from various areas of science. This places high demands on astronauts' psychological functions (e.g., memory, attention, decision making, visuomotor coordination). Thus, maintaining a proper functional performance state of astronauts throughout their stay in space is of paramount importance to ensure mission success.

Two different lines of research have been pursued to assess possible risks arising from both space-specific and space-relevant stressors on human performance (cf. Kanas and Manzey, 2003). The first includes specific experiments investigating possible effects of microgravity on certain cognitive and/or psychomotor performance functions, including spatial orientation (e.g., Glasauer and Mittelstaedt, 1998; Kornilova, 1997), processing of spatial information (e.g., Friederici and Levelt, 1990; Villard et al., 2005), object recognition (e.g., Leone, 1998), and fine motor skills (e.g., Bock et al., 2001a,b, 2010). Most of these experiments were conducted during parabolic flights, short-term space missions, or the first days or weeks of long-term space missions. The second line of research includes what has been referred to as performance-monitoring studies and has focused more generally on possible changes in a broad range of cognitive and psychomotor functions of astronauts and cosmonauts during their stay in space (Manzey, 2000). Comprehensive reviews of the main results of these two lines of research have been provided by Casler and Cook (1999), Kanas and Manzey (2008), and Strangman et al. (2014).

Overall, the results suggest that, after primary adaptation to the space environment, at least basic cognitive functions like memory retrieval, logical reasoning, or spatial information processing (e.g., mental rotation) are remarkably resilient toward the impact of different sorts of stressors in space. This confirms similar results of recent empirical studies in analog environments, which also failed to find impairments of cognitive functions, or even reported improvements in cognitive performance associated with prolonged confinement (e.g., Griofa et al., 2011; Paul et al., 2010), challenging subjective and anecdotal reports that had suggested that memory and attention/concentration might decline under conditions of isolation and confinement (e.g., see for reviews Palinkas and Suedfeld, 2008; Strangman et al., 2014). However, other performance functions do seem to suffer, at least during the primary phase of adaptation to the spaceflight environment and the change in gravitational force. Prominent examples include distortions of spatial orientation, reflected in different (visual) illusions and in difficulties identifying one's own orientation within the three-dimensional space of a space station. These effects can be attributed directly to the microgravity-induced changes of vestibular signals in space. In a survey study of 104 Russian cosmonauts, 98% reported having experienced, at least once, movement illusions or a state of partial or complete disorientation (coordinative illusions), particularly when the eyes were closed (Kornilova, 1997). Typical movement illusions include erroneous perceptions of self-motion (e.g., feelings of rotating, tumbling, or falling) in response to head movements. One of the most consistently reported coordinative illusions is the so-called inversion illusion, i.e., a spontaneous feeling of hanging upside down (Kornilova, 1997). However, most of these effects seem to represent acute responses to the changed gravitational force and do not last much longer than a couple of hours or days after encountering microgravity. Remarkably, astronauts usually do not report losing the perception of up and down entirely when the external reference of verticality disappears in the microgravity environment. Instead, they usually keep a more or less strong concept of subjective verticality relative to their own body axis (“up is there where the head is”; Glasauer and Mittelstaedt, 1998).

Another performance aspect that has been shown to become impaired during space missions is fine-motor control. This is suggested by converging evidence from neuroscience studies and performance-monitoring studies, which have investigated the effectiveness of astronauts in performing different classes of visuomotor tasks, including discrete aimed arm movements (e.g., pointing at a given target or grasping a given object) and continuous tracking movements (e.g., pursuing a moving target either directly or indirectly via a cursor controlled by a joystick). However, the specific sorts of impairments differ between these two classes of movements. Aimed arm movements can usually be executed with the same accuracy as on Earth, but only at the expense of slower movement times (Berger et al., 1997; Bock et al., 2001a,b). The reverse holds true for tracking movements, where the speed of movement cannot be adapted but is set by an external target. For these movements, impairments are usually found not in speed but in accuracy (Manzey et al., 2000). This pattern of results suggests that astronauts become less able in space than on Earth to optimize accuracy and speed of voluntary aimed movements concurrently (Bock et al., 2001a,b). It is not yet clear whether these effects occur only during the primary adaptation to the microgravity environment or represent longer-lasting effects while in space. One mechanism that has been proposed to cause these impairments is an underestimation of the mass of the extremities, owing to the breakdown in space of the familiar relationship between mass and weight (Bock et al., 1996; Heuer et al., 2003). This would suggest that these performance problems should occur only very early during a space mission and should fade out quickly with adaptation to microgravity. However, several results suggest that, although astronauts can compensate sufficiently for the observed impairments of voluntary movements after some days in space, visuomotor performance remains vulnerable to stress effects for at least the first 4 weeks in space, or even longer (Bock et al., 2010; Manzey et al., 1998).

Somewhat more equivocal results have been reported for assessments of executive functions and higher cognitive processes. Only a few studies have addressed such functions thus far, with a mixed pattern of results. One set of studies has looked at dual-task performance in space as an indicator of the effectiveness of executive functions involved in attentional control. While some of these studies reported a performance decrement compared to baseline performance on Earth (e.g., Manzey et al., 1995, 1998), others did not find comparable performance decrements (Fowler et al., 2000) or attributed them to other factors (e.g., higher effort for motor programming; Bock et al., 2010). Other studies have used interference tasks (e.g., the Stroop task) to probe this sort of cognitive function but also found somewhat contradictory results (Benke et al., 1993; Pattyn et al., 2005). Based on this limited evidence, no decisive conclusions about effects of the space environment on executive functions and higher cognitive functions can be drawn at this time (Strangman et al., 2014).

A general limitation of human performance research in space is that only very few studies have actually involved long-term space missions lasting longer than 3 months, and that just one single-case study is available thus far that has addressed the performance of a cosmonaut during a space mission approaching in duration a future mission to Mars (i.e., 14 months; Manzey et al., 1998). The results of this single-case study suggest that at least this individual cosmonaut was able to maintain his cognitive and psychomotor performance at a comparatively high level, even across such a long time of living and working in a space habitat. However, recent results from the Mars 500 study point to a considerable degree of individual differences with respect to behavioral adaptation to long-term confinement and isolation (Basner et al., 2014), and certainly more research is needed before the possible risks of human performance decrements associated with future exploratory space flights can eventually be assessed.


URL: https://www.sciencedirect.com/science/article/pii/B9780081018699000066

The Neuropsychology of Mental Imagery

MARTHA J. FARAH, in Functional Organisation of the Human Visual Cortex, 1993

The Relation Between Mental Imagery and Perception

The central issue in the neuropsychology of mental imagery, and the issue most relevant to the topic of this volume, is the relation between imagery and perception. This issue has a long history of controversy within cognitive psychology. An intuitively appealing hypothesis is that imagery consists of top-down, or efferent, activation of perceptual representations. To support this hypothesis, cognitive psychologists such as Kosslyn (1980), Shepard (1978) and Finke (1989) have devised a variety of ingenious experimental paradigms in which imagery and perception can be compared. The results of these experiments indicate that imagery and perception have many similarities, in terms of the behavioral responses of normal subjects, suggesting that the same underlying representations are being used in the two cases.

However, not all cognitive psychologists have found these demonstrations persuasive, and some have maintained that imagery involves more abstract, non-visual, language-like representations. Data that seem to support the visual-perceptual nature of visual mental images can also be explained in terms of non-visual representations. For example, Anderson (1978) has argued that no behavioral data (i.e. sets of stimulus inputs paired with subjects' responses to those stimuli and the latencies of the responses) can ever distinguish alternative, non-visual, theories of imagery from the visual-perceptual theories. Pylyshyn (1981) has suggested that the behavioral data that appears to show that imagery is visual might result from subjects simulating the use of visual representations using non-visual representations.

However plausible one finds the alternative, non-visual, theories of imagery (and different psychologists appear to differ greatly in their subjective judgements of plausibility in this domain), it would be desirable to obtain more decisive evidence. Neuropsychological evidence has the potential to be more decisive, in that it provides direct evidence on the internal processing stages intervening between stimulus and response in imagery experiments.

A number of studies have been carried out, using behavioral measures in brain-damaged subjects and psychophysiological measures in normal subjects in order to obtain more decisive evidence on the issue of the relation between imagery and perception. These studies will be reviewed very briefly here; a more detailed review of some of this material can be found in Farah (1988) and (1989).

Studies of brain-damaged patients

If mental imagery involves activating cortical visual representations, then patients with selective impairments of visual perception should manifest corresponding impairments in mental imagery. This is often the case. For example, DeRenzi and Spinnler (1967) investigated various color-related abilities in a large group of unilaterally brain-damaged patients and found an association between impairment on color vision tasks, such as the Ishihara test of color blindness, and on color imagery tasks, such as verbally reporting the colors of common objects from memory.

In another early study documenting the relations between imagery and perception, Bisiach and Luzzatti (1978) found that patients with hemispatial neglect for visual stimuli also neglected the contralesional sides of their mental images. Their two right parietal-damaged patients were asked to imagine a well-known square in Milan and to describe the scene from a particular vantage point. The patients tended to omit more landmarks on the left side of the scene than the right. When they were then asked to imagine the square from the opposite vantage point, they reported many of the landmarks previously omitted (because these were now on the right side of the image) and omitted some of those previously reported.

Levine, Warach and Farah (1985) studied the imagery abilities of a pair of patients, one with visual disorientation following bilateral parieto-occipital damage, and one with visual agnosia following bilateral inferior temporal damage. We found that the preserved and impaired aspects of visual imagery paralleled the patients' visual abilities. The first patient could neither localize visual stimuli in space nor accurately describe the locations of familiar objects or landmarks from memory. He was good at both perceiving object identity from appearance and describing object appearance from memory. The second patient was impaired at perceiving object identity from appearance and describing object appearance from memory, but was good at localizing visual stimuli and at describing their locations from memory.

Farah, Hammond, Levine and Calvanio (1988) carried out more detailed testing on the second patient. We adapted a large set of experimental paradigms from the cognitive psychology literature that had been used originally to demonstrate either the visual (i.e. pattern and color) nature of imagery or the spatial (i.e. 3D layout) nature of imagery, and administered these tasks to the patient and to age- and education-matched normal subjects. The visual tasks included imagining animals and reporting whether they had long or short tails, imagining common objects and reporting their colors, and imagining triads of states within the USA and reporting which two are most similar in outline shape. The spatial tasks included such mental image transformations as mental rotation, scanning and size scaling, and imagining triads of shapes and reporting which two are closest to one another. As predicted by the hypothesis that imagery involves the activation of perceptual representations in the visual system, the patient was impaired at the visual-pattern-color imagery tasks, but entirely normal at the spatial imagery tasks.

Farah, Hammond, Mehta and Ratcliff (1989) found a dissociation within this patient's knowledge of visual pattern information, and documented that the parallel dissociation held for his mental imagery. He appeared to be impaired at recognizing animals by sight, despite roughly intact recognition of most non-living objects. As scaled against age- and education-matched normal subjects, his performance on imagery tasks involving animals was selectively impaired. This result is consistent with the hypothesis that imagery and visual perception share long-term memory stores for the appearances of objects.

In our most recent study, we examined the role of the occipital lobe in mental imagery. If mental imagery consists of activating relatively early representations in the visual system, at the level of the occipital lobe, then it should be impossible to form images in regions of the visual field that are blind due to occipital lobe destruction. This predicts that patients with homonymous hemianopia should have a smaller maximum image size. Unfortunately, it is difficult to test this prediction for numerous reasons: Estimates of the size of people's mental images vary from individual to individual, making it difficult to know if the small image size estimated for one, or even a few, patients is abnormal. In addition, the procedures used to estimate maximum image size require a high degree of concentration and abstract thought, not often available in the stroke patient population.

We (Farah, Soso and Dasheiff, in press) were fortunate to encounter a very high-functioning, educated young woman who was a candidate for unilateral occipital lobe resection for treatment of epilepsy. We were able to estimate the visual angle of her mental images before and after surgery, thus using her as her own control. We found that the size of her biggest image was reduced after surgery. Furthermore, by measuring maximal image size in the vertical and horizontal dimensions separately, we found that only the horizontal dimension of her imagery field was reduced. These results paralleled the change in size of her visual field, and provide strong evidence for the use of occipital visual representations during imagery.
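The image-size estimates in this study are expressed as visual angle. The relation between object size, viewing distance, and visual angle used in such estimates is the standard textbook formula sketched below; the specific numbers are illustrative and are not taken from the experiment.

```python
# Standard visual-angle calculation (illustrative; not the study's protocol):
# angle = 2 * atan(size / (2 * distance)), converted to degrees.
import math

def visual_angle_deg(object_size_cm: float, viewing_distance_cm: float) -> float:
    return math.degrees(2 * math.atan(object_size_cm / (2 * viewing_distance_cm)))

# The same object imagined as if viewed from farther away subtends a smaller angle,
# e.g., a 30 cm object at 57 cm versus at 120 cm:
print(f"{visual_angle_deg(30, 57):.1f} deg")    # ~29.5 deg
print(f"{visual_angle_deg(30, 120):.1f} deg")   # ~14.3 deg
```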

What is an example of mental rotation?

Mental rotation allows us to look at an object and mentally flip or rotate it, for example, when reading a word that has been written backwards.

What part of the brain mentally rotates?

The parietal cortex has been identified most consistently in all brain imaging studies as being the core region involved in mental rotation. Some studies report activations centered more on the superior parietal lobe (SPL), while others emphasise the role of the intraparietal sulcus (IPS) as the core region.

What does the mental rotation test measure?

Following classic laboratory mental rotation tasks (Shepard & Metzler, 1971), the MRT is usually assumed to measure the ability to mentally manipulate images. This raises questions about the fundamental differences between individuals who perform at different levels on this test.