Thursday, January 29, 2009

Automatic vs. controlled processes - Relevant to the Visual-Impedance Hypothesis?

Sparked by my browsing of Cohen, Dunbar, and McClelland's (1990) "On the Control of Automatic Processes: A Parallel Distributed Processing Account of the Stroop Effect", I'm wondering whether a cognitive task analysis at the level of automatic vs. controlled processing is relevant to understanding the computational mechanisms putatively involved in the visual-impedance effect.
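For context, the core of the Cohen et al. account is that automaticity is not all-or-none but a matter of graded pathway strength, modulated by attention. Here is a minimal toy sketch of that idea; the weights and attenuation values are invented for illustration and are not their published network or parameters.

```python
# Toy sketch of the Cohen, Dunbar, & McClelland (1990) idea that
# automaticity is graded pathway strength modulated by attention.
# All numbers here are made up for illustration.

W_WORD = 2.0   # strong, practiced ("automatic") word-reading pathway
W_COLOR = 1.0  # weaker ("controlled") color-naming pathway

def evidence(task, congruent):
    """Net evidence for the correct response on one Stroop trial."""
    attn_word = 1.0 if task == "read" else 0.2    # attention attenuates
    attn_color = 1.0 if task == "color" else 0.2  # but never fully gates
    correct = W_COLOR * attn_color if task == "color" else W_WORD * attn_word
    # The unattended pathway helps on congruent trials and competes on
    # incongruent ones.
    other = W_WORD * attn_word if task == "color" else W_COLOR * attn_color
    return correct + (other if congruent else -other)

for task in ("read", "color"):
    for congruent in (True, False):
        print(task, "congruent" if congruent else "incongruent",
              round(evidence(task, congruent), 2))
```

Lower net evidence corresponds to slower responses, so the weaker color-naming pathway suffers far more interference from the word pathway than the reverse, which is the classic Stroop asymmetry.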

It seems that Knauff and his colleagues argue that visual images can impede reasoning because the generation of visual images in response to highly visualizable premise terms is automatic, and therefore precedes controlled processing of the premises into reasoning-specific representations (e.g., mental models or spatial imagery). If I understand correctly, then, visual images may be generated automatically by the cognitive system in response to the premises, and these images may be inappropriate for deductive reasoning computations (or perhaps cannot be used in them at all). Inefficient systems (which we hope to index with working memory capacity) may erroneously attempt to operate on these representations before "realizing" that spatial representations are needed to make the inference.
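To make this race intuition concrete, here is a minimal simulation sketch of the hypothesized timing: an automatically generated visual image finishes before the controlled construction of a spatial representation, and an inefficient system wastes time operating on it before switching. Every parameter (the finish-time distributions, the switch cost) is an illustrative assumption, not an estimate from Knauff's or anyone else's data.

```python
import random

# Minimal race-model sketch of the hypothesis above. All parameters
# are illustrative assumptions.

def premise_processing_time(visualizable, efficient, rng):
    """Return a simulated premise comprehension time (arbitrary units)."""
    controlled_time = rng.gauss(300, 30)  # building the spatial model
    if not visualizable:
        return controlled_time
    automatic_time = rng.gauss(120, 20)   # image generation wins the race
    if efficient:
        # High-span system: quickly detects the image is unusable (or
        # suppresses it), paying only a small monitoring cost.
        return controlled_time + 20
    # Low-span system: operates on the visual image first, then
    # "realizes" a spatial representation is needed and starts over.
    wasted = rng.gauss(150, 40)
    return automatic_time + wasted + controlled_time

rng = random.Random(0)
for efficient in (True, False):
    for visualizable in (True, False):
        mean = sum(
            premise_processing_time(visualizable, efficient, rng)
            for _ in range(1000)
        ) / 1000
        print(f"efficient={efficient!s:5} visualizable={visualizable!s:5} "
              f"mean RT ~ {mean:.0f}")
```

On these made-up parameters, the visual-impedance cost shows up almost entirely in the inefficient system, which is the qualitative pattern the hypothesis predicts.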

Alternatively, the visual images may not be generated automatically but may instead be a controlled response to verbal input; on this account, we might see disparities between high- and low-efficiency systems (in our study, high spans vs. low spans). The best data source, however, would probably be time-locked neuroimages of participants at each stage of the reasoning process. We might hypothesize that high spans would show little or no activity in the occipital lobes or the "what" visual pathway (or at least less activation than low spans) throughout the reasoning process, and, more interestingly, at the comprehension stage in particular. In other words, if visualization is a controlled process, an efficient cognitive system might learn that visual images are unsuitable for deductive reasoning and thereafter refrain from generating them in response to the verbal input from the premises.

In terms of the dependent measures we are using, we might expect little or no priming in the high spans on the categorical decision task, because they might either suppress visual images generated in response to verbal input (if visualization is automatic) or refrain from generating visual images at all (if visualization is controlled); in either case, no priming effect would appear in their responses to the visual representations of the target words.
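A hypothetical sketch of the predicted pattern, with placeholder numbers rather than data; the small residual effect for high spans under the automatic account is my own added assumption, meant only to show where the two accounts could come apart if suppression is imperfect.

```python
# Hypothetical predicted priming effects (ms) in the categorical
# decision task. All numbers are placeholders, not data.

def predicted_priming(span_group, visualization_account):
    """span_group: 'high' or 'low'; account: 'automatic' or 'controlled'."""
    if span_group == "low":
        # Image generated and used either way -> robust facilitation.
        return 40
    if visualization_account == "automatic":
        # Image generated but suppressed -> perhaps a small residue.
        return 5
    # Controlled account: image never generated -> no priming at all.
    return 0

for group in ("high", "low"):
    for account in ("automatic", "controlled"):
        print(f"{group} spans, {account} visualization: "
              f"{predicted_priming(group, account)} ms priming")
```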
