Mental chronometry is a classical paradigm in cognitive psychology that uses response time and accuracy data from perceptual-motor tasks to elucidate the architecture and mechanisms of the cognitive processes underlying human decisions. The redundant signals paradigm investigates response behavior in experimental tasks in which an integration of signals is required for successful performance. The common finding is that responses are speeded in the redundant signals condition compared to the single signal conditions. At the level of mean response times, this redundant signals effect can be accounted for by several cognitive architectures, which exhibit considerable model mimicry.
Jeff Miller formalized the maximum speed-up explainable by separate activation (race) models in the form of a distributional bound, the race model inequality (RMI). Whenever data violate this bound, race models are excluded as a viable account of the redundant signals effect. The common alternative is a coactivation account, in which the signals are integrated at some stage of processing.
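For reference, with $F_A$ and $F_B$ denoting the cumulative response time distributions of the two single signal conditions and $F_{AB}$ that of the redundant condition, the race model inequality states that
\[
F_{AB}(t) \;\le\; F_A(t) + F_B(t) \qquad \text{for all } t,
\]
so that any $t$ at which the left-hand side exceeds the right-hand side rules out the entire class of race models (under the usual context invariance assumptions).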
Coactivation models, however, have mostly been inferred indirectly and have rarely been specified explicitly. Where coactivation is explicitly modeled, it is assumed to have a decisional locus. There are, however, indications in the literature that coactivation might have at least a partial (if not entire) locus in the nondecisional or motor stage. No studies so far have compared the fit of such coactivation variants to empirical data in order to test different effect-generating loci.
Ever since its formulation, the race model inequality has been used as a test to infer the cognitive architecture underlying observers’ performance in redundant signals experiments. Subsequent theoretical and empirical analyses of this RMI test have revealed several challenges. On the one hand, it is considered a conservative test, as it compares the data to the maximum speed-up possible under a race model account. Moreover, simulation studies have shown that the base time component can further reduce the power of the test, as violations are filtered out when this component has a high variance.
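To make this filtering mechanism concrete, the following minimal Monte Carlo sketch (not the code of the simulation studies referred to above) draws decision times from illustrative exponential distributions, with a coactivated redundant condition that violates the RMI before the base time is added, and then adds a normally distributed base time of varying standard deviation; all distributional choices and parameter values are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000  # trials per condition (illustrative)

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at the time points `t`."""
    return np.searchsorted(np.sort(sample), t, side="right") / sample.size

def max_rmi_violation(base_sd, mean_single=60.0, mean_redundant=24.0, base_mu=300.0):
    """Largest positive value of F_AB(t) - [F_A(t) + F_B(t)] at the RT level."""
    rt_a = rng.exponential(mean_single, N) + rng.normal(base_mu, base_sd, N)
    rt_b = rng.exponential(mean_single, N) + rng.normal(base_mu, base_sd, N)
    rt_ab = rng.exponential(mean_redundant, N) + rng.normal(base_mu, base_sd, N)
    t = np.linspace(100.0, 500.0, 200)
    return np.max(ecdf(rt_ab, t) - (ecdf(rt_a, t) + ecdf(rt_b, t)))

# With these toy parameters, positive deviations (RMI violations) shrink and
# eventually vanish as the base time variability grows, illustrating the
# filtering effect described above.
for sd in (5, 25, 100):
    print(f"base time SD = {sd:3d} ms -> max violation {max_rmi_violation(sd):+.3f}")
```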
On the other hand, another simulation study revealed that the common practice of testing the RMI can introduce an estimation bias that effectively facilitates violations and increases the type I error rate of the test. In addition, as the RMI bound is usually tested at multiple points of the same data, the resulting inflation of type I errors can become substantial. Due to the lack of overlap in scope and the use of atheoretical, descriptive reaction time models, the degree to which these results can be generalized is limited. State-of-the-art models of decision making provide a means to overcome these limitations and to implement both race and coactivation models in order to perform large-scale simulation studies.
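For concreteness, a single-participant version of a commonly used quantile-based testing procedure might look like the following sketch; the percentile grid, the assumption of equal trial counts in the two single signal conditions, and the use of default linear interpolation are simplifications, and the algorithms used in the literature differ in such details.

```python
import numpy as np

def rmi_check(rt_a, rt_b, rt_ab, probs=np.arange(0.05, 1.0, 0.10)):
    """Quantile-based RMI check for one participant (simplified sketch).

    The bound F_A + F_B is estimated from the pooled single-condition RTs:
    with equal trial counts, F_A(t) + F_B(t) = p is reached where the pooled
    ECDF equals p / 2, i.e. at the pooled quantile for probability p / 2.
    """
    q_redundant = np.quantile(rt_ab, probs)                         # F_AB quantiles
    q_bound = np.quantile(np.concatenate([rt_a, rt_b]), probs / 2)  # bound quantiles
    # Positive entries indicate points at which the RMI is violated.
    return q_bound - q_redundant
```

At the group level, these differences would then typically be tested across participants (e.g., with one-tailed t-tests) separately at each probability, which is where the multiple-testing inflation mentioned above comes in.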
By applying a state-of-the-art model of decision making (namely, the Ratcliff diffusion model) to the investigation of the redundant signals effect, the present study addresses research questions at different levels. On a conceptual level, it asks at what stage coactivation occurs (a decisional, a nondecisional, or a combined decisional and nondecisional processing stage) and to what extent. To that end, two bimodal detection tasks were conducted. As the reaction time data exhibit violations of the RMI at multiple time points, they provide the basis for a comparative fitting analysis of coactivation model variants representing different loci of the effect.
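For readers unfamiliar with the model, the following is a minimal sketch of the basic two-boundary diffusion process underlying the Ratcliff diffusion model (its across-trial variability parameters are omitted); all parameter names and values are illustrative assumptions, not the fitted values of the present study.

```python
import numpy as np

rng = np.random.default_rng(3)

def diffusion_trial(v, a, z, ter, s=0.1, dt=0.001):
    """Simulate one trial of a basic two-boundary diffusion process.

    v: drift rate, a: boundary separation, z: starting point (0 < z < a),
    ter: nondecision time in seconds, s: within-trial noise scale.
    The full Ratcliff model additionally assumes across-trial variability
    in drift, starting point, and nondecision time (omitted here).
    """
    x, t = z, 0.0
    while 0.0 < x < a:                     # accumulate evidence until a boundary is hit
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + ter, x >= a                 # (response time, upper-boundary response?)

# Illustrative parameters only (traditional s = 0.1 scaling).
rt, upper = diffusion_trial(v=0.3, a=0.12, z=0.06, ter=0.3)
```

Within such a framework, a decisional locus of coactivation could, for instance, be expressed as an increased drift rate in the redundant condition, a nondecisional locus as a shortened nondecision time, and a combined account as a change in both; this is one natural way to formalize the model variants whose fits are compared here.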
On a test-theoretic level, the present study integrates and extends the scope of previous studies within a coherent simulation framework. The effect of experimental and statistical parameters on the performance of the RMI test (in terms of type I error rates, power, and bias) is analyzed via Monte Carlo simulations. Specifically, the simulations addressed the following questions: (i) what is the power of the RMI test, (ii) is there an estimation bias for coactivated data as well and if so