Explaining Eye Movements in Program Comprehension using jACT-R


We propose feeding experimentally recorded sequences of eye movements into a cognitive model. By removing the need to model decisions about where to look next during a complex task, modelling long-term activation effects in real-world data becomes conceivable. Eye-movement records from experiments on program comprehension are used because object-oriented source code provides the knowledge structures required by a cognitive model of comprehension. We introduce a tool that supports this new approach. The tool is based on an implementation of the ACT-R cognitive architecture written in the Java programming language and could therefore attract Java developers to the cognitive modelling community.
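The core idea of replaying a recorded gaze trace, so that perception becomes data rather than a modelled decision, can be sketched as follows. This is a minimal illustration only; `Fixation`, `ComprehensionModel`, and `encode` are hypothetical names and not part of the jACT-R API.

```java
import java.util.List;

// Hypothetical sketch: replaying a recorded eye-movement trace as model
// input, instead of letting the model decide where to look next.
public class FixationReplay {

    // One recorded fixation: where the eye landed in the source code
    // and for how long it stayed there.
    public record Fixation(int line, int column, long durationMs) {}

    // A minimal stand-in for a cognitive model that simply encodes
    // whatever the recorded gaze delivers (not the jACT-R architecture).
    public static class ComprehensionModel {
        private long encodedMs = 0;
        private int fixations = 0;

        // The model receives each fixation as input; no visual-search
        // decision-making is modelled here.
        public void encode(Fixation f) {
            fixations++;
            encodedMs += f.durationMs();
        }

        public int fixationCount() { return fixations; }
        public long totalEncodedMs() { return encodedMs; }
    }

    public static void main(String[] args) {
        // An experimentally recorded scanpath over source code
        // (line, column, duration in milliseconds) -- illustrative values.
        List<Fixation> trace = List.of(
            new Fixation(12, 4, 210),
            new Fixation(12, 18, 180),
            new Fixation(13, 4, 250));

        ComprehensionModel model = new ComprehensionModel();
        trace.forEach(model::encode); // replay: perception is data, not a choice

        System.out.println(model.fixationCount() + " fixations, "
            + model.totalEncodedMs() + " ms encoded");
    }
}
```

Because the scanpath is fixed in advance, the model's state (e.g. long-term activation) can be driven by exactly what the human participant actually looked at.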