
Cognitive science has long shown interest in expertise, in part because prediction and control of expert development would have immense practical value. Most studies in this area investigate expertise by comparing experts with novices. Work in expertise and skill learning most often follows one of two paradigms: making precise measurements of performance, but with poorly trained participants doing relatively simple laboratory tasks, or studying real-world experts while taking only indirect measures of domain performance from two or three levels of skill. The applicability of these paradigms to understanding the development of expertise rests on the validity of extrapolating from short-term laboratory training or of interpolating from long-term comparisons between experts and novices. These methodologies are thus highly informative where skill development is a smooth transition between expert and novice, but they may be problematic if the skill level of the participants alters whether or not a process is important to success.

For example, a contrastive method is deeply problematic in the comparison of 10-month-old infants and 20-year-old college students. The two groups could obviously be distinguished by the capacity to pass traditional false belief tasks and by the capacity for algebra, but it does not follow that false belief tests are useful for distinguishing 15- and 20-year-olds, or that such tests are even relevant to studying this period of development. Similarly, contrastive methods in the study of expertise are potentially misleading if variable importance changes throughout development. The reliance on contrastive samples in studies of human expertise only yields deep insight into development where differences are important throughout skill acquisition, and it may be pernicious where the predictive importance of variables is not constant across levels of expertise. A variable may, for example, be less useful for distinguishing novices and experts than it is for distinguishing intermediates and experts. One possible source of evidence can be found in medical expertise, where some authors report that the relationship between expertise and the number of propositions recalled from a medical diagnosis follows an inverted-U-shaped function, implying that the utility of this predictor varies depending on the levels of expertise being compared. There is also some evidence in the motor learning literature that variable importance can change over small amounts of training (<10 hours) in relatively simple laboratory tasks. Whether changes in variable importance exist on the longer timescale of the development of expertise, especially expertise involving a substantial cognitive component, is unclear. Given that expertise encompasses years of training and significant cognitive-motor change, the assumption that variable importance remains static warrants investigation.

Before the development of sophisticated machine learning tools for data mining larger samples, and indeed before such samples were available, there was no straightforward and direct way to test this implicit assumption of static variable importance in a rich, dynamic, realistic context. To investigate whether the reliance on contrastive samples may have imposed critical restrictions on the understanding of complex skill development, we adopted an alternative method: the online acquisition of telemetry data from a common daily activity for many, video gaming. Expertise in strategy games has long been a subject of interest for researchers, and here we use the analysis of video game telemetry data from real-time strategy (RTS) games to explore the development of expertise. Using measures of cognitive-motor, attentional, and perceptual processing extracted from game data from 3360 RTS players at 7 different levels of expertise, we identified 12 variables relevant to expertise. We show that the static variable importance assumption is false: the predictive importance of these variables shifted as the level of expertise increased, and, at least in our dataset, a contrastive approach would have been misleading. We also identify plausible cognitive markers of expertise. The finding that variable importance is not static across levels of expertise suggests that large, diverse datasets of sustained cognitive-motor performance are crucial for an understanding of expertise in real-world contexts.
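To make the idea of level-varying variable importance concrete, the sketch below shows one way such an analysis could be set up: fit a separate classifier for each pair of adjacent skill levels and compare feature importances across pairs. This is a minimal illustration assuming a per-player feature table with a league label; the feature names and the random-forest/permutation-importance choices are assumptions for illustration, not necessarily the exact pipeline used in the study.

```python
# Sketch: level-specific variable importance, assuming a pandas DataFrame of
# per-player measures. Feature names are hypothetical placeholders; the real
# analysis used 12 telemetry-derived variables.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

FEATURES = ["actions_per_minute", "screen_moves", "hotkey_use"]  # placeholders

def importance_by_adjacent_level(df: pd.DataFrame, label: str = "league") -> dict:
    """For each pair of adjacent expertise levels, fit a classifier and score
    how much each feature helps distinguish those two levels."""
    out = {}
    levels = sorted(df[label].unique())
    for lo, hi in zip(levels, levels[1:]):
        pair = df[df[label].isin([lo, hi])]
        X, y = pair[FEATURES], (pair[label] == hi).astype(int)
        model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
        scores = permutation_importance(model, X, y, n_repeats=20, random_state=0)
        out[(lo, hi)] = dict(zip(FEATURES, scores.importances_mean))
    return out

# Under the static-importance assumption, each feature's score would be roughly
# flat across the (lo, hi) pairs; importance that rises or falls with level is
# the pattern described in the text.
```

A single expert-vs-novice contrast would collapse this per-pair structure into one importance score per variable, which is precisely why, if importance shifts with level, the contrastive approach can mislead.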
