Title: Optimized Participant Assignment for an Online Experimental Framework
Experimental research is shifting from physical laboratories housed in research universities to web-based experimental platforms. Our group has built a web-based analog of a university's shared laboratory for behavioural research: an experimental testbed that enables researchers to quickly design and deploy behavioural experiments. The framework is designed to conduct very large experiments in which many individuals interact synchronously. However, large-scale platforms come with drawbacks that affect both participants and researchers. For participants, large numbers of concurrent users can mean excessive waiting times in the system. For researchers, meeting the deadlines of different experiments and managing their implementation details can be problematic and may slow research progress.
In this thesis, we develop and evaluate algorithms that efficiently assign incoming participants to experiments so as to meet the experiments' deadlines. We show that participant assignment can be modeled as minimizing the weighted tardiness of a parallel-machine schedule, an NP-complete problem. We consider varying participant-arrival and experiment-availability cases for the participant assignment problem and use them to evaluate the effectiveness of exhaustive, greedy, dynamic programming, and integer linear programming techniques on synthetic and real benchmarks.
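The modeling claim above can be made concrete with the standard weighted-tardiness objective from the scheduling literature (a sketch in conventional notation; the symbols w_j, C_j, and d_j for job weight, completion time, and due date are the textbook convention, not necessarily the thesis's own):

```latex
% Each experiment j has a deadline d_j and weight w_j; a schedule
% determines its completion time C_j. Tardiness is the amount by
% which j finishes late (zero if it finishes on time):
T_j = \max(0,\; C_j - d_j)

% The participant-assignment problem then seeks a parallel schedule
% minimizing the total weighted tardiness over all n experiments:
\min \sum_{j=1}^{n} w_j \, T_j
```

Even the single-machine version of this problem is NP-hard, which motivates the greedy heuristics (ATC, ATCPA, MPRA) evaluated in the thesis.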
The results over the synthetic and real benchmarks show that greedy algorithms outperform the other compared algorithms in both execution time and solution optimality. With constant experiment release times, we observe that the greedy ATC and ATCPA algorithms outperform the other compared algorithms. With varying experiment release times, the greedy MPRA algorithm is the best-performing algorithm among both online and offline greedy algorithms. The experiments over the online algorithms with varying participant arrivals and experiment release times indicate that the experiments' release times and the number of experiments have the largest impact on minimizing tardiness. We show that the synthetic datasets with different burstiness levels differ from the real-world dataset, yet the MPRA algorithm performs well in both environments, from which we conclude that MPRA provides promising and robust results.
Prof. Waleed Meleis (advisor)
Prof. Ningfang Mi
Prof. Magy Seif El-Nasr
Prof. David Lazer