SURVEY:SUMMARY:BUILD_DIFFICULTY[not_applicable, reasonable_effort, code_problematic or string] not_applicable
SURVEY:SUMMARY:CLASSIFICATION[practical,theoretical,hardware] practical
SURVEY:SUMMARY:CORRECT_CODE_LOCATION[string]
SURVEY:SUMMARY:PUBLISHED_CODE[not_applicable, yes, no] yes
SURVEY:SUMMARY:SAME_VERSION[not_applicable, yes, no_but_available, no_and_not_available] no_but_available
SURVEY:SUMMARY:STUDY_FOUND_CORRECT_CODE[not_applicable, yes, no] no
SURVEY:AUTHOR1:BUILD_COMMENT[string]
SURVEY:AUTHOR1:BUILD_DIFFICULTY[not_applicable, reasonable_effort, code_problematic or string] not_applicable
SURVEY:AUTHOR1:BUILD_DIFFICULTY_COMMENT[string] none
SURVEY:AUTHOR1:CLASSIFICATION[practical,theoretical,hardware] practical
SURVEY:AUTHOR1:CLASSIFICATION_COMMENT[string] I couldn't find a definition of "theoretical" on your website, but I would not consider my paper theoretical. It is a user study in which we collected empirical data, so not theoretical at all. Had you asked, we probably would have made our code available, which would allow for the replication of our study (although there are a lot of other pieces involved as well).
SURVEY:AUTHOR1:CORRECT_CODE_LOCATION[string]
SURVEY:AUTHOR1:PUBLIC_COMMENT[string] To replicate a user study requires not just the code used to run the study (if any -- some studies do not have any code behind them) but also all surveys, scripts, and other study materials. This does not seem to be the focus of your repeatability study. But I think you should label these as user studies rather than theoretical studies, because these are empirical studies.
SURVEY:AUTHOR1:PUBLISHED_CODE[not_applicable, yes, no] yes
SURVEY:AUTHOR1:SAME_VERSION[not_applicable, yes, no_but_available, no_and_not_available] no_but_available
SURVEY:AUTHOR1:SAME_VERSION_COMMENT[string] I actually would have to talk to the student who maintains the code, but I believe it is possible to make that version available.
SURVEY:AUTHOR1:STUDY_FOUND_CORRECT_CODE[not_applicable, yes, no] no
SURVEY:AUTHOR2:BUILD_COMMENT[string]
SURVEY:AUTHOR2:BUILD_DIFFICULTY[not_applicable, reasonable_effort, code_problematic or string] not_applicable
SURVEY:AUTHOR2:BUILD_DIFFICULTY_COMMENT[string] none
SURVEY:AUTHOR2:CLASSIFICATION[practical,theoretical,hardware] practical
SURVEY:AUTHOR2:CLASSIFICATION_COMMENT[string] The study was a human-subjects study performed with code. Bad code could result in bad data.
SURVEY:AUTHOR2:CORRECT_CODE_LOCATION[string]
SURVEY:AUTHOR2:PUBLIC_COMMENT[string] The study was a human-subjects experiment, so the most important feature in reproducibility is that we described every aspect of the study in such a way that someone could reproduce the methodology -- including code -- from the description. This is not a systems study where someone who reproduces code that looks and appears the same to the human eye might get different results. That said, I don't think our lead author is particularly protective of the code. Since it's not a systems project, the code was not written to be scalable or easily adopted by other users.
SURVEY:AUTHOR2:PUBLISHED_CODE[not_applicable, yes, no] yes
SURVEY:AUTHOR2:SAME_VERSION[not_applicable, yes, no_but_available, no_and_not_available]
SURVEY:AUTHOR2:SAME_VERSION_COMMENT[string] Not sure how to answer. It's available via email request, but nobody has asked.
SURVEY:AUTHOR2:STUDY_FOUND_CORRECT_CODE[not_applicable, yes, no] no