- Title: Human model evaluation in interactive supervised learning
- Reference Information:
- Rebecca Fiebrink, Perry R. Cook, and Dan Trueman. 2011. Human model evaluation in interactive supervised learning. In *Proceedings of the 2011 annual conference on Human factors in computing systems* (CHI '11). ACM, New York, NY, USA, 147-156. DOI=10.1145/1978942.1978965 http://doi.acm.org/10.1145/1978942.1978965
- CHI 2011, Vancouver, BC, Canada.
- Author Bios:
- Rebecca Fiebrink recently completed her PhD dissertation. In September of this year she joined Princeton University as an assistant professor in Computer Science and affiliated faculty in Music, after spending January through August as a postdoc at the University of Washington.
- Perry Cook earned his PhD from Stanford University in 1991. His research interests include physics-based sound synthesis models.
- Dan Trueman is a professor who has taught at both Columbia University and Princeton University. In the last 12 years he has published six papers through the ACM.
- Summary
- Hypothesis:
- The researchers hypothesized that interactive machine learning (IML), in which users iteratively supply training examples and directly evaluate the resulting models, would be a useful improvement over the generic supervised machine learning workflow currently in practice.
- Methods
- The researchers developed a system, the Wekinator, to facilitate IML, then used it in three separate studies (A, B, and C) whose results are analyzed throughout the paper. Study A followed six PhD students who used the system (and its subsequent updates) for ten weeks. Study B had 21 undergraduate students use the Wekinator in an assignment focused on supervised learning in interactive music performance systems. Study C paired the researchers with a professional cellist to build a gesture recognition system for a sensor-equipped cello bow. The interactive train-evaluate-retrain loop all three studies revolve around is sketched below.
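- To make that loop concrete, here is a minimal Python sketch of my own (using scikit-learn with synthetic data; the Wekinator is a separate Java application, and none of these names or numbers come from the paper): the user demonstrates examples, trains, runs the model live to judge it directly, and retrains with corrective examples.

```python
# Hypothetical sketch of a Wekinator-style interactive loop, not the authors' code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = [], []                                  # examples accumulated so far
model = KNeighborsClassifier(n_neighbors=1)

def demonstrate(label, center):
    """Stand-in for the user performing one example of a gesture."""
    X.append(center + 0.1 * rng.standard_normal(2))
    y.append(label)

# Round 1: a few demonstrations per gesture class, then train.
for _ in range(3):
    demonstrate("bow_up", np.array([0.0, 1.0]))
    demonstrate("bow_down", np.array([1.0, 0.0]))
model.fit(np.array(X), np.array(y))

# Direct evaluation: the user runs the model live and judges its output.
print(model.predict(np.array([[0.1, 0.9]])))   # hoped-for: ['bow_up']

# If the output disappoints, add corrective examples and retrain immediately,
# closing the interactive loop rather than tuning offline.
demonstrate("bow_up", np.array([0.1, 0.9]))
model.fit(np.array(X), np.array(y))
```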
- Results
- The study produced both expected and unexpected findings. One thing the system showed researchers was that it encouraged users to provide better data; some users 'overcompensated', exaggerating their examples to be sure the system understood what they were attempting to do. Additionally, the system occasionally surprised users, which encouraged them to expand what they attempted. Sometimes the system performed better than their initial goals, which led them to redefine those goals upward.
- Contents
- The researchers determined that any supervised learning model should have its quality examined directly, because cross-validation alone may not be enough to validate that the model actually meets the user's goals. Additionally, IML was determined to be useful because it lets users continuously improve the usefulness of a trained model.
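- To illustrate why cross-validation can fall short, here is a small synthetic example of my own (not from the paper): a model can score nearly perfectly under cross-validation on the demonstrated examples while its behavior on the inputs a performer will actually produce goes completely unexamined, which is exactly what direct, interactive evaluation exposes.

```python
# Hypothetical illustration; the data and model choice are mine, not the paper's.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Two tight, well-separated clusters of demonstrated examples.
X = np.vstack([rng.normal(0.0, 0.05, (20, 2)),
               rng.normal(1.0, 0.05, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

model = KNeighborsClassifier(n_neighbors=1)
print(cross_val_score(model, X, y, cv=5).mean())   # near-perfect CV score

# Direct evaluation: probe inputs between the clusters that a performer
# might actually produce; cross-validation said nothing about this region.
model.fit(X, y)
print(model.predict(np.array([[0.5, 0.5], [0.4, 0.6]])))
```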
- Discussion
- The researchers did an excellent job supporting their hypothesis. Using three separate studies, each formatted differently, allowed them to collect a wide range of useful data. The real-time feedback and interaction of this system is what makes it particularly appealing to me. Since users can see the effectiveness of the training examples they're providing as they provide them, rapid, marked improvements can be made to the system. This facilitates efficient development of a final system, as opposed to a slow, drawn-out struggle to reach an intermediate goal.