Thursday, October 27, 2011

Paper Reading #24: Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures



  • Title: Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures
  • Reference Information:
    • Hao Lü and Yang Li. 2011. Gesture avatar: a technique for operating mobile user interfaces using gestures. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 207-216. DOI=10.1145/1978942.1978972 http://doi.acm.org/10.1145/1978942.1978972
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
  • Author Bios:
    • Yang Li received his Ph.D. from the Chinese Academy of Sciences which he followed up with postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build community while a professor at the University of Washington before becoming a Senior Research Scientist at Google.
    • Hao Lu is a graduate student at the University of Washington.  His research interests include improving interactions between humans and computers.
  • Summary
    • Hypothesis:
      • The researchers had three hypotheses: Gesture Avatar would be slower than Shift on large targets but faster on small targets; Gesture Avatar would have a lower error rate than Shift; and, finally, Gesture Avatar's error rate would be affected less by walking than Shift's.
    • Methods
      • The researchers designed an experiment that required users to select targets using both techniques (Shift and Gesture Avatar). Half of the participants learned Shift first, while the other half learned Gesture Avatar first. The variables were the two techniques, the state of the user (sitting versus walking), the size of the targets being selected, and the number of repeated letters in the selection group.
    • Results
      • The results show the following. Shift was significantly faster for larger targets but significantly slower for smaller targets. The error rate for Shift increased as the target size decreased, while the error rate for Gesture Avatar remained nearly constant. Only one user in the study preferred Shift over Gesture Avatar. Finally, to the researchers' surprise, the number of repeated letters had almost no effect on the accuracy of Gesture Avatar.
    • Contents
      • This paper presented one implementation of Gesture Avatar. Minor modifications can be made, such as displaying a magnified version of the selected target instead of the drawn gesture. The system works best when the maximum amount of information is available about the underlying UI. The technique has been packaged as an API whose wrapper functions let developers embed Gesture Avatar in existing interfaces (a rough sketch of the underlying matching idea follows the discussion below).
  • Discussion
    • I want to begin the discussion by thanking Yang Li. Every one of the research papers he has authored has been presented in an extremely clear and efficient manner. This makes reading the papers and drawing conclusions exceedingly easy. The researchers were certainly able to provide support for all three of their (very clearly stated) hypotheses. This is also one of the few papers that solves a current problem that I have personally experienced. Many of the papers focus on solutions to problems in the future or for a select group of people (e.g., how to control a wall-sized display). This problem is widely experienced, with approximately 50% of Americans owning a smartphone. The proposed technique seems very intuitive (especially the re-selection swiping) and it would be great to test this idea out in a real-world environment.
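
As a rough illustration of the matching idea (the paper does not publish its API, so every name below is hypothetical): Gesture Avatar pairs a letter drawn on the screen with nearby targets whose labels match it, trading off label match against distance from the gesture.

```python
# Hypothetical sketch of Gesture Avatar-style target matching; the paper's
# actual API is not published, so all names here are invented.
import math

def match_target(drawn_letter, touch_xy, targets):
    """Pick the UI target whose label best matches the drawn letter,
    preferring targets closer to where the gesture was drawn."""
    best, best_score = None, float("-inf")
    for t in targets:  # t = {"label": str, "x": float, "y": float}
        letter_score = 1.0 if t["label"].lower().startswith(drawn_letter) else 0.0
        dist = math.hypot(t["x"] - touch_xy[0], t["y"] - touch_xy[1])
        score = letter_score - 0.01 * dist  # trade off match vs. proximity
        if score > best_score:
            best, best_score = t, score
    return best

# Example: the user draws an "s" near (100, 200) over a page of small links.
links = [{"label": "Sports", "x": 90, "y": 210},
         {"label": "News", "x": 95, "y": 205}]
print(match_target("s", (100, 200), links)["label"])  # -> "Sports"
```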



Picture Source: "Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures"

Tuesday, October 25, 2011

Paper Reading #23: User-Defined Motion Gestures for Mobile Interaction


  • Title: User-Defined Motion Gestures for Mobile Interaction
  • Reference Information:
    • Jaime Ruiz, Yang Li, and Edward Lank. 2011. User-defined motion gestures for mobile interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 197-206. DOI=10.1145/1978942.1978971 http://doi.acm.org/10.1145/1978942.1978971
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
  • Author Bios:
    • Jaime Ruiz is a fifth-year doctoral student at the University of Waterloo.  Ruiz plans to graduate in December 2011.
    • Yang Li received his Ph.D. from the Chinese Academy of Sciences which he followed up with postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build community while a professor at the University of Washington before becoming a Senior Research Scientist at Google.
    • Edward Lank is an Assistant Professor at the University of Waterloo.  Lank received his Ph.D. in 2001 from Queen's University. 
  • Summary
    • Hypothesis:
      • Researchers hypothesized that actions can be performed efficiently on a mobile device by utilizing 3D gestures recognized by sensors located on the device such as an accelerometer. 
    • Methods
      • The researchers designed an experiment that allowed users to freely create their own gestures. The screen on the phone was locked so that it would not display any feedback to the users. The participants were presented with sets of tasks and were asked to design an easy-to-use, easy-to-remember gesture for each of them; they were not required to commit to a gesture until all of them had been designed.
    • Results
      • The collected data was then analyzed, resulting in several classifications. For the nature of a gesture's mapping, gestures were classified into four categories: metaphor, physical, symbolic, or abstract. Other classification dimensions were developed as well, resulting in a gesture taxonomy (a small data-structure sketch follows the discussion below).
    • Contents
      •  Researchers hope that this taxonomy will aid in the creation of gesture interactions for phones in the future.  The researchers are unclear whether these gestures will be used in a generic fashion, with multiple applications supporting similar motions, or whether developers will use these to create their own arbitrary gestures for different applications. The hope is that representative motions will be utilized for similar functionality.
  • Discussion
    • The researchers presented a proposal for new navigational techniques which may be used in future generations of mobile devices. The researchers proposed further research to investigate gesture delimiting techniques, so that fluid interactions can be achieved when performing tasks. I believe a paper on that topic was also accepted to the same conference, and it would be an interesting read to determine the feasibility of this idea.
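
As a small sketch of how such a taxonomy might be encoded for analysis (the category names follow the paper; the data structure itself is my own assumption):

```python
# Minimal sketch of encoding part of the gesture taxonomy for analysis.
# Category names follow the paper; the structure is assumed.
from dataclasses import dataclass

NATURE = {"metaphor", "physical", "symbolic", "abstract"}

@dataclass
class GestureLabel:
    task: str         # e.g., "answer call"
    nature: str       # one of NATURE
    description: str  # free-form note on the participant's gesture

    def __post_init__(self):
        if self.nature not in NATURE:
            raise ValueError(f"unknown nature category: {self.nature}")

# Tallying labels across participants reveals agreement on a gesture set:
labels = [GestureLabel("answer call", "physical", "raise phone to ear"),
          GestureLabel("answer call", "physical", "lift phone to head")]
agreement = sum(l.nature == "physical" for l in labels) / len(labels)
print(agreement)  # -> 1.0
```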




Picture Source: "User-Defined Motion Gestures for Mobile Interaction"

Paper Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays


  • Title: Mid-air Pan-and-Zoom on Wall-sized Displays
  • Reference Information:
    • Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. 2011. Mid-air pan-and-zoom on wall-sized displays. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 177-186. DOI=10.1145/1978942.1978969 http://doi.acm.org/10.1145/1978942.1978969
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
  • Author Bios:
    • Mathieu Nancel is a Ph.D. student in HCI. He focuses on distal interaction techniques.
    • Julie Wagner is a postgraduate research assistant.  Wagner currently works with Wendy Mackay on new tangible interfaces.
    • Emmanuel Pietriga is the interim leader of the INRIA In Situ team, where he is a full-time research scientist. He works on interaction techniques for wall-sized displays.
    • Olivier Chapuis is a research scientist at LRI. He received his Ph.D. in Mathematics in 1994.
    • Wendy Mackay is a research director with INRIA Saclay in France. She focuses on the design of interactive systems.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that they could improve interaction with wall-sized displays by studying how several factors affect gesture interaction. These factors included the number of hands used, the motion of the gesture, and the gesture's degrees of freedom.
    • Methods
      • The researchers designed an experiment in which all combinations of these factors were tested. The participants completed the test in several sessions, with a few guidelines in place to minimize fatigue and memory effects.
    • Results
      • The researchers took the data collected and analyzed it using several statistical analysis techniques.  The conclusions of their study cannot prove or disprove the effectiveness of any of the techniques, but they do suggest some would be more natural and useful than others.
    • Contents
      • Researchers determined that participants preferred gestures utilizing both hands over single-handed gestures. Similarly, linear motions were preferred over (and more accurate than) circular ones. Researchers suggested that 3D free-space motions as well as one-handed circular motions on a 2D surface should be rejected and not used in the future (a toy sketch of a linear pan-and-zoom mapping follows the discussion below).
  • Discussion
    • The researchers had a very interesting problem to tackle, but I am undecided as to how effective they were in proving or disproving their hypothesis. Regardless, the work done here is exciting because of the possibilities it implies for the future. As mentioned in the paper, movies already visualize humans interacting with very large displays using fluid motions as opposed to tools. While humans have never had to do this in the past, that is not an indication that it cannot be both smooth and natural.
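
As a toy illustration of the kind of mapping studied here (my own minimal sketch under assumed units, not the authors' implementation): a one-dimensional linear hand movement can drive zoom while a 2D movement pans the view.

```python
# Toy sketch (assumed, not the authors' code): map a 1D linear hand
# displacement to an exponential zoom factor, as in pan-and-zoom UIs.
import math

def zoom_factor(displacement_cm, gain=0.25):
    """Exponential mapping keeps perceived zoom speed constant across scales."""
    return math.exp(gain * displacement_cm)

def pan(view_xy, hand_dxdy, zoom):
    """Pan the view; movement is scaled down when zoomed in."""
    return (view_xy[0] + hand_dxdy[0] / zoom,
            view_xy[1] + hand_dxdy[1] / zoom)

print(zoom_factor(4.0))           # hand moved 4 cm forward -> ~2.7x zoom
print(pan((0, 0), (10, 5), 2.0))  # -> (5.0, 2.5)
```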



Picture Source: "Mid-air Pan-and-Zoom on Wall-sized Displays"

Wednesday, October 19, 2011

Paper Reading #21: Human model evaluation in interactive supervised learning


  • Title: Human model evaluation in interactive supervised learning
  • Reference Information:
    • Rebecca Fiebrink, Perry R. Cook, and Dan Trueman. 2011. Human model evaluation in interactive supervised learning. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 147-156. DOI=10.1145/1978942.1978965 http://doi.acm.org/10.1145/1978942.1978965
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
  • Author Bios:
    • Rebecca Fiebrink has just completed her PhD dissertation.  In September of this year she joined Princeton University as an assistant professor in Computer Science and affiliated faculty in Music.  She spent January through August of this year as a postdoc at the University of Washington.
    • Perry Cook earned his PhD from Stanford University in 1991.  His research interests include Physics-based sound synthesis models.
    • Dan Trueman is a professor who has taught at both Columbia University and Princeton University. In the last 12 years he has published 6 papers through the ACM.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that Interactive Machine Learning (IML) would be a useful tool that could improve the generic supervised machine learning methods currently in practice.   
    • Methods
      • The researchers developed a system to facilitate IML. This system was then used in three separate studies (A, B, and C), whose results were analyzed throughout the paper. The first study was composed of six PhD students who used the system (and its subsequent updates) for ten weeks. The second study was composed of 21 undergraduate students using the system (Wekinator) in an assignment focused on supervised learning in interactive music performance systems. Finally, the third study was conducted with a professional cellist to build a gesture recognition system for a sensor-equipped cello bow.
    • Results
      • The study produced both expected and unexpected results. One thing the system showed researchers was that it encouraged users to provide better data; some users 'overcompensated' to be sure that the system understood what they were attempting to do. Additionally, the system occasionally surprised users, which encouraged them to expand what they attempted. Sometimes the system performed better than their initial goals, which encouraged them to redefine their end goal.
    • Contents
      • The researchers determined that any supervised learning models should have their model quality examined, because cross-validation alone may not be enough to validate model quality. Additionally, Interactive Machine Learning was determined to be useful because of its ability to continuously improve the usefulness of a trained model (a minimal sketch of this train-evaluate-retrain loop follows the discussion below).
  • Discussion
    • The researchers did an excellent job proving their hypothesis. Utilizing three separate studies formatted in different ways allowed them to collect a wide range of useful data. The real-time feedback and interaction of this system is what makes it particularly appealing to me. Since users can see the effectiveness of the training data they're providing as they provide it, rapid, marked improvements can be made to the system. This facilitates efficient development of a final system, as opposed to a slow and arduous struggle to reach an intermediate goal.
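
As a minimal sketch of the interactive-machine-learning loop described here (a generic scikit-learn classifier standing in for Wekinator; all names below are my own):

```python
# Minimal interactive-supervised-learning loop (an assumed sketch, not
# Wekinator's actual code). Requires scikit-learn.
from sklearn.neighbors import KNeighborsClassifier

X, y = [], []          # feature vectors (e.g., sensor readings) and labels
model = KNeighborsClassifier(n_neighbors=1)

def add_example(features, label):
    """The user demonstrates an input and names the desired output."""
    X.append(features)
    y.append(label)
    model.fit(X, y)    # retrain immediately so feedback is instant

def run(features):
    """Live output the user judges by eye and ear, not just cross-validation."""
    return model.predict([features])[0]

add_example([0.1, 0.9], "bow-up")
add_example([0.8, 0.2], "bow-down")
print(run([0.15, 0.85]))  # -> "bow-up"; if wrong, the user adds more examples
```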



Picture Source: "Human model evaluation in interactive supervised learning"

Paper Reading #20: The aligned rank transform for nonparametric factorial analyses using only anova procedures


  • Title: The aligned rank transform for nonparametric factorial analyses using only anova procedures
  • Reference Information:
    • Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 143-146. DOI=10.1145/1978942.1978963 http://doi.acm.org/10.1145/1978942.1978963
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
  • Author Bios:
    • Jacob Wobbrock is an Associate Professor in the Information School at the University of Washington.  Wobbrock directs the AIM Research Group which is part of the DUB Group.
    • Leah Findlater is currently a professor at the University of Washington but will become an assistant professor at the University of Maryland in January of 2012. Findlater has developed personalized GUIs.
    • Darren Gergle is an associate professor at the Northwestern University School of Communication. Gergle is interested in improving understanding of the impact technological mediation has on communication.
    • James Higgins is a professor in the Department of Statistics at Kansas State University.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that extending the Aligned Rank Transform (ART) to an arbitrary number of factors would be useful for researchers analyzing data.
    • Methods
      • The researchers developed the method for the expanded ART and then implemented it in both a desktop tool (ARTool) and an online, Java-based version (ARTWeb). After creating these tools, the researchers analyzed three sets of previously published data. This analysis was meant to demonstrate the method's utility and relevance, as opposed to its correctness (a sketch of the alignment step follows the discussion below).
    • Results
      • Examining old data revealed interactions that had not been seen before.  For example, in a study by Findlater et al. the authors noted that there was a possible interaction that was unexaminable by the Friedman test.  When this data was run using the nonparametric ART method, nonsignificant main effects for Accuracy and Interface were revealed, as well as a significant interaction.
    • Contents
      • This paper presents a nonparametric ART method, as well as two programs to support the calculation of data using this method.  The system has limitations, such as possibly reducing skew, which may be undesirable.  But, as demonstrated during their tests, the method can help reveal interactions that cannot be discovered through other analyses.
  • Discussion
    • The researchers were certainly able to prove their hypothesis, as seen in their test cases. It will be interesting to see whether or not this tool is used for research analysis in the future. The chart at the beginning of the research paper was somewhat intimidating, listing quite a few already commonly used techniques. I have a feeling that statisticians will use this so that more interactions can be observed. As the saying goes, knowledge is power, so the more the researchers are able to understand the more they can build off of.
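
For concreteness, here is a minimal pandas-based sketch of the alignment-then-rank step for one main effect in a two-factor design (my own illustration of the published procedure, not ARTool's code): responses are stripped of every effect except the one of interest, then ranked, and the ranks go into a standard ANOVA.

```python
# Sketch of the Aligned Rank Transform for the main effect of factor A in a
# two-factor design (illustrative only; ARTool implements the full method).
import pandas as pd

def art_align_main_A(df):
    """Align responses for A: subtract cell means (removing all effects),
    add back A's estimated main effect, then rank the result."""
    grand = df["Y"].mean()
    cell = df.groupby(["A", "B"])["Y"].transform("mean")
    a_marginal = df.groupby("A")["Y"].transform("mean")
    aligned = (df["Y"] - cell) + (a_marginal - grand)
    return aligned.rank()  # these ranks feed a standard full-factorial ANOVA

df = pd.DataFrame({"A": ["a1", "a1", "a2", "a2"] * 2,
                   "B": ["b1", "b2", "b1", "b2"] * 2,
                   "Y": [3, 5, 9, 4, 2, 6, 8, 5]})
df["rank_A"] = art_align_main_A(df)
print(df)
```

The same alignment is repeated for each main effect and interaction, so every effect is tested on its own set of ranks.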



Picture Source: "The aligned rank transform for nonparametric factorial analyses using only anova procedures"

Paper Reading #19: Reflexivity in Digital Anthropology



  • Title: Reflexivity in Digital Anthropology
  • Reference Information:
    • Jennifer A. Rode. 2011. Reflexivity in digital anthropology. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 123-132. DOI=10.1145/1978942.1978961 http://doi.acm.org/10.1145/1978942.1978961
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems 
  • Author Bios:
    • Jennifer Rode is an assistant professor at Drexel's School of Information. Rode has produced several interface design projects. 
  • Summary
    • Hypothesis:
      • The researcher hypothesized that various forms of digital anthropology can be utilized by researchers to learn more during field studies. No new ideas were presented in this paper; rather, ideas relating to utilizing other aspects of the digital research world were presented.
    • Methods
      • The researcher did not perform any user studies as seen in other papers.  Instead, the author spent much of the paper simply defining different forms/aspects of digital anthropology.  These definitions had been collected from previously published research.  The researcher then argues why many of these unused techniques could be beneficial in digital research.
    • Results
      • Building off of the definitions, the researcher shows that the 'messy bit' may be where focus needs to be placed to gain a more valuable insight for digital research.  Since developers design for human users, all aspects of the human user's interactions should be considered.
    • Contents
      • This paper presents an argument for including the voice of the ethnographer during both the experience and the discussion afterwards. These techniques have not previously been used in HCI research, but it is argued that they will help developers be more successful by understanding their users better.
  • Discussion
    • It is hard to say whether or not the author successfully proved her hypothesis over the course of this paper. To me, it seemed more like an unsupported idea, with nothing more than definitions from other research included to help define it. Since no study was done to demonstrate a greater level of effectiveness for her various ethnography proposals, there doesn't seem to be any evidence that she is correct. On the other hand, how would one really provide evidence that one opinionated summary is better than another? Either way, it was an interesting read that gives a useful reminder: don't ignore the users you are designing for in the first place.



Thursday, October 13, 2011

Paper Reading #18: Biofeedback game design: using direct and indirect physiological control to enhance game interaction



  • Title: Biofeedback game design: using direct and indirect physiological control to enhance game interaction
  • Reference Information:
    • Lennart Erik Nacke, Michael Kalyn, Calvin Lough, and Regan Lee Mandryk. 2011. Biofeedback game design: using direct and indirect physiological control to enhance game interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 103-112. DOI=10.1145/1978942.1978958 http://doi.acm.org/10.1145/1978942.1978958
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems 
  • Author Bios:
    • Lennart Erik Nacke is an assistant professor for HCI and Game Science at the Faculty of Business and Information Technology at the University of Ontario Institute of Technology.
    • Michael Kalyn is a first-time ACM author with this article.
    • Calvin Lough, another first-time author, is from the University of Saskatchewan.
    • Regan Lee Mandryk is a professor at Simon Fraser University. This is her 36th publication in 12 years.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that they could increase the enjoyment of video games by utilizing physiological input as both direct and indirect input.
    • Methods
      • The researchers developed three different versions of a game, one as a control and two others that integrated physiological input.  
    • Results
      • The results were collected through open-ended survey questions. These responses were then aggregated to produce overall enjoyment charts. Furthermore, the participants were asked to rate various aspects of the novelty of the inputs on a Likert scale. Many participants enjoyed input devices that felt more 'natural' and did not feel 'like a controller'.
    • Contents
      • The researchers concluded that physiological inputs can add enjoyment to a video game experience. The indirect controls were shown to be less enjoyable since they did not provide the same 'instant feedback' that the direct controls did. The more natural the mapping between the input and the functionality, the more the feature was enjoyed. Finally, researchers believe that indirect inputs can be used as a dramatic device (a toy sketch of the two kinds of mapping follows the discussion below).
  • Discussion
    • The researchers effectively demonstrated support for their hypothesis.  This paper, however, does not present anything dramatically different from what I've been expecting in the future of gaming.  Perhaps ironically, I'm the most interested in the indirect inputs being utilized as dramatic devices.  The 'background' aspects of a game really came to light one day when I was playing a first person shooter game.  Suddenly, I realized that my heart was beating at an alarming rate and then I realized that there was fantastic music on in the background.  Since then, I have really taken notice as to how the overall look or sound of a game enhances the overall experience.  Including indirect input could bring a very individualized experience to video games.  I can only imagine the changes that would happen in a stealth game if I started getting too nervous. 
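
As a toy sketch of the direct/indirect distinction (my own illustration with assumed sensor values and game parameters, not the authors' game code): a direct control maps a physiological signal straight to an action, while an indirect control slowly modulates the environment.

```python
# Toy sketch of direct vs. indirect physiological game input (sensor values
# and game parameters are assumed; not the authors' implementation).

def direct_control(gsr_level, threshold=0.7):
    """Direct mapping: high skin conductance immediately fires an ability."""
    return "flamethrower_on" if gsr_level > threshold else "flamethrower_off"

def indirect_control(heart_rate, resting=70):
    """Indirect mapping: heart rate gradually darkens the scene for drama."""
    excitement = max(0.0, min(1.0, (heart_rate - resting) / 50))
    return {"ambient_light": 1.0 - 0.5 * excitement,
            "music_tempo": 1.0 + 0.3 * excitement}

print(direct_control(0.9))   # -> "flamethrower_on"
print(indirect_control(95))  # -> dimmer light, faster music
```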


Picture Source: "Biofeedback game design: using direct and indirect physiological control to enhance game interaction"

Thursday, October 6, 2011

Paper Reading #17: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment


  • Title: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment
  • Reference Information:
    • Andrew Raij, Animikh Ghosh, Santosh Kumar, and Mani Srivastava. 2011. Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 11-20. DOI=10.1145/1978942.1978945 http://doi.acm.org/10.1145/1978942.1978945
    • CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
  • Author Bios:
    • Andrew B. Raij has been affiliated with the Universities of Florida, Memphis, and South Florida, with more than 40 citations in ACM papers over the last 7 years.
    • Animikh Ghosh is a graduate student; this is their first published research paper.
    • Santosh Kumar is associated with both The Ohio State University and the University of Memphis. This is his seventeenth research paper over the last 17 years, with an additional 285 citations.
    • Mani Bhushan Srivastava is a well-known researcher from AT&T Bell Laboratories. Over the last two decades he has published more than 150 papers through the ACM and has nearly 2,500 citations.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that user concerns over the privacy of data collected from wearable sensors have increased. Furthermore, this concern is compounded when users have a personal stake in the data being collected.
    • Methods
      • The researchers recruited 66 participants from a college campus and divided them into two groups.  The first group, NS, had no personal stake in the data being collected.  The second group, S, did since they were the ones wearing the sensors that were collecting data.  The NS group was simply given a demographics survey and then the privacy survey.  Group S, however, wore sensors that collected data before taking the privacy survey.  Following the survey, Group S then received a review of their analyzed data and then took the survey once again.  Finally, Group S was debriefed to learn more about their concerns.
    • Results
      • The data collected from the survey supports the notion that having a personal stake in the data increases privacy concerns. Furthermore, that concern grew after an analysis of their data had been presented to the participants. Additionally, factors such as including the timestamp, place, and/or duration all increased concerns in varying amounts depending on the activity (such as stress or conversation).
    • Contents
      • The paper showed evidence to support the idea that privacy concerns over collected data are affected by the participant's relationship with the data as well as by extra information collected, such as the timestamp. One participant stated concern about this linking information: "I'd rather people not know that I felt stressed at my particular job or when at my house, because they wouldn't have the whole picture". Researchers propose removing as much identifiable information as possible, but recognize that a middle ground has to be reached because some information is useless unless it can be linked to a particular person (a toy sketch of such de-identification follows the discussion below).
  • Discussion
    • Although the researchers were able to effectively support their hypothesis, I did not find this paper particularly interesting or useful. I took this information as a known fact, if recent events serve as any indication. For example, millions of people are upset at Google for collecting wireless information in Europe while building their StreetView database. Or take the Sony fiasco just earlier this year, where millions of accounts were compromised. This privacy battle is only set to get worse as larger amounts of information are stored electronically. While this information management will have to be performed carefully, I feel that it is integral to further innovations. Take the widely used example of medical information: if such information is stored electronically it can be accessed anywhere, at any time, and appropriate knowledge can be obtained when needed.
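
As a toy sketch of the de-identification trade-off the authors describe (my own illustration; the record fields are assumed, not the authors' pipeline): coarsening timestamps and dropping places reduces re-identification risk while keeping the behavioral signal.

```python
# Toy sketch of coarsening sensor records to reduce privacy risk
# (field names are assumed; not the authors' pipeline).
from datetime import datetime

def coarsen(record, keep_place=False):
    """Round timestamps to the hour and optionally drop the place."""
    ts = record["timestamp"].replace(minute=0, second=0, microsecond=0)
    out = {"timestamp": ts, "activity": record["activity"]}
    if keep_place:
        out["place"] = record["place"]  # useful for analysis, but raises concern
    return out

rec = {"timestamp": datetime(2011, 10, 6, 14, 37, 12),
       "place": "home", "activity": "stressed"}
print(coarsen(rec))  # the stress episode survives; exact time and place do not
```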

Picture Source: "Privacy Risks Emerging from the Adoption of InnocuousWearable Sensors in the Mobile Environment"