Thursday, November 3, 2011

Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface


  • Title: Sensing cognitive multitasking for a brain-based adaptive user interface
  • Reference Information:
    • Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert J.K. Jacob. 2011. Sensing cognitive multitasking for a brain-based adaptive user interface. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 383-392. DOI=10.1145/1978942.1978997 http://doi.acm.org/10.1145/1978942.1978997
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Erin Treacy Solovey is with Tufts University.  Fifteen publications.
    • Francine Lalooses is with Tufts University.  Two publications.
    • Krysta Chauncey is with Tufts University.  Six publications.
    • Douglas Weaver is with Tufts University.  Two publications.
    • Margarita Parasi is with Tufts University.  First publication.
    • Matthias Scheutz is with Tufts University, Indiana University, and the University of Notre Dame.  Thirty-seven publications.
    • Angelo Sassaroli is with Tufts University.  Six publications.
    • Sergio Fantini is with Tufts University.  Seven publications.
    • Paul Schermerhorn is with the University of Notre Dame and Indiana University.  Twenty-four publications.
    • Audrey Girouard is with Tufts University and Queen's University.  Nineteen publications.
    • Robert J.K. Jacob is with Tufts University and MIT.  Seventy-five publications and 1,004 citations.
  • Summary
    • Hypothesis:
      • An fNIRS tool can capture the tasking state of the human mind accurately enough for use in an HCI application, approaching the level of an fMRI machine.  The researchers also hypothesized that a system could be designed to detect this tasking state and respond to it by aiding the user in their tasks.
    • Methods
      • The first hypothesis was tested by measuring how accurately an fNIRS machine could classify a user's tasking state (branching, dual-task, or delay).  Users were simply asked to perform a variety of tasks while the researchers analyzed how frequently the system correctly classified the current state (a toy sketch of this kind of window-by-window classification appears after this list).  The second test had participants perform various activities with a robot, and the researchers analyzed how their system facilitated this interaction.
    • Results
      • The fNIRS system distinguished the various states at better-than-chance rates (above 50%), but it was not highly accurate.  The researchers noted that the group of testers was small and that various other improvements could raise this accuracy.  Building on this, the researchers built a tool that attempted to accommodate these changing states.
    • Contents
      • The researchers noted that the fNIRS machine is not as effective at collecting data as the fMRI machine, but it is much more practical in a real-world environment.  They built a proof-of-concept system that showed promise.
  • Discussion
    • These researchers proposed an interesting system and provided convincing support for the second hypothesis (the first one still needs more analysis).  I like this idea because I am constantly attempting to multitask, and the little tasks that can easily be automated are the ones that take up the most time (switching contexts, etc.).  If this is implemented effectively, workplace productivity should increase by a significant amount.
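As a concrete (and heavily simplified) illustration of the first part of the study, the snippet below trains an off-the-shelf classifier on windowed fNIRS features and reports cross-validated accuracy. This is not the authors' pipeline; the feature layout, labels, and random placeholder data are my assumptions.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical data: 60 time windows x 8 features (e.g. mean and slope of the
# oxygenated-hemoglobin signal per channel). Real features would come from fNIRS.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = rng.choice(["branching", "dual-task", "delay"], size=60)

# Classify each window into one of the multitasking states and estimate accuracy.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())  # random data -> roughly chance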



Picture Source: "Sensing cognitive multitasking for a brain-based adaptive user interface"

Tuesday, November 1, 2011

Paper Reading #26: Embodiment in Brain-Computer Interaction


  • Title: Embodiment in Brain-Computer Interaction
  • Reference Information:
    • Kenton O'Hara, Abigail Sellen, and Richard Harper. 2011. Embodiment in brain-computer interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 353-362. DOI=10.1145/1978942.1978994 http://doi.acm.org/10.1145/1978942.1978994
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Kenton O'Hara has been cited in nearly 500 articles published through the ACM in the last 18 years.  He is affiliated with Hewlett-Packard as well as Microsoft Research.
    • Abigail Sellen is a Principal Researcher at Microsoft Research Cambridge.  She joined Microsoft Research after working for Hewlett-Packard Labs.
    • Richard Harper is a Principal Researcher at Microsoft Research in Cambridge.  
  • Summary
    • Hypothesis:
      • Researchers hypothesize that Brain-Computer Interaction can have a social impact when used in different environments.  In particular, when a BCI is used in a gaming environment, the interactions between the people involved change fundamentally.  The researchers hope to examine exactly how this interaction changes.
    • Methods
      • The researchers sent the MindFlex game home with three different groups of people and asked them to record their gaming experience.  Each group chose when, where, and with whom to actually play the game.  This created a very realistic and fluid environment in which people freely came and went.
    • Results
      • The analysis revealed a few notable behaviors, such as the unnecessary mental imagery users created in an attempt to control the game.  Many users would repeatedly think 'up, up, up' to raise the ball when in fact all they had to do was concentrate a little more.
    • Contents
      • The paper shows results that can be used to extend the use of BCI into other environments.  These environments will not be typical of previous social interaction spaces, since there are new problems, such as users not being able to acknowledge feedback from other people around them.
  • Discussion
    • The researchers' hypothesis was very open-ended: simply that BCI interaction needs to be studied in order to be expanded.  They were able to study these interactions effectively and presented several clear findings.  I had never thought about the fact that simply moving your hand or responding to a question could have such a profound effect on concentration.  I hope that this research is continued so that more fluid, invisible computing can be accomplished in the future.



Picture Source: "Embodiment in Brain-Computer Interaction"

Paper Reading #25: TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration


  • Title: TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration
  • Reference Information:
    • Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, and Robert C. Miller. 2011. Twitinfo: aggregating and visualizing microblogs for event exploration. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 227-236. DOI=10.1145/1978942.1978975 http://doi.acm.org/10.1145/1978942.1978975
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Adam Marcus is a graduate student at MIT.  He received his undergraduate degree from Rensselaer Polytechnic Institute.
    • Michael S. Bernstein researches crowdsourcing and social computing.  He is in his final year at MIT.
    • Osama Badar is a graduate student at MIT.
    • David R. Karger is a member of the AI laboratory at MIT.  He has spent time working for Google.
    • Samuel Madden is an associate professor at MIT.  He has developed systems for interacting with Mechanical Turk (mTurk).
    • Robert C. Miller is a professor at MIT; he received his Ph.D. from Carnegie Mellon University.  He has 71 publications in the ACM over the last 15 years.
  • Summary
    • Hypothesis:
      • The researchers hypothesize that information aggregated from microblog sources, Twitter in particular, can be used to study events.  This should be accomplished in real time and produced by a system that makes data visualization and exploration simple and intuitive.
    • Methods
      • Researchers developed a tool called 'TwitInfo' to implement the goals set forth in their hypothesis.  They then evaluated the effectiveness of the system by letting average Twitter users and an award-winning journalist test it.
    • Results
      • The evaluation showed that TwitInfo effectively detected events based on spikes in tweet volume and allowed users to easily gain a shallow understanding of a chain of events.  The journalist emphasized that this knowledge was only shallow, but that the tool still lets people gather an understanding of events from a first-person point of view as they unfold.
    • Contents
      • The paper presents a tool that can analyze Twitter data in real time without being domain-specific, something that had not been effectively accomplished before (a minimal sketch of the peak-detection idea appears after this list).  The major limitations of the system are that not all interesting events are flagged when analyzing peaks in the number of tweets (such as a yellow card in a soccer game) and that the information available is generally more shallow than what a standard news report would provide.
  • Discussion
    • The researchers were certainly able to create a tool that performs the intended functionality.  It is interesting that this article was mentioned, since Dr. Caverlee just recently gave a talk to UPE about a project involving Twitter.  Data mining this expansive source of information can certainly produce some interesting results if it can be done correctly.  I'm actually a little surprised that Twitter itself doesn't do more to support such functionality.
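The peak detection mentioned above can be sketched very simply: maintain a running estimate of the typical tweet rate and its deviation, and flag any time bin that far exceeds it. The paper describes an approach adapted from TCP congestion control; the constants and toy data below are my assumptions, not the authors' values.

def find_peaks(counts, alpha=0.125, threshold=2.0):
    """counts: tweets per time bin. Returns indices of bins flagged as bursts."""
    if not counts:
        return []
    mean, meandev, peaks = float(counts[0]), 0.0, []
    for i, c in enumerate(counts[1:], start=1):
        if meandev > 0 and (c - mean) > threshold * meandev:
            peaks.append(i)                     # anomalously high bin -> candidate event
        # exponentially weighted updates of the running mean and mean deviation
        meandev = (1 - alpha) * meandev + alpha * abs(c - mean)
        mean = (1 - alpha) * mean + alpha * c
    return peaks

minute_counts = [3, 4, 2, 5, 3, 40, 38, 6, 4, 3]    # toy data with a burst of tweets
print(find_peaks(minute_counts))                    # prints the flagged bin indices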



Picture Source: "TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration"

Thursday, October 27, 2011

Paper Reading #24: Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures



  • Title: Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures
  • Reference Information:
    • Hao Lü and Yang Li. 2011. Gesture avatar: a technique for operating mobile user interfaces using gestures. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 207-216. DOI=10.1145/1978942.1978972 http://doi.acm.org/10.1145/1978942.1978972
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Yang Li received his Ph.D. from the Chinese Academy of Sciences which he followed up with postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build community while a professor at the University of Washington before becoming a Senior Research Scientist at Google.
    • Hao Lu is a graduate student at the University of Washington.  His research interests include improving interactions between humans and computers.
  • Summary
    • Hypothesis:
      • The researchers had three hypotheses: Gesture Avatar would be slower than Shift on large targets but faster on smaller targets; Gesture Avatar would have a lower error rate than Shift; and the error rate for Gesture Avatar would not be affected by walking as much as Shift's would be.
    • Methods
      • The researchers designed an experiment which required users to select targets using both methods (Shift and Gesture Avatar). Half of the participants learned Shift first while the other half learned Gesture Avatar first.  The variables were the two different techniques, the state of the user (sitting versus walking), the size of the targets being selected and the number of repeated letters in the selection group.
    • Results
      • The results showed the following.  Shift was significantly faster for larger targets but significantly slower for smaller targets.  The error rate for Shift increased as the target size decreased, while the error rate for Gesture Avatar remained nearly constant.  Only one user in the study preferred Shift over Gesture Avatar.  Finally, to the researchers' surprise, the number of repeated letters had almost no effect on the accuracy of Gesture Avatar.
    • Contents
      • This paper presented one implementation of Gesture Avatar.  Minor modifications can be made, such as displaying a magnified version of the selected target as opposed to the gesture created.  The system works best when the maximum amount of information is available about the underlying UI.  Essentially, it has been packaged as an API that provides a set of wrapper functions for embedding the technique into existing interfaces (a toy sketch of the basic matching step appears after this list).
  • Discussion
    • I want to begin the discussion by thanking Yang Li.  Every single one of the research papers that he has authored has been presented in an extremely clear and efficient manner, which makes reading the papers and drawing conclusions exceedingly easy.  The researchers were certainly able to provide support for all three of their (very clearly stated) hypotheses.  This is also one of the few papers that solves a current problem that I have personally experienced.  Many of the papers focus on solutions to problems in the future or for a select group of people (e.g., how to control a wall-sized display).  This problem is widely experienced, with approximately 50% of Americans owning a smartphone.  The proposed technique seems very intuitive (especially the re-selection swiping), and it would be great to test this idea out in a real-world environment.
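To make the underlying idea concrete, here is a toy sketch of what I take to be the core matching step: after the user draws a letter over the interface, attach the 'avatar' to the nearest on-screen target whose label matches the recognized character. The data structures and example targets are my own assumptions, not the authors' implementation.

import math

def pick_target(recognized_char, stroke_center, targets):
    """targets: list of (label, (x, y)) tuples for small UI elements such as links."""
    best, best_dist = None, float("inf")
    for label, (x, y) in targets:
        if not label.lower().startswith(recognized_char.lower()):
            continue                                  # label doesn't match the gesture
        d = math.hypot(x - stroke_center[0], y - stroke_center[1])
        if d < best_dist:                             # keep the closest matching target
            best, best_dist = label, d
    return best

links = [("Home", (40, 12)), ("Help", (220, 12)), ("History", (400, 12))]
print(pick_target("h", (230, 30), links))             # -> "Help", the nearest 'h' target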



Picture Source: "Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures"

Tuesday, October 25, 2011

Paper Reading #23: User-Defined Motion Gestures for Mobile Interaction


  • Title: User-Defined Motion Gestures for Mobile Interaction
  • Reference Information:
    • Jaime Ruiz, Yang Li, and Edward Lank. 2011. User-defined motion gestures for mobile interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 197-206. DOI=10.1145/1978942.1978971 http://doi.acm.org/10.1145/1978942.1978971
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Jaime Ruiz is a fifth-year doctoral student at the University of Waterloo.  Ruiz plans to graduate in December 2011.
    • Yang Li received his Ph.D. from the Chinese Academy of Sciences which he followed up with postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build community while a professor at the University of Washington before becoming a Senior Research Scientist at Google.
    • Edward Lank is an Assistant Professor at the University of Waterloo.  Lank received his Ph.D. in 2001 from Queen's University. 
  • Summary
    • Hypothesis:
      • Researchers hypothesized that actions can be performed efficiently on a mobile device by utilizing 3D motion gestures recognized through on-device sensors such as the accelerometer (a toy accelerometer-gesture sketch appears after this list).
    • Methods
      • The researchers designed an experiment that allowed users to freely create their own gestures.  The screen on the phone was locked so that it wouldn't display any feedback to the users.  The participants were presented with sets of tasks and asked to design a gesture for each that was easy to use and remember; they were not required to commit to a gesture until all of them had been designed.
    • Results
      • The collected data was then analyzed, which resulted in several classifications.  The gestures were mapped along a set of dimensions; for the nature of the action, these were metaphor, physical, symbolic, and abstract.  Other classification dimensions were developed as well, resulting in a gesture taxonomy.
    • Contents
      •  Researchers hope that this taxonomy will aid in the creation of gesture interactions for phones in the future.  The researchers are unclear whether these gestures will be used in a generic fashion, with multiple applications supporting similar motions, or whether developers will use these to create their own arbitrary gestures for different applications. The hope is that representative motions will be utilized for similar functionality.
  • Discussion
    • The researchers have presented a proposal for new navigational techniques that may be used in future generations of mobile devices.  They proposed further research to investigate gesture-delimiting techniques so that fluid interactions can be achieved when performing tasks.  I believe that paper was also accepted to the same conference and would be an interesting read to determine the feasibility of this idea.
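As a toy illustration of the kind of sensor data these gestures rely on (this is not from the paper), the snippet below detects a simple 'shake' gesture by counting how often the acceleration magnitude swings past a threshold; the thresholds are assumptions.

import math

def is_shake(samples, rest=9.81, swing_threshold=1.5 * 9.81, min_swings=4):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
    swings, above = 0, False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > swing_threshold and not above:
            swings += 1              # entered a new high-acceleration swing
            above = True
        elif mag < rest * 1.1:
            above = False            # back near rest, ready to count the next swing
    return swings >= min_swings

still = [(0.0, 0.0, 9.8)] * 50
shaking = [(0.0, 0.0, 9.8), (12.0, 8.0, 9.8)] * 10
print(is_shake(still), is_shake(shaking))    # -> False True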




Picture Source: "User-Defined Motion Gestures for Mobile Interaction"

Paper Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays


  • Title: Mid-air Pan-and-Zoom on Wall-sized Displays
  • Reference Information:
    • Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. 2011. Mid-air pan-and-zoom on wall-sized displays. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 177-186. DOI=10.1145/1978942.1978969 http://doi.acm.org/10.1145/1978942.1978969
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Mathieu Nancel is a Ph.D. student in HCI.  He focuses on distal interaction techniques.
    • Julie Wagner is a postgraduate research assistant.  Wagner currently works with Wendy Mackay on new tangible interfaces.
    • Emmanuel Pietriga is the interim leader of the INRIA team In Situ, where he is a full-time research scientist.  He works on interaction techniques for wall-sized displays.
    • Olivier Chapuis is a research scientist at LRI.  He received his Ph.D. in Mathematics in 1994.
    • Wendy Mackay is a research director with INRIA Saclay in France.  She focuses on the design of interactive systems.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that they could improve interaction with wall-sized displays by studying the effectiveness of several factors as gesture interactions.  These factors included the number of hands, the motion of the gesture and the degrees of freedom for the gesture.   
    • Methods
      • The researchers designed an experiment in which all combinations of the interaction factors were tested.  The participants completed this test in several sessions, with a few guidelines set to minimize fatigue and memory loss.
    • Results
      • The researchers took the data collected and analyzed it using several statistical analysis techniques.  The conclusions of their study cannot prove or disprove the effectiveness of any of the techniques, but they do suggest some would be more natural and useful than others.
    • Contents
      • Researchers determined that participants preferred gestures utilizing both hands over single-handed gestures.  Similarly, linear motions were preferred over (and more accurate than) circular ones.  The researchers suggested that 3D free-space motions as well as one-handed circular motions on a 2D surface should be rejected and not used in the future.
  • Discussion
    • The researchers had a very interesting problem to tackle, but I am undecided as to how effective they were in proving or disproving their hypothesis.  Regardless, the work done here is exciting because of the possibilities it implies for the future.  As mentioned in the paper, movies already visualize humans interacting with very large displays using fluid motions as opposed to tools.  While humans have never had to do this in the past, that is not an indication that it cannot be both smooth and natural.



Picture Source: "Mid-air Pan-and-Zoom on Wall-sized Displays"

Wednesday, October 19, 2011

Paper Reading #21: Human model evaluation in interactive supervised learning


  • Title: Human model evaluation in interactive supervised learning
  • Reference Information:
    • Rebecca Fiebrink, Perry R. Cook, and Dan Trueman. 2011. Human model evaluation in interactive supervised learning. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 147-156. DOI=10.1145/1978942.1978965 http://doi.acm.org/10.1145/1978942.1978965
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Rebecca Fiebrink has just completed her PhD dissertation.  In September of this year she joined Princeton University as an assistant professor in Computer Science and affiliated faculty in Music.  She spent January through August of this year as a postdoc at the University of Washington.
    • Perry Cook earned his PhD from Stanford University in 1991.  His research interests include Physics-based sound synthesis models.
    • Dan Trueman is a professor who has taught at both Columbia University and Princeton University.  In the last 12 years he has published 6 papers through the ACM.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that Interactive Machine Learning (IML) would be a useful tool that could improve the generic supervised machine learning methods currently in practice.   
    • Methods
      • The researchers developed a system, Wekinator, to facilitate IML.  The system was then used in three separate studies (A, B, and C), whose results are analyzed throughout the paper (a toy sketch of the interactive train-and-evaluate loop appears after this list).  The first study involved six PhD students who used the system (and its subsequent updates) for ten weeks.  The second study involved 21 undergraduate students using the system in an assignment focused on supervised learning for interactive music performance systems.  Finally, the third study was conducted with a professional cellist to build a gesture recognition system for a sensor-equipped cello bow.
    • Results
      • The studies produced both expected and unexpected results.  One thing the system showed researchers was that it encouraged users to provide better data; some users 'overcompensated' to be sure that the system understood what they were attempting to do.  Additionally, the system occasionally surprised users, which encouraged them to expand their efforts.  Sometimes the system performed better than their initial goals, which encouraged them to redefine their ultimate objective.
    • Contents
      • The researchers determined that supervised learning models should have their quality evaluated by their users directly, because cross-validation alone may not be enough to validate model quality.  Additionally, Interactive Machine Learning was determined to be useful because of its ability to continuously improve the usefulness of a trained model.
  • Discussion
    • The researchers did an excellent job supporting their hypothesis.  Utilizing three separate studies, formatted in different ways, allowed them to collect a wide range of useful data.  The real-time feedback and interaction of this system is what makes it particularly appealing to me.  Since users are allowed to see the effectiveness of the training data they provide as they provide it, rapid, marked improvements can be made to the system.  This facilitates efficient development of a final system, as opposed to a slow struggle toward an intermediate goal.
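To make the interaction style concrete, here is a toy sketch of an interactive train-and-evaluate loop in the spirit of Wekinator (this is not the authors' code; the feature vectors, labels, and classifier choice are placeholder assumptions).

from sklearn.neighbors import KNeighborsClassifier

examples, labels = [], []
model = None

def add_example(features, label):
    """Called whenever the user demonstrates an input (e.g. a sensor frame)."""
    examples.append(features)
    labels.append(label)

def retrain():
    """Retrain from scratch on everything demonstrated so far."""
    global model
    model = KNeighborsClassifier(n_neighbors=1).fit(examples, labels)

def run(features):
    """Live prediction the user can immediately evaluate by interacting."""
    return model.predict([features])[0] if model is not None else None

# Interactive cycle: demonstrate -> retrain -> try it -> demonstrate more ...
add_example([0.1, 0.9], "gesture_A")
add_example([0.8, 0.2], "gesture_B")
retrain()
print(run([0.75, 0.25]))   # -> "gesture_B"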



Picture Source: "Human model evaluation in interactive supervised learning"

Paper Reading #20: The aligned rank transform for nonparametric factorial analyses using only anova procedures


  • Title: The aligned rank transform for nonparametric factorial analyses using only anova procedures
  • Reference Information:
    • Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 143-146. DOI=10.1145/1978942.1978963 http://doi.acm.org/10.1145/1978942.1978963
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Jacob Wobbrock is an Associate Professor in the Information School at the University of Washington.  Wobbrock directs the AIM Research Group, which is part of the DUB Group.
    • Leah Findlater is currently at the University of Washington but will become an assistant professor at the University of Maryland in January 2012.  Findlater has developed personalized GUIs.
    • Darren Gergle is an associate professor at the Northwestern University School of Communication.  Gergle is interested in improving understanding of the impact that technological mediation has on communication.
    • James Higgins is a professor in the Department of Statistics at Kansas State University.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that extending the Aligned Rank Transform (ART) to an arbitrary number of factors would be useful for researchers analyzing nonparametric data.
    • Methods
      • The researchers developed the method for the expanded ART and then implemented it in both a desktop tool (ARTool) and an online, Java-based version (ARTWeb).  After creating these tools, the researchers re-analyzed three sets of previously published data.  This analysis was meant to demonstrate the method's utility and relevance, as opposed to its correctness (a rough sketch of the align-then-rank step appears after this list).
    • Results
      • Examining old data revealed interactions that had not been seen before.  For example, in a study by Findlater et al. the authors noted that there was a possible interaction that was unexaminable by the Friedman test.  When this data was run using the nonparametric ART method, nonsignificant main effects for Accuracy and Interface were revealed, as well as a significant interaction.
    • Contents
      • This paper presents a nonparametric ART method, as well as two programs to support the calculation of data using this method.  The system has limitations, such as possibly reducing skew, which may be undesirable.  But, as demonstrated during their tests, the method can help reveal interactions that cannot be discovered through other analyses.
  • Discussion
    • The researchers were certainly able to support their hypothesis, as seen in their test cases.  It will be interesting to see whether or not this tool is used for research analysis in the future.  The chart at the beginning of the research paper was somewhat intimidating, listing quite a few already commonly used techniques.  I have a feeling that statisticians will use this so that more interactions can be observed.  As the saying goes, knowledge is power, so the more the researchers are able to understand, the more they can build on.
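Here is a rough sketch of the align-then-rank step for the two-way interaction in a two-factor design, followed by an ordinary ANOVA on the ranks. This is my reading of the general procedure, not the authors' ARTool code, and the small data set is made up.

import pandas as pd
from scipy.stats import rankdata
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def art_interaction(df, y="y", a="A", b="B"):
    """Align the response for the A x B interaction, rank it, run standard ANOVA."""
    grand = df[y].mean()
    cell = df.groupby([a, b])[y].transform("mean")
    mean_a = df.groupby(a)[y].transform("mean")
    mean_b = df.groupby(b)[y].transform("mean")
    residual = df[y] - cell
    # aligned response: residual plus the estimated interaction effect only
    aligned = residual + (cell - mean_a - mean_b + grand)
    out = df.copy()
    out["art"] = rankdata(aligned)               # average ranks for ties
    model = smf.ols(f"art ~ C({a}) * C({b})", data=out).fit()
    return anova_lm(model)                       # only the A:B row is interpreted

# Hypothetical data: two factors with two levels each.
data = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "y": [1, 2, 2, 5, 6, 5, 2, 3, 2, 12, 14, 13],
})
print(art_interaction(data))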



Picture Source: "The aligned rank transform for nonparametric factorial analyses using only anova procedures"

Paper Reading #19: Reflexivity in Digital Anthropology



  • Title: Reflexivity in Digital Anthropology
  • Reference Information:
    • Jennifer A. Rode. 2011. Reflexivity in digital anthropology. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 123-132. DOI=10.1145/1978942.1978961 http://doi.acm.org/10.1145/1978942.1978961
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Jennifer Rode is an assistant professor at Drexel's School of Information. Rode has produced several interface design projects. 
  • Summary
    • Hypothesis:
      • The researcher hypothesized that various forms of digital anthropology can be utilized by researchers to learn more during field studies.  No new system was presented in this paper; rather, it presents ideas for bringing other aspects of anthropological practice into digital research.
    • Methods
      • The researcher did not perform any user studies as seen in other papers.  Instead, the author spent much of the paper simply defining different forms/aspects of digital anthropology.  These definitions had been collected from previously published research.  The researcher then argues why many of these unused techniques could be beneficial in digital research.
    • Results
      • Building off of the definitions, the researcher shows that the 'messy bit' may be where focus needs to be placed to gain a more valuable insight for digital research.  Since developers design for human users, all aspects of the human user's interactions should be considered.
    • Contents
      • This paper presents an argument for including the voice of the ethnographer during both the experience and the discussion afterwards.  These are techniques that have not previously been used in HCI research, but it is argued that they will help developers be more successful by understanding their users better.
  • Discussion
    • It is hard to say whether or not the author successfully proved her hypothesis over the course of this paper.  To me, it read more like an unsupported idea, with little more than definitions from other research included to help frame it.  Since there was no study done to show a greater level of effectiveness from her various ethnography proposals, there doesn't seem to be any evidence that she is correct.  On the other hand, how would one really provide evidence that one opinionated summary is better than another?  Either way, it was an interesting read that gives a useful reminder: don't ignore the users you are designing for in the first place.



Thursday, October 13, 2011

Paper Reading #18: Biofeedback game design: using direct and indirect physiological control to enhance game interaction



  • Title: Biofeedback game design: using direct and indirect physiological control to enhance game interaction
  • Reference Information:
    • Lennart Erik Nacke, Michael Kalyn, Calvin Lough, and Regan Lee Mandryk. 2011. Biofeedback game design: using direct and indirect physiological control to enhance game interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 103-112. DOI=10.1145/1978942.1978958 http://doi.acm.org/10.1145/1978942.1978958
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Lennart Erik Nacke is an assistant professor for HCI and Game Science at the Faculty of Business and Information Technology at the University of Ontario Institute of Technology.
    • Michael Kalyn is a first-time ACM author with this article.
    • Calvin Lough is another first-time author, from the University of Saskatchewan.
    • Regan Lee Mandryk is a professor at the University of Saskatchewan.  This is her 36th publication in 12 years.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that they could increase the enjoyment of video games by utilizing physiological input as both direct and indirect input.
    • Methods
      • The researchers developed three different versions of a game: one as a control and two others that integrated physiological input, either directly or indirectly (a toy sketch of such direct and indirect mappings appears after this list).
    • Results
      • The results were collected through open-ended survey questions, and the responses were then aggregated to produce overall enjoyment charts.  Furthermore, the participants were asked to rate various aspects of the novelty of the inputs on a Likert scale.  Many participants enjoyed input devices that felt more 'natural' and did not feel 'like a controller'.
    • Contents
      • The researchers concluded that physiological inputs can add enjoyment to a video game experience.  The indirect controls were shown to be less enjoyable since they did not present the same 'instant feedback' that the direct controls did.  The more natural the mapping was between the input and the functionality, the greater the feature was enjoyed.  Finally, researchers believe that the indirect inputs can be used as a dramatic device.
  • Discussion
    • The researchers effectively demonstrated support for their hypothesis.  This paper, however, does not present anything dramatically different from what I've been expecting in the future of gaming.  Perhaps ironically, I'm most interested in the indirect inputs being utilized as dramatic devices.  The 'background' aspects of a game really came to light one day when I was playing a first-person shooter.  Suddenly, I realized that my heart was beating at an alarming rate, and then I noticed the fantastic music playing in the background.  Since then, I have really taken notice of how the overall look or sound of a game enhances the overall experience.  Including indirect input could bring a very individualized experience to video games.  I can only imagine the changes that would happen in a stealth game if I started getting too nervous.
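As a toy illustration of the direct/indirect distinction (my own example, not the paper's mappings; the sensor names, ranges, and constants are assumptions): a directly controllable signal can drive a mechanic immediately, while an indirect signal slowly nudges background parameters for dramatic effect.

def direct_weapon_reach(breath_level, min_len=1.0, max_len=6.0):
    """breath_level in [0, 1] (e.g. from a respiration sensor) -> weapon reach."""
    return min_len + breath_level * (max_len - min_len)

def indirect_weather(arousal, current_rain):
    """arousal in [0, 1] (e.g. normalized skin conductance) nudges a 'rain' level."""
    return current_rain + 0.05 * (arousal - current_rain)   # ease toward the target

rain = 0.0
for frame_arousal in [0.2, 0.6, 0.9]:       # one value per game frame
    rain = indirect_weather(frame_arousal, rain)
print(round(rain, 3), direct_weapon_reach(0.5))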


Picture Source: "Biofeedback game design: using direct and indirect physiological control to enhance game interaction"

Thursday, October 6, 2011

Paper Reading #17: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment


  • Title: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment
  • Reference Information:
    • Andrew Raij, Animikh Ghosh, Santosh Kumar, and Mani Srivastava. 2011. Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 11-20. DOI=10.1145/1978942.1978945 http://doi.acm.org/10.1145/1978942.1978945
    • CHI 2011, Vancouver, BC, Canada.
  • Author Bios:
    • Andrew B. Raij has been affiliated with the University of Florida, the University of Memphis, and the University of South Florida, with more than 40 citations in ACM papers over the last 7 years.
    • Animikh Ghosh is a graduate student; this is their first published research paper.
    • Santosh Kumar is associated with both The Ohio State University and the University of Memphis.  This is his seventeenth research paper over the last 17 years, with an additional 285 citations.
    • Mani Bhushan Srivastava is a well-known researcher from AT&T Bell Laboratories.  Over the last two decades he has published more than 150 papers through the ACM and has nearly 2,500 citations.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that user concerns over the privacy of data collected from wearable sensors have increased.  Furthermore, this concern is compounded when users have a personal stake in the data being collected.
    • Methods
      • The researchers recruited 66 participants from a college campus and divided them into two groups.  The first group, NS, had no personal stake in the data being collected.  The second group, S, did since they were the ones wearing the sensors that were collecting data.  The NS group was simply given a demographics survey and then the privacy survey.  Group S, however, wore sensors that collected data before taking the privacy survey.  Following the survey, Group S then received a review of their analyzed data and then took the survey once again.  Finally, Group S was debriefed to learn more about their concerns.
    • Results
      • The data collected from the survey supports the notion that having a personal stake in the data increases privacy concerns.  Furthermore, that concern grew after an analysis of their data had been presented to the participants.  Additionally, factors such as including the timestamp, place, and/or duration all increased concerns in varying amounts depending on the activity (such as stress or conversation).
    • Contents
      • The paper showed evidence to support the idea that privacy concerns over collected data are affected by the relationship with the data as well as extra information collected such as the timestamp.  One participant stated concern about this linking information, "I'd rather people not know that I felt stressed at my particular job or when at my house, because they wouldn't have the whole picture".  Researchers propose removing as much identifiable information as possible, but realize that a middle ground has to be reached because some information is useless unless it can be linked to a particular person.
  • Discussion
    • Although the researchers were able to effectively support their hypothesis, I did not find this paper particularly interesting or useful.  I took this information as a known fact, if recent events serve as any indication.  For example, millions of people are upset at Google for collecting wireless information in Europe while building their Street View database.  Or take a look at the Sony fiasco just earlier this year, where millions of accounts were compromised.  This privacy battle is only set to get worse as larger amounts of information are stored electronically.  While this information management will have to be performed carefully, I feel that it is integral to further innovations.  Take the widely used example of medical information: if such information is stored electronically, it can be accessed anywhere, at any time, and appropriate knowledge can be obtained when needed.

Picture Source: "Privacy Risks Emerging from the Adoption of InnocuousWearable Sensors in the Mobile Environment"

Thursday, September 29, 2011

Paper Reading #15: Madgets: actuating widgets on interactive tabletops


  • Title:
    • Madgets: actuating widgets on interactive tabletops
  • Reference Information:
    • Malte Weiss, Florian Schwarz, Simon Jakubowski, and Jan Borchers. 2010. Madgets: actuating widgets on interactive tabletops. In Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 293-302. DOI=10.1145/1866029.1866075 http://doi.acm.org/10.1145/1866029.1866075
    • UIST 2010 New York, New York.
  • Author Bios:
    • Malte Weiss is a 4th year PhD student at Media Computing Group.  Just three days ago, on September 26, Weiss returned from an internship at Microsoft Research Cambridge.
    • Florian Schwarz is affiliated with the RWTH Aachen University.  This was his first research paper published through the ACM.
    • Simon Jakubowski is affiliated with the RWTH Aachen University. This is his first publication through the ACM but has been cited before.
    • Jan Borchers is a professor at RWTH Aachen University and was previously an assistant professor at Stanford University.  He received his PhD in 2000 from Darmstadt University of Technology.
  • Summary
    • Hypothesis:
      • Researchers hypothesized that they could create small, lightweight physical widgets that would be placed on top of an interactive tabletop, and that the system could modify the position of each widget as well as other properties of it.
    • Methods
      • To realize this idea, researchers attached several magnets to the widgets and added an array of electromagnets below the display.  By utilizing infrared reflectors and sensors, the system is able to determine both the location and the classification of each widget (by comparing it to a database of stored widget descriptions).  By changing the polarity and strength of the electromagnets beneath the display, the physical widget on top can be translated across the surface (a very simplified sketch of this kind of actuation appears after this list).  Additional magnets can be integrated with a widget to enable other behaviors, such as physical radio buttons (that raise and lower) or an alarm that rings a bell (with a magnet striking it to make noise).
    • Results
      • The researchers were able to construct their prototype as well as several different widgets.  The widgets do not take long to physically build when using a laser cutter.  The time to actually enter the new widget into the database was greater, at about two hours.  The design team is working on designing an application that will expedite that process, allowing for rapid prototyping of new madgets. 
    • Contents
      • The researchers' paper presented Madgets, a method for integrating physical objects with the virtual world.  A key aspect of this research is that both the users interacting with the system and the system itself can modify properties of the madgets.  The system is designed in such a way that the madgets can perform physical tasks apart from moving across the surface, such as ringing bells or acting as physical buttons.  Additionally, they can be extended to perform even more complex tasks: motors can be created by powering a gear, and electronics can be powered through induction.
  • Discussion
    • The researchers successfully demonstrated their basic idea; namely, they constructed a system that contains physical widgets that can be modified by either users or the system.  I was not very excited about the system until I got closer to the end and was exposed to some of the various madgets that have been designed.  The two that really caught my attention were the motor madget and the electricity-producing madget.  Although I cannot come up with very good uses for these two off the top of my head, since I am not the most creative person, I have a feeling that very complex systems can be modeled and constructed with these.  One of the most powerful aspects of this is that the modifications made physically by the users can be saved by the system and recreated anywhere else at any time.
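A very simplified sketch of the actuation idea (my own illustration, not the authors' control code): move a magnetic widget toward a target by energizing the electromagnet in the fixed grid that lies closest to a point just ahead of the widget along its direction of travel. The grid pitch and step size are assumptions.

import math

GRID_PITCH = 20.0   # mm between electromagnet centers (assumed)

def next_coil(widget_xy, target_xy, step=15.0):
    """Return grid indices (col, row) of the coil to set attractive this tick."""
    wx, wy = widget_xy
    tx, ty = target_xy
    dist = math.hypot(tx - wx, ty - wy)
    if dist < 1e-6:
        return None                       # already at the target; switch coils off
    # aim point: a short step from the widget toward the target
    ax = wx + (tx - wx) / dist * min(step, dist)
    ay = wy + (ty - wy) / dist * min(step, dist)
    return round(ax / GRID_PITCH), round(ay / GRID_PITCH)

print(next_coil((100.0, 100.0), (160.0, 100.0)))   # -> (6, 5): coil just to the right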

Picture Source: "Madgets: actuating widgets on interactive tabletops"

Paper Reading #14: TeslaTouch: electrovibration for touch surfaces


  • Title:
    • TeslaTouch: electrovibration for touch surfaces
  • Reference Information:
    • Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison. 2010. TeslaTouch: electrovibration for touch surfaces. In Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 283-292. DOI=10.1145/1866029.1866074 http://doi.acm.org/10.1145/1866029.1866074
    • UIST 2010 New York, New York.
  • Author Bios:
    • Olivier Bau received his PhD at INRIA Saclay.  Bau was conducting postdoctoral research for Disney Research until January 2011.
    • Ivan Poupyrev is a Senior Research Scientist at Disney Research Pittsburgh.  He is interested in developing technologies that integrate the digital and physical worlds.
    • Ali Israr received his PhD from Purdue University in 2007.  He primarily researches haptics and works with the Interaction Design group in Disney Research.
    • Chris Harrison has the coolest name out of all the authors.  He is a 5th year PhD student at Carnegie Mellon University.  
  • Summary
    • Hypothesis:
      • The researchers hypothesized that a haptic feedback system can be implemented by utilizing electrovibration to induce electrostatic friction between a surface and the user's (moving) finger.
    • Methods
      • The researchers constructed a prototype consisting of a glass plate on the bottom and a transparent electrode layer in the middle, topped by a thin insulation layer.  A periodic electrical signal applied to the electrode is the driving force behind the electrostatic friction: the signal displaces charge in the prototype, creating varying amounts of attractive force between the surface and a finger moving across it (a minimal sketch of such a drive signal appears after this list).  Researchers conducted several user studies to determine threshold levels of human detection as well as the differences felt when using varying frequencies and amplitudes.  Finally, the researchers developed several test applications to demonstrate the potential of their device.
    • Results
      • Results from the studies reveal that frequency is related to the perception of stickiness while amplitude was linked to the sensation of smoothness.  Lower frequencies were described as being sticky while higher frequencies felt more waxy.  Low amplitudes were more rough than higher amplitudes: "cement surface" versus "painted wall".  The demonstration programs developed by the researchers show that the haptic sensation can be 'localized' in the sense that only moving digits feel the effects of the electrostatic friction.  Additionally, the strength of the friction can be adjusted based on where the user is touching, leading to various 'surfaces' during an interaction.
    • Contents
      • The research paper presents TeslaTouch, a new form of haptic feedback that does not require any moving parts.  The advantages of lacking mechanical parts range from a uniform sensation across the entire surface to a savings in energy expenditure.  This technology can be utilized to provide information such as the 'density' of pixels on the screen or the size of a file being dragged.
  • Discussion
    • The researchers effectively demonstrated that their idea is feasible by both calculating threshold levels of human detection and developing various demonstration applications.  This was one of the most exciting research papers I have read to date, because I feel that this technology can be both useful and entertaining in a real world situation.  Artists, for example, will likely welcome the greater physical feedback when drawing on a virtual surface, as it is much more natural.  I would like to see a prototype for a mobile device tested in the future, as that (along with tablets) seem to be the most likely places to implement such a system.
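A minimal sketch of the drive signal idea (my illustration; the voltage levels and the specific frequency/amplitude pairs are assumptions, not values from the paper). Per the study's findings, lower frequencies read as stickier and lower amplitudes as rougher.

import numpy as np

def drive_signal(freq_hz, amplitude_v, duration_s=0.05, sample_rate=44100):
    """Periodic voltage applied to the transparent electrode."""
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    return amplitude_v * np.sin(2 * np.pi * freq_hz * t)

# Hypothetical texture presets built from the reported percepts.
TEXTURES = {
    "sticky_rough": dict(freq_hz=80,  amplitude_v=85),    # low freq, low amplitude
    "waxy_smooth":  dict(freq_hz=400, amplitude_v=115),   # high freq, high amplitude
}

signal = drive_signal(**TEXTURES["sticky_rough"])
print(signal.shape, float(signal.max()))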



Picture Source: "TeslaTouch: electrovibration for touch surfaces"

Tuesday, September 27, 2011

Paper Reading #13: Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces


  • Title:
    • Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
  • Reference Information:
    • Andrew D. Wilson and Hrvoje Benko. 2010. Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 273-282. DOI=10.1145/1866029.1866073 http://doi.acm.org/10.1145/1866029.1866073
    • UIST 2010 New York, New York.
  • Author Bios:
    • Andrew (Andy) Wilson is a senior researcher at Microsoft Research.  Wilson received his Ph.D. at the MIT Media Laboratory and researches new gesture-related input techniques.
    • Hrvoje Benko received his Ph.D. in Computer Science from Columbia University in 2007 and has more than 25 conference papers published.  Benko researches novel interactive computing technologies.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that any surface can be converted into an interactive one through the use of depth cameras.  They also wished to avoid 'messy' skeleton tracking by relying on simple 2D image analysis of the depth data (a toy sketch of this kind of analysis appears after this list).
    • Methods
      • The team constructed a room that contained an apparatus for mounting projectors and cameras on the ceiling, with a table in the center of the room.  The depth cameras and projectors were calibrated with the help of IR reflectors.  The prototype was demonstrated across a three-day span.  Features shown at the exposition included picking up virtual items and transferring them to other interactive surfaces (wall to table and vice versa).
    • Results
      • The system was shown to be effective in its early stages.  While there is no programmed limit on the number of users in the room at once, the system took a performance hit after about three and had trouble distinguishing unique entities (people) after six.  Unanticipated actions, such as transferring an object from the table to the wall through two people, were also seen during the exposition.  
    • Contents
      • Researchers envision more interactive experiences in daily lives and have started that process with this research paper.  The current implementation of this allows for only flat objects to become interactive surfaces, such as tables.  But this is a limitation that researchers feel is easy to overcome.  Once overcome, every single item in a room could become interactive in a very natural way.
  • Discussion
    • I believe one of the most interesting contributions of this research paper is that the researchers were able to achieve their goals through simple 2D image analysis.  Not all 3D interactions will work this way, but it opens up exciting possibilities for many applications.  As the paper points out, skeleton tracking is computationally intense and error prone.  Reducing the computation required for simple tracking, while increasing accuracy, would allow programs to focus on other aspects and create more holistic experiences.  One of the greatest challenges to overcome, in my opinion, will be getting users accustomed to interacting with objects that are not only virtual (which they are used to from mobile technology) but barely represented visually.
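A toy sketch of the kind of 2D depth-image analysis involved (my illustration of the general idea, not the authors' code): with a depth camera looking down at a surface, a pixel counts as a 'touch' when it sits within a thin slab just above the pre-captured background surface. The units and thresholds are assumptions.

import numpy as np

def touch_mask(depth_frame, background_depth, near_mm=4, far_mm=20):
    """Both arrays hold per-pixel depth in mm from the ceiling-mounted camera."""
    height = background_depth - depth_frame          # how far above the surface
    return (height > near_mm) & (height < far_mm)    # thin slab just over the surface

# Hypothetical frames: a flat table at 1500 mm with a fingertip at one pixel.
background = np.full((4, 4), 1500.0)
frame = background.copy()
frame[2, 1] = 1492.0        # fingertip ~8 mm above the table
print(touch_mask(frame, background).astype(int))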





Picture Source: "Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces"

Monday, September 26, 2011

Paper Reading #12: Enabling Beyond Surface Interactions for Interactive Surface with An Invisible Projection


  • Title:
    • Enabling Beyond Surface Interactions for Interactive Surface with An Invisible Projection
  • Reference Information:
    • Li-Wei Chan, Hsiang-Tao Wu, Hui-Shan Kao, Ju-Chun Ko, Home-Ru Lin, Mike Y. Chen, Jane Hsu, and Yi-Ping Hung. 2010. Enabling beyond-surface interactions for interactive surface with an invisible projection. In Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 263-272. DOI=10.1145/1866029.1866072 http://doi.acm.org/10.1145/1866029.1866072
    • UIST 2010 New York, New York.
  • Author Bios:
    • Li-Wei Chan is a student at the National Taiwan University.  Chan has had twelve ACM publications in the last four years.
    • Hsiang-Tao Wu is a student at the National Taiwan University.  Wu has had four ACM publications in the last year.
    • Hui-Shan Kao is a student at the National Taiwan University.  Kao has had four ACM publications in 2009 and 2010.
    • Ju-Chun Ko is a student at the National Taiwan University.  Ko has had six ACM publications in 2009 and 2010.
    • Home-Ru Lin is a student at the National Taiwan University.  Lin had two ACM publications in 2010.
    • Mike Y. Chen is with the National Taiwan University.  Chen has had seven ACM publications in the last year.
    • Jane Hsu is a professor at the National Taiwan University.  Hsu has had thirty-six ACM publications in the last twenty-two years.
    • Yi-Ping Hung is a professor at the National Taiwan University.  Hung has had sixty-seven ACM publications in the last twenty-two years.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that they could create a tabletop made interactive through invisible infrared projections.  By using more than one projector, the researchers hoped to provide positional information to the system while displaying normal visual content to the users.
    • Methods
      • An infrared projector displayed tags that are invisible to the human eye but are used by the system as place markers.  These markers allow additional devices, such as modified tablets, to determine their 3D location with 6 degrees of freedom and present information accordingly (a rough sketch of this kind of marker-based pose estimation appears after this list).  Both projectors, the infrared as well as the color one, were placed below the surface.  A diffuser layer was added to the table to reduce glare, but it introduced unwanted spots when reading the infrared data, so a second camera was introduced.  Three additional devices (a lamp, a flashlight, and a modified tablet) were produced to show interaction possibilities.
    • Results
      • Reading the infrared codes did allow the additional devices to determine their 3D location.  This enabled additional information to be displayed, such as a more zoomed-in picture of an area or a 3D view of buildings shown in 2D on the table.  A problem labelled as 'dead reckoning' quickly emerged: when users tilted the tablets too far, in order to inspect the top of a building, the tablet would lose sight of the infrared tags and therefore lose its 3D location.
    • Contents
      • The paper presents a method for enabling interactions beyond simple touch interactions on a surface.  These additional input methods and augmented reality aspects allow users to obtain more information than can be simply displayed by a 2D surface.  This includes displaying unique information to various users all at the same time, depending on what the user wants to focus on.
  • Discussion
    • The paper effectively demonstrates a working prototype of the researchers' hypothesized system.  Hopefully this system will receive further research, because I can see it being a powerful augmented reality tool.  For example, if people are at a museum and the room is filled with infrared tags, every individual could be inspecting and interacting with the same object at the same time without harming another person's experience.  Additionally, physical space would not be required to display textual information that only a limited number of people would be interested in; that information would only be displayed if a user focused on it.  This system appears to offer a powerful experience that can be personalized for every individual using it.
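A rough sketch of the marker-based pose estimation step (assumed, not the authors' pipeline): once a tablet's camera spots a few infrared markers whose tabletop positions are known, a standard perspective-n-point solve yields the tablet's 6-DOF pose. The marker coordinates and camera intrinsics below are made-up placeholders.

import numpy as np
import cv2

# Known marker positions on the tabletop, in mm (tabletop plane: z = 0).
object_points = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]],
                         dtype=np.float32)
# Where the tablet's IR camera saw those markers, in pixels.
image_points = np.array([[320, 240], [420, 235], [425, 330], [322, 338]],
                        dtype=np.float32)
# Placeholder pinhole intrinsics for the tablet camera.
camera_matrix = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
print(ok, tvec.ravel())   # translation of the table frame in camera coordinates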




Picture Source: "Enabling Beyond Surface Interactions for Interactive Surface with An Invisible Projection"

Paper Reading #11: Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input


  • Title:
    • Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input
  • Reference Information:
    • Thomas Augsten, Konstantin Kaefer, René Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, Torsten Becker, Christian Holz, and Patrick Baudisch. 2010. Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input. In Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 209-218. DOI=10.1145/1866029.1866064 http://doi.acm.org/10.1145/1866029.1866064
    • UIST 2010 New York, New York.
  • Author Bios:
    • Thomas Augsten is a masters student at the Hasso Plattner Institute in Potsdam Germany.
    • Konstantin Kaefer develops web applications.  Kaefer is a full time student at the Hasso Plattener Institute.
    • Rene Meusel is a student at the Hasso Plattner Institute who develops various projects, such as a construction game for the iPhone.  Meusel is also interested in photography.
    • Caroline Fetzer is another student at the Hasso Plattner Institute.  This paper was her first publication.
    • Dorian Kanitz is a researcher at the Hasso Plattner Institute.
    • Thomas Stoff is a researcher at the Hasso Plattner Institute; this is his first publication.
    • Torsten Becker is a graduate student at the Hasso Plattner Institute.  He specializes in human-computer interaction as well as mobile and embedded devices.
    • Christian Holz is a Ph.D. student in Germany.  He recently published a paper titled "Imaginary Phone" to appear at UIST '11.
    • Patrick Baudisch earned his PhD in Computer Science from Darmstadt University of Technology in Germany.  Prior to becoming a professor at the Hasso Plattner Institute, Baudisch researched adaptive systems and interactions at both Microsoft Research and Xerox PARC.
  • Summary
    • Hypothesis:
      • The researchers hypothesized that an interactive display can be created that handles tens of thousands of items while maintaining accurate and convenient input methods.  In particular, they wanted to create a floor that both displays information and accepts input in the form of foot gestures and postures.  Additionally, they wanted to avoid awkward interactions with the device, such as walking across the entire surface to reach a menu or creating walking paths to avoid unwanted input.
    • Methods
      • The floor is composed of a screen, followed by a layer of acrylic, with 34 mm glass below that.  The glass installed in the lab weighed 1.2 tons, and only one small section was installed for testing purposes.  The researchers utilized frustrated total internal reflection (FTIR) for input detection.  They also held a few small studies in order to better understand potential foot-based interactions; for example, one study helped determine appropriate 'selection' gestures.
    • Results
      • Based on the studies, the researchers developed appropriate software solutions for various problems.  A context menu is invoked when a user jumps on the floor.  Selection 'points' are set by the user, allowing them to select items as naturally as possible.  Virtual keyboards do not have to have extremely large buttons; oversized keys would actually make typing more uncomfortable for users, since they would have to reach for the keys they want.
    • Contents
      • The paper demonstrated several features of the interactive device.  One of these features was additional degrees of freedom, effectively partitioning the foot into more sections (as opposed to simply 'ball' and 'heel' sections).  This allows for a much greater array of input gestures.  So many, in fact, that the researchers enabled users to play a first-person shooter game using only their feet as input.  This paper lays the foundation for the development of extremely large interactive displays that are not possible using traditional touch input methods alone.
  • Discussion
    • The researchers certainly accomplished their goal with this paper.  They have continued their research by building an alternative, and larger, version of the floor.  Demonstration videos posted online show some of the features discussed in the paper, such as typing on a keyboard.  Allowing users to stroll freely across displays without the fear of accidentally interacting with them is a powerful development and is essentially the key to achieving their goals.  Personally, I am just curious about alternative input methods.  I like what the researchers have done, but I don't feel as if foot interaction is always appropriate.  Allowing various input methods in addition to the foot gestures would, in my opinion, make this an even more powerful device.
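
The FTIR detection step described in the methods can be illustrated with a short sketch.  This is a minimal, hypothetical example assuming a grayscale infrared camera frame and the OpenCV library (cv2); the threshold values, zone names, and function names are my own and do not come from the paper.

# Hypothetical sketch of FTIR-style contact detection (not the Multitoe implementation).
# Assumes OpenCV 4.x; thresholds and zone names are illustrative only.
import cv2

def detect_contacts(ir_frame, threshold=60, min_area=200):
    """Return bounding boxes of bright contact blobs in a grayscale IR frame."""
    _, binary = cv2.threshold(ir_frame, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def partition_foot(box, zones=("heel", "arch", "ball", "toes")):
    """Split one foot-sized contact box into named zones along its long axis.

    Assumes the foot points 'up' in camera coordinates, so the heel is at the bottom.
    """
    x, y, w, h = box
    step = h // len(zones)
    return {name: (x, y + h - (i + 1) * step, w, step) for i, name in enumerate(zones)}

# Usage with a captured frame:
# frame = cv2.imread("ftir_frame.png", cv2.IMREAD_GRAYSCALE)
# for box in detect_contacts(frame):
#     print(partition_foot(box))

The real prototype does far more (for example, identifying users by their soles and tracking contacts at high resolution), but thresholding bright blobs and splitting each one into coarse zones is enough to show how heel versus ball-of-foot input could be distinguished.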



Picture Source: " Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input"

Tuesday, September 20, 2011

Gang Leader for a Day

Sudhir Venkatesh's work is, to say the very least, both inspiring and motivating.  Prior to entering the CHI course at Texas A&M, I had never really imagined spending time with anybody except the people I was familiar with, while participating in tasks I was familiar with.  Entering college, for example, was a fairly nerve-racking experience.  It introduced both unknowns at once: new people and new activities.  I quickly settled into the computer science environment and stuck close by it.  Over the years I have been introduced to new people and experiences, but only on a minimal level.  For example, I met a very good friend of mine through tutoring, somebody I would never have approached outside of the college environment.  That didn't stop us from becoming best friends, however, which now makes me consider how much I am possibly missing out on by staying in my comfort zone.
While Venkatesh questions the validity of his friendship with J.T., I feel as if they were friends.  Granted, they may not have had the same kind of relationship that Venkatesh maintained with his other 'friends', but what if their friendship wasn't determined by Venkatesh's point of view?  J.T. was in charge of hundreds of Black Kings, all of whom he certainly had to consider at least friends.  Look at J.T.'s senior officers, T-Bone and Price: both had been friends with J.T. since high school, yet both were using him as a means to better their own lives (much as J.T. relied on them).  That is powerful evidence against the idea that Venkatesh was simply using J.T. for his own personal advancement and therefore could not have had a friendship with him.
While neither I nor Venkatesh is suggesting that it is a brilliant idea to go out and mingle with the local gangs, I am suggesting that a key social experience cannot be obtained without stepping outside of one's box.  I have done countless things over the past year that I would NEVER have even considered doing if I had not crossed paths with my friend.  Similarly, I have been able to introduce my friend to things that are brand new to him.  It has been a mutually beneficial stroke of luck that I could not be more thankful for.  So, personally, I will never forgo an opportunity afforded to me again.  While not every step outside of one's comfort zone will have such a spectacular outcome, the innumerable opportunities that would certainly be missed by never taking that step make it worth trying.
  

Paper Reading #10: Sensing Foot Gestures from the Pocket


  • Title:
    • Sensing Foot Gestures from the Pocket
  • Reference Information:
    • Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong. 2010. Sensing foot gestures from the pocket.  In <em>Proceedings of the 23rd annual ACM symposium on User interface software and technology</em> (UIST '10). ACM, New York, NY, USA, 199-208. DOI=10.1145/1866029.1866063 http://doi.acm.org/10.1145/1866029.1866063
    • UIST 2010 New York, New York.
  • Author Bios:
    • Jeremy Scott is a graduate student at the Massachusetts Institute of Technology.  His undergraduate thesis was the topic of this research paper.
    • David Dearman is a professor at Dalhousie University.  In the last 6 years he has published 21 research papers through the ACM. 
    • Koji Yatani is finishing up his Ph.D. this summer at the University of Toronto and will be working at Microsoft Research beginning this fall.  His interests include mobile devices and hardware for sensing technologies.
    • Khai N. Truong is an Associate Professor at the University of Toronto.  Truong's research is in improving the usability of mobile computing devices.
  • Summary
    • Hypothesis:
      • The researchers hypothesize that utilizing foot gestures as input is both feasible (they can be accurately recognized) and socially acceptable.  This paper focuses primarily on the first of these two claims, with a future study planned to investigate the second.
    • Methods
      • The researchers conducted two small studies for this paper.  The first measured the range over which users can accurately aim foot selection motions; the feedback from this study was also used to determine which movements (rotations of the foot) were the most comfortable to perform.  The second study was designed to determine whether a cell phone with a three-axis accelerometer could recognize these selection gestures when in the user's pocket or mounted on their waist (a rough sketch of this kind of recognition appears after the discussion below).
    • Results
      • The first study primarily showed the researchers the range of motion that potential users could comfortably reach.  The interviews after this study also revealed that rotations of the heel were the most comfortable movement to perform.  The second study showed that a mobile device mounted on the user's side was the most effective at recognizing gestures.  The next most accurate position was the user's front pocket.  The researchers hypothesize that this placement is less accurate than the side mount because the phone has some room to shift around when placed in a pocket.
    • Contents
      • The research paper presents an alternative interaction method for mobile devices.  This interaction method aims to be socially acceptable as well as free of visual feedback.  It would allow users to perform tasks on their phone, such as changing songs, without actually having to pull the phone out to do so.  Other visual-feedback-free methods are already being investigated (see my blog post about Imaginary Interfaces, titled "Paper Reading #1") or already in use (such as voice commands).  The goal of this investigation was to discover a new method that was accurate while avoiding being socially awkward.
  • Discussion
    • Immediately upon reading this article I recalled the Imaginary Interfaces paper.  Both papers essentially study input methods that don't require visual feedback, and both face a question about accuracy, since a system with a low recognition rate would essentially defeat its own purpose.  This paper is very exciting because it requires nothing more than what a large percentage of people already have: a smartphone carried in a pocket.  The early accuracy of this system is encouraging; the researchers have certainly shown what they set out to prove.  The biggest disappointment about this paper is that they haven't performed the study in daily life yet.  I am very eager to learn more from reading their follow-up paper.
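
The recognition idea from the methods section can be sketched briefly.  This is a hypothetical example, not the authors' pipeline: the windowed statistical features and the SVM classifier are my assumptions, and it presumes pre-segmented gesture windows from the phone's three-axis accelerometer plus the NumPy and scikit-learn libraries.

# Hypothetical foot-gesture classifier from 3-axis accelerometer windows.
# Feature set and SVM choice are assumptions, not the paper's exact method.
import numpy as np
from sklearn.svm import SVC

def window_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration for one gesture."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    energy = (window ** 2).sum(axis=0) / len(window)
    return np.concatenate([mean, std, energy])  # 9 features per window

def train_classifier(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: one gesture name per window."""
    features = np.array([window_features(w) for w in windows])
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf

# Usage with recorded data:
# clf = train_classifier(training_windows, training_labels)
# print(clf.predict([window_features(new_window)]))

A complete system would also need to segment gestures out of the continuous accelerometer stream and account for the placement differences (side mount versus pocket) that the results describe.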



Picture Source: "Sensing Foot Gestures from the Pocket"