Thursday, November 3, 2011

Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface


  • Title: Sensing cognitive multitasking for a brain-based adaptive user interface
  • Reference Information:
    • Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert J.K. Jacob. 2011. Sensing cognitive multitasking for a brain-based adaptive user interface. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 383-392. DOI=10.1145/1978942.1978997 http://doi.acm.org/10.1145/1978942.1978997
    • Presented at CHI 2011 in Vancouver, British Columbia, Canada.
  • Author Bios:
    • Erin Treacy Solovey with Tufts University.  Fifteen publications.
    • Francine Lalooses with Tufts University.  Two publications.
    • Krysta Chauncey with Tufts University.  Six publications.
    • Douglas Weaver with Tufts University. Two publications.
    • Margarita Parasi with Tufts University.  First publication.
    • Matthias Scheutz with Tufts University, Indiana University, and the University of Notre Dame. Thirty-seven publications.
    • Angelo Sassaroli with Tufts University.  Six publications.
    • Sergio Fantini with Tufts University.  Seven publications.
    • Paul Schermerhorn with the University of Notre Dame and Indiana University. Twenty-four publications.
    • Audrey Girouard with Tufts University and Queen's University. Nineteen publications.
    • Robert J.K. Jacob with Tufts University and MIT. Seventy-five publications and 1,004 citations.
  • Summary
    • Hypothesis:
      • An fNIRS device can capture the tasking state of a human mind accurately enough for use in an HCI setting, approaching the level of an fMRI machine. The researchers also hypothesized that a system could be designed to detect this tasking state in real time and support the user in their tasks.
    • Methods
      • The first hypothesis was tested by measuring how accurately an fNIRS device could classify a user's tasking state (branching, dual, or delay). Participants performed a variety of tasks while the researchers recorded how often the system classified the current state correctly (see the classification sketch below). The second test had participants perform various activities with a robot while the researchers analyzed how well their system facilitated the interaction.
    • Results
      • The fNIRS device distinguished between the various states at accuracies above 50%, but it was not highly accurate. The researchers noted that the group of testers was small and that various improvements could be made to raise the accuracy. Building on this, the researchers created a tool that attempted to adapt to these changing states.
    • Contents
      • The researchers note that the fNIRS device is not as effective at collecting data as the fMRI machine, but it is much more practical in a real-world environment. The researchers built a proof-of-concept system that showed promise.
  • Discussion
    • These researchers proposed an interesting system and effectively supported the second hypothesis (the first still needs further analysis). I like this idea because I am constantly attempting to multitask, and the little tasks that could easily be automated, such as switching contexts, are the ones that take up the most time. If this is implemented effectively, workplace productivity should increase significantly.
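
A minimal sketch of the classification step, for illustration only: the feature choices, window shapes, and use of scikit-learn are my assumptions, not the authors' actual pipeline. The idea is simply that windows of fNIRS readings become feature vectors, and a classifier is trained to label them branching, dual, or delay.

    # Hypothetical sketch of fNIRS state classification (not the authors' exact pipeline).
    # Each trial is assumed to be a (samples x channels) window of hemoglobin readings.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def featurize(window):
        """Reduce one fNIRS window to per-channel summary statistics."""
        return np.concatenate([window.mean(axis=0),        # average level per channel
                               window.std(axis=0),         # variability per channel
                               window[-1] - window[0]])    # net change over the window

    # Placeholder data standing in for recorded trials: 60 windows, 200 samples, 4 channels.
    rng = np.random.default_rng(0)
    windows = rng.normal(size=(60, 200, 4))
    labels = rng.integers(0, 3, size=60)   # 0=branching, 1=dual, 2=delay (from the task design)

    X = np.array([featurize(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    print("accuracy: %.2f" % cross_val_score(clf, X, labels, cv=5).mean())

With real recordings, accuracy meaningfully above chance for the states being compared is what would justify driving an adaptive interface from the classifier's output.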



Picture Source: "Sensing cognitive multitasking for a brain-based adaptive user interface"

Tuesday, November 1, 2011

Paper Reading #26: Embodiment in Brain-Computer Interaction


  • Title: Embodiment in Brain-Computer Interaction
  • Reference Information:
    • Kenton O'Hara, Abigail Sellen, and Richard Harper. 2011. Embodiment in brain-computer interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 353-362. DOI=10.1145/1978942.1978994 http://doi.acm.org/10.1145/1978942.1978994
    • Presented at CHI 2011 in Vancouver, British Columbia, Canada.
  • Author Bios:
    • Kenton O'Hara has been cited in nearly 500 articles published through the ACM in the last 18 years.  He is affiliated with Hewlett-Packard as well as Microsoft Research.
    • Abigail Sellen is a Principal Researcher at Microsoft Research Cambridge. She joined Microsoft Research after working for Hewlett-Packard Labs.
    • Richard Harper is a Principal Researcher at Microsoft Research in Cambridge.  
  • Summary
    • Hypothesis:
      • Researchers hypothesize that Brain-Computer Interaction can have a social impact when used in different environments.  In particular, when a BCI is used in a gaming environment, the interactions between the people involved change fundamentally.  The researchers hope to examine exactly how this interaction changes.
    • Methods
      • The researchers sent the MindFlex game home with three different groups of people and asked them to record their gaming experience. Each group chose when, where, and with whom to play the game. This created a very realistic and fluid environment in which people freely came and went.
    • Results
      • The analysis revealed a few novel behaviors, such as the unnecessary mental imagery users created in an attempt to control the game. Many users would think 'up, up, up' to raise the ball when in fact all they had to do was concentrate a little harder (see the toy control-loop sketch below).
    • Contents
      • The paper presents results that can be used to extend BCI into other environments. These environments will not be typical of previous social interaction spaces, since new problems arise, such as users being unable to acknowledge feedback from the people around them.
  • Discussion
    • The researchers' hypothesis was very open-ended: simply that BCI interaction needs to be studied in order to be expanded. The researchers effectively studied these interactions and presented several clear findings. I had never thought about the fact that simply moving your hand or responding to a question could have such a profound effect on concentration. I hope this research continues so that more fluid, invisible computing can be achieved in the future.
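
To make the 'up, up, up' observation concrete, here is a toy control loop in the spirit of the MindFlex. The headset API is hypothetical ('read_attention' stands in for whatever single attention value a consumer EEG device reports), but it shows why imagined imagery is wasted effort: only the measured concentration level moves the ball.

    # Toy sketch of concentration-based control, MindFlex-style (hypothetical API).
    import random
    import time

    def read_attention():
        """Placeholder for a headset reading: 0.0 (relaxed) to 1.0 (focused)."""
        return random.random()

    fan_speed = 0.0
    for _ in range(10):
        attention = read_attention()
        # The fan (and so the ball's height) tracks concentration directly;
        # thinking 'up, up, up' helps only if it happens to raise this number.
        fan_speed += 0.3 * (attention - fan_speed)   # smooth toward the reading
        print("attention %.2f -> fan speed %.2f" % (attention, fan_speed))
        time.sleep(0.1)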



Picture Source: "Embodiment in Brain-Computer Interaction"

Paper Reading #25: TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration


  • Title: TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration
  • Reference Information:
    • Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, and Robert C. Miller. 2011. Twitinfo: aggregating and visualizing microblogs for event exploration. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 227-236. DOI=10.1145/1978942.1978975 http://doi.acm.org/10.1145/1978942.1978975
    • Presented at CHI 2011 in Vancouver, British Columbia, Canada.
  • Author Bios:
    • Adam Marcus is a graduate student at MIT. He received his undergraduate degree from Rensselaer Polytechnic Institute.
    • Michael S. Bernstein researches crowdsourcing and social computing.  He is in his final year at MIT.
    • Osama Badar is a graduate student at MIT.
    • David R. Karger is a member of the AI laboratory at MIT.  He has spent time working for Google.
    • Samuel Madden is an associate professor at MIT. He has developed systems for interacting with mTurk.
    • Robert C. Miller is a professor at MIT who received his PhD from Carnegie Mellon University. He has 71 publications in the ACM over the last 15 years.
  • Summary
    • Hypothesis:
      • The researchers hypothesize that information aggregated from microblog sources, Twitter in particular, can be used to study events.  This should be accomplished in real time and produced by a system that makes data visualization and exploration simple and intuitive.
    • Methods
      • The researchers developed a tool called 'TwitInfo' to implement the goals set forth in their hypothesis. They then evaluated the system's effectiveness by letting average Twitter users and an award-winning journalist test it.
    • Results
      • The evaluation showed that TwitInfo effectively detected events from spikes in tweet volume and allowed users to quickly gain a high-level understanding of a chain of events. The journalist emphasized that this understanding was only shallow, but that the tool still allows people to follow events from a first-person point of view as they unfold.
    • Contents
      • The paper presents a tool that analyzes Twitter information in real time without being domain-specific, something that had not been effectively accomplished before. The major limitations are that not every interesting event produces a flagged peak in tweet volume (a yellow card in a soccer game, for example) and that the information available is generally shallower than what a standard news report would provide (see the peak-detection sketch below).
  • Discussion
    • The researchers were certainly able to create a tool that performs the intended functionality. It is interesting that this article was mentioned, since Dr. Caverlee recently gave a speech to UPE about a project involving Twitter. Data mining this expansive source of information can certainly produce interesting results if done correctly. I'm actually a little surprised that Twitter itself doesn't do more to support such functionality.
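
For illustration, here is a minimal version of the peak-detection idea: TwitInfo flags time bins whose tweet volume jumps well above recent history, adapting the running mean/deviation update used in TCP's round-trip-time estimator. The weights and threshold below are my assumptions, not the paper's exact parameters.

    # Minimal peak detector over per-minute tweet counts (sketch; parameters assumed).
    def find_peaks(counts, alpha=0.125, beta=0.25, threshold=2.0):
        """Return indices of bins whose count spikes above mean + threshold * dev."""
        mean = float(counts[0])
        dev = 0.0
        peaks = []
        for i, c in enumerate(counts[1:], start=1):
            if dev > 0 and c > mean + threshold * dev:
                peaks.append(i)                      # spike relative to recent history
            # exponentially weighted updates, as in TCP's timeout estimator
            dev = (1 - beta) * dev + beta * abs(c - mean)
            mean = (1 - alpha) * mean + alpha * c
        return peaks

    # A quiet stream with one burst (say, a goal in a soccer match):
    counts = [10, 12, 11, 9, 11, 80, 75, 14, 10, 11]
    print(find_peaks(counts))                        # -> [5, 6], the burst bins

Adjacent flagged bins would then be merged into a single event window before labeling it with representative tweets.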



Picture Source: "TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration"