HCIC

Human Computer Interaction Consortium

Call for Papers

HCIC 2010 Winter Workshop

Program Committee


Wendy Kellogg (wkellogg@us.ibm.com) and Judy Olson (jsolson@uci.edu)

HCI Ways of Knowing: 'How We Know What We Know' and 'Does It Work?'

In most previous years at HCIC, the Governing Board picked a theme and authors submitted papers, from which the chairs then selected a final set. Papers were presented for about half an hour, a discussant spoke for 15 minutes, and the remaining 45 minutes were spent in open discussion. This year, the Governing Board decided to break from this pattern for a year and present a set of tutorials and discussions surrounding the topic of what it means to know something in the field of HCI.

Over its history, HCI has become more eclectic in its scope, perspectives, and research methods. From a fairly narrow set of methods derived from Cognitive Psychology and Computer Science, HCI has grown to include methods from Sociology, Anthropology, Survey Research, and Design. Some aim at scientific discovery and explanation; others at the creation of high-quality artifacts and user experiences. All are meant to produce insight into HCI. We are not all expert in all of these methods, so in many cases it is hard to know whether to believe what we are told, how to evaluate findings on their own terms (i.e., within the paradigm and against the standards of the method or approach used), or how to appreciate knowledge embedded in unfamiliar work.

In addition, the current landscape of technology use offers a number of new opportunities and ways to collect and analyze data that we are just beginning to understand and that only a few are skilled in using. Today it is possible to collect data from large numbers - perhaps millions - of users of web-based or mobile applications about the details of their interactions (e.g., navigation history, clicks, search terms, locations). Analysis of millions of data points requires a different style of argumentation than conventional statistics, for example, and issues of sampling and effect size move to the center. We wish to provide tutorials and deep discussion about these new research issues and methods, about what questions they can and cannot answer, and about the limits of interpretation.

We envision using the time at the Consortium in a number of different ways. For example, we can imagine having an expert in a method talk about the guidelines for doing it right, so that not only will the method be done right, but reviewers like us will be better able to judge its merits. This might be followed by a session presenting two papers that use the method, with the authors reflecting on what they heard in the tutorial, on what can be learned from the method, and on its limitations. In another session, we envision having an expert, for example in design research, both give us insight into the fundamentals of design research and lead an exercise involving design and critique, so that those without backgrounds in design research can get a better sense of what it means to have a design "that works" and of the kind of knowledge gained in the process.

We would like to see an opening session cover some of the philosophical issues about how we know what we know and how we can ask of a technology artifact "does it work?" This might take the form of a panel of people from different methodological and disciplinary traditions talking about the role of theory in selecting methods, the role of design traditions or prior art in how a design is framed and evaluated, what questions can be answered by various approaches and methods, and so on.

The majority of the sessions would then address each method's requirements, standard levels of acceptability (e.g., the external validity of a task the participant is asked to do, criteria for design critiques, the generalizability of findings from one domain to another), and the questions it can answer (and those it cannot). We would like to cover some of the following old and new methods:

Methods established in HCI:

  • Experiments, including issues about the external validity of the tasks, the generalizability from the participants selected, the use of think-aloud protocols, etc. This could include related issues in using larger simulations or serious games to collect data about emergent group behavior.
  • Surveys, including issues of sampling, designing and formatting questions, and analysis. This could include polls and attitudinal scales such as Likert scales.
  • Ethnography or naturalistic observation, including the selection of a site, what to look for, how one corroborates the themes one chooses to discuss, and issues of generalizability and replicability.
  • Semi-structured interviews and the emergent themes developed under the rubric of Grounded Theory.
  • Design research, including discussion of the development of the requirements, how the design meets the requirements (at many levels), and the role of the critique.
  • Building a digital device that fits a known user need, and evaluating it according to some behavioral measure.
  • Content analysis and conversation analysis, including selecting the coding scheme, attaining inter-coder reliability, and choosing the level of analysis.
  • Models and modeling, including math models, agent-based simulations, and econometric modeling.

Methods newer to HCI:

  • Statistics and large data sets, including how to clean data for analysis while preserving privacy, how to measure meaningful effects, how to describe the population and situation from which the large data arise (including who or what might be excluded from the set), and how to use visualization of large data sets both for exploratory analysis and for final presentation.
  • Behavior analytics, including trace analysis, click-throughs, logging, and search term analysis.
  • Methods needed to assess adoption patterns and user appropriation of technology.
  • Web content analysis and relational analysis.
  • Biometric measures of stress and affect (e.g., blood pressure, pupil dilation).
  • Cultural probes.
  • Diaries and cell phone probes.

We might combine some issues across methods, such as privacy or sampling. We might also include a discussion of the value of using multiple methods, often called triangulation. We could then wrap up with some higher-level discussion, stimulated by the juxtaposition of these methods over three compact days, about how a diverse research community can determine "what is good work."

If this all goes well and is seen as valuable, we might like to engage in an effort to codify this discussion and its follow-on activities (e.g., an NSF-sponsored workshop) in a book and an accompanying, evolving web site, tentatively titled "HCI Ways of Knowing: Methods, Measures and Issues in Their Use."

The process for forming this HCIC will be understandably different from the standard process of submitting, judging, and presenting papers. With this call, we are presenting some topic/session ideas and soliciting additional ideas for topics and/or pointers to people who can give clear, balanced tutorials to HCIC's mixed-method, mixed-background audience. We ask interested parties to submit ideas by September 30, after which we will work with the submitters to put together a program that will be instructive, reflective, and inclusive.

Deadlines

  • Wednesday, September 30, 2009: Ideas due
  • Tuesday, February 23, 2010: Boaster papers due
  • February 24-28, 2010: HCIC at the glorious YMCA camp

I. Submissions

PLEASE NOTE THE ACCEPTANCE CRITERIA: All submissions will be reviewed by the program committee. Acceptance will be based on relevance to the theme. We also seek a balance between industry and academic papers, and prefer some balance in the distribution of papers across member organizations. Note that students are not eligible to present a major paper.

Each institution may submit multiple papers. The Program Committee will select among submissions according to the criteria above.

Requirements

  1. A cover page with:
    • Title, name(s) and address(es) of authors/participants
    • A 50-word abstract
  2. A draft of the paper of not more than 10 pages in CHI format, OR a 2-page abstract

II. Boasters

A "Boaster" is an eight page or less paper. Boasters will be posted in their entirety at the HCIC web site, but only abstracts will be distributed at the conference. At the opening session of the meeting, all Boaster presenters will be asked to stand, announce their names and read the title and a 50 words abstract of their boasters or give a 2 minute overview. This procedure provides a valuable way of getting smaller papers and new authors - especially graduate students - to attend the conference and to interact with the attendees. Historically, we find that the resulting interactions have been beneficial for all concerned.

Requirements:

  1. A cover page with:
    • Title, author(s) (indicate those available to chat at the meeting)
    • A one-word keyword
    • A 50-word abstract
  2. A paper of 8 pages or fewer.

Deadline for Boasters: Tuesday, February 23, 2010

Directions for Boaster Submission:

Boasters must be submitted online in PDF format via the HCIC web page. All Boasters are automatically accepted; there is no review process.

If you have any questions contact the HCIC Webmaster at hcic.webmaster@umich.edu.

Creating PDF Files

If you do not have software available to create a PDF file, a number of free PDF conversion tools are available online.

Presentation and Attendance Rules

The rules of the Consortium state that only employees of member organizations may present major papers. Papers may have nonmember coauthors; however, the Board must approve either the nonmember's attendance or their attendance and co-presentation. Obviously, invited speakers are exempt from this rule. Students are not eligible to present a major paper; however, they are strongly encouraged to submit a Boaster.

HCIC Online Paper Archives

All papers and Boasters will be posted and archived on the HCIC web site. Additionally, we request that authors and discussants submit their slides to be posted online. Access to the HCIC web site and paper archive is limited to member organizations (through IP address or password authentication). The HCIC web site is not indexed by public search engines. If you have any questions or concerns about the online paper archive, please contact Lai Tutt.