Monday, October 20, 2014

Theme 6: Qualitative and case study research Reflection


I blogged about Haibo's lecture in great detail in my last reflection, so I'm not going to repeat that; it mainly concerned last week's theme, "design research". This week we've read texts as usual and I've also attended a seminar.

During the seminar we were divided into smaller groups where we discussed our texts. Then we asked Leif questions based on our discussions. The main topic concerned what a case study is and how it should be handled. The mandatory article, Eisenhardt, K. M. (1989). Building Theories from Case Study Research. Academy of Management Review, 14(4), 532-550, already covered much of the seminar's main topic. However, it's always good with repetition and clarification. We discussed how a case study focuses on a limited area or scenario, that the optimal number of cases for building theories is 4-10, how you first get familiar with the field and then create the methods to be used, that multiple methods are preferable, how to define undefined cases as research fields (uniqueness), that case studies are often open-ended, and that there's never too little data in a case study but quite the opposite. In other words, some case studies have greater potential for building theories, and we assessed that potential by examining a case study using the article mentioned above. In summary, if you aim to build theories there are preferred ways of doing it; however, not all case studies have this intention, as some just investigate one singular case.

Other topics we discussed were discourse analysis and coding rules, mainly the latter. Coding rules are a way to restrict subjectivity and categorize qualitative results. The idea is that anyone should be able to do the coding and get the same answers; it should be objective. Therefore multiple people often do the coding, and often they are not the researchers themselves, just to avoid subjectivity. The coding is often done on samples of the results, not all of them.
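The idea that independent coders should arrive at the same categories can also be checked numerically; a common statistic for this is Cohen's kappa, which corrects raw agreement for chance. This wasn't part of the seminar, just a minimal sketch with made-up codings:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: probability both coders independently pick each category.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten interview excerpts by two independent coders.
a = ["identity", "play", "identity", "work", "play",
     "identity", "work", "play", "identity", "work"]
b = ["identity", "play", "identity", "play", "play",
     "identity", "work", "play", "identity", "work"]
print(round(cohens_kappa(a, b), 2))  # → 0.85 (high, but not perfect, agreement)
```

A kappa close to 1 means the coding rules really do restrict subjectivity; a low kappa means the rules need to be refined before the coded data is analyzed.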

Another thing that Leif mentioned during the seminar was how some cases or phenomena can be described accurately in multiple different ways. He used the two theories describing light as an example: one describes it with waves and the other with particles. Both are considered correct, but the two fields can't cooperate as well as they could if they were one. Within a single case study I think it's important to settle on one way of describing the phenomenon in order to be convincing rather than confusing. If there are multiple possible descriptions, this should be underlined to make it very clear.


This week I've learnt about the difference between case studies intended to build theories and case studies intended to research a singular case; how you often go from qualitative data to quantitative data and back to qualitative data again, and so on, in order to answer your questions, confirm, explore and get a deeper understanding; and possible ways to handle qualitative data before analyzing it.

Monday, October 13, 2014

Theme 5: Design research Reflection

This week we attended two lectures, one given by Haibo Li and one given by Eva-Lotta Sallnäs. In addition, we read three texts: Réhman, S., Sun, J., Liu, L., & Li, H. (2008). Turn Your Mobile Into the Ball: Rendering Live Football Game Using Vibration. IEEE Transactions on Multimedia, 10(6), 1022-1033; Moll, J. and Sallnäs, E-L. (2013). "A haptic tool for group work about geometrical concepts engaging blind and sighted pupils." ACM Transactions on Accessible Computing, 4(4), 1-37; and Huang, Y., Moll, J., Sallnäs, E-L., Sundblad, Y. (2012). "Auditory feedback in haptic collaborative interfaces." International Journal of Human-Computer Studies, 70(4), 257-270.
Haibo's lecture touched upon the importance of choosing the right, easy problem. It's more important to find a problem that matters and that has a convincing solution than to find a problem which lacks obvious solutions. A vivid example is two men being chased by a tiger, where the wrong kind of problem is "How do I outrun the tiger in order to escape?" and the easy problem is "How do I outrun the other man in order to escape?". It's about asking the right questions. If you still have too many ideas to choose between, Haibo briefly discussed how you can differentiate between "great ideas" and "big ideas", and then how you can validate an idea with the help of prototyping. To evaluate the prototype you can measure usability: efficiency, satisfaction, and effectiveness. Another thing to keep in mind during evaluation is that mathematics and statistics help us find (and describe) exact correlations that otherwise wouldn't have been found so easily. He gave the example of analysis of variance (ANOVA), a comparative way of testing whether independent variables have an effect on a dependent variable. At the end he briefly discussed the importance of communicating your technology ideas as an entrepreneur when presenting them to people outside the technological world.
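To make the ANOVA mention concrete: the one-way F statistic compares the variance between groups (different levels of the independent variable) with the variance within groups. A minimal sketch with made-up numbers, not an example from the lecture:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance."""
    k = len(groups)                                  # number of conditions
    n = sum(len(g) for g in groups)                  # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical task-completion times (seconds) under three interface conditions.
f = one_way_anova_f([[12, 14, 13], [18, 17, 19], [12, 13, 12]])
print(round(f, 1))  # a large F suggests the independent variable (condition) matters
```

In practice the F value is compared against an F distribution to get a p-value, but the intuition is already in the ratio: large between-group differences relative to within-group noise.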

Sallnäs' lecture drew heavily on her own research in collaborative haptics. She mentioned that collaborative settings yield a lot of data for free. An example from the lecture: in order to measure how haptic feedback affects people, she measured presence, which is divided into social and virtual presence. She measured virtual presence with the help of an existing scale made for this purpose. However, there was no scale for measuring social presence, so they had to define social presence and then build their own way of measuring it. This example shows that there are not established measures for everything, and that it's possible to get around that. She also mentioned that there are general ways of measuring things in order to put them in relation to other studies; for example, by using a Fitts' law task on input-output devices you can compare your input-output device with devices outside the study. She also discussed the importance of defining the study's keywords and collaborative setting, as well as the relation between quantitative and qualitative data, all with her own research as the base.
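To illustrate the Fitts' law point, here is a minimal sketch (my own numbers, not from the lecture) of the Shannon formulation of the index of difficulty, and the throughput measure typically used to compare input devices across studies:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty (Shannon formulation), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Bits per second; lets input devices be compared across studies."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical pointing task: 240 px to a 16 px wide target, done in 0.8 s.
print(index_of_difficulty(240, 16))          # → 4.0 bits
print(round(throughput(240, 16, 0.8), 2))    # → 5.0 bits per second
```

Because the task is standardized, a throughput of 5 bits/s for one device can be put in relation to published throughputs for mice, styluses, etc., which is exactly the kind of cross-study comparison she described.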


Between the pre-reflections and the reflections after this week's theme, I think I've learned quite a bit, especially from reading the texts and reflecting around this week's questions.

Friday, October 10, 2014

Theme 6: Qualitative and case study research Pre


I read this paper: Gilbert et al. (2013). The psychological functions of avatars and alt(s): A qualitative study. Computers in Human Behavior, 32 (2014), 1-8.

I'll briefly describe the core of the paper with a quote from it:
“Prior research has shown that approximately 50% of active participants in the 3D virtual world of Second Life have one or more secondary avatars or ‘‘alts’’ in addition to their primary avatar. Thus, these individuals are operating a ‘‘multiple or poly-identity system’’ composed of a physical self, a primary avatar, and one or more alts. However, little is known about the functions these virtual identities serve for the virtual-world user.”

Which qualitative method or methods are used in the paper? Which are the benefits and limitations of using these methods?
The method consisted of semi-structured interviews with Second Life participants, conducted within the virtual 3D world. The participants had to have a primary avatar and at least one secondary avatar. The study recruited participants via an announcement in Second Life's calendar. After the interviews they received virtual in-world money as compensation for participating. Other factors that could impact the results were that participants had to have been members of Second Life for at least 6 months, be 18 years or older, and be fluent English speakers. During the interviews the participants described "their primary avatar, their physical self, and then each alt."

They made coding rules for the data; both bottom-up and top-down approaches were used to come up with these. Prior research as well as the interview data gave them a model for interpreting the results in relation to the coding rules.

Limitations according to the paper: a small sample of in-depth interviews, since "their labor-intensive nature often constrains the size of the sample that can be used". This small sample makes it hard to generalize, considering how many users there are within Second Life. The paper states that it is only descriptive of how future studies could be done; because of its limitations it should be seen as a starting point. The study also only applies to Second Life and not to other virtual worlds. The small number of participants meant that they could not treat a frequent answer as more important than an infrequent one; they said caution is needed.

Benefits according to the paper: “Qualitative techniques are useful when an area is not well studied or understood as they can reveal nuance and context that is difficulty to capture through quantitative techniques alone”.

Other advantages and disadvantages: They performed the interviews in Second Life, which let them reach active and geographically distant participants. However, would the participants have given the same answers if they had been interviewed in person (as their physical identity)?

What did you learn about qualitative methods from reading the paper?
I think I already knew much of it from themes 4 and 5 the past two weeks. One thing that I did not know about was the coding of interview data, and how the researchers cooperated to analyze the data: that it can be done in such a systematic way, based on previous research, coding rules, the data itself, and a model.

Which are the main methodological problems of the study? How could the use of the qualitative method or methods have been improved?
As they wrote, more frequent answers do not necessarily imply that they are more important. The results are not generalizable, not even within Second Life. The method could have included a larger number of participants, perhaps by hiring people to perform the interviews, if possible with the use of a framework/interview form and training. To be a bit harsh, it could also have included non-English speakers, giving the generalizability more potential. The participants could also have been interviewed in all their identities, as the physical self, the primary avatar, and the other avatars (with a time span between the interviews), to see if their identity varied. According to the paper, one woman didn't understand the questions and her results were therefore left out; unnoticed misunderstandings like this could have been more easily avoided in a face-to-face interview, but that would have had other disadvantages. The coding and analysis of the qualitative data involved differing opinions on how to interpret things, according to the paper. I think they had a good strategy with coding rules; still, they weren't always able to agree. I think a larger number of participants could partly have solved this problem.

Read the following article:
Eisenhardt, K. M. (1989). Building Theories from Case Study Research. Academy of Management Review, 14(4), 532-550.

Select a media technology research paper that is using the case study research method. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. Your tasks are the following:

Briefly explain to a first year university student what a case study is.
A case could for example be a situation, an event, a person, or an organization of some kind, and the study's aim is to gain deep knowledge about that particular case. It aims to describe, explore, and explain. Gaining such deep knowledge means spending a lot of time on the case, which is why you often look at a single case or a few cases rather than many.

Use the "Process of Building Theory from Case Study Research" (Eisenhardt, summarized in Table 1) to analyze the strengths and weaknesses of your selected paper.

The participants in relation to the study's main questions are good given the number of samples the study had, since Second Life seems to be the most popular virtual world. There were, however, many factors that could have had an impact (mentioned above), and no prior research saying that Second Life can represent virtual worlds in general, which is a weakness in terms of generalizability. Participants took part voluntarily, which is a weakness (not totally random). The study involved both quantitative and qualitative data; besides the interviews, the data gathering also included a web form. However, the different ways of gathering data did not confirm each other, as the questions/data from them were unrelated; on the other hand, this could reveal relationships the researchers didn't know about. There was no overlap between data collection and data analysis: coding rules were made, and the data was clearly coded before it was analyzed. From each participant there is not a lot of data over a longer period of time; the interviews were extensive but conducted only once, and the model helped analyze the data. The paper seeks no cross-case patterns between different virtual worlds, except perhaps in the descriptive parts. Cross-case patterns between the participants were found via categories; multiple people did the coding, which led to disagreements that surfaced points which otherwise wouldn't have been found. The paper also compares its findings with similar literature. Lastly, I think the paper closes before the margin for improvement becomes small; more patterns and discussion could have been found, but that is unnecessary given the descriptive purpose and the limitations of the study.

Monday, October 6, 2014

Theme 4: Quantitative research Reflection

For this week's theme I read two papers, attended a seminar and studied some more.

I chose to read this: "Facebook and texting made me do it: Media-induced task-switching while studying" by Larry D. Rosen, L. Mark Carrier and Nancy A. Cheever, published in Computers in Human Behavior (impact factor: 2.067), Volume 29, Issue 3, May 2013, Pages 948-958.

Then we had to read this:
Fondell, E., Lagerros, Y. T., Sundberg, C. J., Lekander, M., Bälter, O., Rothman, K., & Bälter, K. (2010). Physical activity, stress, and self-reported upper respiratory tract infection. Med Sci Sports Exerc, 43(2), 272-279.

The texts and the seminar, in relation to the suggested questions for this theme, made me reflect on how specific methods are more accurate for a specific study, and on what kinds of uncertainties come from the methods used: uncertainties from using the same method in different contexts, the reliability of the theories that the methods depend on, uncertainties regarding the participants, and uncertainties regarding the way data is gathered, for example how you formulate questions without causing confusion, and so on. With that said, we've looked both at uncertainties within one method and at uncertainties in comparison to other potential methods, depending on the context, goal and purpose of the study. For example, is it an exploratory study, a confirmatory study, or some other kind, and which methods can answer what the study seeks? Mainly, though, I think this week's theme taught me how to criticize results and methods internally and externally.

Seminar
During the seminar we were divided into 4 groups and had a small competition where the group with the most unique answers to different questions won. The questions concerned advantages/disadvantages of qualitative research, advantages/disadvantages of quantitative research, advantages/disadvantages of web surveys vs. paper surveys, etc. After each question Bälter presented his own answers. To be more concrete, we discussed for example how feedback can make participants more engaged because they get something back from the study, which will probably result in more accurate results. Another thing we discussed was how participation in, for example, surveys can be increased by sending a pre-introduction, reminders and, as mentioned, giving feedback. Bälter also talked about the importance of how you formulate questions and scales when gathering data, in order to avoid confusion and get reliable answers. Other topics were, for example, that you can test methods before you actually use them, and that you should have not only the study's purpose in mind when choosing a method, but also the target group.

I think much of this week's theme introduced different kinds of methods depending on what type of theory the study seeks or has, which relates back to last week's theme. The generalizability is also affected by the choice of method, as is the extent to which you can seek causality, and more. In another way this relates to earlier themes in that whether you know anything a priori or not impacts what kind of method you prefer; quantitative data can be good for confirming a hypothesis, for example. Other than that, I think this week's theme concerned finding problems and limitations in methods, but also how external problems and limitations can impact the data gathered from them.



Friday, October 3, 2014

Theme 5: Design research PRE

Read:
Réhman, S., Sun, J., Liu, L., & Li, H. (2008). Turn Your Mobile Into the Ball: Rendering Live Football Game Using Vibration. IEEE Transactions on Multimedia, 10(6), 1022-1033

Please reflect on the following questions:

1. How can media technologies be evaluated? 
For example, as in the text: prototyping and usability testing through questionnaires and observed experiments, with usability defined as effectiveness (can the task be completed with the evaluated system? A success-to-failure ratio is a common way to measure this), efficiency (how much effort is needed to accomplish the task? Less effort is better, of course) and satisfaction (which "refers to the comfort and acceptability of the system to its users and other people affected by its use", measured through a questionnaire).

A technology can require experience or training before the three aspects of usability are mastered. So to complement the evaluation you can do a trainability test, as done in the text, to see if there is any difference in usability afterwards, and how much training is required before any difference is noticed.
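The effectiveness and efficiency measures above reduce to simple ratios; a minimal sketch with made-up numbers (not data from the paper):

```python
def effectiveness(successes, attempts):
    """Success-to-attempt ratio: can users complete the task at all?"""
    return successes / attempts

def efficiency(successes, total_time_seconds):
    """Successful tasks per minute of effort; less effort per task is better."""
    return successes / (total_time_seconds / 60)

# Hypothetical usability test: 9 of 12 trials succeeded, taking 6 minutes in total.
print(effectiveness(9, 12))           # → 0.75
print(round(efficiency(9, 360), 2))   # → 1.5 tasks per minute
```

Running the same computation before and after a training period is essentially what the trainability test does: if both ratios improve, training made a difference.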

2. What role will prototypes play in research?
Prototypes can play a big role. For example, it's useful to test only parts of a system, because you can prototype exactly the part you want to test without having to spend time completing the whole system before you can get any answers. In the text this concept could have been used on live football, for example, but in the tests there was no live football. I guess it saves both time and money, and since time is saved, more alternative approaches can be tested. Making the prototype also surfaces the decisions that have to be made in order to build the system, which helps people understand quickly and form their own opinions on different paths. If those decisions are motivated, they can be argued against or agreed with, which leads to choosing the most appropriate alternative.


3. Why could it be necessary to develop a proof of concept prototype?
Other than being cheap and less time consuming than developing a complete system, it allows for mistakes and changes to a far greater extent than a complete system would. Imagine building a house and then having to demolish and rebuild it just because you decided it would be better to use other dimensions than the first ones. After other people have seen the prototype and expressed their opinions, or when results from the prototype have been evaluated, changes may have to be made; making those changes on a prototype instead of a complete system costs less. I think prototypes are also a great way of presenting your ideas: they can get people to understand and engage with a concept easily. Compared to words alone, a prototype is more convincing and interesting in my opinion.


4. What are characteristics and limitations of prototypes?
Bugs, not fully functioning, only a representation, and it doesn't close off possibilities for future changes. Depending on what the prototype is, it has different limitations; maybe it's just a mock-up, or maybe it's close to the real thing. If a prototype is made badly, it can give inaccurate results or a wrong impression.

5. How can design research be communicated/presented?
In every way: words, text, prototypes, pictures, models, mock-ups, substitutes, videos and more. I like different kinds of prototypes for the reasons mentioned above; which kind depends on what you want to present.
-----------------------------------------------------------------------
For the lecture Wednesday, read the following papers written by Eva-Lotta Sallnäs Pysander and her colleague. Reflect on the key points and what you learnt by reading the text. Prepare one question that you would like to discuss during the lecture.
1. Moll, J. and Sallnäs, E-L. (2013). "A haptic tool for group work about geometrical concepts engaging blind and sighted pupils." ACM Transaction on Accessible Computing. 4(4), 1-37.
2. Huang, Y., Moll, J., Sallnäs, E-L., Sundblad, Y. (2012). "Auditory feedback in haptic collaborative interfaces." International Journal of Human-Computer Studies. 70(4), 257-270.

1. How does a collaborative setting differ from a single user setting as regards methodology used and the results obtained?
I guess you need to define what's collaborative in the specific context; it might be common ground, awareness between the participants, awareness of each other's actions, awareness of their own actions, their goals, who takes initiative, etc. Based on earlier work, studies and theories you might find what's collaborative and how to measure it in the context you want. However, I think collaborative work admits many different answers to why a specific outcome occurs; it's often a unique context, since you can collaborate in so many ways and places, and hence there are more variables. Previous quantitative methods, studies, and theories within collaborative settings might not always answer why a specific outcome occurs in a specific context. Maybe qualitative methods can contribute more understanding of why something is or isn't collaborative, and what effects come from it. If a user test is performed, it might be a good idea, depending on the study, for the participants to know each other; this could prevent insecurity between the collaborators from affecting the result, a problem that doesn't occur in a single user setting.

2. How can qualitative and quantitative methods in the same study complement each other?
Qualitative methods can bring results that were not anticipated; these results can then possibly be fed into quantitative methods in order to measure and analyze them. Qualitative data can also give a deeper understanding of the quantitative data; it can, for example, answer why a specific quantitative or qualitative result was obtained. For example, in the third text: "qualitative analysis showed that the auditory and haptic feedback was used in a number of important ways". The qualitative data helped answer why the haptic feedback was used and in what way, not only that it was used or used in certain predefined ways.

3. How can using both subjective and objective methods give a better understanding of a phenomenon?
With a subjective method you gain data about how the subject interprets and experiences the phenomenon: does it conform to the objective data or does it differ, and how, when and why? An objective method can give less varying results, which are then easier to compare with results from other studies or within the same study, to generalize, and to analyze with quantitative methods.