in an attempt to sharpen my methodological skills before exploring the brave new world of social neuroscience, i thought i'd take a dip in the social pool and gain a better understanding of how social psychologists frame, test, and evaluate their research questions. my experience designing a perceptual neuroscience-based behavioural assessment laid the groundwork for asking questions and positing hypotheses about the neural mechanisms that underlie human social behaviour. however, human social behaviour is rarely as simple as "receptors in X send signals to Y, which has typically been found to functionally correspond to Z (insert citations here)". so, to prepare for the added variables and considerations that social research requires, i enrolled in a research methods course designed specifically for social psychology students.
as someone who is a self-declared stat-phobe, i quickly realized that as a graduate student/novice scientist/would-be academic, i needed to confront my issues and fears with statistics head-on in order to use them effectively. but even more insightful (and a somewhat sad albeit very common realization, considering i am starting the 2nd year of my PhD studies) is the important and oft-overlooked distinction between statistics, research design, and measurement. all three are equally vital aspects of research, and before tackling the data of an experiment with any analytical method, researchers should be designing well-informed, carefully planned experiments that use appropriate measures to gather "good" data. what's the point of collecting "bad" (biased, inaccurate, incorrectly or inappropriately measured) data that, no matter what statistical methods you use to draw your conclusions, will never accurately answer your initial hypotheses?
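to make that concrete, here's a minimal python sketch (all the numbers are made up, and `measure` is just a stand-in for some hypothetical instrument) of why "bad" data can't be rescued after the fact: if the measurement itself is systematically biased, collecting more data only makes you more confident in the wrong answer.

```python
import random

# hypothetical scenario: the true population mean is 50, but our
# (imaginary) instrument reads 5 points too high on every measurement.
TRUE_MEAN = 50.0
MEASUREMENT_BIAS = 5.0

def measure(n):
    """simulate n noisy, systematically biased measurements."""
    return [random.gauss(TRUE_MEAN, 10.0) + MEASUREMENT_BIAS for _ in range(n)]

# bigger samples shrink the noise, but the estimate converges on the
# wrong value: no statistical method applied afterwards can fix that.
for n in (100, 10_000, 1_000_000):
    sample = measure(n)
    print(f"n = {n:>9,}: estimated mean = {sum(sample) / n:.2f}")  # -> ~55, not 50
```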
speaking of hypotheses, which i always thought had to be developed before an experiment was actually conducted (also called a priori hypotheses), they seem to have some very shaky definitions these days. there are indications that 'altering' (editing, amending, rewording, or completely reforming) one's hypothesis based on the observed results of an experiment (producing what is known as a post hoc hypothesis), and then stating these post hoc hypotheses in the introductions of research reports as though they really, truly were a priori hypotheses, is becoming more and more common in practice. this violation of integrity is especially alarming considering that it is often encouraged or suggested (albeit implicitly) by journal reviewers and editors: when results come back contradictory or null, researchers are left with the option to a) rework their entire analysis (expending much time & resources) or b) simply reframe the initial hypothesis and rationale to fit the results of the study. when left with that choice, many of us would find the latter option, unethical as it is, hard to turn down.
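the cost of that choice isn't just ethical, it's statistical. here's a little simulation of my own (an illustrative sketch, not anything from the course readings) showing what happens when a study measures many outcomes under conditions where no real effect exists, and the one outcome that "worked" gets written up as the a priori prediction:

```python
import math
import random

def min_p_value(n_outcomes=10, n_per_group=50):
    """simulate one null experiment: two identical groups, many measured
    outcomes; return the smallest two-sided p-value across the outcomes."""
    p_values = []
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        # normal-approximation two-sample test (fine at n = 50 per group)
        z = (sum(a) - sum(b)) / n_per_group / math.sqrt(2 / n_per_group)
        p_values.append(math.erfc(abs(z) / math.sqrt(2)))  # two-sided p
    return min(p_values)

# report only the "best" outcome as if it had been predicted all along:
n_sims = 5_000
hits = sum(min_p_value() < 0.05 for _ in range(n_sims))
print("nominal false-positive rate: 0.05")
print(f"actual rate after cherry-picking: {hits / n_sims:.2f}")  # ~0.40
```

with ten outcomes to choose from, roughly 40% of pure-noise studies will hand you a "significant" result to build an introduction around, which is a lot of very convincing stories written backwards.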
another interesting "lure" for HARKing (hypothesizing after the results are known) is to tell a better "story" when writing up a research paper, which, if you're anything like me, you will try to spice up as much as possible so that you yourself have the motivation to go back and edit it for the hundredth time this month. in his analysis of HARKing, Kerr (1998) is sympathetic to the plight of readers & writers of scientific papers, acknowledging that attention and time for reading are valuable, which raises the "premiums" for those exceptional papers that pack the triple-whammy of being informative, readable, and leaving the best impression of one's research. put another way: did the authors accomplish the #1 goal of all scientific writing, convincing readers that their research questions are important and worthy of the time it takes to conduct, publish, and read them? Kerr captures one of the toughest hurdles to jump when adjusting to a scientific writing style by reminding us that "... a scientist operates under different constraints than the fiction writer. No matter how much the addition [of prose, narrative, or post hoc hypotheses] might improve the story, the scientist cannot fabricate or distort empirical results." there's no doubt that one of the toughest adjustments to writing this way is to stick to the point and say what's important, and not catalogue every minuscule step of the research design and data collection phases (we all know how much work you did, we don't need to hear about every step and misstep). but one value of scientific and empirical writing that absolutely cannot be compromised is the direct and honest reporting of one's expectations of the outcomes from the beginning, even if we end up completely wrong in the end.

don't be such a HARK (image from news.nationalpost.com)
if you're interested in the theoretical and ethical arguments involved in hypothesis generation, testing, and reporting, i would highly recommend the course reading referred to above (a free pdf is available c/o the university of frankfurt): Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196-217.
if you're not already, and you have any intention of pursuing a career in research, get hooked on meth[ods]. just ask Diederik Stapel...
p.