How to read research like a pro in five minutes
Dorian Minors • November 18, 2015
Research can be a crock of s**t. Saying 'the research says' doesn't mean anything if the research itself is problematic. That's why, here, we always try to link the article so you can evaluate it for yourself. Although, if I'm honest, there are plenty of examples in which that has been neglected, and much of the time you'd need access to someone at a university to read an article without paying. The problem is not only that there are now money-making enterprises that will publish research without evaluating it, but that statistics can easily be manipulated to misrepresent the truth. For example:
- If someone says two things have a correlation of .5, we assume they are half-related. But a correlation of .5 only accounts for 25% of the variability in the two things. Confused? Put another way: a correlation of a half only accounts for about a quarter of what's going on. Still confused? Good, read on young padawan, I'll fix it. Also, never forget that although two things can be correlated, we can't tell WHY they're related (does one cause the other, or is some third thing causing them both?).
- Or, consider a study showing that 50% of people like candy. If the study has 20 people in it, that statistic would mean nothing if it was intended to represent the population of Canada. But it might be telling in a high school of 120 students. That's without going into whether those 20 people were evenly split between males and females, different ethnicities and so on.
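Both of the traps above come down to simple arithmetic. Here's a minimal sketch (the numbers, and the standard 1.96-times-standard-error rule of thumb for a 95% margin of error, are my own illustration, not from the article):

```python
import math

# 1. A correlation only "accounts for" its square: r = .5 explains
#    0.5 ** 2 = 0.25, i.e. a quarter of the variability.
def variance_explained(r):
    return r ** 2

# 2. A rough 95% margin of error for a proportion from a sample of n:
#    1.96 * sqrt(p * (1 - p) / n).
def margin_of_error(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(variance_explained(0.5))               # 0.25
print(round(margin_of_error(0.5, 20), 2))    # ~0.22: "50% like candy" +/- 22 points
print(round(margin_of_error(0.5, 1000), 2))  # ~0.03: a bigger sample narrows it
```

So the candy study's 50% could really be anywhere from roughly 28% to 72% with only 20 people, which is why sample size matters so much.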
It's rather easy to confuse the public. So I'm gonna give you an intro to research analysis so you can do it like a pro. Just whip this article out, press ctrl/command+F in your research article and search for the relevant terms listed below.
Reliability refers to whether one is likely to get the same outcome again and again. We're talking about the consistency of the result. We need to make sure a study has tested that it's reliable; otherwise its results may not reflect the most common outcome for that subject. So we need to look for one or more of these (I often use the 'Find' function within the PDF to search quickly for the key terms):
- Have they tested more than once, usually over a period of time (e.g. test once, then again a week or six months later)? This is called test-retest reliability and determines whether something is stable over time. However, this can be problematic, because anything can happen between the two tests (someone could have learned something in the meantime that makes them better, or gotten sick, which might reduce their performance). Alternatively, you can test too close together, and people might remember the test and so do better. So there needs to be a balance between tests being too close and too far apart, and it's often hard to tell where this balance might be.
- Have they tested with two similar tests (like doing two different types of IQ test)? This is called equivalent forms, and it eliminates the problems inherent in the previous approach (test-retest). But it's important that the two tests measure the same thing equally well, and where you can't guarantee that, problems arise. It's often easier to do test-retest as a result.
- Have they tested that the test is reliable within itself? Put another way, have they checked that all the questions are testing the things they want (are they relevant)? This is called internal consistency, and it is usually represented by a decimal number, in which case the higher the better.
- Finally, have they tested it with two or more raters? This is common for observational studies (studies in which an observer rates a participant on performance or something like that; not a test that the participant answers). This is called inter-rater reliability, and you're looking for different raters to give similar scores (this is often represented by a decimal number too, and the higher the better).
We want at least one, if not more of the above.
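To give you a feel for one of those 'decimal numbers': internal consistency is most often reported as Cronbach's alpha. The statistic itself is standard; the data and the .7 rule of thumb below are my own illustration, not from the article. A minimal hand-rolled sketch:

```python
from statistics import pvariance

# Cronbach's alpha: a common internal-consistency statistic.
# `items` is hypothetical data: one inner list per question,
# one score per respondent.
def cronbach_alpha(items):
    k = len(items)                                   # number of questions
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Three questions answered by five people; similar answer patterns across
# the questions push alpha toward 1 (conventionally, .7+ is often treated
# as acceptable).
items = [[4, 5, 2, 3, 4],
         [4, 4, 2, 3, 5],
         [5, 5, 1, 3, 4]]
print(round(cronbach_alpha(items), 2))  # 0.93 -- high internal consistency
```

The same 'closer to 1 is better' reading applies to the inter-rater statistics mentioned above.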
Validity means that the study is testing the thing it's supposed to be testing. So if a study says it's testing anxiety, we want to make sure it's testing for anxiety and not depression, or how sick the person is on that day, for example. However, a study needs to be reliable to be valid, so look for that first. Then look for:
- Face validity - does it look like it would test the right thing; is it intuitive? This is actually one of the worst things to rely on, because it might throw you off. But sometimes it's important, because if a test isn't face valid, it might put off the person being tested, which would affect the scores (if someone offered to test your IQ, then asked what seemed to be questions about your sexual habits, you probably wouldn't take the thing seriously). However, often it's important that the person being tested can NOT tell what's being tested. So sometimes face validity is irrelevant.
- Convergent and divergent validity - here, the researchers will have tested with their own test, and then also with another, similar test (or, for divergent validity, a test of an unrelated thing) to see how the two results correlate. So you test for problem-solving and then test for flexible thinking (which are related) to make sure your test is roughly in the right area (you look for a high correlation). For divergent validity, you're often making sure that you aren't straying into an area that can seem similar, so you might test for sleep quality and then test for sleep quantity to make sure you aren't accidentally measuring the wrong one. In that case, you'd be looking for a low correlation.
- Predictive validity - essentially, you check whether what you've tested predicts the thing later on. So test for the capability to perform, then come back later to see whether they actually performed. This is often assessed against a criterion: a particular standard against which something is judged.
- Concurrent validity - you might compare your test against data that already exists. So in a job environment, you might compare your test data for new candidates against data from existing employees you want your candidates to resemble. Or, since psychologists often try to improve the kinds of tests we have and will invent new ones, they'll test a brand new test against one that's already well established (concurrent validity is very important in this scenario).
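The convergent/divergent check above boils down to correlating two sets of scores. A minimal sketch with hypothetical data (the test names and numbers are made up for illustration, not from any real study):

```python
from math import sqrt

# Pearson correlation coefficient between two lists of scores.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

new_test     = [10, 14, 8, 12, 15]  # hypothetical problem-solving scores
related_test = [11, 13, 9, 12, 16]  # flexible thinking (should correlate highly)
unrelated    = [3, 1, 4, 1, 5]      # e.g. hours of sleep (should not)

print(round(pearson_r(new_test, related_test), 2))  # high -> convergent evidence
print(round(pearson_r(new_test, unrelated), 2))     # near zero -> divergent evidence
```

A high correlation with the related test and a low one with the unrelated test is exactly the pattern the researchers are hoping to report.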
Reliability and validity are two of the most important things to check for in a study. If you have both, then even if the data is misrepresented in other ways, it won't be too far off the mark. Remember, though, that everyone has a bias, and researchers will often be biased in a specific way, so don't just take things for granted (especially if they go against something that appears to be common knowledge), but don't assume that new research isn't valid either. Check for yourself, or risk being wrong because some jerk researcher tricked you. A pretty dense article, I think. Why don't you treat yourself to something a little more straightforward? Like how one simple sentence can double your persuasiveness. Or learn how dodgy sources can be MORE convincing than a legitimate source, by psyching you out. Giving you the dirt on your search for understanding, psychological freedom and 'the good life' at The Dirt Psychology.
Turning scholarship into wisdom we can use at The Armchair Collective.