Speaking to the BBC, the director of telecoms for a communications company announced:
“Britain might be riding the wave of a super-fast broadband revolution, but for 49% who get less than the national average broadband speed, the wave isn’t causing so much a splash as a ripple.”
You may have one of two reactions to this. The first is to assume that one cannot have 49% below the average: surely it must be 50% below and 50% above.
In fact, ‘average’ is usually taken to indicate the mean (in this case, the broadband speeds of every household in the country added together, then divided by the number of households included). This is not the same as the median (if you put the speed for each household in order, lowest to highest, then picked the middle one – that would be the median).
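To see how more (or fewer) than half of households can sit below the mean, here is a minimal sketch in Python; the speeds are invented purely for illustration:

```python
from statistics import mean, median

# Hypothetical broadband speeds (Mbit/s) for ten households.
# A few very fast connections drag the mean well above the median.
speeds = [2, 3, 3, 4, 4, 5, 6, 20, 40, 60]

print(mean(speeds))    # 14.7 -- the 'average' usually quoted
print(median(speeds))  # 4.5  -- the middle of the ordered values

below = sum(s < mean(speeds) for s in speeds)
print(f"{below}/{len(speeds)} households ({below / len(speeds):.0%}) fall below the mean")
```

Here, a handful of very fast connections leave 70% of households below the mean – no contradiction at all.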
Either way, it’s splitting hairs about an amusing error. It would be more meaningful if the median were substantially different to the mean, as that would indicate a ‘skewed’ distribution. But that’s something for another time, perhaps.
Sometimes a science article in a newspaper is so bad that it pays to take a quick look at just why it’s so bad, to learn from the mistakes made. Today’s offering is courtesy of the Daily Mail.
“Playing football games on computers ‘makes you more aggressive’” they cry. The upshot of the article is that when comparing violent video games and football games, it is kicking the virtual ball around that makes you more aggressive.
The problems with this article are extensive. First and foremost, this is not a report of a published journal paper. This means that the work has not been peer reviewed, and could be flawed on any number of levels. It is actually a proposed conference presentation, which means it has not yet been scrutinised even informally. Tabloids usually do not link to the work they are referring to, but as no original article exists, they could not do so here even if they wanted to.
Then there are a number of assumptions made about the link between video games and violence (and I love violating assumptions). They state, “Computer games have been linked to increases in violence and crime”. I have blogged about this previously; there is almost no work looking at real world violence or crime in relation to video games, and when using indirect measurements, the relationship either does not exist, or is minuscule. That they accompany this statement with a large, emotive picture of a man in a balaclava crowbarring his way into a car makes this dubious assertion all the worse.
Then we get onto the actual content of the study, and the sparse details provided. The thing that instantly leaps out is that, “Measurements were taken of heart rate, respiration and brain activity before and during play.” There were no measurements of aggression taken. Only how physiologically aroused the person was. ‘Arousal’ here basically refers to increased heart rate, pupil dilation, increased respiration, that sort of thing. I don’t know about you, but I get pretty aroused by a lot of things; working out in the gym, having to give a public speech, seeing my boyfriend naked. As far as I know, none of these things are linked to increased violence (well, apart from some light S&M that the Mail would certainly disapprove of. But enough about my statistics seminars…)
They go on: “Analysis of the results showed that ‘killing’ someone in a game caused little brain activity, but conceding a goal or a foul caused high levels of activity.” Thus they are claiming that something that causes your brain to be active is equivalent to violent crime. By that logic, playing chess, having sex and reading should all result in Crown Court trials.
This really is poor quality science journalism. I cannot comment on the quality of the university press release or of the information released by the psychologists presenting the work; it may be that they themselves overstate the results of the study and claim it demonstrates something it does not. Either way, it is also the role of journalists to check that what they are writing makes sense.
Oh, and in case you were wondering, on the facts presented in that article, the study provides no evidence that football video games make you more aggressive than violent ones, or that video games make you aggressive in the first place.
As human beings, our ability to make judgements and decisions quickly has long been paramount to our survival. Our brains have evolved to take huge amounts of information and quickly synthesise conclusions based on what we see as the most important factors. This is useful in day to day life, but can sometimes lead us to systematic errors of reasoning. A recent paper from Ye Li and colleagues, published in Psychological Science, titled Local Warming: Daily Temperature Change Influences Belief in Global Warming, is a good illustration of this.
Before discussing the paper, a few notes on global warming. The scientific consensus is that anthropogenic climate change (that is, climate change caused by human activity) is happening. A nice visual illustration from Information is Beautiful demonstrates that recent high profile petitions disagreeing with this represent only a very small percentage of active scientists and, notably, climatologists. The illustration may overstate the exact degree of consensus, but other published papers suggest it is pretty close to the mark. Additionally, there is always scope for the consensus to be wrong. The views of most scientists in the field are based on the available evidence, and it would be unwise to ignore said evidence; but further evidence to the contrary would (hopefully) lead them to update their positions to reflect this.
Global warming is, therefore, a serious issue. Furthermore, many skeptics also agree that the planet’s climate is shifting (but from natural causes).
The general population are less convinced that global warming is happening, and caused by human activity. How many times have you heard someone say, “global warming can’t be happening – we’re having the most snow we’ve had in 20 years!”? In part this is a confusion about the difference between climate and weather, and the variable local effects of global climate change. But Li and colleagues, in their paper Local Warming, decided to investigate whether people’s perceptions of the weather on a given day affected their perceptions about global warming as a whole.
Basically: participants were asked whether they perceived the weather on the day of the study as colder or warmer than usual. They were also asked how worried they were about global warming. In another study they were additionally asked whether they’d like to give money to a global warming prevention charity. The results showed that participants were both more worried about global warming, and gave more money to the charity, if they perceived the current temperature to be warmer than usual.*
How do we account for this? On the one hand it may be a lack of knowledge; in the absence of an understanding of climate, the current temperature may be all people feel they have to work with. The other driving force may be our tendency to use heuristics, mental shortcuts that result in cognitive biases. The authors of the study discuss the results in the light of, “attribute substitution”, where a simple, easily mentally accessible attribute (temperature that day) is used as a substitute for more relevant, but more complex information (long term or global temperature trends).
These kinds of mental substitution will be familiar to any psychology undergraduate. Two of the giants of social psychology, Tversky and Kahneman, described a number of such biases. The availability heuristic is similar to attribute substitution – where people use easily accessible mental information or imagery in place of more relevant, but harder to retrieve, models. Examples include someone saying that smoking doesn’t kill because their chimney-like grandfather lived to 100 (easy to visualise him smoking away at 99, harder to bring to mind thousands of medical papers demonstrating that he’s a statistical outlier, and that following in his footsteps would likely be fatal) and people being more afraid of flying than travelling in a car (the image of a plane crash, and the unusual experience of being thousands of feet in the air, overrides our knowledge of safety data).
The practical implications of our mental shortcuts can be wide reaching. They are often tied in with our tendency to make emotional rather than rational decisions. In day to day life, they prevent us from being paralyzed by impossible decisions where there is simply too much data for us to synthesise. However, when dealing with important issues such as climate change, they can lead us to irrationalities that obfuscate what’s really going on in the world. The key is to recognise where our instincts are leading us astray, and to use data and reason to better understand the issues.
Something to bear in mind next time someone tells you global warming isn’t happening during the cold, wet British summer.
* (Statistical aside: the possibility that views on global warming influenced perceptions of the current temperature was controlled for by using actual temperature as an instrumental variable in the regression model.)
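For the curious, here is a minimal sketch of the logic of an instrumental variable, as a hand-rolled two-stage least squares on simulated data in Python. The variable names, coefficients and data are all invented for illustration; the paper’s actual model is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data. Actual temperature deviation serves as the instrument:
# it affects perceived temperature, but cannot itself be caused by a
# participant's worry about global warming.
actual_temp = rng.normal(0, 1, n)
perceived_temp = 0.8 * actual_temp + rng.normal(0, 1, n)
worry = 0.5 * perceived_temp + rng.normal(0, 1, n)  # true effect: 0.5

def ols(y, x):
    """Intercept and slope from ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict perceived temperature from the instrument alone.
a, b = ols(perceived_temp, actual_temp)
perceived_hat = a + b * actual_temp

# Stage 2: regress worry on the *predicted* perception. Any reverse
# influence of worry on perception is stripped out of this estimate,
# because the prediction uses only actual temperature.
print(ols(worry, perceived_hat))  # slope recovered near the true 0.5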
There is an age old debate amongst undergraduates in universities the world over. It is a split that generally occurs between science and arts students, and goes something like this, “My subject is difficult, important and full of intellectual subtlety – yours is not”.
This has long been the case, but with recent cuts to higher education funding the issue now has very real consequences; up and down the country, departments are facing closure, and research funding is being channelled toward certain subjects (usually sciences and vocational degrees) over others. For example, Keele’s philosophy department narrowly avoided closure thanks to staunch protests. Most accept that cuts have to be made, but it is the humanities that are being lined up first for the chop. But how do we determine the value of various subjects and their university departments? Rather than an in depth look at impact metrics, let’s consider more broadly how the idea of impact is approached.
As people march forth out of psychological adolescence and into adulthood, one would hope they would broaden their horizons beyond, “My subject is difficult, important and full of intellectual subtlety – yours is not”. This frequently does not happen. There are any number of reasons for this, some of which are well elucidated by the psychological literature. For example, people will regard their subject as being more complex and subtle than one of which they have little knowledge, simply because fine distinctions are not apparent without some expert knowledge (a salient perceptual analogue is the way in which people of the same race are more able to tell each other apart than those of other races). Additionally, people perceive the world with themselves as the central locus – in other words, no matter how open minded people try to be, they will always tend towards seeing what they do as worthwhile, because they are close to the consequences and outcomes.
While this is all very understandable, we should always endeavour to see the value in others’ work. It is therefore good to see scientists coming out in support of the humanities. In particular, two excellent pieces in the Times Higher Ed can be found here and here. The former discusses the comments of prominent medical academic Professor John Martin. The piece begins, “John Martin is a cardiologist. He fixes people’s hearts not so they can return to employment but so they can ‘fulfil themselves by enjoying art, literature and music’.”
Most of us could find something within the humanities that appeals to us. How many of us would like to live life without music, or stories, or at least some knowledge of human history, and how we came to be where we are socially, as well as biologically?
It is here that I find opinions most frequently diverge. Most people, of course, will not condemn music, literature, art or history as being without value. Rather, the objection is that they do not translate directly into jobs, or, as they see it, ‘contribute to the economy’. They are therefore not considered appropriate for university study.
This to me seems the greatest fallacy and misunderstanding of the role of the university. Universities are not simply designed to churn out technically skilled but unthinking individuals. They are melting pots of some of the brightest minds society has to offer, and it is this process of skilled academics encouraging students to find themselves intellectually that produces novel ideas; advances in culture, society, science, literature, art and music; and individuals who are confident and robust in their powers of thought.
This all sounds slightly whimsical to the practically minded. However, even students who decide to pursue their subject no further will still be eminently better prepared to approach employment as critical thinkers, able to communicate effectively both in writing and verbally, and able to function within an environment where they take responsibility for their own workload. This goes for humanities and sciences alike; the vast majority of Science, Technology, Engineering and Maths (STEM) graduates do not go on to do anything directly relevant to their degree, which often ends up being of less real world use than the skills taught by subjects that deal with language as their principal tool and subject of study.
When considering career academics, however, it is fair enough to consider the practical impact of their work. With public money going into universities, you want to know that the benefits of the advancements being made will affect the lives of more than just the academics and students themselves. I agree; publicly funded universities have no right whatsoever to boil down to an elaborate circle jerk.
This attitude is prevalent among those holding the purse strings. In recent years academics have been informed that they must demonstrate impact at every turn: to show how their funding is advancing their field, and often what this means for wider society. This is one valid way of ensuring that the public gets bang for its buck, but it poses problems for humanities and sciences alike. If you want to test a relatively new medical intervention that already has some pilot evidence behind it, a case can easily be made for the potential benefit to society in terms of lives, and money, that could be saved. ‘Blue skies’ research, however, where the work is largely exploratory and may well produce no results, is harder to quantify economically. Yet it is often blue skies research, and its unforeseen consequences, that produce the greatest revolutions in science.
Humanities subjects often suffer a similar fate when trying to sell themselves on grant requests, but their impact is no less real. For example, Noam Chomsky, a linguist, was the single greatest driving factor in revolutionising psychology in the mid-20th century, and his contributions to cognitive psychology laid the foundations for Aaron Beck to develop the most effective psychological treatments we have today for treating and improving the quality of life of the mentally ill (and reducing the number of sick days taken off work, use of NHS resources, etc.). This was not a foreseen consequence; the work came from a linguist tearing apart a psychologist’s (B. F. Skinner’s) behaviourist account of language. The point here is that the impact this disagreement over language would later have on how we understand human psychology, and how we treat the most vulnerable people in our society, could never have been predicted. It is the product of inquisitive minds trying to better understand who we are, and the world we live in.
On a more visible, day-to-day level, most university humanities departments are also involved in the running of culturally significant institutions, such as museums, libraries, galleries, theatres and music venues. These are not irrelevant, ivory tower concerns. In truth, many of the bankers who we are often told will leave London for Zurich if they do not get their huge bonuses would probably not; many of the world’s richest, most productive and most important cities are not simply economic powerhouses – their citizens demand a rich cultural infrastructure, and this is often provided and kept contemporary and cutting edge by the various English, History, Music and Drama departments in universities up and down the country. Is this factored into many people’s evaluation of the economic importance of the humanities? I suspect not.
Some old school academics will devoutly argue that such cherished classic subjects as English and History should not have to demonstrate their value in such a crass and economically minded fashion; that their value is both self-evident and above such concerns. So long as public money is involved, I disagree with such a carte blanche notion at the societal level. There does need to be some consideration of whether the public’s money is being well spent – and this has to be appraised somehow. This applies both to research by established academics, and to courses being taken by students. My call – and this is something I believe to be of utmost importance – is not to disregard the idea of impact or value, but instead to think about it with the flexibility such a complex matrix of influences and outcomes warrants. This value could be economic or cultural, intellectual or practical.
As any good psychologist will tell you, it is often the variables that are hardest to measure that are the most important. That is no excuse for ignoring them altogether; rather, it is cause to try to understand them better.
In recent times, science has been portrayed as sexy and exciting – from Professor Brian Cox’s Wonders of the Solar System/Universe to Hans Rosling’s attempts to convince us that statistics are sexy. And you know something? They’re right.
Exciting demonstrations of the joy of science can lead us to forget just how much time, dedication and in many cases, tedium is required to develop an understanding of the reality in which we live. Much of science, and even academia as a whole, is a case of iteratively updating our existing understanding through slight modifications of experiments and theories that have gone before, and the groundbreaking, “Eureka!” moments are few and far between, reserved for the exceptionally insightful, the fantastically lucky, and most importantly those who are able to make those ideas heard.
But we should celebrate those who conduct their research with the long haul in mind; after all, we stand not so much on the shoulders of giants, as on a tapestry of the endeavour of hundreds, thousands, millions of people all contributing their bit to society and understanding. One of the noblest aims is surely to be one of those strands, a fibre on which future generations will rest their feet as they reach that little bit higher in pursuit of knowledge than the generation before.
In health research, you can’t get much more long term oriented than epidemiology. Epidemiologists study (usually very large) groups (or cohorts) of people over time, often for years. One good example is the Whitehall study, which from 1967 followed a cohort of 18,000 men in the British civil service over the course of 10 years. A whole host of background information was recorded, such as social class, smoking status and available leisure time – and, importantly, if and when they died. The study is most famous for establishing the link between social class (judged by employment grade, denoting the ‘status’ and income associated with the job) and mortality: people of lower social class died earlier, even after controlling for a raft of the most obvious associated risk factors such as smoking status, physical activity and diet. For example, those in the lowest class were 3.6 times more likely to die of coronary causes than those in the highest. This has become known as the ‘social gradient’, and is now one of the key issues in medicine and public health in countries with a socialised healthcare system; just because a health service is free to all patients, this does not mean that wealth and social status do not affect the quality of health people can expect. Trying to understand why, and to improve outcomes for those currently worst off, is now a highly salient area of research.
Another excellent example is the 1946 Birth Cohort study. This, to me, is where it gets really sexy – in a very slow, methodical and unbelievably patient way. Normally as health researchers, when we devise a study we expect to have results that we can generalise to large populations of people within months or a few years. We take cross sections of people – if we want to know about children, we recruit children. If we want adults with renal disease, we recruit people with renal disease. In epidemiology though, we want to see what happens, prospectively, over time – to a whole range of people. We want to see who does well at school, or who gets sick; and we want to base it on good, prospective data, rather than asking people questions years later (e.g. “What were you like as a child”, to someone who has later become successful – as their present success may influence their recollection of the past).
The 1946 birth cohort study happens at the speed of human life. This fantastic article from the prominent journal Nature does a good job of reflecting on the scale of 65 years of research – as the participants themselves have just turned 65. We’ve discovered a lot from the results so far, about the social gradient, about predictors of success, and beautifully (as described in the article by the study’s current head, Diana Kuh) that while our genes and backgrounds are significantly predictive of many things in life, there is always scope for individual agency and change over the course of the lifespan.
And the cohort is still going. As the participants enter old age, the ongoing stream of incoming data will be as important as ever, if not more so. With an ageing population, understanding what keeps people healthy, happy and functional in old age matters more than ever. With this study, we can look at the relationships between things that happened from the day these people were born and things that are happening to them now.
The article also presents the cohort from a wonderfully humanist perspective; that these are people who should be cooperatively included in research and treated with respect for their contribution to science. It’s well worth a read.
And so while it’s always tempting to think of modern science as something fast moving, dynamic and (according to the tabloid media) a place where every day brings world changing breakthroughs, let’s spare a thought for those playing the long game, advancing our understanding of the course of a human life from a scientific perspective, one year at a time.
I don’t know about you, but when I was in school I was taught that concentration was a limited resource. I remember quite vividly my plump, forthright Religious Education teacher telling the class that ‘studies had shown’ that concentration was limited to about 40 minutes at a time, and that this was the reason we had breaks.
One of the most interesting things about psychology as a discipline is that observations alone can have the power to change what is being observed. The same could be said of a number of disciplines, such as physics (once you get to quantum mechanics) and biology, but in few places is the impact so marked, or so relevant to our daily lives.
Which is why I find studies such as those described in this paper from a team of psychologists at Stanford University particularly interesting. It revisits our conundrum regarding concentration (referred to as ‘willpower’ here). It is certainly true that a number of studies have found that, for many people, concentration is limited. Comparatively lacking are studies into what might cause differences between people in their ability to focus for sustained periods of time. The studies conducted tested the hypothesis that the main factor determining whether concentration remains stable, or wanes with time, is whether the individual believes concentration to be a limited resource. Across a number of studies, they found that this was the case.
Our beliefs shape so much of our daily lives, so this is not entirely surprising. There are specific therapies targeted at changing people’s behaviour through changing beliefs, and it is not unexpected that these are often successful. It becomes truly fascinating, however, when you consider, for example, that whether or not people believe willpower to be a limited resource is influenced by findings in previous psychological studies. Thus even when only setting out to observe, psychology can still change people and their capabilities simply by reporting its results.
A paper recently published in The Journal of Personality and Social Psychology has been causing a bit of a stir. This is a reputable, top journal. And the paper it has just published claims to support ‘psi’.
‘Psi’ refers to what you and I would call ‘psychic powers’, but is intended to avoid, or at least be neutral to, the supernatural connotations of that phrase. In this case, a series of nine experiments tested roughly 1000 participants for evidence of precognitive abilities; being able to predict future events. These were simple lab experiments; for example, participants would be asked to guess where an erotic image would appear (we’ll get back to that later) out of two possibilities, before a random number generator had even selected the correct answer. Eight of the nine experiments provided ‘statistically significant’ results in favour of a precognitive effect. In our example therefore, participants guessed where the image would appear with a much greater level of accuracy than would be expected by chance.
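To get a feel for what ‘greater accuracy than would be expected by chance’ means in statistical terms, here is a rough sketch of the kind of binomial calculation involved. The trial count and hit rate below are invented for illustration; they are not Bem’s actual figures:

```python
from math import comb

# Hypothetical: 60 correct guesses in 100 two-option trials, chance = 0.5.
n, hits, p = 100, 60, 0.5

# One-sided p-value: probability of 60 or more hits under pure guessing.
p_value = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))
print(f"p = {p_value:.4f}")  # roughly 0.03 -- 'significant' at the usual 0.05 threshold
```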
The first thing to note is that there’s a lot to recommend this paper. It is excellently written, by a respected expert in the less controversial field of self-perception (Dr Daryl Bem), and does right what so many other papers do wrong. A justification and rationale for the sample size are provided; alternative explanations are diligently explored; and, most importantly, the methods and statistical procedures are detailed such that they could be easily replicated by others. This means that, rather than being able to shrug and say that other investigators ‘did it wrong’ if they attempted the same experiments and got different results, we would instead be able to say something meaningful about the possibility that the results were a one-off caused by some artefact of the study as conducted by Bem.
It also means we can critique the methods employed. This paper by Wagenmakers et al eloquently sums up some of the possible problems with the study. Some of the arguments about what you do with your posterior odds in Bayesian analysis would take far too long to go into here, but there are a few simple points that are easy to make. The first concerns controlling for multiple comparisons: when conducting many statistical tests, one must make those tests more stringent than if one were only conducting a single test. If I rolled a 20-sided die with the stated aim of getting a 20, and succeeded, we might be mildly surprised. If we rolled it 40 times, however, the surprise would be not having a single 20.
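The die example takes two lines to verify, and the same arithmetic shows why multiple comparisons are dangerous; a quick sketch:

```python
p_miss_once = 19 / 20                 # chance a single roll is not a 20
p_no_20 = p_miss_once ** 40           # chance of no 20 across 40 rolls
print(round(p_no_20, 3))              # ~0.129: failing to see a 20 is the surprise

# The same logic applies to significance testing: run 40 independent
# tests at the p < 0.05 threshold and the chance of at least one false
# positive is substantial.
print(round(1 - 0.95 ** 40, 3))       # ~0.871
```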
Indeed, in Bem’s psi paper, there are often many tests carried out in each experiment on a number of different variables. Worse, Bem is a known proponent of ‘exploratory’ analysis; looking at the data and generating hypotheses from it. This is fine, as long as one conducts follow-up confirmatory work. Remember, if we test enough variables, then some will come back positive by random chance. One way of constraining this is by selecting the tests we wish to conduct beforehand.
There are a number of other excellent statistical points. The main issue raised here is that many psychologists fall into the same statistical pitfalls. Which raises the question of how many studies there are that make ‘significant’ findings by chance. That is not to say that this paper has been fully refuted; I’m certainly in favour of challenging material making its way into the mainstream literature once in a while, especially when it’s as well conducted as this. But to me this seems more like an excellent chance to critique the lack of robustness that academic psychology seems to suffer from. We could also talk about psychology’s reliance on the ANOVA instead of regression, but that’s a book all of its own.
One final point. The paper by Bem ends with an excellent quote from Alice in Wonderland:
“On Believing Impossible Things
Near the end of her encounter with the White Queen, Alice protests that “one can’t believe impossible things,” a sentiment with which the 34% of academic psychologists who believe psi to be impossible would surely agree. The White Queen famously retorted, “I daresay you haven’t had much practice. When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast” (Carroll, 2006, p. 166).
Unlike the White Queen, I do not advocate believing impossible things. But perhaps this article will prompt the other 66% of academic psychologists to raise their posterior probabilities of believing at least one anomalous thing before breakfast.”
This is an excellent approach to science; if we only dealt with that of which we already have a good grasp, we would never have arrived where we are today, nor would we make any future progress. The great minds of every generation have to ‘think outside the box’ and challenge the realms of what is thought probable, and indeed possible. It is noteworthy in Bem’s paper that 34% of psychologists believe psi to be impossible, compared to 2% in other areas such as the natural sciences. It strikes me that physicists have already begun to question the concept that time and causality are linear, so it is perhaps not surprising that they are more willing than psychologists to accept the possibility of reverse causality. If you want a good take on parapsychology, try Jonah Lehrer’s article here.
While I do not ‘believe’ in psi, I am a scientist, and therefore do not believe in ignoring uncomfortable data. And besides, this paper makes a damn good exercise in statistical criticism, thanks to the scientific rigour with which it’s written. I’m glad it was published.
Alzheimer’s disease is a terrible affliction, both for those who suffer with it, and their families. It is a neurodegenerative disorder that involves the gradual loss of memory and ability to function, and is currently irreversible. It is also a disease that becomes more likely the older someone is.
The media have an affinity for devoting a hefty proportion of their science writing to health stories, particularly about age related diseases. It is completely understandable to want to cover the latest advances in treating such conditions. However the problem that sometimes arises is one of accuracy and false hope. While everyone gets things wrong sometimes, it is unhelpful to constantly pump out stories telling us, against the evidence, that a cure is just around the corner.
The latest example: “An instant test at 40 to predict Alzheimer’s: Routine screening could be here in 2 years” in almost every major UK paper. The articles across papers are similar, but I’ll use the Daily Mail’s for illustrative purposes. The article asserts that a 30 second reaction time test on a GP’s computer can predict who will and will not get Alzheimer’s in later years. So what is this based on?
Answer: a paper published in the Public Library of Science (PLoS) One entitled, “Cognitive Deficits Are Associated with Frontal and Temporal Lobe White Matter Lesions in Middle-Aged Adults Living in the Community”. The premise of the paper was that there is evidence of an association in over-60s between certain scarring in the brain and variability of performance on a Reaction Time (RT) test, but that this had not been explored in people of middle age. ‘RT variability’ refers to the variation between reaction times in a set task within each participant.
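As a concrete (and entirely hypothetical) illustration of what within-participant RT variability means, consider two participants with similar average speed but very different consistency:

```python
from statistics import mean, stdev

# Hypothetical reaction times (ms) across six trials for two participants.
steady   = [310, 305, 315, 308, 312, 306]
variable = [250, 410, 280, 390, 260, 370]

for name, rts in [("steady", steady), ("variable", variable)]:
    # Within-participant RT variability is simply the spread of a person's
    # own reaction times, e.g. their standard deviation across trials.
    print(f"{name}: mean = {mean(rts):.0f} ms, SD = {stdev(rts):.0f} ms")
```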
The team used MRI scans to look for lesions in the white matter of the brains of 428 people aged between 44 and 48. They also gave them a raft of RT tests, as well as some other tests of cognitive performance, such as memory and recognition. The main finding was that the association between within-participant variability on RT tests and white matter lesions was present, as it is in over-60s. Scientifically, this is very interesting when considering the aetiology of Alzheimer’s, as it may mean that the damage that causes later cognitive impairments starts much earlier than previously thought, before any noticeable deficits in memory are detected.
However, this was a cross sectional study looking at an association between the physiological state of the brain and a sub-clinical behavioural anomaly (inconsistent reaction times). Remember, the story was that within two years we will be able to predict Alzheimer’s based on RT tests. But this study did not look at whether those who have more variable reaction times will go on to develop cognitive deficits of any kind, let alone Alzheimer’s. All we know (from other studies) is that there may be an association between RT variability and the development of cognitive deficits later in life. What this study has added is that those RT variations are associated with white matter lesions in middle age.
This study did nothing to test whether RT variability predicts cognitive deficits. Therefore the idea that it paves the way for a ‘simple screening test in the GP surgery’ is simply incorrect. Furthermore, even if some predictive value were demonstrated, saying it may be in use within 2 years ignores a raft of salient issues surrounding the validation of such tools – no-one in the health care community should want to be dishing out distressing diagnoses of future Alzheimer’s to patients when the test itself is unvalidated. Such work will require longitudinal studies over a number of years before being introduced to routine practice.
The fact that this study is front page news is down to a common effect in the media, whereby adding brain scans to any story legitimises its contents, no matter how far removed the findings are from the headline. Indeed, the media may not be the only ones overplaying the conclusions here – while science editors should be able to read journal articles for themselves before turning them into front page news, it is noteworthy that the university’s press release for this story made the dodgy “screening within 2 years” claim itself.
The findings of this study are interesting, and certainly useful to those studying a terrible condition. Translating scientific understanding into patient benefit should always be at the forefront of the agenda, but sometimes it’s worth stopping and thinking about whether the journal article you’ve just read really is one step away from a screening or treatment breakthrough, or whether you’ve just been wowed by a brain scan.
Temporary post of my response to this: http://ukhomeopathynews.com/2010/10/the-skeptic-and-the-homeopath/
That video is rather childish and unrepresentative of just about every skeptic I know. Generally speaking, the positions are reversed – any scientist worth their salt will be interested in evidence and its constructive questioning. This does not always happen, granted – there are variations within any disparate group of individuals united by common beliefs. Some who hold religious beliefs are peaceful, some believe in holy war. We therefore cannot assert that all religious people are either hateful or peaceful.
The preferable option is therefore to stick to evidence, not generalisations about people or ad hominems. Speaking as a scientist working in the NHS I would love homeopathy to work – think of the papers that could be published, the new questions it would raise about physics and chemistry, and above all, the benefit to patients in terms of options and cost effectiveness. Looking at the evidence though, (the systematic reviews and meta-analyses from the Cochrane collaboration and others, the Science and Technology Committee evidence check etc) there just doesn’t seem to be anything convincing, in terms of reliability of effect or clinical significance. Most scientists I know never believe something because the media tells them to. Usually because they’ve had at least one of their own papers taken completely out of context at one point or another.
Nor is it a childish proclamation that science is king, but rather an observation that if something cannot be demonstrated to work, then a metaphysical argument that empirical evidence is not necessary is problematic. Any number of other possible theoretical treatments could also work, yet somehow be immune to proof through current scientific method. If we argue that a treatment may work but that science cannot ‘detect’ the effect, anything from spinning three times on the spot to eating cardboard may be effective (examples not meant to be derogatory, simply illustrations of potential absurdities).
I’ve no desire to deride you, your beliefs or what you choose to advocate. And there are certainly real issues here, such as a number of other current publicly funded treatments lacking evidence, that people put far less effort into questioning than homeopathy (the resolution to which should be testing them with equal rigour). But if you have any desire to be taken seriously by others (which presumably is at least partly the point of publishing a blog on the subject), then you may want to think about how you’re putting your message across. A series of absurd straw man arguments using caricatures that come across as quite mentally ill is inappropriate, insulting, and reflects badly upon no-one but yourself.
Friedrich Nietzsche’s famous quote, “That which does not kill me, makes me stronger…” is often accepted as a truism. Naturally, however, the reality when it comes to real people in the real world is a little more complicated than that.
In fact, there is a large body of research demonstrating that what most people would construe as adverse circumstances (bereavements, suffering abuse, etc.), accumulated over the course of a lifetime, reduce a person’s quality of life in the present and result in a greater likelihood of depression and anxiety. However, whether you instinctively agree with Nietzsche or with this early adversity research, it may be that oversimplification of the methodology and statistics involved is obscuring the reality of the situation.
A popular concept in psychology is that of resilience. What is perhaps really surprising about the outcomes of, for example, people who have been abused during childhood, is not the number that end up with psychological problems in adulthood, but the number that turn out perfectly ‘normal’. Increasingly sophisticated methods of statistically analysing the trajectories of people’s mental wellbeing over time allow us to demonstrate that there are substantial differences between individuals in their reactions to adversity. That there are differences between individuals is obvious. However, when looking at how people’s reactions to adversity change over time, the mean of the entire group is often used at each time point. What can then occur is that depressive scores appear to remain stable over time, when in fact there are substantial fluctuations occurring in subgroups, obfuscated by the net mean remaining stable.
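A toy example makes the point: average two subgroups moving in opposite directions and the group mean barely moves at all. The numbers here are invented purely for illustration:

```python
# Hypothetical depression scores at three time points following adversity.
recovering = [20, 14, 8]   # a subgroup steadily improving
worsening  = [8, 14, 20]   # a subgroup steadily deteriorating

group_mean = [(a + b) / 2 for a, b in zip(recovering, worsening)]
print(group_mean)  # [14.0, 14.0, 14.0] -- a flat mean hiding opposite trajectories
```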
Statistical oversimplification can often provide these kinds of barriers to really understanding what’s going on. A paper recently published, and cited widely in the media last week, dealt with the issue of how cumulative lifetime adversity affects both current well being, and resilience when dealing with future adversity. Published in the Journal of Personality and Social Psychology, it did something that is often quite rare in psychology journals. It looked at non-linear relationships.
When psychologists want to know if A is related to B, they generally check to see if there is a linear relationship. Let’s say you wanted to know if depression score, measured by a questionnaire, was related to IQ. You would check whether there is a statistically reliable relationship that allows you to express it as the equation of a straight line, i.e. that for each unit of change in depression score, a constant change can be observed in IQ (incidentally, there does exist a slight negative correlation between depression and IQ – smarter people are less likely to be depressed).
This method is very useful, but there is a problem – just because the relationship between two variables is not a constant, linear one does not mean it doesn’t exist. This is the beauty of the paper by Seery and colleagues. Instead of simply checking whether the relationship between cumulative lifetime adversity and well being was linear, they also checked for curvilinear relationships. Sure enough, they found that a quadratic function – or ‘U’ shape – better fits the data.
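A minimal sketch of the difference, fitting both a straight line and a quadratic to invented U-shaped data (the numbers are hypothetical, not Seery and colleagues’ results):

```python
import numpy as np

# Hypothetical data: distress is lowest at moderate lifetime adversity --
# a 'U' shape that no straight line can capture.
adversity = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
distress  = np.array([9, 7, 6, 5, 5, 5, 6, 8, 10], dtype=float)

for degree in (1, 2):  # 1 = straight line, 2 = quadratic
    coeffs = np.polyfit(adversity, distress, degree)
    rss = np.sum((distress - np.polyval(coeffs, adversity)) ** 2)
    print(f"degree {degree}: residual sum of squares = {rss:.2f}")
# The quadratic leaves far less unexplained variance. Testing it alongside
# the straight line is all that 'checking for a curvilinear relationship' means.
```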
In statistics, it’s important to always keep sight of what the numbers, equations and graphs mean. The relationship here demonstrated that people who have experienced some adversity in their lifetime fare better than those who have experienced none at all, and than those who have experienced a great deal. Additionally, a similar relationship was found for ‘resilience’, which may explain the increased well-being: experiencing some adversity over which one can eventually take control appears to leave people better able to cope with future adversity than those who have experienced little, or a great deal, of adversity.
That which doesn’t kill you might make you stronger – up to a point. Maybe think twice before intentionally traumatising your kids – don’t say I don’t give you any helpful advice.