How to deal with science journalists

August 21, 2009

Recently I was contacted for the first time by a journalist who wanted to know more about my research. She had seen our latest paper and wanted to ask me a few questions! Flattering, of course! Communicating science to a non-scientist audience is an enshrined duty of all researchers, although in reality few of us do much of it.

So how would I go about it?

Well, I got some advice.

  1. Give the journalist a brief and non-technical summary. Do not assume that she has read the paper – she is calling you because she wants to know what was in it.
  2. Make certain that you phrase yourself in a way that lends itself to quotes. Metaphors, similes and other rhetorical devices are recommended.
  3. Ask to see the article before publication, and correct misquotes and other inaccuracies.

And so I did, though a little pressed for time.

The result? Slightly hilarious. You can see it here and here.

Obviously, I didn’t expect that the article would appear on sites that so strongly endorse products and services. Beyond that? I’m a little bit clueless. Will anybody read it? If they do, will it have any impact? Should I have given my answers differently? I don’t know.

But to be honest, the sight of my name in print with “Dr” in front of it is still enough to make my day!


Blatant misconduct

July 16, 2009

Publish or perish!

That’s more or less how it works, and how it probably needs to be. But sometimes it has humorous consequences. A retraction was just published in the Journal of Experimental and Clinical Cancer Research. I quote it in full (except the references):

The corresponding author submitted this article [1] to Journal of Experimental and Clinical Cancer Research although this article had been accepted and previously published by Cancer Biotherapy & Radiopharmaceuticals [2]. The article was also received and subsequently accepted and published by Nucleosides, Nucleotides and Nucleic Acids [3]. Since it has been brought to the attention of all authors that duplicate submission and publication have taken place the decision has been made to retract the article published in Journal of Experimental and Clinical Cancer Research. The authors are deeply sorry for any inconvenience this may have caused to the editorial staff and readers.

Amazingly, these people seem to have published the same article three times, but with different titles and wildly different author lists. And they would probably have gotten away with it if somebody hadn’t noticed and started making trouble over it.

Notice that their retraction contains no admission of actually having done anything wrong! They apologise for the inconvenience caused by their retraction, but not for the multiple publication itself. Could it be that there are quarters where this sort of behaviour is accepted – or, perhaps, even encouraged?

I have always believed that fabrication and plagiarism are more widespread than reported. My guess is that about 15-20% of scientific papers contain deliberate fabrication or plagiarism, and at least 80% of the rest contain subtle omissions, “dressing up” of data, manipulation of images, changes of outcome criteria, and other dubious practices.

What to do?

I don’t know. Open online lab books might be a solution, although they are hard to reconcile with the need, in some cases, for secrecy. In the meantime, we must continue to doubt everything we read.


Monstrous effort to map a transcriptional network

July 8, 2009

The FANTOM consortium report in the latest issue of Nature Genetics that they have measured what happens to the entirety of gene expression during the differentiation of a cell line called THP-1. Not just the expression of the 10 000 most important genes – all of them. At the same time.

Their findings are a heap of data which is probably larger than the whole body of research on medicine and biology up until the early 1900’s. If I try to say what their main finding is, I’d lean towards the interconnectedness of the signaling network. It doesn’t have one single weak spot, where you could knock out a certain gene and profoundly change the network dynamics. Knock-out of some genes had effects on many other parts of the network, but in general the system seems to be robust because of redundancy and interconnectedness. I have drawn similar conclusions in my own latest paper, though my methodology is a pair of binoculars compared to their multinational telescope.

Professor Hayashizaki of the RIKEN Omics Science Center was the general organiser of this study.

My main thoughts, however, upon reading this paper were not so much about the actual research, but more about the way it was done.

  1.  With the advent of large-scale initiatives like these, we will perhaps have charted most of the “connectome” of the cell within the next decades. This is the map of the decision-making pathways. The neuroanatomy of the cell, if you wish. It has enormous potential to explain how, exactly, things go wrong in diseases such as cancer.
  2.  Biology is starting to resemble some branches of physics, where research advances through large concerted efforts. The author list of this paper is half a page long, with the authors’ affiliations taking up another half page. There will be less space for the nerdy loner scientists and greater demand for the entrepreneurial, outgoing kind of researcher in the future.
  3.  Seventeen figures and fourteen tables, plus the whole methods section, have been relegated to the “supplementary material” that is only available online. Reporting on this kind of science in an 8-page article is like writing a short essay on “World War II”: I’m sure the best parts are in there, but you can’t begin to re-enact it from the description. Many of the interesting sub-analyses, which I presume must have been performed, will never see daylight. This is exactly the sort of science that benefits from the innovation of the online journal, where no page limitations are needed. Just last week, for example, I noticed that PLoS ONE had published a paper entitled “New Mid-Cretaceous (Latest Albian) Dinosaurs from Winton, Queensland, Australia”, which is 51 pages long and contains 40 illustrations, mainly of various bones photographed from different angles. Try getting that into a conventional journal!

Full reference:
Suzuki, H., Forrest, A., van Nimwegen, E., Daub, C., Balwierz, P., Irvine, K., Lassmann, T., Ravasi, T., Hasegawa, Y., de Hoon, M., Katayama, S., Schroder, K., Carninci, P., Tomaru, Y., Kanamori-Katayama, M., Kubosaki, A., Akalin, A., Ando, Y., Arner, E., Asada, M., Asahara, H., Bailey, T., Bajic, V., Bauer, D., Beckhouse, A., Bertin, N., Björkegren, J., Brombacher, F., Bulger, E., Chalk, A., Chiba, J., Cloonan, N., Dawe, A., Dostie, J., Engström, P., Essack, M., Faulkner, G., Fink, J., Fredman, D., Fujimori, K., Furuno, M., Gojobori, T., Gough, J., Grimmond, S., Gustafsson, M., Hashimoto, M., Hashimoto, T., Hatakeyama, M., Heinzel, S., Hide, W., Hofmann, O., Hörnquist, M., Huminiecki, L., Ikeo, K., Imamoto, N., Inoue, S., Inoue, Y., Ishihara, R., Iwayanagi, T., Jacobsen, A., Kaur, M., Kawaji, H., Kerr, M., Kimura, R., Kimura, S., Kimura, Y., Kitano, H., Koga, H., Kojima, T., Kondo, S., Konno, T., Krogh, A., Kruger, A., Kumar, A., Lenhard, B., Lennartsson, A., Lindow, M., Lizio, M., MacPherson, C., Maeda, N., Maher, C., Maqungo, M., Mar, J., Matigian, N., Matsuda, H., Mattick, J., Meier, S., Miyamoto, S., Miyamoto-Sato, E., Nakabayashi, K., Nakachi, Y., Nakano, M., Nygaard, S., Okayama, T., Okazaki, Y., Okuda-Yabukami, H., Orlando, V., Otomo, J., Pachkov, M., Petrovsky, N., Plessy, C., Quackenbush, J., Radovanovic, A., Rehli, M., Saito, R., Sandelin, A., Schmeier, S., Schönbach, C., Schwartz, A., Semple, C., Sera, M., Severin, J., Shirahige, K., Simons, C., St. 
Laurent, G., Suzuki, M., Suzuki, T., Sweet, M., Taft, R., Takeda, S., Takenaka, Y., Tan, K., Taylor, M., Teasdale, R., Tegnér, J., Teichmann, S., Valen, E., Wahlestedt, C., Waki, K., Waterhouse, A., Wells, C., Winther, O., Wu, L., Yamaguchi, K., Yanagawa, H., Yasuda, J., Zavolan, M., Hume, D., Arakawa, T., Fukuda, S., Imamura, K., Kai, C., Kaiho, A., Kawashima, T., Kawazu, C., Kitazume, Y., Kojima, M., Miura, H., Murakami, K., Murata, M., Ninomiya, N., Nishiyori, H., Noma, S., Ogawa, C., Sano, T., Simon, C., Tagami, M., Takahashi, Y., Kawai, J., & Hayashizaki, Y. (2009). The transcriptional network that controls growth arrest and differentiation in a human myeloid leukemia cell line Nature Genetics, 41 (5), 553-562 DOI: 10.1038/ng.375


BMC Research Notes – a place to publish the results languishing in my drawer?

July 1, 2009

I have previously addressed the unfortunate bias in the scientific literature that arises from the tendency not to publish results that are negative, or that simply show that a certain direction of scientific exploration is not very promising.

The other day, I learned about the relatively new journal BMC Research Notes. This is an open access online journal with the mission to publish:

scientifically sound research across all fields of biology and medicine, enabling authors to publish updates to previous research, software tools and databases, data sets, small-scale clinical studies, and reports of confirmatory or ‘negative’ results. Additionally the journal welcomes descriptions of incremental improvements to methods as well as short correspondence items and hypotheses.

Here, at last, is a scientific journal that will not shy away from publishing data that will only be valuable to a small set of researchers. Several times I have run across scientific questions that must have been addressed many times before, but whose results were never published – presumably because those researchers could not have known that my question would one day arise. The lesson is that it is difficult to predict which findings might become important in the future, and that the best course is simply to make data available.

The key question is not whether BMC Research Notes will come to be regarded as a “dump journal”. It will – by arrogant investigators who would rather drop a project than publish it in a less prestigious venue. But that attitude does the scientific endeavour a great disservice by adding to the already heavy bias in what gets published.

When you see another researcher’s publication list and it includes only top journals with the occasional Nature or Science paper like icing on the cake, the relevant response is not just “excellent!”, but also “what is this person hiding?”. Where are the great heaps of data that this scientist has generated, but that never made it into one of the top articles? Have they been forever discarded? Is that in anybody’s interest?


The Scientia Pro Publica blog carnival is up

June 15, 2009

Scientia Pro Publica

The sixth edition of the Scientia Pro Publica blog carnival is hosted by Kelsey Abbott at the Mauka to Makai blog. Among the rousing stories of sex, drugs, cannibalism, phylogenetic classification, and sheer madness, you will also find my recent post on Open Access publishing.

Head over there and have a look!


My research is published with Open Access!

June 8, 2009

My latest paper has just been accepted for publication in the Journal of Experimental and Clinical Cancer Research. This is an open-access, online-only scientific journal. When the paper comes out I will cover it in another post. Meanwhile, let me proclaim my fervent commitment to open access scientific publishing.

The cycle of scientific endeavour typically goes like this (substitute your favourite researcher if you want):

  1. Somebody gives me money
  2. I do research
  3. I write up the tattered remnants of my grand designs into a decent manuscript and send it to a journal
  4. The journal sends the paper off to other scientists who review it for free, while I review papers from other scientists in other journals in my spare time because of loyalty to the Scientific Endeavour or something
  5. The journal decides, hopefully, to print it and then charges me at least 1000 USD for the luxury. It also charges subscribers and university libraries for the right to read the article in print or online, making it impossible for anyone outside the system to access the knowledge I have painstakingly assembled.
  6. The Cancer Fund or the NIH (substitute your favourite funding body) counts my journal articles and then, hopefully, gives me more money.

While both I and the funding bodies (which are often backed by public tax money) want the results to be as accessible as possible, the lock-in behind paywalls is an unfortunate consequence of the fact that journal articles are nearly the only metric of achievement by which I can be measured.

Furthermore, I need access to all the journals in my field. It’s not the case that one could simply be substituted for another. This means that my university is almost completely price-insensitive, a situation upon which the scientific publishers have been quick to capitalise. Elsevier, one of the “Big 3” together with Springer and Wiley, has had an operating profit margin above 35% in recent years in the science and medicine section of its business.

A recent industry report (not online) from Deutsche Bank on scientific publishing states that “We believe the publisher adds relatively little value to the publishing process.” This is true. The value added lies, besides the typesetting, mainly in the boost that a prestigious journal’s brand gives to the credibility of the research. And I hardly need to add that this sort of bias in the scientific community is a problem, even though some may benefit from it.

A bunch of corporate behemoths are making obscene amounts of money by keeping (mainly) tax-funded research results out of the public domain. Outrage is warranted.

The solution? Open access publishing, of course!

There are a few different models for open access, including self-archiving of manuscripts and data on public servers. But the simplest and best-functioning solution, in my humble opinion, is the open access journal. It’s just like an ordinary journal but with free online access for everyone. The costs of publication are covered by a fee (again, around 1000 USD) paid by the scientists themselves (us).

The health worker whose hands are in the foreground will be able to read my latest article online without a subscription, should he/she want to.

The cost is no greater than for many traditional journals, and is offset by the greater availability of the article. Some studies suggest that open-access articles are cited twice as often or more, at least in certain fields. But most of all, there is an imperative stemming from our purpose as scientists: to generate knowledge and actively share it around the world. We have no reason to keep supporting the self-serving oligopolies of knowledge that still publish most scholarly articles when we can instead make our work freely available to the entire world.


Alan Sokal of the Social Text hoax visits Stockholm

May 28, 2009

Many of the readers of this blog will be familiar with the 1996 hoax perpetrated against the journal Social Text. The physicist Alan Sokal, in an attempt to expose the low standards of intellectual rigour in contemporary post-modernist debate about science, wrote a parody of a post-modernist theory article – and got it accepted.

The article begins by asserting that many scientists continue to “cling to the dogma […] that there exists an external world, whose properties are independent of any individual human being […].” It continues by weaving a strange cloth of disparate concepts from physics and philosophy, without any real justification, increasing in preposterousness to a magnificent climax, where Sokal claims that Lacan (the notorious psychoanalyst) has derived a mathematical justification for the psychoanalysis of AIDS from differential topology theory. It ends by saying that mathematics must be revised to be able to participate in the struggle against capitalism, patriarchy, and militarism.

Afterwards, he published another article, revealing that the first one was a hoax. This caused considerable debate, most of which is found on Sokal’s website. Richard Dawkins has summarized the arguments from the side of the stringent scientists with his usual humour and brilliant ire in a review of Sokal’s subsequent book.

Alan Sokal and your humble blogger outside the Royal Academy of Sciences

In his speech yesterday at the Royal Academy of Sciences, Sokal set out in more general terms to discuss the importance of the scientific world view.

In essence, his main argument was that the scientific method is simply an effort to find out facts with rational methods, and furthermore, that most of us are completely able to do so in most aspects of our lives. Yet we often keep double standards and cease to question some aspects of our world view. For example, religious people are perfectly able to examine the factual basis of any religion besides their own – i.e. when were the sacred texts written, by whom, on what basis, and why should we believe them. But when it comes to their own religion they revert to the circular logic that the doctrine of the faith is true because the doctrine says it is.

In his talk, Sokal pointed out four main enemies of scientific reasoning:

  1. Postmodernist theorists and extreme social constructivists. Sokal noted with pleasure that this group appears to have retreated somewhat over the past ten years.
  2. Advocates of pseudoscience. Here, Sokal went to some length to explain the impossibility of homeopathy, mainly by showing that it is entirely inconsistent with our current knowledge of chemistry.
  3. Advocates of religion.
  4. Lobbyists and spin doctors. And here, Sokal went into a rather long critique of the Iraq war and the loose factual premises on which the war was founded.
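Sokal’s chemistry argument against homeopathy can be made concrete with a back-of-the-envelope calculation (my own illustration, not taken from his talk): at a typical “30C” potency, the remedy has been diluted 1:100 thirty times over, so not even a single molecule of the original substance is expected to remain.

```python
# Avogadro's number: molecules in one mole of the original substance
AVOGADRO = 6.022e23

# A "30C" remedy is diluted 1:100, repeated 30 times
dilution_factor = 100 ** 30  # = 10^60

# Expected molecules of the original substance left, starting from a full mole
molecules_left = AVOGADRO / dilution_factor
print(molecules_left)  # on the order of 10^-37: effectively none
```

In other words, you would need roughly 10^36 moles of remedy to expect one surviving molecule, which is why any claimed effect is inconsistent with ordinary chemistry.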

After the talk, I and a few others raised the question of whether we should really be so hostile to useful placebos, even when they have absolutely no scientific merit (e.g. homeopathy and acupuncture). After all, these methods can be very useful for patients with certain conditions, such as chronic pain, where real medicine often has little to offer.

He responded that there is a need for an ethical debate on what sort of deception you can subject a patient to, and that the complementary therapists have to face up to that discussion. An excellent argument.

All in all, an enjoyable evening! My favourite quote was: “I am a scientist, not a politician. I have the luxury of saying what I think is true.”

In the audience, I spotted Dr Martin Rundkvist of the Aardvarchaeology blog. He has also written a post about the event.


Scientia Pro Publica is up

May 18, 2009

Scientia Pro Publica
The blog carnival Scientia Pro Publica has just been published at E. M. Johnson’s blog The Primate Diaries. It’s a cornucopia of interesting posts, among which one of my own has been included – on behavioural conditioning of the immune system. Among my favourites in this edition are a post by [weird things] on how a warp drive might work and another by biotunes about the evidence against antioxidant supplements.

Check it out!


Open Science on knol

May 11, 2009

In my last blog post, I discussed the need for a way to make scientific results publicly available more easily.

Damien Chaussabel of the Science 2.0 blog has an interesting idea in this vein. A current paper of his is being drafted as a knol, completely open for everyone to see.

The advantages are obvious: rapid, free dissemination and, in the best case, a chance to get valuable feedback before the paper is submitted to a journal. The drawbacks are equally glaring: it is not indexed by science-specific search engines, so people in the field will find it only by word of mouth; nobody takes responsibility for maintaining the URL over time, so I would have to cite it as a “personal communication”; and it will not count as a merit when the authors apply for jobs or grants.

You might be surprised that scientists have been so slow to simply post their results on the internet. Indeed, this is the first instance I have seen. The reason is that journal articles are the currency that determines everything, and the data should be new when presented there. Because of the inevitable lag times, “new” in that context usually means kept secret for a long time, rather than just discovered. Physicists and mathematicians are exceptions: almost as soon as they had invented the internet, they started circulating preprints there instead of on paper. This practice has grown into a huge database called arXiv, which is now the primary source of literature for many in those fields, and often the only place that a physics or mathematics paper gets published. Nature has started a similar pre-print server for the biological sciences, which has, however, attracted limited interest. Since 2007, it has only archived about 800 pre-print manuscripts.

I like Chaussabel’s idea, and wish more people were trying to find new ways for science to open up. But I am pessimistic about the use of knol and similar unindexed sites for scholarly communication. My next manuscript will be posted on Nature Precedings, and I will beg the readers of this blog to read it and give me comments and feedback!


On publication bias in the laboratory biosciences, and my feeble attempts to avoid it

May 7, 2009

Clinical researchers and epidemiologists are very much concerned about publication bias: the sum of all factors that cause less than the totality of performed experiments to be reported. A well-known bias, for example, is that positive results are much more likely to be published than negative ones. If someone then tries to assess all the evidence for and against the benefit of a certain treatment, the picture will be distorted. Therefore, it is important to publish results even when they are not spectacular.
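The distortion is easy to demonstrate with a minimal simulation (my own sketch, with made-up numbers): suppose journals only publish studies whose estimated effect clears a clearly “positive” threshold. The published literature then systematically overstates the true effect, even though every individual study was honestly reported.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.1   # the real (small) treatment benefit
N_STUDIES = 1000    # studies actually performed
SAMPLE_SIZE = 20    # subjects per study

published, all_results = [], []
for _ in range(N_STUDIES):
    # each study estimates the effect with sampling noise
    estimate = statistics.mean(
        random.gauss(TRUE_EFFECT, 1.0) for _ in range(SAMPLE_SIZE))
    all_results.append(estimate)
    # journals preferentially accept clearly "positive" findings
    if estimate > 0.3:
        published.append(estimate)

print(statistics.mean(all_results))  # close to the true effect of 0.1
print(statistics.mean(published))    # inflated: well above the true effect
```

A meta-analysis restricted to the published studies would conclude the treatment works several times better than it actually does – which is exactly why reporting the unspectacular results matters.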

In the laboratory biosciences, the problem is far more pervasive than in clinical research. I therefore find it a bit surprising that people talk so little about it.

Sins of omission
When we write papers, the objective is to tell a good story. Usually, multiple laboratory methods and experimental systems are used in support of each other. We are far from the studies of yore that investigated a single hypothesis with just one method. In general this is a positive development, of course, driven by ever-better methods for generating data easily.

Stories demand sacrifices for the sake of the narrative. Sidetracks must be removed, darlings killed, and details fitted into the grand plot. Inevitably, data that are not interesting get thrown out the window.

This is not to say that people deliberately withhold contradictory data, at least not very often. But nearly every experiment starts with a small pilot run, and if it turns out contradictory or confusing, it is so very easy to simply prioritise something else and then never publish it on the grounds that it was never conclusive. The final paper may report just half of the hypotheses that were addressed in the course of the study.

And in this manner, most papers are navigated through a sea of uncertainty, leaving dead hypotheses as corpses under the surface, invisible and unknown to all but the scientists who left them there. That can be pruriginously annoying, because then I have to do the experiments myself, even when it is obvious that someone must have tried them before. Some experiments, such as optimisation protocols, are never reported simply because they are boring to everyone except the ten people (sometimes including me) who have to struggle for days or weeks to get the procedure to work in their own laboratories.

Who is to blame?
The current system incentivises everybody quite heavily to publish very selected subsets of their data. Scientific journals want papers with strong evidence that points directly in one direction. Scientists’ grants and reputations are pegged to their success in publishing in the same journals.

In my last project, I did a small experiment which, if positive, would have warranted a bigger experiment to validate the finding. But it was negative. I did it three times, and it came out the same way, so I decided to throw it into the paper anyway. Then the next person who wants to do it doesn’t have to, and I haven’t hidden any data. I stuck it in the supplementary section: files that won’t even be printed, but are accessible on the publisher’s website.

When we got the review comments they were generally positive, but one of the reviewers lamented, inevitably, that that particular experiment was too weak to prove anything. We considered taking it out. But I decided to argue the point instead, this time, and keep it.

I can’t shake the feeling that there has to be a better way for me as a scientist to make my data available to people!