
Dark Matter Discovered – Losing Control Of Your Data October 26, 2009

Posted by gordonwatts in GLAST, physics, physics life, science.

Ok, so it is a sensationalist title. But it was triggered by an archive submission with the following title: Possible Evidence For Dark Matter Annihilation In The Inner Milky Way From The Fermi Gamma Ray Space Telescope. Wow! That is quite a title!

First, a bit of background on this paper. It was authored by two theorists who analyzed publicly released FermiLAT/GLAST data. Fermi is a NASA-funded project, and one of its stipulations is that all the data it collects must be made publicly available 6 months after it has been collected. The authors of the paper downloaded the data, used a simple background model, added in their dark matter theory, and did a fit. And pow:

[image] The red points are the data from Fermi, the dash-dot line and the dotted line are backgrounds (galactic diffuse, and a single TeV source), and the dashed line is their model. Nice fit, eh? Yep – looking at this my first reaction is “Wow – is this right? This is big – how did Fermi miss this?” and then I run across the hall to find someone who actually knows this data well.
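To make “added in their dark matter theory, and did a fit” concrete, here is a minimal sketch of what that kind of spectral fit looks like. To be clear: this is my own illustration, not the authors’ code. The functional forms, parameter values, and the placeholder “data” are all invented assumptions, and a real LAT analysis is far more involved.

```python
# A minimal sketch (NOT the paper's actual analysis) of fitting a
# gamma-ray spectrum as: diffuse background + one TeV point source
# + a dark-matter annihilation component. All functional forms and
# numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def galactic_diffuse(E, norm, index):
    # Simple power law standing in for the galactic diffuse emission.
    return norm * E**(-index)

def tev_source(E, norm, index, E_cut):
    # Power law with an exponential cutoff for the known TeV source.
    return norm * E**(-index) * np.exp(-E / E_cut)

def dm_annihilation(E, norm, m_chi):
    # Crude stand-in for an annihilation spectrum that cuts off at
    # the hypothetical WIMP mass m_chi (GeV).
    return np.where(E < m_chi,
                    norm * (E / m_chi)**(-1.5) * np.exp(-E / (0.3 * m_chi)),
                    0.0)

def total_model(E, d_norm, d_idx, s_norm, s_idx, E_cut, dm_norm, m_chi):
    # The fitted model is just the sum of the three components.
    return (galactic_diffuse(E, d_norm, d_idx)
            + tev_source(E, s_norm, s_idx, E_cut)
            + dm_annihilation(E, dm_norm, m_chi))

# In the real analysis, E, flux, and flux_err would come from the
# publicly released Fermi-LAT data; here they are placeholders.
E = np.logspace(-0.5, 2, 20)                       # energy bins, GeV
flux = total_model(E, 1e-6, 2.7, 1e-7, 2.2, 50.0, 5e-7, 30.0)
flux_err = 0.1 * flux                              # fake 10% errors

p0 = [1e-6, 2.7, 1e-7, 2.2, 50.0, 1e-7, 30.0]      # initial guesses
popt, pcov = curve_fit(total_model, E, flux, p0=p0, sigma=flux_err)
print("best-fit DM normalization and mass:", popt[5], popt[6])
```

The pitfall discussed next falls straight out of this structure: if total_model is missing real source components, a least-squares fit will happily shovel their flux into the dark matter term.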

It turns out the basic problem with this analysis is that not all sources of background are included. This is the galactic center, and, as one would imagine, there are lots of sources there – not just the one TeV source modeled above. My impression from hallway conversations is that when you take into account all of these sources there is much less (if any) room left for the dark matter model. I don’t think that Fermi has published a paper on this yet, but I suspect they will at some point soon.

Ok, so all’s well. Fermi will publish the paper and everyone will know the right way to do this non-trivial analysis. Except that things got away from them. Nature News has picked it up and written a short update, and Nature News is pretty widely read. Now Fermi has a PR problem on its hands – people are running around talking about their data and the collaboration has not really had a voice yet (the science coordinator for Fermi was interviewed for the piece, but her comments were relegated to the end of the post). Fermi is a big collaboration (no, not the size of the LHC experiments, but big); even if their paper is close to publication, it would probably be at least a month before the collaboration could agree on a response. So what to do?

There are a lot of issues surrounding making data public. To first order, it is the taxpayers who are paying for these experiments, so the data should be public. On the other hand, you can already see that besides the work and infrastructure of making the data public (which costs real $$ – especially for a big experiment like Fermi or one of the LHC experiments), you have to respond to other folks who analyze your data – basically pointing out their mistakes and trying to help them along, even when they might be in competition with some of your internal analyses. In NASA’s case all the data has to be made public – it is written into every grant submission, and NASA even provides money for it. This is not currently the case for particle physics. In many of these advanced experiments the data is quite complex, and someone who can’t depend on the large infrastructure of the experiment to help interpret it is bound to have some difficulties.

One only wishes that the authors had gotten in contact with some Fermi folks before submitting their note to the archive…


Comments»

1. Max Sang - October 26, 2009

I don’t think it’s such a big deal really. These clowns’ reputations will be damaged by their hubris (the damage is therefore self-limiting, in a sense), and the paper would be rejected by peer review anyway. A few instances of public humiliation like that and people will start being a bit more respectful of other people’s expertise. Maybe. Academics can have pretty big egos. :)

The real issue here for me is the rise of the unreviewed preprint archive and the marginalising of journals due to their lag time and restrictive publication policies. We assume that things which turn up on preprint servers are things that will be published ‘properly’ anyway in a few months, but sometimes they’re just plain wrong.

2. Gordon Watts - October 26, 2009

Most of the papers on that archive are never submitted for review. I think it is important to have an outlet like that, but it is also important to have a review process in place. The problem is that a review process costs money to do correctly. And the publishers are also bent on making big profits – to me it isn’t so much the restrictive publishing policies that cause problems as how much $$ they charge libraries to purchase their journals. I see budget cuts eliminating journals all the time. I don’t think there will be much in the way of paper journals soon!

But you are right about the first point. This is how everyone learns to do this sort of thing. I’ve seen it in particle physics as well – theorists learned to use PGS (a toy detector simulator – certainly not as good as the real ones the experiments use, but sophisticated enough to keep theorists from being laughed at when they suggest something) because of it.

