
Curious About ATLAS Reconstruction? June 24, 2008

Posted by gordonwatts in ATLAS, computers.

This dataflow diagram showed up the other day (warning: VERY LARGE JPEG) in an ATLAS e-news article. It shows how we go from raw data in ATLAS to fully reconstructed objects. It is part of an article describing our reconstruction software performance.

On the left side of the diagram the digitally recorded signals enter. We also have information about where in space those signals occurred and, of course, in which detector (calorimeter, muon system, tracking, etc.). As the data moves from left to right it is slowly assembled into objects like charged particle tracks, muon candidates, electron candidates, etc. One thing to keep in mind is that we never know with 100% certainty what we are looking at. We might be 99% sure we are looking at an electron, for example, but never 100%. Heck, when we start we don’t even know if those signals that appear in the detector are real: they could be noise! These uncertainties must be tracked along with the objects as they move from left to right. It takes of order 15 seconds to get from one side of that diagram to the other on a modern processor (we write data to tape at about 200 Hz).
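To make that left-to-right flow a little more concrete, here is a toy sketch in Python. The class names, fields, and numbers are all invented for illustration (the real ATLAS reconstruction is a large C++ code base); the only point is that every reconstructed object carries its confidence along with it, because we are never 100% sure:

    # Toy illustration only -- names and numbers are made up for this sketch,
    # not taken from the real ATLAS software.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        kind: str          # "electron", "muon", "track", ... or maybe just noise
        energy_gev: float
        confidence: float  # never 1.0: we might be 99% sure, but never 100%

    def reconstruct(raw_signals):
        """Assemble raw detector signals into higher-level candidate objects,
        carrying the uncertainty along as the data moves 'left to right'."""
        candidates = []
        for sig in raw_signals:
            # In the real chain this is many stages (clustering, tracking, ...);
            # here we just wrap each signal with a made-up confidence.
            conf = min(0.99, sig["quality"])   # cap it: never claim certainty
            candidates.append(Candidate(sig["guess"], sig["energy_gev"], conf))
        return candidates

    if __name__ == "__main__":
        fake_signals = [{"guess": "electron", "energy_gev": 42.0, "quality": 0.97},
                        {"guess": "track",    "energy_gev":  3.1, "quality": 0.80}]
        for c in reconstruct(fake_signals):
            print(c)

    # Back-of-the-envelope from the numbers above: ~15 s per event on one core
    # and ~200 events per second written to tape means keeping up in real time
    # takes on the order of 200 * 15 = 3000 cores' worth of CPU.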

I don’t know what the oldest bit of software in ATLAS is. We, of course, have all of the software in a CVS repository. But this is the product of over 10 years of work (and that is just the software, remember, not the hardware, which has been going on even longer).

Let’s hope that it works! 🙂

Agile programmers we are not!

Comments »

1. tim head - June 24, 2008

Do you know how much is/can be done in parallel on a per-event basis? Or is one event dealt with by one machine and one machine only?

2. gordonwatts - June 25, 2008

Currently our reconstruction programs are single-threaded, so it is one event per core. There may be some very minor multi-threaded work going on, but I’d say it is less than 5% in most HEP reconstruction code.

We just run multiple jobs in order to take advantage of the extra cores. For example, a 4-core machine will have 4 different copies of the reconstruction running, working on 4 different events. Each copy is totally unaware of the others.

So far our performance measurements have indicated that this works — we get the expected speedup. At some point, however (32 cores? 64 cores? 16 cores?) we are not going to be able to support this model any longer: we will be trying to move too much data in and out of each chip.

How much could be done? I would guess quite a bit. No one has taken a serious look at our reconstruction code with an eye to making it multi-threaded. As a result, I suspect there is a great deal of low hanging fruit.
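For the curious, here is a minimal sketch of the one-single-threaded-job-per-core scheme described above. The program name and its arguments are hypothetical stand-ins, not the actual ATLAS reconstruction command:

    # Sketch only: "reco_program" and its options are invented placeholders.
    import os
    import subprocess

    n_cores = os.cpu_count() or 4

    # Launch one independent reconstruction process per core; each one works on
    # its own slice of events and knows nothing about the others.
    jobs = []
    for core in range(n_cores):
        jobs.append(subprocess.Popen(
            ["reco_program", "--input", f"events_part{core}.dat"]))  # hypothetical command

    # Wait for all copies to finish. As long as memory bandwidth holds out,
    # throughput scales roughly linearly with the number of cores.
    for job in jobs:
        job.wait()

The point is simply that the operating system’s scheduler provides all of the parallelism; the reconstruction code itself never has to know the other copies exist.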

3. 0.6 Lines Of Code Per Hour « Life as a Physicist - June 30, 2008

[…] Imagine if all of this code was written at 0.6 lines per […]

