
Long Lived Particles Break HEP May 13, 2009

Posted by gordonwatts in Conference, physics.

In my last post I mentioned that long lived particles break some basic assumptions that we make in the way we design our software and hardware in HEP. One fascinating example of this that was brought into clear relief for me at this workshop is the interaction between Monte Carlo generation and detector simulation. Look again at the picture I had up last time:

[Event display picture from the previous post]

While what I’ve shown above is real data, let’s imagine it was a simulation for the sake of discussion. Simulation is crucial – it allows us to compare what we think we know against nature. You might imagine that the code that generates everything that happens at the very center of that picture on the left is different from the code that propagates the particles out through the detector (the green, yellow, and blue lines). In fact, this is exactly how we structure our code in HEP – as a two-step process.

The first step is to generate the actual physics interaction. Say, top quark production, or Higgs production and decay, or a Hidden Valley decay. As output the generator produces a list of particles and the directions they are heading in. Most of them will then stream through the detector leaving tracks and data similar to the right side of the picture above. At this point we’ve got the starting point for all those “lines” or particle trajectories on the left.

Then the detector simulator program takes over. Its job is to simulate the detector. It takes each one of the particles and steps it, a millimeter at a time, through the detector. As it moves through the detector it decides if the particle should lose some energy interacting with the material, leave a signal in a detector element, etc. Once the simulation is done, what we have is something that looks as if the experiment was actually run – we can feed it through the same software that we use for real data to find electrons, tracks, etc.
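To make that division of labor concrete, here is a minimal sketch of the two-step structure in Python. Everything in it – the class, the function names, the toy energy-loss model – is invented for illustration; real chains use generators like PYTHIA plus GEANT4 for the simulation. The point is only that step one knows nothing about the detector, and step two treats each particle independently.

```python
# Toy sketch of the traditional two-step HEP simulation chain.
# All names here (Particle, generate_event, simulate) are invented for
# illustration -- this is not actual HEP code.
import random
from dataclasses import dataclass

@dataclass
class Particle:
    name: str
    energy: float        # GeV
    direction: tuple     # unit vector (x, y, z)

def generate_event():
    """Step 1: the 'generator' -- the physics of the collision only.
    It returns a list of outgoing particles and knows nothing about the detector."""
    return [
        Particle("mu-", 45.0, (0.1, 0.7, 0.7)),
        Particle("jet", 80.0, (-0.3, -0.6, 0.74)),
    ]

def simulate(particles, step_mm=1.0, detector_radius_mm=1500.0):
    """Step 2: the 'detector simulation' -- steps each particle outward through
    the detector, one at a time, recording the energy it deposits."""
    hits = []
    for p in particles:                          # each particle handled independently
        pos, e = 0.0, p.energy
        while pos < detector_radius_mm and e > 0:
            pos += step_mm
            de = random.uniform(0.0, 0.01) * e   # toy model of energy loss in material
            e -= de
            hits.append((p.name, pos, de))       # a 'hit' the detector would record
    return hits

hits = simulate(generate_event())
print(f"{len(hits)} toy detector hits recorded")
```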

But some of these long-lived particle models have particles that interact as they move through the detector. The Quirk model is the poster-boy for this (odd, a model without a web page! At least not one that I could find). As pairs of these quirks move through the detector they interact with each other and with the material they are traveling through. In short – the detector simulation has to act a bit like the generator – we are mixing the two steps.
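A rough way to see why this breaks the two-step picture: in the toy stepping loop above, each particle was advanced on its own. For a quirk-like pair you would have to advance both members together and apply a force between them at every step – generator-level physics living inside the simulation’s stepping loop. Here is a sketch of a single such step; the constant “string tension” pulling the pair together is a made-up stand-in, not the real Quirk dynamics, and none of this is the GEANT4 API.

```python
# Toy sketch: one simulation step for a quirk-like pair whose members pull on
# each other.  The pair must be advanced together -- you cannot step them one
# at a time, which is exactly what breaks the usual generator/simulation split.
import math

def step_quirk_pair(pos1, pos2, p1, p2, step_mm=1.0, k=0.001):
    """Advance both members of the pair by one step.
    k is an invented 'string tension' (GeV per mm) pulling them together."""
    d = [b - a for a, b in zip(pos1, pos2)]          # vector from 1 to 2
    r = math.sqrt(sum(x * x for x in d)) or 1e-9
    u = [x / r for x in d]                           # unit vector from 1 to 2

    # Generator-like physics inside the stepping loop: each particle's momentum
    # is nudged toward its partner before the ordinary material step is taken.
    p1 = [pi + k * step_mm * ui for pi, ui in zip(p1, u)]
    p2 = [pi - k * step_mm * ui for pi, ui in zip(p2, u)]

    def advance(pos, p):                             # ordinary straight-line step
        norm = math.sqrt(sum(x * x for x in p)) or 1e-9
        return [x + step_mm * px / norm for x, px in zip(pos, p)]

    return advance(pos1, p1), advance(pos2, p2), p1, p2

# Example: a back-to-back pair, 1 mm apart, each with 10 GeV of momentum along z.
pos1, pos2, p1, p2 = step_quirk_pair([0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                                     [0.0, 0.0, 10.0], [0.0, 0.0, 10.0])
```

The point is purely structural: the force term couples the two trajectories, so the neat “generate, then simulate each particle independently” factorization no longer holds.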

The main detector simulation program (GEANT4) – written in C++ and carefully planned out – does not look anything like an event generator – written in FORTRAN (common blocks!? ‘nuff said – wait, that was flame bait, wasn’t it?). My guess is it will take a year or so to get GEANT4 updated to accommodate models like Quirks. While it isn’t a complete rewrite of the package – it was quite generally designed – the GEANT4 folks probably never anticipated having to allow for interactions like this.

Which makes me wonder if in the future generators will really just be subroutines (methods, sub-classed objects, etc.) in detector simulations? 🙂 We all know that detectors are the most important things out there, after all!


Hidden Valley Workshop May 11, 2009

Posted by gordonwatts in Conference, Hidden Valley, physics.

[IMG_1332]

I spent a very enjoyable week attending a workshop here at UW – the Workshop on Signatures of Long-Lived Exotic Particles at the LHC. These workshops are funded by the DOE – and allow us to fly in a small group of experts to discuss a particular topic for a week. As you might imagine, things can get pretty intense (in a good way!).

The point about long-lived particles is that they are long lived! And not much else in the Standard Model is long lived the way these guys can be. Sure, a bottom quark might travel a few millimeters – and most of us tend to call that long-lived. But the things considered at this workshop can go much further – meters, even. All sorts of models can generate these particles – like SUSY or Hidden Valley.

Nothing in a particle physics experiment is really designed for these things – not the hardware and not the software, certainly. Not clear our brains are thinking about them too well either! This is part of what makes them so fascinating!

Take the hardware, for example. Just about everything in the Standard Model decays very quickly after it is created in a collider. Millimeters:

[Exploded CDF event display]

That is an exploded schematic view of what happens in our detector (this is a CDF event I’ve stolen from Fermilab). The inner circle on the left is about 2 inches in diameter. You see the exploded view on the right? The distance between the primary vertex and the secondary vertex is about a millimeter or so. That is a normal long-lived particle for particle physics. All of our code and the design of our detectors are built to discover exactly those kinds of long-lived particles.
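For a sense of the scale, the flight distance is L = βγ·cτ, and cτ for a B hadron is roughly half a millimeter. A quick back-of-the-envelope check (the 30 GeV momentum below is just an illustrative round number, not taken from this event):

```python
# Back-of-the-envelope flight distance of a B hadron: L = (p/m) * c*tau.
m_B  = 5.28    # B+ meson mass in GeV
ctau = 0.49    # c * lifetime of the B+, in mm
p    = 30.0    # GeV -- an assumed, illustrative momentum

L = (p / m_B) * ctau      # beta*gamma = p/m for a relativistic particle
print(f"flight distance ~ {L:.1f} mm")   # ~ 2.8 mm: millimeters, as described above
```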

The picture at the top of this post is from the small conference dinner we set up at Anthony’s, a nice local fish place here in Seattle. I’ve got more pictures from the dinner posted on my flickr account.

Observed! March 5, 2009

Posted by gordonwatts in physics.

Check it out:

Abstract: We report first observation of the electroweak production of single top quarks in ppbar collisions at sqrt(s) = 1.96 TeV based on 2.3 fb^-1 of data collected by the D0 detector at the Fermilab Tevatron Collider. Using events containing an isolated electron or muon and missing transverse energy, together with jets originating from the fragmentation of b quarks, we measure a cross section of sigma(ppbar -> tb + X, tqb + X) = 3.94 +- 0.88 pb. The probability to measure a cross section at this value or higher in the absence of signal is 2.5 x 10^-7, corresponding to a 5.0 standard deviation significance for the observation.
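As a sanity check on the quoted numbers, a one-sided p-value of 2.5 × 10^-7 does indeed correspond to about 5.0 Gaussian standard deviations (the usual convention for these significance quotes):

```python
# Convert the quoted p-value into a one-sided Gaussian significance.
from scipy.stats import norm

p_value = 2.5e-7
significance = norm.isf(p_value)    # inverse survival function of the unit Gaussian
print(f"{significance:.1f} sigma")  # ~ 5.0, matching the abstract
```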

I don’t think it was 5 years in the making – but close to that. Congratulations to everyone involved, and there were a lot of people.

My students, my post-doc, and I were intimately involved in the evidence paper, but for this one I was mostly looking in from the outside. But getting to 5 sigma was definitely harder than our earlier 3 sigma result. I can’t tell you how happy I am that this result has been submitted to the journals! Excellent!

Update: And CDF got it too – a joint discovery!

We report observation of single top quark production using 3.2 fb^-1 of ppbar collision data with sqrt(s) = 1.96 TeV collected by the Collider Detector at Fermilab. The significance of the observed data is 5.0 standard deviations, and the expected sensitivity is in excess of 5.9 standard deviations. We measure a cross section of 2.3 +0.6 -0.5 (stat + syst) pb, extract the CKM matrix element value |Vtb| = 0.91 +- 0.11 (stat + syst) +- 0.07 (theory), and set the limit |Vtb| > 0.71 at the 95% C.L.

English Language Summaries December 19, 2008

Posted by gordonwatts in D0, physics, physics life.

This is pretty neat. The RNA Biology journal is now requiring a Wikipedia article along with every submitted paper. The guidance from the journal is as follows:

At least one stub article (essentially an extended abstract) for the paper should be added to either an author’s userspace at Wikipedia (preferred route) or added directly to the main Wikipedia space (be sure to add literature references to avoid speedy deletion). This article will be reviewed alongside the manuscript and may require revision before acceptance. Upon acceptance the former articles can easily be exported to the main Wikipedia space.

Keep in mind that Wikipedia articles are to be targeted at a level that an undergraduate could comprehend. Try to avoid jargon and do provide links to other Wikipedia articles at the first use of specific terms, e.g. [[RNA]]. Also the title of the page should appear in bold at the first use of the text of the article, e.g. "eRNA."

This is fantastic. For a long time here at DZERO we were trying to write English Language Summaries (or Plain English Summaries) of all of our papers. For example, here is one for an old Z+b analysis. These were aimed at people who weren’t particle physicists but had some real interest in the science – the interested general public. We have mostly given up on this, however (I haven’t followed why). Currently the best summaries of this nature that I know about are on a blog – Tommaso’s, specifically (e.g. here and here for recent examples).

But Wikipedia is a great idea! It is an increasingly popular search destination. And it is, supposedly, better organized than a blog. And more permanent. Writing the results up there would, I think, be a great idea. The one thing this doesn’t address is a central pillar of the power of Wikipedia: interlinking. For these articles to really fit in they have to be linked. And if similar results (for example, measurements by both CDF and DZERO of the same thing) are presented, then pages would have to be combined or correctly linked. Perhaps a page per paper, and then other pages that discuss the specific results? The experiments could appoint topical editors (i.e. service work) who maintain all the W/Z results, all the Higgs results, etc. Ok, now this is starting to sound like lots of work!

A neat idea, however!

I found this while reading read/write web.

Precision Science August 25, 2008

Posted by gordonwatts in physics.

You can tell how old a field’s set of tools is by how precise its measurements are. Take the top quark. It’s been around since 1995. The latest top quark mass result from both CDF and D0 is 172 +- 1.22 GeV – so we know it to better than 1%.

Some of the most stunning recent discoveries in science have been dark energy and dark matter. Well, I guess I shouldn’t call them discoveries — we don’t know what they are yet — but the fact that something is there is definitely a discovery. But the thing about astrophysics is that it isn’t a precision field.

Perhaps that is changing now – from an article on a new measurement of the Hubble constant done using the Hubble space telescope:

The news was not in Dr. Riess’s value… , but in the precision with which his group claimed to have measured it: an uncertainty of only 4.3 percent.
Only 30 years ago, distinguished astronomers could not agree within a factor of two on the value of Hubble’s constant, leaving every other parameter in cosmology uncertain by at least the same factor and provoking snickers from other fields of science.

Actually, it’s even more recent than that! I remember a rather famous string theorist standing up and claiming “Hey – in cosmology we have finally learned how to use error bars!” And then poking fun at the size of the errors in astrophysics.

But that is always the way when you find something new. When we discovered the top quark, we basically knew it was there and kind-a knew its mass. We have since spent the last 15 years making the measurement steadily more precise (knowing that mass very well tells us a lot about where to find the Higgs).

Getting down to the 1% level – or even the 5% level – is a lot of careful work. And, at some level, not as much fun as actually being the first to measure the value. But after verifying that the discovery is real, it is the most important thing. That is the beauty of science: all the numbers are connected. The better you know one set of numbers, the better you can predict another.

Getting the top quark precision down has been 15 years of hard hard work, many graduate student theses, and many post-doc years. But because of that we know a lot more about where to hunt for the Higgs. Doing the same in astrophysics is bound to help with the quest to understand dark energy and dark matter. Can’t wait!

P.S. Can you tell I wrote this on vacation? I’m reading the newspaper!!

How Hard Will The Hunt Be? August 6, 2008

Posted by gordonwatts in D0, Fermilab, Higgs, physics.

Yesterday I mentioned that the Tevatron experiments had finally started to rule out the Higgs. I thought I’d post another plot that shows exactly how hard it will be – and so gives you an idea of how much hope the Tevatron has of actually catching the Higgs. Click on the plot to get an enlarged version of the jpeg (here for details).

The most important lines in that plot are the black one (1-CLs Observed) and the thick blue 95% CL line. The thick blue line marks the threshold at which, in our best statistical estimate, we can say with 95% confidence that there is no Higgs at that mass. While the blue line is the “goal”, the black line is where we are now – the current observation. A lot goes into that black line – many different physics analyses contribute (from both D0 and CDF), the physics of the Higgs decay, the physics of how the Higgs boson is supposedly made, and how good our detector is at seeing the Higgs. As you can see, we have just peaked above the 95% level near 170. And that is what allows us to say that we’ve excluded the Higgs around 170 GeV.
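For anyone not used to these plots, the “1-CLs” quantity comes from the CLs method. Here is a deliberately over-simplified counting-experiment sketch of the logic – the real combination uses many channels, shape information, and systematic uncertainties, and the event counts below are invented:

```python
# Toy CLs exclusion for a single Poisson counting experiment.
# CLs = CL_{s+b} / CL_b; a Higgs mass point is excluded at 95% CL when
# CLs < 0.05, i.e. when 1 - CLs > 0.95 -- the quantity plotted as the black line.
from scipy.stats import poisson

def one_minus_cls(n_obs, b, s):
    cl_sb = poisson.cdf(n_obs, b + s)   # how unlikely data this low is if the signal exists
    cl_b  = poisson.cdf(n_obs, b)       # the same, under background-only
    return 1.0 - cl_sb / cl_b

# Invented numbers: 100 expected background events, 30 expected Higgs events,
# and we happened to observe only 95 -- no sign of the extra 30.
print(f"1 - CLs = {one_minus_cls(n_obs=95, b=100.0, s=30.0):.3f}")   # > 0.95: excluded
```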

Now, the future. You’ll note that the curve is pretty flat near where it peaks above the line around 170. That says to me that as we add more data and make minor analysis improvements we will be able to quickly broaden the range over which the observed line sits above the 95% CL line. Where the black line is falling steeply, however, it will require a huge amount of work (if it is even possible at the Tevatron).

Finally, in yesterday’s post the plot started at 114 GeV. This one starts at 155. What about everything from 114 to 155? Yes — we are working on that. For example, at D0 we have individual results already (and if you look at this plot, given the discussion above, you can see how we are doing at working towards ruling things out at low mass – it is a very different type of plot, but you can guess what is going on even if you are not familiar with it). I couldn’t find the recent update of the CDF combined results. But the low mass combination between the experiments was not completed in time for ICHEP. I’m hopeful that we will see it soon – but as they say, it ain’t out until it is ready to be out!

A hunting we will go… August 5, 2008

Posted by gordonwatts in D0, Higgs, physics.

See that little red blob around 170? That is the Tevatron starting to seriously tackle the final big physics problem left on its plate. Where is the Higgs? The question is — will it finish the job before the LHC starts producing real physics?

The numbers on that plot are the mass of the Higgs boson, the final bit of the Standard Model we physicists haven’t directly observed. The last experiments to search for the Higgs were the LEP experiments. As you can see, they searched up to 114 GeV. The Tevatron is searching from 114 up as high as it can go — it so happens the first bit it was able to exclude was around 170 GeV in mass.

The Higgs mechanism is what gives most particles mass. If it were absent from our theory, then many of the masses (and other things) we have already measured would come out wrong. That does not mean, by the way, that the Higgs has to exist – but something like it does have to exist. The Standard Model Higgs is just the simplest explanation we came up with to fix the masses. If that whole range is searched and nothing is found – that would be huge news. And very puzzling!

Press Release Here. And combined CDF and D0 note describing the analysis here.

Basic Physics in ATLAS July 17, 2008

Posted by gordonwatts in ATLAS, physics, Uncategorized.

There are times when I worry that things I have taught in introductory physics – like electricity and magnetism – aren’t really used in particle physics (At UW these are called Physics 121, 122, and 123).

The biggest example is momentum conservation. We use this all the time. In fact, one of the primary ways we will discover a new beyond-the-standard-model particle is via momentum conservation. A common line of reasoning is that we’ve not been able to detect this particle up to now because it doesn’t interact with our matter and our detectors the way we expect it to. This is where basic physics comes to the rescue. We know the initial momentum of the collision in our detector. If this new particle were to fly off into the distance and not interact with our detector, then when we summed up the momentum of all of the outgoing particles… well, there would be some missing momentum! Score! Of course, it isn’t quite that simple – things like neutrinos will mimic exactly that signal – but there are ways around it.
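In code the idea really is that simple: the missing transverse momentum is just the negative of the vector sum of everything you did see. A toy sketch (the particle list is invented; real reconstruction sums calorimeter cells, corrects for muons, and so on):

```python
# Missing transverse momentum from momentum conservation.  The colliding beams
# carry essentially no transverse momentum, so whatever visible momentum does
# not balance must have been carried off by something the detector never saw.
import math

# (px, py) in GeV for the visible reconstructed objects -- invented values
visible = [(40.0, 10.0), (-15.0, 25.0), (5.0, -60.0)]

sum_px = sum(px for px, _ in visible)
sum_py = sum(py for _, py in visible)

met_x, met_y = -sum_px, -sum_py              # the 'missing' part
met = math.hypot(met_x, met_y)
print(f"missing transverse momentum ~ {met:.0f} GeV")
```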

The second place basic physics often comes into play is in detector construction and operation. For example, ATLAS has two large and very powerful magnetic fields. The first is the inner tracking field, and the second is the outer toroid field. Magnetic fields interact – think of bringing the north poles of two magnets together. So these two fields were carefully designed not to interact.

Except that one has to pump current through the region of the outer toroid field to reach the inner solenoid magnet. As anyone who has taken a basic E&M course will tell you, a current generates a magnetic field. This means the cables that carry the current have to be able to withstand the force of the magnetic field interaction! At these field strengths, and with thousands of amps of current flowing, that is a lot of force.
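The size of the effect is easy to estimate with the freshman formula F = B·I·L. I don’t know the exact current or the field at the cable’s location, so the numbers below are round illustrative guesses – but the scale is the point:

```python
# Force per metre of cable carrying current I in a magnetic field B: F = B * I * L.
# B and I are round, assumed numbers, not the actual ATLAS values.
B = 2.0        # tesla -- an assumed typical field at the cable
I = 20_000.0   # amperes -- big superconducting magnets run at tens of kiloamps
L = 1.0        # metre of cable

F = B * I * L
print(f"~{F/1e3:.0f} kN per metre, roughly {F/9.8/1e3:.0f} tonnes-force")  # ~40 kN/m
```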

Of course, the engineers knew about this, and designed the cable housing to withstand this. Trickier than it sounds since all of this is superconducting. Still, it was nice to hear the reported successful test of this.

In the picture the 8 large tubes that surround the ATLAS detector generate the toroid field – they are 8 really giant superconducting magnets.

What is ASTRA? June 26, 2008

Posted by gordonwatts in physics, politics.

Lately I’ve been getting science funding updates from an organization calling itself ASTRA (www.aboutastra.org and www.usinnovation.org). What do people know of this organization? Is it on the up-and-up? While the material seems focused on science, all the links in the email are redirected through “http://x.jtrk12.net/” — which seems a bit suspicious – it claims “this domain is used as part of a tracking mechanism in an e-mail marketing application”. Which means they are tracking to see if you clicked on the links in the email. Which makes me take a dim view. Anyone know?

The Cost Of Free GRID Access June 13, 2008

Posted by gordonwatts in computers, physics, science, university.

I was giving some thought to the health of our department at the University of Washington the other day. Cheap and readily available computing power means new types of physics simulations can be tackled that have never been done before. Think of it like weather forecasting – the more computing power brought to bear, the better the models are at predicting reality. Not only are the old style models better, we can try new weather models and make predictions that were never possible with the previous versions. The same thing is happening in physics. Techniques and levels of detail we never thought possible are now tackled on a regular basis. The NSF and DOE both have programs specifically designed to fund these sorts of endeavors.

This means there is a growing need for a physics department to have a strong connection to a large computing resource – in house or otherwise – in order for its faculty members to be able to participate in these cutting edge research topics.

Particle physics is no stranger to these sorts of large-scale computing requirements. In ATLAS, our current reconstruction programs take over 15 seconds per event — and we expect to collect 200 events per second — so we would need a farm of 200*15 = 3000 CPUs just to keep pace. And that says nothing about the need to reprocess, or the huge number of Monte Carlo events we must simulate (over 2 minutes per event). And then we have to do this over and over again as we refine our analysis strategy. Oh, and let’s not forget analyzing the data either!
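The arithmetic behind those numbers, plus the same estimate for the Monte Carlo (the assumption that we simulate about one MC event per collected data event is mine, for illustration):

```python
# Back-of-the-envelope CPU counts for keeping up with the data.
event_rate_hz  = 200    # events collected per second
reco_s_per_evt = 15     # reconstruction time per event, seconds
mc_s_per_evt   = 120    # Monte Carlo simulation time per event ("over 2 minutes")

reco_cpus = event_rate_hz * reco_s_per_evt
print(f"prompt reconstruction: {reco_cpus} CPUs")        # 3000, as in the text

# Assumed for illustration: one simulated MC event per data event collected.
mc_cpus = event_rate_hz * mc_s_per_evt
print(f"MC at the same rate: {mc_cpus} CPUs")            # 24000 -- hence the GRID
```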

However, even though many of us are located at universities, we don’t make heavy use of local clusters. I think there are two reasons. First, the small one: the jobs we run are different from most simulation tasks run by other physicists. Their research values high-bandwidth communication between CPUs (e.g. Lattice QCD calculations) and requires little memory per processor. Ours does not need the communication bandwidth but needs a huge amount of memory per processor (2 GB and growing).

The second reason is more important – we HEP folks get access to a large international GRID for “free”. This GRID is tailor-made for our needs – we drove much of its design, actually. We saw the need for this more than a decade ago, and have been working on getting it built and working smoothly ever since. While we still have a way to go towards smooth operation, it serves almost all of our needs well. And to a university group like ours at the University of Washington, cheaply. By virtue of being a member of the ATLAS or D0 collaboration, I get a security certificate that allows me to submit large batch jobs to the GRID. An example of the power: it took us weeks to simulate 40,000 events locally. When we submitted the job to the GRID we had 100,000 events back in less than a week.

Given all that, us HEP’rs would rather spend money on a modest-sized local analysis system – which is quite small compared to what the rest of the physics department needs – and so we don’t really participate in these large systems in our local department. I wonder if there is a hidden cost to that. Could we gain something by moving more of our processing back locally? Could you more easily convince the NSF to fund a physics compute cluster that was doing Lattice QCD, HEP simulation and analysis, and astro simulations? Or would they get pissed off because we weren’t using the large centers they are already funding instead? Has anyone tried a proposal like that before?