
In Praise of 7” October 23, 2012

Posted by gordonwatts in Uncategorized.
89 comments

I have lots of posts I’d like to write, but I have no time. I swear! Unless external events force my hand. In this case, I suppose I should be writing about the apparently crazy conviction of the geologists who failed to predict a deadly earthquake in Italy (a prediction that really isn’t possible), or the science policy of the USA’s presidential candidates (wish I had a nice link).

But in this case, I want to talk about tech. I’ve been using a small, 7” tablet for over a year now. My first was the B&N Nook Tablet, a gift about a year ago. At the time it was the best low-priced tablet on the market – beating the Fire easily on tech grounds (longer battery life, lighter, thinner, and it had an SD slot for expanded memory). This year, once everyone had announced their new tablets, I decided to upgrade to the Google Nexus 7.

My path to these, and how I use them today, is perhaps a little odd. It was completely motivated by the New Yorker and the Economist. I receive both magazines (thanks Dad, Uncle Steve!!) and love them. However, I can never keep up. When I went on long plane flights I would stash 10 issues or so of each in my bag and carry them across the Atlantic to CERN or wherever I was traveling. And often I would carry 9 issues back. You know how heavy & fat those are? Yes. 1st world problem.

The nook was fantastic in solving this. And if I was away for more than a week I could still get new issues. I soon installed a few other apps – like a PDF reader. Suddenly I was no longer printing out lecture notes for my class – I’d load them onto the nook and bring them with me that way. I could keep the full quarter of lecture notes with me at all times for when a student would ask me something! I try to keep up on blogs, and I managed to side-load gReader before B&N locked down the nook. I was soon putting comments in papers and talks that I had to review – very comfortable sitting on the couch with this thing!

As the new crop of tablets showed up I started looking for something that was faster. And perhaps with a more modern web browser. The main thing that drove this was viewing my class notes in the PDF viewer – sometimes a 5 second lag would interfere with my lecture when I was trying to look up something I’d written quickly. Amazon’s Fire HD and B&N’s new nooks were pretty disappointing, and so I went with the Nexus 7. The performance is great. But there was something else I’d not expected.

I know this is a duh for most people: the importance of a well-stocked app store. Wow. Now the Nexus 7 is very integrated into my workday. I use it constantly! My todo lists, some of my lab notebooks, reading and marking up papers and talks – all that is done on this thing now. B&N’s app store is ok, but nothing like the Google app store – pretty much whatever I want is there, and with the free $25 of app store credit that came with the purchase of the Nexus 7, I’ve not actually had to spend a cent… and having now owned this thing for about a month my app purchasing has pretty much dropped off to zero. Basically, all I take back and forth to work now are my reading glasses and the Nexus 7 (it was the same with the nook). I put Dropbox on it, and… well, it is all there.

I have a few complaints. About the hardware: the 16 GB version is barely enough – because I want to be able to load it up with TV/movies/DVDs for my long flights. For everything else (music included) the 16 GB is plenty. I think I can connect a USB key to the device, but it was very nice having all that extra space in the nook tablet with its SD slot. The battery life is worse than the nook – it will make it only through two days of heavy use. The nook tablet would last 3 or 4 (but I didn’t use it nearly as much, so this might not be a fair comparison). This guy has NFC, which, if I understand the tech right, should be so much better than Bluetooth – so I’m eager to try that out… when I get other devices that support it.

The rest of the complaints I have are due to software and thus can be easily fixed by updates. For example, Microsoft’s OneNote app for Android doesn’t display handwriting 🙂 – and many of my logbooks contain handwriting. Also the email app is really awful (seriously??) – though I should add it is serviceable for quick checks, triage, and reading. The only other mobile smart-device I own is a Windows Phone 7.5 – the design and how the interface flows on Android isn’t as nice or as integrated, but with Android 4.1 Google has done a great job. SkyDrive, which I like a lot better than Dropbox, is on Android, but it doesn’t support in-place editing (i.e. open a PDF file, annotate it, have it put back up to the cloud). With 7 GB free (25 because I was an early adopter), I’d drop Dropbox if SkyDrive supported this on Android.

If you are still reading, I’m sure you know what triggered this post: Apple’s rumored 7” tablet that will be announced tomorrow. If you are locked into the Apple eco-system, and your workload looks anything like mine, you should get it. Otherwise, go with the Nexus 7 (at $250).

My wife has an older iPad, and I’ve played around with other iPads – for whatever reason, people don’t seem to carry them around to meetings, etc., very often. And when I do see people using them, often they are not reading papers and the like – the iPad is propped up on its stand, playing a movie. Also, the 10” form factor makes it very difficult to hold the tablet in your hand and thumb type: you need very big hands. For this sort of task, the 7” is perfect.

That isn’t to say that the 10” form factor isn’t great in itself. Microsoft with its W8 release is going to have a bunch of these tablets – and I can’t wait to buy one. Of course, for those of you who know me, my requirements are going to be a little weird: it must have an active digitizer. This is what allows you to write with a pen (as on my Tablet PCs). Then I can finally get rid of the Tablet PC, which is a compromise, and I can carry something optimized for each task: the 7” for quick work and reading, the 10” for a lab notebook, and an ultra-portable for the real work. Wait. Am I going to carry three now? Arrgh! What am I doing!?!?

The Higgs. Whaaaa? July 6, 2012

Posted by gordonwatts in ATLAS, CMS, Higgs, LHC, physics, press.
9 comments

Ok. This post is for all my non-physics friends who have been asking me… What just happened? Why is everyone talking about this Higgs thing!?

It does what!?

Actually, two things. It gives fundamental particles mass. Not much help, eh? 🙂 Fundamental particles are, well, fundamental – the most basic things in nature. We are made out of arms & legs and a few other bits. Arms & legs and everything else are made out of cells. Cells are made out of molecules. Molecules are made out of atoms. Note we’ve not reached anything fundamental yet – we can keep peeling back the layers of the onion and peer inside. Inside the atom are electrons in a cloud around the nucleus. Yes! We’ve got a first fundamental particle: the electron! Everything we’ve done up to now says it stops with the electron. There is nothing inside it. It is a fundamental particle.

We aren’t done with the nucleus yet, however. Pop that open and you’ll find protons and neutrons. Not even those guys are fundamental, however – inside each of them you’ll find quarks – about 3 of them. Two “up” quarks and a “down” quark in the case of the proton and one “up” quark and two “down” quarks in the case of the neutron. Those quarks are fundamental particles.

The Higgs interacts with the electron and the quarks and gives them mass. You could say it “generates” the mass. I’m tempted to say that without the Higgs those fundamental particles wouldn’t have mass. So, there you have it. This is one of its roles. Without this Higgs, we would not understand at all how electrons and quarks have mass, and we wouldn’t understand how to correctly calculate the mass of an atom!

Now, any physicist who has made it this far is cringing with my last statement – as a quick reading of it implies that all the mass of an atom comes from the Higgs. It turns out that we know of several different ways that mass can be “generated” – and the Higgs is just one of them. It also happens to be the only one that, up until July 4th, we didn’t have any direct proof for. An atom, a proton, etc., has contributions from more than just the Higgs – indeed, most of a proton’s mass (and hence, an atom’s mass) comes from another mechanism. But this is a technical aside. And by reading this you know more than many reporters who are talking about the story!

The Higgs plays a second role. This is a little harder to explain, and I don’t see it discussed much in the press. And, to us physicists, this feels like the really important thing. “Electro-Weak Symmetry Breaking”. Oh yeah! It comes down to this: we want to tell a coherent, unified, story from the time of the big-bang to now. The thing about the big-bang is that it was *really* hot. So hot, in fact, that the rules of physics that we see directly around us don’t seem to apply. Everything was symmetric back then – it all looked the same. We have quarks and electrons now, which gives us matter – but then it was so hot that they didn’t really exist – rather, we think, some single type of particle existed. Then, as the universe cooled down from the big bang, making its way towards the present day, new particles froze out – perhaps the quarks froze out first, and then the electrons, etc. Let me see how far I can push this analogy… when water freezes, it does so into ice crystals. Say that an electron was one particular shape of ice crystal and a quark was a different shape. So you go from a liquid state where everything looks the same – heck – it is just water, to a solid state where the ice crystals have some set of shapes – and by their shape they become electrons or quarks.

Ok, big deal. It seems like the present day “froze” out of the Big Bang. Well, think about it. If our current particles evolved out of some previous state, then we had sure as hell better be able to describe that freezing process. Even better – we had better be able to describe that original liquid – the Big Bang. In fact, you could argue, and we definitely do, that the rules that governed physics at the big bang would have to evolve into the rules that describe our present day particles. They should be connected. Unified!! Ha! See how I slipped that word in up above!?

We know about four forces in the universe: the strong (holds a proton together), weak (radioactive decay is an example), electro-magnetism (cell phones, etc. are examples), and gravity. The Higgs is a key player in the unification of the weak force and the electro-magnetic force. Finding it means we actually have a bead on how nature unifies those two forces. That is HUGE! This is a big step along the way to putting all the forces back together. We still have a lot of work to do!

Another technical aside. 🙂 We think of the first role – giving fundamental particles mass – as a consequence of the second – they are not independent roles. The Higgs is key to the unification, and in order to be that key, it must also be the source of the fundamental particles’ mass.

How long have you been searching for it?

A loooooong time. We are like archeologists. Nature is what nature is. Our job is to figure out how nature works. We have a mathematical model (called the Standard Model). We change it every time we find an experimental result that doesn’t agree with the calculation. The last time that happened was when we stumbled upon the unexpected fact that neutrinos have mass. The time before that was the addition of the Higgs, and that modification was first proposed in 1964 (it took a few years to become generally accepted). So, I suppose you could say in some sense we’ve been looking for it since 1964!

It wasn’t until recently, however (say the late 90’s), that the machines we use became powerful enough that we could honestly say we were “in the hunt for the Higgs.” The LHC, actually, had finding the Higgs as one of its major physics goals. There was no guarantee – no reason nature had to work like that – so when we built it we were all a little nervous and excited… ok, a lot nervous and excited.

So, why did it take so long!? The main reason is we hardly ever make it in our accelerators! It is very, very massive!! So it is very hard to make. Even at the LHC we make one only every 3 hours or so. The LHC works by colliding protons together at a very high speed (almost the speed of light). We do that more than 1,000,000 times a second… and we make a Higgs only once every 3 hours. Run the numbers and that is something like one Higgs for every ten billion collisions. The very definition of “needle in a haystack!”

Who made this discovery?

Two very large teams of physicists, and a whole bunch of people running the LHC accelerator at CERN. The two teams are the two experiments: ATLAS and CMS. I and my colleagues at UW are on ATLAS. If you hear someone say “I discovered the Higgs” they are using the royal “I”. This is big science. Heck – the detector is half an (American) football field long, and about 8 or 9 stories tall and wide. This is the sort of work that is done by lots of people and countries working together. ATLAS currently has people from 38 countries – the USA being one of them.

What does a Cocktail Party have to do with it?

The cocktail party analogy is the answer to why some fundamental particles are more massive than other particles (sadly, not why I have to keep letting my belt out year-after-year).

This is a cartoon of a cocktail party. Someone very famous has just entered the room. Note how everyone has clumped around them! If they are trying to get to the other side of the room, they are just not going to get there very fast!!

Now, let’s say I enter the room. I don’t know that many people, so while some friends will come up and talk to me, it will be nothing like the crowd around that famous person. So I will be able to get across the room very quickly.

The fact that I can move quickly because I interact with few people means I have little mass. The famous person has lots of interactions and can’t move quickly – and in this analogy they have lots of mass.

Ok. Bringing it back to the Higgs. The party and the people – that is the Higgs field. How much a particle interacts with the Higgs field determines its mass. The more it interacts, the more mass is “generated.”

And that is the analogy. You’ve been reading a long time. Isn’t this making you thirsty? Go get a drink!

Really, is this that big a deal?

Yes. This is a huge piece of the puzzle. This work is definitely worth a Nobel prize – look for them to award one to the people that first proposed it in 1964 (there are 6 of them, one has passed away – no idea how the committee will sort out the max of 3 they can give it to). We have confirmed a major piece of how nature works. In fact, this was the one particle that the Standard Model predicted that we hadn’t found. We’d gotten all the rest! We now have a complete picture of the Standard Model, and it is time to start work on extending it. For example, dark matter and dark energy are not yet in the Standard Model. We have not yet figured out how to fully unify everything we know about.

No. The economy won’t see an up-tick or a down-tick because of this. This is pure research – we do it to understand how nature and the universe around us work. There are sometimes, by luck, spin-offs. And there are people who work with us who take it on as one of their tasks to find spin-offs. But that isn’t the reason we do this.

What is next?

Ok. You had to ask that. So… First, we are sure we have found a new boson, but the real world – and real data – is a bit messy. We have looked for it, and expect it to appear, in several different places. It appeared in most of them – in one place it seems to be playing hide and seek (where the Higgs decays to taus – a tau is very much like a heavy electron). Now, only one of the two experiments has presented results in the taus (CMS), so we have to wait for my experiment, ATLAS, to present its results before we get worried.

Second, and this is what we’d be doing no matter what happened to the tau’s, is… HEY! We have a shiny new particle! We are going to spend some years looking at it from every single angle possible, taking it out for a test drive, you know – kicking the tires. There is actually a scientific point to doing that – there are other possible theories out there that predict the existence of a Higgs that looks exactly like the Standard Model Higgs except for some subtle differences. So we will be looking at this new Higgs every-which way to see if we can see any of those subtle differences.

ATLAS and CMS also do a huge amount of other types of physics – none of which we are talking about right now – and we will continue working on those as well.

Why do you call it the God Particle!?

We don’t. (especially check out the Pulp Fiction mash-up picture).

What will you all discover next?

I’ll get back to you on that…

Whew. I’m spent!

We only let students do posters June 5, 2012

Posted by gordonwatts in Uncategorized.
6 comments

I’m here at the PLHC conference in Vancouver, Canada (fantastic city, if you’ve not visited). I did a poster for the conference on some work I’ve done on combining the ATLAS b-tagging calibrations (the way their indico site is set up I have no idea how to link to the poster). I was sitting in the main meeting room, the large poster tube next to my seat, when this friend of mine walks by:

“Hey, brought one of your student’s posters?”

“Nope, did my own!”

“Wow. Really? We only let students do posters. I guess you’ve really fallen in the pecking order!”

Wow. It took me a little while to realize what got me upset about the exchange. So, first, it did hit a nerve. Those that know me know that I’ve been frustrated with the way the ATLAS experiment assigns talks – but this year they gave me a good talk. Friends of mine who I think are deserving are also getting more talks now. So this is no longer really an issue. But comments like this still hit that nerve – you know, that general feeling of inadequacy that is left over from a traumatic high school experience or two. 🙂

But more to the point… are posters really such second class citizens? And if they are, should they remain as such?

I have always liked posters, and I have given many of them over my life. I like them because you end up in a detailed conversation with a number of people on the topic – something that almost never happens at a talk like the PLHC. In fact, my favorite thing to do is give a talk and a poster on the same topic. The talk then becomes an advertisement for the poster – a time when people that are very interested in my talk can come and talk in detail next to a large poster that lays out the details of the topic.

But more generally, my view of conferences has evolved over the past 5 years. I’ve been to many large conferences. Typically you get a set of plenary sessions with > 100 people in the audience, and then a string of parallel sessions. Each parallel talk is about 15-20 minutes long, and depending on the topic there can be quite a few people in the room. Only a few minutes are left for questions. The ICHEP series is a conference that symbolizes this.

Personally, I learn very little from this style of conference. Many of the topics and the analyses are quite complex. Too complex to really give an idea of the details in 15 or 20 slides. I personally am very interested in analysis details – not just the result. And getting to that level of detail requires – for me, at least – some back and forth. Especially if the topic is new I don’t even know what questions to ask! In short, these large conferences are fun, but I only get so much out of the talks. I learn much more from talking with the other attendees. And going to the poster sessions.

About 5 years ago I started getting invites to small workshops. These are usually about a week long, have about 20 to 40 people, and pick a specific topic. Dark Matter and Collider Physics. The Higgs. Something like that. There will be a few talks in the morning and maybe in the afternoon. Every talk that is given has at least the same amount of time set aside for discussion. Many times the workshop has some specific goals – better understanding of this particular theory systematic, or how to interpret the new results from the LHC, or how the experiments can get their results out in a more useful form for the theorists. In the afternoons the group splits into working groups – where no level of detail is off-limits. I’ve been lucky enough to be invited to ones at UC Davis, Oregon, Maryland, and my own UW has been arranging a pretty nice series of them (see this recent workshop for links to previous ones). I can’t tell you how much I learn from these!

To me, posters are mini-versions of these workshops. You get 5 or 6 people standing around a poster discussing the details. A real transfer of knowledge. Here, at PLHC, there are 4 posters from ATLAS on b-tagging. We’ve all put them together in the poster room. If you walk by that end of the room you are trapped and surrounded by many of the experts – the people that actually did the work – and you can get almost any ATLAS b-tagging question answered. In a way that really isn’t, as far as I know, possible in many other public forums. PLHC is also doing some pretty cool stuff with posters. They have a jury that walks around and decides what poster is “best” and gives it an award. One thing the poster writer gets to do: give a talk at the plenary session. I recently attended CHEP – they did the same thing there. I’ve been told that CMS does something like this during their collaboration meetings too.

It is clear that conference organizers the world round are looking for more ways to get people attending the conference more involved in the posters that are being presented.

The attitude of my friend, however, is a fact of this field. Heck, even I have it. One of the things I look at in someone’s CV is how many talks they have given. I don’t look carefully at the posters they have listed. In general, this is a measure of what your peers think of you – have you done enough work in the collaboration to be given a nice talk? So this will remain with us. And those large conferences like ICHEP – nothing brings together more of our field all in one place than something like ICHEP. So they definitely still play a role.

Still, the crass attitude “We only let students do posters” needs to end. And I think we still have more work to do getting the details of our analyses and physics out to other members of our field, theorists and experimentalists.

CHEP Trends: Multi-Threading May 24, 2012

Posted by gordonwatts in Analysis, CHEP, computers.
6 comments

I find the topic of multi-threading fascinating. Moore’s law means that we now are heading to a multi-core world rather than just faster processors. But we’ve written all of our code as single threaded. So what do we do?

Before CHEP I was convinced that we needed an aggressive program to learn multithreaded programming techniques and to figure out how to re-implement many of our physics algorithms in that style. Now I’m not so sure – I don’t think we need to be nearly as aggressive.

Up to now we’ve solved things by just running multiple jobs – about one per core. That has worked out very well, and scaling is very close to linear. Great! We’re done! Let’s go home!

There are a number of efforts going on right now to convert algorithms to be multi-threaded – rather than just running jobs in parallel. For example, re-implementing a track finding algorithm to run several threads of execution. This is hard work, takes a long time, and “costs” a lot in terms of people’s time. Does it go faster? In the end, no. Or at least, not much faster than the parallel jobs! Certainly not enough to justify the effort, IMHO.

This was one takeaway from the conference this time that I’d not really appreciated previously. And it is actually a huge relief, because trying to make the reconstruction completely multi-threaded so that it efficiently uses all the cores in the machine is almost impossible.

But, wait. Hold your horses! Sadly, it doesn’t sound like it is quite that simple, at least in the long run. The problem is first the bandwidth between the CPU and the memory and second the cost of the memory. The second one is easy to talk about: each running instance of reconstruction needs something like 2 GB of memory. If you have 32 cores in one box, then that box needs 64 GB of main memory – or more including room for the OS.

The CPU I/O bandwidth is a bit tricky. The CPU has to access the event data to process it. Internally it does this by first asking its cache for the data and, if the data hasn’t been cached, then going out to main memory to get it. The cache lookup is a very fast operation – perhaps a clock cycle or so. Accessing main memory is very slow, however, often taking many tens of cycles or more. In short, the CPU stalls while waiting. And if there isn’t other work to do, then the CPU really does sit idle, wasting time.

Normally, to get around this, you just make sure that the CPU is trying to do a number of different things at once. When the CPU can’t make progress on one instruction, it can do its best to make progress on another. But here is the problem: if it is trying to do too many different things, then it will be grabbing a lot of data from main memory. And the cache is of only finite size – so eventually it will fill up, and every memory request will displace something already in the cache. In short, the cache becomes useless and the CPU will grind to a halt.

The way around this is to try to make as many cores as possible work on the same data. So, for example, if you can make your tracking multithreaded, then the multiple threads will be working on the same set of tracking hits. Thus you have data for one event in memory being worked on by, say, 4 threads. In the other case, you have 4 separate jobs, all doing tracking on 4 different sets of tracking hits – which puts a much heavier load on the cache.
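To make the shared-data idea concrete, here is a minimal sketch – not ATLAS code; the Hit structure and the per-hit work are invented for illustration – of several threads chewing on one event’s hit collection at the same time, so they all pull the same data through the cache:

    // A toy example of threads sharing one event's data (compile with -std=c++11 -pthread).
    #include <cmath>
    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Hit { double x, y, z; };   // hypothetical tracking hit

    // Each thread works on a slice of the *same* hit collection, standing in
    // for a multi-threaded track finder operating on a single event.
    void processSlice(const std::vector<Hit>& hits, size_t begin, size_t end,
                      double& result) {
      double sum = 0;
      for (size_t i = begin; i < end; ++i)
        sum += std::sqrt(hits[i].x * hits[i].x + hits[i].y * hits[i].y);
      result = sum;                   // stand-in for the real per-hit work
    }

    int main() {
      std::vector<Hit> hits(100000, Hit{1.0, 2.0, 3.0});   // one event's worth of hits

      const unsigned nThreads = 4;
      std::vector<double> partial(nThreads, 0.0);
      std::vector<std::thread> workers;

      const size_t chunk = hits.size() / nThreads;
      for (unsigned t = 0; t < nThreads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t == nThreads - 1) ? hits.size() : begin + chunk;
        workers.emplace_back(processSlice, std::cref(hits), begin, end,
                             std::ref(partial[t]));
      }
      for (auto& w : workers) w.join();

      double total = 0;
      for (double p : partial) total += p;
      std::printf("processed %zu hits, result %f\n", hits.size(), total);
      return 0;
    }

The alternative – four separate single-threaded jobs – would have four independent hit collections competing for the same cache.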

In retrospect the model in my head was all one or the other. You either ran a job for every core and did it single threaded, or you made one job use all the resources on your machine. Obviously, what we will move towards is a hybrid model. We will multi-thread those algorithms we can easily, and otherwise run a large number of jobs at once.

The key will be testing – to make sure something like this actually works faster. And you can imagine even altering the scheduler in the OS to help you (yikes!). Up to now we’ve not hit the memory-bandwidth limit. I think I saw a talk several years ago that said that for a CMS reconstruction executable the limit occurred somewhere around 16 or so cores per CPU. So we still have a ways to go.

So, we’re relaxed here in HEP. How about the real world? There I see alarm bells going off – everyone is pushing multi-threading hard. Are we really different? I think the answer is yes: there is one fundamental difference between them and us. We have a simple way to take advantage of multiple cores: run multiple jobs. In the real world many problems can’t do that – so they are not getting the benefit of the increasing number of cores unless they specifically do something about it. Now.

So, to conclude, some work moving forward on multithreaded re-implementation of algorithms is a good idea. As far as solving the above problem goes, it is less useful to make the jet finding and track finding run at the same time, and more important to make the jet finding algorithm itself and the track finding algorithm itself multithreaded.

CHEP Trends: Libraries May 24, 2012

Posted by gordonwatts in Analysis, computers.
add a comment

I’m attending CHEP – Computers in High Energy Physics – which is being hosted by New York University this year, in New York City. A lot of fun – most of my family is on the east coast so it is cool to hang out with my sister and her family.

CHEP has been one of my favorite conference series. For a while I soured on it as the GRID hijacked it. Everything else – algorithms, virtualization, etc. – is making a comeback now, which makes the conference much more balanced and more interesting, IMHO.

There were a few striking themes (no, one of them wasn’t me being a smart-a** – that has always been true). Rene Brun, one of the inventors of ROOT, gave a talk about the history of data analysis. Check out this slide:

[Slide from Rene Brun’s talk: analysis libraries coalescing over time into a few large projects like ROOT and GEANT]

A little while later Jeff Hammerbacher from Cloudera gave a talk (Cloudera bases its cloud computing business on Hadoop). Check out this slide:

[Slide from Jeff Hammerbacher’s talk: the Cloudera platform assembled from many small libraries]

These two slides show, I think, two very different approaches to software architecture. In Rene’s slide, note that all the libraries are coalescing into a small number of projects (i.e. ROOT and GEANT). As anyone who has used ROOT knows, it is a bit of a kitchen sink. The Cloudera platform, on the other hand, is a project built of many small libraries mashed together. Some of them are written in-house, others are written by other groups. All open source (as far as I could understand from the talk). This is the current development paradigm in the open source world: make lots of libraries that end programmers can put together like Lego blocks.

This trend in the web world is, I think, the result of at least two forces at play: the rapid release cycle and the agile programming approach. Both mean that you want to develop small bits of functionality in isolation, if possible, which can then be rapidly integrated into the end project. As a result, development can proceed apace on both projects, independently. However, a powerful side-effect is that it also enables someone from the outside to come along and quickly build up a new system with a few unique aspects – in short, innovate.

I’ve used the fruits of this in some of my projects: it is trivial to download and load a library into one of my projects, and with almost no work I’ve got a major building block. HTML parsers and combinator parsers are two that I’ve used recently; they meant I could ignore some major bits of plumbing but still get a very robust solution.

Will software development in particle physics ever adopt this strategy? Should it? I’m still figuring that out.

The Way You Look at the World Will Change… Soon December 2, 2011

Posted by gordonwatts in ATLAS, CERN, CMS, Higgs, physics.
7 comments

We are coming up on one of those “lucky to be alive to see this” moments. Sometime in the next year we will all know, one way or the other, whether the Standard Model Higgs exists or not. Either way, how we think about fundamental physics will change. I can’t overstate the importance of this. And the first strike along this path will occur on December 13th.

If it does not exist, that will force us to tear down and rebuild – in some totally unknown way – our model of physics. Our model that we’ve had for 40+ years now. Imagine that – 40 years and now that it finally meets data… poof! Gone. Or, we will find the Higgs, and we’ll have a mass. Knowing the mass will be in itself interesting, and finding the Higgs won’t change the fact that we still need something more than the Standard Model to complete our description of the universe. But now every single beyond-the-standard-model theory will have to incorporate not only electrons, muons, quarks, W’s, Z’s, photons, and gluons – at their measured masses – but a Higgs too, with the mass we measure!

So, how do I know this is going to happen? Look at this plot that was released during the recent HCP conference (deepzoom version 🙂) in Paris.

Ok, this takes a second to explain. First, when we look for the Higgs we do it as a function of its mass – the theory does not predict exactly how massive it will be. Second, the y-axis is the rate at which the Higgs is produced. When we look for it at a certain mass we make a statement like “if the Higgs exists at a mass of 200 GeV/c2, then it must be being produced at a rate less than 0.6 or we would have seen it.” I read the 0.6 off the plot by looking at the placement of the solid black line with the square points – the observed upper limit. The rate, the y-axis, is in funny units. Basically, the red line is the rate you’d expect if it was a standard model Higgs. The solid black line with the square points on it is the combined LHC exclusion line. Combined means ATLAS + CMS results. So, anywhere the solid black line dips below the red horizontal line means that we are fairly confident that the Standard Model Higgs doesn’t exist at that mass (BTW – even “fairly confident” has a very specific meaning here: we are 95% confident). The hatched areas are the areas where the Higgs has already been ruled out. Note the hatched areas at low mass (100 GeV or so) – those are from other experiments like LEP.

Now that that is done, a fair question is where we would expect to find the Higgs. As it turns out, a Standard Model Higgs will most likely occur at low masses – exactly that region between 114 GeV/c2 and 140 GeV/c2. There isn’t a lot of room left for the Higgs to hide there!! These plots are with 2 fb-1 of data. Both experiments now have about 5 fb-1 of data recorded. And everyone wants to know exactly what they see. Heck, while in each experiment we basically know what we see, we desperately want to know what the other experiment sees. The first unveiling will occur at a joint seminar at 2pm on December 13th. I really hope it will be streamed on the web, as I’ll be up in Whistler for my winter ski vacation!

So what should you look for during that seminar (or in the talks that will be uploaded when the seminar is given)? The above plot will be a quick summary of the status of the experiments. Each experiment will have an individual one. The key thing to look for is where the dashed line and the solid line deviate significantly. The solid line I’ve already explained – that says that for a Higgs of a particular mass, if it is there, it must be at a rate less than what is shown. Now, the dashed line is what we expect – given everything was right and the Higgs didn’t exist at that mass – that is how good we expect to be. So, for example, right around the 280 GeV/c2 level we expect to be able to see a rate of about 0.6, and that is almost exactly what we measure. Now look down around 120-130 GeV/c2. There you’ll notice that the observed (solid) line is well above the expected (dashed) line. How much – well, it is just along the edge of the yellow band – which means 2 sigma. 2 sigma isn’t very much – so this plot has nothing to get very excited about yet. But if one of the plots shown over the next year has a more significant excursion, and you see it in both experiments… then you have my permission to get a little excited. The real test will be if we can get to a 5 sigma excursion.

This seminar is the first step in this final chapter of the old realm of particle physics. We are about to start a new chapter. I, for one, can’t wait!

N.B. I’m totally glossing over the fact that if we do find something in the next year that looks like a Higgs, it will take us some time to make sure it is a Standard Model Higgs, rather than some other type of Higgs! 2nd order effect, as they say. Also, in that last long paragraph, the sigmas I’m talking about on the plot and the 5 sigma discovery aren’t the same – so I glossed over some real details there too (and this latter one is a detail I sometimes forget, much to my embarrassment at a meeting the other day!).

Update: Matt Strassler posted a great post detailing the ifs/ands/ors behind seeing or not seeing – basically a giant flow-chart. Check it out!

So long, and thanks for all the protons! September 29, 2011

Posted by gordonwatts in D0, Fermilab, physics life.
add a comment

And there were a lot of protons!

This is a picture of the Cockcroft-Walton at Fermilab’s Tevatron. This is where it all starts.

[Photo: the Cockcroft-Walton at Fermilab]

It isn’t that much of an exaggeration to say that my career started here. You are looking through a wire cage at one half of the Cockcroft-Walton – the generator creates a very, very, very large electric field that ionizes hydrogen gas (two protons and two electrons) by ripping one of the protons off. The gas, now charged, can be accelerated by an electric field. This is how protons start in the Tevatron.

And that is how most of the experimental data that I used for my Ph.D. research, post-doc research, and tenure research started. Basically, my career from graduate student to tenure is based on data from the Tevatron. The Tevatron delivers its last beam this Friday, at 2pm Central time (the 30th).

I’ll miss working at Fermilab. I’ll miss working at DZERO (the most recent Fermilab experiment I’ve been on). I’ll also miss the character of the experiments – CDF and DZERO now seem like such small experiments. Only 500 authors. I feel like I know everyone. It is a community in a way that I’ve not felt at the LHC yet. And I’ll miss directly owning a bit of the experiment – something I joined the LHC too late to do. But most of all I’ll miss the people. True – many of them have made the transition to the LHC – but not all of them. For reasons of travel, or perhaps retirement, I’ll probably see these people a lot less over the next 10 years. And that is too bad.

I’ll remain connected with DZERO for some time to come. I’m helping out with doing some paper reviews and I’m helping out with data preservation – making sure the DZERO data can be accessed long after the experiment has ceased running.

Tevatron. It has been a fantastic run. You have made my career. And I’ve had a wonderful time with the science opportunities you’ve provided.

So long, and thanks for all the (anti-)protons.

The Square Wheel September 19, 2011

Posted by gordonwatts in Analysis, computers, LINQToTTree, ROOT.
1 comment so far

Another geek post, I’m afraid. Last week I posted about some general difficulties I was having with doing analysis at the LHC. I actually got a fair amount of response – but all of it was people talking to me here at CERN rather than comments on the blog. So to summarize before moving on…

The biggest thing I got back was that as the corrections become well known, they get automated – so there is no need for the multi-step process I outlined before: running on MC and data, deriving a correction, and then running a third time to do the actual work, taking the correction into account. Rather, the ROOT files are centrally produced and the correction is applied there by the group. So the individual doesn’t have to worry. Sweet! That definitely improves life! However, the problem remains when you are trying to derive a new correction.

I made three attempts before finally finding an analysis framework that worked (well, four if you count the traditional approach of C++, python, bash, and duct tape!). As you can tell – what I wanted was something that would correctly glue several phases of the analysis together. The example from last time:

  1. Correct the jet pT spectra in Monte Carlo (MC) to data
    1. Run on the full dataset and get the jetPt spectra.
    2. Do the same for MC
    3. Divide the two to get the ratio/correction.
  2. Run over the data and reweight my plot of jet variables by the above correction.

There are basically 4 steps in this: run on the data, run on the MC, divide the results, run on the data again. Ding! This looks like workflow! My first two attempts were based around this idea.

Workflow has a long tradition in particle physics. Many of our computing tasks require multiple steps and careful accounting every step of the way. We have lots of workflow systems that allow you to assemble a task from smaller tasks and keep careful track of everything that you do along the way. Indeed, all of our data processing and MC generation has been controlled by home-rolled workflow systems at ATLAS and DZERO. I would assume at every other experiment as well – it is the only way.

This approach appealed to me: I can build all the steps out of small tasks. One task that runs on data and one that runs on MC. And then add the “plot the jet pT” sub-task to each of those two, take the outputs, have a small generic task that would calculate the ratio, and then another task that would weight the events and finally make the plots. Easy peasy!

So, first I tried Trident, something that came out of Microsoft Research. An open source system, it was designed for scientists with large datasets that required frequent processing (NOAA related, I think). It had an attractive UI, arbitrary data could be passed between the tasks, and the code interface for writing the tasks was pretty simple.

[Screenshot: a workflow laid out in Trident]

I managed to get some small things working with it – but there were two big things that caused it to fail. First, the way you pass around data was painful. I wanted to pass around a list of files to run on – and then from that I needed to pass around histograms. I wanted fine-grained tasks that would manipulate histograms (dividing the plots) while at the same time other tasks would be manipulating whole files (making the plots). Ugh! It was a lot of work just to do something simple! The second thing that killed it was that this particular tool – at the time – didn’t have sub-workflows. You couldn’t build a workflow and then use it inside other workflows. It was my fault that I missed that fact when I was choosing the tool.

So, I moved on to a second attempt. Since my biggest problem had been hooking everything up, I decided to write my own. Instead of a GUI interface, I had an XML interface. And I did what is known as “coding by convention.” The idea is that I’d set a number of defaults into the design so that it “just worked” as long as the individual components obeyed the conventions. Since this was my own private framework there was no worry that this wouldn’t happen. The framework knew how to automatically combine similar histograms, for example, or if it was presented with multiple input datasets it knew how to combine those as well – something that would have required another step in the Trident solution.

This solution went much better – I was able to do more than just my demo – I tried moving beyond the reweighting example above and tried to do something more complex. And here is where, I think, I hit on the real reason that workflow doesn’t work for analysis (or at least for me): you are having to switch between various environments too often. The framework was written in XML. If I wanted a new task, then I had to write C++, or C# (depending). Then there was the code that ran the framework – I’d have to upgrade that periodically.

Really, all I wanted to do was make a stupid plot on two datasets, divide it, and then make a third plot using the first as a weight. Why did I need different languages and files to do that – why couldn’t I write that in a few lines??

Those of you who are active in this biz, of course, know the answer: two different environments. One set of code deals with looping over, possibly, terabytes of data. That is the loop that makes the plot. Then you need some procedural code to do the histogram division. When that is done, you need another loop of code to do the final reweighting and plots. Take a step back. That is a lot of support code that I have to write! Loading up the MC and data files, running the loop over them, saving the resulting histogram. The number of lines I actually need to create the plot and put the data into the plot? Probably about 2 or 3. The number of lines I need to actually run that job start to finish and make that plot? Closer to 150 or so, and in several files, some compiled and some interpreted. Too much ceremony for that one or two lines of code: 150 lines of boilerplate for 3 or so lines of the physics-interesting code.
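Just to make the ratio vivid, here is a toy version of such a job as a ROOT macro. The file, tree, and branch names are invented, and a real ATLAS job has far more plumbing than this – but even here the physics is the two lines that book and fill the histogram; everything else is ceremony:

    // makeJetPtPlot.C – a deliberately minimal sketch, not real analysis code.
    #include "TFile.h"
    #include "TTree.h"
    #include "TH1F.h"

    void makeJetPtPlot() {
      // --- ceremony: open the input and wire up the branches ---
      TFile* input = TFile::Open("data.root");        // hypothetical input file
      TTree* tree = (TTree*)input->Get("events");     // hypothetical tree name
      float jetPt = 0;
      tree->SetBranchAddress("jetPt", &jetPt);

      // --- the physics-interesting lines: book the plot and fill it ---
      TH1F* hJetPt = new TH1F("hJetPt", ";Jet p_{T} [GeV];Jets", 100, 0, 500);
      for (Long64_t i = 0; i < tree->GetEntries(); ++i) {
        tree->GetEntry(i);
        hJetPt->Fill(jetPt);
      }

      // --- more ceremony: save the result ---
      TFile output("jetPt.root", "RECREATE");
      hJetPt->Write();
    }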

So, I needed something better. More on that next week.

BTW, the best visual analysis workflow tool I’ve seen (but not used) is something called VISPA. Had I known about it when I started the above project I would have tried it first – it is cross-platform, has a batch manager integrated in, etc. (a fairly extensive list). In retrospect it looks like it could support most of what I need to do. I say this only having done a quick scan of its documentation pages. I suspect I would have run into the same problem: having to move between different environments to code up something “simple”.

Reinventing the wheel September 10, 2011

Posted by gordonwatts in Analysis, computers, LINQToTTree, ROOT.
add a comment

Last October (2010) my term running the ATLAS flavor-tagging group came to an end. It was time to get back to being a plot-making member of ATLAS. I don’t know how most people feel when they run a large group like this, but I start to feel separated from actually doing physics. You know a lot more about the physics, and your input affects a lot of people, but you are actually doing very little yourself.

But I had a problem. By the time I stepped down, in order to even show a plot in ATLAS you had to apply multiple corrections: the z distribution of the vertex was incorrect, the transverse momentum spectrum of the jets in the Monte Carlo didn’t match, etc. Each of these corrections had to first be derived, and then applied, before someone would believe your plot.

To make your one really great plot, then, let’s look at what you have to do:

  1. Run over the data to get the distributions of each thing you will be reweighting (jet pT, vertex z position, etc.).
  2. Run over the Monte Carlo samples to get the same thing
  3. Calculate the reweighting factors
  4. Apply the reweighting factors
  5. Make the plot you’d like to make.

If you are lucky then the various items you need to reweight are not correlated – so you can just run the one job on the Data and the one job on the Monte Carlo in steps one and two. Otherwise you’ll have to run multiple times. These jobs are either batch jobs that run on the GRID, or a local ROOT job you run on PROOF or something similar. The results of these jobs are typically small ROOT files.

In step three you have to author a small script that will extract the results from the two jobs in steps 1 and 2, and create the reweighting function. This is often no more difficult than dividing one histogram by another. One can do this at the start of the plotting job (the job you create for steps 4 and 5) or do it at the command line and save the result in another ROOT file that serves as one of the inputs to the next step.

Steps 4 and 5 can normally be combined into one job. Take the result of step 3, apply it as a weight to each event, and then fill the plot of whatever your variable of interest is using that weight. Save the result to another ROOT file and you are done!!
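To make steps 3 through 5 concrete, here is a minimal ROOT sketch of the derive-and-apply part, assuming the jobs in steps 1 and 2 have already produced jet pT histograms. All the file, histogram, and branch names are invented for illustration – this is not the actual ATLAS machinery:

    // reweightSketch.C – a toy version of steps 3-5, with made-up names.
    #include "TFile.h"
    #include "TH1F.h"
    #include "TTree.h"

    void reweightSketch() {
      // Step 3: build the reweighting function by dividing data by MC.
      TFile* fData = TFile::Open("data_jetPt.root");   // outputs of steps 1 and 2
      TFile* fMC   = TFile::Open("mc_jetPt.root");
      TH1F* hData = (TH1F*)fData->Get("jetPt");
      TH1F* hMC   = (TH1F*)fMC->Get("jetPt");
      hData->Scale(1.0 / hData->Integral());           // compare shapes, not rates
      hMC->Scale(1.0 / hMC->Integral());
      TH1F* hWeight = (TH1F*)hData->Clone("jetPtWeight");
      hWeight->Divide(hMC);                            // weight = data / MC, bin by bin

      // Steps 4 and 5: loop over the MC again, weight each event, make the plot.
      TFile* fNtuple = TFile::Open("mc_ntuple.root");
      TTree* tree = (TTree*)fNtuple->Get("events");
      float jetPt = 0, myVariable = 0;
      tree->SetBranchAddress("jetPt", &jetPt);
      tree->SetBranchAddress("myVariable", &myVariable);

      TH1F* hFinal = new TH1F("hFinal", ";my variable;Weighted events", 50, 0, 1);
      for (Long64_t i = 0; i < tree->GetEntries(); ++i) {
        tree->GetEntry(i);
        double w = hWeight->GetBinContent(hWeight->FindBin(jetPt));
        hFinal->Fill(myVariable, w);
      }

      TFile out("result.root", "RECREATE");            // the final ROOT file
      hFinal->Write();
      hWeight->Write();
    }

Even this toy shows the shape of the problem: the physics content is the Divide and the weighted Fill, and the rest is glue that has to be re-run every time new data arrives.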

Whew!

I don’t know about you, but this looked scary to me. I had several big issues with it. First, the LHC has been running gang-busters. This means having to constantly re-run all these steps. I’d better not be doing it by hand, especially as things get more complex, because I’m going to forget a step, or accidentally reuse an old result. Next, I was going back to teaching a pretty difficult course – which means I was going to be distracted. So whatever I did was going to have to be able to survive me not looking at it for a week and then coming back to it… and me still being able to understand what I did! Mostly, the way I normally approach something like the above was going to lead to a mess of scripts and programs, etc., all floating around.

It took me three tries to come up with something that seems to work. It has some difficulties, and isn’t perfect in a number of respects, but it feels a lot better than what I’ve had to do in the past. Next post I’ll talk about my two failed attempts (it will be a week, but I promise it will be there!). After that I’ll discuss the Christmas project that led to what I’m using this year.

I’m curious – what do others do to solve this? Mess of scripts and programs? Some sort of work flow? Makefiles?? What?? What I’ve outlined above doesn’t seem scalable!

Source Code In ATLAS June 11, 2011

Posted by gordonwatts in ATLAS, computers.
3 comments

I got asked in a comment what, really, was the size in lines of the source code that ATLAS uses. I have an imperfect answer. About 7 million total. This excludes comments in the code and blank lines in the code.

The breakdown is a bit under 4 million lines of C++ and almost 1.5 million lines of python – the two major programming languages used by ATLAS. Additionally, in those same C++ source files there are about another million blank lines and almost a million lines of comments. The python contains similar fractions.
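For what it is worth, the counting rule is simple to state: a line counts only if it isn’t blank and isn’t a comment. A rough sketch of that kind of counter for C++ files – not the tool actually used for the numbers above, and one that handles block comments only crudely – looks like this:

    // countLines.cpp – a crude non-blank, non-comment line counter (illustrative only).
    #include <fstream>
    #include <iostream>
    #include <string>

    int countLines(const std::string& path) {
      std::ifstream in(path.c_str());
      std::string line;
      int count = 0;
      bool inBlockComment = false;
      while (std::getline(in, line)) {
        size_t first = line.find_first_not_of(" \t");
        if (first == std::string::npos) continue;            // blank line
        std::string t = line.substr(first);
        if (inBlockComment) {                                 // inside /* ... */
          if (t.find("*/") != std::string::npos) inBlockComment = false;
          continue;
        }
        if (t.compare(0, 2, "//") == 0) continue;             // line comment
        if (t.compare(0, 2, "/*") == 0) {                     // block comment start
          if (t.find("*/") == std::string::npos) inBlockComment = true;
          continue;
        }
        ++count;                                              // a real line of code
      }
      return count;
    }

    int main(int argc, char* argv[]) {
      int total = 0;
      for (int i = 1; i < argc; ++i) total += countLines(argv[i]);
      std::cout << "Lines of code: " << total << std::endl;
      return 0;
    }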

There are 7 lines of LISP. Which was probably an accidental check-in. Once the build runs, the number of lines of source code balloons by almost a factor of 10 – but that is all generated code (and HTML documentation, actually) – so it shouldn’t count in the official numbers.

This is imperfect because these are just the files that are built for the reconstruction program. This is the main program that takes the raw detector signals and converts them into high-level objects (electrons, muons, jets, etc.). There is another large body of code – the physics analysis code. That is the code that takes those high-level objects and converts them into actual interesting measurements – like a cross section, or a top quark mass, or a limit on your favorite SUSY model. That is not always in a source code repository, and is almost impossible to get an accounting of – but I would guess that it is about another x10 or so in size, based on experience in previous experiments.

So, umm… wow. That is big. But it isn’t quite as big as I thought! I mentioned in the last post talking about source control that I was worried about the size of the source and checking it out. However, Linux is apparently about 13.5 million lines of code, and uses one of these modern source control systems. So, I guess these things are up to the job…