
Food And Physics May 24, 2014

Posted by gordonwatts in Energy, ITER.
add a comment

I’ve been lucky enough to combine two of my favorite things recently: food and talking about physics. And by talking I mean getting outside my comfort zone and talking to non-physicists about what I do. After all, I love what I do. And I’m a bit of a ham…

I’ve done two Science Cafés. If you don’t know what they are, I definitely suggest you look up a local schedule. They are fantastic, and they happen all across the USA. There I’ve talked about particle physics and the Higgs.

But last night I went way out of my comfort zone and joined two other UW physicists to talk about ITER. Anna Goussiou, who does the same sort of physics I do, Jerry Seidler, and I all traveled to Shanik.

It all started when the owner of Shanik, Meeru, got very excited reading an article in the March 3rd New Yorker called Star in a Bottle (it is available online). It describes the history of ITER and its quest for cheap, clean energy. This nicely dovetailed with two of Meeru’s (and many other people’s) interests: the environment and science. Meeru then went looking for a way to share her excitement with others – which is how Anna, Jerry, and I ended up in her bar with about 40 people talking about ITER. We got free food. If you live in Seattle, I definitely recommend visiting. Amazing food.

For some context, check out the Livermore energy flow charts (from Lawrence Livermore National Lab). I’d not heard about these before. Click on the link and check them out. They show all the sources of energy (solar to petroleum) and how they are used (transportation, residential, etc.). One very nice thing: the total units add up to almost 100, so you can almost directly read the numbers as percentages. And when it comes to bettering the environment, we need to replace quite a chunk of those energy sources. The hope is that ITER can help with that.

What is ITER? Jerry made what I thought was a great analogy. We have the nuclear bomb, which we have harnessed for peaceful purposes in the form of the nuclear reactor. Bomb to electricity. ITER, and other fusion-based research projects, are attempting to harness the H-bomb (hydrogen bomb) for peaceful purposes in the same way. Unlike a nuclear reactor, however, the radiation is going to be minimal. It will not have nearly the waste problem that a nuclear reactor has.

Frankly, I didn’t know much about ITER when this whole thing started. But the concept is pretty simple. You start with one of the major seed reactions in a star: deuterium and tritium, both different forms (isotopes) of hydrogen. If you can get them close enough to “touch”, they will bind to form helium, an extra neutron, and a boat-load of energy. If you can capture that energy as heat and use it to boil water, then you can produce electricity by using the steam to run a turbine. The devil, however, is in the details. First, it takes a tremendous amount of force to get the deuterium and tritium close together. In the case of an H-bomb, a nuclear bomb is used to accomplish this! Obviously, you can’t blow up nuclear bombs in the middle of ITER! Second, when it starts to burn, it is hot. Center-of-a-star hot! Pretty much nothing can contain that – everything will melt! The ITER project, under construction now, thinks it has solved both the heating problem and the confinement problem. ITER, big science, is very much that: a science experiment. Decades of research have gone into it, but some very real problems remain, and they can’t solve all of them until they build the machine and try it out.
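To put a rough number on that boat-load of energy, here is a back-of-the-envelope sketch (Python, using standard atomic masses rather than anything quoted above) of the energy released by a single deuterium–tritium reaction:

```python
# Back-of-the-envelope: energy released by one D + T -> He-4 + n fusion reaction.
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494

m_deuterium = 2.014102   # u
m_tritium   = 3.016049   # u
m_helium4   = 4.002602   # u
m_neutron   = 1.008665   # u

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released per reaction: {energy_mev:.1f} MeV")   # ~17.6 MeV
```

That ~17.6 MeV per reaction is roughly a million times the few eV you get from burning a single fuel molecule, which is why the fuel side of the ledger is so cheap.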

But let’s say the machine works. What would it take to displace some of the dirtier forms of energy? It comes down to price. The fuel for an ITER-like power plant is going to be cheap. Very cheap. But the upfront costs are going to be high. The reactor is a serious bit of tech. The current ITER project is probably going to cost of order $20 billion USD. If it works, the second one will be much cheaper. This is very much like a nuclear reactor: the fuel, uranium, is very cheap, but the plant itself is quite expensive. Guesses put the cost of the electricity higher than current fossil fuels, but not by much.

The current ITER project is also fascinating to me for another reason: it is a giant collaboration of many countries. Just like CERN, and my experiment, ATLAS. Only, ITER looks like it might be a little more dysfunctional than ATLAS right now. On the bright side, CERN did put together the world’s largest experiment, and it worked. So it should be possible.

The last thing I wanted to mention is the cost. This is a big international project. Many countries (including the USA) are involved. And because of that there are some big issues. Each country is trying to reduce its own cost, and a local decision can affect other components in ITER, generating a ripple effect – and delays and cost overruns (of which they have a lot). Could one country build ITER? Let’s look at the USA. We have successfully run a few really big projects in our past – the Manhattan Project and the Apollo program come to mind. These were each about 1% of GDP. The USA’s current GDP is about 16 trillion dollars, so 1% of that is about 160 billion per year. ITER, at roughly 20 billion dollars spread over the 4 or 5 years it would take to build, works out to a few billion per year – a small fraction of that. So if you considered clean-energy efforts like this to be of similar importance to those other projects, the USA could totally do it. Another sense of the scale of the project: the financial bailout was $780 billion or so.
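Just to lay that arithmetic out explicitly (a rough sketch using the round numbers above; the real costs are anyone’s guess):

```python
# Rough scale comparison, using the round numbers from the text.
us_gdp = 16e12             # USD per year
apollo_fraction = 0.01     # Manhattan/Apollo scale: roughly 1% of GDP per year
apollo_scale = us_gdp * apollo_fraction               # ~$160 billion per year

iter_total_cost = 20e9     # USD, rough total
iter_build_years = 5
iter_per_year = iter_total_cost / iter_build_years    # ~$4 billion per year

print(f"1% of GDP:        ${apollo_scale / 1e9:.0f} billion per year")
print(f"ITER over {iter_build_years} years: ${iter_per_year / 1e9:.0f} billion per year")
print(f"ITER is about {iter_per_year / apollo_scale:.1%} of an Apollo-scale effort")
```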

I have only one thing to say: write your congressperson and urge them to support all sorts of science research, be it ITER, solar power, or anything else. But get involved!

Reproducibility… September 26, 2013

Posted by gordonwatts in Analysis, Data Preservation, Fermilab, reproducible.
3 comments

I stumbled across an article on reproducibility recently, “Science is in a reproducibility crisis: How do we resolve it?”, with the following quote, which really caught me off guard:

Over the past few years, there has been a growing awareness that many experimentally established "facts" don’t seem to hold up to repeated investigation.

They made a reference to a 2010 alarmist New Yorker article, The Truth Wears Off (there is a link to a PDF of this article on the website, but I don’t know if it is legal, so I won’t link directly here).

Read that quote carefully: many. That means a lot. It would be all over! Searching the internet, I stumbled on a Nature report. They looked carefully at a database of medical journal publications and retraction rates. Here is an image of the retraction rates they found as a function of time:

[Figure: retraction rate as a function of time, from the Nature report]

First, watch out for the axes here – multiply the numbers on the left by 10 to the 5th (100,000), and the numbers on the right by 10 to the –2 (0.01). In short, the peak rate is about 0.01%. This is a tiny number. And, as the report points out, there are two ways to interpret the results:

This conclusion, of course, can have two interpretations, each with very different implications for the state of science. The first interpretation implies that increasing competition in science and the pressure to publish is pushing scientists to produce flawed manuscripts at a higher rate, which means that scientific integrity is indeed in decline. The second interpretation is more positive: it suggests that flawed manuscripts are identified more successfully, which means that the self-correction of science is improving.

The truth is probably a mixture of the two. But this rate is still very very small!

The reason I harp on this is that I’m currently involved in a project that has reproducibility as one of its possible uses: preserving the data of the DZERO experiment, one of the two general-purpose detectors on the now-defunct Tevatron accelerator. Through this I’ve come to appreciate exactly how difficult and potentially expensive this process might be. Especially in my field.

Let’s take a very simple example. Say you use Excel to process data for a paper you are writing. The final number comes from this spreadsheet and is copied into the conclusions paragraph of your paper. So you can now upload your Excel spreadsheet to the journal along with the draft of the paper. The journal archives it forever. If someone is puzzled by your result, they can go to the journal, download the spreadsheet, and see exactly what you did (as modern economics papers do). Win!

Only wait. What if the numbers that you typed into your spreadsheet came from some calculations you ran? OK, you need to include those too. And the inputs to the calculations. And so on and so on. For a medical study you would presumably have to upload the anonymized medical records of each patient, and then everything from there to the conclusion about a drug’s safety or efficacy. Uploading raw data from my field is not feasible – it is petabytes in size. And this is all ad hoc – the tools we use do not track the data as it flows through them.
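Our tools don’t do anything like this today, but to make the idea concrete, here is a toy sketch of minimal provenance tracking – purely an illustration, not anything DZERO or the data-preservation project actually uses. Each processing step records a fingerprint of its input files alongside its output, so a final number can in principle be traced back to what produced it:

```python
# Toy provenance tracking: each step records hashes of its inputs alongside its output.
# Illustration only -- not a real HEP or DZERO tool.
import hashlib
import json

def file_fingerprint(path):
    """Return a short SHA-256 fingerprint of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()[:16]

def run_step(name, input_paths, compute):
    """Run one analysis step and return (result, provenance record)."""
    result = compute(input_paths)
    record = {
        "step": name,
        "inputs": {p: file_fingerprint(p) for p in input_paths},
        "result": result,
    }
    return result, record

def average_file(paths):
    """The 'analysis': average the numbers found in the input files."""
    values = [float(line) for p in paths for line in open(p)]
    return sum(values) / len(values)

if __name__ == "__main__":
    # Hypothetical input file, created here so the example is self-contained.
    with open("raw_numbers.txt", "w") as f:
        f.write("1.0\n2.0\n3.0\n")

    result, record = run_step("average", ["raw_numbers.txt"], average_file)
    print(json.dumps(record, indent=2))   # archive this record alongside the paper
```

Scaling that idea from a three-line text file to petabytes of detector data, and to tools that were never designed to record what they did, is exactly where it gets expensive.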

As a young professor I was involved in a study that was trying to replicate and extend a result from a prior experiment. We couldn’t. The group from the other experiment was forced to resurrect code on a dead operating system and figure out what they had done – reproduce it – so they could answer our questions. The process took almost a year. In the end we found one error in that original paper – but the biggest difference was just that modern tools had a better model of the physics, and that was the main reason we could not replicate their results. It delayed the publication of our paper by a long time.

So, clearly, it is useful to have reproducibility. Errors are made. Bias gets involved even with the best of intentions. Sometimes fraud is involved. But these negatives have to be balanced against the cost of making all the analyses reproducible. Our tools just aren’t there yet and it will be both expensive and time consuming to upgrade them. Do we do that? Or measure a new number, rule out a new effect, test a new drug?

Given the rates above, I’d be inclined to select the latter – and let the tools evolve over time. No crisis.

Running a Workshop July 13, 2013

Posted by gordonwatts in Conference, UW.
2 comments


I ran the second workshop of my career two weeks ago. There were two big differences and a small one between this one and the first one I ran. The first one was an OSG workshop: it had no parallel sessions, while this one had 6 running at one point. Back then I had administrative help as part of our group – that luxury is long gone! And there were about 20 or 30 more people attending this time.

In general, I had a great time. I hope most people who came to Seattle did as well. The weather couldn’t have been better – sun and heat. Almost too hot, actually. The sessions I managed to see looked great. Ironically, one reason I went after this workshop was to be able to attend the sessions and really see how this Snowmass process was coming along. Anyone who has organized one of these things could tell you how foolish I was: I barely managed to attend the plenary sessions. Anytime I stepped into a parallel room, someone would come up to me with a question that required me to run off and fetch something or lead them to some room or…

There were a few interesting things about this conference that I thought would be good for me to write down – and perhaps others will find this useful. I’d say I would find these notes useful, but I will never do this again. At least as long as it takes me to forget how much work it was (~5 years???).

First, people. Get yourself a few dedicated students who will be there from 8 am to 8 pm every single day. I had two – it would have been better with three. But they ran everything. This conference wouldn’t have worked without them (thanks Michelle Brochmann and Jordan Raisher!!!). It is amazing how much two people can do – run a registration desk, set up and wire a room for power, manage video each day, stand in a hallway and be helpful, track down coffee that has been delivered to a different building (3 times!)… I suppose no one job is all that large, but these are the sorts of things that, if they are missing, can really change the mood of a conference. People will forgive a lot of mistakes if they think you are making a good effort to make it right. Something I’m not totally sure I should admit. ;-)

The other thing I discovered for a workshop this size was just how helpful my local department was willing to be! Folder stuffing? Done for free by people in the front office. Printing up the agendas? No problem! Double-checking room reservations? Yes! Balancing the budget and making sure everything comes out OK? You bet! They were like an extra pair of hands. I’m sure I could have hired help – but given the total hours spent, especially by some high-end staff, I’m sure it would have cost quite a bit.

The budget was crazy. It has to be low to get people here – so nothing fancy. On the other hand, it has to be large enough to make everyone happy. What really tripped me up was that I set the economic model about 3 or 4 weeks before the start of the conference. I had a certain amount of fixed costs, so after subtracting that and the university’s cut, I knew what I could do for coffee breaks – how much I could have and how often, etc. And then in the last two weeks a large number of people registered! I mean something like 40%. I was not expecting that. That meant the last week I was frantically calling to increase order sizes for coffee breaks, seeing if larger rooms were available, etc. As it was, some of the rooms didn’t have enough space. It was a close thing. Had another 20 shown up, my coffee breaks would have had to be moved – as it was, it really only worked because the sun was out the whole conference, so people could spill outside while drinking their coffee! So, next time, leave a little more room in the model for such a late bump. For the rest of you who plan to go but wait till the last minute to register? Don’t!

Sound. Wow. When I started this I never thought it was going to be an issue! I had a nice lecture hall that seats 300 people; I had about 130 people in the end. The hall’s sound system was great: large overhead speakers and a wireless microphone. I had a hand-held wireless mike put in the room to capture questions. And there was a tap in the sound system labeled audio out. There were two things I hadn’t counted on, however. First, that audio-out was actually left over from a previous installation and no longer worked. Worse, by the time I discovered it the university couldn’t fix it. The second thing was the number of people that attended remotely. We had close to 100 people sign up to attend remotely. And they had my Skype address. I tried all sorts of things to pipe the sound in. One weird thing: one group of people would say “great!” and another would say “unacceptable!” and I’d adjust something and their reactions would flip. In the end the only viable solution was to have a dedicated video microphone and force the speakers to stand right behind the podium and face a certain way. It was the only way to make it audible at CERN. What a bummer!

But this led me to think about the situation a bit. Travel budgets in the USA have been cut a lot. Many fewer people are traveling right now; when we asked, it was the most common reason given for not attending. But these folks who don’t attend do want to attend via video. To have done this correctly I would have had to throw about $1000 at the problem. But, of course, I would have had to charge the people who were local – I do not think it is reasonable to charge the people who are attending remotely. As it was, the remote people had a rather dramatic effect on the local conference. If you throw a conference with any two-way remote participation, then you will have to budget for this. You will need at least two good wireless hand-held microphones. You will need to make sure there is a tap into your room’s sound system. Potentially you’ll need a mixer board. And most important, you will have to set it up so that you do not have echo or feedback on the video line. This weirdness – that local people pay to enable remote people – is standard, I suppose, but it is now starting to cost real money.

For this conference I purchased a USB presenter from Logitech. I did it for its 100-foot range. I was going to have the conference pay for it, but I liked it so much I’m going to keep it instead. This is a Cadillac, and it is the best-working one I’ve ever used. I do not feel guilty using it. And the laser pointer? Bright (green)! And you can set it up so it vibrates when time runs out.

Another thing I should have had is a chat room for the people organizing and working with me. Something that everyone can have on their phone cheaply – for example, WhatsApp. Create a room. Then when you are at the supermarket buying flats of water and you get a call from a room that is missing a key bit of equipment, you can send a message “Anyone around?” rather than going through your phone book one name after the other.

And then there are some things that can’t be fixed due to external forces. For example, there are lots of web sites out there that will manage registration and collect money for you for a fee of $3–$4 a registration. Why couldn’t I use them? Some of the equipment wasn’t conference grade (the wireless microphones cut out at the back of the room). And, wow, restaurants around the UW campus during summer can be packed with people!

In Praise of 7” October 23, 2012

Posted by gordonwatts in Uncategorized.
6 comments

I have lots of posts I’d like to write, but I have no time. I swear! Unless external events force my hand. In this case, I suppose I should be writing about the apparently crazy conviction of the geologists who failed to predict a deadly earthquake in Italy (prediction really isn’t possible), or the science policy of the USA’s presidential candidates (wish I had a nice link).

But in this case, I want to talk about tech. I’ve been using a small, 7” tablet for over a year now. My first was the B&N Nook Tablet that was a gift about a year ago. At the time it was the best for the low price on the market – beating the Fire easily on tech grounds (longer battery life, lighter, thinner, and it had an SD slot for expanded memory). This year when everyone had announced their tablets I decided to upgrade to the Google Nexus 7.

My path to these, and how I use them today, is perhaps a little odd. It was completely motivated by the New Yorker and the Economist. I receive both magazines (thanks Dad and Uncle Steve!!) and love them. However, I can never keep up. When I went on long plane flights I would stash 10 issues or so of each in my bag and carry them across the Atlantic to CERN or wherever I was traveling. And often I would carry 9 issues back. You know how heavy and fat those are? Yes. First-world problem.

The Nook was fantastic at solving this. And if I was away for more than a week I could still get new issues. I soon installed a few other apps – like a PDF reader. Suddenly I was no longer printing out lecture notes for my class – I’d load them onto the Nook and bring them with me that way. I could keep the full quarter of lecture notes with me at all times for when a student would ask me something! I try to keep up on blogs, and I managed to side-load gReader before B&N locked down the Nook. Soon I was putting comments in papers and talks that I had to review – very comfortable sitting on the couch with this thing!

As the new crop of tablets showed up I started looking for something that was faster. And perhaps with a more modern web browser. The main thing that drove this was viewing my class notes in the PDF viewer – sometimes a 5 second lag would interfere with my lecture when I was trying to look up something I’d written quickly. Amazon’s HD Fire and B&N’s new nooks were pretty disappointing, and so I went with the Nexus 7. The performance is great. But there was something else I’d not expected.

I know this is a “duh” for most people, but: the importance of a well-stocked app store. Wow. Now the Nexus 7 is very integrated into my workday. I use it constantly! My to-do lists, some of my lab notebooks, reading and marking up papers and talks – all of that is done on this thing now. B&N’s app store is OK, but nothing like the Google app store – pretty much whatever I want is there, and with the free $25 of app-store credit Google included with the purchase of the Nexus 7, I’ve not actually had to spend a cent… and having now owned this thing for about a month, my app purchasing has pretty much dropped off to zero. Basically, all I take back and forth to work now are my reading glasses and the Nexus 7 (it was the same with the Nook). I put Dropbox on it, and… well, it is all there.

I have a few complaints. About the hardware: the 16 GB version is barely enough – because I want to be able to load it up with TV/movies/DVDs for my long flights. For everything else (music included) the 16 GB is plenty. I think I can connect a USB key to the device, but it was very nice having all that extra space in the Nook Tablet with its SD slot. The battery life is worse than the Nook – it will make it only through two days of heavy use. The Nook Tablet would go 3 or 4 (but I didn’t use it nearly as much, so this might not be a fair comparison). This guy has NFC which, if I understand the tech right, should be so much better than Bluetooth – so I’m eager to try that out… when I get other devices that support it.

The rest of the complaints I have are due to software and thus can be easily fixed in updates. For example, Microsoft’s OneNote app for Android doesn’t display handwriting :-) – many of my logbooks contain handwriting. Also, the email app is really awful (seriously??) – though I should add it is serviceable for quick checks, triage, and reading. The only other mobile smart device I own is a Windows Phone 7.5 – the design and how the interface flows on Android isn’t as nice or as integrated, but with Android 4.1 Google has done a great job. SkyDrive, which I like a lot better than Dropbox, is on Android, but it doesn’t support in-place editing (i.e. open a PDF file, annotate it, have it put back up to the cloud). With 7 GB free (25 because I was an early adopter), I’d drop Dropbox if SkyDrive supported this on Android.

If you are still reading, I’m sure you know what triggered this post: Apple’s rumored 7” tablet that will be announced tomorrow. If you are locked into the Apple ecosystem, and your workload looks anything like mine, you should get it. Otherwise, go with the Nexus 7 (at $250).

My wife has an older iPad, and I’ve played around with other iPads – for whatever reason, people don’t seem to carry them around to meetings, etc., very often. And when I do see people using them, it is often not to read papers – they are propped up on the stand playing a movie. Also, the 10” form factor makes it very difficult to hold the tablet in your hand and thumb-type: you need very big hands. For this sort of task, the 7” is perfect.

That isn’t to say that the 10” form factor isn’t great in itself. Microsoft, with its Windows 8 release, is going to have a bunch of these tablets – and I can’t wait to buy one. Of course, for those of you who know me, my requirements are going to be a little weird: it must have an active digitizer. This is what allows you to write with a pen (as on my Tablet PCs). Then I can finally get rid of the Tablet PC, which is a compromise, and carry something optimized for each task: the 7” for quick work and reading, the 10” for a lab notebook, and an ultra-portable for the real work. Wait. Am I going to carry three now? Arrgh! What am I doing!?!?

The Higgs. Whaaaa? July 6, 2012

Posted by gordonwatts in ATLAS, CMS, Higgs, LHC, physics, press.
9 comments

Ok. This post is for all my non-physics friends who have been asking me… What just happened? Why is everyone talking about this Higgs thing!?

It does what!?

Actually, two things. It gives fundamental particles mass. Not much help, eh? :-) Fundamental particles are, well, fundamental – the most basic things in nature. We are made out of arms and legs and a few other bits. Arms and legs and everything else are made out of cells. Cells are made out of molecules. Molecules are made out of atoms. Note we’ve not reached anything fundamental yet – we can keep peeling back the layers of the onion and peering inside. Inside the atom are electrons in a cloud around the nucleus. Yes! We’ve got a first fundamental particle: the electron! Everything we’ve done up to now says it stops with the electron. There is nothing inside it. It is a fundamental particle.

We aren’t done with the nucleus yet, however. Pop that open and you’ll find protons and neutrons. Not even those guys are fundamental, however – inside each of them you’ll find quarks – about 3 of them. Two “up” quarks and a “down” quark in the case of the proton and one “up” quark and two “down” quarks in the case of the neutron. Those quarks are fundamental particles.

The Higgs interacts with the electron and the quarks and gives them mass. You could say it “generates” the mass. I’m tempted to say that without the Higgs those fundamental particles wouldn’t have mass. So, there you have it. This is one of its roles. Without this Higgs, we would not understand at all how electrons and quarks have mass, and we wouldn’t understand how to correctly calculate the mass of an atom!

Now, any physicist who has made it this far is cringing with my last statement – as a quick reading of it implies that all the mass of an atom comes from the Higgs. It turns out that we know of several different ways that mass can be “generated” – and the Higgs is just one of them. It also happens to be the only one that, up until July 4th, we didn’t have any direct proof for. An atom, a proton, etc., has contributions from more than just the Higgs – indeed, most of a proton’s mass (and hence, an atom’s mass) comes from another mechanism. But this is a technical aside. And by reading this you know more than many reporters who are talking about the story!

The Higgs plays a second role. This is a little harder to explain, and I don’t see it discussed much in the press. And, to us physicists, this feels like the really important thing: “Electro-Weak Symmetry Breaking”. Oh yeah! It comes down to this: we want to tell a coherent, unified story from the time of the big bang to now. The thing about the big bang is that it was *really* hot. So hot, in fact, that the rules of physics that we see directly around us don’t seem to apply. Everything was symmetric back then – it all looked the same. We have quarks and electrons now, which gives us matter – but back then it was so hot that they didn’t really exist – rather, we think, some single type of particle existed. Then, as the universe cooled down from the big bang, making its way towards the present day, new particles froze out – perhaps the quarks froze out first, and then the electrons, etc. Let me see how far I can push this analogy… when water freezes, it does so into ice crystals. Say that an electron was one particular shape of ice crystal and a quark was a different shape. So you go from a liquid state where everything looks the same – heck, it is just water – to a solid state where the ice crystals have some set of shapes – and by their shape they become electrons or quarks.

Ok, big deal. It seems like the present day “froze” out of the Big Bang. Well, think about it. If our current particles evolved out of some previous state, then we had sure as hell better be able to describe that freezing process. Even better – we had better be able to describe that original liquid – the Big Bang. In fact, you could argue, and we definitely do, that the rules that governed physics at the big bang would have to evolve into the rules that describe our present-day particles. They should be connected. Unified!! Ha! See how I slipped that word in up above!?

We know about four forces in the universe: the strong (holds a proton together), weak (radioactive decay is an example), electro-magnetism (cell phones, etc. are examples), and gravity. The Higgs is a key player in the unification of the weak force and the electro-magnetic force. Finding it means we actually have a bead on how nature unifies those two forces. That is HUGE! This is a big step along the way to putting all the forces back together. We still have a lot of work to do!

Another technical aside. :-) We think of the first role – giving fundamental particles mass – as a consequence of the second; they are not independent roles. The Higgs is key to the unification, and in order to be that key, it must also be the source of the fundamental particles’ mass.

How long have you been searching for it?

A loooooong time. We are like archeologists. Nature is what nature is. Our job is to figure out how nature works. We have a mathematical model (called the Standard Model). We change it every time we find an experimental result that doesn’t agree with the calculation. The last time that happened was when we stumbled upon the unexpected fact that neutrinos have mass. The time before that was the addition of the Higgs, and that modification was first proposed in 1964 (it took a few years to become generally accepted). So, I suppose you could say in some sense we’ve been looking for it since 1964!

It isn’t until recently, however (say, in the late 90’s), that the machines we use became powerful enough that we could honestly say we were “in the hunt for the Higgs.” The LHC, in fact, had finding the Higgs as one of its major physics goals. There was no guarantee – no reason nature had to work like that – so when we built it we were all a little nervous and excited… OK, a lot nervous and excited.

So, why did it take so long!? The main reason is that we hardly ever make one in our accelerators! It is very, very massive!! So it is very hard to make. Even at the LHC we make one only every 3 hours… The LHC works by colliding protons together at very high speed (almost the speed of light). We do that more than 1,000,000 times a second… and we make a Higgs only once every 3 hours. The very definition of a “needle in a haystack!”
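Using the round numbers quoted above (a million collisions a second, one Higgs every three hours), the needle-to-haystack ratio works out to roughly one in ten billion:

```python
# How rare is a Higgs, using the round numbers quoted in the text?
collisions_per_second = 1_000_000
seconds_per_higgs = 3 * 3600                 # one Higgs every ~3 hours

collisions_per_higgs = collisions_per_second * seconds_per_higgs
print(f"Roughly 1 Higgs per {collisions_per_higgs:.1e} collisions")   # ~1 in 10 billion
```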

Who made this discovery?

Two very large teams of physicists, and a whole bunch of people running the LHC accelerator at CERN. The two teams are the two experiments: ATLAS and CMS. I and my colleagues at UW are on ATLAS. If you hear someone say “I discovered the Higgs,” they are using the royal I. This is big science. Heck – the detector is half an (American) football field long, and about 8 or 9 stories tall and wide. This is the sort of work that is done by lots of people and countries working together. ATLAS currently has people from 38 countries – the USA being one of them.

What does a Cocktail Party have to do with it?

The cocktail party analogy is the answer to why some fundamental particles are more massive than others (sadly, not to why I have to keep letting my belt out year after year).

This is a cartoon of a cocktail party. Someone very famous has just entered the room. Note how everyone has clumped around them! If they are trying to get to the other side of the room, they are just not going to get there very fast!!

Now, let’s say I enter the room. I don’t know that many people, so while some friends will come up and talk to me, it will be nothing like that famous person. So I will be able to get across the room very quickly.

The fact that I can move quickly because I interact with few people means I have little mass. The famous person has lots of interactions and can’t move quickly – and in this analogy they have lots of mass.

Ok. Bringing it back to the Higgs. The party and the people – that is the Higgs field. How much a particle interacts with the Higgs field determines its mass. The more it interacts, the more mass is “generated.”

And that is the analogy. You’ve been reading a long time. Isn’t this making you thirsty? Go get a drink!

Really, is this that big a deal?

Yes. This is a huge piece of the puzzle. This work is definitely worth a Nobel prize – look for them to award one to the people that first proposed it in 1964 (there are six of them, one has passed away – no idea how the committee will sort out the maximum of three they can give it to). We have confirmed a major piece of how nature works. In fact, this was the one particle that the Standard Model predicted that we hadn’t found. We’d gotten all the rest! We now have a complete picture of the Standard Model, and it is time to start work on extending it. For example, dark matter and dark energy are not yet in the Standard Model. We have not yet figured out how to fully unify everything we know about.

No. The economy won’t see an up-tick or a down-tick because of this. This is pure research – we do it to understand how nature and the universe around us works. There are sometimes, by-luck, spin-offs. And there are people that work with us who take it on as one of their tasks to find spin offs. But that isn’t the reason we do this.

What is next?

Ok. You had to ask that. So… First, we are sure we have found a new boson, but the real world – and data – is a bit messy. We have looked for it, and expect it to appear, in several different places. It appeared in most of them – in one place it seems to be playing hide and seek (where the Higgs decays to taus – a tau is very much like a heavy electron). Now, only one of the two experiments (CMS) has presented results in the taus, so we have to wait for my experiment, ATLAS, to present its results before we get worried.

Second, and this is what we’d be doing no matter what happened with the taus, is… HEY! We have a shiny new particle! We are going to spend some years looking at it from every single angle possible, taking it out for a test drive, you know – kicking the tires. There is actually a scientific point to doing that: there are other possible theories out there that predict the existence of a Higgs that looks exactly like the Standard Model Higgs except for some subtle differences. So we will be looking at this new Higgs every which way to see if we can spot any of those subtle differences.

ATLAS and CMS also do a huge amount of other types of physics – none of which we are talking about right now – and we will continue working on those as well.

Why do you call it the God Particle!?

We don’t. (especially check out the Pulp Fiction mash-up picture).

What will you all discover next?

I’ll get back to you on that…

Whew. I’m spent!

We only let students do posters June 5, 2012

Posted by gordonwatts in Uncategorized.
6 comments

I’m here at the PLHC conference in Vancouver, Canada (fantastic city, if you’ve not visited). I did a poster for the conference on some work I’ve done on combining the ATLAS b-tagging calibrations (the way their Indico site is set up, I have no idea how to link to the poster). I was sitting in the main meeting room, the large poster tube next to my seat, when this friend of mine walks by:

“Hey, brought one of your student’s posters?”

“Nope, did my own!”

“Wow. Really? We only let students do posters. I guess you’ve really fallen in the pecking order!”

Wow. It took me a little while to realize what got me upset about the exchange. So, first, it did hit a nerve. Those that know me know that I’ve been frustrated with the way the ATLAS experiment assigns talks – but this year they gave me a good talk. Friends of mine who I think are deserving are also getting more talks now. So this is no longer really an issue. But comments like this still hit that nerve – you know, that general feeling of inadequacy that is left over from a traumatic high school experience or two. :-)

But more to the point… are posters really such second class citizens? And if they are, should they remain as such?

I have always liked posters, and I have given many of them over my life. I like them because you end up in a detailed conversation with a number of people on the topic – something that almost never happens at a talk like the PLHC. In fact, my favorite thing to do is give a talk and a poster on the same topic. The talk then becomes an advertisement for the poster – a time when people that are very interested in my talk can come and talk in detail next to a large poster that lays out the details of the topic.

But more generally, my view of conferences has evolved over the past 5 years. I’ve been to many large conferences. Typically you get a set of plenary sessions with > 100 people in the audience, and then a string of parallel sessions. Each parallel talk is about 15-20 minutes long, and depending on the topic there can be quite a few people in the room. Only a few minutes are left for questions. The ICHEP series is a conference that symbolizes this.

Personally, I learn very little from this style of conference. Many of the topics and the analyses are quite complex. Too complex to really give an idea of the details in 15 or 20 slides. I personally am very interested in analysis details – not just the result. And getting to that level of detail requires – for me, at least – some back and forth. Especially if the topic is new I don’t even know what questions to ask! In short, these large conferences are fun, but I only get so much out of the talks. I learn much more from talking with the other attendees. And going to the poster sessions.

About 5 years ago I started getting invites to small workshops. These are usually about a week long, have about 20 to 40 people, and pick a specific topic. Dark matter and collider physics. The Higgs. Something like that. There will be a few talks in the morning and maybe in the afternoon. Every talk that is given has at least the same amount of time set aside for discussion. Many times the workshop has some specific goals – a better understanding of a particular theory systematic, or how to interpret the new results from the LHC, or how the experiments can get their results out in a more useful form for the theorists. In the afternoons the group splits into working groups – where no level of detail is off-limits. I’ve been lucky enough to be invited to ones at UC Davis, Oregon, and Maryland, and my own UW has been arranging a pretty nice series of them (see this recent workshop for links to previous ones). I can’t tell you how much I learn from these!

To me, posters are mini-versions of these workshops. You get 5 or 6 people standing around a poster discussing the details. A real transfer of knowledge. Here at PLHC, there are 4 posters from ATLAS on b-tagging. We’ve put them all together in the poster room. If you walk by that end of the room you are trapped and surrounded by many of the experts – the people that actually did the work – and you can get almost any ATLAS b-tagging question answered, in a way that really isn’t, as far as I know, possible in many other public forums. PLHC is also doing some pretty cool stuff with posters. They have a jury that walks around and decides which poster is “best” and gives it an award. One thing the poster’s author gets to do: give a talk at the plenary session. I recently attended CHEP – they did the same thing there. I’ve been told that CMS does something like this during their collaboration meetings too.

It is clear that conference organizers the world round are looking for more ways to get people attending the conference more involved in the posters that are being presented.

The attitude of my friend, however, is a fact of this field. Heck, even I have it. One of the things I look at in someone’s CV is how many talks they have given. I don’t look carefully at the posters they have listed. In general, this is a measure of what your peers think of you – have you done enough work in the collaboration to be given a nice talk? So this will remain with us. And those large conferences like ICHEP – nothing brings together more of our field all in one place than something like ICHEP. So they definitely still play a role.

Still the crass attitude “We only let students do posters” needs to end. And I think we still have more work to do getting details of our analysis and physics out to other members of our field, theorists and experimentalists.

CHEP Trends: Multi-Threading May 24, 2012

Posted by gordonwatts in Analysis, CHEP, computers.
6 comments

I find the topic of multi-threading fascinating. Moore’s law means that we are now heading to a multi-core world rather than just faster processors. But we’ve written all of our code single-threaded. So what do we do?

Before CHEP I was convinced that we needed an aggressive program to learn multithreaded programming techniques and to figure out how to re-implement many of our physics algorithms in that style. Now I’m not so sure – I don’t think we need to be nearly as aggressive.

Up to now we’ve solved the problem by just running multiple jobs – about one per core. That has worked out very well, and the scaling is very close to linear. Great! We’re done! Let’s go home!

There are a number of efforts going on right now to convert algorithms to be multi-threaded – rather than just running jobs in parallel. For example, re-implementing a track-finding algorithm to run several threads of execution. This is hard work, takes a long time, and “costs” a lot in terms of people’s time. Does it go faster? In the end, no. Or at least, not much faster than the parallel jobs! Certainly not enough to justify the effort, IMHO.

This was one takeaway from the conference that I’d not really appreciated previously. And it is actually a huge relief, because trying to make a reconstruction completely multi-threaded, so that it efficiently uses all the cores in the machine, is almost impossible.

But, wait. Hold your horses! Sadly, it doesn’t sound like it is quite that simple, at least in the long run. The problem is, first, the bandwidth between the CPU and the memory and, second, the cost of the memory. The second one is easy to talk about: each running instance of the reconstruction needs something like 2 GB of memory. If you have 32 cores in one box, then that box needs 64 GB of main memory – or more, including room for the OS.

The CPU I/O bandwidth is a bit trickier. The CPU has to access the event data to process it. Internally it does this by first asking its cache for the data, and if the data hasn’t been cached, it goes out to main memory to get it. The cache lookup is a very fast operation – perhaps a clock cycle or so. Accessing main memory is very slow, however, often taking many tens of cycles or more. In short, the CPU stalls while waiting. And if there isn’t other work to do, the CPU really does sit idle, wasting time.

Normally, to get around this, you just make sure that the CPU is trying to do a number of different things at once. When the CPU can’t make progress on one instruction, it can do its best to make progress on another. But here is the problem: if it is trying to do too many different things, then it will be grabbing a lot of data from main memory. And the cache is of only finite size – so eventually it will fill up, and every memory request will displace something already in the cache. In short, the cache becomes useless and the CPU will grind to a halt.

The way around this is to make as many cores as possible work on the same data. So, for example, if you can make your tracking multi-threaded, then the multiple threads will be working on the same set of tracking hits. You have the data for one event in memory being worked on by, say, 4 threads. In the other case, you have 4 separate jobs, all doing tracking on 4 different sets of tracking hits – which puts a much heavier load on the cache.
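Here is a cartoon of that difference – a Python sketch only, since real reconstruction code is C++ and Python’s global interpreter lock means threads won’t actually speed up CPU-bound work like this. The point is the memory picture: all the workers read the single copy of one event’s hits, whereas separate jobs would each hold their own copy of a different event:

```python
# Cartoon of 'many threads, one event in memory'.
# Real reconstruction is C++; this only illustrates the shared-data idea.
from concurrent.futures import ThreadPoolExecutor
import random

# One event's worth of fake 'tracking hits' -- a single copy in memory.
event_hits = [(random.random(), random.random()) for _ in range(1_000_000)]

def process_chunk(chunk_id, n_chunks):
    """Each worker walks its share of the ONE shared hit list -- no copies made."""
    total = 0.0
    for i in range(chunk_id, len(event_hits), n_chunks):
        x, y = event_hits[i]
        total += x * x + y * y   # stand-in for real per-hit work
    return total

n_workers = 4
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partial_sums = list(pool.map(lambda i: process_chunk(i, n_workers), range(n_workers)))

print("combined result:", sum(partial_sums))
# Four independent jobs would instead mean four copies of event_hits in memory
# and four *different* events competing for the same CPU cache.
```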

In retrospect, the model in my head was all one or the other. You either ran a job for every core and kept it single-threaded, or you made one job use all the resources on your machine. Obviously, what we will move towards is a hybrid model: we will multi-thread those algorithms we easily can, and otherwise run a large number of jobs at once.

The key will be testing – to make sure something like this actually runs faster. You can even imagine altering the scheduler in the OS to help you (yikes!). Up to now we’ve not hit the memory-bandwidth limit. I think I saw a talk several years ago that said for a CMS reconstruction executable that happens somewhere around 16 or so cores per CPU. So we still have a ways to go.

So, we are relaxed here in HEP. How about the real world? There I see alarm bells going off – everyone is pushing multi-threading hard. Are we really different? I think the answer is yes: there is one fundamental difference between them and us. We have a simple way to take advantage of multiple cores: run multiple jobs. In the real world many problems can’t do that – so they are not getting the benefit of the increasing number of cores unless they specifically do something about it. Now.

So, to conclude: some work moving forward on multi-threaded re-implementations of algorithms is a good idea. As far as solving the above problem goes, it is less useful to make the jet finding and the track finding run at the same time, and more important to make the jet-finding algorithm itself and the track-finding algorithm itself multi-threaded.

CHEP Trends: Libraries May 24, 2012

Posted by gordonwatts in Analysis, computers.
add a comment

I’m attending CHEP – Computing in High Energy Physics – which is being hosted by New York University this year, in New York City. A lot of fun – most of my family is on the east coast, so it is cool to hang out with my sister and her family.

CHEP has been one of my favorite conference series. For a while I soured on it as the GRID hijacked it. Everything else – algorithms, virtualization, etc. – is making a comeback now, which makes the conference much more balanced and more interesting, IMHO.

There were a few striking themes (no, one of them wasn’t me being a smart-a** – that has always been true). Rene Brun, one of the inventors of ROOT, gave a talk about the history of data analysis. Check out this slide:

[Slide from Rene Brun’s talk: decades of HEP analysis libraries coalescing into a few big projects like ROOT and GEANT]

A little while later Jeff Hammerbacher from Cloudera gave a talk (Cloudera bases its cloud computing business on Hadoop). Check out this slide:

[Slide from Jeff Hammerbacher’s talk: the Cloudera platform, built from many small libraries mashed together]

These two slides show, I think, two very different approaches to software architecture. In Rene’s slide, note that all the libraries are coalescing into a small number of projects (i.e. ROOT and GEANT). As anyone who has used ROOT knows, it is a bit of a kitchen sink. The Cloudera platform, on the other hand, is a project built of many small libraries mashed together. Some of them are written in-house; others are written by other groups. All open source (as far as I could understand from the talk). This is the current development paradigm in the open source world: make lots of libraries that end-programmers can put together like Lego blocks.

This trend in the web world is, I think, the result of at least two forces at play: the rapid release cycle and the agile programming approach. Both mean that you want to develop small bits of functionality in isolation, if possible, which can then be rapidly integrated into the end project. As a result, development can proceed apace on both projects, independently. However, a powerful side effect is that it also enables someone from the outside to come along and quickly build up a new system with a few unique aspects – in short, to innovate.

I’ve used the fruits of this in some of my projects: it is trivial to download and load a library into one of my projects, and with almost no work I’ve got a major building block. HTML parsers and combinator parsers are two that I’ve used recently; they meant I could ignore some major bits of plumbing but still get a very robust solution.
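As a trivial illustration of that Lego-block feel, here is a sketch using Python’s built-in html.parser as a stand-in for the kind of off-the-shelf HTML parser mentioned above – a few lines of glue and the plumbing is someone else’s problem:

```python
# Small, ready-made building block: an HTML parser from a library,
# dropped into a project with a few lines of glue code.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag -- the parsing plumbing is the library's problem."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

collector = LinkCollector()
collector.feed('<p>See <a href="https://root.cern">ROOT</a> and <a href="https://hadoop.apache.org">Hadoop</a>.</p>')
print(collector.links)   # ['https://root.cern', 'https://hadoop.apache.org']
```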

Will software development in particle physics ever adopt this strategy? Should it? I’m still figuring that out.

The Way You Look at the World Will Change… Soon December 2, 2011

Posted by gordonwatts in ATLAS, CERN, CMS, Higgs, physics.
6 comments

We are coming up on one of those “lucky to be alive to see this” moments. Sometime in the next year we will all know, one way or the other, whether the Standard Model Higgs exists. Either way, how we think about fundamental physics will change. I can’t overstate the importance of this. And the first strike along this path will occur on December 13th.

If it does not exist, that will force us to tear down and rebuild – in some totally unknown way – our model of physics. Our model that we’ve had for 40+ years now. Imagine that – 40 years, and now that it finally meets data… poof! Gone. Or we will find the Higgs, and we’ll have a mass. Knowing the mass will be interesting in itself, and finding the Higgs won’t change the fact that we still need something more than the Standard Model to complete our description of the universe. But now every single beyond-the-Standard-Model theory will have to incorporate not only electrons, muons, quarks, W’s, Z’s, photons, and gluons – at their measured masses – but a Higgs too, with the mass we measure!

So, how do I know this is going to happen? Look at this plot, which was released during the recent HCP conference in Paris (there is a deepzoom version too :-) ).

Ok, this takes a second to explain. First, when we look for the Higgs we do it as a function of its mass – the theory does not predict exactly how massive it will be. Second, the y-axis is the rate at which the Higgs is produced. When we look for it at a certain mass we make a statement like: “if the Higgs exists at a mass of 200 GeV/c2, then it must be being produced at a rate less than 0.6, or we would have seen it.” I read the 0.6 off the plot by looking at the placement of the solid black line with the square points – the observed upper limit. The rate, the y-axis, is in funny units: basically, the red line is the rate you’d expect if it were a Standard Model Higgs. The solid black line with the square points on it is the combined LHC exclusion line; combined means ATLAS + CMS results. So anywhere the solid black line dips below the red horizontal line means that we are fairly confident the Standard Model Higgs doesn’t exist at that mass (BTW – even “fairly confident” has a very specific meaning here: we are 95% confident). The hatched areas are the regions where the Higgs has already been ruled out. Note the hatched areas at low mass (100 GeV or so) – those are from other experiments, like LEP.

Now that that is done, a fair question is where we would expect to find the Higgs. As it turns out, a Standard Model Higgs will most likely show up at low masses – exactly that region between 114 GeV/c2 and 140 GeV/c2. There isn’t a lot of room left for the Higgs to hide there!! These plots are with 2 fb-1 of data. Both experiments now have about 5 fb-1 of data recorded. And everyone wants to know exactly what they see. Heck, while in each experiment we basically know what we see, we desperately want to know what the other experiment sees. The first unveiling will occur at a joint seminar at 2pm on December 13th. I really hope it will be streamed on the web, as I’ll be up in Whistler for my winter ski vacation!

So what should you look for during that seminar (or in the talks that will be uploaded when the seminar is given)? A plot like the one above will be a quick summary of the status of the experiments; each experiment will have its own. The key thing to look for is where the dashed line and the solid line deviate significantly. The solid line I’ve already explained – it says that if a Higgs of a particular mass is there, it must be being produced at a rate less than what is shown. The dashed line is what we would expect to be able to exclude – given that everything went right and the Higgs didn’t exist at that mass – that is, how good we expect to be. So, for example, right around 280 GeV/c2 we expect to be able to exclude a rate of about 0.6, and that is almost exactly what we measure. Now look down around 120-130 GeV/c2. There you’ll notice that the observed (solid) line is well above the expected (dashed) line. How much? Well, it is just along the edge of the yellow band – which means 2 sigma. 2 sigma isn’t very much – so this plot has nothing to get very excited about yet. But if one of the plots shown over the next year has a more significant excursion, and you see it in both experiments… then you have my permission to get a little excited. The real test will be if we can get to a 5 sigma excursion.
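For a sense of what those sigmas mean as probabilities, here is a quick sketch using the usual one-sided Gaussian-tail convention (and, as the N.B. below says, the sigmas on a limit plot are not quite the same thing as discovery significance):

```python
# Translate 'n sigma' into the probability that a background fluctuation alone
# would produce an excess at least that large (one-sided Gaussian tail).
import math

def p_value(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

for n in (2, 3, 5):
    print(f"{n} sigma  ->  p = {p_value(n):.2e}")
# 2 sigma -> ~2.3e-02: about 1 in 44, easy for a fluctuation
# 5 sigma -> ~2.9e-07: about 1 in 3.5 million, the usual discovery threshold
```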

This seminar is the first step in this final chapter of the old realm of particle physics. We are about to start a new chapter. I, for one, can’t wait!

N.B. I’m totally glossing over the fact that if we do find something in the next year that looks like a Higgs, it will take us some time to make sure it is a Standard Model Higgs, rather than some other type of Higgs! A second-order effect, as they say. Also, in that last long paragraph, the sigmas I’m talking about on the plot and the 5 sigma discovery threshold aren’t the same – so I glossed over some real details there too (and this latter one is a detail I sometimes forget, much to my embarrassment at a meeting the other day!).

Update: Matt Strassler posted a great post detailing the ifs/ands/ors behind seeing or not seeing – basically a giant flow-chart. Check it out!

So long, and thanks for all the protons! September 29, 2011

Posted by gordonwatts in D0, Fermilab, physics life.
add a comment

And there were a lot of protons!

This is a picture of the Cockcroft-Walton at Fermilab’s Tevatron. This is where it all starts.

[Photo: the Cockcroft-Walton generator at Fermilab]

It isn’t that much of an exaggeration to say that my career started here. You are looking through a wire cage at one half of the Cockcroft-Walton. The generator creates a very, very, very large electric field that ionizes hydrogen gas (two protons and two electrons per molecule) by ripping one of the protons off. The gas, now charged, can be accelerated by an electric field. This is how protons start in the Tevatron.

And that is how most of the experimental data that I used for my Ph.D. research, post-doc research, and tenure research started. Basically, my career from graduate student to tenure is based on data from the Tevatron. The Tevatron delivers its last beam this Friday, the 30th, at 2pm Central time.

I’ll miss working at Fermilab. I’ll miss working at DZERO (the most recent Fermilab experiment I’ve been on). I’ll also miss the character of the experiments – CDF and DZERO now seem like such small experiments. Only 500 authors. I feel like I know everyone. It is a community in a way that I’ve not felt at the LHC yet. And I’ll miss directly owning a bit of the experiment – something I joined the LHC too late to do. But most of all I’ll miss the people. True – many of them have made the transition to the LHC – but not all of them. For reasons of travel, or perhaps retirement, these people I’ll probably see a lot less over the next 10 years. And that is too bad.

I’ll remain connected with DZERO for some time to come. I’m helping out with some paper reviews and with data preservation – making sure the DZERO data can be accessed long after the experiment has ceased running.

Tevatron. It has been a fantastic run. You have made my career. And I’ve had a wonderful time with the science opportunities you’ve provided.

So long, and thanks for all the (anti-)protons.
